Can AI help us think better now?

Humans think in terms of 1, 2, 3, 4... lots and lots, while machines think in billions

This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

We are starting Artificiality Pro, a paid subscriber version of this newsletter, where our focus is on the frontier of AI and its interaction with human intelligence. 

If you haven’t signed up yet, you can do that here, for both free and paid. If you like this post, share it with a friend :)


The greatest shortcoming of the human race is our inability to understand the exponential function. - Physicist Albert Allen Bartlett

Humans struggle to understand exponential growth. Our brains have a linearity bias: we see change in linear terms and underestimate how quickly exponential growth compounds.

There are plenty of great examples of how exponential growth runs counter to our intuition. A favorite of math teachers is to ask how many times you would need to fold a piece of paper for it to be thick enough to reach the moon. The answer is only around 42 to 45 folds, depending on the thickness of paper you assume.
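To see why the number is so small, here is a quick back-of-the-envelope check. It assumes 0.1 mm paper and the average Earth-Moon distance of roughly 384,400 km; under those assumptions the answer comes out at 42 folds.

```python
# Back-of-the-envelope: how many folds until a sheet of paper reaches the moon?
# Assumptions: 0.1 mm starting thickness, ~384,400 km average Earth-Moon distance.
PAPER_THICKNESS_M = 0.0001        # 0.1 mm in metres
MOON_DISTANCE_M = 384_400_000.0   # metres

thickness = PAPER_THICKNESS_M
folds = 0
while thickness < MOON_DISTANCE_M:
    thickness *= 2                # each fold doubles the stack
    folds += 1

print(folds)                          # 42 under these assumptions
print(f"{thickness / 1000:,.0f} km")  # ~440,000 km after the final fold
```

Forty-one folds only gets you a little over halfway there; the forty-second covers the rest, which is exactly the part our linear intuition misses.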

Exponential changes run counter-intuitive to the way our linear brains make projections about change, & so we don’t realize how fast the future is coming. - Jason Silva

There are many reasons we weren’t prepared for the pandemic, but this feature of our cognition is part of the story. Countries that have previously dealt with SARS know what exponential growth feels like; they have been through the emotional conditioning required to mobilize quickly and to act before the catastrophe becomes visible.

It’s sometimes difficult even to spot that a problem is exponential in the first place. Take the Birthday Paradox: in a room of just 23 people there’s a 50-50 chance that at least two share a birthday, a result that feels strange and counter-intuitive.

It’s only a “paradox” because our brains can’t handle the compounding power of exponents. We expect probabilities to be linear and only consider the scenarios we’re involved in. The fact that we neglect the 10 times as many comparisons that don’t include us helps us see why the “paradox” can happen. - Kalid Azad
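The arithmetic behind the paradox is short enough to check directly: the chance that no two of n people share a birthday is 365/365 × 364/365 × ... × (365 − n + 1)/365, and one minus that crosses 50% far sooner than intuition expects. Here is a minimal sketch, ignoring leap years and assuming birthdays are uniformly distributed:

```python
# Birthday paradox: probability that at least two of n people share a birthday.
# Assumes 365 equally likely birthdays and ignores leap years.
def shared_birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365   # person i avoids the first i birthdays
    return 1 - p_all_distinct

print(f"{shared_birthday_probability(23):.3f}")  # ~0.507 -- just past 50-50 at 23 people
print(f"{shared_birthday_probability(50):.3f}")  # ~0.970
```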

The epidemic also introduces a similar selfish bias: many people consider only the impact of the virus on themselves, not fully recognizing that every person, everywhere, is now connected in a chain of infection.

In our community in Oregon, many people have no intuition of these two compounding biases. Some people’s instincts are to go out and do things, almost as an act of willpower over the virus or denial about the risks to themselves and others. We’re observing a complex mix of cultural values (freedom), denial (it won’t happen to me) and an inability to comprehend what’s coming at us (can’t think exponentially, understand the connectivity of communities or accurately evaluate risk).

Even with excellent visualizations of the spread of infection and the power of social distancing, people don’t seem to be able to get their head around how quickly this disease spreads. On social media, I find myself in debates with people about basic facts. Reason fails far more than it should.

Reason didn’t evolve because inquiry, science and progress are inevitable. Reason evolved to prevent us from getting screwed by other members of our group. We are skilled at spotting flaws in other people’s arguments and not skilled at spotting them in our own. This is confirmation bias and it is adaptive because of our hyper-social nature — winning arguments and convincing others you are right builds social support. Unfortunately, in a politically polarized, hyper-social-media-AI-powered connected world, confirmation bias is beginning to look like a maladaptive strategy.

AI has a role to play. Because humans are good at considering our own cognitive strategies — something Tom Griffiths calls meta-reasoning — there are opportunities to develop AI that acts as a “cognitive crutch.” AI can work in many dimensions and handle exponential growth, so it can help us think differently about the world. AI can show us alternate futures in ways that overcome our cognitive limitations and motivate us to act differently.

Our future selves have preferences that our current selves fail to act on because we get tempted in the present. But we are good at deciding on a course of action based on a realistic understanding of the future, then setting a strategy to get ourselves there — exercise targets on an Apple Watch, for instance.

Complacency will be a huge enemy in the US. People will fatigue, especially as fear declines and people adjust to new probabilities. We have been two steps behind the virus from the beginning — everything that seemed impossible last week is now reality this week, things that should have been done two weeks ago have only now been put into effect. We know we cannot catch up to an exponential curve yet we seem unable to act ahead of it. It’s urgent to find ways to help people more effectively see ahead, to develop an intuitive sense of what it all means and what they need to do.

Our species is able to imagine, to develop scenarios and to think ahead. But an enemy that moves exponentially has our cognition beat. We need AI and tech that turns numbers into intuition, gives us cognitive crutches and scaffolds our resolve to act on hard choices today so we can protect tomorrow.


Also this week:

  • Sonder Scheme Pro-members article on new research on how people react to autonomy-decreasing AI, i.e. over-personalization.

  • Spooky drone footage of San Francisco in the lock-down.

  • Fascinating visualization of the effect of forgoing social distancing on potential spread, as spring breakers leave Florida for various parts of the US.

  • Honestly, time for some light relief from Twitter. ICYMI.
    Dog in leaves, home exercise, sock-puppet entertainment, how to take a break from the kids….

Ep. 04: Chelsea Barabas on bias and power in AI

  

In this episode, I have the pleasure of interviewing Chelsea Barabas, a PhD candidate at MIT’s Media Lab. We talk about her work on bias in the criminal justice system, as well as her most recent work applying the concept of “studying up” from anthropology to the data science world.

Here are some of the links we refer to in the episode:

http://www.chelsbar.com

https://cmsw.mit.edu/profile/chelsea-barabas/

https://medium.com/@chelsea_barabas

https://www.nytimes.com/2019/07/17/opinion/pretrial-ai.html

https://journal.culanth.org/index.php/ca/article/view/ca31.3.01/367

https://discardstudies.com/2016/08/08/ethnographic-refusal-a-how-to-guide/

https://science.sciencemag.org/content/366/6464/421/tab-article-info

COVID-19, health information, technology, AI

Privacy, safety, barriers and opportunities



This week’s Artificiality is a bumper issue. There’s so much going on and so much changing. Here’s a quick take on what’s in this week’s newsletter:

  • Pandemic privacy.

  • How COVID-19 will reveal gaps in the EHR (electronic health record).

  • How AI isn’t up for this (yet) and what kind of technology solutions people need right now.

  • The opportunity to collect real-time data and why it’s so vital.

  • Why human experience and collaboration is more valuable than AI predictions right now.

  • The biggest gap in the US health information system - death stats.

The US health system is about to be stress-tested like never before. Underneath the physical infrastructure and network of people lies an information infrastructure and a human-machine decision system. Is this system ready? How will it fail? What will it reveal about opportunities and barriers to effective AI in health?

Public good v privacy

A technology response to the current pandemic is important but it presents another challenge that we are unprepared for: redefining privacy. Just as we are unprepared for the outbreak itself, we are unprepared for the consequences of the technology response.

The most effective testing and tracking likely means that some privacy is lost. For example, when someone tests positive, that person’s location information is released to the public so that people who may have been exposed can get themselves tested.

Just as medical practitioners are making up the rules as they go, privacy watchdogs and regulators should too. We’re all distinctly aware that times of crisis can usher in fundamental shifts in privacy: what were originally emergency measures become the new standard. In anticipation of this, Privacy International has set up an international tracker to monitor such measures, so that people can revisit them in the future.

Many measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary, and proportionate. - Privacy International

This is especially relevant if we look to big tech to provide part of the solution. Already, there are significant tensions between public health and privacy as governments turn to facial recognition and geolocation; Palantir is working with the CDC on data collection, as is Crimson Hexagon, a company that scrapes Facebook, Instagram and Twitter. K Health is in talks with the CDC about aggregating data to map where people are showing signs of the virus.

We absolutely must do everything possible to control the spread of COVID-19, but we need to be vigilant on other fronts too. Shoshana Zuboff writes in The Age of Surveillance Capitalism about how the response to 9/11 allowed platforms to develop extraordinary surveillance capabilities. We can’t let today’s crisis become the platform for privacy erosion, with vast, unaccountable AI and commercial entities securing a future moat for personal health data.

Trade-off: information exposure v movement restriction

Another way to look at the tension between personal privacy and public benefit is through the lens of individual rights. South Korea’s success in controlling the spread is just as much about its acceptance of surveillance as it is about its testing regime, according to Jung Won Sonn, Associate Professor in Urban Economic Development, UCL.

South Korea is the most surveilled country in the world. In 2010, everyone in South Korea was captured an average of 83.1 times per day and every nine seconds while traveling. This is likely to be far higher now. South Korea’s testing strategy is successful because sitting behind it is a huge surveillance network that combines CCTV and the tracking of bank card and mobile phone usage, to identify who to test in the first place.

Here’s how it works:

  1. Banking: by tracking transactions, it’s possible to draw a card user’s movements on a map.

  2. Mobile phones: Phone companies require all customers to provide their real names and national registry numbers. This means it’s possible to track nearly everyone by following the location of their phones.

  3. Facial recognition (both AI and human): CCTV cameras also enable authorities to identify people who have been in contact with COVID-19 patients.

  4. Combine the data: Nearly all potential patients can be found and tested by overlaying these three data sources (a rough sketch of this overlay follows the list). A new patient’s movement can be compared against those of earlier patients. This reveals exactly where, when and from whom the new patient was infected.

  5. This information is made public through various smartphone apps, websites and text messages, which help people avoid hotspots.
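Mechanically, step 4 is a join across location traces: find everyone whose recorded whereabouts overlap, in place and time, with a confirmed patient’s. The sketch below is purely illustrative; the record format, the one-hour window and the exact-place match are invented for the example and are not drawn from the actual Korean systems.

```python
from datetime import datetime, timedelta

# Purely illustrative: each record is (person_id, place, timestamp), merged from
# card transactions (step 1), phone pings (step 2) and CCTV identifications (step 3).
Record = tuple[str, str, datetime]

def potential_contacts(records: list[Record], patient_id: str,
                       window: timedelta = timedelta(hours=1)) -> set[str]:
    """People recorded at the same place within `window` of a confirmed patient."""
    patient_visits = [(place, t) for pid, place, t in records if pid == patient_id]
    contacts = set()
    for pid, place, t in records:
        if pid == patient_id:
            continue
        if any(place == p and abs(t - pt) <= window for p, pt in patient_visits):
            contacts.add(pid)
    return contacts
```

The hard engineering in the real systems sits around this core: fuzzy place matching, differing data latencies and resolving one person’s identity across the three sources.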

In many ways, this is an overexposure of private information about people’s movements. But it is actually an effective way for the authorities to gain public trust, which in turn is important in preventing people from panicking. - Jung Won Sonn

South Korea’s public health information systems are an extension of the smart city infrastructure the country has built. The response to its repurposing is specific to the culture and demonstrates the interconnectedness of health, tech, politics and society.

Information is for billing, not real-time care

In an article in Stat News, Eric Perakslis, Ph.D., Rubenstein Fellow at Duke University, and Erich Huang, M.D., chief data officer for Duke Health, raise the alarm about US hospital information systems’ readiness for the current crisis.

The EHR is not designed to give a clinician a cohesive picture of the patient. There is no fast way for clinicians to see an essential timeline of a patient. Tabs are split into sections - problems, medications, imaging. There’s friction that takes time to overcome - time that clinicians won’t necessarily have.

The undeniable fact that electronic health record systems are designed to track and bill procedures rather than provide optimal patient care is likely to be on full display as the health system becomes increasingly saturated with Covid-19 patients. - Perakslis and Huang

Prediction-action pairing is vital; keep it simple

AI systems for healthcare need a tight coupling between a prediction and the recommended treatment path. A National Academy of Medicine report on the opportunity for AI in health refers to this as “prediction-action pairing” and it’s critical to the effectiveness of AI in health. It’s also a complex design challenge and one that’s hard to make up on the fly.

In the Stat News article, the authors advocate going back to simple ideas; say, an app that simplifies workflow and uses decision trees with a tight focus on COVID-19 diagnosis, that can be downloaded onto clinicians’ phones and updated continuously.

A simple app that guides frontline clinicians through a decision tree for evaluating and managing potential Covid-19 cases could reduce confusion and variation in care. - Perakslis and Huang
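To make “simple” concrete, here is a minimal sketch of what decision-tree triage logic in such an app might look like. Every question, threshold and disposition below is a placeholder invented for the illustration; none of it is clinical guidance or taken from the Stat News piece.

```python
# Minimal sketch of decision-tree triage logic for a frontline app.
# All criteria and dispositions are illustrative placeholders, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Screening:
    fever: bool
    cough_or_shortness_of_breath: bool
    known_exposure: bool
    high_risk_patient: bool  # e.g. immunocompromised, per local policy

def triage(s: Screening) -> str:
    if s.fever and s.cough_or_shortness_of_breath:
        if s.high_risk_patient:
            return "isolate, test, escalate to clinician"
        return "isolate and test"
    if s.known_exposure:
        return "self-quarantine and monitor symptoms"
    return "routine care; re-screen if symptoms develop"

# Because the whole policy lives in one small, versioned function, it can be
# re-issued to clinicians' phones as guidance changes -- the "updated continuously" part.
```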

The information about this disease is changing rapidly - geographic information, comorbidities, risk factors and new symptoms. Current EHR systems update, at best, every quarter, which means it could be a while before there is a reliable predictive model embedded in current systems.

And while there are a number of organizations developing AI apps and AI-enabled symptom trackers for consumers to check their symptoms, it’s proving to be challenging to update models as new information emerges.

Data and real-time info is a huge opportunity

Research activity is surging and researchers need information as close to real time as possible. A simple app could be very effective as a data-gathering tool.

Because disease outbreaks are also times of intense research activity, well-designed apps may improve digital data collection and help research occur in a way that is less disruptive to clinical care. - Perakslis and Huang

We shouldn’t forget that humans are especially useful in the health information ecosystem. Scribes cut in half the time it takes to document a patient encounter. Scribes are a luxury in normal times but may prove to be essential in a crisis.

Causality + human connection > correlation + automation

AI relies on data. If the data are thin, AI can’t do much. AI doesn’t understand causality, which is of supreme importance in medicine. It’s critical to have a theoretical basis for why something might or might not work. There is no substitute for expertise and human experience.

Right now, the most valuable systems might enable clinicians to share information in an efficacious and privacy-secure way. This isn’t an AI opportunity per se, but it would bring in data rapidly and lay the foundations for new models. Jennifer Ellice, an ER doctor in LA, said as much on Twitter.

Death in real-time

And here’s something I had never considered - there is no robust information system for tracking deaths. The authors of the Stat News article claim this is one of the biggest gaps in the health information flow in the US.

A health infrastructure that cannot properly track death is unprepared to manage catastrophes. - Perakslis and Huang

It’s chilling to read that in Italy doctors are forced to ration access to critical care. The state of Washington is also preparing for this. As someone who lives in an immune-compromised household, this idea is personally terrifying. But “first come, first served” doesn’t work when hospitals are in crisis.

In a public health emergency, you shift from a focus on individual patients to how society as a whole benefits, and that’s a big change from usual care. - George L. Anesi, critical care specialist at the University of Pennsylvania, per The Washington Post.

This idea is an important one for AI - under business-as-usual, AI can optimize for the individual. But with COVID-19, AI has to optimize for society, with all the contingencies and redundancies that requires.

Dean Sittig, professor at the School of Biomedical Informatics in The University of Texas Health Science Center, told me that an important role of AI in health could be to optimize use of scarce shared resources.

Perhaps the most valuable AI to be developed in health is one that tells us who is going to live and who is going to die. - Dean Sittig.

This is all quite grim. There will be many lessons. The crisis will reveal vulnerabilities and gaps that experts have known about for a long time. The only silver lining is that perhaps now we will start listening to those who have always highlighted the interconnectedness of our health and social systems, the value of being prepared for the worst and leaving enough fat in the system to provide contingency. AI’s big role in this has been to optimize everything to the bone - supply chains, human attention and behavior, finance - and now there’s very little cushion.


Also this week:

  • An update on Porkbun from last week’s newsletter. Leanne Carroll, who alerted me to the company’s Name Spinner, got in touch with this:

    “I sent them some emails asking to chat with them about the product and shared examples of what I was seeing. Their CTO got back to me and let me know they've decided to disable the Name Spinner until they have the time to build something without the bias issues. A minor victory! Apparently they were using the Wordnet database which I'm sure many people are and clearly it has some awful association issues.”

  • A Sonder Scheme article on how programmers who try to become AI developers suffer from imposter syndrome because of a lack of conceptual understanding of AI and a dearth of practical support in AI development tools.

  • Plus two members-only articles from Sonder Scheme - new research on how humans judge machines and how to map human and machine roles. New canvases are included and available for download in the membership which will help you work through the concepts and produce practical work product.

  • From MobiHealthNews, a comprehensive list of tech’s role in tracking, testing and treating COVID-19. This will be worth tracking over the next few months to see what sticks in the real world.

  • This interactive model is worth spending some time on. You can experiment with variables such as infectiousness, social distancing measures and so on. It’s not about COVID-19 per se, but about disease processes in general, and it’s handy to play with the sliders and develop a more intuitive sense of “exponential transmission” (a minimal sketch in the same spirit follows this list).

  • And surprise, surprise, Ring’s surveillance doesn’t actually reduce crime, despite the cheery anecdotes. Cnet takes a close look at the data.
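On the “exponential transmission” point above, here is a minimal SIR-style sketch in the spirit of that interactive model. The parameters are illustrative, not fitted to COVID-19; halving the contact rate stands in for social distancing.

```python
# Minimal SIR sketch; parameters are illustrative, not fitted to COVID-19.
def sir(beta: float, gamma: float = 0.1, days: int = 365, i0: float = 0.0001):
    s, i, r = 1.0 - i0, i0, 0.0       # susceptible, infected, recovered (population shares)
    peak = 0.0
    for _ in range(days):             # simple one-day Euler steps
        new_infections = beta * s * i
        recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
        peak = max(peak, i)
    return round(peak, 3), round(r, 3)  # peak share infected at once, cumulative share infected

print(sir(beta=0.30))   # no distancing: sharp, early peak
print(sir(beta=0.15))   # halve contacts ("social distancing"): lower, later peak
```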

Finally, I recommend this wonderful piece by my friend Ephrat Livni in Quartz; The lessons of cherry blossoms are most relevant during the coronavirus pandemic.

Highlights from AI's power problem

My key takeaways from the series published on Quartz

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. So we need to talk more about the power than the tech.

And - big news - we are starting a paid subscriber newsletter, Artificiality Pro, where our focus is on the frontier of AI and its interaction with human intelligence. First up will be the frontier of natural language - who’s ahead, who’s behind and what it will take to be in the lead. 

If you haven’t signed up yet, you can do that here, for both free and paid. If you like this post, share it with a friend :)


This week, Quartz published a series of articles that I spent the last two months researching and writing. The series, called AI’s Power Problem, includes a state of play, a toolkit of people, podcasts, papers and books to follow as well as deep dives into what AI ethicists do and the nature of power in algorithmic systems. The full series is available to Quartz members.

This week’s newsletter is a highlights reel with a few of my personal observations - I hope you can check out the full series.

1. There’s no such thing as unbiased AI

Getting rid of bias in AI is impossible. Bias is inherent in our world; it is an integral part of human experience. We need to get better at recognizing sources of bias before it is propagated at speed and at immense scale. The harm happens before we see it.

2. Bias propagates along existing seams of inequality

Unexpected consequences of AI are a key concern and they pop up all the time. But many of these consequences can be foreseen - it’s just that they won’t be anticipated by people who don’t understand existing prejudice. Common problems in AI come from inequities that are already well known, whether in gender, race or other types of minority representation. The first question to ask is “what do we already know about bias in this context?”

3. There are some places we shouldn’t use AI

AI can be used everywhere, right? This is worth challenging. There are places where AI is a bad fit; where human systems of backup are fragile and easily biased. For example, before we use AI to ration access to scarce resources in human social systems, we may need to reform and repair those systems. A good example is pre-trial evaluation in the criminal justice system, where the practical realities of how judges use algorithmic recommendations appear to run counter to people’s constitutional rights.

4. AI can make discrimination acceptable

AI’s ability to see patterns in data and associate characteristics in non-intuitive ways can be used in online advertising in a practice called “affinity profiling.” Affinity profiling uses personal characteristics and behavior traits rather than self-identified features such as gender or race. But because a person’s affinities are more opaque and less obvious, yet still correlate strongly with characteristics such as race, discrimination can hide in plain sight. It is ethically dubious, a legal grey area, and what scholar and teacher Chris Gilliard calls “friction-free racism.”

5. Fairness isn’t free

AI optimizes for one thing - whether it’s profit or clicks or whatever. It won’t be fair unless it’s told to be. Sacrificing predictive power for fairness can be very expensive. But ignoring fairness is also expensive, mainly in reputational damage and harm to users. AI ethics needs to be driven from the top, which means that leaders need to understand why AI is different. Leaders need to guide people as they translate company values into practical standards.

6. Design is the future

AI design is different. In the age of AI, striving for “good design” means doing more work up front to define intent, anticipate consequences, map power and ensure explanations, justifications and accountabilities are sound. Before AI, a designer’s most valuable resource was glass or steel or plastic but now, it’s human behavior. AI acts in the real world and influences behavior beyond the initial product release, which means designers cannot escape that they are now responsible for the consequences of the use of technology. More diversity and inclusion can provide more design material and more meaningful evaluation of AI. Everyone is an AI designer now.

7. AI needs to think a bit more like us

Humans are very good at thinking about thinking. We can think about strategies for cognition and problem solving. We can reason about our thought processes and come up with ways to solve problems. We are good at reasoning about black boxes - just look at how we think about other people all the time. This is why mental models are so important in AI design - it’s vital to understand how a human will reason about what a machine is doing and how it makes decisions. AI can work better for humans if it too can apply reason to its thinking.

8. It takes time to learn how to avoid the Black Mirror

It takes time to learn how to deal with new technologies. Researchers and lawyers who step in and work pro bono for people harmed by algorithmic systems are at the front lines of the fight. Today this is mostly a question of social justice systems, but unfairness and power imbalance in AI go beyond social justice. As a society we must deal with AI injustice in social and government systems or risk a backlash that deprives communities of AI’s benefits. The lessons from groups such as AI Now apply to all AI applications.

9. Bias is a consequence of being human

One of the reasons that humans have such incredible general intelligence is because we evolved our cognition under serious resource constraints. Our brains are energy efficient but do not have unlimited compute power. We have limited time; our life spans are just not that long. We have limited space inside our skulls. Bias is an outcome of the constraints we are under.

10. Bias in AI will force us to face it in ourselves

AI can reveal our biases in ways that force us out of denial and engage us in deeper reflection and conversation. The people with the “problem” are rarely the ones who get to design or have a say over the use of AI; they are the ones that are usually the decision subjects rather than the decision makers. AI design should include power mapping so that people understand how power is instantiated in AI and then design an ethical response. Our societies are not static and biases are progressively revealed. AI can play a role in progressing our societies, but are we ready?


People to follow:

Josh Lovejoy, Microsoft Cloud and AI

Annette Zimmerman, Princeton University

Jacob Metcalf, Data & Society

Jason Schultz, NYU, AI Now

John C. Havens, IEEE 

Maria Axente, PwC UK

Chelsea Barabas, MIT

Michael Kearns, University of Pennsylvania

A must-read book: The Ethical Algorithm, Michael Kearns and Aaron Roth.

This relatively short read is the best book I’ve come across for understanding the science of ethical AI. It’s fabulously written and makes some otherwise difficult technical concepts easy to understand. I hardly ever needed to re-read paragraphs (which is how I measure this stuff). It’s also on point - highly prioritized, with key concepts perfectly curated. If you only read one book, this one is it.


Here in the Cascades of Central Oregon, we are on lock down in an attempt to #FlattenTheCurve. Safeway is out of chicken (weird) and toilet paper (less weird but still perplexing). Kids are home from school or on their way. WHO let the dog out - she has been cleared, thankfully, so at least she can go out. Stay safe, people. Hopefully we can get back to talking about AI soon.

Ep. 03: Are AI ethicists making any difference?

  

In this episode, Dave interviews Helen about her recent article in Quartz, “Are AI ethicists making any difference?” Some of the topics we explore include:

  • Why is there a rush to hire AI ethicists in the tech industry?

  • What do AI ethicists do?

  • Why are people skeptical and what is “ethics washing” and “ethics bashing?”

  • What does Jacob Metcalf of Data & Society mean by saying that ethics is “the vessel which we use to hold our values?”

  • What does Josh Lovejoy of Microsoft mean by saying that ethics need not be seen as a philosophical add-on “but as just good design?”

  • What are AI checklists and why is their use good practice?
