AI as a cognitive crutch
How AI could help our reasoning when it's most flawed
Before we get to the good stuff…we’d love your help.
We are working on a new podcast series about how human biology can inform AI design. Great reviews on Apple podcasts really, really help.
Sharing this newsletter helps us reach more people.
To understand how AI might change our reasoning, we first have to understand our reasoning. Intuition is our "go-to" for making decisions. It evolved in the real world of our senses, and it's pretty good at what it does. It exists because our thinking developed under resource constraints: we have limited energy from food, limited space in our skulls, and limited life spans, so certain biases are built in. Intuition is fast and efficient, but it is unreliable when we are up against something new, when we lack expertise, or when we get no feedback on our judgments.
Many people are familiar with Kahneman and Tversky’s System 1 (intuition) and System 2 (analysis), and with the various cognitive biases they catalogued. But being familiar with biases doesn’t mean we can avoid them. Kahneman himself says he’s subject to all of them.
Try this. If it takes 5 machines 5 minutes to make 5 lattes, how long would it take 100 machines to make 100 lattes?
The intuitive answer is 100 minutes; the correct answer is 5 minutes, because each machine makes one latte in 5 minutes, so 100 machines make 100 lattes in the same 5 minutes. Even if you resisted the urge to blurt out 100, you almost certainly found that it came to mind. It’s very hard to suppress intuition.
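The arithmetic behind the correct answer can be sketched in a few lines. (The function name and structure here are just an illustration, not from the original puzzle.)

```python
# The key quantity is the per-machine rate, not the totals in the question.
# 5 machines make 5 lattes in 5 minutes -> each machine makes 1 latte in 5 minutes.
MINUTES_PER_LATTE_PER_MACHINE = 5 * 5 / 5  # = 5 minutes

def time_to_make(n_lattes, n_machines):
    """Minutes for n_machines working in parallel to make n_lattes."""
    # Each machine handles n_lattes / n_machines lattes, one after another.
    return MINUTES_PER_LATTE_PER_MACHINE * n_lattes / n_machines

print(time_to_make(5, 5))      # the original setup: 5.0 minutes
print(time_to_make(100, 100))  # 5.0 minutes, not the intuitive 100
```

Scaling machines and lattes together leaves the time unchanged, which is exactly what intuition misses.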
Sometimes, simply noticing an error is enough to correct it. An AI could point out our errors and leave the response to us: a nudge to point us in the right direction.
Another important aspect of reasoning is that we think in terms of cause and effect. Humans care deeply about the causal structure of the world because it enables us to generalize. Causal thinking is natural and we have a tendency to find causes even where there are none. Causal reasoning introduces variability in our thinking because we sample from memory and from data in ways which are “noisy,” to quote Kahneman again. And this gets us into trouble when we work with other people who look at the same data and interpret its meaning differently. AI could provide us with a kind of cognitive crutch in decision making if it can tell us alternative causes for the things we observe.
So, a partial answer: AI changes our reasoning because it can change what we think is meaningful. AI can guide our attention, pique our curiosity, or prompt us to be more analytical. AI can talk about patterns and probability, but it may ultimately be best used to help us think differently about the information in front of us. For a moment, we can suppress intuition, wallow in the problem a little longer, or be curious about contradictions between the data and our experience.
If your relationship with a machine is that it compares you with a “data-driven” you and leaves you to decide on your own action, are you “irrational” if you choose differently? Maybe you aren’t really you if you change your choice based on what a machine says is the better decision. How do you know it’s really better for the real you, and not for the you that the machine thinks you are?
Resolving this incompatibility isn’t going to be a simple task. For a start, it will require a fundamental rethink of privacy. How a machine knows what it knows will be just as important as what it knows. But it also involves a broader definition of agency. In our individualistic societies we think of agency as having the freedom to choose. Bayo Akomolafe gives us an alternative framing: agency is a sense of indebtedness to something greater. We decide because we want to participate. We have agency precisely because the group has already decided.
More resources for these ideas:
Book: Noise: A Flaw in Human Judgment
Paper: Tom Griffiths, Understanding Human Intelligence through Human Limitations
Wikipedia: Cognitive Reflection Test
Book: The Knowledge Illusion
Podcast: How MBS wallows in the problem
Quartz (paywall): The quest to make AI less prejudiced
Other things that caught our eye.
Shoshana Zuboff in the NYT - You are the object of a secret extraction operation. (Paywall). Timely refresh of the principles of surveillance capitalism.
Facebook AI Research’s view of commonsense reasoning as the “dark matter” of intelligence. A strange choice of terminology, but a worthwhile read if you are interested in Facebook’s approach to self-supervised learning.
Excellent article from the Nielsen Norman Group on the manipulation of metrics in social decision making. “Recognize that all metrics are limited in their ability to describe the world fully and accurately; every metric that you collect reflects a decision about what you consider to be important.”