Humans think in terms of 1, 2, 3, 4, lots and lots, while machines think in billions
Helen Edwards | 19 hr
This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it.
We are starting Artificiality Pro, a paid subscriber version of this newsletter, where our focus is on the frontier of AI and its interaction with human intelligence.
If you haven’t signed up yet, you can do that here, for both free and paid. If you like this post, share it with a friend :)
The greatest shortcoming of the human race is our inability to understand the exponential function. - Physicist Albert Allen Bartlett
Humans struggle to understand exponential growth. Our brains have a linearity bias: we tend to see change in linear terms, so we fail to grasp the magnitude of exponential growth.
There are tons of great examples of how exponential growth runs counter to our intuition. One favorite of math teachers is to ask how many times you would need to fold a piece of paper for it to be thick enough to reach the moon. The answer: about 42, assuming standard 0.1 mm paper.
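The arithmetic is easy to check, though the exact count depends on the paper thickness you assume. A quick sketch, taking standard 0.1 mm office paper and the average Earth-Moon distance:

```python
# How many folds until a sheet of paper is thick enough to reach the moon?
# Assumptions: 0.1 mm paper, average Earth-Moon distance of ~384,400 km.
MOON_DISTANCE_M = 384_400_000   # meters
PAPER_THICKNESS_M = 0.0001      # 0.1 mm

thickness = PAPER_THICKNESS_M
folds = 0
while thickness < MOON_DISTANCE_M:
    thickness *= 2  # each fold doubles the stack's thickness
    folds += 1

print(folds)  # 42
```

Forty-one folds gets you only about 220,000 km; one more doubling overshoots the moon. That jump from "halfway" to "past it" in a single step is exactly what linear intuition misses.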
Exponential changes run counter-intuitive to the way our linear brains make projections about change, & so we don’t realize how fast the future is coming. - Jason Silva
There are many reasons we weren’t prepared for the pandemic, but this feature of our cognition is part of the story. Countries that have previously dealt with SARS know what exponential growth feels like; they have been through the emotional conditioning required to mobilize quickly, and they have been able to act before catastrophe becomes visible.
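What those countries internalized can be illustrated with toy numbers. Suppose (hypothetically) an outbreak starts at 100 cases and doubles every three days, and compare that with a linear projection that adds the same 100 cases per period:

```python
# Toy comparison: exponential vs. linear growth of an outbreak.
# Hypothetical numbers: 100 initial cases, doubling every 3 days, over 30 days.
initial = 100
doublings = 30 // 3  # ten doubling periods in 30 days

exponential = initial * 2 ** doublings  # multiplies each period
linear = initial + initial * doublings  # adds the same increment each period

print(exponential)  # 102400
print(linear)       # 1100
```

A linear mind projects roughly a thousand cases after a month; the exponential reality is a hundred thousand. The gap between those two numbers is the gap between complacency and early action.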
It’s sometimes difficult even to spot that a problem is exponential in the first place. Take the Birthday Paradox: in a room of just 23 people, there’s a 50-50 chance that at least two people share a birthday. The result is strange and counter-intuitive.
It’s only a “paradox” because our brains can’t handle the compounding power of exponents. We expect probabilities to be linear and only consider the scenarios we’re involved in. The fact that we neglect the 10 times as many comparisons that don’t include us helps us see why the “paradox” can happen. - Kalid Azad
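The compounding Azad describes is easy to verify: 23 people make 23 × 22 / 2 = 253 pairs, and the probability that no pair matches shrinks multiplicatively with each person added. A short sketch, assuming 365 equally likely birthdays:

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (ignoring leap years).
def shared_birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for k in range(n):
        # the (k+1)-th person must avoid the k birthdays already taken
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(shared_birthday_probability(23), 3))  # 0.507
```

With 23 people the odds of a match are already better than even, and by 57 people they exceed 99% — the multiplication does the work our pairwise intuition can’t.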
The epidemic exposes a similar self-centered bias: many people consider only the virus’s impact on themselves, not fully recognizing that every person, everywhere, is now connected in a chain of infection.
In our community in Oregon, many people have no intuition of these two compounding biases. Some people’s instincts are to go out and do things, almost as an act of willpower over the virus or denial about the risks to themselves and others. We’re observing a complex mix of cultural values (freedom), denial (it won’t happen to me) and an inability to comprehend what’s coming at us (can’t think exponentially, understand the connectivity of communities or accurately evaluate risk).
Even with excellent visualizations of the spread of infection and the power of social distancing, people don’t seem to be able to get their heads around how quickly this disease spreads. On social media, I find myself in debates with people about basic facts. Reason fails far more than it should.
Reason didn’t evolve because inquiry, science and progress are inevitable. Reason evolved to prevent us from getting screwed by other members of our group. We are skilled at spotting flaws in other people’s arguments and not skilled at spotting them in our own. This is confirmation bias and it is adaptive because of our hyper-social nature — winning arguments and convincing others you are right builds social support. Unfortunately, in a politically polarized, hyper-social-media-AI-powered connected world, confirmation bias is beginning to look like a maladaptive strategy.
AI has a role to play. Because humans are good at reflecting on our own cognitive strategies (something Tom Griffiths calls meta-reasoning), there are opportunities to develop AI that acts as a “cognitive crutch.” AI can reason in many dimensions and handle exponential growth, so it can help us think differently about the world. It can show us alternate futures in ways that overcome our cognitive limitations and motivate us to act differently.
Our future selves have preferences that our current selves fail to act on because we get tempted in the present. But we are good at deciding on a course of action based on a realistic understanding of the future, then setting a strategy to get ourselves there — exercise targets on an Apple Watch, for instance.
Complacency will be a huge enemy in the US. People will fatigue, especially as fear declines and people adjust to new probabilities. We have been two steps behind the virus from the beginning — everything that seemed impossible last week is now reality this week, things that should have been done two weeks ago have only now been put into effect. We know we cannot catch up to an exponential curve yet we seem unable to act ahead of it. It’s urgent to find ways to help people more effectively see ahead, to develop an intuitive sense of what it all means and what they need to do.
Our species is able to imagine, to develop scenarios and to think ahead. But an enemy that moves exponentially has our cognition beat. We need AI and tech that turns numbers into intuition, gives us cognitive crutches and scaffolds our resolve to act on hard choices today so we can protect tomorrow.
Also this week:
Sonder Scheme Pro-members article on new research into how people react to autonomy-decreasing AI, i.e., over-personalization.
Spooky drone footage of San Francisco in the lock-down.
Fascinating visualization of how forgoing social distancing affects potential spread as spring breakers leave Florida for various parts of the US.
Honestly, time for some light relief from Twitter. ICYMI.
Dog in leaves, home exercise, sock-puppet entertainment, how to take a break from the kids….