Highlights from AI's power problem
My key takeaways from the series published on Quartz
Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild: how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. So we need to talk more about the power than about the tech.
And - big news - we are starting a paid subscriber newsletter, Artificiality Pro, where our focus is on the frontier of AI and its interaction with human intelligence. First up will be the frontier of natural language - who’s ahead, who’s behind, and what it will take to be in the lead.
If you haven’t signed up yet, you can do that here, for both free and paid. If you like this post, share it with a friend :)
This week, Quartz published a series of articles that I spent the last two months researching and writing. The series, called AI’s Power Problem, includes a state of play; a toolkit of people, podcasts, papers, and books to follow; and deep dives into what AI ethicists do and the nature of power in algorithmic systems. The full series is available to Quartz members.
This week’s newsletter is a highlights reel with a few of my personal observations - I hope you can check out the full series.
1. There’s no such thing as unbiased AI
Getting rid of bias in AI is impossible. Bias is inherent in our world; it is an integral part of human experience. We need to get better at recognizing sources of bias before they propagate at speed and at immense scale. The harm happens before we see it.
2. Bias propagates along existing seams of inequality
Unexpected consequences of AI are a key concern, and they pop up all the time. But many of these consequences can be foreseen - just not by people who don’t understand existing prejudice. Common problems in AI come from inequities that are already well known, whether in gender, race, or other forms of minority representation. The first question to ask is “what do we already know about bias in this context?”
3. There are some places we shouldn’t use AI
AI can be used everywhere, right? This is worth challenging. There are places where AI is a bad fit - where the human systems backing it up are fragile and easily biased. For example, before we use AI to ration access to scarce resources in human social systems, we may need to reform and repair those systems. A good example is pre-trial evaluation in the criminal justice system, where the practical realities of how judges use algorithmic recommendations appear to run counter to people’s constitutional rights.
4. AI can make discrimination acceptable
AI’s ability to see patterns in data and associate characteristics in non-intuitive ways can be used in online advertising in a practice called “affinity profiling.” Affinity profiling uses personal characteristics and behavioral traits rather than self-identified features such as gender or race. But because a person’s affinities are more opaque and less obvious, yet still correlate strongly with characteristics such as race, discrimination can hide in plain sight. This is ethically dubious, a legal grey area, and what scholar and teacher Chris Gilliard calls “friction-free racism.”
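One way to see how discrimination can hide behind a proxy is a toy simulation. This is a hypothetical sketch - the groups, the “affinity” score, and every number in it are invented, not from the article. The targeting rule never touches group membership, yet because the affinity trait correlates with group, the outcomes split sharply along group lines anyway.

```python
import random

random.seed(0)

# Toy population: 10,000 people from two groups. "Affinity" is a
# behavioral trait the advertiser can observe; group membership is
# never used by the targeting rule. All numbers are invented.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # The affinity score correlates with group membership - a proxy.
    affinity = random.gauss(0.7 if group == "A" else 0.3, 0.15)
    people.append((group, affinity))

# Target the ad purely on affinity, never on group.
shown = [(group, affinity > 0.5) for group, affinity in people]

# Ad exposure rate per group.
rate = {
    g: sum(s for gg, s in shown if gg == g)
       / sum(1 for gg, _ in shown if gg == g)
    for g in ("A", "B")
}
print(rate)  # group A sees the ad far more often than group B
```

The point of the sketch: removing the protected attribute from the inputs does not remove it from the outcomes as long as a correlated proxy remains.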
5. Fairness isn’t free
AI optimizes for one thing - whether that’s profit or clicks or something else - and it won’t be fair unless it’s told to be. Sacrificing predictive power for fairness can be very expensive. But ignoring fairness is also expensive, mainly in reputational damage and harm to users. AI ethics needs to be driven from the top, which means that leaders need to understand why AI is different. Leaders need to guide people as they translate company values into practical standards.
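The price of fairness can be made concrete with a minimal sketch, assuming a score-threshold classifier and entirely invented data (this is an illustration, not anything from the series). Here a statistical-parity constraint - selecting both groups at the same rate - is imposed on a classifier whose scores carry a historical group gap, and the accuracy it costs is measured directly.

```python
import random

random.seed(1)

# Toy data: scores with a group gap baked in (say, from biased
# historical records); the true label tracks the score plus noise.
data = []
for _ in range(20_000):
    group = random.choice(["A", "B"])
    score = random.gauss(0.6 if group == "A" else 0.4, 0.2)
    label = score + random.gauss(0, 0.1) > 0.5
    data.append((group, score, label))

def accuracy(threshold_by_group):
    hits = sum((s > threshold_by_group[g]) == y for g, s, y in data)
    return hits / len(data)

# Unconstrained: one accuracy-optimal threshold for everyone.
acc_plain = accuracy({"A": 0.5, "B": 0.5})

# Fairness constraint (statistical parity): per-group thresholds
# chosen so both groups are selected at the same overall rate.
overall_rate = sum(s > 0.5 for _, s, _ in data) / len(data)

def parity_threshold(g):
    scores = sorted((s for gg, s, _ in data if gg == g), reverse=True)
    return scores[int(overall_rate * len(scores))]

acc_fair = accuracy({g: parity_threshold(g) for g in ("A", "B")})
print(acc_plain, acc_fair)  # the parity constraint costs accuracy
```

The gap between the two accuracy numbers is the “fairness isn’t free” trade-off in miniature: the constrained classifier must deviate from the accuracy-optimal threshold, and someone has to decide whether that cost is worth paying.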
6. Design is the future
AI design is different. In the age of AI, striving for “good design” means doing more work up front to define intent, anticipate consequences, map power, and ensure explanations, justifications, and accountabilities are sound. Before AI, a designer’s most valuable resource was glass or steel or plastic; now it’s human behavior. AI acts in the real world and influences behavior beyond the initial product release, which means designers cannot escape responsibility for the consequences of how their technology is used. More diversity and inclusion can provide more design material and more meaningful evaluation of AI. Everyone is an AI designer now.
7. AI needs to think a bit more like us
Humans are very good at thinking about thinking. We can think about strategies for cognition and problem solving. We can reason about our thought processes and come up with ways to solve problems. We are good at reasoning about black boxes - just look at how we think about other people all the time. This is why mental models are so important in AI design - it’s vital to understand how a human will reason about what a machine is doing and how it makes decisions. AI can work better for humans if it too can apply reason to its thinking.
8. It takes time to learn how to avoid the Black Mirror
It takes time to learn how to deal with new technologies. Researchers and lawyers who step in and work pro bono for people harmed by algorithmic systems are at the front lines of the fight. Today this is mostly a question of social justice, but unfairness and power imbalances in AI systems go beyond that. As a society we must deal with issues of AI injustice in social and government systems or risk a backlash that deprives communities of AI’s benefits. The lessons from groups such as AI Now apply to all AI applications.
9. Bias is a consequence of being human
One of the reasons humans have such incredible general intelligence is that we evolved our cognition under serious resource constraints. Our brains are energy efficient but do not have unlimited compute power. We have limited time; our life spans are just not that long. We have limited space inside our skulls. Bias is an outcome of the constraints we are under.
10. Bias in AI will force us to face it in ourselves
AI can reveal our biases in ways that force us out of denial and engage us in deeper reflection and conversation. The people with the “problem” are rarely the ones who get to design AI or have a say over its use; they are usually the decision subjects rather than the decision makers. AI design should include power mapping, so that people understand how power is instantiated in AI and can then design an ethical response. Our societies are not static, and biases are progressively revealed. AI can play a role in progressing our societies, but are we ready?
People to follow:
Josh Lovejoy, Microsoft Cloud and AI
Annette Zimmerman, Princeton University
Jacob Metcalf, Data & Society
Jason Schultz, NYU, AI Now
John C. Havens, IEEE
Maria Axente, PwC UK
Chelsea Barabas, MIT
Michael Kearns, University of Pennsylvania
A must-read book: The Ethical Algorithm, by Michael Kearns and Aaron Roth.
This relatively short read is the best book I’ve come across for understanding the science of ethical AI. It’s fabulously written and makes some otherwise difficult technical concepts easy to understand. I hardly ever needed to re-read paragraphs (which is how I measure this stuff). It’s also on point - highly prioritized, with key concepts perfectly curated. If you only read one book, this is it.
Here in the Cascades of Central Oregon, we are on lockdown in an attempt to #FlattenTheCurve. Safeway is out of chicken (weird) and toilet paper (less weird but still perplexing). Kids are home from school or on their way. WHO let the dog out - she has been cleared, thankfully, so at least she can go out. Stay safe, people. Hopefully we can get back to talking about AI soon.