Artificiality

AI's strange effects on intuition

Apr 18, 2020
This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

We are starting Artificiality Pro, a paid subscriber version of this newsletter, where our focus is on the frontier of AI and its interaction with human intelligence. 

If you haven’t signed up yet, you can do that here, for both free and paid. If you like this post, share it with a friend :)


Everyone likes to follow their intuition because it’s the ultimate act of trusting oneself.

Intuition plays a huge role in decisions. Even with AI making more decisions and acting on behalf of humans, our desire to use our intuition is not going to go away. As designers, academics and users gain more experience with AI, we get new insights into its impact on intuition in human-machine systems.

In a video presentation this week hosted by the Berkman Klein Center for Internet & Society, Sandra Wachter, associate professor at the Oxford Internet Institute, described the important role of intuition in discrimination law in the EU and how AI disrupts people’s ability to rely on it.

Judges use their intuition and common sense when assessing “contextual equality” to decide whether someone has been treated unfairly under the law. Wachter describes this agility of the EU legal system as a “feature, not a bug.” Courts generally “don’t like statistics,” because statistics can easily lie and tend to skew the “equality of weapons,” handing the advantage to those who are better resourced. “Common sense” is part of the deal, but when discrimination is caused by algorithms that process data in multi-dimensional space, common sense can fall apart. Experts need technical measurements that help them navigate new and emergent grey areas.

Finding and fixing discrimination has also relied predominantly on intuition. Humans discriminate through negative attitudes and unintentional biases, which create a “feeling” of inequality. Equality is observable at a human level. We can see that others are getting promoted. We know that everyone gets the same price in the supermarket.

But machines discriminate differently than humans do. People do not know that they haven’t been served an ad for a job. A candidate being assessed in a video game has no hope of gleaning any causal information that might explain a correlation between click speed and predicted job performance. Data and AI design stratify populations differently than we traditionally, and intuitively, do.

AI is valued because of its ability to process data at scale and find unintuitive correlations and patterns between people. AI is creating new groups that are being treated unequally but people in these groups have no protection because they do not fit into traditional buckets. There’s no legal protection for “slow clicker” as there is for age.
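To make that concrete, here’s a minimal sketch (purely illustrative; the behavioral features and numbers are invented, and this is not any real vendor’s system) of how an off-the-shelf clustering algorithm can mint a group like “slow clicker” out of behavioral data no human would think to classify by:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical behavioral features logged during a hiring assessment game:
# [mean click interval (ms), task switches per minute, error rate]
rng = np.random.default_rng(0)
candidates = rng.normal(
    loc=[450.0, 3.0, 0.05], scale=[120.0, 1.0, 0.02], size=(200, 3)
)

# The algorithm partitions people purely on these signals.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(candidates)

# A cluster of "slow clickers" emerges with no human-legible label, no mapping
# to a protected class, and no way for its members to know they're in it.
for g in range(3):
    members = candidates[groups == g]
    print(f"group {g}: n={len(members)}, "
          f"mean click interval={members[:, 0].mean():.0f} ms")
```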

Machines are creating a world where discrimination may happen more but we sense it less. Our intuitions are not only honed to detect unequal treatment at a human scale, they are honed to traditional classifications such as gender, race and age; things we can perceive within our conscious awareness. Data and correlation about digital behaviors, unrevealed preferences or statistical affinities do not make our alarm bells ring in the same way.

It’s not enough to design AI without considering its impact on human intuition within the human-machine system. This means more than designing for control and human-machine-human hand-offs. It means providing scaffolds for developing intuition and providing machines with more nuanced and contextual definitions of fairness, grey areas and human bias.

In some respects, making humans more like machines is straightforward. We know how to do it because we can test it on ourselves — on our own intuitions! We can design interfaces that provide information that counteracts automation bias and prompts useful skepticism and "system 2" thinking. For example, we can show users not only how much confidence an AI has in something but also how much confidence it lacks in that same thing (h/t Josh Lovejoy). This can refine someone’s intuition and even impart a sense of an alternative world that might exist beyond the algorithm’s personalized output. We can design AI assistants for experts where it’s clear upfront that the AI is there to account for confirmation bias, not to be a perfectly objective, neutral advisor. Above all, designing for humans to be more like machines is easier because accountability remains so clearly with the human.
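Here’s a minimal sketch of that pattern (the labels and probabilities are invented; assume any classifier that exposes class probabilities): surface the confidence the model does not have alongside the confidence it does, plus the plausible alternatives.

```python
def render_prediction(probs: dict[str, float], floor: float = 0.05) -> str:
    """Show the top prediction together with the confidence the model
    did NOT place in it, plus any alternatives above `floor`.
    The point is to prompt skepticism instead of automation bias."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_label, top_p = ranked[0]
    lines = [f"Best guess: {top_label} "
             f"({top_p:.0%} confident, {1 - top_p:.0%} not)"]
    alternatives = [(label, p) for label, p in ranked[1:] if p >= floor]
    if alternatives:
        lines.append("Also plausible: " +
                     ", ".join(f"{label} ({p:.0%})" for label, p in alternatives))
    return "\n".join(lines)

# Invented example:
print(render_prediction({"approve": 0.62, "review": 0.28, "deny": 0.10}))
# Best guess: approve (62% confident, 38% not)
# Also plausible: review (28%), deny (10%)
```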

Making machines more human is trickier. How can AI designers, as Wachter puts it, “embrace interpretive flexibility” and create more agile machines?

Perhaps first, should they?

There’s an ethical decision to make about how much human judgment and subjectivity we should even try to automate, especially when it comes to fairness. Much of the recent progress — in fact, pretty much all of what we would consider to be “modern AI” — is because machines now learn things that humans evolved to do. Machines can see, converse and even mimic empathy. Part of the puzzle of design is that we don’t know how we do these things so we have fewer intuitions about how to design for them.

How far should we go to design uniquely human skills into machines? It should be table stakes to design a human-machine system that plays to the strengths of both, but it’s harder than it looks because good systems are highly dynamic; a well-designed system should push the boundaries of both human and machine skills rather than leave either stuck in the zone of “so-so automation.” Humans are good at working inside the grey areas because this is where we work together to solve problems, advance knowledge and deal with unpredictable events. Viewed in this light, we shouldn’t be trying to remove intuition; we should be trying to enhance it.

We can find ways to have AI reveal its knowledge to us. Innovations in explainability should aim to help people refine existing, and form new, intuitions. Ultimately, clever design may even help us see what’s only been visible to the machine, much like viewing an old-fashioned negative. This should be a core goal of explainability in AI and user design.
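As a small illustration of what “revealing its knowledge” might look like (a hand-rolled sketch for a simple linear scorer with invented features and weights, not a claim about any production explainability tool), a score can be decomposed into per-feature contributions so a person can start forming intuitions about what the machine is weighing:

```python
# A linear scorer's output is just a sum of per-feature contributions,
# so each feature's share of the decision can be shown directly.
weights = {"click_speed": -1.3, "typing_bursts": 0.8, "session_length": 0.4}
applicant = {"click_speed": 0.9, "typing_bursts": 0.2, "session_length": 1.1}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:+.2f}")
# List features by how strongly they moved the score, in either direction.
for feature, c in sorted(contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True):
    direction = "pushed the score down" if c < 0 else "pushed the score up"
    print(f"  {feature}: {c:+.2f} ({direction})")
```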


The Sonder Scheme YouTube channel is live!

  • Our most popular talks are now on YouTube! Check them out here. My personal favorite is how AI and personalization affect personal agency. But there’s also Introduction to AI Ethics, Why AI Matters, What is AI? - How Machines Learn and How to Think About Personalization Without Over-Personalizing.


  • Do we need to worry about AI being dominant? No, because AI doesn’t have testosterone. This is according to Yann LeCun, Facebook’s chief AI scientist. Read my critique on Sonder Scheme, which pulls apart what LeCun says versus what the science says. “Testosterone is not a one-shot route to domination.”

  • Andrew Ng's startup Landing AI released a blog post with a demo video showing off a new social-distancing detector for use at work. It was built in response to customer requests to track employees in settings such as factory floors.

  • Interesting Q&A with Tawana Petty, one of the key people behind the fight against the use of facial recognition in urban surveillance.

  • ICYMI: an excellent piece from The Atlantic on “our pandemic summer.” Ed Yong pulls many threads together and leaves you with a sense of just how long this haul will be.

  • Cansu Canca on the ethics of tracking, on Medium, plus a link to a video discussion with her and Micha Benoliel this week on All Tech is Human.

  • And, finally, a very interesting comment this week from Shoshana Zuboff, the person who brought us Surveillance Capitalism and the ideas behind behavioral surplus… In an article in La Repubblica (Italy), she reframed pandemic tracking apps away from tech and toward more traditional views of public health, arguing that tracking apps should be mandatory, like vaccines. A quote from her posted on Twitter translates roughly to: “This is not the moment for dystopian scenarios. We need to return to a world where data is used for the benefit of all people, not just by profit-driven multinational corporations.”
