Steven Sloman: Trusting knowledge

  

On a scale of 1 to 10, rate how well you understand how a toilet works. Now take a moment to explain how it works. After you’ve tried to explain it, does your rating of how well you understand change? If you’re like most people, the act of trying to explain will reveal that you don’t understand it as well as you thought you did. This is the knowledge illusion: we feel we know more than we do because we get our knowledge from our community, both human and machine. What’s so interesting about this illusion is that it says a lot about how we should approach other people, and just as much about how we should approach putting our knowledge inside machines. We talked with Steven Sloman, Professor of Cognitive, Linguistic and Psychological Sciences at Brown University, who, along with Philip Fernbach, popularized this idea in the book The Knowledge Illusion. How does consciously recognizing that our knowledge is derived from our community affect our experience in the world?

Human & machine intuition

Introducing the new Artificiality podcast series

The Artificiality podcast is back with a new focus: understanding the emerging community that is humans and machines. If you want to learn about how humans work with AI and big data, but you’re not sure where to start, this podcast is for you. We take the latest from the human side—decision science, psychology and design—and put it together with advances in artificial intelligence and big data so that you can understand how to work better with machines. And your fellow humans. We ask, how do we improve our learning and decision making when machines are part of our community? How can we develop an intuition for how a machine thinks and what it knows? How can we use machines to help us work better with others?

We founded Sonder Studio to help people be more human in the age of AI. We’re on this learning journey too so we strive to find the frontiers, to ask the best questions and stay curious. This season we interview some of the top minds working at the intersection of humans and machines and make sure we have a few laughs along the way.

We will be releasing episodes over the coming weeks and here’s what to look forward to:

Steven Sloman, Professor of Cognitive, Linguistic and Psychological Sciences at Brown University and co-author of The Knowledge Illusion, on how deriving our knowledge from our community affects our experience in the world. Are machines part of our community?

Tania Lombrozo, Arthur W. Marks ’19 Professor of Psychology and director of the Concepts & Cognition Lab at Princeton University, on why humans love to use intuition even when surrounded by data and when even simple algorithms can be more accurate than human judgment.

Josh Lovejoy, Head of Design for Ethics & Society at Microsoft, on what’s different about making decisions with machines and why human-centered design is so important when working with AI.

Kate O’Neill, founder of KO Insights and author of Tech Humanist, on what it means to be a humanist in the age of technology. How can we put human values into a machine? How can we even know what those human values are?

Mollie Petit, Data Visualization Developer at Observable, on what matters in modern visualization when it’s all about big data and helping people understand uncertainty.

Michael Bungay Stanier, founder of MBS Works and author of the best-selling The Coaching Habit and The Advice Trap, on what makes people different from machines. Well, one thing is curiosity, which drives humans but not machines.

Jevin West, Professor in the Information School at the University of Washington, co-founder of the DataLab, director of the Center for an Informed Public and co-author of Calling Bullshit, on what it means to be data literate in a world of big data and AI. Now that so many decisions rely on information that is only machine-readable, and our statistical intuitions, which were bad before, are now practically useless, what is data literacy in the age of AI and how important is it?

As a subscriber you will get an email when a new episode is released and you can listen on your favorite podcast platform. We’d love to hear from you at helen@getsonder.com or dave@getsonder.com and hope you’ll enjoy this series.

Ep. 11: Rana el Kaliouby of Affectiva on emotional AI

  

In this episode, we have a conversation with Rana el Kaliouby, CEO and Co-Founder of Affectiva, about emotional AI, bias in AI and her new book, Girl Decoded. Rana is a leader in her views on ethical AI and how to design AI systems for humans—two topics that are near to our hearts! We hope you enjoy this conversation as much as we did.

Note we open with a short conversation about IBM’s announcement that it will no longer sell general purpose facial recognition technology, an unusual step for a big tech platform: canceling a product line for ethical reasons.

Ep. 10: Renée Cummings of Urban AI on urban AI

  

In this episode we have a conversation with Renée Cummings, Founder & CEO of Urban AI, about the issues and opportunities for AI in urban settings. While we recorded this before the current protests, our conversation with Renée couldn’t be more timely as we talk about the use of AI in law enforcement and recruiting. We think Renée’s specialized focus on AI in urban settings will be essential to understand as we, as a society, seek to bring people together and rise up out of the conflicts and economic strife that are hurting so many today.

Microsoft publishes the science of human-machine collaboration

This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

If you haven’t signed up yet, you can do that here.

Don’t forget….

This summer we’ve carved out time for a virtual summer camp for middle and high schoolers who want to learn how to build an AI start-up. And because we know that everyone has had enough time on a computer, we will have kids out and about doing data collection, market research and other activities that aren’t all about Zoom. We’d love it if you could help spread the word and share.


Microsoft researchers recently published one of the most exciting advances in AI this year, IMO. I’m not referring to the company’s announcements regarding huge new NLP neural networks using semi-supervised learning, or the company’s new supercomputer, or the new tools for fairness in AI, although these are all impressive and useful. The new research is in human-machine collaboration.

Human-machine collaboration has been a hot area for years but, outside of social robots, it’s been more the domain of non-technical people than technical people. There simply hasn’t been much in the way of a scientific methodology for engineers to take a hybrid approach. While machine learning engineers focus on the frontier of mathematical and computational capabilities, in the absence of methods for human-machine teamwork, technology has generally marched on in isolation from the human factors. So while we can talk about “augmentation” of human skills, the reality is often different: shitty automation, sub-optimal task design, algorithm aversion, hidden bias.

New research changes this. Now we have the math to train models in hybrid human-machine systems, taking into account what it means to consult a human.

Here it is (or rather, a flavor of it):
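To give a sense of the formalization (this is a simplified sketch in my own notation, not the paper’s exact objective), the system is trained to maximize the expected utility of the team, trading off the machine’s own prediction against the value, and the cost, of handing an instance to a human:

$$
\max_{h,\,d}\; \mathbb{E}_{(x,y)}\Big[\, d(x)\,\big(U(\mathrm{human}(x),\,y) - c_{\mathrm{query}}\big) \;+\; \big(1 - d(x)\big)\,U\big(h(x),\,y\big) \Big]
$$

Here $h$ is the machine’s predictor, $d(x) \in \{0,1\}$ decides whether to consult the human on instance $x$, $U$ is the task utility (which can penalize different error types differently) and $c_{\mathrm{query}}$ is the cost of asking the human.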

Got that? Great.

In a new paper called “Learning to Complement Humans,” Eric Horvitz, director of research at Microsoft, along with fellow researchers from Microsoft and Harvard, explains how they have formalized the math for designing AI that helps humans and machines work better together. It’s this mathematical formalization that makes the paper so important: now there’s no excuse for AI practitioners not to design AI systems that leverage the best of human and machine capabilities at the same time.

The methods presented are aimed at optimizing the expected value of human-machine teamwork by responding to the shortcomings of ML systems, as well as the capabilities and blind spots of humans. 

The standard way of training models that are used to complement human decision making is to train in isolation: make the model as accurate as possible on its own before putting it in front of a human. This new approach is different: the model is trained in such a way that it is forced to consider the distinct abilities of humans and machines. Training takes into account the “cost” of consulting a human and uses well-established AI techniques such as backpropagation to encode the unique skill of a human alongside the capability of the machine.
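As a concrete illustration, here is a toy sketch of that idea (not the paper’s actual algorithm; the architecture, random data, query cost and the simulated human below are all made up for the example). The classifier and the decision to defer to a human are trained together, with a fixed cost charged whenever the human is consulted:

```python
# Toy sketch of jointly training a classifier and a "defer to human" decision.
# Everything here is illustrative; it shows the shape of the idea only.
import torch
import torch.nn as nn

class TeamModel(nn.Module):
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.classifier = nn.Linear(n_features, n_classes)  # the machine's own prediction
        self.defer = nn.Linear(n_features, 1)                # how likely to hand off to the human

    def forward(self, x):
        return self.classifier(x), torch.sigmoid(self.defer(x))

def team_loss(logits, p_defer, y_true, y_human, query_cost=0.1):
    """Expected team loss: with probability p_defer we pay the human's error
    plus a query cost, otherwise we pay the machine's cross-entropy loss."""
    machine_loss = nn.functional.cross_entropy(logits, y_true, reduction="none")
    human_loss = (y_human != y_true).float() + query_cost
    p = p_defer.squeeze(1)
    return (p * human_loss + (1 - p) * machine_loss).mean()

# Random data plus a simulated human who is wrong 5% of the time.
torch.manual_seed(0)
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))
y_human = torch.where(torch.rand(256) < 0.05, 1 - y, y)

model = TeamModel(20, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    logits, p_defer = model(X)
    loss = team_loss(logits, p_defer, y, y_human)
    opt.zero_grad()
    loss.backward()   # backpropagation updates prediction and deferral together
    opt.step()
```

Because the deferral decision and the classifier share one loss, gradient descent naturally spends the model’s limited capacity on the instances it expects to keep for itself.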

Think of it as a formal way to teach a machine to make the trade-off between what it needs to learn to do itself to be accurate at a task and what it should leave to the human, because the human is inherently better at it.

This is a totally different mindset as well as technical approach.

  • it optimizes the combined performance of the human-machine system, increasing the overall accuracy of the task.

  • joint training allows for smaller models and helps a model focus its limited predictive ability on the most important regions (of hyperspace), while relying on humans’ strengths where the AI can afford to be less accurate.

  • it opens up a new design opportunity: tuning for asymmetric loss, where, for example, the impact of a false negative is much greater than the impact of a false positive.

The researchers tested their joint training approach on two problems: identifying galaxies using citizen science and diagnosing metastatic breast cancer in pathology slides.

For the star gazing application, joint models that optimize for complementarity uniformly outperformed fixed models, by anywhere between 10 and 73%. For the cancer task, improvements were up to 20%.

But the math allows the researchers to go further and explore which factors are most influential. A smaller-capacity model has more potential bias (it represents less complex hypotheses and can’t fit the “truth” as well), so there has to be a “tighter fit” between training and team performance. In theory, it’s possible to just build more complex models and throw more data at the problem, but in practice this increases the risk of overfitting, so having simpler, better-performing models is more useful. This approach helps with tuning for this trade-off and hints at new ways to value human expertise in training and developing AI.

The value of complementary training is especially high when there is an asymmetry between error costs. In cancer diagnosis, for example, a false negative is much worse than a false positive: missing a cancer that’s present is far worse than subjecting someone to unnecessary intervention and anxiety. This technique allowed the researchers to show that a combined system is particularly valuable when these costs are asymmetric: the gap between the joint and fixed models grew as the asymmetry grew. This finding should surely have a big impact on algorithmic design in medical systems.
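A back-of-the-envelope illustration (the numbers here are entirely made up, not values from the paper): once error costs are asymmetric, the point at which handing a case to a human is worth it shifts.

```python
# Illustrative only: made-up costs and probabilities.
cost_fn, cost_fp = 10.0, 1.0   # a missed cancer costs 10x a false alarm
p_cancer = 0.08                # machine's predicted probability for one slide

# Expected cost if the machine acts alone and picks the cheaper call:
machine_cost = min(p_cancer * cost_fn,        # predict "negative": risk missing a cancer
                   (1 - p_cancer) * cost_fp)  # predict "positive": risk a false alarm

# Expected cost of deferring to a pathologist who misses 1% of cancers,
# plus a fixed cost of 0.2 for their time:
human_cost = 0.01 * cost_fn * p_cancer + 0.2

print(machine_cost, human_cost)  # 0.8 vs ~0.21: with asymmetric costs, deferral wins
```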

Humans and machines make different kinds of mistakes. This research was able to identify and quantify this effect because there was a very clear structure to the human error. In one experiment, a large portion of the human errors were concentrated in a small portion of the instances, identified by only two features. The joint model could then prioritize this region. While this came at the expense of lower accuracy in a different region, that was where the human had almost perfect accuracy, so the overall effect was still an improvement.

The distribution of errors incurred by the joint model shifts to complement the strengths and weaknesses of humans.

There is now a mathematical way to train AI to take in a human’s knowledge and skill as the AI learns. This means that AI practitioners can now optimize human-machine team performance even when interactions between humans and machines extend beyond querying people for answers, say in settings with complex, interleaved interactions that include different levels of human initiative and machine autonomy.

We see opportunities for studying additional aspects of human-machine complementarity across different settings.

This is a huge step scientifically but it’s one that’s also important psychologically. Now human-centered AI can be expressed in a way that machines truly understand. That should have everyone excited.


A lot has happened this week so I’m skipping the extras.
