Putting brakes on facial recognition & surveillance

How should we think about this super-convenient yet dystopian tech?

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here. If you like this newsletter, please share, especially on LinkedIn and Twitter.


America is experiencing cognitive dissonance over facial recognition. The convenience of using our faces for the basics - check-in, border crossings, unlocking our phones - is fantastic. But, in the land of the free, suddenly everyone has realized that it might mean we’re actually living in just another surveillance society. Maybe we’re more sensitive now, having missed for so long that surveillance capitalism was driving our every digital move without most people having a clue. Maybe it’s the Hong Kong protests, where facial recognition has played a key role in the police response and the lengths people go to in order to avoid identification are right there in our faces. Maybe it’s that, when it comes down to it, privacy is not dead after all.

Facial recognition technology is deeply concerning and we should be putting the brakes on for a bit. While digital surveillance has grown exponentially, facial recognition hasn’t spread quite as far yet, but it’s close. This doesn’t necessarily mean that an outright ban is the right answer, although there’s certainly a case to be made for one. Brakes could be self-imposed by companies, by local governments or by individual choice.

As with every AI, the validity of facial recognition’s use is incredibly context specific. The technology has spread with very few people realizing that it is unique. Your face is you: readable, identifiable and everywhere, in a way that your fingerprint, your gait or other forms of identification just aren’t. You’ve even contributed to making your biometric identity easy for anyone, anywhere to use.

Last week, the NYT broke a story about Clearview AI, a startup that has amassed billions of photos scraped from social media. It uses them in an application marketed to law enforcement. This is against the terms and conditions of the social networks. Twitter responded with a cease-and-desist letter, but the damage is done — the images have already been scraped. Perhaps the silver lining is that everyone who has ever put up a selfie on Facebook now understands how porous social media really is. Realizing that a social media selfie could end up in a police face search is what’s known as “context collapse,” and it leaves one with a very specific sensation of digital dirtiness.

Amazon’s Ring doorbell doesn’t have facial recognition. Yet. Through Ring, Amazon has created a public/private surveillance net across the US, with little or no democratic input. Ring shows us something crazy about ourselves — that we can assimilate the violent and terrifying with the cute, funny and frivolous. Ring has become a content platform in itself — a TV channel and a new breed of America’s Funniest Home Videos. Who doesn’t want to watch a cat stalk a raccoon or a bear steal chocolate?

How are we meant to think about Ring? It helps prevent delivery theft. It has helped solve crime. People feel more secure. But it also breaks public/private privacy in a very particular way. In our homes, we have a right to privacy. In public, we have to expect that we don’t. But this hasn’t mattered much, because privacy in public has rested on obscurity — we don’t expect to be watched. AI breaks down obscurity because now our every move can be watched — by a machine. Ring goes one step further because now permanent digital surveillance can be literally everywhere, including next door to you. And it is just so, well, ordinary.

But here’s another interesting thing that makes Ring like no other surveillance system. On the inside of the door, you are the user of “luxury surveillance,” while on the outside of the door, you are the victim of “imposed surveillance.” In an essay for the architectural publication Urban Omnibus, Chris Gilliard makes this distinction stark by comparing the experiences of the wearer of an Apple Watch to those of someone forced to wear a court-ordered ankle monitor.

In the digital age, however, the stratification between surveiller and surveilled — and between those who have and don’t have agency over how they are surveilled — is felt beyond the scale of wearable devices. - Chris Gilliard

This is how we need to think of Ring — that now, whenever we leave our houses, we move from surveiller to surveilled. We have to think like someone’s watching and that we have no control over what they see or how they decide to interpret what they see. Was that a hand wave or a threatening gesture? Was that an ironic, playful slap or an indicator of domestic violence? What are those teenagers up to? Should the police be called?

And all those scary and close-call videos intertwined with comedy and cute? The juxtaposition only serves to reinforce the marketing message — that we can only be safe if we are surveilled. The Ring ecosystem is also biased — it skews paranoid, which means it’s only a matter of time until Ring adds facial recognition.

Concerns over facial recognition and surveillance have grown together. They feed off each other. Our expectations of privacy, and our mental model of the constraints that designers put on its use, matter. We aren’t worried about facial recognition on our phones. We love how facial recognition means we get more photos of our kids from summer camp...as long as it’s only photos that counselors take in group activities. We think that perhaps it’s a good idea to have facial recognition in our kids’ schools...as long as it’s only used for identifying people who have been banned from the building. We appreciate how efficient and effective it can be in policing...but we assume we will never be caught as an unfortunate false positive. When facial recognition makes something frictionless, we favor convenience over privacy, just as we do with many other things.

I’ve written before about how facial recognition is not designed for trust. This technology is a fulcrum — the power balance is extreme between the surveiller and the person surveilled. Who gets to choose when and how to see? What inferences are being made and who decides they are meaningful? What is the consequence of failure? Who bears the consequence? Who decides what to keep forever?

In a customer-facing application, one approach might be to start with the goal of making every user a “luxury surveillance” user rather than one of “imposed surveillance.” An Apple Watch gives the user not just control but a sense of intimacy. The device feels like an extension of your own body; a tiny, external mind. It’s a luxury to check in on yourself. A customer-facing app that uses facial recognition could be designed with similar intent, going beyond consent and control, and creating a connection with self which reinforces the customer, not the company. Unfortunately, any system built on surveillance capitalism doesn’t put the individual’s values first.

It’s hard to see how we get the horse back in the barn without a revolution in how we see privacy. Current protections amount to bringing the proverbial knife to a gunfight. The more we think about the deeper power dynamic and who wins if we fail to act, the more we will wonder who we are really protecting by not acting.


Also this week:

  • The California Sunday Magazine has published a series of articles on facial recognition. Included is a handy series of decision trees for figuring out how to avoid facial recognition. For easy reference, we put them on the Sonder Scheme blog.

  • Op-ed in the NYT from Shoshana Zuboff, author of The Age of Surveillance Capitalism, specifically focused on privacy and its relationship with human autonomy. A must-read for sure.

  • The Closing Gaps Ideation Game is an interactive game created by the Partnership on AI to facilitate a global conversation around the complex process of translating ethical technology principles into organizational practice.

  • A new look for Google’s desktop search product “blurs the line between organic search results and the ads that sit above them.” Some early evidence suggests the changes have led more people to click on ads, says The Verge.

  • WEF has put together a toolkit for boards to help people ask the right questions about AI.

  • Fun piece on the Wall-E effect from The Next Web.


Machine employees in government

What does it mean if AI is doing the job of government employees?



In 2016, as Intelligentsia Research, we coined the term “machine employees” to describe AI in the workplace. Our goal was to differentiate traditional IT technologies from modern AI, where deep learning and other machine learning techniques were being deployed with the specific intent of taking on a decision-making role.

The concept of machine employees is important because, while they can substitute for humans in processes where people once evaluated, decided and acted, they aren’t held to the same standards as people. They can’t be easily questioned, sued or taxed. In short, machine employees aren’t accountable, but the humans that employ them should be. A human should be able to explain, justify and override a poorly performing machine employee.
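To make this concrete, here is a minimal sketch in Python of what “a human should be able to override a machine employee” could look like in practice. The benefits-determination model and its `predict` interface are entirely hypothetical stand-ins, not any real system: the point is simply that adverse or low-confidence decisions are never finalized automatically.

```python
from dataclasses import dataclass

@dataclass
class Determination:
    decision: str      # e.g. "approve" or "deny"
    confidence: float  # the model's own confidence, 0..1
    rationale: str     # human-readable summary of the key factors

def decide(case, model, review_queue, threshold=0.9):
    """Hypothetical benefits determination with a human-in-the-loop guardrail."""
    result = model.predict(case)  # `model` is a stand-in, not a real API
    # Adverse or low-confidence decisions always go to a human employee,
    # who can explain, justify and override the machine employee's call.
    if result.decision == "deny" or result.confidence < threshold:
        review_queue.append((case, result))
        return "pending human review"
    return result.decision
```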

But everywhere you look, it seems there are more and more instances of machine employees that are poorly designed and deployed, with government services a particular concern. A recent essay in the Columbia Law Review by Kate Crawford and Jason Schultz from AI Now outlines cases where AI systems used in government services have denied people their constitutional rights. They argue that, much like other private actors who perform core government functions, developers of AI systems that directly influence government decisions should be treated as state actors. This would mean that a “government machine employee” would be subject to the United States Bill of Rights, which prohibits the federal and state governments from violating certain rights and freedoms, a risk that is particularly acute when services are provided to groups of people who are already at a disadvantage.

Here is the key question — are AI vendors and their systems merely tools that government employees use, or does the AI perform the functions itself? Are these systems the latest tech tool for human use, or is there something fundamentally different about them? If the intent of a machine employee is to replace a human employee, or to substitute for a significant portion of their decision-making capability, then our intuitions tell us it’s the latter.

There are horror stories about some of these government AI systems. In Arkansas, cerebral palsy patients and other disabled people have had their benefits cut by half with no human able to explain how the algorithm works. In Texas, teachers were subjected to inscrutable employment evaluations. The AI vendor fought so hard to keep its source code secret that, even in court, “only one expert was allowed to review the system, on only one laptop and only with a pen and paper.” In DC, a criminal risk assessment tool constrained sentencing choices for juveniles rated as “high risk,” displaying only options for treatment in a psychiatric hospital or a secure detention facility and drastically altering the course of children’s lives. Perhaps the most egregious case is Michigan’s use of AI for unemployment benefit “robo-determination” of fraud. The system adjudicated 22,000 fraud cases with a 93% error rate, and 20,000 people were subjected to the highest-in-the-nation quadruple penalties, tens of thousands of dollars per person.

In public services, the bottom line is always to cut costs and increase efficiency. But when the “most expensive” populations are also the ones that require the most support because they are economically, politically or socially marginalized, and decisions about them get made by inscrutable, biased machine employees deployed by people who are untrained or themselves poorly supported, the potential for harm is high.

In all these situations, human employees were unable to answer even the most basic questions about the behavior of the systems, much less change the course of the outcome for individuals.

One advantage of government — i.e. public services and public accountability — is that we actually know this stuff now. Thanks to the courts. But these are one-off cases and there’s no systematic way to protect against similar harms being inflicted on others. In fact, government procurement processes can make this worse; AI systems are increasingly adopted from state to state through software contractor migration. They can be trained on historical data from one state that isn’t applicable to another, without any consideration of the differences in populations. Patterns of bias can proliferate and even stem back to the intentions of one individual employee.

If good design (including explainability, accountability to humans and human-in-the-loop protections) and regulation fail, we will need something in the middle. The idea that AI systems developers are actually state actors — “government machine employees” — is potentially an important way to bridge the current AI accountability gap.


Also this week:

  • From the Sonder Scheme blog: Explainability and transparency are critical performance criteria for AI systems. Bias and fairness are increasingly top-of-mind, which raises the stakes for AI developers to be able to interrogate and understand their models. New research raises concerns about how explainability tools are being used in practice, with researchers finding failures in how they are applied.

  • A must-read piece from the NYT on Clearview.ai and the use of facial recognition in surveillance. Now everyone’s face is searchable using images taken from Facebook, Twitter, YouTube and Venmo against stated company policy. This isn’t only about surveillance and privacy; it’s also a test of whether big tech can self-regulate and stop the practices powering surveillance.

  • Terrific piece in the Boston Review from Annette Zimmermann, Elena Di Rosa and Hochan Kim on how technology can’t fix algorithmic injustice. This is totally worthy of your time.

  • Interesting reporting from The Telegraph (registration required) on Google’s bias-busting team, who get together to swear, curse and make racist and sexist remarks as an in-house way of teaching their AI not to respond to racist and sexist comments. I particularly liked this quote from the SF correspondent: “Trying to boil the prejudice out of this gigantic data stream feels like standing in the middle of a raging river trying to catch refuse in a net. I'm glad for the people who swear at Google, but I wonder how effective they can really be without some deeper, more fundamental realignment.”

  • Useful and intense resource on the current state of AI ethics from the Berkman Klein Center for Internet and Society at Harvard University.


Facebook’s deepfake ban: a techno-solution distraction



This week Facebook announced a new policy banning deepfake videos. Facebook’s vice president of global policy management, Monika Bickert, said videos that have been edited “in ways that aren’t apparent to an average person and would likely mislead someone” and were created by artificial intelligence or machine learning algorithms would be removed under the new policy.

OK, this is good, but it’s nowhere near good enough, a sentiment echoed by many. It’s another example of Facebook’s business strategy being driven by its AI strategy, rather than a sign of any genuine change in the company’s approach to reducing the pollution level of information on the platform.

Deepfakes are definitely a concern but, as this paper from Data and Society points out, they are only part of the story. “Cheap fakes” use conventional techniques such as speeding, slowing, cutting, re-staging or re-contextualizing footage, and are far more accessible to the average person.

Deepfakes and cheap fakes exist on a spectrum, from the highly technically complex, requiring a lot of expertise, to methods requiring almost none at all. From most technical to least, according to Data and Society:

Deepfakes:

  • Virtual performances: Recurrent Neural Networks, Hidden Markov Models, Long Short Term Memory Models, Generative Adversarial Networks

  • Voice synthesis: Video Dialogue Replacement Models

  • Face swapping and lip syncing: FakeApp, After Effects

Cheap fakes:

  • Face swapping using rotoscoping: Adobe After Effects, Adobe Premiere Pro

  • Speeding and slowing: Sony Vegas Pro

  • Face altering and swapping, speed adjustment: free, in-app tools such as Snap

  • Lookalikes, relabeling and recontextualizing: relabeling of video and in-camera effects

It should be obvious how narrow and techno-centric this new policy will be in practice - it only captures the top of this list. And while malicious AI-generated content is dangerous, it isn’t inherently more dangerous than less sophisticated doctored media; cheap fakes can cause just as much havoc. One could argue that cheap fakes can be even more engaging: they grab attention because they are so clearly in the uncanny valley, because they are distinct, unusual, curious or amusing, because they play on confirmation bias, or because they incite a sense of urgency to act. And it’s engagement that matters for Facebook - amplification of engaging content is what the AI does.
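To see how low the bar for a cheap fake really is, here is a minimal sketch in Python using OpenCV (the file names are hypothetical). Re-writing a clip’s frames at half the original frame rate produces the kind of slowed-speech manipulation seen in widely shared doctored political clips, with no machine learning involved.

```python
import cv2

SRC, DST = "speech.mp4", "speech_slowed.mp4"  # hypothetical file names

cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

# Writing the same frames at half the frame rate doubles the clip's duration,
# i.e. plays it back at half speed. No ML, no expertise, just a few lines.
out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"), fps / 2, size)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```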

Bickert also testified this week before the Subcommittee on Consumer Protection and Commerce on manipulation and deception in the digital age. When questioned, she acknowledged that on many occasions Facebook is slow to act: slow to find malicious content, slow to get information to fact checkers, slow to remove it. And the inability of people to react as open content is amplified at immense speed and scale is the real problem. As Dr. Joan Donovan pointed out in her testimony: “the platform’s openness is now a vulnerability.”

Banning deepfakes is good - certainly a good technical challenge - but we shouldn’t kid ourselves that it is indicative of any real change regarding information safety. What’s really needed is for Facebook to decouple amplification of content from the content itself. Ultimately this is the only way to reduce the risks that come with the market in deceptive information and the attention economy.


Elsewhere this week:

  • Interesting research from Microsoft on how AI ethics checklists are best used and most commonly misused, covered in a Sonder Scheme article. An AI ethics checklist can act as a "value lever," making it acceptable to reflect on risks, raise red flags, add extra work and escalate decisions. It should not be used in a simple yes/no fashion that turns nuanced ethics into simple compliance.

  • Weird stuff at CES. Samsung unveiled its artificial humans, calling its Neons a “new form of life.” Oh, the hubris. This video of a CNET journalist interacting with one is worth a view. I couldn’t decide if the marketing people actually believe the schtick or whether they felt they just had to stick with it. Neons are nothing compared to Soul Machines’ digital humans. The company wrote this article in response to the fuss, calling Neons “digital puppets.”

  • Speaking of Soul Machines… the NZ company just raised $40m.

  • Super interesting read on Medium about YouTube’s ecosystem of far-right media. It’s not all about the recommendation algorithm - it’s a complex mix of social dynamics, celebrity and multiple algorithms.

  • Fascinating research from Google Health on the way expert medical practitioners work with AI. This is a bit of a “geek out” paper but it’s one of the most interesting pieces of research I’ve seen on how to think about human-machine collaboration in medicine. As AI assists medical decision making, the way models update and clinical practices change will be a lot more complex.

Predictions for AI in the 2020s

All predictions are wrong but some are useful. Hopefully these are useful!


We have big plans for the Sonder Scheme membership in 2020. We will be releasing resource kits for AI ethics, human-centered AI design and AI governance. There will be monthly “state of AI” research updates and similar reports for specific AI “areas of interest” where we take a deep dive in one particular area. Members also have access to our AI Masterclass and - new - the AI Strategy Workshop DIY kit. If you haven’t already, you can join here and use discount code Holiday2019 for 50% off 2020 membership until December 31 (make sure to click “have a coupon” and input the code to get the discount).

You can also check out our free AI Academy here, a short version of our Masterclass designed to get you quickly up to speed on AI basics (without spoiling the main event).

It would be huge if you could share this issue on social as I’d love to add subscribers and start 2020 with a bump.


This is the final Artificiality for the twenty-teens and it’s a bumper issue. We’re now heading into a decade where artificial and human intelligence start to merge in ways that are impossible to forecast. Having said that, it’s always a fun challenge to make some predictions, so here we go with a dozen predictions for 2020 and 2030.

We will demand to know how AI sees us

The next decade will see a fundamental shift in mindset. We will want to protect the sanctity of our inner lives from manipulation and surveillance by AI. We will think less about what we volunteer as inputs to AI and more about the outputs. Instead of controls and permissions, the next decade will be about AI inferences: what are our individual data “voodoo dolls”? We will consider our inferences to be sensitive personal data.

2020: A social media celebrity will publicly demand to know how a large tech company sees them. They will want to go beyond ad preferences and understand deeper characteristics such as typical behaviors, unstated preferences, personality and who they are seen to be.

2030: We will be able to adjust inferences made about us, especially behavioral and sensitive inferences. There will be tools and services that help us predict the value of our data “voodoo doll” to different entities and in different contexts.

Algorithmic anxiety will become a thing

More people will be affected by algorithmic exclusion — witness the recent Apple Card kerfuffle, when rich white people experienced inscrutable unfairness. Combined with privacy and surveillance concerns, people will experience a new stressor — anger and anxiety from poorly designed AI-enabled decision systems.

2020: The launch of a coaching and advocacy service for succeeding in AI-conducted job interviews.

2030: The DSM will explicitly recognize anxiety derived from repeated dehumanizing experiences with AI.

AI ethics is a top career path

AI ethics is latent — technology has needed ethical input for a long time. AI now makes this a practice and attracts diverse people and thinkers to technology, reinvigorating the humanities and changing how technologists consider the moral consequences of their work.

2020: A major university announces an AI ethics course that involves multiple faculties and disciplines — sciences, philosophy, gender/race/queer studies, as well as math and comp-sci.

2030: >80% of Fortune 500 companies have an Office of AI Ethics.

AI surveillance will be seen for what it is — uniquely invasive 

AI is highly privacy disruptive because it underpins new surveillance capabilities in the physical world. Video-based surveillance and facial recognition are on a collision course with our sense of freedom. While tech companies market the story that surveillance is the only way to be safe, individuals will not buy this once they feel personally threatened by the technology. China’s increasing use of the technology will give rise to a heightened sense of moral non-equivalency — freedom versus AI.

2020: The year we experience a backlash against neighborhood surveillance. Multiple cities will ban facial recognition in policing. Amazon will see its favored status as “most trusted tech company” ceded to Microsoft as a result of its Ring partnerships and hands-off approach to the uses of its facial recognition product, Rekognition.

2030: A patchwork of local laws and regulations around AI-based surveillance and facial recognition will finally result in a standard set of federal laws which protect individual rights and punish abusers of AI’s technological capabilities.

We will appreciate that bias is a two-way street

Data about the world is biased and, left unmitigated, AI amplifies and propagates this bias. AI exposes human bias and humans expose AI bias. Technical fixes for bias are effective but also expose how difficult it is to fill in gaps in datasets and handle situations where human bias is overwhelming. Bias will become a real-time problem for companies.

2020: Bias will be seen as inherent in AI and concerns will go beyond gender, race or other protected characteristics. Bias will be seen as a social issue and non-technical fixes prioritized equally with technical fixes. Data collection will be justified on the basis of filling data gaps.

2030: Technical fixes expose bias in real-time and AI will routinely direct humans and AI on how to fill data gaps. There will be professional certification for data scientists in the increasingly specialized field of bias and fairness mitigation.

AI’s presence or absence will have to be scientifically justified 

People will become hyper-aware of the scientific justification for using AI, especially for analyzing human behavior and in high-trust or high-stakes situations. 

2020: The first medical ethics case arising because AI wasn’t used in a diagnostic process, with resulting harm. A large employer will abandon the use of psycho-emotional AI analysis, citing lack of efficacy.

2030: AI exposes gaps in the science of human intelligence and directs research. AI is designed to work sympathetically with human cognitive biases — minimizing or maximizing when appropriate. Trust will be dependent on human-like explanations and involvement of a human when it’s important to establish causality.

How AI discriminates will be a game-changer

Fairness in AI is a complex issue because many standard ways of evaluating fairness can be conflicting. AI can be unfair without being illegal because it finds proxies for protected characteristics rather than using them directly.
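As a toy illustration of how these fairness criteria can pull against each other (the numbers here are invented purely for the example), a screening model can give two groups exactly the same selection rate while still selecting qualified members of one group less often:

```python
# Toy screening example with invented numbers: two groups, different base
# rates of "qualified" applicants, and a model that selects 40 from each.
total              = {"A": 100, "B": 100}
qualified          = {"A": 50,  "B": 25}
selected           = {"A": 40,  "B": 40}
qualified_selected = {"A": 40,  "B": 15}

for g in ("A", "B"):
    selection_rate = selected[g] / total[g]            # demographic parity
    tpr = qualified_selected[g] / qualified[g]         # equal opportunity
    print(f"group {g}: selection rate {selection_rate:.2f}, "
          f"qualified selection rate {tpr:.2f}")

# group A: selection rate 0.40, qualified selection rate 0.80
# group B: selection rate 0.40, qualified selection rate 0.60
# Demographic parity is satisfied, equal opportunity is not -- and fixing one
# typically breaks the other when the groups' base rates differ.
```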

2020: A state AG takes on a case where it’s suspected that AI proxies are the cause of unfairness or discrimination. 

2030: A new challenge arises around AI-powered stereotyping. A new sub-specialty of AI ethics emerges specifically to deal with how AI inferences and classifications cause micro-discrimination based on personalities, behaviors or preferences, e.g. an individual who is treated disparately in the workplace because they didn’t wear their smartwatch and so didn’t track their activity or location.

Transfer learning hits limitations: the “STI of AI”

Pre-trained models and open datasets supplied by the platforms (especially Google and Facebook) are increasingly effective and efficient for deploying AI at scale. But they are also seen as sources of bias or unexplainable outcomes that can “infect” derivative models, giving rise to untraceable behaviors or data labels that cannot be effectively excavated.
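The mechanism is easy to see in a minimal fine-tuning sketch (PyTorch/torchvision, assuming a recent version and the standard ImageNet-pretrained ResNet weights): the downstream model keeps whatever associations and gaps the upstream training data baked into the frozen backbone.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from weights trained on someone else's data (here, ImageNet).
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained parameters: any bias or blind spot encoded in the
# upstream dataset is now carried, largely untraceably, into the new model.
for p in backbone.parameters():
    p.requires_grad = False

# Swap in a task-specific head and fine-tune only that.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```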

2020: A major harm-causing AI failure due to transfer learning or a data issue which propagates in an unintended context.

2030: New fields of practice will be established in AI: Data Archeology and AI Forensics. Models are certified as “safe” or “clean.” It will be illegal to delete previous versions of socially significant AI as data labels and model parameters must be available for forensic examination.

Human learning and intelligence will go mainstream in AI research 

Leading AI researchers increasingly acknowledge the limits of current deep learning approaches, which means that human-analogous AI research will be a growing trend.

2020: The AI research field will explode with terms we have usually thought of as human. System 1 and system 2 thinking, curiosity, attention, intrinsic motivation and cause-and-effect reasoning will become mainstream AI research terms. Leading researchers match these human characteristics with tools for discovering causality based on out-of-distribution data or generalization using sparse factor graphs.

2030: By the end of the decade, there will be a significant breakthrough in the ability of AI to generalize and discover causal factors in data, which go well beyond the statistical associations of today.

We will better understand the learning cycle between humans and machines

Over the next decade the trend towards automating more and more of our lives will accelerate. However, AI will not be evenly applied. Companies that understand how to automate in ways that are beneficial to human skills and enhance human performance — rather than simply replace humans — will see outsize performance from AI. They will know how to access the flywheel of human-machine collaborative learning and knowledge discovery.

2020: AI leaders will shift their focus to designing jobs based on maximizing human and machine skills and tasks together. The first data will come in — companies that design for human-centered AI delegations will see better performance from both human and machine employees.

2030: Companies using AI in manufacturing will have fine-tuned how to design for the right balance of AI and human. >80% of goods are manufactured in places where no human makes a real-time decision. Humans focus on dealing with unpredictability and complex decisions that machines can’t yet make. 

Machine employees will speak for themselves

Today, machines at work have no voice or conscience, nor do they have any intent of their own, because their intent is deemed to be the same as that of the humans who deploy them. However, as people become more aware of the unintended consequences and counter-intuitive effects that can result from AI, they will need AI to explain itself. Machine employees will need to be capable of self-regulating for what humans intend rather than what humans asked for. Ultimately this will lead to more pressure on large-scale, socially significant AI to be able to communicate based on human values, not on a narrow specification set early in the process.

2020: An employee at a tech giant will leak information about how a non-intuitive and inscrutable algorithm caused harm to a group of users because it did what people specified but not what they meant.

2030: AI leaders will publish their machine employees’ “code of conduct” and explain how intent and alignment with human values are specified.

AI Safety will evolve to be a practical field of work

Hazards that arise from AI will become a significant threat. Adversarial attacks and deepfakes highlight how vulnerable humans are to uncontrolled AI.

2020: A deepfake causes an international incident. An autonomous drone adversarial attack spurs a backlash and highlights that robots-that-fly make communities uniquely vulnerable. Drones test the limits of the convenience/safety tradeoff. One city passes a delivery-centric no-drone ordinance.

2030: First students graduate from a specialized undergraduate degree in AI Safety.

I’m already looking forward to seeing how these pan out in the next ten years.

Have a great holiday and see you in 2020!


Our Sonder Scheme publication highlights from 2019:

  • AI can tell if your walk is deceptive.

  • The strange world of English pronouns and what it means for AI.

  • The Apple Card issue.

  • How to get your teen off Snap.

  • OpenAI’s scary and fascinating breakthrough.

  • How to think about the AI war between China and the USA.

  • Emotional AI has no basis in science.

  • Facebook’s AI chief doesn’t worry about AI domination because AI doesn’t have testosterone.

  • How people like AI to explain themselves.

A few must-know links from this week:

  • This landmark piece from NYT on the USA as a location-based surveillance state.

  • PBS show on AI. Nicely done. And following hard on its heels, a new YouTube Originals series called “The Age of AI” premiered on December 18. The first episode features Soul Machines (go kiwi). Trailer is here.

  • Vast new resource released this week by the Oxford Internet Institute’s Project on Computational Propaganda. It’s designed for anyone dealing with misinformation online.

  • 2019 report from Stanford’s Human-Centered AI project. Includes a handy tool for searching arXiv.

Facebook "subsidizes" polarization



Facebook’s algorithms cause political polarization by creating “echo chambers” or “filter bubbles” that insulate people from opposing views about current events.

But how does this happen inside Facebook’s ad delivery process? And is there an economic impact for campaign advertisers and for Facebook?

Researchers from Northeastern University and the University of Southern California published a paper this week which shows that Facebook essentially “subsidizes” partisanship.

The research team experimented by running political ads on Facebook. While it’s impossible to fully understand the algorithms, the research has produced some unique results.

Facebook’s ad algorithms predict whether a user will already be aligned with the content of the ad. If a user is likely to be aligned with the content (say, a Democrat shown a Bernie ad), the algorithm predicts that the user will be more valuable to Facebook (more engagement, likes, shares, etc.) than a user who isn’t aligned (say, a Republican who might see the same ad). This results in a “discount” for serving an ad to a “relevant” user.

This means it’s more difficult for a political campaign to reach more diverse users. Broad-based campaigns aimed at wide audiences yield less accurate predictions of who is “relevant.” Ad delivery then sits less under the control of the advertiser selecting for certain features in the audience, because Facebook’s algorithms have to infer broader preferences across more users.

Counterintuitively, advertisers who target broad audiences may end up ceding platforms even more influence over which users ultimately see which ads, adding urgency to calls for more meaningful public transparency into the political advertising ecosystem.

Researchers were also able to demonstrate that Facebook’s platform is not neutral to the content of the ad. Through some ISP-mastery, they put up a neutral ad (picture of a flag, “get out to vote”) but, at the same time, tricked Facebook into thinking it was an ad from a political site. This resulted in the same skew in both ad delivery and differential pricing, which meant that ad delivery was not solely driven by user reactions. Rather than acting as a “neutral platform,” Facebook itself makes part of the decision.

This research has implications for restricting micro-targeting. There’s something inside of Facebook’s algorithms that skews ads based on Facebook, not on the choices of the advertiser.

This selection occurs without the users’ or political advertisers’ knowledge or control. Moreover, these selection choices are likely to be aligned with Facebook’s business interests, but not necessarily with important societal goals.

It is also more expensive for a political campaign to deliver content to users with opposing views. Researchers re-ran Bernie ads and Trump ads with these results on the first day of the ad campaign:

Bernie ad —> conservative users: $15.39/1000 impressions, 4,772 users

Trump ad —> liberal users: $10.98/1000 impressions, 7,588 users

This effect was observable over the course of the campaigns. In one instance, by the end of the experiments, when the liberal ad was shown to the liberal audience, the campaign was charged $21 per thousand users; when the conservative ad was delivered to the same audience, the campaign was charged over $40 per thousand users.
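As a back-of-the-envelope illustration, using only the end-of-experiment figures reported above, the implied premium for delivering non-aligned content to the same audience works out to roughly 1.9x, which is the economic “subsidy” described below:

```python
# End-of-experiment figures reported above (CPM = cost per 1,000 users reached).
liberal_ad_to_liberal_audience_cpm      = 21.0   # aligned content
conservative_ad_to_liberal_audience_cpm = 40.0   # non-aligned content ("over $40")

premium = conservative_ad_to_liberal_audience_cpm / liberal_ad_to_liberal_audience_cpm
print(f"Reaching the same audience cost the non-aligned ad ~{premium:.1f}x more")
# ~1.9x, and at least that, since the conservative ad's true CPM was "over $40".
```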

While we can’t be sure of the precise nature of the algorithmic process, this research makes it clear that Facebook economically disincentivizes content that it believes doesn’t align with a user’s view. This sets up a “subsidy” from non-aligned content to aligned content.

When asked about the results of the research, Facebook said that’s how it’s supposed to work, disputing that there was anything novel in the work. “Ads should be relevant to the people who see them. It’s always the case that campaigns can reach the audiences they want with the right targeting, objective and spend,” according to Joe Osborne, a spokesman for Facebook.

That would be true if “relevant” was indeed relevant. In commercial advertising, “relevant” can be narrowly optimized.

But in political advertising, the same measure of “relevant” can distort the delivery of information. Part of the point of political advertising is to try to open up or change people’s minds by presenting them with alternative (perhaps less “relevant”) view points. As the researchers point out, commercial advertising algorithms that solve for inferred or revealed preferences can run counter to important democratic ideals.


Other things this week:

  • Still on Facebook, some research from their AI group on emotionally intelligent chat. Turns out that public social media is a bad place to get data for private chats because content occurs in front of large “peripheral audiences,” whereas messaging involves people sharing more intense and negative emotions through private channels. Sonder Scheme blog.

  • Latest AI Now report is out, here.

  • Retail surveillance - this interesting piece from Vice on Toys-R-Us reinventing itself as a customer surveillance company. Is this an inevitable consequence of Amazon having moved surveillance offline, leaving retailers with little choice but to follow suit? And is it ethical to go this far?

  • The latest survey from McKinsey on AI adoption and our summary of highlights on Sonder Scheme.

  • Stuff article on the NZ police force’s new initiative for facial recognition surveillance. NZ is unique in its obligations to Māori under the Treaty of Waitangi. Karaitiana Taiuru, a doctoral researcher and Māori cultural adviser on STEAM and property rights, has written a series of articles about Māori ethics, AI and data sovereignty, accessible here.

  • Article in Scientific American (metered paywall) on machine (and human) consciousness. Thought provoking.
