The Black Mirror of Clearview AI

If Clearview is normalized, we're not who we think (or hope) we are

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. So we need to talk more about the power than the tech.

If you haven’t signed up yet, you can do that here. If you like this post, share :)


Before the NYT broke the story on Clearview AI, facial recognition was seen by most people as a natural extension of tech in general. It added convenience or it was used by trusted parties to keep us safer.

If you’ve got nothing to hide, you’ve got nothing to fear. But this could not be more wrong.

Facial recognition has been creeping up on us for a while now. We love facial recognition on our phones but our phone is a database of one. We’ve supplied our faces to government for good use - driver’s licenses, passports and Global Entry are obvious examples. But our faces-as-data have bled from this narrow, context-dependent use and have been repurposed for use in other databases where facial recognition technology is routinely deployed.

Facial recognition is a unique technology. There is no other technology that so fundamentally breaks the concept of anonymity in a public space. In public, we take for granted that we have a right to choose who to introduce ourselves to. Others around us have limited capacity to perceive, characterize and remember us. Facial recognition destroys the obscurity we rely on. When we lose obscurity we lose our choices about how much of ourselves to show to others.

Nothing (outside of Yandex, a Russian image search engine) comes even close to Clearview AI. It’s a search engine for faces. And the company poses a material threat to who we are, who we think we are and who we hope to be. All the while, the company asks us to “just trust them.” “Trust” is…

…Why I’m here on TV explaining all these things and that’s why we meet with a lot of the people in government. - Hoan Ton-That, CEO of Clearview AI.

I’m not sure what he was explaining in the interview. Accuracy perhaps? Apparently Clearview AI is 99.6% accurate. Accuracy is a core component of trustworthiness in AI. If the AI is not accurate enough, or if its accuracy is uneven and biased against certain groups, that’s a reason to worry. In the case of Clearview AI, the high accuracy itself is a reason to worry. The higher the accuracy, the lower the chance of escaping it.

When this system is misused, the potential harm rises with its accuracy. And if it’s everywhere and covers everyone, then history says it will be misused. Even if the misuse isn’t malicious, even if it is simply the gap between a developer’s intent and an unintended consequence, the results for individual obscurity over the course of our lives could be catastrophic.

Our faces are central to our identity. We must treat them as inalienable in the true sense of the word. But since we cannot (and do not want to) hide our faces to protect them from surveillance, there is only one way to assure that their inalienable quality is respected. This can only be done if we ban facial recognition. - Dorothea Baur

The power imbalance is extreme. In his CBS interview, Ton-That asserts that the software will not be available to the public “while he is still at the company.” Now our privacy hinges on one individual’s personal assurance and employment tenure. The leverage and amplification effect of AI is evident - one person in one company offering one product has more influence than the billions of people on whom the product acts. What happens when Ton-That decides that his product is far more lucrative as a 24/7 real-time surveillance tool than as an “after-the-fact” search tool? What happens when he and his team then decide what is “right” and what is “wrong”? What new norms for “something to hide” could appear?

Clearview AI’s pedigree matters; Peter Thiel is an early investor. Thiel backs “big state” AI surveillance companies like Clearview and Palantir because he thinks the best way to protect us without becoming a police state is “to give the government the best surveillance tools possible.” Thiel believes that freedom and democracy are incompatible, which is why we see him funding companies like Clearview: democracy can’t be relied upon to keep technology progressing and markets growing.

A recent profile of Thiel’s ideological positioning puts it this way: “‘Progress’ is always aligned with technology and the individual, and ‘chaos’ with politics and the masses.” So Thiel backs technology that makes governments stronger, less chaotic, less democratic, less diverse and more likely to spend money on technology that preserves a techno-social power structure in which individuals buy into the idea that the only role of government is public safety, and that the best way to be safe is machine surveillance. Welcome to the messy masses under machine management.

A strong centralized state can restore order, breed progress, and open up new technologies, markets, and financial instruments from which Thiel might profit. And as long as it allows Thiel to make money and host dinner parties, who cares if its borders are cruelly and ruthlessly enforced? Who cares if its leader is an autocrat? Who cares, for that matter, if it’s democratic? In fact, it might be better if it weren’t. - Intelligencer

This story highlights how reliant we are on the big tech platforms to protect us. The many billions of images scraped from social media as “public data” may be technically public, but their extraction infringes on the T&Cs of Twitter, Google, YouTube and Venmo. Clearview has been served cease-and-desist letters, but this doesn’t mean that the images the company already holds will be deleted.

If you want to know whether they have your face and would like the images deleted, you can ask, but only if you live in Illinois, where the Biometric Information Privacy Act will likely give you grounds to request that your data be erased. Be prepared to supply a government-issued photo ID with your application.

State authorities are responding. The New Jersey Attorney General called for a temporary ban, saying that while he wasn’t against facial recognition per se, he was directing that “all law enforcement agencies in New Jersey stop using Clearview’s technology until we get a better handle on the situation.” Thank you, NJ.

This is truly a Black Mirror moment. Clearview may have inadvertently exposed what state surveillance is about in the age of AI. Under the guise of safety, we lose our right to be obscure, to be anonymous and thereby our right to a future of our own determination. As Shoshana Zuboff says in Surveillance Capitalism, “the real psychological truth is this: if you have nothing to hide, you are nothing.”


Also this week:

  • An article in Sonder Scheme on how to think about personalization versus over-personalization. Along with the essay, I’ve included a helpful graphic on The Good and Bad of Personalization.

  • A must-check-out - Facebook finally released its Off-Facebook Activity tool, so you can now discover who sells your data to Facebook. It’s really worth a look. In an informal (and tiny) survey this week, we found that each user averaged around 100 companies selling their personal data to Facebook for ad targeting. It’s easy to check - go to your Facebook newsfeed settings, then through Settings & Privacy, scroll down to Off-Facebook Activity, log in (a security feature) and then click on the brand icons. More details and discussion here.

  • Interesting perspective in Slate on AI ethics officers and how their presence won’t work because it implies that everyone else can’t be ethical.

  • Great interview between Stuart Russell and James Manyika from McKinsey. “How does one person improve the life of another? We know there are people who can do it. But, generally speaking, there’s no how-to manual. There’s no science. There’s no engineering of this. We put enormous resources, in the trillions of dollars, into the science and engineering of the cell phone, but not into the science and engineering of how one person can improve the life of another.” 

  • Soul Machines and the NZ Police are experimenting with digital police officers as part of a strategy of digital tech enabling accessibility and inclusion. Check it out in the NZ Herald. If you like listening to a broad kiwi accent, you’ll want to put this on infinite loop.


Google's Super Bowl ad

Building loyalty when you're a monopoly

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here. If you like this newsletter, please share, especially on LinkedIn and Twitter.


This past weekend’s Super Bowl might be remembered more for Google’s tearjerker of an ad than for the game. In it, “a man reminisces about the love of his life with a little help from Google,” according to the company’s description. The ad struck a chord. It was voted “best ad” in the Super Bowl lineup. Twitter was flooded with the tears of hard-core football fans who, “damn it, don’t like to cry during games.”

What is the strategy behind this storytelling?

Picking the ad apart, it works on two levels. On one level it shows how AI can help someone remember. The assumption made by almost everyone (according to social media commentary) is that the man suffers from dementia. Without coming out and blatantly saying, “Google can help your grandfather with his dementia,” the ad shows that Google Assistant can be a great memory jogger.

Highly emotional ads are more engaging and memorable. If the comments on YouTube are anything to go by, this one has been successful at creating a strong emotional bond with Google, a monopoly that provides a utility service and is keen to stave off regulation by building loyalty when it needs it most.

We also know that the company’s goal is to harvest data and use it to make predictions to make more money on ads. You can spin the ad around…

Loretta used to hum showtunes.

How about an ad for tickets to Broadway or a DVD or a streaming service?

Loretta’s favorite flowers were tulips.

Or maybe flower delivery, wall art of tulips or a trip to Amsterdam. Do you feel better about ads now that Google’s ad is so “powerful”?

“Little things” are powerful. Which gets to the other idea in the ad. What is the subtext about the nature of this “help”?

Outsourcing cognitive, moral and physical processes to technology comes with existential consequences - increased passivity, decreased agency, increased detachment. Outsourcing emotional work to a machine comes with the same consequences, but with an added twist. The promise of ever-more efficient ways for tech to support us through the human journey is the promise of “cheap bliss;” we are seduced into thinking that the hard work of being human is somehow avoidable.

Our most important emotional work is also work that can never be outsourced - love, grief, despair, delight. We need to remain fluent in these most human of experiences. We have to experience them to make sense of them. We need the struggle of working with others to go from *now* to *future.* The idea that even a small component of grief can be outsourced to a machine, one whose sole purpose is to turn that information into fodder for more predictable ad clicks, is misleading advertising in the extreme.

It’s a bit depressing to see how easy it is to manipulate us with a good tearjerker. It’s a blatant move to harness emotions and a fast route to build demonstrable loyalty. In the world of monopoly businesses, it is a recognized strategy to head off critical oversight and regulation. Using vulnerable human moments and fallibility, Google showed just how easy it is to keep the power balance exactly where it wants it because almost everyone embraced it.

The true story behind the ad is a real human story that we can and should embrace with compassion. But the ad is the company and it manipulates our capacity for empathy. It is designed to make us complicit and passive in the harvest of increasingly personal behavioral data for others’ profit.

Tearjerker ads are an easy strategy for monopolies looking to build an emotional connection with captive customers. This Telecom NZ ad from many years ago is a melancholic masterpiece. I guess this is one movie we have seen before.


Also this week:

  • A video from Wuhan showing drones instructing people to go back inside. I’ve watched this video many times and each time it triggers a level of cognitive dissonance - is this real or fake? What happens when coronavirus is under control enough for people to go back to “normal” life? Is this the new normal in China?

  • Article in the NYT about facial recognition going live in a school district, despite some people trying hard to stop it. Plus the announcement of a House committee hearing on the use of the technology by Homeland Security.

  • Great long read from Vice about ClassPass and the use of platform algorithms in the fitness industry. Classic “frenemy” story and “Uberification” of everything.

  • Article from Protocol about technology companies and ethics and practical barriers to change.

Ding-Dong

Why we need to keep talking about Ring and "plug-in surveillance."

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here. If you like this newsletter, please share, especially on LinkedIn and Twitter.


The Electronic Frontier Foundation recently published the results of a study into Ring’s surveillance of its customers. If you are a Ring customer and use an Android phone, your Ring app is packed with third-party trackers that send out a whole range of personally identifiable information to a whole bunch of analytics and data-broker-type companies. The data may include your name, mobile carrier, private IP address and sensor data (such as settings on the accelerometer in your phone).

For example, Facebook receives an alert when the app is opened; the alert includes time zone, device model, language preferences, screen resolution and a unique identifier that persists even if the user resets certain settings. Facebook receives this even if you do not have a Facebook account.

Branch receives a number of unique identifiers, such as fingerprint identification information and hardware identification data, as well as your device’s local IP address, model and screen resolution. Branch describes itself as a “deep linking” platform. Deep linking, in the context of a mobile app, uses an identifier that links to a specific location within a mobile app rather than simply launching the app. Deferred deep linking allows users to deep link to content even if the app is not already installed. For advertisers and data brokers it’s important because it acts like a backdoor to specific content (say, a particular product) when someone doesn’t have the relevant app already installed. Ring sells device information so that Branch can perform this function behind the scenes.
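
To make the mechanism concrete, here is a minimal Python sketch of deferred deep linking. It is only an illustration under assumptions of my own: the match_key fingerprint, the PENDING_LINKS store and the example URLs are hypothetical and are not Branch’s actual API. The point is why a linking service wants stable device identifiers such as model and screen resolution: they are what tie a pre-install click to a post-install launch.

```python
# Hypothetical sketch of deferred deep linking; names and URLs are illustrative only.

PENDING_LINKS = {}  # fingerprint -> in-app destination, held until first app launch


def match_key(device):
    """Build a coarse fingerprint from the kind of device data described above:
    local IP address, model and screen resolution."""
    return (device["ip"], device["model"], device["screen"])


def handle_link_click(device, destination, app_is_installed):
    """Decide where a click on a shared link should go."""
    if app_is_installed:
        # Direct deep link: open the app at the specific screen.
        return f"exampleapp://{destination}"
    # Deferred deep link: remember the destination for this device,
    # then send the user to the app store to install the app first.
    PENDING_LINKS[match_key(device)] = destination
    return "https://play.google.com/store/apps/details?id=com.example.app"


def on_first_app_launch(device):
    """After install, the app asks the linking service whether this device had a
    pending destination; the fingerprint ties the two visits together."""
    return PENDING_LINKS.pop(match_key(device), "home")


device = {"ip": "10.0.0.5", "model": "Pixel 3a", "screen": "1080x2220"}
print(handle_link_click(device, "product/123", app_is_installed=False))
print(on_first_app_launch(device))  # -> "product/123"
```

Seen this way, the device data the Ring app shares is exactly the raw material this matching step runs on.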

AppsFlyer also receives data, which it uses as part of its offer to marketers. The company specializes in marketing attribution. AppsFlyer can come preinstalled on a low-end Android device - something called “bloatware” - where it is used to offset the cost of the phone by selling consumer data. This practice disproportionately affects low-income consumers because they tend to buy the cheapest phones.

The most data goes to Mixpanel, a business analytics service company. It tracks user interactions with web and mobile applications and provides tools for targeted communication with them.

So what’s the “so what?” We know this kind of tracking happens and we’ve certainly come to expect it with Android phones. What’s new here is that Ring is surveilling the surveillers. In the most extreme case, Ring shares your name, email address, your device and carrier, unique identifiers that allow these companies to track you across apps, real-time interaction data with the app, and information about your home network. This doesn’t seem to match the level of trust that Ring customers would expect. It feels like a fundamental fracture of the mental model a customer should have about Ring.

Perhaps a bigger concern is the growth and extent of “plug-in surveillance.” City-wide plug-in surveillance is experiencing huge growth. Think of it like a public/private mash-up of video surveillance, advanced video analytics and automation. The US has tens of millions of connected cameras and is projected to rival China’s per person camera penetration rate within a few years.

By pooling city-owned cameras with privately owned cameras, policing experts say an agency in a typical large city may amass hundreds of thousands of video feeds in just a few years. - Michael Kwet

Of course, this scale raises the question: what do you do with all this footage once you have it? The answer is AI - sophisticated video analytics that can overlay footage of events happening at different times as if they are appearing simultaneously. Once this summarization is done, more AI can be applied, in particular behavioral recognition techniques such as fight detection, emotion recognition, fall detection, loitering, dog walking, jaywalking, toll fare evasion, and even lie detection. These systems can track individuals across a network of connected systems and single people out in highly automated ways.

People who say they aren’t worried about AI surveillance because they aren’t doing anything wrong often fail to understand what “doing something wrong” might mean in the modern world of plug-in surveillance. It’s not only that bias is a known problem, it’s not only that the science of behavioral analytics cannot always be justified, it’s not only that a lack of accountability for decision and action in AI systems is a real cause of harm, it’s also that these private surveillance systems have a strong incentive to share data with third-party data networks in an opaque and privacy-invasive way. I’m speculating here but, in theory, surveillance can be extended right to the edge of the network - someone’s phone, where AI can find patterns outside of human perception and consciousness.

It’s beyond most people’s capability to understand how all these systems fit together. The opaque, obscure, non-intuitive and inscrutable nature of systems used in public spaces could get worse. The design of third-party data networks and the capability to plug systems together mean that “people are viscerally kept from their data” (h/t John Havens).

It starts to feel like our societies are biased against humans.


Also this week:

  • The everyday existential risks of AI - a Sonder Scheme article.

  • ICYMI, anti-virus software that collects and sells your every click: 'Every search. Every click. Every buy. On every site.' After this Vice story, the company announced it will be winding the service down. Privacy is alive, journalism functions, public accountability works. The story is worthy of your time.

  • Sundance movie Coded Bias. “Corporations & governments are deploying #AI in many ways, but they have failed to ensure technologies work for All. AJL United’s work to expose the resulting harms is featured in Shalini Kantayya’s film Coded Bias at Sundance.”

  • Facebook has settled a privacy lawsuit over facial recognition. Thanks, Illinois, which, along with California, is among the most progressive states in regulation responding to the unique risks and harms of AI.

  • Fairness, Accountability and Transparency (FAT 2020) conference proceedings. The exponential increase in interest in this conference demonstrates how much people are invested in something that used to be a fringe topic.

  • Back in the day, when I was working on nodal pricing in electricity markets, we used to joke about using similar technology for peak pricing in city parking. While cities have experimented for a while, the AI version is now here. The Wall Street Journal (paywall) reports on a new company, SpotHero, that will adjust parking prices based on predicted demand. More consumer products whose prices fluctuate in real time could be on their way.

Putting brakes on facial recognition & surveillance

How should we think about this super-convenient yet dystopian tech?

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here. If you like this newsletter, please share, especially on LinkedIn and Twitter.


America is experiencing cognitive dissonance over facial recognition. The convenience of using our face for the basics - check-in, border crossings, unlocking our phones - is fantastic. But, in the land of the free, suddenly everyone has realized that it might mean we’re actually living in just another surveillance society. Maybe we’re more sensitive now, having missed for so long that surveillance capitalism was driving our every digital move without most people having a clue. Maybe it’s the Hong Kong protests, where facial recognition has played a key role in the police response and what people do to avoid identification is, quite literally, in our faces. Maybe it’s that, when it comes down to it, privacy is not dead after all.

Facial recognition technology is deeply concerning and we should be putting the brakes on for a bit. While digital surveillance has grown exponentially, facial recognition hasn’t followed quite as far yet, but it’s close. This doesn’t necessarily mean that an outright ban is the right answer, although there’s certainly a case to be made for one. Brakes could be self-imposed by companies, by local governments or by individual choice.

As with every AI, the validity of its use is incredibly context specific. Facial recognition has spread with very few people realizing that it is a unique technology. Your face is you, readable and identifiable and everywhere in a way that your fingerprint or gait or other forms of identification just aren’t. You’ve even contributed to making your biometric identity easy for anyone, anywhere to use.

Last week, the NYT broke a story about Clearview AI, a startup that has amassed billions of photos scraped from social media. It uses them in an application marketed to law enforcement. This is against the terms and conditions of social networks. Twitter responded with a cease and desist letter but the damage is done — the images have already been scraped. Perhaps the silver lining is that everyone who has ever put up a selfie on Facebook now understands how porous social media really is. Realizing that a social media selfie could end up in a police mugshot is what’s known as “context collapse” and leaves one with a very specific sensation of digital dirtiness.

Amazon’s Ring doorbell doesn’t have facial recognition. Yet. Through Ring, Amazon has created a public/private surveillance net across the US, with little or no democratic input. Ring shows us something crazy about ourselves — that we can assimilate the violent and terrifying with the cute, funny and frivolous. Ring has become a content platform in itself — a TV channel and a new breed of America’s Funniest Home Videos. Who doesn’t want to watch a cat stalk a raccoon or a bear steal chocolate?

How are we meant to think about Ring? It helps prevent delivery theft. It has helped solve crime. People feel more secure. But it also breaks public/private privacy in a very particular way. In our homes, we have a right to privacy. In public, we have to expect that we don’t. But this hasn’t mattered because privacy in public is via obscurity — we don’t expect to be watched. AI breaks down obscurity because now our every move can be watched — by a machine. Ring goes one step further because now, permanent digital surveillance can be literally everywhere, including next door to you. And it is just so, well, ordinary.

But here’s another interesting thing that makes Ring like no other surveillance system. On the inside of the door, you are the user of “luxury surveillance,” while on the outside of the door, you are the victim of “imposed surveillance.” In an essay for the architectural publication Urban Omnibus, Chris Gilliard makes this distinction stark by comparing the experiences of the wearer of an Apple Watch to those of someone forced to wear a court-ordered ankle monitor.

In the digital age, however, the stratification between surveiller and surveilled — and between those who have and don’t have agency over how they are surveilled — is felt beyond the scale of wearable devices. - Chris Gilliard

This is how we need to think of Ring — that now, whenever we leave our houses, we move from surveiller to surveilled. We have to think like someone’s watching and that we have no control over what they see or how they decide to interpret what they see. Was that a hand wave or a threatening gesture? Was that an ironic, playful slap or an indicator of domestic violence? What are those teenagers up to? Should the police be called?

And all those scary and close-call videos intertwined with comedy and cute? The juxtaposition only serves to reinforce the marketing message — that we can only be safe if we are surveilled. The Ring ecosystem is also biased — it skews paranoid, which means it’s only a matter of time until Ring adds facial recognition.

Concerns over facial recognition and surveillance have grown together. They feed off each other. Our expectations of privacy, and our mental model of the constraints that designers put on its use, matter. We aren’t worried about facial recognition on our phones. We love how facial recognition means we get more photos of our kids from summer camp...but only as long as it’s photos that counsellors take in group activities. We think that perhaps it’s a good idea to have facial recognition in our kids’ schools…but only as long as it’s used for identifying people who have been banned from the building. We appreciate how efficient and effective it can be in policing…but we assume we will never be caught as an unfortunate false positive. When facial recognition makes something frictionless, we favor convenience over privacy, just as we do with many other things.

I’ve written before about how facial recognition is not designed for trust. This technology is a fulcrum — the power balance is extreme between the surveiller and the person surveilled. Who gets to choose when and how to see? What inferences are being made and who decides they are meaningful? What is the consequence of failure? Who bears the consequence? Who decides what to keep forever?

In a customer-facing application, one approach might be to start with the goal of making every user a “luxury surveillance” user rather than one of “imposed surveillance.” An Apple Watch gives the user, not just control, but a sense of intimacy. The device feels like an extension of your own body; a tiny, external mind. It’s a luxury to check in on yourself. A customer-facing app that uses facial recognition could be designed with similar intent, going beyond consent and control, and creating a connection with self which reinforces the customer not the company. Unfortunately, any system built on surveillance capitalism doesn’t put the individual’s values first.

It’s hard to see how we get the horse back in the barn without a revolution in how we see privacy. Current protections are the proverbial “knife to a gunfight.” The more we think about the deeper power dynamic and who wins if we fail to act, the more we will wonder who we are really protecting by not acting.


Also this week:

  • The California Sunday Magazine has published a series of articles on facial recognition. Included is a handy series of decision trees for figuring out how to avoid facial recognition. For easy reference, we put them on the Sonder Scheme blog.

  • Op-ed in the NYT from Shoshana Zuboff, author of Surveillance Capitalism, specifically focused on privacy and its relationship with human autonomy. A must-read for sure.

  • The Closing Gaps Ideation Game is an interactive ideation game created by the Partnership on AI to facilitate a global conversation around the complex process of translating ethical technology principles into organizational practice.

  • A new look for Google’s desktop search product “blurs the line between organic search results and the ads that sit above them.” Some early evidence suggests the changes have led more people to click on ads, says The Verge.

  • WEF has put together a toolkit for boards to help people ask the right questions about AI.

  • Fun piece on the Wall-E effect from The Next Web.


Machine employees in government

What does it mean if AI is doing the job of government employees?

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here.


In 2016, as Intelligentsia Research, we coined the term “machine employees” to describe AI in the workplace. Our goal was to differentiate traditional IT technologies from modern AI, where deep learning and other machine learning techniques were being deployed with the specific intent of taking on a decision-making role.

The concept of machine employees is important because, while they can substitute for humans in processes where people once evaluated, decided and acted, they aren’t held to the same standards as people. They can’t be easily questioned, sued or taxed. In short, machine employees aren’t accountable, but the humans that employ them should be. A human should be able to explain, justify and override a poorly performing machine employee.
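
What might that look like in practice? Here is a minimal Python sketch of the principle, under assumptions of my own: BenefitsModel, CaseWorker and the review threshold are hypothetical names, not any real government system or vendor API. The machine employee proposes a decision with its reasons attached, and anything low-confidence or adverse is routed to a person who can override it.

```python
# Hypothetical sketch of keeping a human able to question and override a machine employee.
from dataclasses import dataclass


@dataclass
class Proposal:
    outcome: str        # e.g. "approve" or "deny"
    confidence: float   # the model's confidence in that outcome
    reasons: str        # human-readable explanation, recorded so it can be challenged


class BenefitsModel:
    """Stand-in for whatever the vendor's model actually does."""

    def evaluate(self, case: dict) -> Proposal:
        score = min(case.get("hours_documented", 0) / 40, 1.0)
        outcome = "approve" if score > 0.5 else "deny"
        return Proposal(outcome, score, f"documented-hours score = {score:.2f}")


class CaseWorker:
    """A human reviewer who can accept, amend or override the proposal."""

    def review(self, case: dict, proposal: Proposal) -> Proposal:
        return Proposal(proposal.outcome, proposal.confidence,
                        proposal.reasons + " (held for human review)")


def decide(case: dict, model: BenefitsModel, worker: CaseWorker,
           review_threshold: float = 0.9) -> Proposal:
    """The machine employee proposes; low-confidence or adverse outcomes go to a person."""
    proposal = model.evaluate(case)
    if proposal.confidence < review_threshold or proposal.outcome == "deny":
        return worker.review(case, proposal)
    return proposal


print(decide({"hours_documented": 12}, BenefitsModel(), CaseWorker()))
```

The design choice is the point of the sketch: the model never gets the last word on an adverse outcome, and every decision carries reasons a person can inspect and contest.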

But everywhere you look, it seems there are more and more instances of machine employees that are poorly designed and deployed, with government services a particular concern. A recent essay in the Columbia Law Review by Kate Crawford and Jason Schultz from AI Now outlines where AI systems used in government services have denied people their constitutional rights. They argue that, much like other private actors who perform core government functions, developers of AI systems that directly influence government decisions should be treated as state actors. This would mean that a “government machine employee” would be subject to the United States Bill of Rights, which prohibits the federal and state governments from violating certain rights and freedoms, a risk that is particularly acute when services are provided to groups of people who are already at a disadvantage.

Here is the key question — are AI vendors and their systems merely tools that government employees use, or does the AI perform the functions itself? Are these systems the latest tech tool for human use, or is there something fundamentally different about them? If the intent of a machine employee is to replace a human employee, or to substitute for a significant portion of their decision-making capability, then our intuitions tell us it’s the latter.

There are horror stories about some of these government AI systems. In Arkansas, cerebral palsy patients and other disabled people have had their benefits cut by half with no human able to explain how the algorithm works. In Texas, teachers were subjected to inscrutable employment evaluations. The AI vendor fought so hard to keep its source code secret that, even in court, “only one expert was allowed to review the system, on only one laptop and only with a pen and paper.” In DC, a criminal risk assessment tool constrained sentencing choices for juveniles rated as “high risk,” displaying only options for treatment in a psychiatric hospital or placement in a secure detention facility, drastically altering the course of children’s lives. Perhaps the most egregious case is Michigan’s use of AI for unemployment benefit “robo-determination” of fraud. The system adjudicated 22,000 fraud cases with a 93% error rate. 20,000 people were subject to the highest-in-the-nation quadruple penalties, tens of thousands of dollars per person.

In public services, the bottom line is always to cut costs and increase efficiency. But when the “most expensive” populations are also the ones that require the most support because they are economically, politically or socially marginalized, and decisions about them get made by inscrutable, biased machine employees deployed by people who have not been trained or are themselves poorly supported, the potential for harm is high.

In all these situations, human employees were unable to answer even the most basic questions about the behavior of the systems, much less change the course of the outcome for individuals.

One advantage of government — i.e. public services and public accountability — is that we actually know this stuff now, thanks to the courts. But these are one-off cases and there’s no systematic way to protect against similar harms being inflicted on others. In fact, government procurement processes can make this worse; AI systems are increasingly adopted from state to state through software contractor migration. They can be trained on historical data from one state that isn’t applicable to another, without any consideration of the differences in populations. Patterns of bias can proliferate and even stem back to the intentions of one individual employee.

If good design (including explainability, accountability to humans and human-in-the-loop protections) and regulation both fail, we will need something in the middle. The idea that AI system developers are actually state actors — “government machine employees” — is potentially an important way to bridge the current AI accountability gap.


Also this week:

  • From the Sonder Scheme blog: Explainability and transparency are critical performance criteria for AI systems. Bias and fairness are increasingly top-of-mind, which raises the stakes for AI developers to be able to interrogate and understand their models. New research raises concerns about how explainability tools are being used in practice, with researchers finding failures in how they are applied.

  • A must-read piece from the NYT on Clearview.ai and facial recognition and its use in surveillance. Now everyone’s face is searchable using images taken from Facebook, Twitter, YouTube and Venmo, against stated company policy. This isn’t only about surveillance and privacy, it is also a test of whether big tech can self-regulate and stop the practices powering surveillance.

  • Terrific piece in the Boston Review from Annette Zimmermann, Elena Di Rosa and Hochan Kim on how technology can’t fix algorithmic injustice. This is totally worthy of your time.

  • Interesting reporting from The Telegraph (registration required) on Google’s bias-busting team, who get together to swear, curse and make racist and sexist remarks as an in-house way of teaching their AI not to respond to racist and sexist comments. I particularly liked this quote from the SF correspondent: “Trying to boil the prejudice out of this gigantic data stream feels like standing in the middle of a raging river trying to catch refuse in a net. I'm glad for the people who swear at Google, but I wonder how effective they can really be without some deeper, more fundamental realignment.”

  • Useful and intense resource on current state of AI ethics from the Berkman Klein Center for Internet and Society at Harvard University.
