Facebook "subsidizes" polarization

Hi! This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about artificial intelligence in the wild; how AI is being used to make and break our world. If you haven’t signed up yet, you can do that here.


Facebook’s algorithms cause political polarization by creating “echo chambers” or “filter bubbles” that insulate people from opposing views about current events.

But how does this happen inside Facebook’s ad delivery process? And is there an economic impact for campaign advertisers and for Facebook?

Researchers from Northeastern University and the University of Southern California published a paper this week which shows that Facebook essentially “subsidizes” partisanship.

The research team experimented by running real political ads on Facebook. While it's impossible to fully observe the algorithms from the outside, the experiments produced some striking results.

Facebook’s ad algorithms predict whether a user is already aligned with the content of an ad. If a user is likely to be aligned (say, a Democrat shown a Bernie ad), the algorithm predicts that the user will be more valuable to Facebook (more engagement, likes, shares, etc.) than a user who isn’t aligned (say, a Republican shown the same ad). This results in a “discount” for serving an ad to a “relevant” user.
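A toy model helps make the "discount" concrete. The function, the scores and all numbers below are illustrative assumptions, not Facebook's actual auction; the sketch only shows how a higher predicted-relevance score translates into a cheaper effective price:

```python
def effective_price(base_bid_cpm, relevance_score):
    """Toy model: the predicted-relevance score acts as a discount
    (or penalty) on the price charged per 1,000 impressions."""
    return base_bid_cpm / relevance_score

# Hypothetical scores: the platform predicts an aligned user
# (a Democrat seeing a Bernie ad) is more valuable than a
# non-aligned one (a Republican seeing the same ad).
aligned_price = effective_price(base_bid_cpm=20.0, relevance_score=1.6)
non_aligned_price = effective_price(base_bid_cpm=20.0, relevance_score=0.8)

print(round(aligned_price, 2))      # 12.5  (aligned users are cheap)
print(round(non_aligned_price, 2))  # 25.0  (non-aligned users cost more)
```

Under this sketch, the same bid buys roughly twice as many aligned impressions as non-aligned ones, which is the "subsidy" the researchers describe.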

This means it’s more difficult for a political campaign to reach a diverse audience. Broad campaigns aimed at wide audiences yield less accurate predictions of who is “relevant.” Ad delivery then slips further out of the control of the advertiser who is selecting for certain audience features, because Facebook’s algorithms have to infer preferences across a larger, more varied pool of users.

Counterintuitively, advertisers who target broad audiences may end up ceding platforms even more influence over which users ultimately see which ads, adding urgency to calls for more meaningful public transparency into the political advertising ecosystem.

Researchers were also able to demonstrate that Facebook’s platform is not neutral to the content of the ad. Through some technical sleight-of-hand, they ran a neutral ad (a picture of a flag, “get out to vote”) while leading Facebook to believe it came from a political site. This produced the same skew in both ad delivery and pricing, which means that ad delivery is not driven solely by user reactions. Rather than acting as a “neutral platform,” Facebook itself makes part of the decision.

This research has implications for restricting micro-targeting. There’s something inside Facebook’s algorithms that skews ad delivery based on Facebook’s own predictions, not the choices of the advertiser.

This selection occurs without the users’ or political advertisers’ knowledge or control. Moreover, these selection choices are likely to be aligned with Facebook’s business interests, but not necessarily with important societal goals.

It is also more expensive for a political campaign to deliver content to users with opposing views. Researchers ran Bernie ads and Trump ads against non-aligned audiences, with these results on the first day of the ad campaign:

Bernie ad → conservative users: $15.39 per 1,000 impressions, 4,772 users

Trump ad → liberal users: $10.98 per 1,000 impressions, 7,588 users

This effect persisted over the course of the campaigns. In one instance, by the end of the experiments, the liberal ad shown to the liberal audience was charged $21 per thousand impressions, while the conservative ad delivered to the same audience was charged over $40 per thousand impressions.
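Working through the end-of-campaign figures above makes the size of the penalty plain (the "over $40" figure is taken at its floor, so the real gap may be larger):

```python
# End-of-campaign CPM figures quoted above, same liberal audience
aligned_cpm = 21.0      # liberal ad -> liberal audience
non_aligned_cpm = 40.0  # conservative ad -> same audience ("over $40")

ratio = non_aligned_cpm / aligned_cpm
print(round(ratio, 2))  # 1.9: nearly double the price to reach opposing users
```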

While we can’t be sure of the precise nature of the algorithmic process, this research makes it clear that Facebook economically disincentivizes content that it believes doesn’t align with a user’s view. This sets up a “subsidy” from non-aligned content to aligned content.

When asked about the results of the research, Facebook said that’s how it’s supposed to work, disputing that there was anything novel in the work. “Ads should be relevant to the people who see them. It’s always the case that campaigns can reach the audiences they want with the right targeting, objective and spend,” according to Joe Osborne, a spokesman for Facebook.

That would be true if “relevant” were indeed relevant. In commercial advertising, “relevant” can be narrowly optimized.

But in political advertising, the same measure of “relevant” can distort the delivery of information. Part of the point of political advertising is to try to open up or change people’s minds by presenting them with alternative (perhaps less “relevant”) viewpoints. As the researchers point out, commercial advertising algorithms that optimize for inferred or revealed preferences can run counter to important democratic ideals.


Other things this week:

  • Still on Facebook, some research from their AI group on emotionally intelligent chat. Turns out that public social media is a bad place to get data for private chats because content occurs in front of large “peripheral audiences,” whereas messaging involves people sharing more intense and negative emotions through private channels. Sonder Scheme blog.

  • Latest AI Now report is out, here.

  • Retail surveillance - this interesting piece from Vice on Toys-R-Us reinventing itself as a customer surveillance company. Is this an inevitable consequence of Amazon having surveilled us online, leaving offline retailers little choice but to follow suit? And is it ethical to go this far?

  • The latest survey from McKinsey on AI adoption and our summary of highlights on Sonder Scheme.

  • Stuff article on the NZ police force’s new initiative for facial recognition surveillance. NZ is unique in its obligations to Māori under the Treaty of Waitangi, and Karaitiana Taiuru, Doctoral Researcher / STEAM and Property Rights Māori Cultural Adviser, has written a series of articles about Māori ethics, AI and data sovereignty, accessible here.

  • Article in Scientific American (metered paywall) on machine (and human) consciousness. Thought provoking.