You are the product

July 9, 2021

A recent exposé by the investigative journalists at The Markup revealed how Facebook uses detailed information about what people do online – the websites they visit and the search terms they use – to allow pharmaceutical companies to target people regarding medical conditions in which they’ve shown an interest.

This marketing strategy builds on the fact that one of the first places people turn to when they learn or suspect that they or their loved ones might be sick is the internet.

The fact that platforms may know more about us than our doctors reflects an apparent paradox – even when public schemes like My Health Record face widespread public scepticism, online platforms are largely escaping scrutiny for amassing detailed portraits of all the health conditions with which we might be associated.

This information feeds a hugely important structural shift in the dominant model of advertising, towards advertising that’s becoming both increasingly pervasive and less accountable.

Proxy mechanisms in the mass media

The era of mass media is defined by a number of familiar communications channels – terrestrial television and radio, newspapers and billboards, and an array of broadcast media.

Since the mass media had few technological mechanisms for targeting specific groups of people, advertisers developed very rough proxies – concentrating ads for household products, for example, during daytime hours to reach homemakers (hence the term “soap opera”), or placing toy ads alongside Saturday morning cartoons.

The ads followed the content and, in some cases, its timing and geography. They were seen by large groups of people, and thus open to public scrutiny – and they often became the focus of concerns about stereotyping and predatory marketing tactics.

The ads, although privately controlled and administered (in many cases), remained – in an important sense – public.

We know the historical struggles that have taken place over racist and sexist forms of advertising – struggles that highlighted the role played by the advertising system in reinforcing particular sets of values and cultural assumptions.

As the media historian Michael Schudson puts it:

“Advertising, whether or not it sells cars or chocolate, surrounds us and enters into us, so that when we speak, we may speak in, or with reference to, the language of advertising, and when we see, we may see through schemata that advertising has made salient for us.”

Advertising, in other words, is not just the filler between the content – it is a form of content that plays an important role in reproducing social and cultural values.

The rise of consumer society – a dramatic social shift – would have been impossible without it.

It’s therefore crucially important that advertising be subject to public examination and discussion as part of our ongoing reflection on the society we live in, and how to build a better one.

Multiple reasons for accountability

This is perhaps the overarching reason for attending to advertising, although there are other important reasons for holding it accountable.

Advertising isn’t just about selling household products and services. It’s also used to rent or sell housing, to promote political candidates, and to recruit employees – and, in some cases, to discriminate in these areas by age, ethnicity, or gender.

The broadcast model of advertising and its associated problems haven’t disappeared. Just as broadcast TV and newspapers remain, so do their core approaches to advertising, albeit working in more sophisticated ways, with more detailed audience data.

Yet in digital contexts, the degree of consumer tracking has led to new advertising methods that upend the “publicness” of historical forms of advertising.

The rise of online advertising represents an epochal shift in advertising that invokes the spectre of new and powerful forms of discrimination that can be difficult to detect.

During the 2016 US presidential campaign, for example, Donald Trump’s digital strategy advisor, Brad Parscale, boasted about targeting African-American voters in swing states with ads claiming that Hillary Clinton had once described young black men as “super-predators”. (She was referring to gang members in a formulation that, while not explicitly racially coded, nonetheless led to a subsequent public apology on her part.)

The goal of this ad buy was not so much to gain voters for Trump, who had very low support among black voters, but to stop them from turning out to vote at all.

Because targeted advertising follows individual users rather than particular forms of content, and is viewed on a personal device, it was impossible for those who received the ads to know they were being singled out as part of a voter suppression campaign.

The same is true of other forms of online advertising. There’s no way to know whether, for example, a job ad one encounters while browsing the internet is being shown only to people of a certain age, ethnicity, or gender.

Indeed, Facebook had to pay a US$5 million fine after it was revealed that its ad buying system made it possible to discriminate in ads for housing, employment, and credit.

A range of measures is being developed to provide accountability for “dark ads”, so called because they’re ephemeral and targeted. Facebook has made many of the ads it serves available through its ad library, although this is of limited use because it provides only general information about how ads are targeted.

The NYU Ad Observatory tracks political advertising using volunteers who install a browser extension that captures ads served on Facebook. ProPublica developed a similar tool, which we have adapted to provide visibility into how individuals are targeted online.

Our tool, which anyone interested in contributing to the project can install on a Chrome browser, collects some basic demographic information so we can see how people are targeted by variables including age, gender, and location.

Anyone who installs it can also use it as a personal ad tracker to see how Facebook is targeting them over time.
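
For readers curious about the mechanics, here is a rough TypeScript sketch of how a content script in a browser extension of this kind might detect sponsored posts in a feed and pass them back to the extension for reporting. The selectors, heuristics, and message names are illustrative assumptions only; they are not the actual code of our tool, or of the NYU or ProPublica extensions.

```typescript
// content-script.ts — illustrative sketch only. Selectors, heuristics, and
// message names are assumptions for explanation, not the implementation of
// the Monash, NYU, or ProPublica tools.

interface CapturedAd {
  capturedAt: string;  // ISO timestamp of when the ad was observed
  advertiser: string;  // page name displayed on the sponsored post
  text: string;        // visible ad copy (truncated)
}

// Scan the rendered feed for posts labelled as sponsored. Real feeds obfuscate
// the "Sponsored" label, so a production tool needs far more robust heuristics.
function findSponsoredPosts(): CapturedAd[] {
  const ads: CapturedAd[] = [];
  document.querySelectorAll<HTMLElement>('[role="article"]').forEach((post) => {
    if (!post.innerText.includes('Sponsored')) return; // naive placeholder check
    ads.push({
      capturedAt: new Date().toISOString(),
      advertiser: post.querySelector<HTMLElement>('h4')?.innerText ?? 'unknown',
      text: post.innerText.slice(0, 500),
    });
  });
  return ads;
}

// Hand captured ads to the extension's background script, which could attach
// the volunteer's self-reported demographics (age bracket, gender, location)
// and forward everything to a research database.
function reportAds(): void {
  const ads = findSponsoredPosts();
  if (ads.length > 0) {
    chrome.runtime.sendMessage({ type: 'ADS_CAPTURED', payload: ads });
  }
}

// Re-scan periodically as the user scrolls and new posts load.
setInterval(reportAds, 5000);
```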

We pilot-tested this with 136 people to show how a tool like this might work, and even with that relatively small sample, we were able to demonstrate to people how their online behaviour shaped their ad environment.

One volunteer, for example, was targeted based on information she had been searching for online about her child’s medical condition. In the abstract, we know this is how online advertising works, but it can be confronting to see how detailed and comprehensive the monitoring and tracking is, and how readily behaviour we might not disclose publicly, such as drinking and gambling activity, serves as raw material for advertisers.

Making invisible patterns visible

Equally importantly, the tool allows us to see overall patterns that are invisible to individual users – how men might be targeted differently from women, or older people from younger people.
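
As a rough illustration of how pooled observations can surface such patterns, the sketch below (again with hypothetical field names, not our actual analysis pipeline) compares the mix of ad categories shown to different demographic groups.

```typescript
// aggregate.ts — sketch of how pooled observations might reveal demographic
// targeting patterns. Field and category names are hypothetical.

interface AdRecord {
  adCategory: string;  // e.g. "gambling", "employment", "housing"
  gender: string;      // volunteer's self-reported gender
  ageBracket: string;  // e.g. "18-24", "55-64"
}

// For each demographic group, compute the share of observed ads in each
// category, so groups of different sizes can be compared directly.
function categorySharesByGroup(
  records: AdRecord[],
  groupOf: (r: AdRecord) => string,
): Map<string, Map<string, number>> {
  const shares = new Map<string, Map<string, number>>();
  for (const r of records) {
    const perGroup = shares.get(groupOf(r)) ?? new Map<string, number>();
    perGroup.set(r.adCategory, (perGroup.get(r.adCategory) ?? 0) + 1);
    shares.set(groupOf(r), perGroup);
  }
  // Normalise raw counts into within-group shares.
  shares.forEach((perGroup) => {
    let total = 0;
    perGroup.forEach((n) => { total += n; });
    perGroup.forEach((n, cat) => perGroup.set(cat, n / total));
  });
  return shares;
}

// Example: compare the ad mix shown to men and women, or across age brackets.
// const byGender = categorySharesByGroup(allRecords, (r) => r.gender);
// const byAge = categorySharesByGroup(allRecords, (r) => r.ageBracket);
```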

The more data we’re able to capture with this tool, the clearer a picture we’ll have of the new and old forms of stereotyping enabled by dark ads, and the way they shape our information environments.

We know we’ll never have as clear a picture as Facebook does, but it’s crucial we find ways to hold it accountable for the potential and actual abuses that take place in the online advertising world.

This article was written by Mark Andrejevic, a Professor of Communications and Media Studies; Robbie Fordyce, a Lecturer in Media and Communications Studies; Verity Trott, a Lecturer in Communications and Media Studies; and Luzhou Li, a Lecturer in Media and Communications Studies, all at Monash University. It was published by Lens.
