Microtargeting has been in the news, with calls for the practice to be banned in both the US and Europe. Critics cite its potential for discrimination as well as amplification of harmful or misleading content. The debate has been robust, but more clarity is needed to support nuanced and careful consideration.
|What is microtargeting? From Mozilla: Microtargeting is a marketing strategy that uses people’s data — about what they like, who they’re connected to, what their demographics are, what they’ve purchased, and more — to segment them into small groups for content targeting.|
To explain, there are two layers of targeting for an advertisement on online platforms. This is a simplification, but generally:
- The advertiser selects “an audience” using explicit targeting parameters: demographics, interests, location, etc.
- The platform then decides which individual users within that audience actually see the ad; it uses machine learning systems to select the users it predicts are most likely to click on it.
The first layer of targeting may be limited to fairly coarse-grained targeting parameters. However, the second layer will generally use all the data the platform has available – a detailed profile of every user based on likes, browsing history, and any other data that the platform has managed to capture.
Microtargeting is the second layer in which the ad is served to those most likely to click on it. Non-advertising content (called “organic content”) is also typically microtargeted – for example, Facebook will surface posts from your friends that you are most likely to “like” and YouTube will recommend the videos it predicts you are most likely to watch.
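The two layers can be sketched in a few lines of Python. Everything here is illustrative and invented for this sketch (the data, the field names, and especially the scoring function, which stands in for the large ML models real platforms use); it is not any platform's actual API.

```python
# Toy sketch of two-layer ad targeting (all data and names are hypothetical).
users = [
    {"id": 1, "age": 34, "country": "NO", "interests": {"hiking", "tech"}},
    {"id": 2, "age": 29, "country": "NO", "interests": {"cooking"}},
    {"id": 3, "age": 41, "country": "SE", "interests": {"tech", "gaming"}},
    {"id": 4, "age": 25, "country": "NO", "interests": {"tech"}},
]

# Layer 1: the advertiser's explicit audience (coarse parameters).
audience = [u for u in users if u["country"] == "NO" and 25 <= u["age"] <= 40]

# Layer 2: the platform ranks the audience by predicted click probability.
# Real platforms use ML models over rich behavioural profiles; this
# hand-written score is only a stand-in.
def predicted_ctr(user):
    return 0.05 + 0.10 * ("tech" in user["interests"])

ranked = sorted(audience, key=predicted_ctr, reverse=True)
for u in ranked:
    print(u["id"], round(predicted_ctr(u), 2))
```

The key point of the sketch: the advertiser never sees or controls layer 2, which is where microtargeting happens.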
What is the Problem? What is Discrimination?
I have written previously about how microtargeting can amplify harmful or misleading content or manipulate people into unhealthy use of digital products. Here we focus on discrimination. In general, discrimination can simply mean treating some people differently than others, but we will see that microtargeting can also discriminate in a harmful sense: Amnesty International writes that “Discrimination occurs when a person is unable to enjoy his or her human rights or other legal rights on an equal basis with others because of an unjustified distinction made in policy, law or treatment.”
Microtargeting (or any kind of targeting) is, by definition, discriminating in the broad sense: the practice involves treating different people differently regarding the content you present to them. This is not inherently a problem, but harmful discrimination does occur in many ways. European Digital Rights (EDRi) published an excellent report covering these issues, but briefly, microtargeting may cause harmful discrimination through:
- Targeting that leads to unfair exclusion. For example, making it less likely that job or housing ads will be shown to particular populations. In one case, a ProPublica investigation revealed that Facebook allowed advertisers to explicitly exclude users by race, but such exclusion can also occur as an unintended consequence of microtargeting.
- Harmful targeting, such as an advertisement that discloses an interest or characteristic of the individual that the person has not disclosed. For example, a person might receive an ad for a medication related to a private medical condition while using a shared computer.
Microtargeting is Inherently Discriminatory (the closure of rich personal data)
It might seem that such problems can be avoided by simply excluding sensitive characteristics like race, sexual orientation, and medical conditions from targeting data. Unfortunately, the statistical power of correlations, or proxy variables, makes this ineffective.
Platform data, including Facebook likes and location check-ins, have been shown to be highly predictive of sensitive user characteristics. In one study, Facebook likes were found to be highly predictive of sexual orientation, political orientation, and membership in certain ethnic groups. Another showed that location check-in data is highly predictive of gender, age, education, and marital status. This suggests that when content is targeted based on platform data, it is also, in many cases, simultaneously and implicitly targeted based on protected characteristics such as disability, gender reassignment, pregnancy, race, religion or belief, and sexual orientation.
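A toy simulation makes the proxy effect concrete. The numbers below are entirely synthetic, invented for this sketch: a protected characteristic is never collected, but a seemingly neutral signal (liking a particular page) happens to be correlated with it. Targeting on the neutral signal alone still skews who sees the ad.

```python
import random

random.seed(0)

# Synthetic population: a hidden protected attribute, and a "neutral"
# page-like that is merely correlated with it (the proxy).
population = []
for _ in range(10_000):
    group = random.random() < 0.30          # 30% belong to the protected group
    p_like = 0.70 if group else 0.10        # the like correlates with the group
    population.append({"group": group, "likes_page": random.random() < p_like})

# "Neutral" targeting rule: show the ad only to users who like the page.
targeted = [u for u in population if u["likes_page"]]

base_rate = sum(u["group"] for u in population) / len(population)
targeted_rate = sum(u["group"] for u in targeted) / len(targeted)

# The targeted set skews heavily toward the protected group
# (by Bayes' rule, roughly 75% here, versus 30% in the population).
print(f"protected group share: population {base_rate:.0%}, targeted {targeted_rate:.0%}")
```

No protected attribute was used in the targeting rule, yet the outcome discriminates on it; this is the mechanism the studies above document with real platform data.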
When targeting with adequately rich data, it is extremely challenging to prevent discrimination. As described in research from the University of Arizona, collection of the protected characteristics themselves is generally necessary to prevent discrimination based on them.
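One way to see why the protected attributes themselves are needed: even just *auditing* a targeting system for disparate exposure requires group labels for the users involved. A minimal sketch of such an audit, with hypothetical names and data invented for illustration:

```python
from collections import defaultdict

def exposure_by_group(impressions, groups):
    """Share of each group that was shown the ad.

    Note: this audit is only possible because the protected attribute
    (`groups`) has been collected for every user; without it, the
    disparity below would be invisible.
    """
    shown = defaultdict(int)
    total = defaultdict(int)
    for user_id, g in groups.items():
        total[g] += 1
        shown[g] += user_id in impressions
    return {g: shown[g] / total[g] for g in total}

groups = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
impressions = {1, 2, 3}  # users who were actually shown the ad
print(exposure_by_group(impressions, groups))
# group A: 2 of 2 shown; group B: 1 of 3 shown
```

Detecting the disparity, let alone correcting for it, depends on knowing which user belongs to which group; this is the point the University of Arizona research makes.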
So, in fact, any microtargeting based on rich data like that found on the large digital platforms today will always be discriminatory (to some extent) on every possible sensitive characteristic. It may be possible to prevent discrimination on a particular characteristic if accurate data about that characteristic is collected from all users, but even then, discrimination will continue along an endless range of other sensitive characteristics.
Microtargeting with rich data is inherently discriminatory. In their report, EDRi calls for the banning of “targeting techniques that are inherently opaque”. They write:
|As this report has shown, some targeting techniques are inherently opaque, meaning that it is often impossible for advertisers to avoid discrimination, even if they deliberately decide to target their ads based on neutral criteria. Ad optimisation falls into this category, so do targeting tools like Lookalike Audiences.…|
|Furthermore, mandatory legal requirements cannot be limited to prohibited discrimination. As this report has shown, existing definitions of prohibited discrimination fail to cover all instances of harmful automated discrimination by AI systems, for instance in advertising.|
Calling for a ban on opaque microtargeting is not quite the same as banning microtargeting in general, but from a practical standpoint, all the dominant microtargeting systems operating today are opaque. If we don’t want the targeting of advertising and organic content to be discriminatory, we need to move away from microtargeting as we understand it today.
Careful thought is needed to understand which circumstances of microtargeting bring which social harms and benefits. It may be beneficial to allow microtargeting of some kinds of content, under some circumstances, but not in others.
Yesterday, the Norwegian Consumer Council – along with multiple other organisations and experts – called for a ban on surveillance advertising.
Picture: JOSHUA COLEMAN, Unsplash.com