
Contact Tracing Apps Are Not Just a Privacy Tech Issue. They Are a Question About Power

The inversion of democracy is a slow process of getting used to less and less freedom. In moments of crisis, there is a big risk that we make choices about adopting public health tech too hastily, without considering the power dynamics they reinforce. The adoption of the Google/Apple contact tracing API is one such choice.

By Gry Hasselbalch and Pernille Tranberg, co-founders of DataEthics.eu

Many European governments are currently adopting the Google/Apple API for their contact tracing apps in the fight against Covid-19. Although many at first tried to develop their own apps, they were more or less forced to do it the Google/Apple way, partly for technical reasons (e.g. iOS restricts Bluetooth scanning for third-party apps when an iPhone is locked, so apps built outside the API do not work reliably) and partly due to power games with the companies themselves, as was recently described in Politico.

When the pandemic broke out in Europe in early 2020, the idea that contagion could be traced and contained with an app prompted many European states to develop such apps. Many proposed centralized solutions, in which they could collect more data, store it, and use it for e.g. scientific purposes. It can therefore be argued that the solution Google/Apple now offers will protect citizens from potential unsolicited state surveillance. That is, the Google/Apple model is decentralized – based on on-device processing – which also carries less risk of data abuse.
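To make the centralized/decentralized distinction concrete, here is a minimal Python sketch of the general idea behind decentralized, on-device matching: phones broadcast short-lived random identifiers over Bluetooth, record what they overhear, and later match locally against keys that diagnosed users consent to publish. This is a toy illustration under our own assumptions, not the actual Google/Apple Exposure Notification protocol (which derives rotating identifiers cryptographically from temporary exposure keys in a more elaborate scheme); all names in it are ours.

```python
import os
import hashlib

class Device:
    """Toy model of a phone in a decentralized contact tracing scheme.

    Illustrative only: NOT the real Google/Apple Exposure Notification
    protocol, just the general on-device-matching idea behind it.
    """

    def __init__(self):
        self.daily_keys = []    # secrets that never leave the device unless the user is diagnosed
        self.heard_ids = set()  # identifiers overheard via Bluetooth from nearby phones

    def new_day(self):
        self.daily_keys.append(os.urandom(16))

    def broadcast_id(self, interval: int) -> bytes:
        # Derive a short-lived identifier from today's key; it rotates per
        # interval so the device cannot be tracked over time.
        return hashlib.sha256(self.daily_keys[-1] + interval.to_bytes(4, "big")).digest()[:16]

    def hear(self, rolling_id: bytes):
        self.heard_ids.add(rolling_id)

    def check_exposure(self, published_keys: list, intervals_per_day: int = 144) -> bool:
        # On-device matching: re-derive all identifiers from the published
        # keys of diagnosed users and compare them against what we overheard.
        # Health status never has to leave the phone for this step.
        for key in published_keys:
            for i in range(intervals_per_day):
                candidate = hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:16]
                if candidate in self.heard_ids:
                    return True
        return False


# Usage: Alice and Bob are near each other; Bob is later diagnosed.
alice, bob = Device(), Device()
alice.new_day(); bob.new_day()
alice.hear(bob.broadcast_id(interval=42))
# Bob consents to publish his daily keys to a server after diagnosis.
print(alice.check_exposure(published_keys=bob.daily_keys))  # True
```

The design choice that matters here is that the matching in check_exposure happens on the phone: a server only ever sees the keys of users who consent to publish them after a diagnosis, never the contact graph itself.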

For these reasons many privacy experts have supported the adoption of the Google/Apple solution. While this makes sense from a privacy tech perspective, it is puzzling that some of our biggest concerns and criticisms regarding black-box big data designs, surveillance capitalism and data monopolies have not carried over into this debate. Critical questions remain unanswered. For example, Apple and Google say they want to protect users from government surveillance, and that they do not get access to the health data used to match people who have been in contact with a Covid-19 patient. But how do we know that they don't get access, and what other data do they hold that could be used in the contact matching? And the biggest question: should a duopoly of powerful commercial big data companies be allowed to coerce democratic European governments into adopting core technical components of a public health infrastructure?

Without a proper data ethics and social impact assessment of the technologies we adopt today, one that goes beyond mere data protection and technical privacy, the consequences are dire. The inversion of democracy is a slow process of getting used to less and less freedom and accepting different types of power players in society with their different interests – economic, scientific, cultural and social. In moments of crisis, there is a big risk that we make choices about adopting public health tech too quickly, and in doing so accept more general shifts in structural power.

In China, where the first contact tracing apps were adopted, the risks to freedom are more obvious. Perhaps it was these initial experiences – apps introduced with opaque algorithms and data sources, centralised government-held servers, and forcibly implemented mass tracking and surveillance – that created the grounds for the Western hemisphere's hasty and forceful rejection of one type of technology design in favour of another. But even the choices that on the face of it seem morally sound and safe by design often represent much bigger and more complex issues of power.

Data Ethics and Power

The political theorist Langdon Winner famously wrote in 1980: “What matters is not technology itself, but the social or economic system in which it is embedded.” His point was that a technology is never neutral: by design it embodies political and ethical choices. And this is exactly why we need data ethics as an additional perspective on technologies adopted during the Covid-19 pandemic, from AI triage and treatment choices in overloaded hospitals to contact tracing and facial recognition of people wearing masks. Data ethics is the step beyond privacy technology design and legal compliance with data protection. With data ethics we evaluate not only a data technology’s design but also its role in society and the power dynamics it reinforces and produces: the economic, cultural and political interests that shape its design, governance and adoption.

This is a crucial consideration when we get lost in the technical details of contact tracing apps. The choice between two types of data protection and privacy design – the state-designed one (a democratic government would argue that its solution also constitutes a privacy design, even though it is based on centralized data storage) and the one designed by big data technology companies – is not just a choice between two technical answers to potential mass surveillance. It is a choice that represents negotiations of power in society in general, and compromises between different interests. It represents and acknowledges who is in power, who decides, who is accountable, and how much insight we get now and in the future. It represents the shape of democracy.

So, aside from the purely technical, which political choices are we making when choosing the Google/Apple solution? Governments in Europe are held in the palm of big tech’s hand: they have to use a private mobile infrastructure for a public health emergency, and big technology companies have the power to decide what can and cannot be done to connect with and reach citizens.

This situation also gives Google – and Apple, though it has a much better track record on privacy – a prominent chance to position itself as a privacy-focused company. Google is the very company that invented the ‘free’ model, in which people pay with their data instead of money and are microtargeted and manipulated both economically and politically. This is what Helen Nissenbaum, professor of information science at Cornell University, recently called a “flamboyant smokescreen” in the Washington Post, noting the irony that the two companies that had for years tolerated the mass collection of people’s data were now preventing its use for a purpose that is “critical to public health.” And we agree with her that: “If it’s between Google and Apple having the data, I would far prefer my physician and the public health authorities to have the data about my health status…At least they’re constrained by laws.”

Now, what is left to do is at least to demand true transparency. Google and Apple do have access to a lot of data, including our location data, but they say they do not have access to the sensitive public health data. If we are forced to use their API – and we are, as they have power over the mobile infrastructure – governments could at least demand independent third-party supervision.

Ideally, these third parties would be set up as independent public bodies, just as we have done in other sectors with the certification of, for example, organic food or sustainable wood, coffee and fish. The third parties could be the DPAs, the Data Protection Agencies of each European country, or the European Data Protection Board. But do they have the resources? So far, we have trusted Apple’s claim that it does on-device processing and differential privacy (anonymisation). But wouldn’t it be better if we had some kind of independent verification – or even a privacy certification – that could prove that they are doing it right?
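For readers unfamiliar with the term, the sketch below illustrates why a differential privacy claim is, in principle, verifiable. It shows randomized response, the simplest local differential privacy mechanism: a conceptual illustration only, not Apple’s actual implementation, and the function names and epsilon value are our own assumptions.

```python
import math
import random

def randomized_response(true_value: bool, epsilon: float = 1.0) -> bool:
    # With probability p = e^eps / (e^eps + 1), report the true value;
    # otherwise flip it. No individual report can be trusted, which gives
    # each user plausible deniability.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_value if random.random() < p else not true_value

def estimate_true_rate(reports: list, epsilon: float = 1.0) -> float:
    # An aggregator (or an auditor) can de-bias the noisy reports, since
    # observed_rate = p * true_rate + (1 - p) * (1 - true_rate).
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# A third-party auditor could run the published mechanism many times and
# check that the flip rate matches the claimed epsilon within sampling error.
reports = [randomized_response(True) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 2))  # ~1.0
```

The point is that a mechanism like this has publicly checkable statistical properties, which is exactly what an independent certification body would need to test a claim like Apple’s rather than take it on trust.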

No Technical Fix

The final point here is that we should not have to choose between “two evils” – either state surveillance or big tech surveillance (or, to use Shoshana Zuboff’s term for the more general societal power dynamics, “surveillance capitalism”). We should make choices based on our democratic values and nothing else. Those in power tend to narrow our options by forcefully presenting false trade-offs: “choose this tech solution or submit to total surveillance”. But no technology is a magic wand, and no technology is our last resort, so we do not have to accept trade-offs without questioning them or considering alternatives (e.g. the physical tokens, connected neither to our most private devices nor to state or big tech data infrastructure, that EIT is looking into).

A crucial consideration here is that there is always a human side to a technological invention such as a contact tracing app, and this is usually where the core data ethical implications are found: when an app becomes a technological barrier to lifesaving information for citizens, or to crucial qualitative human analysis of situations of contagion; when an app becomes obligatory and limits our movements, whether socially (if it becomes the norm to demand to see other people’s app status) or through institutional enforcement (as is the case in China); or when apps create panic due to false positives, or false reassurance due to false negatives. This “human side” of contact tracing apps is also increasingly recognised in the public debate in Europe. The UK contact tracing app, for example, is being delayed and increasingly presented as a supplement to human contact tracers, not a replacement, as Michael Veale writes here.

Last but not least: is it really worth it? Several experts doubt that contact tracing apps will even work. As security expert Bruce Schneier recently pointed out, they most likely have “absolutely no value” because their false positives and false negatives will in the end create distrust and chaos in use and application (or, as demonstrated in this video, contact tracing apps may have no chance of working at all). The Italian professor Francesco Lapenta makes a similar point in his analysis for DataEthics.eu here.

If there is serious doubt that this will work, maybe we should consider not doing it at all. And when you need around 60% of the population to download an app for it to perhaps work, it might be a complete waste of time. Iceland, which set the record with close to 40% of its population downloading the app, does not call it a game changer. Not at all.

Also read this in the Guardian: Why are Google and Apple Dictating how European Democracies fight Corona