
AI Must Be Based on Human(s’) Rights

By Tim Leffler. This article was originally published in Swedish on Voister.

Gry Hasselbalch works at EU level on data ethics: citizens’ privacy and democratic rights online, who holds power over data, and how AI should be designed. She tells Voister why the social media giants have an inordinate amount of power and how the philosopher Henri Bergson has shaped her ethical views.

You are one of the founders of Dataethics.eu. What is this?

– We describe Dataethics.eu as a constructive and evidence-based think tank. Three other women and I founded it in 2015 to identify and support those building alternative (social) media solutions, and we were also part of the movement around GDPR, which came into force in May 2018. We argued against the notion that data protection and privacy are barriers to innovation, and instead argued that ethical thinking around data is itself a kind of innovation and something we can build an infrastructure around. We tried to show that there were European alternatives to Gmail and Messenger, for example, says Gry Hasselbalch.

– Now, more than six years later, we have affiliated scholars, academic partners and various initiatives, and most of my work focuses on raising awareness of these issues at EU level. Throughout, we highlight questions of data and power.

What do you feel the big tech giants are doing or have done wrong?

– By far the biggest problem is the influence and power that giants like Facebook and Google have over democracy. On a micro level, it’s about how every citizen is affected, and on a geopolitical level, it’s about how dependent our states become on these platforms during, for example, a pandemic. That is why I have shifted from talking about individual privacy to talking about data ethics, because it is such a big and important issue.

What is the data ethics of power that you have also written a book about?

– We have in the world what we could call an AI and big data socio-technical infrastructure. It is embedded in all of society, from how we communicate to what our politics, culture and economy look like. And just like physical infrastructure such as roads, this socio-technical infrastructure is real in the sense that it limits us while opening doors at the same time. It therefore also becomes a kind of infrastructure of power.

– This infrastructure based on big data was already being developed in the 90s, and it has gradually been equipped with AI and advanced analytical tools over recent decades. It gives power to certain players by allowing them to use our data for different purposes. The data ethics of power involves looking at how big data and AI distribute power, and the aim is to make these power relations transparent and give power back to users and citizens.

– Once these power structures are made visible and mapped, we need to build new socio-technical infrastructures with a more democratic distribution of power. This can be done by giving each individual user and citizen power over their own data, so that they can share it with whomever they want; we see things moving on this front in Europe. But it can also be approached from a political angle, by holding existing actors accountable for the data they hold about you and requiring them to release it. Today’s dominant social media platforms do not protect the individual citizen but benefit the biggest players, and there are many examples of discriminatory algorithms that disadvantage minorities.

Which technology poses the biggest challenge right now, would you say?

– On a geopolitical level, I would say AI, as the technology is already in use today in the form of various machine learning and deep learning models. There are so many political and business interests invested in AI, while at the same time it is a technology that carries many ethical risks. In social infrastructures, all the problems we see with big data take on a new dimension when AI is added, both in terms of decision-making and transparency.

– The most dangerous thing now is that we have a war in Europe that is becoming the lens through which many of the most important AI discussions are conducted, which risks pushing us toward conclusions about accelerating AI that, in turn, limit our human autonomy even more.

What ethical challenges do we face with new technologies like AI?

– One challenge with technology that makes decisions for us is that it is endowed with a kind of utilitarian ethics: in simple terms, as much benefit as possible for as many people as possible. We see this with autonomous cars and the question of whether a car, faced with such a choice, should swerve away from a family with small children and hit an elderly person instead. The same applies to legal, political and medical decisions. Who should get a new liver? Who should get compensation? It is all based on where the machine learning model calculates resources will do the most good. In these cases, you can imagine that a human would come to different conclusions and not make decisions in such a calculating way.

What kind of ethics should we lean towards instead, do you think?

– One philosopher who has influenced me is Henri Bergson (1859–1941), who talked about two different kinds of morality: a human morality and a social morality.

– Social morality is the kind that is easily turned into a legal requirement or, to take another example, programmed into a computer system and automated, for instance by an AI system. It is a rule-driven morality with principles that are also found in professional contexts. But it is also a kind of morality that can be overridden: in a war or a crisis, such as World War I and the beginning of World War II, both of which Bergson lived through, this kind of social morality was set aside. Social morality is also very interest-driven, because it applies to my nation, my organization, my profession.

– The other kind of morality is a human morality, and it serves no specific interest. It is a morality that you live from moment to moment and that evolves continuously. It is a kind of universal morality that is inclusive and cannot be set aside. It was this kind of morality that underpinned the drafting of the UN’s Universal Declaration of Human Rights after the Second World War. I believe that in our technology and in our AI solutions we lose this idea of a universalist moral approach if we let the technology evolve with the social morality of utilitarianism alone. Human morality is about the decisions we make when life comes to a head and we listen to our heart, gut and intuition. These decisions require a self-reflection that machines lack, and therefore a machine’s moral decision-making will always be weaker than ours.

This is, of course, one way of looking at ethics and morality that you find reasonable, but others may have a different meta-ethical view and hold that utilitarianism is the way to go, regardless of our human intuitions in certain situations. How can we find a universal data ethics platform to stand on?

– I think we already have it in the Universal Declaration of Human Rights. It is based on our dignity as human beings. We don’t need a new ethical framework when we already have one; we need to get better at applying it. In the light of human rights, for example, we should not compromise our privacy to serve other purposes.

– Virtually everyone I meet in contexts where these issues are discussed agrees that people should be empowered to make informed, democratic decisions. All parties from democratic states agree that we need to be in the decision-making loop, and no one argues for fully autonomous decision-making processes. In the EU, we currently have a proposal for what we call an AI Liability Act, which is about how we hold humans accountable for an AI’s decisions. UNESCO and the OECD also have principles on AI and decision-making. All of these guidelines, whether regional or global, are rooted in the Universal Declaration of Human Rights and other democratic values. This is positive, and a counterweight to arbitrary decision-making using solutions owned by companies that want to keep their code secret, says Gry Hasselbalch, founder of Dataethics.eu.

Translation from Swedish into English was done with DeepL (an alternative to Google Translate) and edited by DataEthics.eu.