A human-centric approach to AI: The EU Ethics Guidelines for AI

The EU’s first ethics guidelines on AI have been published. Dataethics.eu’s co-founder Gry Hasselbalch is a member of the EC High Level Expert Group that developed them. On the day of the launch, she outlined her personal perspective on the guidelines and the process of developing them in five points. See also the important links to the guidelines at the bottom of this post.

  1. The human-centric approach

First of all, the ethics guidelines are framed by one essential perspective: the human-centric approach. This approach has gained traction particularly in European policymaking (e.g. the European Parliament also recently adopted a Resolution on Artificial Intelligence and Robotics emphasizing the human-centric perspective on AI). It is important to see this approach as a product of a specific cultural context, that is, as a natural element of a European framework, because we can trace it back to formally established legal frameworks (such as the GDPR and fundamental rights) as well as historically tested principles and values. As such, the human-centric approach is not a new thing in Europe. We’ve seen it formally formulated in policy instruments dealing with fast-paced technological developments all the way back to the 1990s, when the Council of Europe, for example, published its bioethics convention with an article stating that the human being’s interests prevail over “the sole interests of society or science” (The Oviedo Convention, article 2). Basically, it is an analytical framework for the context in which we innovate, do business and develop technology: always looking at our practices from the perspective of the human being. From which perspective do we interpret the law? Our innovation practices? Our business practices? Our technology development?

  2. The power relations

“Ethics washing” is a keyword in the current debate on AI ethics (and data ethics in general). Numerous ethics boards, groups and initiatives have been launched and then dismissed in public opinion on the grounds that they cover up dubious state and business practices, or that they divert attention from compliance with new legal requirements such as those of the GDPR. How can we make sure that the (most of the time) honest work that we do in these groups is taken seriously? That it is not actually abused?

We can’t deny that the AI High Level Expert Group’s work, beyond its official purpose, has also been viewed as an exercise in power relations and dynamics. Between different global forces (primarily China, the USA and Europe): the ethics approach was, for example, described from the very beginning as a competitive advantage for Europe (something Pernille Tranberg and I have also argued before, in our 2016 book). And between different interest groups in society: there was, for example, a clash between what was, I would argue, very simplistically presented in public as the “industry” and the “civil society” stakeholder groups (each in fact a very diverse stakeholder group with competing internal interests), regarding the balance between risks and opportunities in the guidelines. Here, I emphasized in my input to the guidelines that any AI opportunity is always also an ethics problem to solve. In general, by emphasizing in simple terms the great opportunities of AI, are we then promoting a society model of prediction and control (the ultimate dream of Modernity: to conquer and control nature, including our own messy biology)? And is this really what we want? If so, we should at least be reflective of the consequences of this choice, because we are making a societal ethical choice here.

These are all details, and I do think we managed to strike a balance between different interests with the emphasis on trustworthy AI (no opportunities without implemented ethics). My point here is that we need to make visible these power dynamics, which I would argue are also representative of the power relations of the Big Data society in general. Different interests are embedded in data and in the technologies and businesses that deal with data, and these interests evidently also shape their development in one direction or another. So let’s make a real effort to do ethics right! No more secrecy, no more special treatment and status without transparency. As we’ve seen with other ethics initiatives, secrecy about everything from the criteria for the very selection of members to the interests embedded in the actual negotiation of content, and non-transparent processes in general in the name of PR or for other insignificant reasons, is not good for anything: not for the institutions that are in dire need of ethics guidance, nor for the public that is affected by unethical practices. We absolutely need to lay open the conditions of negotiation and the distribution of power, in order to point to real, effective design, business, policy, social and cultural processes that support a truly human-centric distribution of power (which is democratic in essence!).

  3. Ethics do not replace law

Ethics initiatives such as the AI High Level Expert Group have a problem: they are often perceived as solutions in their own right. But it is incredibly important that we see them for what they are. Ethics initiatives do not replace legal frameworks such as the European General Data Protection Regulation (GDPR) or human rights law. They can, however, complement existing law and may inspire, guide and even set in motion political, economic and educational processes that foster an ethical “design” of the big data age, which means everything from the introduction of new laws and the implementation of policies and practices in organisations and companies, to the development of new engineering standards, awareness campaigns among citizens and educational initiatives.

Essentially, innovation is about meeting new societal requirements and demands in new ways. This is where ethics can help. Laws reflect evolving societal requirements, and an interpretation of law and ethical awareness is therefore integral to any reflective innovation process. In this way, law and ethics can be seen in combination as enablers of AI innovation.

  4. Red lines/critical concerns

From the outset of this group’s work, we’ve considered red lines/critical concerns with regard to Europe’s AI development. Critical concerns are described in very general terms in the ethics guidelines, and they will continue to be a subject matter in the High Level Group’s future work. The High Level Group has two areas of work and two deliverables: one has now been completed (the ethics guidelines); the other is the policy recommendations that the group has been working on over the last year and is now finalising. Critical concerns will also play a role in the policy recommendations. But we need to be concrete now. Massive citizen scoring, one critical concern mentioned in the ethics guidelines, is indeed a very problematic use of AI. The development of autonomous weapons is another very critical concern. But areas like these need more than an ethics guideline; they need a strong policy and regulatory framework with a univocal message. So our next step is to be specific: are there, for example, more areas of AI innovation, not mentioned in the guidelines, that need specific policy considerations and even specific regulation? I believe that Europe needs to draw a line in several areas, as we’ve done so many times before (think, for example, of CRISPR-Cas9 used for human cloning and genetic modification, which cannot be funded under Horizon 2020). There are areas of innovation so critical that we must show our commitment to the ethical purpose of putting the human interest at the centre, above all other interests (specifically scientific and commercial ones). One example I can think of is the rapid evolution of neurotechnologies and the merging of the human brain and AI, an immense area of investment that raises significant critical concerns, for example in terms of the implications for the very composition of the human being.

  5. … and data ethics?

Data/digital/AI ethics has gained momentum. In Denmark, for example, over the last year we have had a governmentally established data ethics expert group (I was a member of that one) and now have a data ethics council (I am not a member of that one). Every institution in Denmark today has an official opinion about data ethics, and everyone is stumbling over each other’s data ethics principles and guidelines. It all happened in one year. From being, as we at DataEthics.eu were always told before, “boring”, data ethics is now the hottest topic. Everyone is a data ethics expert. But what are we actually talking about here?

One major misrepresentation I meet over and over again, for example in the current somewhat hyped AI ethics debate, is the description of AI as an isolated, radical societal evolution. But it is not. We have to stop talking about data and AI (and their ethics) as something that should or will change everything on their own. What AI can do is support some positive or negative developments in interaction with other ongoing societal, technological, cultural and legal developments. And everything depends on the power dynamics steering the development. So what we need to do is take a step back and try to understand what AI means in the more general evolution of society, and what our ethical response to these general implications is, that is: how do we steer this development in the interest of human beings?

I am not a traditional ethicist (that is, a philosopher), but I come from a humanist tradition, and my view on data ethics specifically is therefore shaped by this background. I describe my approach, which I call a “Data Ethics of Power”, in an upcoming article in the journal Internet Policy Review (stay tuned). This is an approach I believe I share with some other non-traditional ethicists who play a crucial role in the current ethics debate: the legal scholars, cultural theorists and sociologists with their view on the ethical implications of, in particular, big data. Basically, a data ethics of power is an action-oriented agenda that aims to expose the power relations embedded in the Big Data society. It takes its point of departure in a general data (re)evolution of society, enabled by computer technologies and dictated by a “datafication” of all things (and people) in order to organise society and predict risks and/or opportunities (a “Society of the Destiny Machine”). If we then look at power as something that is embedded in and distributed via the information technology infrastructure of society, of which AI is the evolution, a major concern for a data ethics analytical framework is to understand how some societal actors gain power over others in opaque and unaccountable ways. This concerns not only the state as the primary power actor, but also other stakeholders that gain power through the accumulation of, analysis of and access to big data. We see, for example, changing power dynamics in the increasing data asymmetry between individuals and the big data companies that collect and process data in digital networks. The people best equipped to understand these specific dynamics, I believe, are the people from the disciplines I mentioned before.

In this view, my overall assessment of the negotiation of the EU ethics guidelines, which involved different societal stakeholders, is that it was actually a success. Because in light of the last 20 or so years of internet and data technology and business development, it is truly an achievement to create ethics guidelines for the development of AI where all stakeholders agree on placing the human back at the centre.

See also:

Early in the process I wrote a public blog post outlining my understanding of ethics and AI and the discourses shaping the debate: https://www.linkedin.com/pulse/lets-talk-ai-gry-hasselbalch

Important links:

- EU Commission press release: http://europa.eu/rapid/press-release_IP-19-1893_en.htm
- AI Alliance page (accessible to all, also non-alliance members), gathering all info on the guidelines, the piloting process, best-practice exchange on implementing Trustworthy AI and the stakeholder consultation: https://ec.europa.eu/futurium/en/ai-alliance-consultation
- All info on the piloting process and registration for it: https://ec.europa.eu/futurium/en/ethics-guidelines-trustworthy-ai/register-piloting-process-0
- AI ethics guidelines (pdf): https://ec.europa.eu/digital-single-market/news-redirect/648305
- AI communication (pdf): https://ec.europa.eu/digital-single-market/news-redirect/648304
- AI new definition document (pdf): https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines
- AI factsheet, updated (pdf): https://ec.europa.eu/digital-single-market/en/news/factsheet-artificial-intelligence-europe