Ethics procedures are a core component of AI innovation and research. It is essential that we do not undermine them during the Covid-19 crisis. Europe still needs to deliver on its “Trustworthy AI agenda”.
Most of us know MIT’s Moral Machine experiment, in which an autonomous car has to choose between, for example, hitting and killing an elderly person or a young one. The idea is to consider how we transfer human moral judgement into a machine: how is it programmed and trained to choose who gets to live or die? Now, think about AI software like this that has to perform triage assessments of incoming patients at a hospital during the Covid-19 crisis. The ethical choices and implications are similar, but even more complex, and they are real. Who gets the hospital bed and the respirator? Based on what criteria? In addition to age, some might argue that we should consider each patient’s life chances based on their symptoms, and some might even argue that each patient’s critical or non-critical role in society should be assessed. Is it a nurse? Is it a biologist? Is it an artist? Whose life is more important?
Ethics is incredibly complex, nuanced and anything but practical. Nevertheless, mitigating ethical implications and dealing constructively with them is an essential procedure when developing AI, because when we do this we are programming, so to speak, our values, and sometimes even the enactment of our moral agency, into AI.
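To make this concrete, here is a deliberately simplified sketch of a triage scoring function. Every criterion, weight and threshold in it is a hypothetical assumption invented for illustration, not a real clinical protocol; the point is that each line is a value judgment written as code.

```python
# Hypothetical sketch only: a toy triage scoring function showing how value
# judgments become code. Every criterion, weight and threshold below is an
# assumption invented for illustration, not a real clinical protocol.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    survival_estimate: float   # 0.0-1.0, itself the output of another model
    is_critical_worker: bool   # recording this at all is an ethical choice

def triage_priority(p: Patient) -> float:
    """Return a priority score; higher means first in line for a bed."""
    score = p.survival_estimate        # weighting life chances
    if p.age < 65:                     # weighting age: a moral choice
        score += 0.2
    if p.is_critical_worker:           # weighting social role: contested
        score += 0.1
    return score

patients = [
    Patient(age=34, survival_estimate=0.6, is_critical_worker=True),
    Patient(age=78, survival_estimate=0.7, is_critical_worker=False),
]
# Sorting by this score silently enacts the developer's moral judgements.
queue = sorted(patients, key=triage_priority, reverse=True)
```

Whether such a system should exist at all, and who gets to set those weights, is exactly the kind of question an ethics procedure is meant to surface before deployment.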
Ethics before and after Covid-19
Before the Covid-19 crisis, a European AI strategy was published by the European Commission in 2018 and further developed in policy and expert group initiatives over a two-year period, with a growing emphasis on ethics grounded in European laws and cultural values (something I argue is very particular to Europe in terms of its legal and cultural values frameworks in the working papers Culture by Design: A Data Interest Analysis of the European AI Policy Agenda and A Framework for a Data Interest Analysis for AI). This meant that in late 2019 and early 2020 we were starting to see a European AI policy agenda delivering on the concept of Trustworthy AI and its seven key ethical requirements, spelled out in the European High-Level Expert Group on AI’s ethics guidelines and repeated in several policy statements, new documents and strategies:
Human agency and oversight.
Technical robustness and safety.
Privacy and data governance.
Transparency.
Diversity, non-discrimination and fairness.
Societal and environmental wellbeing.
Accountability.
See more in the ALTAI assessment list.
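The ALTAI list turns these requirements into concrete self-assessment questions. As a hedged illustration (the questions below are paraphrased and abridged, not the official ALTAI wording), the requirements can be treated as a simple machine-readable checklist:

```python
# Hedged sketch: the seven requirements as a machine-readable self-assessment
# checklist, loosely inspired by ALTAI. The questions are paraphrased and
# abridged for illustration, not the official ALTAI wording.
REQUIREMENTS = {
    "Human agency and oversight": "Can a human override or halt the system?",
    "Technical robustness and safety": "Is the system tested against misuse?",
    "Privacy and data governance": "Is personal data minimised and documented?",
    "Transparency": "Can decisions be explained to the people they affect?",
    "Diversity, non-discrimination and fairness": "Is performance checked across groups?",
    "Societal and environmental wellbeing": "Are wider impacts assessed?",
    "Accountability": "Is there a clear owner and a route to redress?",
}

def open_issues(answers: dict) -> list:
    """Return the requirements whose question is not yet answered 'yes'."""
    return [req for req in REQUIREMENTS if not answers.get(req, False)]

# Example: an assessment that has only addressed privacy so far.
print(open_issues({"Privacy and data governance": True}))
```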
Now, a snapshot from Europe during the Covid-19 crisis shows an immense digitalisation and AI boost, often with reduced time (or no time) for the real trials and the ethics procedures that would normally ensure that potential ethical and social implications are assessed and mitigated: telemedicine with remote consultation of patients, contact tracing, big data-based algorithms to support diagnosis and epidemiological studies, personalised medicine, care robots (see more in the report “Artificial Intelligence and Digital Transformation: early lessons from the COVID-19 crisis”).

Here is another snapshot of how Europe is delivering on the seven key ethical requirements. Take “Privacy and data governance”, for example. The very introduction of AI is always an ethical problem to solve, as it relies on big data analysis, particularly when this involves personal data. During the Covid-19 crisis, we have seen an acceleration of big data analysis and tracking of personal data: drone surveillance, data on personal devices, contact tracing, personalised medicine, location tracking, biometric bracelets, facial recognition and crowd behaviour analysis. At one point the European Commission even gave its ‘seal of excellence’ to a technology that provides “advanced video analytics, including real-time face recognition and crowd behaviour analysis” to be used in the bloc’s fight against another potential outbreak of the coronavirus.
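The contact-tracing debate, discussed below, eventually pushed many European countries toward decentralised, data-minimising designs. As a rough sketch (in the spirit of protocols such as DP-3T, with key sizes, rotation slots and derivation labels that are my illustrative assumptions, not the actual specification), the core privacy idea looks like this:

```python
# A rough sketch of a decentralised, data-minimising design in the spirit of
# DP-3T: phones broadcast short-lived random identifiers instead of personal
# data. Key sizes, rotation slots and derivation labels are illustrative
# assumptions, not the actual protocol specification.
import hmac
import hashlib
import secrets

def new_daily_key() -> bytes:
    """A fresh secret generated on-device each day; it leaves the phone only
    if the user tests positive and explicitly consents to publish it."""
    return secrets.token_bytes(32)

def ephemeral_ids(daily_key: bytes, slots: int = 96) -> list:
    """Derive rotating Bluetooth identifiers (one per 15 minutes) that an
    observer cannot link to a person, or to each other."""
    return [
        hmac.new(daily_key, f"slot-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(slots)
    ]

# Matching happens locally on each phone: compare identifiers heard over
# Bluetooth against identifiers re-derived from voluntarily published keys.
published_keys = [new_daily_key()]                    # from infected users
heard_nearby = set(ephemeral_ids(published_keys[0]))  # observed via Bluetooth
exposed = any(eid in heard_nearby
              for k in published_keys for eid in ephemeral_ids(k))
```

The value choice is visible in the architecture itself: no central authority learns who met whom, in sharp contrast to the centralised tracking tools listed above.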
We can’t derogate from ethics
The recent focus on “AI ethics” in public discourse and policy does not mean that ethics procedures were not already a core requirement of AI innovation and research. For example, in Europe (and elsewhere) we have ethical standards such as the Helsinki Declaration and the Oviedo Bioethics Convention, and accordingly ethics procedures for research, science and innovation. That is, even though a technology is transforming and therefore appears to be new, as is the case with AI, ethics procedures are not similarly a new and unexplored area that we can derogate from during crises and emergencies.
This is only the beginning of a crisis that is challenging not only ethics principles and words, but their very practical implementation. The fierce public debates on contact-tracing apps were an example of the maturity of ethics reflection in Europe, where some values are considered non-negotiable. But they also illustrated our weaknesses in the field. The debate was too narrow (as Pernille Tranberg and I argue here), and it diverted attention from other important ethical dilemmas and ethics procedures, including other ethically challenging AI innovations introduced at the same time. Most importantly, it did not seem to consider the structural and social components of a technical innovation, which would have made visible the power relations embedded in the “Big Data Society” and the conditions of their negotiation and distribution. Accordingly, design, business, policy, social and cultural processes that prioritise a human-centric distribution of power were not introduced.