European AI is Human Made, Fed and Driven 

The German Presidency of the Council of the EU hosted an online conference on Monday that stressed the need for a much stronger focus on fundamental rights in the digital age, especially when AI is embraced as a fix to ensure social goods and growth now and in the future.

The aspiration is to do AI the European way. To support this endeavor, the EU Fundamental Rights Agency (FRA) launched a new report on AI and fundamental rights. The report examines the use of AI in four areas: social benefits, predictive policing, health services and targeted advertising.

A New Imperative
As part of his presentation, the Director of FRA, Michael O'Flaherty, put forward an imperative that is simple yet pivotal for our approach to AI: AI is human made, human fed and human driven. With that as a starting point, embracing respect for fundamental rights and ensuring privacy, non-discrimination and access to a legal remedy stand out as the logical consequence when developing AI solutions in democratic societies.

This way of anchoring fundamental rights and freedoms in the digital age is not new, but its reiteration is important in the sphere of AI, as discussions on transparency, explainability and foreseeability are often overruled by tech giants and other actors in the AI industry. Arguments about black-box technology, lack of knowledge or the use of non-disclosure agreements are thus used to stall requirements for insight by the public sector, NGOs and academia.

Oversight and Accountability
The alleged opacity of algorithms hinders built-in safeguards for non-discrimination; freedom of expression, association and movement; the right to education, housing, work and social benefits; and consumer rights. As such, it carries a high risk of setting aside the effective protection of fundamental rights.

Against this background, many speakers presented concrete proposals for new legal obligations supplementary to those found in the GDPR. These included, for example, mandatory fundamental rights impact assessments and strengthened bodies to ensure effective oversight and accountability. This role could easily be vested in existing structures, such as National Equality Bodies and National Human Rights Institutions.

A Need For Impact Assessment
To that end, a new practical guideline for businesses and other actors in the digital ecosystem deserves attention. The guideline was launched by the Danish Institute for Human Rights in November 2020 with the purpose of providing the private sector with tools to perform human rights impact assessments when developing, deploying and procuring digital and AI-based products and solutions. The tools cover planning, collection of information, impact analysis, identifying means of mitigation and remediation, as well as monitoring and reporting.

The timing of the toolbox is excellent, as it accommodates a specific need highlighted throughout today's conference on AI and fundamental rights.

Read more: A tool for public procurement of AI is presented by Dataethics in our white paper, Data Ethics in Public Procurement of AI-based Services and Solutions.