Presentation at the Future is Data Conference, 9 November 2021, by DataEthics.eu’s co-founder and Research Director Gry Hasselbalch
Transcript:
I want to talk about data, AI and power, or, in other words, data ethics in an international context: the sociotechnical infrastructures of power that we are supporting in conversations like the one we are having here today.
Right now, different approaches to AI and data are competing at regional and global level to gain what some would call a technological momentum, where local systems evolve into larger, more integrated global systems.
We have a competition playing out in front of our eyes between different technological systems and styles, between scientists and entrepreneurs, and between the requirements and expectations of users, investors and lawmakers. Yet there is no real open conversation about their underlying power dynamics.
With the data and AI concepts addressed today, let’s also think about the kind of power dynamics that they reinforce or create. How do they distribute power between citizens, governments, industries and so on? Who do they empower? And who do they disempower?
Now, to illustrate what I am saying here, let me present two imaginary examples of data-sharing infrastructures for AI, each supporting a different structure of power.
First, imagine a city infrastructure supported by a decentralised IoT sensor data infrastructure with AI components. It could be used for waste management with smart data-collecting trash cans, for example, and for smart lighting with sensors detecting when lights are required in order to save energy. It could also include data about people, sensed and acted on in a mobile environment.
Many actors could have an interest in the data resources generated within this infrastructure: commercial actors with an interest in using data to personalise, train and improve their services; scientists with an interest in improving results with data; state actors with an interest in making services and processes more efficient and in controlling the city.
The risk, of course, is that only the interests of the most powerful few are met.
But in this first example of a data infrastructure for AI, we want to think of data as a commons: opening up data to SMEs and scientists, but also, and essentially, to citizens, empowering them with tools that enable them to capture, aggregate and manage their personal data and enrich their personal lives with insights and support. This would of course be a data infrastructure that protects their privacy by default (differential privacy is a good option here). But we could also think about data-free zones where data will never be collected.
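To make the differential privacy option slightly more concrete, here is a minimal, purely illustrative sketch in Python using the standard Laplace mechanism: a city could publish aggregate sensor statistics, such as how many residents used a smart trash can, without any single person’s data noticeably affecting the published figure. The epsilon value and the count below are hypothetical, not taken from any real deployment.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Publish a count with differential privacy via the Laplace mechanism.

    Noise drawn from Laplace(0, sensitivity/epsilon) means that adding or
    removing any one person's record changes the output distribution only
    slightly (by a factor bounded by exp(epsilon)).
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: daily uses of a smart trash can in one neighbourhood.
true_count = 1340
print(dp_count(true_count, epsilon=0.5))  # noisy, privacy-preserving count
```

The point is not this specific mechanism, but that privacy protection is built into the infrastructure by design rather than added afterwards.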
Now I want you to think about another type of AI data infrastructure for a smart city, one that I believe is more ethically problematic, even dangerous. This one is centralized and developed by a single company. It could, for example, be designed to monitor every vehicle in the city and in this way help reduce traffic jams. In fact, it could do all the things I mentioned in the first example of a data infrastructure.
We could also combine this centralized data infrastructure with video footage of traffic, for example, looking for signs of accidents in order to alert the police. It could combine data from different registers and authorities and map the entire city through thousands of cameras with built-in facial recognition. In this way, not only would car accidents be detected and responded to faster, but things like illegal parking and suspicious behaviour by drivers (and citizens in general) could also be tracked live and responded to by the police.
The centralized data infrastructure I am describing here has no citizen oversight or control. We don’t know when our data is collected, and we have no insight into the data process that led, for example, to our car being pulled over by the police. We don’t know how or why. The huge amount of data generated by this system is designed to meet the interests of only a few powerful actors, namely law enforcement, the government and the big tech company that created the AI data system.
This sounds like science fiction, but it’s not. These two examples are based on two existing smart cities in different world regions. Today there are many real-life examples of data infrastructures for AI like these, in all sectors and areas of our lives: health, finance, our judicial systems, social welfare and so on. And as said before, their set priorities, their design and the imagination that goes into them are also competing for the momentum that will shape the more integrated global data infrastructures of this era.
We need to be critically aware of their power dynamics, to understand what kind of power we reinforce when we design the components of data-intensive sociotechnical infrastructures: democratic power, monopolistic power, totalitarian power. Because this is what we do. We create, provide and distribute power by design.
Basically, we have to decide here and now: which grand narrative about the role of data technologies do we want to guide innovation, investment, our policies and legislative proposals? Whose interests does that narrative serve? Which value system will dominate the global technological momentum of the 21st century?
This is why we need a more values-based global dialogue on AI, one that is not only focused on AI uptake and competition as such: who is the fastest, the most powerful.
I mean, one thing is what we believe AI can achieve, how it can transform our lives, the AI opportunities we strive to realize through the accumulation of data for AI. There are indeed so many good things we can imagine doing with AI: for the environment, for health, for the UN Sustainable Development Goals. No doubt about that.
And this is the narrative about AI that most policy dialogues focus on right now. But I ask you to also think about AI and data as infrastructures of power and values. So, while we are achieving what we dream AI can do for us, we need to think about what kind of societies we are creating and which infrastructures of power these technologies support. I hope democratic infrastructures of power, serving the human interest and empowering humans. But this is not a certainty. We need to guide that narrative.
The EU has a particular position within the global agenda on AI, developed over the last 10 years. It started with the tough stance on data protection and the GDPR, positioning the EU as a global “regulatory superpower”. It moved on to the ethics work we did in the EU High-Level Expert Group on AI, the proposed Data Governance Act being negotiated right now, and a risk-based regulatory approach to AI in the newly proposed AI Act.
And the positive thing is that the EU is now creating alliances globally with like-minded partners on a human-centric standard for trustworthy AI.
It is so important that we build on existing multilateral and bilateral initiatives within the context of international fora.
Recently we had the G20 Rome Leaders’ Declaration, with references to AI and data ethics, directly mentioning trustworthy human-centered Artificial Intelligence and emphasizing things such as accessibility and data protection, high interoperability and transparency, diversity and inclusion, women’s empowerment and children’s rights. This is a new grand narrative on technological progress: less focused on competition, more on cooperation and democratic values.
We also have the Global Partnership on AI, with its Paris summit this week, which I will participate in. Its founding members come from regions all over the world, with a shared values-based commitment to the development of trustworthy AI.
We need to create more international spaces for dialogue and joint initiatives like these with like-minded partners, and to support dialogue and knowledge exchange between multiple stakeholders around a human-centric narrative of data and AI.
We need these spaces of negotiation and diplomacy to facilitate convergence of regulatory approaches, create legal certainty and resolve conflicts of interest within an overarching framework that ensures a humanly empowering data infrastructure for AI.
Watch the entire debate interpreted in Polish and sign language
See also the European Commission’s International Outreach on a Human-Centric Approach to AI initiative (InTouchAI.eu), for which Gry is the key AI ethics expert.