Humankind is a slow species: human societies evolve gradually, and nature doesn’t move fast. It evolves over millennia. We need time. AI racers, please respect the time it takes to implement democratic reflection and the policy process.
One day in May 2023 I was invited to speak on a panel on the policy implications of AI foundation models at Microsoft’s Data Science and Law conference. Arriving at their headquarters in Brussels, I met a few friendly, shy people who stood on the other side of the road holding signs with messages such as “Pause AI” and “Build AI Safely or Don’t Build AI at All”. Their shyness reminded me of what big corporations seem to be lacking in the current hasty race towards AI supremacy: modesty. In fact, what these kind human beings are asking of the big corporations is not that much. It is simply: give us time, step back, take a breath (a human one), be kind, be modest, respect the delicacy of humanity, human societies and nature.
I personally believe that the message could be even more direct. Listening that day in Brussels to Microsoft’s President Brad Smith talking about the need for EU regulation, and to OpenAI’s CTO Mira Murati about their openness to contributing to the same, it seemed less a generous offer to respectfully support a democratic process than a warning to make room for corporate needs. Certainly, OpenAI’s CEO Sam Altman was more frank when telling Reuters at around the same time that pulling ChatGPT out of the EU could be an option if the AI Act’s requirements remained as they were at that stage of the legislative process. Though he also seemed to be of the opinion that it would never get to that: “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back… they are still talking about it”. Fortunately, the EU’s industry chief Thierry Breton responded in a Reuters interview following Altman’s statement that the EU’s rules “cannot be bargained”.
Time for Contemplation
This is not a race that should be started or guided by corporations. In fact, this is not a race at all. It is a conversation that we need to have in society, among human beings, as part of a democratic process. What is at stake in the current AI debate is profoundly important, touching the core of humanity’s identity, our complexity and our unique sense of time. That is why we need the time to make sure that we get it right from the beginning.
Taking our time for uninterrupted “contemplation” is a fundamental component of a democratic political process. Hannah Arendt, a Jewish philosopher who witnessed the atrocities of World Wars I and II and then spent decades writing about human power, regarded contemplation and independent critical thinking as fundamental prerequisites for political action. She emphasized the necessity of temporarily withdrawing from public discourse in order to nurture contemplation and independent critical thinking. What she proposed was that we are most powerful and politically free when we are left undisturbed to contemplate. This is certainly not the condition of the current AI debate, which seems to be characterised first and foremost by noise.
A Technological AI Momentum
What we are experiencing now is not new. Throughout history, the transformative role of emerging technologies in human society has been subject to much noise, but also to human consideration and to efforts to direct and govern. Characteristic of such moments is the competition that takes place between different interests, different technological systems and their respective values, which eventually results in one system taking precedence over the others. This is what historian Thomas P. Hughes referred to as the “Technological Momentum”, in which local and regional technological systems finally become interoperable and evolve into large international systems. Hughes used the example of the negotiation of the electric power system that was extended across borders at the end of the 19th and the beginning of the 20th century.
We can and should think about the current competition between different AI “stakeholders” for an AI momentum in a similar way: as a negotiation between different systems that will eventually turn into one. In the case of electric power, the system that prevailed was one that relied on centralized fossil fuel-based power plants and the expansion of power grids.
AI Infrastructures of Power
The electric power system serves as an important example of why it is so crucial to get the values right when establishing the rules and standards for AI. For the last 70 years we have been in a constant repair mode to mitigate the environmental damage of the global infrastructures for the distribution of electric power: integrating alternative power sources into the electric power systems, trying to reduce their emissions, improving grid efficiency to optimize energy use, and so on. We are frantically trying to fix the original mistake of considering only the interoperability of a basic infrastructure to meet the needs of commerce, industry and residents.
We run a comparable risk today if we think we are merely negotiating the needs and interoperability of a basic infrastructure. A basic AI infrastructure extends across borders through distributed computing and networking technologies, servers, data centers, and cloud platforms located in different regions; its interoperability is secured by enabling seamless communication and data exchange between the different systems and platforms. Interoperability is enabled through the use of protocols, standards and APIs, and through data sharing between different AI components. But this is not all an infrastructure does and is.
A socio-technical infrastructure is first and foremost the expression of very specific needs, values, priorities, technological styles and cultures. This is why I use the terms Big Data and AI Socio-Technical Infrastructures of Power (BDSTIs and AISTIs) (Hasselbalch, 2021). Basic AI infrastructures, for example, do not only extend across borders; they extend through constellations of power (we all know the main cloud platform providers: Microsoft Azure, IBM Cloud, Google Cloud and Amazon AWS). Furthermore, it is evident today that AI infrastructures do not just extend our human powers in a frictionless manner; they also challenge them, reinforcing asymmetric power relations, disempowering the already powerless while reinforcing the power of the powerful. AI infrastructures are indeed not just basic interoperable infrastructure; they are socio-technical constellations of power. And they are the result of interest negotiation and of different ways of imagining why things ought to be the way they are.
Take, for example, the big data infrastructures developed since the 1990s, growing with the sea of data of the World Wide Web and now feeding the components of the AI infrastructure for generative AI such as ChatGPT. These big data infrastructures of power are in fact not just basic infrastructure, they are: “human-made spaces shaped by human imagination, compromise and domination of some interests over others. A driving force here has been the commercial and institutional fantasies about the potential of big data as an unlimited resource and the commercial and technical risks to companies and infrastructures that fail to store, collect and process it.” (Hasselbalch, Data Ethics of Power, 2021, p. 17)
The Policy Momentum for Good Governance of AI
There is right now no shortage of hype about the superpowers of AI, and plenty of arguments for why we should leave AI corporations and scientists alone to develop AI components, services and products without regulatory or good-governance obstacles and disturbance. Yet what we actually need is the time and undisturbed space to shape the AI momentum with good governance.
Fortunately, there are many good AI governance initiatives. Over the last couple of years, hundreds of ethical principles and recommendations for AI have been published worldwide. Standards and regulations are being created and negotiated, and we see “like-minded” governments taking increasingly stronger stands. Recently the leaders of the G7 stated that they would “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.”
We need the time to get the regulation in place. The EU’s AI Act has very strong provisions on regulating AI risks. For example, the European Parliament’s draft amendment specifically calls for a fundamental rights impact assessment and disclosure of the kind of training data that has been used for generative AI.
Getting the AI momentum right also means international collaboration between like-minded countries and regions on rules and standards for AI based on democratic values and respect for human rights and the planet. For example, the EU-US Trade and Technology Council’s joint roadmap to inform approaches to AI risk management is an important step towards interoperability based on democratic values, including, among other things, the development of shared AI terminology and taxonomy.
We also need to ensure that we understand which values we are guided by, and why. As noted, there are currently many normative ethics recommendations and principles, and it is not always clear which frame of reference they draw on or in whose interest they are deployed. The UNESCO recommendation on AI, by contrast, takes the Universal Declaration of Human Rights as its core frame of reference.
Importantly, we need assessment and evaluation of how well the various governance activities align with democratic values. UNESCO recently published its Readiness Assessment Methodology to assess the implementation of its AI recommendation. Furthermore, the Center for AI and Digital Policy’s annual Index of the alignment of countries’ AI policy environments with democratic values provides an important overview.