By Gry Hasselbalch, Aimee van Wynsberghe and Sebnem Yardimci-Geyikci
All aspects of human life are being transformed by digitalization, automation, artificial intelligence (AI), and algorithms. These transformations have left their mark on our natural, social, and personal environments. While digital technologies, and AI in particular, are presented as solutions to help achieve the United Nations Sustainable Development Goals or to mitigate the dangers of the climate crisis, they are themselves exacerbating environmental problems. Beyond environmental pollution, the growing use of AI technologies and applications also has damaging implications for global opportunities, the public sphere, and democratic politics and procedures. For instance, new imbalances in the information ecosystems between citizens, states, and other powerful actors create new injustices, pushing equitable access to technology even further out of reach. As such, it is time for a unified movement to address the environmental, social, and political harms of AI.
The Data Pollution and Power Group
To do just that, over the last year we brought together a group of experts from across Europe to establish the Data Pollution and Power Group, hosted by the Sustainable AI Lab of Bonn University. The group comprises technical experts, scholars from the humanities, and experts from academia and industry across the domains of healthcare, fashion, and political science. We share a concern about an increasingly visible link between the data economy, on which the recent AI optimism thrives, and various forms of environmental, social, and political damage that may all be considered forms of ‘data pollution’. In short, we are concerned with the hidden costs of the big data economy that are not being discussed: what are the environmental risks of collecting and storing data at this pace? What are the social and political concerns of racing towards the development and use of AI to ‘make sense of this data’? What kind of power dynamics lie behind the functioning of AI?
Led by Gry Hasselbalch, the Data Pollution and Power Group published a white paper that suggests applying the metaphor of data pollution when addressing the data practices of our time in policy and beyond. Through this white paper, we aim to raise awareness of data pollution by developing a shared terminology, defining the power domains it affects, and making power dynamics visible. The metaphor is meant as a conceptual tool to help society understand that the data economy is at the root of multiple forms of pollution: pollution of the natural, social, and political environments of our lives. In our natural environment, for instance, data pollution takes the form of a carbon footprint. Training AI algorithms requires energy and results in carbon emissions, cooling data centers requires vast amounts of water and land, and the physical infrastructure of our digital world will become electronic waste transported to other countries, where it harms the health, water, and agriculture of the surrounding communities. In our democracies, the use of AI techniques redefines the processes of inclusion and participation – two main pillars of democratic governance. From the manipulation of citizens on a massive scale to the strengthening of private powers at the expense of state and society, AI applications have the power to anticipate, manipulate, and control individual decisions, damaging the very foundation of democratic legitimacy. Furthermore, automated decision-making systems are reinforcing bias and generating opacity, with adverse consequences for individuals and society. In other words, highly automated systems risk creating new types of power and domination that societies are incapable of balancing – not only in authoritarian regimes but also in democratic ones.
Lack of Societal Awareness
Yet we are still far from widespread societal awareness of the problem of data pollution. We argue that the impact of data pollution across many different domains needs to be analyzed and confronted in order to develop a novel, multifaceted approach that can foresee, regulate, and correct the impact of AI and its applications on environmental, social, and political domains. One way of doing this is to include data pollution in the global sustainable development agenda and, more importantly, to create an effective and efficient international data regime involving the many different stakeholders who collaborate on how big data is handled – from its collection, storage, and management to its use, ownership, and control.
Although institutional awareness is now emerging, with more than 60 countries worldwide adopting AI policies, these initiatives are not coordinated and often remain sporadic afterthoughts or additions to other governance initiatives. The EU has taken the lead with the strongest regulatory position in AI governance, through the proposed AI Act and a commitment to ‘Trustworthy AI’. Such a governance tool may help to mitigate concerns over social and political pollution insofar as, for example, social scoring of citizens may not be allowed. What is needed now is a more comprehensive approach to the environmental damage resulting from the data economy of our time, across different fields.
Power Asymmetries
Furthermore, the main problem we identify in the white paper is the underlying power asymmetry between stakeholders, which exacerbates both the forms of data pollution discussed and the kinds of solutions presented. Those who currently define the problems (e.g., the problem of the black-box algorithm), shape the political narratives (the geopolitical “race” between world nations), identify the solutions (how AI becomes “explainable”), and determine the speed of their implementation are the ones who already have an AI infrastructural advantage. In other words, it is the countries with access to large amounts of computing power that can determine which kinds of algorithms will be made, and for what purposes, because they control the energy needed to develop AI. Moreover, globally, we see very different experiences of data pollution, and the communities most affected by it are often excluded from participating in global agenda-setting. This means that new inequalities and injustices are emerging that will reshape natural, social, and political environments.
At a time when the effects of the pandemic still linger, a dangerous war on the edge of Europe is affecting the availability of essential natural resources across the globe, and the climate crisis threatens biodiversity and access to resources in both the immediate and long-term future, where does one find the time to address data pollution as part of a global sustainable development agenda? To this, we as a group of interdisciplinary scholars respond that AI is already here, already affecting our natural, political, and social environments, and it will continue to do so for decades to come. If we do not address these concerns today, we run the risk of locking ourselves into an unsustainable digital infrastructure that will erode human rights and deplete the planet of natural resources. It is time to develop a holistic approach to the impacts of AI that goes beyond the simple dichotomy of good versus bad AI. It is time for a unified data pollution movement to face the real challenge of our times.