The second mini report of the Data Pollution & Power Initiative explores a sustainable AI future, asking: what are we striving towards, and what are the obstacles to achieving it?
In the report, Signe Daugbjerg, leader of the AI-HTA Unit at the Catholic University of the Sacred Heart in Rome, says, for example:
“When talking about sustainable AI, from my specific research field and experience with implementing and assessing AI-supported technologies in healthcare, I see sustainable AI as technologies that help maintain or restore health while minimizing negative impacts on the surroundings, both social and environmental. Implementing AI-supported technologies in healthcare can bring significant changes and impacts not only for patients but for the structure of the health system and a variety of stakeholders, including caregivers, professionals, organizations, and society and the surrounding environment as a whole. It is therefore important to make a holistic analysis of the implications of uptake in a real-world context, including the effects on all stakeholders and how it might transform the health system and society.
We need to be aware of both the intended and possible unintended consequences of using an AI technology compared to existing alternatives before implementing it. For example, does the AI solution improve or worsen accessibility to care? Does it improve well-being for the entire population, or are some minority groups excluded? What effect does the technology have on ethical, social, cultural, environmental, legal, and economic aspects? In order to do this, it is important to ensure stakeholder engagement at an early phase to guide the AI technology's development.
Another important area to address is the lack of proper digital infrastructure in large parts of the world, where Internet connectivity and the ability to collect and share electronic data, such as electronic health records, are limited. These heterogeneities are seen across borders, nationally, and at a regional level, and they can create inequality in access to high-quality treatment supported by the use of AI. Furthermore, geographical differences in the sources of data used for training AI technologies can lead to bias in data and under-representation of populations located in areas with incomplete digital infrastructure.
The last thing I would like to point out is the importance of globally shared data protection regulations. To effectively train AI algorithms in healthcare, huge amounts of data must be gathered, which often comes at the expense of patient privacy. We risk that future AI tools will be trained on data primarily from populations and patients living in countries with less strict data privacy regulations, which in turn can lead to bias in algorithms and results.”
Read the entire report on the Data Pollution & Power website (scroll down) here.
About the Data Pollution & Power Initiative
The Data Pollution & Power (DPP) Initiative explores the power dynamics that shape the data pollution of AI across the UN Sustainable Development Goals. We examine the data of AI as a human and natural resource in data ecosystems of power, and we consider actions and governance approaches that are intrinsically interrelated in systems of power and interests. The initiative is set up at Bonn University's Institute for Science and Ethics' Sustainable AI Lab.
The DPP Group
The DPP Group is a cross-disciplinary group with diverse expertise and interests that cut across several of the UN Sustainable Development Goals. The core aim of the group is to debate, scope out, map, and explore the interrelation of the data pollution of AI holistically across the goals.