Why trust in AI is not enough

By Gry Hasselbalch and Sille Obelitz Søe

How can we trust AI? This is one of the most pressing questions we face at the moment, given the rapid pace of development in Artificial Intelligence (AI) and machine learning algorithms. In the wake of a growing number of data leaks, mounting news stories about algorithmic unfairness, and the potential for harm caused by automated decision-making – spiced up with sci-fi stories about unruly, conscious, artificially intelligent robots – ethics for data and AI is having its glory days.

We need trust in AI, machine learning, algorithms, and technology in general. Trust is profit, it is insurance, it is society's unquestioning acceptance of all the behind-the-scenes business and state innovation practices. Businesses are creating codes of conduct on trusted AI, and states are doing the same. Mark Zuckerberg is primarily just treading water to “fix” various kinds of “trust breaches”.

“Just trust us!” seems to be the motto of every big state or big tech company at the moment, emphasizing a personal, trust-based relationship between a company and its customers, a state and its citizens. And if we trust these systems, services, and technologies, then the prospects for our future are bright. We are told that AI and machine learning will solve all our societal problems in innovative, smart, and efficient ways. In AI We Trust.

But what if people don’t trust? What if the stories we are presented with almost daily about companies’ (and states’) “wrecking ball ethics” have shaken the core of our trust in the promises of digital innovation?

Trust is a philosophical concept. It is an attitude we have towards others – an attitude just like hope, belief, or desire. Trust is important, but it is also risky. It is important because it lies at the foundation of our social interactions, and it is risky because it comes with the potential for betrayal. To trust someone requires that we dare to be vulnerable to others and that we think well of those others. We have to believe that those we trust will do what we trust them to do, while at the same time daring to risk that they will betray our trust and potentially hurt us. Thus, in discussing trust, the question is when trust is warranted – that is, under which circumstances our trust in someone is well-grounded or justified.

But what if trust is not warranted? What if distrust is the new sentiment? And what then if we choose to see distrust as a constructive sentiment?

While people’s trust in a technological development or system is pivotal for its implementation and standardization in society, distrust can also be seen as a healthy response, and might even be a natural element of technological evolution. Of course, sometimes distrust is irrational – a mere fear of the new and unknown – but distrust is also the response through which malpractices and malfunctions are questioned and put under public scrutiny.

Over the last couple of years, distrust has created momentum for a host of innovative ideas and initiatives – all related to the architecture and future of the Internet and technological development, with an emphasis on data ethics. Right now, we are indeed at a moment in time where trust has to be rebuilt. But not in a superficial way.

Practices have to change. Businesses and developers of digital technologies must be held accountable and must take responsibility. We need a practical implementation and enforcement of values, ethics, human rights, data protection (and antitrust) laws and conventions in the very innovation and design phase of AI development. We need to be assured that the AI systems that will inevitably become an ever-increasing part of our daily lives are truly trustworthy. If AI systems and machine learning algorithms are making decisions for us, then we need to be able to trust that these decisions, on the one hand, are not just arbitrary outcomes of information processing and, on the other, that they benefit us as citizens and individuals, not just one company or state interest.

But trust alone does not get us far. That we trust a technology does not necessarily mean that any ethics is embedded in the practices behind, and the design of, a technological system such as AI. The term “trust” on its own places the emphasis on the receiver – on the perception of people and citizens. As already mentioned, trust is simply an attitude, just like belief or hope.

Thus, we might trust a developer of AI simply because they excel in public relations. Or because the AI is built into a technological gadget that we have been told we can trust – one that is smart and conveniently embedded in our everyday life and practices without question. People trusted Google’s DeepMind with their health data, and they trusted Facebook with their data. However, the developers that we trust may not have the business and innovation practices, the design and security of services, or the business model and company spirit to actually be entitled to our trust. They might be “trusted”, but not trustworthy. Many examples of such practices have surfaced in recent years.

In contrast to trust, trustworthiness is a property. It is a property that people can have – and, in relation to AI, this property might also be delegated to technologies and systems. To have this specific property – to be trustworthy – requires that one is competent in, and committed to, doing that which one is trusted to do. If people and systems are trustworthy, then our trust in them is warranted – it is well-grounded and justified. Thus, the risk of trusting is diminished. Trustworthy agents are entitled to our trust because they are unlikely to betray us. They will actually be committed to doing what we trust them to do. This is why we should shift the emphasis from “trust” to “trustworthiness” in our approaches to AI. AI must be trustworthy, and so must the developers and companies behind it. Only then can we start to rebuild trust. With a shift to trustworthiness, the emphasis is put on the developers, the companies, the practices, and the business models.

About the Authors:

Sille Obelitz Søe, PhD
Postdoc in Philosophy of Information
Department of Information Studies,
University of Copenhagen

Gry Hasselbalch 

Cofounder of and member of the European Commission’s High-Level Expert Group on AI, which published ethics guidelines and policy recommendations on Trustworthy AI in summer 2019.