What does ‘Transparent’ AI Mean?

Over the past few years, many companies, institutions, governments and ethicists have proposed their own principles for developing Artificial Intelligence (AI) ethically. Most of these guidelines include principles about fairness, accountability and transparency, but there is a wide range of interpretations of what these principles entail. Fenna Woudstra researched the element 'transparency of AI'. In this blog we summarise her findings.

What is transparency?

Because of the growing use of Artificial Intelligence (AI) in many different fields, problems like discrimination caused by automated decision-making algorithms are becoming more pressing. The underlying problem is the unknown existence of biases, due to the opacity of these systems.[1] The logical response to reduce the problems caused by opacity is a greater demand for transparency. Transparency is essentially an information exchange between a subject and an object, in which the subject receives information about the performance of a system or organisation for which the object is responsible.[2]

However, putting transparency into practice is easier said than done. Firstly, not everything can simply be made publicly available; other important values like privacy, national security and corporate secrecy have to be taken into account.[3] Secondly, even after considering these points, it remains unclear how transparency should be realised: what information, about which aspects of the AI system, should be disclosed in order to be transparent?

This is not about transparency reports

When the term 'transparency' is used in combination with data, one might also think of 'transparency reports'. These are periodic reports in which a (big tech) company discloses when, and under what authority, governments have requested (a certain type of) data it has collected. The research described here is not about that kind of transparency; it focuses solely on transparency during the development and implementation of AI.

Results of comparing 18 ethical AI guidelines

Eighteen guidelines for ethical AI, among them the Data Ethics Principles, were analysed for their principles and practical implementations of transparency.

As a result, a new framework was created that consists of nine main topics: environmental costs, employment, business model, user's rights, data, algorithms, auditability, trustworthiness and openness (see Table 1). Each topic is fleshed out with multiple practical specifications extracted from the existing guidelines. Together, the topics cover the whole process of developing, using and reviewing AI systems.

Table 1. Framework for transparent AI
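
To make the framework concrete, here is a minimal sketch of how an organisation might encode the nine topics as a machine-readable checklist for tracking its own coverage. Only the topic names come from Table 1; the data structure and the helper function are illustrative assumptions, not part of the original research.

```python
# Minimal sketch: the nine framework topics as a machine-readable checklist.
# Only the topic names come from Table 1; everything else is illustrative.

TRANSPARENCY_TOPICS = [
    "environmental costs",
    "employment",
    "business model",
    "user's rights",
    "data",
    "algorithms",
    "auditability",
    "trustworthiness",
    "openness",
]

def missing_topics(statement: dict[str, str]) -> list[str]:
    """Return the framework topics a transparency statement does not yet address."""
    return [topic for topic in TRANSPARENCY_TOPICS if topic not in statement]
```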

Some critical notes

Perhaps unexpectedly, the principles of transparency address not only algorithmic transparency or the explainability of the AI system, but also the transparency of the company itself. This is most notable in the first two topics, 'environmental costs' and 'employment'. They show that how a system is made, at what cost and under which circumstances, is also part of the development of a system. Being transparent about these aspects may well result in greater trust and acceptance; as Dignum notes, 'trust in the system will improve if we can ensure openness of affairs in all that is related to the system. As such, transparency is also about being explicit and open about choices and decisions concerning data sources and development processes'.[4]

Furthermore, it should be noted that not all existing guidelines have been analysed, nor can we be sure that the existing guidelines already cover all the important principles, given the novelty of this subject. Additions to this framework can therefore be made where important principles are still missing.

That is why this framework is not meant to provide strict rules; rather, it aims to provide an overview of the possible topics to be transparent about. Organisations can decide which aspects are relevant to their system and organisation. The framework also tries to show that being transparent does not necessarily mean that everything must be made publicly available: explaining why a principle cannot be fulfilled is also a form of transparency. Not publishing the data that was used can be perfectly ethical if it is done to protect people's privacy or national security, as Stiglitz noted.[5]
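
To illustrate that last point, here is a hedged sketch of a transparency statement in which a topic may be marked 'withheld' as long as a reason is recorded, so the non-disclosure itself remains transparent. All names and values below are invented for illustration.

```python
# Hypothetical transparency statement: each topic is either disclosed or
# withheld, but a withheld topic must carry an explanation. The entries
# below are invented for illustration only.

statement = {
    "algorithms": ("disclosed", "A model card describing the decision logic is published."),
    "data": ("withheld", "The training data contains personal records; publishing it would violate people's privacy."),
    "auditability": ("disclosed", "An annual third-party audit report is available on request."),
}

for topic, (status, explanation) in statement.items():
    print(f"{topic}: {status} - {explanation}")
```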

Finally, this framework shows that there are many aspects to be transparent about and many ways to achieve transparency. So even when the inner workings of an algorithm are highly complex, there are still many aspects to be transparent about that could help reveal or prevent biases in the system. There is much more to an AI system than just a black box.

This article was written by Fenna Woudstra and Piek Visser-Knijff. Fenna did research on the transparency of AI as part of her internship for Filosofie in actie. You can read the full research paper here.

References

[1] O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. See also: Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.

[2] Meijer, A. (2013). Understanding the complex dynamics of transparency. Public Administration Review, 73(3), 429-439.

[3] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.

[4] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.

[5] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.