Report. As AI technologies evolve and become integrated into our society, the possible dangers connected with AI are becoming more real than ever. These risks include the emergence of surveillance states, the creation of unfair and discriminatory systems and societies, threats to individual privacy and freedom of speech, and an overall lack of transparency and accountability. We need a conversation about how to govern AI technologies and about which processes and precautions companies must adopt when developing AI.
Darrell M. West, founding director of the Center for Technology Innovation at the Brookings Institution, offers six recommendations in “The role of corporations in addressing AI’s ethical dilemmas” for how companies can act responsibly when creating AI technologies.
1. Data Ethicists
One of the first things companies can do is hire ethicists to evaluate the decisions made by management and software developers. They function as watchdogs, raising awareness of possible ethical risks and dangers before a system is built and taken into use.
2. Code of AI ethics
Every team involved in creating AI must have a public code of ethics stating its principles, values, and processes. This makes it clear to all stakeholders how concerns about AI are being addressed and handled.
3. AI Review Board
Companies must create an AI review board to evaluate the AI technologies being developed. The board must meet regularly, consist of members representing different stakeholders, and be kept informed about particular products under development, government contracts, and the procedures involved in creating AI.
4. AI Audit Trail
To be able to investigate and correct biases and unfairness, it is necessary to keep an AI audit trail that shows how and why coding and design decisions were made; a minimal sketch of what such a trail could look like follows below.
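Purely as an illustration, and not a format prescribed by West's report, the sketch below shows one way an audit trail might be kept in practice: an append-only log where every automated decision is stored together with the model version, the inputs used, and a human-readable rationale. All names here (DecisionRecord, record_decision, the loan-screening example) are hypothetical.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in the audit trail (all fields illustrative)."""
    record_id: str
    timestamp: str
    model_version: str
    inputs: dict      # the features the system saw
    output: str       # the decision that was made
    rationale: str    # why this decision was reached

def record_decision(model_version: str, inputs: dict, output: str,
                    rationale: str,
                    log_path: str = "audit_trail.jsonl") -> DecisionRecord:
    """Append one decision to an append-only JSON Lines log,
    so biased outcomes can later be traced back to the data
    and code that produced them."""
    record = DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs=inputs,
        output=output,
        rationale=rationale,
    )
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

# Hypothetical usage: log a loan-screening decision for later review.
record_decision(
    model_version="loan-screener-1.3.0",
    inputs={"income": 42000, "employment_years": 3},
    output="rejected",
    rationale="Score 0.41 below approval threshold 0.50",
)
```

Because the log is append-only and records the model version alongside each decision, auditors can reconstruct which version of the system produced a contested outcome and compare its behavior across groups of applicants.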
5. AI Training Programs
Software developers need training in recognizing the dangers and ethical risks of AI. When they understand how their design and coding decisions can have direct consequences for society and for individuals, they are better equipped to make responsible choices.
6. Means for Remediation
In the event that AI causes harm, whether to individuals or to organizations, companies need to have procedures in place for how to respond, so that they can minimize or undo the damage.
Signe Agerskov is a member of the European Group on Blockchain Ethics (EGBE) and researches blockchain ethics at the European Blockchain Center.