The Norwegian Data Protection Agency has introduced a regulatory sandbox following a British model. The sandbox establishes a project environment for AI, where private and public companies can get free guidance on personal data protection.
The Norwegian sandbox aims to stimulate innovation within AI that is ethical and responsible from a data protection perspective. In this way, companies gain deeper insight into personal data protection and the opportunity to develop products that can benefit not only the company but also individuals, society, and the Data Protection Agency.
The gain for individuals and society is the assurance that AI is being developed and applied within the requirements of the law and with respect for fundamental rights.
The Data Protection Agency sees it as a gain in itself that cooperation with companies gives it increased insight into the use of AI in practice, thereby strengthening its guidance, case management, supervision and policy development.
What is Responsible AI?
The Norwegian sandbox works according to principles of responsible AI, inspired by the recommendations of the EU High-Level Expert Group on AI in its guidelines for trustworthy AI. Responsible AI must therefore be developed based on three main principles:
1. Lawful – comply with applicable legislation
At the core are the regulations in the areas overseen by the Data Protection Agency, i.e. the GDPR, the Norwegian Personal Data Protection Act, the Police Register Act, the Health Register Act, the Health Research Act, the Patient Records Act, and regulations on camera surveillance and control of e-mails at work. Other authorities may, however, cooperate if a sandbox project concerns other areas, such as the Administrative Act, the Public Disclosure Act or another relevant regulatory framework.
A crucial point is that no derogations from the legislation are granted while a sandbox project is running. However, the Data Protection Agency does not intend to take corrective action during the development process.
2. Ethical – adhere to ethical principles and values.
These include fairness and transparency, which are both ethical principles and are reflected in Article 5 of the GDPR.
A key point of the transparency requirement is that the individual understands that a machine, rather than a case worker, has made a decision or supported a process.
AI-based decisions must also be explainable to the individual, i.e. he or she must be able to see why a decision produced the result it did.
In addition, a traceability requirement ensures that the AI solution can be audited and its decisions explained after the fact.
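The transparency, explainability and traceability points above can be illustrated with a minimal sketch. The model, its feature names and weights are purely hypothetical assumptions for illustration, not an actual sandbox solution: a simple linear score is broken down into per-feature contributions (explainability), and every decision produces an audit record noting that a machine made it (transparency, traceability).

```python
import json
from datetime import datetime, timezone

# Hypothetical, simplified linear model: the weights and feature
# names below are illustrative assumptions only.
WEIGHTS = {"income": 0.4, "debt": -0.7, "payment_history": 0.5}
THRESHOLD = 0.0

def decide_and_explain(features):
    """Return a decision, a per-feature explanation, and an audit record."""
    # Explainability: each feature's contribution to the final score.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Traceability: log inputs, contributions, and who decided.
    audit_record = json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,
        "contributions": contributions,
        "score": score,
        "decision": decision,
        "decided_by": "machine",  # transparency: not a case worker
    })
    return decision, contributions, audit_record

decision, contributions, record = decide_and_explain(
    {"income": 1.0, "debt": 0.5, "payment_history": 1.0}
)
```

The per-feature contributions let the individual see why the decision produced the result it did, and the stored audit record is what a later audit of the solution would draw on.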
The ethical assessment helps companies ask whether an AI solution is the ethically correct choice, even in situations where the solution is legal.
3. Security – from a technical and societal perspective
The requirement of security implies that the AI solution must be technically robust. It shall prevent and minimise the risk of unexpected and unintended harm and contribute to the system functioning as planned. Technical robustness is also important for the accuracy and reliability of the AI solution and for how it can be verified.
The UK Information Commissioner's Office (ICO) has good experience with its sandbox and has completed four projects since May 2019. The projects cover AI used to improve student support services in higher education and research, automation of the passenger journey at Heathrow Airport, mitigating bias in biometric identity verification, and investigating financial fraud.