How To Tackle Human Rights In The Age of Data and AI

Our human rights are at stake in the age of surveillance capitalism. Private companies are not bound by human rights law, as states are, but operate in a grey zone between corporate responsibility, data ethics and human rights. A new book and a new report focus on human rights in the digital age and offer advice on how to safeguard our individual freedom.

“In our digital lives, the arm of the market extends far into the private sphere and into our most personal domains. Companies know a lot about where we are, who we are, what our interests are, and they can therefore influence our thoughts and actions, without our consent or knowledge,” says Rikke Frank Jørgensen, the editor of ‘Human Rights in The Age of Platforms‘ and senior researcher at the Danish Institute of Human Rights:

“The different services know a lot about us, and we know nothing about them and their practices. There is a fundamental asymmetry in knowledge. And knowledge is power.”

The book is published by MIT Press; its 11 chapters, written by internationally renowned researchers in law, the humanities and social science, are brought together in an account of a common global challenge.

“There are essentially two approaches to solving the problem. The first approach is for companies to voluntarily ensure the protection of rights through self-regulation, codes of conduct, cooperation agreements and the like. The second solution requires that the state regulates the practices of the platforms and secures human rights through legislation. Whereas many have relied on the first model for a long time, there is now increasing pressure for government regulation,” Rikke Frank Jørgensen says.

Concrete Advice to Investors, Companies and Governments

In October 2019, the Canadian company Element AI partnered with the Mozilla Foundation and The Rockefeller Foundation to convene a workshop on the human rights approach to AI governance, to determine what concrete actions could be taken in the short term to help ensure that respect for human rights is embedded into the design, development, and deployment of AI systems. In their report, ‘Closing the Human Rights Gap in AI Governance’, they recommend:

Investors Can:

  • Support companies that adopt research budgets, corporate governance structures and timelines for market returns that recognize the imperatives of responsible, rights-respecting AI;
  • Fund research and advocacy efforts designed to empower the public with knowledge of AI systems and their risks, in particular to human rights; and
  • Explore the possibility of establishing a new Responsible AI Fund capable of incentivizing and supporting the long term development needs associated with rights-respecting AI.

Companies Can:

  • Commit to conducting human rights due diligence (HRDD) and human rights impact assessments (HRIA) throughout the AI lifecycle;
  • Support and contribute to the UN Office of the High Commissioner for Human Rights’ B-Tech project, to ensure the UN Guiding Principles are properly adapted to the context of AI;
  • Prioritize research on the design and technological imperatives of rights-respecting AI systems, with a focus on transparency, explainability and accountability; and
  • Collaborate with partners from academic communities, civil society, international organizations and governments to help them understand the risks associated with AI systems and work together to devise appropriate governance mechanisms and safeguards.

Governments Can:

  • Support a phased approach to requiring HRDD and HRIA in the public and private sectors, beginning with developing model frameworks and sector-specific codes of conduct that may be audited;
  • Establish a new, independent Centre of Expertise on AI (as more fully described above);
  • Disincentivize irresponsible technology deployment through regulation, but also incentivize research on human rights by design in the private sector, with an emphasis on transparency, explainability and accountability, through tailored direct spending programs, or other financial incentives;
  • Incentivize research and development of the technological underpinnings of data trusts through new spending and pilot programs;
  • Support the Canada-France led Global Partnership on AI, which could serve as a forum for international coordination of research and best practices developed by national Centres of Expertise on AI; and,
  • Partner with the Office of the UN High Commissioner for Human Rights to host national consultations for the B-Tech Project, to support the application of the UN Guiding Principles to the context of AI.

Read more about ‘Human Rights in The Age of Platforms‘

Get the full report from Element AI here