
Debating Ethics: We Need A ‘Manual Override’ Button

We are invited to participate in the 40th International Conference of Data Protection and Privacy Commissioners, Debating Ethics. We are part of a Creative Café involving ‘specially invited knowledge holders’ discussing ‘How To Move Forward In Digital Ethics?’ Below are some of our thoughts on that. Do send us yours at or on Twitter @DataEthicsEU

What are the big issues in digital or data ethics going forward? One answer is how to keep humans at the centre and safeguard our democracy. Or, put differently, how to keep human control. It is a worry many share, including the historian and author of ‘Homo Deus’, Yuval Noah Harari, who spoke about it at the World Economic Forum in Davos. He worries that a tiny elite will design a new race of superhumans and take control over the rest of humanity. It is a scary scenario, and some argue we should not focus on it when there is so much potential in technologies such as artificial intelligence to make our lives much better. But these discussions must be had in due course.

The big question is thus: how do we create a human-centred data democracy and avoid both the data dictatorship we are seeing in China and the data monopoly society we are seeing in the US, where the gap between rich and poor is only increasing? How do we put humans above governmental and corporate interests?

The French Data Protection Authority, CNIL, has already done some great work with its report ‘How Can Humans Keep the Upper Hand?’, describing six ethical matters and questions raised by artificial intelligence:

  • Autonomous machines: Will we release ourselves from the deep commitment inherent in taking decisions, judging and taking responsibility? How can we ensure that algorithmic systems do not water down responsibilities?
  • Biases, discrimination and exclusion: How should we face this challenge?
  • Algorithmic profiling: How do we balance personalisation against collective benefits?
  • Preventing massive files while enhancing AI: How do we balance individuals’ rights over their personal data against the potential of AI?
  • Quality, quantity, relevance – the challenge of data selection: How do we remain critical instead of putting immoderate trust in the machine?
  • Human identity in the age of artificial intelligence: Machines are getting increasingly autonomous. How should we view humanoid robots, which are likely to create substantial emotional responses in individuals?

The French authority puts forward six recommendations:

  1. Fostering education of all players
  2. Making algorithmic systems comprehensible
  3. Improving algorithmic system’s design to prevent the “black box” effect
  4. Creating a national platform in order to audit algorithms
  5. Increasing incentives for research on ethical AI
  6. Strengthening ethics in companies

All six recommendations are relevant and necessary – the fourth could be a European rather than a national platform – and they fit into our data ethics principles and guidelines. There are, however, even more discussions to be had.

  1. Analogue function. Should we decide that all IoT gadgets must function without being online? When you buy a coffee machine, shouldn’t it work without being online? Is this a realistic demand?
  2. Help foster and preserve a market of analogue products. We can also decide that we must be able to buy products and services that are analogue, and that they should not be affordable only for the rich. Can we buy a coffee machine in the future that does not track us via chips or sensors? Or get a car where we roll up the windows and lock the doors by hand?
  3. Manual override button. If we allow the development of new products and services that depend on the internet, we should at least demand an embedded ‘manual override button’, so that a human being can stop the machine if it runs amok. If we had understood what Google Search was doing when it started personalising searches, we could also have demanded a red button that neutralised the search, so results did not depend on who you are and what you have done in the past.
  4. Privacy by design. Though the European General Data Protection Regulation (GDPR) promotes the use of Privacy by Design, it is not mandatory, and most services and products are still not privacy by design: you still have to opt out of tracking. We need to enforce at least privacy by default (one of the criteria of Privacy by Design) on all gadgets and services in the future, because the current opt-out model demands that every individual think twice about their own responsibilities and actions regarding their data.
  5. Is it need-to-have or nice-to-have? We must ask ourselves whether everything really needs to be online and data-driven. Do you really need a remote control for your coffee machine, or can you walk to it yourself and wait those few minutes while the coffee is brewing? Do you need a device in the baby room that can play a lullaby for your baby if it wakes up and cries? If we just listen to the tech industry producing new convenient products, we will end up as fat couch potatoes with a remote control in our hands.
  6. Please send us more input… or on Twitter @DataEthicsEU
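To make the demands in points 1, 3 and 4 concrete, here is a toy sketch of what an IoT gadget designed along these lines could look like. This is purely illustrative: the `SmartCoffeeMachine` class, its settings and its override flag are hypothetical, not a real device API.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Privacy by default: every form of tracking starts switched OFF;
    # the user must explicitly opt in.
    usage_tracking: bool = False
    cloud_sync: bool = False
    personalised_ads: bool = False

@dataclass
class SmartCoffeeMachine:
    """Hypothetical IoT gadget honouring analogue function,
    manual override and privacy by default."""
    settings: PrivacySettings = field(default_factory=PrivacySettings)
    online: bool = False           # analogue function: works offline by default
    manual_override: bool = False  # the 'red button'

    def press_manual_override(self) -> None:
        # A human can always stop the connected behaviour.
        self.manual_override = True
        self.online = False

    def brew(self) -> str:
        # The core function never depends on connectivity or tracking.
        return "coffee"

machine = SmartCoffeeMachine()
assert machine.brew() == "coffee"                 # works offline
assert machine.settings.usage_tracking is False   # opt-in, not opt-out
machine.online = True
machine.press_manual_override()
assert machine.online is False                    # the human override wins
```

The point of the sketch is the defaults: connectivity and tracking are features the user switches on, not burdens the user must switch off.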