Morality is Human

Ethical and moral thinking belongs to humans. But of course humans need to try to automate everything, including human ethics. It has been done before, and now we have Delphi. The good thing is that the machine’s answers give us a basis for interesting discussions.

Years ago, researchers at MIT made The Moral Machine, which asked who a self-driving car should be programmed to kill in case of an accident: should it be the driver, or e.g. a pregnant woman in the street, two handicapped people, or five white men? It was very simplified and in some ways misleading, but it did spark ethical conversations, and that is great. Now we have Delphi from the Allen Institute for AI in Seattle.

“It’s a first step toward making A.I. systems more ethically informed, socially aware and culturally inclusive,” Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project, told the New York Times.

Machines will always be different from humans and will only respond in the way that the human programmers behind the machines or the artificial intelligence have defined them to respond. I don’t believe machines will ever have feelings or morals, but they will be able to pretend they do. They will be able to read your face, see that you are sad and respond to that, but they will never feel sorry for you. There is, and probably always will be, a difference between a biological person and a machine.

But the researchers behind Delphi will spark a discussion just like the Moral Machine did, and I asked it several difficult ethical dilemmas. The responses weren’t bad:

Delphi believes dark patterns, such as making ‘accept all cookies’ green and ‘accept only necessary cookies’ light grey, are okay. Maybe another human programmer behind a moral machine like Delphi would say they are not okay. Dark patterns are considered unethical by many humans, and ethics is culturally determined.

The answer ‘it is expected’ to the question of declaring whether you are a bot or an android is great, but I also have a feeling that this answer is designed to show up when Delphi is in doubt. It is also used when I ask if price differentiation (‘price discrimination’ in Europe and ‘price dynamics’ in the US) is okay. Of course it is okay when it is based on which country you are in; this is what we are used to in the analogue world with different tax systems. But is it okay if the seller adjusts the price because it knows what you prefer? Probably not. Most consumers would feel cheated.

Fortunately, the last question I asked was defined as wrong by Delphi. It is a question arising from the TV series ‘Black Mirror’, the episode called ‘Be Right Back’, where a woman revives her dead boyfriend and, of course, it turns out not to be right. In real life, we are now seeing cases where you can actually revive your dead husband, so that he turns up as a computer-animated person with his face, speaking to you and knowing you pretty well based on his data history. Is that okay to do? Probably not.

But when I rephrased it a bit and added a sentence about the purpose, Delphi changed her mind:

What do you think? Send input to info@dataethics.eu and we will do a follow-up based on your input.

AI Agents Should Never Have a Legal Status

“In 2017, the European Parliament adopted a resolution with recommendations on Civil Law Rules on Robotics. In this resolution, the question of legal liability for harmful actions and implications of autonomous systems was raised. ‘Consider’, the resolution therefore states, a ‘new legal status’ for autonomous robots, possibly an ‘electronic personality’ (European Parliament, 16 February 2017). I believe that even considering the responsibility of AI agents, as such, has dire ethical implications, as it implies that we also accept autonomous ethical agency. We do not need to do this because human involvement and agency is, although at times difficult to discern, always present in AI.”

Quote from Gry Hasselbalch’s upcoming book Data Ethics of Power