Analysis. In the discussion of decision-making in autonomous cars, we must be aware of the data these systems have access to. On what basis are autonomous cars going to determine humans' right to life? And are we willing to take personal information into account when answering this question? If so, the systems will discriminate, and we are accepting that human life does not have a value in itself.
Self-driving cars on our roads will sooner or later be a reality, but questions of responsibility and priorities are still unanswered. As postdoc Edmond Awad and his research group from MIT formulate it in their paper “The Moral Machine”:
“Car manufacturers and policymakers are currently struggling with these moral dilemmas, in large part because they cannot be solved by any simple normative ethical principles […].”
Awad’s research group set up The Moral Machine Experiment, an online platform designed to explore the moral dilemmas we may face in traffic. The aim of the platform is to quantify societal expectations about the ethical principles that should guide machine behaviour. By gathering data about human preferences, more than 40 million decisions from millions of people all over the world, the researchers intend to reduce these preferences to machine behaviour.
I find this reduction of ethical questions problematic in more than one way, and here is why:
1. Ethical problems are not problems with a single solution
The so-called Trolley Problem was first formulated in 1905 and has ever since been a topic of philosophical and ethical discussion. The Trolley Problem is an ethical thought experiment in which you are asked the following (from Wikipedia):
You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:
Do nothing and allow the trolley to kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option?
There are several versions of this dilemma: What if the one is a young person and the five are all old? Or criminals? What if your closest relatives are involved, and what if one of them is a doctor who is close to finding a cure for cancer? And the newest version: The Moral Machine from MIT.
Within a utilitarian world-view, the Trolley Problem is quite easy to handle: How do we maximize the total happiness and benefit in the world?
Kill one and spare five.
Kill the old and spare the young.
Kill the criminal jay-walker and spare the law-abiding.
Kill the sick and spare the healthy.
Kill the unemployed and spare the business owner who has multiple employees.
Kill the lonely and depressed and spare the happy and beloved.
See what happened there?
If we accept this approach and focus on savings and profit maximization, we can eliminate the ethical considerations and reduce the problem to calculations. But then we are also accepting that human life does not have a value in itself.
The point I want to emphasise is that ethical dilemmas, by definition, are unsolvable. That does not mean that we should not ask, discuss and reflect upon them – indeed we should. But we should not expect to arrive at any deterministic answer. Ethical dilemmas are dilemmas: questions to be considered, questions we should ask each other and reflect upon. A computer does not reflect – it calculates. Thus, once a dilemma is incorporated into a computer it is no longer an ethical dilemma – it is a calculation.
2. The Thought Experiment is no longer just an experiment
The Trolley Problem was originally formulated as a hypothetical thought experiment used to explore and compare moral systems. But with today’s massive data collection and register linking, the hypothetical part of the dilemma becomes realizable.
The respondents in the Moral Machine are asked questions like: “Would you rather run over a sick man than a pregnant woman?”
Not only does this type of discrimination violate the principle of equality, which is part of the UN’s human rights. On what basis are we going to determine a human’s right to life, and which information is taken into account when answering this question?
When the researchers behind the Moral Machine ask these types of questions they are, more or less explicitly, suggesting that the algorithms in autonomous vehicles have access to the same kind of data as their respondents are presented with.
The interesting question for me is not “Would you rather run over a sick man than a pregnant woman?” The interesting question is rather: How does the system know that he is sick and she is pregnant? And what type of data should these systems have access to?
And this leads us to my third, and last point:
3. We need to ask questions of those formulating the questions
If we accept the MIT researchers’ approach to ethics as something that can be reduced to calculations, without questioning this approach and the understanding behind it, we are leaving human morality and ethics behind.
Our ethics are defined by the questions we are asking.
I would like to encourage more questions to be asked: How do we want this to be implemented? What consequences would implementation have? How does this affect our view of the value of human life?
A non-discriminatory solution
The goal of The Moral Machine is to make autonomous vehicles capable of decision-making in traffic, and in the following I will try to suggest another solution.
The questions set up by the research group are highly extreme: either everyone in the car will die, or everyone you hit on the street will die. Real-life cars are equipped with airbags, while pedestrians are soft and vulnerable road users. With the computational power in modern cars, it would not be unrealistic to calculate who has the highest probability of surviving. These calculations could include objective factors like speed, braking distance, angles, and the presence of safety equipment such as airbags.
The car should always take the least deadly or harmful outcome, and I will assume it is very rare that two people will get the same survival score.
And in cases where the probability of death is the same, I suggest that the algorithm’s decision is based on randomization, as in the sketch below. A random decision is based on equality, and should therefore also include the driver of the car. If the driver is always spared, we will end up in a society where it is safer to own a car than to walk or bike. Rich people can buy a car and be safe in traffic, while low-income people are left with the far more dangerous alternative of being pedestrians.
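To make this proposal concrete, here is a minimal sketch in Python of the decision rule described above, under my own assumptions: every possible manoeuvre gets a per-person survival probability estimated only from objective factors, the car picks the outcome with the most expected survivors, and exact ties are broken by a uniformly random choice that treats the driver like everyone else. All names and numbers are hypothetical illustrations, not part of the Moral Machine.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the proposed rule: choose the least deadly outcome
# based on objective factors only, and randomize exact ties.

@dataclass
class Outcome:
    """One possible manoeuvre and the survival chances of everyone involved."""
    description: str
    # One estimated survival probability per person (driver and pedestrians),
    # derived only from objective factors such as speed, braking distance,
    # impact angle and airbags, never from personal data like age or health.
    survival_probabilities: list

    def expected_survivors(self) -> float:
        return sum(self.survival_probabilities)


def choose_outcome(outcomes, tolerance=1e-6):
    """Pick the outcome with the most expected survivors; break ties randomly."""
    best = max(o.expected_survivors() for o in outcomes)
    # Keep every outcome whose expected number of survivors is (practically) maximal.
    candidates = [o for o in outcomes if best - o.expected_survivors() <= tolerance]
    # A uniformly random choice among the tied outcomes treats everyone,
    # including the driver, equally.
    return random.choice(candidates)


# Illustrative numbers only: [driver, pedestrian A, pedestrian B]
swerve = Outcome("swerve into the barrier", [0.7, 1.0, 1.0])
brake = Outcome("brake in the lane", [1.0, 0.3, 0.3])
print(choose_outcome([swerve, brake]).description)  # prints "swerve into the barrier"
```

The point of the tie-breaking step is exactly the equality argument above: when the objective estimates cannot separate the outcomes, no personal attribute is allowed to tip the scale.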
Traffic is dangerous, and even though autonomous cars are safer than tired, slow-responding, unaware humans, there will always be accidents. But in a system with access to personal information, any non-random decision between two lives will always be discriminatory.