It is a mistake to see the idea of robot rights as an eccentric sci-fi delusion. The field is genuinely riddled with dilemmas and uncertainties, but our human nature is likely to incline us to eventually grant rights to robots.
“… as robots develop more advanced artificial intelligence empowering them to think and act like humans, legal standards need to change. Companies need to develop new labor laws and social norms to protect these automated workers.”
So begins a 2018 column by the prominent American author, consultant, and professor Andrew J. Sherman. He raises the problem that while robots will largely take over productive functions in the workplace, it is not yet clear whether they will have any worker rights.
Many will find Sherman’s views immediately absurd. Why on earth waste resources on protecting machines from harm? But it only takes a few psychological and philosophical concepts to explain why an intelligent person like Sherman gets such an idea. And, more importantly, why many more will soon follow his line of thought.
In my recent book about intelligent robots, I present what I call “The Robot Rights Thesis.” It basically says that artificial intelligence in certain forms will come to seem so alive that it will be psychologically hard for most humans to exclude it from moral consideration. This will eventually lead to a demand for robot rights.
I will present a condensed version of The Robot Rights Thesis below. After that, I explain how it will put human civilization in an unfortunate situation, but also why the alternative, treating robots like slaves, is not without drawbacks either.
The Robot Rights Thesis
Step #1 – An Acute Philosophical Problem
Although there are good arguments against robots having conscious minds, we cannot reject the possibility with certainty. Within philosophy this obstacle is called “the problem of other minds”, and originally it concerned the minds of the people around us. It basically says that we have no direct access to any mind other than our own, and consequently cannot know whether other minds exist. In this context, the concept of mind is not about whether someone can show meaningful, seemingly conscious reactions to environmental stimuli, or utter apparently meaningful sentences. It is not, in other words, about whether someone seems able to think.
The problem of other minds concerns the fundamentally different question of what it feels like to be someone. What it feels like to burn your fingers. Or to see something red. Science can tell us a lot about the correlating brain processes of these experiences. But the mental states themselves – the qualia, the experience of the colors red or blue – escape this knowledge.
The exact same goes for the AI models constituting a robotic “brain”. We know how they are built and trained to recognize and reproduce natural language. We know why they can produce sentences about conscious states like color vision or feelings of pain and pleasure. But in principle, we cannot know for sure whether there are emotions going on in the processors, or whether a chatbot saying “I like the color red” has ever had any experience of red at all. We can pose arguments against it. Good arguments, even. But we cannot prove it with certainty.
The problem of other minds is arguably the most essential philosophical problem of our time, since our inability to solve it will play a huge role in what might make us grant rights to robots.
Step #2 – Our Empathy Is Hard to Ignore
Despite the problem of other minds, we often feel that we have a pretty good idea of what goes on in other people’s minds. When you see somebody bump their toe on a doorframe, you have a fairly accurate notion of their inner life (it hurts!). Those of us who are not psychopaths will mostly feel sorry for the person. A widely accepted term for this ability is empathy: the capacity to understand or feel what another person is experiencing. It is due to empathy that the inner states of the living beings around us do not appear to be something obscure, hidden, or irrelevant.
Empathy works on a neural level. Neurons in our brains mirror the inner life of others through what we see and hear, making us have similar feelings ourselves (if you take painkillers, you will feel less empathy for the person who bumped into the doorframe!).
Step #3 – Increasingly Humanlike Technology Tricks Our Brains
We empathize mostly with beings that look like ourselves: the greater the similarities, the stronger the empathy. This has already started posing problems with realistic humanlike technology such as robots and chatbots. Plenty of users of the digital dating service Replika report having deep feelings for the platform’s avatars. This works to Replika’s advantage when the company sells users the ability to sex chat with the AI. An obvious problem is that the empathy likely does not reflect anything real. Rather, it seems that our empathy tricks us into believing that certain AIs have an inner life. We are tricked by a sophisticated yet completely normal function of the human brain.
Some might argue that this “tricking” can only happen because of a lack of technological insight. Unfortunately, that is not the case. Numerous developers and entrepreneurs with deep technological insight believe that some of today’s AI models possess consciousness. Examples include the former Google engineer Blake Lemoine, entrepreneur and Twitter owner Elon Musk, and OpenAI co-founder Ilya Sutskever.
The only reason that this can happen – and the very reason why we can seriously disagree about something as controversial as whether AI has developed consciousness – is the problem of other minds. We cannot open the machine and see if it has a mind or not.
Belief in machine consciousness is still a niche position, but as the technology develops and spreads, it will become mainstream. Millions of people will soon be interacting with social robots at home, in schools, nursing homes, hospitals, and workplaces. And it is a natural reaction for humans to empathize with robots and come to think they are alive, even if our rational minds tell us they have no more inner life than a microwave oven or a Windows PC.
For further argumentation, also read: Humanlike Tech is a Wolf in Sheep’s Clothing
Step #4 – We Will Include Robots in Our Moral Landscape
Luckily, we tend to include beings we believe can feel pleasure and pain in our moral landscape. This goes for animals, but research suggests it will work the same way, or even more strongly, for humanlike technology. Thus, we will face tough psychological barriers if we attempt to exclude robots from moral consideration. An aggravating factor is that robots are not only able to look and act as if they have feelings. They are also able to describe mental states using the exact same language humans use to describe feelings.
Step #5 – We Will Grant Rights to Robots
The last step of The Robot Rights Thesis concerns the granting of a sort of natural rights to robots: something like the right not to be oppressed and abused, for example as part of a labor force, as Sherman pointed out at the beginning of this article. Whether humanity will take this step is still an open question, but looking at how rights have developed for both humans and animals, it seems likely, at least in some cultures. The demand for rights will probably be stronger in societies with a general focus on rights, while regimes that openly oppress humans are unlikely to see the point of protecting robots just because they look as if they can suffer.
Problems of Robot Rights
As should be clear by now, The Robot Rights Thesis is not the result of humanity rationally assessing and choosing its individual and societal response to increasingly humanlike robots and a growing number of human-robot interactions. It is rather the result of what almost inevitably happens when we are faced with beings that look like us: we anthropomorphize and empathize. At the same time, we are unable to know whether we are being fooled by our empathy, or whether our feelings are grounded in something real: actual conscious experiences inside other beings.
There is a real danger that these cognitive and psychological conditions will make us prioritize certain forms of unconscious artificial “life” at the expense of conscious human lives: punishing real people while protecting dead electronic circuits. A sociological experiment has already shown that people are willing to lie to other people to protect humanlike robots from harm. Not because these people are crazy or psychopathic, but because of the first steps of The Robot Rights Thesis explained above.
Problems of Denying Robot Rights
If, on the other hand, we dismiss robots as morally irrelevant, we are immediately faced with other problems (letting go, for now, of the puzzling yet poorly supported claim that AI might already be sentient).
According to the German philosopher Immanuel Kant, destroying beautiful things disturbs our morality. So, if we accept the premise that robots will soon look very much alive, treating them cruelly will disturb our morality even more.
Kant’s line of thought fits well with contemporary research. Even though our capacity for empathy is inborn, it is also partly learned behavior. Social conventions and contexts have been shown to influence our individual levels of empathy, and we can to some degree be taught to become more cynical. We might not be able to ignore our empathetic reaction towards one kind of being (a robot) while upholding it towards others (human beings). It seems, in other words, plausible that treating humanlike robots with a certain level of dignity will, in the bigger picture, make the world a better place, also for human beings. Even if we rationally believe robots to be nothing but dead electronic devices, consistently mistreating humanlike robots will dull our moral abilities.
A Solution for Now
The Robot Rights Thesis puts us in quite a dilemma. We seem bound to empathize with robots, and so they will affect us morally one way or the other. Furthermore, in a future of very sophisticated robots, we cannot know for sure whether they have become sentient. Adding to this, even if we feel positively sure that robots are as dead inside as microwave ovens, mistreating them may, as Kant would put it, uproot our own morals.
There is no simple solution to these dilemmas. But rather than trying to ignore or explain away the widespread tech-anthropomorphism, we should meet it as an unavoidable condition we must deal with. Empathy is not a bug. It is a necessary and good human trait that happens to pose problems when it comes to certain technology.
An Incomplete List
My suggestion has various elements, which I only sketch briefly here:
1. Begin to consider rules about how to treat humanlike technology, including digital humans and chatbots. Not for the sake of the tech, but for our own morality’s sake.
2. Securing the humane treatment of humanlike AI-driven entities will hopefully put a damper on any nascent robot rights movement. Put briefly: make social regulations now instead of risking being pushed into granting rights later. Rights are a dangerous step towards putting robots on par with humans, which risks punishing humans for mistreating something that is no more able to suffer than a rock.
3. It is equally important to work on how these devices are framed. Avoiding overly anthropomorphic framing will put a damper on our empathy and postpone the whole problem.
4. Another serious concern not considered here is the manipulative potential of humanlike tech. Perhaps we should consider in which contexts and towards which audiences humanlike tech should even be allowed.
5. Remember that regulation is not simple. It would be easy to say that we should not expose children or the mentally vulnerable to humanlike tech. This would, however, be a mistake, as the technology shows great potential in the treatment of certain mental illnesses in both children and adults.
6. Consider whether the EU should be a humanoid-free zone. A lot of effort is being put into the development of humanlike robots. A country like Japan is, for cultural and political reasons, generally pro humanlike tech. But perhaps we should do a thorough analysis of the pros and cons of humanlike tech in Europe rather than blindly developing and implementing it “just because we can”. If prohibition is too much, it should at least be mandatory to always declare, in a very visible way, what is not human.
For more possible solutions regarding regulation of humanlike tech also read: Blake Lemoine’s Belief in Sentient AI may soon Become the Prevailing Norm
Photo: Flickr (Android-Human Theater, 2011)