
Social Robots Pose Problems Beyond Ethics

As we make our societies dependent on human-like robots, difficult dilemmas loom. Solving them might demand a variety of responses – to wait and see is not one of them.

Elderly care is an area where many see huge potential in social robots. They could, for instance, replace human staff as the average age of the world population rises dramatically. Numerous companies are already supplying the sector with conversational robots, and the trend has only just begun. What sounded like sci-fi a few years ago is close to becoming commonplace.

We should, however, not let our tech enthusiasm run away with us. Instead, we should ask all the curious and critical questions we seemed to forget 18 years ago, when we naively thought Facebook was the beginning of a much-needed democratization of the internet and perhaps even the beginning of the end of autocratic regimes. As we now know, quite the opposite happened, so we must do better this time, even if the problems we face look rather strange from a distance.

Our Brains Activate a Moral Stance Towards Robots
When we meet robots that look and behave like humans, our brains intuitively interpret them as being alive and having both emotions and free will. This is a basic condition that few of us can escape. It is not enough that our rational mind and common sense tell us a human-like robot probably shares more characteristics with a microwave oven than with a human being. If a robot looks nice, it is natural for us to feel empathy for it. Our brain follows the same path as when we meet a living human being or a cute animal and activates a moral stance.

The social robots used in the elderly sector today are mainly able to entertain, hold conversations, and help with practical information. This will change rapidly as the technology improves, and before too long social robots will have human-like faces and bodies and be able to walk around like living persons. Then imagine being an old person living in a nursing home. Suddenly you hear screaming, shouting, and beating coming from next door. The neighbor's care robot is delivered as a complete human simulation with the ability to express emotions and independent opinions, but the conversation has come to a head, and now old Marcus is beating the robot with a chair.

How will you react in this situation?

Morality at Different Levels
Maybe you will try restraining your empathy using reason: although the robot looks alive on the outside, it hardly feels pain on the inside. After all, it's just a machine. This approach will clash with our deep-rooted empathy, but research indicates that it has some potential. Still, even if it succeeds, new questions immediately arise. Because should Marcus still be allowed to beat his human-like robot? One question is what the robot feels or doesn't feel. Another is what the violence does to you. And to Marcus. What does it do to our individual morality and to the morality of our society if we are allowed to treat robots – whether designed as children, adults, dogs, or cats – as we see fit? It can hardly be healthy.

Five years ago, an EU parliamentarian tried to resolve the dilemma by proposing to prohibit robots that look like humans and robots that aim to make humans emotionally dependent on them. For one thing, it might be too late. Just look at robots like Sophia or Ameca. They look (fairly) human, and a few years ago Sophia even said that she (it!) was considering the possibility of having children (it hasn't happened yet, but a little sister is on the way). For another, yet another dilemma arises: if by 2030 society is faced with millions of lonely elderly citizens who literally have no one to talk to, conversational robots will need to have human traits to fulfil their function and help the elderly.

Should we choose to let lonely elderly people stay lonely, or should we live with the fact that care robots are perfected to such a degree that they may become lonely elderly people's best friends?

It looks like we're choosing the latter. Plenty of robots are marketed as "a friendly presence in your daily life" or "your personal sidekick on the journey to age independently." As the technology matures, a kind of public demand for robots to be protected from abuse seems likely, coming, for example, from Marcus's neighbor and his millions of like-minded friends worldwide.

Will They Possess a Kind of Consciousness?
Unfortunately, the dilemmas don't stop there. In fact, they only get tougher. Because what if beating and abuse actually hurt the robot? Prominent tech people have argued that artificial intelligence already possesses a kind of consciousness – that is, that it experiences something. Many will find this thought crazy, and I tend to agree with them. But the problem remains nonetheless. The reason is that when it comes to any consciousness other than our own, we cannot know very much for sure. We can measure brain activity. We can observe facial expressions and do countless experiments. But another being's experience of pain, pleasure, or the color red is impossible to access. We cannot – as when we describe a brain process – put consciousness on a screen in front of us and observe it. We can only observe it when it is our own consciousness. In philosophy, the phenomenon is referred to as the problem of other minds.

Imagine, then, that a human-like robot becomes so advanced that it is immediately indistinguishable from a human being: you can talk to it, and if you step on its toe, it will scream in what sounds like an expression of pain. But because of the problem of other minds, you cannot tell whether it is actually in pain. So, what are we to think about the robot rights issue?

These dilemmas belong to all of us. I think we should do a lot to avoid granting rights to robots. I outline some methods in my new book Killing Sophia, but it’s not simple.

One option is to regulate how we are allowed to market and design robots: Should we refrain from selling robots with anthropomorphic features like names, genders, and other human characteristics that we know trigger our empathy? As we have already seen, this might help some of the way, but it is far from a bulletproof solution. Should we, then, make rules for how we are allowed to treat human-like robots? That way we could perhaps avoid the mistreatment that would kick-start the rights debate in the first place.

One thing we ought to do straight away is expand the proposed EU legislation imposing transparency obligations on chatbots, so that we humans always know when we are interacting with a human-like machine. At present, this is especially urgent for digital humans, which we can expect to take the place of chatbots over the next few years. The only certainty is that if we do not begin to address these issues, the tech industry will carry on until it may be too late.

Thomas Telving holds an MA in philosophy and political science. He is a keynote speaker on the ethics of artificial intelligence.

