Humanlike Tech is a Wolf in Sheep’s Clothing

In this column, philosopher, tech critic, and guest contributor Thomas Telving advises us to start a philosophical rebellion against the growing trend of believing in machine consciousness.

A disturbing new trend is spreading among powerful trendsetters in the tech community: the idea that machines powered by artificial intelligence will develop – or have already developed – consciousness. Tesla founder Elon Musk tells us that his latest invention, the Tesla Bot, Optimus, will become “semi sentient”. OpenAI co-founder Ilya Sutskever speculates that advanced AI may already possess consciousness. Similar ideas have been put forward by David Hanson, inventor of the humanoid robot Sophia.

Severe Legal and Ethical Challenges
If you think this is a nerdy discussion best reserved for eccentric tech entrepreneurs and philosophers, you had better think again. If it becomes a general norm to think that humanoid robots, digital humans, and the like are sentient beings, we will be facing unprecedented legal and ethical challenges.

In the following I will present two problems, and a possible way to prevent or at least postpone the worst-case scenario of our robotic future.

The rationale is roughly based on the thoughts presented in my new book, Killing Sophia – Consciousness, Empathy, and Reason in the Age of Intelligent Robots.

Problem #1: We Will Develop Empathy towards Robots
Research indicates that when a human meets a humanlike robot, like Sophia or the Tesla Bot, we intuitively start to anthropomorphize. This basically means that we apply a broad range of human traits to them – in this case, traits like consciousness, will, and personality.

Anthropomorphizing is a basic feature of human psychology; an inherent part of how we interact with living (and seemingly living) things around us. People have always seen human features in landforms, clouds, and trees. It is common for artists to depict natural phenomena like the sun and moon as having faces and gender.

Because of this peculiar human characteristic, many of us will – like Musk, Sutskever and Hanson – be fooled into believing that robots and digital humans are (or will soon become) comparable to humans when it comes to consciousness.

Because of our human tendency to anthropomorphize, we might be fooled into believing that robots and digital humans will soon become comparable to humans when it comes to consciousness.

Thomas Telving

An equally inherent human characteristic is that we feel empathy towards robots, simply because of their appearance. When this happens (actually, it already happens), it will be very hard for us not to give robots moral consideration, just as if they were living beings.

While this scenario might still seem exotic to some, I am sure most readers can easily imagine the huge commercial potential in humanlike technology.

As an example, how about replacing the standard recommender algorithm in a web shop with a charming and helpful digital human shopping assistant? One that flirts with you and makes you think he likes you. One that has feasted on all your social media data and now knows exactly how to make you like him. Automated systems capable of connecting emotionally with humans have limitless potential within everything from elderly care and education to client support, sales, and entertainment. But they are rarely made without commercial purpose. Cute robots and helpful digital humans on our screens will often be wolves in sheep’s clothing.

Problem #2: We Cannot Know if Musk and Sutskever are Right

Unfortunately, the problems of our robotic future get even worse. You will see this after dwelling on the concept of consciousness for just a moment.

Being conscious or sentient is not the same thing as being able to talk and to react seemingly reasonably to outer stimuli. Conscious experience is the inner feeling of a person: the sense of things being hot and cold, the experience of pain, the experience of colors, sounds, tastes, or discomfort. Consciousness is what it feels like to be you.

Most of us have an intuitive sense of which things possess consciousness. Few will disagree that other human beings do. Probably also dogs, cats, and mice. Should someone be cruel enough to cut off the leg of a living mouse with garden shears, it will probably feel pain. Saying that a tree feels pain when we cut a branch from it would, on the other hand, be nonsense.

The problem with consciousness is that when it comes to consciousnesses other than our own, we have no access. We can look at bodily movements and facial expressions, and scientists can conduct numerous experiments, but another being’s experience of pain, discomfort, or the color red is private. We cannot – as when we describe a brain process – put it on the screen in front of us and look at it. We can only observe our own consciousness. It seems to show only in a first-person perspective.

Imagine now that the Tesla Bot becomes so advanced that you cannot distinguish it from a human being by looking at it and talking to it. If you grab your gardening shears and cut off one of its fingers, it will scream in (seeming) pain. It has, in other words, become able to perfectly simulate the behavior of a human being.

A humanoid robot will become able to perfectly simulate the behaviour and feelings of a human being.

Thomas Telving

The question is whether you will be able to decide if it is conscious. Does it feel pain? Or does it merely look as if it does? Most scientists agree that when it comes to the conscious experience of others – in dogs, humans, and robots – we have no access. And because of the first-person character of other minds, we might never get access.

So, should we, on these grounds, give robots the benefit of the doubt for their own sake? Will it be morally wrong to keep a robot as a slave?

Should we start sharing the beliefs of Musk and all the other tech fetishists (who stand to make even more billions by making us believe in machine consciousness)?

No matter how exotic this might seem, it is a dilemma that needs to be discussed. In Killing Sophia, I explain why I personally find no good reasons to believe that machine consciousness will arise anytime soon. This belief forms the basis of what I think we should do now!

A Closing Window of Opportunity: Framing Tech as Dead
According to Elon Musk, the Tesla Bot will become a larger business success than its popular four-wheeled big brother. The prototype should be ready this year. And he is certainly not as alone on the market as he was when introducing the world’s first really cool electric car. Household humanlike robots will, in other words, be a common part of our everyday lives within the next few years.

The question is whether we have any way of escaping our tendency to empathize with humanlike technology. We do in fact have one opportunity!

Research shows that the way we react when meeting technology with human (or animal) traits is heavily affected by how we talk about it. Framing a robot as a companion that understands us and can help us through the lonely days of our senior life will increase our tendency to believe it is alive and deserves moral consideration, whereas framing it as a dead piece of machinery will push us in the opposite direction.

The best thing we can do here and now is to oppose the view of Musk, Sutskever, and Hanson and consistently frame technology as dead. Even though we anthropomorphize and empathize with humanlike technology when we meet it, many of us can still be convinced that we are being fooled.

The best thing we can do here and now is to oppose the view of Musk, Sutskever, and Hanson and consistently frame technology as dead.

Thomas Telving

This window of opportunity is rapidly closing as the technology improves, so if we wish to avoid granting human rights to robots – which I think will put us in a very bad place – we’d better get out before it shuts. Our escape tools should involve philosophers and scientists, visionary politicians, and ethical tech people, combined with consistent communication and a large portion of stubbornness.

Thomas Telving holds an MA in philosophy and political science. He is a keynote speaker on the ethics of artificial intelligence. His book “Killing Sophia – Consciousness, Empathy, and Reason in the Age of Intelligent Robots” was published on 21 April by University Press of Southern Denmark.