“AI is not magic – it’s a computer, and we choose what it does”

By Malene Fryd Ejsing

Based on the documentary ‘iHuman’, this article explores the EU’s regulation of artificial intelligence. The article was originally published in Danish on PARA:DOX.

In this Q&A, we speak with Gry Hasselbalch, Key Expert in the EU’s InTouchAI.eu – an EU initiative that promotes a human-centric and ethical approach to AI. Gry is also co-founder of DataEthics.eu. The article is about the EU’s regulation of AI to better protect citizens’ rights and to ensure democratic control. Furthermore, we discuss the challenge of AI replacing human labor and the risks of using AI to infer sensitive traits, from sexual orientation to criminal tendencies. Finally, we look at cultural differences in the acceptance of AI, from efficient robot receptionists in Japan to the human contact we still very much prefer in Denmark.

Where and how do we encounter AI in our everyday lives?

The AI systems we encounter in our everyday lives today are trained by processing large amounts of data, finding patterns in it and analyzing them. Netflix, for example, recommends movies based on the ones we’ve already watched: the system makes its own suggestions for which series or movie you should watch next. It’s called artificial intelligence because the system can evolve, learn from the data it’s fed and do things on its own without human intervention.
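The pattern-finding described above can be made concrete with a small sketch. This is not Netflix’s actual system – their method is far more elaborate – but a minimal, invented example of the same idea: users whose past ratings look alike get recommended each other’s favorites. All names and ratings are made up for illustration.

```python
# A minimal sketch of a pattern-based recommender (not Netflix's
# actual system): users who rated movies similarly in the past
# are used to suggest what a user should watch next.
# All user names, titles and ratings below are invented.

import math

# hypothetical user -> {movie: rating} data
ratings = {
    "ana":  {"Drama A": 5, "Sci-Fi B": 1, "Drama C": 4},
    "ben":  {"Drama A": 4, "Sci-Fi B": 2, "Drama C": 5},
    "cleo": {"Drama A": 1, "Sci-Fi B": 5},
}

def cosine(u, v):
    """Cosine similarity over the movies two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[m] * v[m] for m in shared)
    norm_u = math.sqrt(sum(u[m] ** 2 for m in shared))
    norm_v = math.sqrt(sum(v[m] ** 2 for m in shared))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Rank unseen movies, weighted by similarity to other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for movie, rating in their.items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("cleo"))
```

The point of the sketch is the one made in the interview: the system is not “understanding” films, it is doing statistics on past behavior, without human intervention at prediction time.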

We tend to confuse these everyday systems with what is called AGI, artificial general intelligence. In this case, it is imagined that sometime in a dystopian future, artificial intelligence evolves something resembling an identity of its own, as in the movie ‘2001: A Space Odyssey’, where the HAL 9000 computer refuses an astronaut’s order and says “I’m sorry, Dave, I’m afraid I can’t do that.”

This scene embodies the existential fears that artificial intelligence also represents, fears we see expressed in the current debate on generative AI: if a machine can do some of the things we can do, who are we as humans? And could we at some point lose control? To that, I would answer that artificial intelligence is of course in no way like humans, and it’s not really AI we should fear, but the interests that develop it. But the dream of AGI activates that fear.

How does the EU regulate when it comes to AI?

The EU looks at what we want AI to do, not what it could do. There is a huge difference. In the EU, human relationships and face-to-face contact are still seen as very important, while other cultures perceive such relationships differently. According to the Shinto religion, things can have a soul, so in Japan, for example, you can have a meaningful relationship with a robot.

Our approach to artificial intelligence in the EU is what is called human-centric. This means that the development of and the way we use artificial intelligence must be based on human needs and, in particular, the individual human being and the rights they have in a democracy.

Therefore, the EU’s approach is also very much about ensuring a democratic balance of power where citizens are empowered and power is not concentrated. Right now, for example, the distribution of power and interests is very difficult to see in ChatGPT. How has it been trained and where do the answers actually come from?

What kind of life and world are we looking at with AI in Europe in 5-10 years?

As I said, it is a big ambition for the EU to lead the development when it comes to AI. The EU wants transparency in these systems and is very aware of issues such as manipulation, people’s right to agency and how AI systems are trained. Our form of society is democratic, which is why we pay particular attention to our right to choose for ourselves. The EU must not only think about what companies can gain from implementing AI, but first and foremost about the negative impact this development can have on us citizens – and avoid it, for example, through legislation.

The EU is currently developing new legislation in the field of AI. Can you tell us a bit about what considerations they are taking into account and in which areas they dare to invest in AI in the future?

One of the areas where the EU can see AI being put to good use is the healthcare sector. For example, AI can be used to analyze large amounts of complex data and diagnose diseases or develop new medicines. It could also be climate and environmental research, where AI could help identify the solutions with the greatest environmental benefit. There are many opportunities.

The reason why the EU wants to regulate and legislate in this area is because they are very aware of the risks that can be associated with the use of AI. It’s very important that we humans are involved in steering the development and not just incorporating AI into everything we do because we are told that there is a technological solution to everything. Because there isn’t.

The purpose of the new EU legislation is to ensure human safety and rights. The risks of placing AI on the market or deploying it in the EU must be managed: depending on the level of risk, some uses of AI will be banned, some will be deemed high-risk and some low-risk. Each tier then has different requirements to live up to, e.g. regarding the data used, the transparency of the system, etc. Failure to comply with the law can result in heavy fines.

There is talk of AI taking over many jobs, such as manual labor. Where does that leave us humans and our need for those jobs?

This is a huge problem. AI is no match for human qualities like creativity, originality, critical reflection and emotion, so the creative class that is already at the top of society doesn’t, I think, have much to fear. But there are people in this world who can’t afford to work with the kind of creative tasks that humans are actually best at and that AI can’t compete with. Those who traditionally have the least in society will once again be the most vulnerable in this development, because AI combined with robotics is extremely good at certain types of manual labor.

If a country like the US, for example, gets 100% self-driving cars and therefore leaves 10,000,000 people without a job, it only makes the inequality and imbalance in society even greater. It is the part of the population without a long or creative education whose jobs go first.

Should we as EU member states be worried about using AI in the same way as they do in China with their so-called “Smart Cities”?

I don’t think we will be able to implement the kind of controlled smart city systems that they have in China in a country like Denmark. We have a different point of departure. But we really need to think carefully and be aware of the different risks when implementing AI technologies in our society. In short, Chinese Smart Cities are built on advanced technology designed to control and monitor citizens. For example, facial recognition is used to detect behavior, and you are then registered and “scored” in a social scoring system that deducts points if you, for example, run a red light or park inappropriately, and awards points for things like volunteering.

Chinese social scoring systems can have serious consequences for an individual’s standard of living, as they can mean that you can’t send your children to school or take out a loan from the bank if you don’t behave properly. In the worst case scenario, you could be sent to a re-education camp where you learn how to become a “good citizen”.

As mentioned, we won’t see a situation this extreme in Denmark. After all, our fundamental rights are still very important in Danish society. In connection with EU legislation, it is also currently being negotiated that the use of facial recognition systems should be banned in public places. 

In what ways is AI “smarter” and “dumber” than humans?

There is one thing we need to understand. AI and humans are not comparable. AI is a machine designed to process data. That’s as far as it goes. A human, on the other hand, is something far more complex than a machine that processes information. We do process information, but we also have emotions, creativity, critical reflection and intuition, as well as many other qualities, not to mention a biology that a machine cannot replicate.

AI is not magic – it’s a computer, and we choose what it does. It’s really good at some things and not others. For example, AI is far faster than humans at certain tasks we can also do, such as performing rational, logical analysis on large amounts of data.

What worries me the most is not whether AI and humans can be compared. Because of course they can’t. I’m worried about people in manual jobs having their work taken away from them because a robot can do it faster. It will create a divide between us, as low-skilled workers or people without higher education suddenly become ‘redundant’, and only people with a long or creative education will be able to get a job. What worries me is this acceptance of the shift AI will create in society, and the perception that the elite have nothing to fear.

The movie ‘iHuman’ claims that the AI of the future will be able to read people’s sexual orientation and even whether a child is prone to crime in the future – just from facial recognition. What are your thoughts on this?

First of all, it’s very dangerous to believe that AI would actually be able to do that. Not to mention hugely discriminatory toward minorities such as the LGBT+ community. You can compare it to the old practice of measuring people’s skulls to tell whether they were more or less gifted. It’s limiting for us as humans to believe that artificial intelligence can predict human actions before they even happen.

AI is not a crystal ball. It’s a system that makes predictions based on data. We shouldn’t start limiting our freedoms as humans: one of the most important aspects of our freedom is that the future is open and unpredictable, and that we therefore have a say in shaping it. In addition, we must remember that AI is trained on what we teach it, and is therefore biased, just like we are.
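The point that a system trained on our data inherits our biases can be shown with a toy example. The data below is entirely invented, and the “model” is deliberately trivial – it just memorizes frequencies, which is, at bottom, what many statistical systems do.

```python
# A toy illustration (invented data) of how bias in training data
# becomes bias in predictions: the "model" simply repeats the most
# frequent past outcome for each group.

from collections import Counter

# hypothetical historical decisions, skewed by past practice
training_data = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

def predict(group):
    """Predict the most frequent past outcome for this group."""
    outcomes = Counter(o for g, o in training_data if g == group)
    return outcomes.most_common(1)[0][0]

# The system faithfully reproduces the skew it was trained on:
print(predict("group_a"))  # hired
print(predict("group_b"))  # rejected
```

Nothing in the code “decides” to discriminate; the skew in the data alone produces the skewed predictions – which is exactly why such systems are not fortune-tellers.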

It’s no secret that robots are the next step in AI development. Some people may even fear that one day they will “take over”. What do you think about that?

In Japan, for example, you might bump into a robot receptionist at the hotel you’re staying at. It’s completely normal in Japan, but it would be weird at home. There is a huge cultural difference in how people perceive themselves and their surroundings. In Japan, there is much greater openness to the development of AI and robots because there is a different culture around the technologies – as mentioned, in the Shinto religion a thing can have a soul.

Our culture is different, so I don’t think we in Denmark should be afraid that robots will take over human relationships. We want human contact, even though a robot would be hugely efficient in many respects. The closest we come today is scanning our own groceries at the supermarket self-checkout – but we can also go to the regular checkout, where a human sits with whom we can have a conversation.

The documentary ‘iHuman’ is directed by Tonje Hessen Schei and was screened at CPH:DOX in 2020. The film can be seen at PARA:DOX

Photo by jasper benning on Unsplash