ChatGPT is the Epitome of Donald Trump

Real and fake, truth and falsehood are being blurred – first and foremost by Donald Trump, but now also by ChatGPT and other generative AI services (GenAI). Our trust in each other and in democracy is already at a low, but we are racing toward the bottom by integrating GenAI into everything, as if it were a tested, secure service ready for the market and for humans.

Many people were outraged that the former president and current presidential candidate Donald Trump claimed the election was ‘stolen’ from him. Not because they believed him, but because he was lying. No evidence of fraud that could have changed the outcome has ever been proven, and all lawsuits challenging the results were lost – even before judges he had hand-picked. Still, millions of people believe it was a fake election, according to The Conversation.

The fact that Americans cannot agree on who won their own election shows how their democracy is limping. The polarisation in the Western superpower is deeply serious, because facts – and agreeing on facts – are essential. If we don’t have a common idea of what is right and wrong, our democracy will die, and we will end up in the kind of civil war depicted in the dystopian film of the same name, Civil War.

If we can’t agree on facts, we will spend our time arguing about them instead of discussing how to solve our problems (which we might disagree on) and leading ourselves, our businesses, and our society in the right direction.

Downright Lies
ChatGPT and other GenAI services are not telling us facts. Quite the opposite: GenAI is not built for facts but for guessing the most likely next word in a sentence. Here are three recent examples of misinformation:

Barack Obama is a Muslim.
Put glue on your pizza to keep the cheese from sliding.
Eat rocks for your health.

They all came out of Google’s GenAI, according to The Financial Times.

Sophisticated Misinfo
But there is more sophisticated misinformation too. An example: when you ask ChatGPT to structure an assignment about ethical AI and feed it an idea, part of the response is that you can use something ‘as simple as facial recognition for security or something more complex like analyzing employee data to predict and improve employee satisfaction.’ The problem is that using facial recognition is not at all simple – at least not in Europe, where it is better regulated than in the US. Facial data is very sensitive data.

And it is very, very hard to fact-check even material you know pretty well. Material you don’t know is almost impossible to fact-check.

Responsibility on the User
Google, OpenAI and Microsoft all warn that their products ‘hallucinate’, trying to avoid responsibility and place it on the user instead – while knowing that most humans will swallow the lies they cannot detect immediately. Fact-checking is really hard and time-consuming, even if you know the topic well. At the same time, we know that convenience trumps all.

Some people will say that humans are not always correct either. True. But I don’t think we should compare humans and machines. And the fact that some humans lie does not make it okay to integrate an immature new technology into everything.

Cold Machines
Just calling it ‘hallucination’ is manipulative. Humans can hallucinate. Machines can’t. But the AI industry wants us to believe that their cold machines (please, always call them that) are close to becoming as bright as humans. Both Trump and many of the GenAI services pretend to be human and to work for better human lives, but both blur truth with fakery and use false facts to manipulate humans. The latest attempt at so-called anthropomorphisation (attributing human characteristics or behaviour to a machine) came at the launch of GPT-4o, where OpenAI introduced Sky – a stolen voice of Scarlett Johansson, now taken down again. Throughout the 27-minute introduction, they kept saying to the machine ‘how are you’, ‘thanks’, ‘please’, as if they were speaking to a human being. When you create machine voices and text this similar to humans, you also make users anthropomorphise – because we do develop feelings for something so human-like.

Trump and GenAI also both pretend to care about humanity more than they care about profits, which is obviously not true. Both create hype around themselves in order to raise and make more money, and quite a lot of humans follow them blindly.

Sam Altman, the hallucinating CEO of OpenAI, says that OpenAI is the only one that can develop GenAI responsibly and reach its end goal of Artificial General Intelligence (AGI). But what Mr Altman is doing is paving the way for more Trumps to invade every computer and every gadget used by humans.

Illustration: AI-generated. Prompt: ChatGPT as president. Tool:

PS The blurring of facts and truth and the anthropomorphisation of machines is one thing. But many other risks of GenAI are being ignored, such as its rising environmental costs, which should be mentioned every time we talk about how GenAI will disrupt everything.