Stop The AI Hype. Please

The hype around artificial intelligence (AI) is endless, whether it concerns self-driving cars, climate solutions, diagnostics and personalised medicine, finance and stock trading, customer service or, not least, content generation. Hype occurs when there is a gap between the promised potential of something new and the actual reality.

Technology optimists and salespeople have once again succeeded in fuelling a development that is taking place at an inhumane, climate-unfriendly and undemocratic pace. No one can keep up. Not even the people who develop it, because many of them express concern about where it will all end. It’s all about getting there first. Profit is the driving force, with little focus on security, privacy, truth, rights, climate and copyright.

AI has been around for decades, but with generative AI (GenAI) we have taken new steps, and I believe that in the right hands GenAI really can create positive change, especially in science, health, manufacturing and climate. But right now things are moving so fast that we risk repeating the mistakes of the past and letting American monopolistic tech giants take over yet more infrastructure.

We see tech giants acquiring or rather ‘partnering’ with AI startups (because they risk being prevented from buying them by antitrust authorities), while authorities in both the US and the EU lag behind and investigate whether these partnerships are breaking competition law. Few AI companies are making money today, but their value is inflated by the hype. And they are launching one AI service after another without risk analyses or external auditing because legislation is virtually non-existent in the US and has not come into force in the EU.

We are in the very early stages of a new technological era, which is why we in Europe need to think twice before we leave it all to Microsoft and Google (or Amazon and Meta). Again. We must also remember that the products of both Microsoft/OpenAI and Google may even be illegal. At the very least, they are unethical, as they have been trained on other people’s works without authorisation. Court cases will determine this, but a copyright infringement judgement in France against Google suggests that it is illegal. And a Danish clarification of copyright law makes it illegal in any case.

Fortunately, there are legal and ethically responsible AI products on the market, or at least on the way, both in Europe and in the US. There’s the German Aleph Alpha, aimed at businesses, and a German large language model (LLM) with government support and content from media and universities is on the way. In Denmark, both public and private organisations can buy the Supwiz chatbot, which also uses generative AI, but where you have control over what it says. The Danish Alexandra Institute is working with universities to create a Danish language model at foundationmodel.dk. Then there’s the French Mistral.ai (which Microsoft has already eaten a large chunk of), which is open source but is also trained on copyrighted content. And Finnish Silo.ai is working on a larger European LLM.

Meta’s LLM Llama, which many smaller European language models are trained on, is partly open source, but Meta doesn’t disclose which data it was trained on. So it isn’t ‘clean’ either. KL3M.ai, on the other hand, is. It is the first model to receive the fairlytrained.org label.

Finally, Adobe has created Firefly, which, it says, is mainly trained on legal content. Getty Images’ generator will be the same when it launches, and there will even be revenue sharing with the creators of the content it is trained on.

So we should slow down a bit and instead buy and support the legal and ethical models, so that we can build a legally and ethically responsible AI infrastructure.

This column was first published at Prosa.dk

Illustration made by firefly.adobe.com with the prompt: a lot of sales people from big tech companies creating an AI hype