European entrepreneurs and their backers are wary of tight new regulation of artificial intelligence (AI), but politicians could easily turn it into a genuine competitive advantage if they really wanted to.
Mistral, a French start-up that develops AI solutions for other companies, played a leading role in the marathon negotiations over the Western world’s first AI regulation. The less than one-year-old company, already valued at around two billion euros, prompted French President Emmanuel Macron to oppose a number of formulations in the draft AI Act, which the EU parties agreed on Friday 8 December 2023 after almost 40 hours of negotiations.
Macron believed that the so-called foundation models should be governed by voluntary rules, as they are in the US and UK, where Mistral has its main competitors. This was also the Council of Ministers’ recommendation, backed by Germany and Italy. It didn’t end up that way, and Macron remains sceptical. Speaking to the Financial Times three days later, he said he was worried that the new AI Act would hamper innovation, again citing Mistral as an example. With just 22 employees, Mistral has increased its value seven-fold in its short lifespan by building technological solutions that other companies can use for chatbots, search engines, online education and other AI-powered products.
Foundation models
Foundation models are large, versatile artificial intelligence models that can be adapted to many different applications. The term emphasises the model’s role as a foundation for a wide range of AI applications. Foundation models include OpenAI’s GPT, Google’s Gemini, Meta’s Llama and the models from French Mistral AI. The first two are so-called black boxes, where you can’t see what data they were trained on, while the last two are open source. ChatGPT is an application based on OpenAI’s GPT, and danskGPT.dk is based on Meta’s foundation model, Llama.
Macron’s opposition unfortunately shows that as soon as a nation has large or promising companies in an area, the political appetite for regulating them softens. This is exactly the case in the US, where AI regulation is voluntary: the tech companies practise self-regulation, and it is up to them to behave properly. In the US, the debate is often about beating China, which, incidentally, introduced AI regulation long ago.
Of course, the big American tech giants have also lobbied hard in Europe to water down the AI Act. They do not want strict regulation, even though they publicly market themselves as welcoming it and claim to fear that AI could get out of control. Meta backed Macron with a little lie: Meta’s chief AI scientist Yann LeCun wrote on X that foundation models were added to the legislation ‘at the last minute’, which is not true, as the French government itself put the idea on the table in May 2022, according to AI expert Gary Marcus.
The world’s second largest economy, the EU, has agreed on a law that in two years will replace voluntarism with binding rules. With the legislation, the EU is trying to ensure democratic control over a technology that even its developers worry could get out of control, and which today is developed purely for profit, not with humanity or the planet at the centre. Under the AI Act, for example, Snapchat would not have been able to launch My AI, the artificially intelligent chatbot that children and young people in particular are conversing with, without first demonstrating that it poses no risk of, for example, manipulating users.
The new rules take a risk-based approach and categorise uses of AI according to whether they pose minimal risk, special transparency risk or high risk, or whether they should be banned altogether, with certain exceptions. Overall, though, people are protected against discriminatory algorithms and systematic manipulation.
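To make the structure concrete, here is a minimal sketch in Python of how the Act’s risk tiers could be modelled in software. The tier names follow this article’s description; the example use cases and the code itself are illustrative assumptions, not the legislation’s formal taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's risk tiers (assumed naming)."""
    BANNED = "unacceptable risk"                 # e.g. social scoring
    HIGH = "high risk"                           # e.g. recruitment systems
    TRANSPARENCY = "special transparency risk"   # e.g. chatbots, deepfakes
    MINIMAL = "minimal risk"                     # e.g. spam filters

# Hypothetical mapping from use case to tier, based on the examples
# discussed in this article -- not an official classification.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.BANNED,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very rough summary of what each tier entails."""
    return {
        RiskTier.BANNED: "may not be placed on the EU market",
        RiskTier.HIGH: "impact assessments and human oversight required",
        RiskTier.TRANSPARENCY: "users must be told they are dealing with AI",
        RiskTier.MINIMAL: "no new obligations",
    }[tier]
```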
Prohibition
The AI Act introduces a total ban on systems that manipulate human behaviour to circumvent users’ free will, such as toys with voice assistants that encourage dangerous behaviour in minors. It also bans systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing.
There are also bans on biometric categorisation systems that use sensitive characteristics such as political opinions or sexual orientation. Emotion recognition systems may not be used in workplaces and educational institutions. Police are likewise banned from ‘predictive policing’, that is, predicting whether a person is at risk of becoming a criminal. Finally, there are limits on authorities’ use of live remote biometric identification systems in public places.
However, there are loopholes, according to the German organisation AlgorithmWatch:
“AI systems that are used to ‘recognize’ the emotions of asylum seekers or AI used to identify the faces of people in real-time in public space with the objective of searching for a suspect of crime are legalized through the loopholes and exceptions that the list of bans apparently foresees,” it writes, emphasising that the full seriousness of the issue will only become clear once the final text is settled.
The loopholes could also leave room for the kind of system used at Brøndby Stadium in Denmark. There, a biometric AI system scans the faces of spectators at the entrance and compares them to a specific list of football hooligans who are banned from the stadium, so they can be turned away at the gate. This use is an exception approved by the Danish Data Protection Agency because, among other things, all images of spectators who are not matched to a known hooligan are deleted immediately.
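As a minimal sketch of how such a gate system could honour the ‘deleted immediately’ condition: the following Python assumes a hypothetical embed_face function standing in for a trained face-recognition model, and an illustrative similarity threshold. It is not a description of the actual Brøndby system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative value; tuned per system in practice

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: a real system would run a trained
    face-embedding model here and return a numerical vector."""
    raise NotImplementedError

def check_spectator(image: np.ndarray, watchlist: list[np.ndarray]) -> bool:
    """Return True if the face matches someone on the banned list."""
    embedding = embed_face(image)
    try:
        for banned in watchlist:
            similarity = np.dot(embedding, banned) / (
                np.linalg.norm(embedding) * np.linalg.norm(banned)
            )
            if similarity >= SIMILARITY_THRESHOLD:
                return True   # match: the spectator can be turned away
        return False          # no match: nothing about this person is kept
    finally:
        # Drop the biometric data as soon as the check is done. In a real
        # system this must include securely deleting the stored image.
        del embedding
```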
Minimal risk, transparency risk and high-risk AI
AI systems that pose minimal risk, such as spam filters and recommendation algorithms, get a free pass: the companies that develop and sell them face no new obligations.
However, any use of AI can carry ‘special transparency risks’. If you use chatbots or other human-like technology, for example, you must make users aware that they are interacting with a machine. All AI-generated content must be labelled as such, and if a system uses people’s biometric data or emotion recognition, they must be informed of this as well. Furthermore, providers must ensure that AI-generated content is labelled in a machine-readable format and is technically detectable as artificially generated.
Special transparency risks: if you use a virtual anchor from e.g. Synthesia, you must clearly declare that it is AI-generated and not a human being. Today this is a rule of ethics; with the AI Act it becomes a rule of law.
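What a machine-readable label should look like in practice is still being settled (watermarking and provenance metadata such as C2PA are obvious candidates). As a minimal sketch, assuming a simple JSON envelope convention of our own invention rather than any official standard, content could be tagged and checked like this:

```python
import json
from datetime import datetime, timezone

def label_ai_content(payload: str, model_name: str) -> str:
    """Wrap generated content in a machine-readable envelope declaring
    it AI-generated (an illustrative convention, not the AI Act's
    prescribed format)."""
    envelope = {
        "content": payload,
        "ai_generated": True,
        "generator": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

def is_declared_ai_generated(document: str) -> bool:
    """Detect the label without inspecting the content itself."""
    try:
        return bool(json.loads(document).get("ai_generated", False))
    except (json.JSONDecodeError, AttributeError):
        return False  # unlabelled, or not using this convention

# Example: a chatbot reply carrying its declaration
reply = label_ai_content("Hello! How can I help?", model_name="some-llm")
assert is_declared_ai_generated(reply)
```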
The most difficult and largest category is high-risk AI. It includes AI used in critical infrastructure such as water, gas and electricity supply, as well as medical devices, systems that assess access to educational institutions, and HR systems used for recruitment. It can also cover systems used for border control, courts and other democratic processes. Systems that use biometric and emotional data are also considered high-risk.
Developers of anything that can be categorised as high-risk, a classification they unfortunately must make themselves, must conduct risk impact assessments and assess the consequences for fundamental human rights. There must also always be human control over these systems. And users who come into contact with high-risk AI have the right to an explanation if their rights are affected, and the right to complain about it.
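As a minimal sketch of what ‘human control’ can mean in code, assuming a hypothetical assessment function and reviewer prompt: a high-risk system can be wired so that no consequential decision takes effect without a human sign-off, and so that every decision carries a stored explanation to back the user’s right to an explanation and complaint.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # e.g. "reject application"
    explanation: str   # stored so affected users can request it

def automated_assessment(application: dict) -> Decision:
    """Hypothetical model output, standing in for a real high-risk system."""
    return Decision(outcome="reject", explanation="income below threshold")

def human_approves(decision: Decision) -> bool:
    """Hypothetical reviewer step: a person confirms or overrides."""
    answer = input(f"Approve '{decision.outcome}'? ({decision.explanation}) [y/n] ")
    return answer.strip().lower() == "y"

def decide(application: dict) -> Decision:
    decision = automated_assessment(application)
    if not human_approves(decision):
        # Human override: the automated outcome never takes effect.
        decision = Decision(outcome="escalate to manual case handling",
                            explanation="overridden by human reviewer")
    return decision
```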
Slow democratic process
Regulation is a slow democratic process: it will take at least two years before the law applies to everyone. During that time, the EU relies on voluntary implementation of the rules. We will probably see that from companies committed to good behaviour. In Denmark, for example, the chatbot company SupWiz is considered a role model when it comes to compliance with the GDPR, the EU’s personal data legislation from 2018.
But others want to drag things out, not least the American giants. Very often we see them launch services that they know run against law and ethics. They usually don’t ask for permission; they just do it and then ask for forgiveness if problems arise. We’ve seen Meta do this in the US Senate, where Mark Zuckerberg pretended to be surprised that Facebook’s algorithm was manipulating users. We may also see it with OpenAI’s Sam Altman, who launched ChatGPT knowing that it was developed using people’s data and artwork from all over the web without asking for permission.
How to turn it into a competitive advantage
Just as French Mistral and Macron are concerned that the AI Act will hamper innovation, the Danish Chamber of Commerce is not entirely satisfied either.
“We are in favour of a risk-based approach. But the extensive burdens and documentation requirements will hit SMEs, which are the backbone of the Danish economy, particularly hard. It actually creates unfair competition in the market. And it’s a shame when we in Europe are already lagging so far behind the US and China in the competition to not only use but also develop AI,” says Nikolaj Wædegaard, Industry Director at the Danish Chamber of Commerce.
There are two ways this can go.
One way is for Mistral to move its legal headquarters to the US. In the same week that the EU reached agreement on the AI Act, Mistral, founded by three men who previously worked for Meta and Google, received 400 million euros in venture capital, according to The New York Times. One of the investors is Andreessen Horowitz, one of Silicon Valley’s most voracious AI investors. Co-founder Marc Andreessen is also behind an AI manifesto that is decidedly optimistic about the technology, without the slightest reservation about the risks that many AI experts are concerned about. As American VC firms have done so many times before with companies they invest heavily in, they may demand that Mistral move its legal headquarters to the US. That would be a sad way out for Europe.
The other possible route is for politicians to make it a genuine competitive advantage for European companies to comply with the AI Act and other data legislation. They could do this by requiring all public bodies in the EU to buy only legally and ethically responsible solutions, not only when it comes to green procurement but also technological procurement (see our white paper on public procurement). If a public body needs a cloud solution or a chatbot, it could start by looking around Europe. This would give Danish SupWiz, French Mistral and many other European companies a huge and much-needed boost.
This article was first published in Danish in KommunikationsForum.
The illustration was made with DALL·E using the prompt: ‘an illustration showing the EU’s new AI Act giving humans control of AI such as ChatGPT’.