Only Humans Can Write the Right AI Legislation

It is now widely known that in March 2023 a group of leading scientists and industry figures published an open letter calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4 – because “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs”. In other words, the capabilities and dangers of systems such as GPT-4 have to be properly studied and mitigated.

What is less well known is that Eliezer Yudkowsky (a decision theorist from the U.S. who leads research at the Machine Intelligence Research Institute) followed up in Time Magazine with an opinion piece arguing that “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”. In this very serious piece he ends up writing: “We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong”.

The authors of the first open letter were less categorical than Yudkowsky, suggesting that if researchers will not voluntarily pause their work on AI models more powerful than GPT-4 (the letter’s benchmark for a “giant” model), then “governments should step in”.

The big question is: how will, and how should, governments step in?

We know that the stakes are high for Big Tech – the uncrowned king of capital and lobbying in both Brussels and Washington. Yet now some of these same players have asked for more time. We have seen this before in other domains, where scientists and industry have called for a global pause in the development of potent and risky new technologies.

In 2019 some of the biggest names in gene editing – the field built around CRISPR – called for a global moratorium on heritable gene editing. In the words of Technology Review, they wanted to stop anyone from “playing around with cells that pass on changes to the next generation”.

And in 2022 a coalition of some of the world’s leading companies dedicated to introducing new generations of advanced mobile robotics to society published an open letter to the robotics industry and their communities, stating that general-purpose robots should not be weaponized and pledging that they will not “weaponize advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so”. Moratoriums have also been widely used in relation to nuclear technology as a way of trying to keep it under control.

But the examples mentioned above – gene editing, nuclear weapons of mass destruction, and general-purpose AI-driven robots armed with lethal weapons – are all obviously serious. Generative AI like GPT-4, by contrast, at first glance seems more of a “softie” in the domain of AI, because it is about language and pictures, backed by an old-school IT company like Microsoft.

What could possibly go wrong?
Pretty much everything – according to more than 50 prominent experts and institutional signatories who are now calling on European officials to pursue even broader regulation of the technology in the EU’s AI Act. According to Sarah Myers West, executive director of the AI Now Institute, which helped spearhead the call to European policymakers, the group argues that while general-purpose tools like ChatGPT may not have been designed with high-risk uses in mind, they can be deployed in contexts that make them riskier.

One thing that is certain is that the tech giants will try to convince lawmakers to pass the legislation that benefits them the most. And computer scientist Hany Farid points out that large tech companies will soon face a legal conundrum. If they claim their chatbots are transformative (which, I guess, corresponds to the “generative” in generative AI, as opposed to mere reproduction), this will close the door to claims that they violate copyright. But if they claim that they are not transformative, they would be better able to keep their Section 230 protections. And according to Farid, it will be difficult for them to have it both ways.

No matter what: this is not just a “race for god-like AI”, or a race for AI researchers to try to halt the development – it is also a race for Big Tech to secure the type of legislation that optimizes their profit. In that regard nothing has really changed after all. But when it comes to the question of which legislation is the “right” one, not even large language models can come up with the answer. And policymakers need to be very aware of which, and whose, interests are in play.

Photo by Jr Korpa on Unsplash