Corporate Europe: US Big Tech Lobbied Against Generative AI as High Risk in the AI Act

The EU’s forthcoming AI Act is currently being rewritten to make sure that General Purpose AI, including Generative AI, is regulated properly. It had initially been labelled only ‘limited risk’, probably after heavy lobbying, especially from Google and Microsoft, according to an interesting report from Corporate Europe Observatory.

The report, ‘The Lobbying Ghost in the Machine – Big Tech’s Covert Defanging of Europe’s AI Act’ from Corporate Europe Observatory (CEO), shows how tech companies, particularly from the US, sought to reduce the requirements for high-risk AI systems and to limit the scope of the regulation. In particular, Big Tech lobbyists sought to exclude the newly introduced concept of ‘general purpose’ AI systems from regulation.

General purpose AI was initially kept out of the draft regulation proposed by the Commission. Council meeting minutes, obtained by CEO, revealed that the Commission had initially not thought it necessary to include a definition of general purpose AI systems “as only high-risk use is regulated.” Critics argued that this exclusion would pose “risks to health, safety and fundamental rights.” Then, in 2022, the French presidency of the Council proposed requirements for general purpose systems, which were accepted by the Commission. However, the proposed rules would be less stringent than those for high-risk systems, and based on “internal control”. And then Big Tech’s lobbying network got busy, according to the report:

  • At Politico’s Google-funded ‘AI & Tech’ summit in April 2022, Google’s Vice President Marian Croak underlined that general purpose models should not be made to comply with the rules the EU was considering for high-risk systems.
  • Microsoft followed suit with a statement that there’s “no need for the AI Act to have a specific section on [general purpose AI]” and so did various business associations.
  • So-called shadow organisations also went into action. An organisation called BSA (funded by Microsoft) said that regulating general purpose AI would hamper innovation. Allied for Start-ups (funded by Google, Apple, Microsoft, Amazon, and Meta) joined the choir and said it would hurt European start-ups.
  • China was drawn into the arguments. The traditional lobby message, that regulation would stifle innovation and economic activity, was framed as part of a larger geopolitical debate. Microsoft, for example, asked for a meeting to discuss “the right balance on the regulatory front” and where the AI Act “put Europe in the global scheme of things between the US and especially China.”

The report also looks into meetings with Members of the European Parliament: “By mid-January 2023, MEPs had recorded 1,012 lobby meetings on AI with 551 different lobbyists, data gathered for this report showed. Most organisations had one or a few meetings: Google topped the list with 28.”

CEO’s analysis also revealed that industry and trade associations together accounted for 56 per cent of all MEP meetings, while only a quarter of the meetings were with academics, researchers, and civil society organisations. Big Tech’s own so-called ethical guidelines have become a powerful tool in their lobbying toolbox, according to the report, which concludes that “despite all the concerns over AI and the critique over its other products, Big Tech had succeeded in spinning a positive narrative about the use of artificial intelligence in the European Union.”

Photo generated with OpenAI’s DALL-E using the prompt “a lobbyist in the European Union who want to stop AI regulation” – there were many choices of white male lobbyists. This is only one. Bias, or?