Big Tech has been shaping the world economy and the global cultural and social environment in its favour for years. The concentration of power in the hands of a few companies is enormous. With the launch of ChatGPT two years ago and the rapid popularisation of generative artificial intelligence (GenAI), that concentration of power has only accelerated – including in the policy domain.
“Make No Mistake, AI is Owned by Big Tech,” wrote Signal CEO Meredith Whittaker and co-authors in an article in Technology Review last year: “If we’re not careful, Microsoft, Amazon, and other large companies will leverage their position to set the policy agenda for AI, as they have in many other sectors.”
Who wields power in politics must be a central question in any well-functioning democracy. Pernille Tranberg and I have regularly written articles about Big Tech’s lobbying efforts in the EU, in this country and on the other side of the Atlantic, and we have analysed the soft-power methods they use to influence civil society, academia and journalistic media in order to create goodwill and cement their own power in society.
A new academic article supports some of our points with theory building that focuses more specifically on Big Tech, generative AI and impact in the policy process itself.
In the article ‘Why and How is the Power of Big Tech Increasing in the Policy Process? The Case of Generative AI’, the authors point out that this is an overlooked area of research – which is a point in itself. They put it this way:
“While studies have highlighted how governments increasingly rely on digital platforms and integrate AI into their day-to-day operations, existing studies downplay the comprehensiveness of Big Tech’s influence in the policy process.”
Overall, the researchers identify several streams – one of which they call the ‘problem stream’:
In a knowledge society, policy experts, researchers, NGOs, media and others all play a crucial role in identifying, framing and legitimising societal issues (or ‘problems’) for political action.
Big Tech plays a major role here through their AI platforms, social media and search engines, where discourses are shaped.
But also through their interventions in AI research. Big Tech has a dual role as both contributor to and beneficiary of scientific work on AI, which presents potential conflicts of interest and raises ethical concerns about AI research. This conflict of interest became evident, for example, when Google fired Timnit Gebru, who had publicly expressed concerns about the risks of the new types of AI.
Big Tech also shapes the ‘problem stream’ by influencing journalistic media – partly through ownership (Amazon founder Jeff Bezos owns The Washington Post, for example) and partly through the advertising market, where Google and Meta hold a duopoly. Generative AI also enables Big Tech to move from curating content to producing it.
But there’s more: “GenAI’s capacity to create content that seamlessly blends with human-generated information amplifies Big Tech’s role in defining societal problems. Furthermore, GenAI can analyze vast amounts of data to identify emerging policy issues and give Big Tech a first-mover advantage in framing these issues.”
With GenAI, the gatekeeper function of Big Tech has been strengthened. They determine not only what is known, but also to whom and how it is known, thereby influencing the epistemic basis for policy making.
If we are to try to analyse and understand how AI influences and interacts with democracy, we cannot avoid talking about the enormous concentration of power that lies behind the AI ecosystem in the market today. And there are many threads in this analysis.
Translated with the help of deepl.com