The AI Hype In a New Guise: Agentic AI

The generative AI hype surface is cracking, so why not reintroduce an old idea to boost AI investment? ‘Agentic AI’ is as old as the history of AI. But the promises these days are grander than ever.

The promises of generative AI have been overwhelming over the last couple of years: We will automate millions of jobs and save a lot of money on human staff. We’ll see a productivity boost in every industry. Media companies will of course suffer, because AI can do most of the work of journalists. Chatbots can take over customer service, solve most of our problems, and now, we are told, AI can even do the job of a PhD. What’s not to like? The only obstacle, according to AI legend, is the faulty human who just needs to learn to prompt better.

The generative AI gold rush has been ongoing for years. But we are slowly realising that there is more science fiction than science to these promises. After all these years of promises, we are still waiting for real efficiency gains: the models still hallucinate by design, mess up simple math, and make mistakes in tasks as simple as placing countries on a map.

So, what should the AI companies do when they need AI investments to keep pouring in to cover the escalating costs of GenAI? They reintroduced the idea of the ‘autonomous agent’ – the big promise that has haunted the field since the dawn of the technology, from Turing’s thinking machines and McCarthy’s ‘advice takers’ to the more popular term ‘electronic brains’ used during the first AI hype cycle of the 1950s. Let’s call it ‘Agentic AI’ and once again promise that this time, yes this time for sure, these machines will join the workforce, replace knowledge work and become the number one provider of digital labor, according to TechCrunch.

In modern terms, AI agents don’t sound much different from their early autonomous computer brethren. They are software products designed to complete multi-part tasks on behalf of their human taskmasters, according to Futurism.

Gartner – one of the consultancy companies accustomed to hyping new technologies in any form – wrote in a report from June 2025 that “at least 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from 0% in 2024. In addition, 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024.”

Not a lot of hype this time – except that some of the AI companies already describe their generative AIs as agentic. Gartner’s report also states that over 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value or inadequate risk controls.

Of course, AI agents are already failing to live up to the promises. According to a paper released in June 2025 by researchers at Carnegie Mellon University, the best-performing AI agent, Google’s Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. OpenAI’s GPT-4o had a failure rate of 91.4 percent, Meta’s Llama-3.1-405b a failure rate of 92.6 percent, and Amazon’s Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks.

Agentic AI Is a Privacy Nightmare

The challenges of autonomous AI systems – yesterday’s ‘electronic brains’, today’s ‘Agentic AI’ – are many and well documented. This is also why we have landed on a more nuanced approach to AI in the international AI governance field. And today, fortunately, we have the EU AI Act that automatically sets a non-agentic AI path forward. As stated in the first requirement of the law: “Humans must remain in control of the system, and be able to intervene at any time to prevent or minimize harm”.

However, to provide a more obvious example of the challenges, we could take something like the right to privacy. Agentic AI, of course, poses greater privacy challenges than any of the big data technologies introduced over the last 20 years. While generative AI has been sucking up your data, Agentic AI needs even more. The more you want these agents to ‘serve’ you, the more data they will need. For example, if you want your AI agent to buy a pair of shoes without asking further questions, it needs access to your credit card or bank account; to remind you of appointments or prepare you for them, it needs access to your calendar and email.

Germany and France, for now, are saying no to agentic AI due to the many risks.

Meredith Whittaker, the CEO of the private messaging app Signal, therefore has serious issues with the whole idea:

“The thing that is the new buzzword, agentic AI, there is a real danger. We are giving too much control to these systems that will need access to data. It can book a ticket for a concert, look into your calendar, schedule it, and message all your friends about it. Put your brain in a drawer. The data will be sent to a cloud service, and there is a profound issue around security and privacy. It will undermine your privacy.”

Would you trust a commercial, non-transparent company to act on your behalf, giving it control over your calendar, credit card, and friends’ contacts? Yes, some might do it unknowingly, but most would say no thanks if they were truly informed about the consequences.

According to The Financial Times (“AI agents: from co-pilot to autopilot”, 7.5.25), there is a gap between hype and reality. Pascal Bornet, author of the book Agentic Artificial Intelligence, explained in the article that autonomous cars currently operate at levels two to four (out of six levels), and that AI agents are at a similar stage. Most operate at level two or three, with some specialised systems operating at four. But level five, where agents fully understand, plan and execute complicated missions with minimal human input, remains theoretical, he said.

The many promises regarding generative and agentic AI come from those with economic interests in the technology. They need to be grand to keep the investments coming.

Don’t Follow the Business Hype. Follow the Science.

There is also an alternative path forward. It represents the position of an AI scientific field that fortunately moved on from the thrills of the early 1950s, that is painfully aware of the 1970s AI winter, and that perhaps also sees a new 2020s AI winter just around the corner (“AI winter” is a term for a period of disappointment and budget cuts following moments of AI hype and broken promises). The alternative AI path is also a thrilling idea about the potential of AI, but it is based on decades of scientific work and testing of what works and what doesn’t, on the careful exploration of social and ethical impacts, and on policy developments and international agreements – such as the work we did in the EU High Level Group on AI, in the EU in general, and in UNESCO, the OECD and the UN on “trustworthy” AI with humans in control. It is built on the common understanding we reached of the semi-autonomous role we need and want AI technology to play in our human societies.

As Yoshua Bengio – one of the world’s foremost AI researchers – recently wrote:

“The leading AI companies are increasingly focused on building generalist AI agents — systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control.”

They propose instead to focus on non-agentic AI systems that are trustworthy and safe by design. We could call it ‘Scientist AI’: a system designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans.

Photo: CottonBro Studio