
The Quest for AGI Continues Despite Dire Warnings From Experts

Musk, Gates, Hawking, Altman, and Putin all fear artificial general intelligence (AGI). But what is AGI, and why might it even be an advantage that more actors are trying to develop it, despite the very serious risks?

“We are all so small and weak. Imagine how easy life would be if we had an owl to help us build nests,” said one sparrow to the flock. Others agreed:

“Yes, and we could use it to look after our elderly and our children. And it could give us good advice and keep an eye on the cat.”

An older, grumpy sparrow suggested that before they let an owl into the flock, they might want to learn how to domesticate such a bird. But the other sparrows didn’t listen. They were already so engrossed in the idea of the owl that they set off to find an owl egg.

Philosopher Nick Bostrom’s book Superintelligence begins with a little fable about a flock of sparrows who dream of getting an owl to help them, as I loosely paraphrase it above. But the sparrows have neither the patience nor the forethought to take appropriate safety precautions before embarking on this inventive project. Superintelligence was published in 2014 and is more relevant than ever. Put in 2023 terms, the optimistic sparrow could be OpenAI CEO Sam Altman, while the old sceptic could be the more cautious OpenAI board member Helen Toner. Toner was famously ousted, and Altman is once again free to lead his loyal flock.

Better Than Humans

According to the consultancy Gartner, AGI is a form of AI that can understand, learn, and apply knowledge across domains; with its unlimited cognitive flexibility, it can be put to all kinds of tasks. The definition from OpenAI, the creators of ChatGPT, goes one step further: AGI is “highly autonomous systems that outperform humans at most economically valuable work”. That little word “outperform” is significant because it suggests this is not just about being able to do the same work as humans; it is about beating us at it. Is AGI, for OpenAI, the same as superintelligence? It seems so, and many today treat the two terms as synonyms.

There is no consensus on the definition of AGI. Not surprising, considering the technology doesn’t exist. But the disagreement is also due to the fact that there is no consensus on the definition of intelligence. IT experts tend to define human intelligence in terms of the ability to achieve certain goals, while psychologists tend to look at the ability to adapt or survive. These differences are reflected in the suggestions below on how to test an AI system for AGI.

The Turing Test (Alan Turing)

The mother of all AI tests: a human judge must be unable to distinguish machine from human based on the answers to questions posed to both.

The Robot College Student Test (Ben Goertzel)

An AI system should be able to enroll in a university and attend and pass courses in various subjects, just like a human student would.

The Employment Test (Nils John Nilsson)

A machine should not only answer questions in test environments but be able to perform in the real world. It must be able to carry out all the functions of a job at least as well as a human holding the same position.

The IKEA Test (Gary Marcus)

A physically embodied AI is presented with the parts and instructions of an IKEA flat-pack product and is then able to assemble the furniture correctly.

Notice how the methods differ in their emphasis on rational calculation versus the ability to operate in complex environments. Apple co-founder Steve Wozniak has suggested that we have reached AGI when a robot can walk into an unfamiliar home and make a cup of coffee without further instruction. The test is interesting because it reveals how complex an ordinary, everyday task is for artificial intelligence. To my knowledge, no one has suggested testing the ability to create peace, trust, or a green transition.

From AGI to Superintelligence

A common concern about AGI is that it could evolve into superintelligence: a type of intelligence that far surpasses that of humans. The notion is that AGI models will be able to improve their own design until they are significantly more intelligent than humans, in both speed and quality. The possible next development is a kind of intelligence explosion, described as a self-reinforcing spiral of exponentially growing intelligence. As mentioned, the two concepts tend to merge in public debate, but in the literature they are usually treated as distinct technologies.
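
The idea of such an explosion goes back to I. J. Good’s 1965 essay on the “ultraintelligent machine” (see the source list). A toy simulation can show why even a modest feedback loop becomes explosive. The sketch below is my own illustration, not a model from Good or Bostrom: the growth rule and all parameters are arbitrary assumptions.

```python
# Toy model of an "intelligence explosion" (illustrative sketch only).
# Assumed growth rule: each self-improvement step yields a gain
# proportional to current capability, I(t+1) = I(t) * (1 + k * I(t)).
# Both k and the starting level are arbitrary, hypothetical choices.

def self_improvement(i0: float = 1.0, k: float = 0.5, steps: int = 8) -> list[float]:
    """Simulate capability across successive self-improvement steps."""
    levels = [i0]
    for _ in range(steps):
        current = levels[-1]
        levels.append(current * (1 + k * current))  # smarter systems improve faster
    return levels

for step, level in enumerate(self_improvement()):
    print(f"step {step}: capability {level:,.1f}")
```

After a handful of steps the numbers dwarf the starting point. The controversial question is, of course, whether real AI systems would follow anything like such a feedback rule.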

I will soon discuss whether Altman’s owl is more likely to help us or destroy us, but first let’s look at how realistic AGI is. Is it even possible to develop artificial intelligence that can replace the press officer and the McDonald’s clerk, brew you a cup of coffee, and then help you assemble the KALLAX or KLEPPSTAD you bought at IKEA?

Disagreement Over Time Horizon

Prophecies have a somewhat tarnished reputation in scientific circles, so what do you do if you want such predictions to look like research? A common approach is to conduct quantitative surveys among experts. In a survey of 352 AI experts conducted in 2016, half said we will reach AGI before 2061, while 90 per cent stayed within 100 years. So, despite major disagreement about the time horizon, few see it as pure sci-fi fantasy. Keep in mind, though, that even if the respondents are experts in a particular technology, they are not experts in predicting the future. Some are far more optimistic. Dario Amodei, CEO of the AI company Anthropic, speaks of just two to three years, and in the wake of Sam Altman’s sacking it was suggested that the owl’s egg had not only been laid in OpenAI’s nest but might already be hatching. The (unconfirmed) speculation surrounding the sacking was that ChatGPT’s developers were close to a breakthrough.

Power Monopoly or Balance of Power

The question of how AGI might affect humanity is naturally a major topic of debate. If AGI is actually achieved, it would hardly be an exaggeration to call it a milestone in human history. While some tech optimists believe that AGI will be mankind’s salvation, many high-profile voices in the debate are extremely worried. Elon Musk, Bill Gates, and Stephen Hawking are just a few of them.

Prominent Swedish-American physicist and AI researcher Max Tegmark believes that AGI could be both the worst and the best thing ever to happen to humanity. If the technology lands right, its superior intellectual superpowers could eradicate everything from hunger and poverty to CO2 emissions, diplomatic crises, and wars, because it could give us all the answers we need and even tell us how to overcome the barriers that stop us at present.

If, on the other hand, the technology lands butter-side down, the potential harms are vast. Tegmark is not concerned about the Terminator scenario, in which AI develops consciousness and sets out to eradicate humanity. Rather, he is seriously worried about the extreme power that AGI, and especially the superintelligence expected to follow it, could give its creators. The rationale goes: whoever possesses near-unlimited intelligence gains near-unlimited productive power, and with it the ability to control and manipulate the entire world. This applies whether you are a private company, a state, the UN, or a terrorist movement.

For the same reason, Tegmark hopes that development will proceed slowly and with careful preparation, allowing for a balance of power similar to that of nuclear weapons, where the ultimate weapon is distributed among several independent parties. Vladimir Putin hopes for the same: he has stated that whoever leads in AI will be the ruler of the world, and that Russia would share its AI knowledge if it becomes a leader. Judge the credibility of that for yourself.

Enough Problems Without AGI

Another issue in the AGI debate is the alignment problem, closely related to what is called the instrumental convergence thesis: you can get an AI system to do what you say, but not necessarily what you mean. A famous thought experiment comes from the aforementioned Nick Bostrom, who asks us to imagine a powerful AI system ordered to optimize the production of paperclips. The system is so efficient that it converts all available matter on Earth – including humans – into paperclips. It optimizes its goal, but unfortunately takes things a little too literally. A similar example is an AI system set to solve the climate problem that wipes out humanity, since removing the main source of emissions could be seen as the shortest route to the goal.
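
A few lines of code can make the gap between “what you say” and “what you mean” concrete. The sketch below is a hypothetical toy of my own, loosely inspired by the paperclip story; the world, the actions, and both objective functions are invented purely for illustration.

```python
# Toy illustration of objective misspecification (hypothetical example).
# The agent greedily picks whichever action raises its *stated* objective
# most; nothing in that objective says "leave the humans' things alone".

WORLD = {"iron_ore": 5, "cars": 3, "paperclips": 0}

# action name -> (resource consumed, paperclips produced per unit)
ACTIONS = {
    "mine_iron_ore": ("iron_ore", 10),
    "melt_down_car": ("cars", 80),
}

def stated_objective(world: dict) -> int:
    return world["paperclips"]  # what we said: maximize paperclips

def intended_objective(world: dict) -> int:
    # What we meant: paperclips are nice, but cars matter far more to us.
    return world["paperclips"] + 1000 * world["cars"]

def greedy_step(world: dict) -> str | None:
    """Apply the feasible action with the biggest paperclip payoff."""
    feasible = [name for name, (res, _) in ACTIONS.items() if world[res] > 0]
    if not feasible:
        return None
    best = max(feasible, key=lambda name: ACTIONS[name][1])
    resource, gain = ACTIONS[best]
    world[resource] -= 1
    world["paperclips"] += gain
    return best

while greedy_step(WORLD):
    pass

print(WORLD)  # every car and every unit of ore has become paperclips
print("stated objective:  ", stated_objective(WORLD))    # 290: looks great
print("intended objective:", intended_objective(WORLD))  # 290 vs 3050 if
# the agent had simply left the cars alone (50 clips + 3 * 1000)
```

The optimizer is not malicious; it simply maximizes exactly what it was given. That, in miniature, is Bostrom’s point.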

These stories are extreme and hardly something we need to worry about anytime soon (unless there is an unexpected intelligence explosion tomorrow). But the problem of misalignment between human intent and algorithmic goal fulfilment isn’t limited to inventive thought experiments. One example of an instruction having different and more violent consequences than intended is Instagram’s algorithms optimizing young girls’ time spent on the platform by pushing self-harm content. Another is the 2010 Flash Crash, in which trading algorithms executed a cascade of large sell orders in E-Mini S&P futures, driving prices down and ultimately triggering a market crash. While we can smile at the paperclip example, it is worth remembering that similar things are already happening with much simpler AI, and with very serious consequences. Without safe and robust alignment between AI and human values, it is easy to imagine the problems growing in strength as the technology becomes more advanced.

Human vs. Artificial Intelligence

One could argue that if an AI model is really that intelligent, it should be able to understand a simple command from an ordinary human. Unfortunately, it is not that simple, partly due to the aforementioned lack of clarity about what intelligence is. A system can be described as intelligent even if it interprets orders very literally and doesn’t read between the lines, and even if it lacks contextual understanding or an understanding of human values in any broad sense. We should regularly remind ourselves that AGI is not human intelligence, even though it is built by humans and was perhaps initially aimed at simulating them.

The Great Battle of Human Values

A related issue is that prompts can be impossible to interpret unambiguously. Even if an intelligent system makes good suggestions for organizing society, laws, and workplaces in the best possible way, it can hardly provide a single answer that everyone will find good and right. The same goes for climate change, school systems, and criminal justice, because there is obvious disagreement about what is best. Value judgements are hidden in more places than we are normally aware of, as we already know from issues surrounding bias in data. The fact that an intelligent computer system matches or surpasses humans on many parameters does not mean that all humans will find its decisions right, good or, for that matter, intelligent.

Our Limited Understanding

This brings us to another issue: will we be able to understand the genius of any advice from an AGI or superintelligence? The point is reminiscent of the black-box dilemma: the ability of AI systems to make good decisions is inextricably linked to their ability to make sense of massive amounts of data that we cannot make sense of ourselves. So, should we trust the machine if we know it is mostly right but don’t understand its ‘reasoning’? The question becomes even more pressing if we don’t merely ask for advice but embed AI decisions in autonomous systems that act on them directly.
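
One crude way to make that question concrete is to compare the expected cost of following each advisor. The numbers below are hypothetical, chosen purely for illustration; the point is precisely what the arithmetic leaves out.

```python
# A deliberately crude framing of the black-box trust question.
# All accuracies and costs are hypothetical, illustrative numbers.

def expected_cost(accuracy: float, cost_of_error: float) -> float:
    """Expected loss from always following an advisor of given accuracy."""
    return (1 - accuracy) * cost_of_error

ai_cost = expected_cost(accuracy=0.95, cost_of_error=100)     # opaque, mostly right
human_cost = expected_cost(accuracy=0.80, cost_of_error=100)  # legible, less right

print(f"expected cost -> AI: {ai_cost:.0f}, human: {human_cost:.0f}")
# The arithmetic says "follow the AI", but it hides what actually worries
# us: with a black box we cannot tell *which* five per cent of decisions
# go wrong, or how badly, so cost_of_error itself is hard to estimate.
```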

Readers with greater faith in human abilities might think it foolish to even consider this. However, don’t forget that billions of people happily organize their lives according to extremely intrusive rules they have no chance of verifying the rationality of. A large part of the world’s population follows religious rules based on the sole principle that they are right because God says so, even though the rules are beyond our comprehension.

The Lord’s ways are inscrutable, and the same is true of advanced AI. So far, trust in the former seems greater, but AI is gradually catching up to God. In this light, one does not need to be a Marxist to ask whether AGI developers should be bound more firmly to the common good than they are by their increasingly less credible mission statements.

One Last Chirp

The small story about the risk-taking sparrows that opens Nick Bostrom’s book on superintelligence doesn’t have a conclusive ending, and neither does this one. It does, however, show a distinct absence of the enthusiasm some actors display towards the idea of AGI. Even if researchers succeed in developing AGI without jeopardizing human civilization – if the sparrows manage to domesticate the owl – it is probably too soon to expect Utopia. People and cultures differ, and it is hard to imagine even the most intelligent authority implementing a form of society everyone will voluntarily embrace. Change rarely occurs without friction and sacrifice, and it seldom improves when based on someone claiming to have found the truth, whether that someone is a religious or a technological leader. While we wait, I cling to the hope that my pessimism is due to my limited human intellect.

Photo: Unsplash.com and paperclips by Dan Cristian Pădureț

Also read “Big AI Tech Wants to Disrupt Humanity”

Source Material

“Speculations Concerning the First Ultraintelligent Machine”, Irving John Good, 1965

https://vtechworks.lib.vt.edu/server/api/core/bitstreams/a5e423ee-54e0-4eec-aeca-32b73f851af5/content

“Life 3.0: Being Human in the Age of Artificial Intelligence”, Max Tegmark, 2017 

https://www.williamdam.dk/life-30-being-human-in-the-age-of-artificial-intelligence__312527

“Superintelligence: Paths, Dangers, Strategies”, Nick Bostrom, 2014

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies

“Putin says the nation that leads in AI ‘will be the ruler of the world’”, James Vincent, 2017

https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

“OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say”, Reuters, 2023

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

“AI Belongs to the Capitalists Now”, The New York Times, 2023

“OpenAI-gate: profit vs. menneskeheden” (“OpenAI-gate: Profit vs. Humanity”), Pernille Tranberg, 2023 (in English)

https://kforum.dk/nyheder/analyser/article16624615.ece

AGI Definition: Tech Target

https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI

AGI Definition: OpenAI

https://openai.com/charter

AGI Definition: Gartner

https://www.gartner.com/en/information-technology/glossary/artificial-general-intelligence-agi

AGI Definition: Wikipedia

https://en.wikipedia.org/wiki/Artificial_general_intelligence