The development of Artificial General Intelligence (AGI) is a declared goal of Meta, OpenAI, Google, xAI, and several other major tech companies. The desperate AI arms race we are currently witnessing essentially boils down to being the first to build a computer system capable of performing any task better than humans. Developing such technology is generally considered risky. Will it lead to mass unemployment? Will we lose control over it, allowing it to harm us? And what happens if the technology falls into the wrong hands?
In this light, it’s surprising that the big tech companies, which otherwise talk a great deal about ethics, ignore their own and others’ concerns and set full sail toward superintelligent AI systems.* There may be several explanations for this, but one of them is likely that Silicon Valley’s tech elite largely consists of tech optimists. The concerns are therefore mixed with a strong belief that AGI can also pave the way to a kind of utopia. Maybe we’ll develop an intelligence so fantastic that it can stop wars, solve the climate crisis, and eliminate inequality and injustice?
My wish is not to kill the hope that AI can create a better world, but if we are to get the best out of the technology, I find it naive to lean too heavily on utopian dream scenarios. For several reasons.
Firstly, people and cultures are different, and even with help from the most intelligent computer system, it’s hard to imagine it introducing a form of society that everyone would voluntarily accept living in. Fundamental changes rarely happen without friction and sacrifice. And things rarely improve on the premise that someone – whether an AGI, a religion, or an authoritarian state – claims to possess the only truth.
Secondly – and this is the main point of this article – many of the biggest problems in the world today are not due to low intelligence. At least not the kind of computational intelligence that pervades Silicon Valley. The world’s major problems – global warming, war, poverty, oppression – and our inability to solve them are less about facts and more about conflicting values: situations where we move from talking about how things are to talking about how we think they should be. We largely know how to solve the climate crisis, how to stop wars, and how to eradicate poverty and oppression. When we fail to act, it is usually because of conflicting interests and different values (sometimes so deeply rooted that they even blur what counts as a fact). But let’s put climate change, oppression, and terrible wars on hold for a moment and try to illuminate the extensive problem of diverging values through a simple, good old-fashioned ethical dilemma.
Ethical Dilemma: The NGO Employee
Imagine you are the head of an NGO when one of your employees gets cancer. His performance drops, and you must choose between the following:
- To fire the employee, who is not insured and therefore will lose his home
- To cancel a project in South Africa, which will cost several human lives
An AGI’s decision-making process would be based on its programming and its data. But can it collect enough data and be programmed to solve the dilemma in a value-neutral way? Hardly. There comes a point where the key issue is no longer the amount of data, but how to make a value-based prioritization of that data. In the following, I sketch the answers you might get to the dilemma, depending on which ethical position the AGI’s programming and data rest on.
Duty Ethics: The Answer from Immanuel Kant
Every human being must be treated as an end in themselves – never merely as a means. The moral value of an action is determined by its intention: if you have good intentions, the action is good. And vice versa. Whether the consequences in practice turn out to be positive or negative is irrelevant in duty ethics.
The answer from the Kant-AGI is to keep the employee. It is not ethically right to treat him as a means to an end – which in this case is saving human lives. The right action is to treat him as an end in himself.
Utilitarianism: The Answer from John Stuart Mill
The good action in utilitarianism is the one that provides the greatest possible happiness for the largest number of people. Within the utilitarian framework, the intention behind the action does not affect its moral value.
The answer from the Mill-AGI is to fire the employee and hire a new one, who can ensure the fundraising needed to save the people in South Africa and thus ensure the majority’s well-being.
Christianity: The Answer from God
There are likely more interpretations of the Bible than of the works of Kant and Mill. However, if we choose to ground our judgment on the principle of ‘love thy neighbor,’ the answer becomes almost self-evident. In the tradition of the Old Testament, ‘neighbor’ is typically understood as one’s friend, relative, or fellow countryman.
The answer from God is thus that your employee is your neighbor – at least more so than the South Africans, if you live in the Western part of the world. So he must stay on board. And the South Africans must seek help from their own neighbors.
Relativism: The Answer from David Hume
According to moral relativism – which is quite widespread in Western societies today – there is no objective or universal good and evil. Morality is relative to the individual or to the context in which it finds itself.
The answer from the relativistic AGI thus depends on what it happens to feel that day. But computers don’t have feelings, so perhaps what’s needed is a sort of random number generator – and then we might as well have rolled a die instead of developing an AGI.
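To make the point concrete, here is a minimal toy sketch in Python. It is purely illustrative: the option attributes and the number of lives at stake are made up, and nothing here resembles a real AGI. It only shows that the same dilemma, handed to the same decision procedure, gets different answers depending on which ethical module is plugged in.

```python
import random
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    treats_person_merely_as_means: bool  # what duty ethics cares about
    lives_saved: int                     # what utilitarianism cares about
    helps_neighbor: bool                 # what 'love thy neighbor' cares about

# The NGO dilemma, encoded with illustrative, made-up numbers.
OPTIONS = [
    Option("keep the employee", treats_person_merely_as_means=False,
           lives_saved=0, helps_neighbor=True),
    Option("fire the employee", treats_person_merely_as_means=True,
           lives_saved=5, helps_neighbor=False),
]

def kant(options):
    # Duty ethics: rule out any option that treats a person merely as a means.
    return next(o for o in options if not o.treats_person_merely_as_means)

def mill(options):
    # Utilitarianism: maximize aggregate well-being (here, crudely, lives saved).
    return max(options, key=lambda o: o.lives_saved)

def neighbor(options):
    # 'Love thy neighbor' (Old Testament reading): prioritize those close to you.
    return next(o for o in options if o.helps_neighbor)

def relativist(options):
    # Relativism supplies no selection criterion; a machine without feelings
    # is left with the equivalent of rolling a die.
    return random.choice(options)

for policy in (kant, mill, neighbor, relativist):
    print(f"{policy.__name__:>10}: {policy(OPTIONS).name}")
```

Gathering more data changes the attribute values of each option, but not the choice of which attribute to optimize. That choice is the value judgment, and it sits outside the code.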
Resolution of the Dilemma
Some might claim that an AGI would be smart enough to resolve the dilemma for us. It could, perhaps, calculate exactly which ethical approach is best for most people. The problem is that this begs the question, because the question presupposes a utilitarian view: it assumes that the good action is the one that produces the most good for the most people. If we instead swore by Kantian duty ethics, with its high standards for individual inviolability, the answer would, as the dilemma above shows, look entirely different.
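Continuing the toy sketch above (still purely hypothetical), the circularity is easy to make explicit: any ‘meta-chooser’ that picks the framework producing the most good for the most people has already adopted the utilitarian yardstick as its meta-criterion.

```python
def choose_best_ethics(policies, options):
    # Ranking ethical frameworks by 'which produces the most good for the
    # most people' is not neutral: the meta-criterion is itself mill().
    return max(policies, key=lambda p: p(options).lives_saved)

best = choose_best_ethics([kant, mill, neighbor], OPTIONS)
print(best.__name__)  # prints 'mill' – the yardstick decided the winner in advance
```

A Kantian meta-chooser – say, one that discards any framework permitting a person to be treated merely as a means – would crown kant instead. Whoever picks the meta-criterion picks the winner.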
We have already seen a glimpse of how these issues might play out in Google’s recent withdrawal of its image generator, which created historically incorrect images – for example, of a pope as an African man or an Asian woman. The reason could very well be that in Google’s efforts to keep the machine from generating a noticeable overrepresentation of white men, it made a sort of ethical overcompensation, which led to a distorted rendering of a reality that is in fact characterized by a large proportion of white men.

But in many cases, one might actually wish to overcompensate. Maybe we do not want to reproduce a reality we find somewhat unjust? Should we maintain and reproduce the fact that most nurses are women? Or that fewer women than men hold powerful positions in society? Or that low-wage jobs in rich Western countries are often filled by non-whites? Imagine that an AGI is to set these standards. What is the most intelligent solution? Is it to show the reality preferred by a liberal or the reality preferred by a neo-conservative? The point is, of course, that higher intelligence is no help in cases like these. The correct answer is value-based. (Notice that this consideration itself rests on a relativistic ethical view. I personally disagree with it; see chapter 4 of my book on robot ethics, “Killing Sophia – Consciousness, Empathy, and Reason in the Age of Intelligent Robots,” for more details.)
Manipulative Communication
AGI could also be thought to help us make a better world in other ways. One is by developing targeted, individualized communication strategies so psychologically effective that they can convince everybody to march in the same direction – for example, to end general overconsumption and fossil fuels. The problem is that value questions still get in the way, for who should decide to use AGI in such a way? It must be someone with a certain set of values. They could be climate skeptics or climate activists. And it is easy to imagine that some would oppose the whole project with reference to ideals of individual data protection or a general opposition to manipulative communication.
Raising these issues is not the same as saying that AGI will not be able to solve big problems for us. It may well help us with plenty of things: the development of new medicine, better planning in the healthcare system, the production of zero-emission energy, and much else. My point is “merely” that we are probably rejoicing too early if we choose to believe that it will bring eternal happiness and benevolence to the human race.
What Scenarios Are Realistic?
I first wrote about some partially similar issues more than six years ago, in connection with Apple wanting to make Siri able to give better answers to value-related questions. Back then, I outlined the following scenarios for the development of the (still!) rather simple chatbot in Apple’s phones. Perhaps the considerations have some validity today, if we imagine that AGI models will be developed in several versions by different companies and in different countries. This can happen if we avoid a situation where one company is first to reach a sort of superintelligence that is able – in the name of power and market share – to stop the development of other AGI models. If we disregard that scenario, it is worth noting that conservative Americans have already developed a conservative chatbot, and that China’s Ernie Bot is based on Chinese political values rather than Western ones.
However, please read and judge my six-year-old prophecies for yourself. And remember to adjust them to fit an AGI scenario (also note that I am not writing this to win an argument – my only aim is to inspire new reflections on the matter):
- We will be able to choose ethical principles for our digital assistants – a bit like we can today change Siri’s voice, gender, and interpretation of specific words
- Tech companies will use it as a point of marketing differentiation that digital assistants differ not only in their abilities but also in their ethics
- Tech companies will become subject to political regulation in a global battle over values
- Companies and authorities will think that debaters like the undersigned exaggerate artificial intelligence’s potential for value-based influence and will not take the question seriously
* Please note that the terms AGI and superintelligence are often used synonymously today. Originally, the two were distinct.
AGI has been thought of as artificial general intelligence in the sense that, like a human brain, it is capable of behaving intelligently in general and not just in isolated areas. A common concern has been that, by improving its own design, an AGI could develop into superintelligence – a type of intelligence that far surpasses human intelligence. The next possible development is a form of intelligence explosion, described as a runaway spiral of exponentially growing intelligence.
Photo: From Google Gemini, which is built on a great deal of copyrighted material used without permission.
This text has been translated from Danish using DeepL, ChatGPT and old-fashioned dictionaries printed on paper.