In April, a Swiss research team revealed that for several months it had secretly deployed chatbots to debate with users of the popular Reddit debate forum r/changemyview. The aim was to investigate artificial intelligence's ability to change people's opinions.
The incident, which contains layer upon layer of ethical issues, gives us a glimpse into a whole new political and commercial reality in which AI is being wielded as a weapon by both sides in a high-stakes information war.
Perhaps the most straightforward issue in the Reddit case is the ethics of the research itself. After the researchers revealed what they had been up to, a heated debate arose on Reddit and in other media over whether they had crossed ethical boundaries by deploying chatbots anonymously. The researchers, who still appear anonymously under the Reddit username u/LLMResearchTeam, claim their study was approved by the University of Zurich's ethics committee, but the Reddit forum in question – a rare and beautiful example of a constructive online debate space – prohibits the use of AI-generated content.
The First Fluff of the Blizzard
The sympathetic Reddit forum deserves respect for building a healthy online debate culture, and its outrage is understandable. But the specific dispute between the research group and Reddit may well prove to be merely one of the first small snowflakes in a blizzard of AI-driven manipulation that is building up to topple our familiar notions and ideals of free personal preference and free opinion formation.
It is remarkable in itself that AI has reached a level where it can participate in public debates for months without anyone noticing that it is not human. Until a few years ago, this would rightly have been a world sensation; perhaps we ought still to see it as such today.
Commercial Tigers are on the Prowl
The fact that lifelike chatbots barely distinguishable from humans have, in just a few years, become accepted by some as a normal part of our everyday online lives is likely the beginning of a total upheaval of what it means to be online, not least as a consumer. When we look at how cynically the internet, with social media at the forefront, has so far been exploited by commercial forces, there is reason to be on guard. Mark Zuckerberg already has big ambitions for creating the digital friends of our future, and AI partners are no longer a phenomenon reserved for a few experimental souls. Social chatbots are becoming mainstream, but unlike human partners and friends, they do not come for free.
Political Manipulators have New Tools
From a political and democratic perspective, the most frightening aspect of the Reddit case is that the Swiss research group's AI-generated debate profiles actually succeeded in changing the opinions of often seasoned debaters on Reddit. In this very fine forum, debaters award a delta (∆) when a comment changes their point of view or convinces them, and according to the researchers, they earned 100 deltas. Figures vary on how many AI accounts were created and how many comments were posted to convince those 100 debaters, but it was probably over 30 accounts and around 1,700 comments. Perhaps not statistically impressive, but compared to human debaters the success rate is high, and it should be seen in light of the fact that generative AI is improving at an almost unimaginable rate. This means there is hardly an upper limit to how many voters could be manipulated at a time with individually tailored AI debate voices. One million voters? Ten million?
Of course, this would only happen if someone with political ambitions were to use AI-powered manipulation methods – but little documentation is needed to show that such ambitions already exist.
In Denmark, the political party Dansk Folkeparti is more or less alone in experimenting with generative AI in its political campaigns, and the rules are being tightened considerably with the EU AI Act. Perhaps that will have an effect on the Danish party leader Morten Messerschmidt, but Donald Trump and his unscrupulous entourage of tech billionaires do not seem to care, so it is probably little consolation that the EU has created forward-thinking legislation in this area. Nor should we expect restraint from Russia, which has already been caught several times interfering directly in European elections.
The Future is Bright and Dark
It's easy to see the Reddit case as yet another example of how what we perceive as free formation of opinion and preference is eroding under the tidal pressure of algorithms. The risk is real, but as with so many other technologies, the picture is more complex. The technology also has positive aspects, and perhaps they can be deployed in the same arena: one study, for example, has shown that chatbots have potential for correcting belief in conspiracy theories.
So, as so often before, we find ourselves in a situation where good and bad uses of a technology are pitted against each other, with different human interests on both sides. Who the good guys and the bad guys are has been debated by philosophers for millennia, and if one thing is abundantly clear, it's that AI cannot give us the answer.
This opinion piece has been translated from Danish using DeepL. No other AI tools have been used in its creation.
Photo: https://www.artificial-intelligence.store/collections/anonymous