Why I won’t use ChatGPT

Generative language models, ChatGPT in particular, have been on many people’s lips and in plenty of headlines since OpenAI launched the first version just over a year ago. One major topic of discussion has been how educational institutions should approach exams when ChatGPT appears to be able to answer everything for you. The debate has centred on ‘cheating’, and the quick solution has been banning.

Is it cheating?

As a researcher in the field of education, my answer to this question is a clear “no”. If our education system is not supposed to teach children and young people to understand the tools available to them in their future personal and professional lives, who else should? That said, I agree that tests and exams are often designed in ways that allow copy-pasting from a generative language model, which obviously benefits no one – certainly not the children and young people. So this is a good opportunity to replace outdated exam formats. I definitely think that children and young people should learn how generative language models are built and what that means for what they can and cannot do. Exams should reflect the real-life projects, problems, and solutions that students may face in their future personal or professional lives. That I do not think generative language models should exist at all is another matter. My perspective here is that they do exist, that they are presumably legal, and that many people use them.

Copyright issues

Another serious problem is that generative language models in their current form may be trained on content that the tech companies behind them have never been given permission to use. In other contexts, you must credit authorship, and in some contexts, certain content may not be used at all. Here, however, it is apparently perfectly fine – for now – to collect all kinds of available content, stuff it into the machine and then ‘magically’ transform it into another form. Afterwards, paradoxically, users must credit ChatGPT – or whichever other language model they use. In my opinion, this is a clear ethical problem and a violation of our right to decide over our own works, professional as well as private.

Simulation of humans

As a researcher in children and young people’s understanding of digital technologies, I find it extremely damaging that many digital technologies today are designed to simulate humans. It certainly does not make it any easier to understand that you are dealing with a machine when it addresses you in polite phrases and uses personal pronouns like ‘you’ and ‘I’. Some digital technologies are also designed to physically simulate humans, which I have written about previously here and, with a colleague, here. As one terrible example among many, the many children and young people who use Snapchat are now involuntarily equipped with My AI. Only as a Snapchat+ subscriber can they remove the inhuman chatbot. Let us rather design (digital) technologies that stimulate humans – to the extent that we actually need technologies for that purpose.

More of the same of the same of the same of the …

Like many other digital technologies that attempt to simulate human consciousness, generative language models carry a risk – or rather a guarantee – of bias. The Danish dictionary describes bias as a “misrepresentation of survey results, measurable quantities or similar, especially due to methodological errors or unconscious preferences” (translated). My objection is that the content collected by the company OpenAI (in the case of ChatGPT) may be one-sided and censored – especially in the long run, as content will just be endlessly reproduced in ‘new’, ‘creative’ ways if we all start using ChatGPT as a ‘source of inspiration’ whenever we express ourselves, in one way or another, in one context or another. In this way, perspectives can eventually be lost, and we may forget to consider specific aspects of issues. It is a situation similar to our narrow-minded news feeds on social media: we get more of the same, and to me it seems stupefying.

Values gone wild

Even if language models only used content with permission, did not try to communicate like humans and (utopian as that may be) represented a diverse view of the world, I would still not use them. My biggest problem with them is that I do not know where the content comes from or how it is generated. It could, for example, originate from someone with whom I have very little sympathy. After all, when I seek out knowledge or inspiration myself, I use my own experiences, attitudes, and values to construct my answers, and as a source-critical person, I know who said the things I use. With ChatGPT, it seems easy to lose ourselves in the fascination of how amazing the output sounds, whether it is a ‘funny speech for mom’s birthday’ or an ‘easy-to-understand explanation of the periodic table for school children’. Even as critical users, we may find it difficult not to be a little (mis)led by letting ourselves be influenced by the content, the style and so on. To me, it seems behavior-manipulating, disempowering and far from my personal values. I understand that this means I will have to spend more time writing that speech for mom or explaining the periodic table to a group of school children, but in return, I know that I can vouch for the result.

Perhaps arguments will come up that I have not thought of, or I will learn things about the models that I did not know. Some might also argue that it is entirely possible to critically assess the content we are presented with. For now, however, I find myself with nothing but dislike of and objection to this kind of use of mechanically generated sentences based on statistics.

Photo by Kari Shea on Unsplash