Following the launch of ChatGPT, which uses artificial intelligence to write texts and much more – including schoolwork – many teachers are unsure how to proceed. Have students used ChatGPT or other AI tools to do their homework or exams?
It’s nothing new that some students try to cheat where they can, and educational institutions therefore have access to a wide range of plagiarism software to check for it. But with new AI chatbots like ChatGPT, that same plagiarism software doesn’t work.
“Text from the chatbot doesn’t fall into the trap of known material because the text is fresh and unique to the user who requested it. Transcribing from the chatbot is therefore not like transcribing from another pupil or from your sister’s old essays,” wrote Professor Asmus Leth Olsen from the University of Copenhagen in Berlingske.
Update 21st December: It is not true that there are no detector tools. The GPT-2 Output Detector Demo works, according to David Knuffke, who told me so on Mastodon. Try it out. He points out, however, that it is easy to get around. Another tool is Detect ChatGPT.
Update 26th January: This one is also pretty good: ZeroGPT.
So what the heck do you do?
Teachers should embrace new technology and use it constructively.
- They could require students to declare whether they have used a bot and, if so, how: did they use it only for research, or did it write/figure things out for them? Under the EU’s forthcoming AI Act, declaring any use of AI will soon be a legal requirement anyway (a transparency obligation), so that people can tell whether they are interacting with a robot or a human in the digital world. That way, teachers can appeal to students to take the lead.
- Teachers could ask students who use a bot to copy the AI-generated text into a document and turn on track changes, showing the teacher what they have changed and what they have kept.
These ideas are based on trust, I know.
In the longer term, you can use flipped classrooms, which I’ll come back to in a moment. But if you think your student is cheating, there are ways teachers themselves could verify whether a text was written by a human or an artificial intelligence. Jan Birkemose from Medietrends asked the chatbot about just that:
One possibility is to examine the content and structure of the text. Artificial intelligence-generated text often has a more monotonous and repetitive structure, compared to text written by a human. This can be seen, for example, in repeated phrases, repeated structures, or in the text lacking meaning, coherence or depth.
Another option is to examine the sources and references of the text. Artificial intelligence-generated text often has fewer or no references to other sources, compared to text written by a human. This can be seen, for example, in the lack of references to academic literature, experts or other sources that can confirm or challenge the claims of the text.
Finally, one can also try to examine the purpose and intention of the text. Artificial intelligence-generated text often has a more commercial or manipulative purpose than text written by a human. For example, this can be seen in texts with a single purpose, such as selling a product, or in texts that try to influence the reader’s perception or behavior in a certain way.
Yet another way is to look for typos. Human text is often full of typos and slang, and looking out for these sorts of mistakes and subtle nuances is a good way to identify text written by a human, according to The Algorithm newsletter from Melissa Heikkilä at MIT Technology Review.
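The first of those rules of thumb – monotonous, repetitive structure – can even be sketched as a crude script. The toy example below is entirely my own illustration (the heuristics and their interpretation are assumptions, not a reliable detector): it measures how much sentence lengths vary and how often three-word phrases repeat.

```python
import re
from collections import Counter
from statistics import mean, pstdev

def monotony_score(text: str) -> dict:
    """Crude heuristics for 'monotonous and repetitive structure'.
    Purely illustrative -- NOT a reliable AI-text detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = sum(c - 1 for c in trigrams.values() if c > 1)
    return {
        # low spread in sentence length -> more "monotonous"
        "length_spread": pstdev(lengths) / mean(lengths) if lengths else 0.0,
        # repeated three-word phrases -> more "repetitive"
        "repeated_trigrams": repeated,
    }
```

A teacher obviously shouldn’t grade anyone on numbers like these – real detectors use trained language models – but running the function on a suspiciously uniform essay versus a lively one makes the chatbot’s point concrete.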
If OpenAI had had even a little responsibility in its DNA, it would not have launched ChatGPT without also launching some kind of watermark or a small piece of software where, among other things, teachers could test whether a text has been through ChatGPT, as I write here. But OpenAI has more commercial intent than public interest, so it’s open to everyone. However, after some criticism, they have put some restrictions in place since publication. And they are, along with other AI companies, working on building tools that can help humans detect whether a text is human- or AI-generated, according to MIT Technology Review.
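For the curious, the kind of text watermark researchers discuss can be illustrated with a toy scheme – my own simplification, not OpenAI’s actual method. The idea: a generator secretly prefers words from a keyed “green list”, so a checker holding the same key can count how many of a text’s words land on that list; ordinary human text should hover around chance (about half), while watermarked text scores much higher.

```python
import hashlib

def _is_green(prev_word: str, word: str, key: str = "secret") -> bool:
    # Pseudo-randomly split the vocabulary into "green" and "red" halves,
    # seeded by the previous word plus a secret key only the checker knows.
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "secret") -> float:
    """Fraction of words on the green list. A watermarking generator
    would push this well above the ~0.5 expected by chance, letting a
    checker flag suspiciously high fractions."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(_is_green(p, w, key) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

The checker never needs the original model – only the key – which is why a watermark could have shipped as the small teacher-facing tool this paragraph wishes for.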
A writer at The New York Times explains how teachers can change their teaching with so-called flipped classrooms.
“Rather than listen to a lecture in class and then go home to research and write an essay, students listen to recorded lectures and do research at home, then write essays in class, with supervision, even collaboration with peers and teachers,” she writes:
“Teachers could assign a complicated topic and allow students to use such tools as part of their research. Assessing the veracity and reliability of these A.I.-generated notes and using them to create an essay would be done in the classroom, with guidance and instruction from teachers. The goal would be to increase the quality and the complexity of the argument.”
Update 2: David Knuffke, a science teacher in Singapore, drafted an AI Usage Policy for Schools that you could use to make your own.
If you have any other good tips, please email info@dataethics.eu as soon as possible and this page will be updated.
The image is generated by another service from OpenAI – DALL·E – (read more here) with the prompt: ‘ChatGPT used by school children’
Translated with the help of www.DeepL.com/Translator (free version)