The Ethical Challenges of Generative AI

This article was originally written in Danish for an encyclopaedia and explains what generative AI is, what it can be used for, and how it is challenged by law and ethics. It was translated into English with the European AI-based translation service Deepl.com.

Generative AI, an advanced form of artificial intelligence, is trained on large amounts of data from the internet and uses machine learning to create something new. Unlike traditional machine learning, generative AI can operate without ongoing human involvement. There are ethical issues surrounding the use of generative AI, but at the same time there is a belief that the technology can create positive social change and save time and money if used correctly.

While artificial intelligence has been around for decades, generative artificial intelligence (generative AI) emerged in 2018 and became widespread and world-renowned in 2022 with ChatGPT, a service from the American company OpenAI. Unlike machine learning, which has been used for years for pattern recognition and predictions in areas such as healthcare and industrial production – and which requires human involvement – generative AI can work autonomously when prompted to perform a task. A prompt can be anything from a short question to a long, detailed instruction explaining the task or the context the service must take into account, for example: "Summarise this report in five bullet points for a non-expert reader."

Text- and voice-based generative AI services are also known as large language models (LLMs) or, more popularly, chatbots. These include services such as ChatGPT, CoPilot, Mistral, Claude, Gemini, DeepSeek and Perplexity. They are all designed so that you get a human-like response when you prompt them and set them to work for you. Some also call them personal assistants.

In addition to text-based generative AI services, there are services that can generate music (e.g. Udio and Suno), photos (e.g. Firefly and Midjourney) and moving images (e.g. Sora), and some that can generate program code (e.g. AlphaCode).

Generative AI can be used for many of the tasks that humans normally perform: 

  • Programming and debugging computer code. 
  • Customer service chatbots that handle simple queries instead of an FAQ. 
  • Diagnostics and research – including summarising long, dense texts. 
  • Brainstorming and testing new ideas in music and art.
  • Structuring work tasks.
  • Help with homework at all levels of the education sector. 
  • Acting as a sparring partner at work or as a conversation partner.

Thorough Fact Checking

The first studies on the effective use of generative AI (e.g. a Norwegian study of CoPilot) show that it is particularly effective in areas you already know thoroughly, because then fact-checking the answers takes less time and you can work faster than usual with help from a generative AI service. For example, lawyers can have it write a short, concise summary of a dense legal text, which the lawyer can fact-check relatively easily because the lawyer has the factual knowledge. You can also use generative AI to condense three people's notes into action points, to ensure linguistic consistency in an application, or to check whether there is something you have overlooked in an application. These are just a few examples of output that is relatively easy to fact-check.

On the other hand, studies also show that if you use generative AI in areas you have no knowledge of, the necessary fact-checking can mean that you don’t save time by using the new technology.
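
To make the note-summarising example above concrete, here is a minimal sketch of how such a task might be sent to a text-based service programmatically. It assumes the openai Python package and an API key; the model name and the prompt wording are illustrative, not a recommendation of any particular product.

```python
# Minimal sketch: asking a generative AI service to merge three people's
# notes into action points. Assumes the `openai` package is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

notes = [
    "Anna: budget must be approved before March.",
    "Ben: still waiting for legal review of the contract.",
    "Clara: website relaunch needs two more testers.",
]

prompt = ("Summarise the following meeting notes into a short list of "
          "action points, one per line:\n\n" + "\n".join(notes))

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[{"role": "user", "content": prompt}],
)

# The output still requires human fact-checking, as the studies stress.
print(response.choices[0].message.content)
```

Whatever service is used, the point from the studies still applies: the person running the prompt must know the material well enough to fact-check the generated action points.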

AI Act

The development of generative AI services is driven by large technology companies such as OpenAI/Microsoft, Google, Amazon and Meta. These companies – and investors – are throwing billions of dollars at the technology because they expect to make a lot of money from it. The development is primarily profit-driven because there is no legislation in the US that limits the use of AI to protect people, democracy and the planet. The EU, on the other hand, has ensured this with the GDPR, the Digital Services Act and the AI Act. For example, European law requires that if you put a chatbot on the market, you must clearly tell users that they are interacting with a machine. The same goes for AI-generated content; it must be declared so that recipients know that it is AI-generated. The technology has become so good in just a few years that it is impossible for the human eye to tell the difference between real and AI-generated photos, and according to researchers cited in a Europol report from 2022, as much as 90% of content on the internet could be AI-generated by 2026.

Another requirement of the AI Act is that those who use generative AI must be trained in it (AI literacy), which can be difficult to achieve at the current speed of development. The AI Act also states that if you use AI in high-risk areas – such as critical infrastructure, education, security, HR, the legal system, migration, and essential private and public services such as credit scoring – you must, among other things, perform risk assessments, use quality data and log all activities.

Ethical Challenges

  • Generative AI is accused of copyright infringement. News media, writers, graphic designers, photographers, actors, filmmakers and other creatives believe that the AI services are developed using their copyrighted works without authorisation, and the lawsuits are piling up – especially in the US, but Danish media have also jointly threatened legal action. Microsoft, the company behind CoPilot – which is built on the same technology as ChatGPT – believes that the "fair use" doctrine in US copyright law will protect it.
  • Misinformation. It is one thing that generative AI can be used to create and spread misinformation if you prompt the services to do so. Another is that the services produce misinformation on their own – so-called "hallucinations" – because the technology works by guessing the next word in a sentence (illustrated in the sketch after this list). Studies from the Reuters Institute and the BBC show that around a fifth of what comes out of generative AI services is hallucinated.
  • The technology is also being used to create and spread deepfakes that misuse other people's names, faces and voices to win votes in political elections or coax money out of people.
  • There can be issues around bias and discrimination due to the data used in a service.
  • There are challenges regarding the technology's climate impact, as it significantly increases the use of electricity and water in data centres. However, the Chinese company behind DeepSeek has shown that the technology can be made less harmful to the climate.
  • Privacy. It is unclear whether generative AI is legal under the EU's privacy law, the GDPR. EU data protection authorities have asked OpenAI over 30 questions about its compliance with the GDPR.
  • Finally, there is an ethical issue related to anthropomorphisation. Most services do not write and speak as if they were machines; they are designed to respond in a human-like manner. This can cause people to anthropomorphise them, i.e. treat them as human and develop feelings for them. There are several cases of people falling in love with their chatbots or using them as AI therapists and friends. Particular care should be taken to keep children and teenagers away from so-called AI friends.
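
To illustrate the "guessing the next word" mechanism behind the hallucinations mentioned in the list above, here is a deliberately simplified toy sketch in Python. The probabilities are invented for the example; a real language model uses a neural network to score tens of thousands of possible next words.

```python
# Toy illustration of next-word guessing (not a real language model).
# A language model assigns probabilities to possible next words and
# samples one; nothing in this step checks whether the result is true.
import random

# Invented probabilities for the word following
# "The capital of Australia is":
next_word_probs = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.35,      # plausible but wrong -> a "hallucination"
    "Melbourne": 0.10,   # plausible but wrong
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one continuation according to the probabilities.
print(random.choices(words, weights=weights, k=1)[0])
```

Because the model optimises for plausible continuations rather than true ones, a fluent but wrong answer such as "Sydney" can come out with no warning – which is why the fact-checking discussed earlier is essential.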

Are Humans Getting Dumber?

The technology companies behind generative AI are working towards the goal of AGI, Artificial General Intelligence: a form of superintelligence that would be able to perform all the cognitive tasks humans can, only much faster. It is uncertain whether they will ever achieve AGI and what it would actually mean. But regardless of how far they get, generative AI can already perform some tasks that normally require human intelligence. In this context, some are starting to ask: is generative AI making humans lazier and lazier, because we tend to take the path of least resistance and let the machines do the work for us? Generative AI is still an immature technology, and there are not yet enough research results to answer that question clearly.

Also read what DataEthics.eu's affiliated scholar group has discussed about the hype around generative AI, emotional dependency, copyright and more.

External links:

Tracker listing copyright cases against generative AI services: https://www.mishcon.com/generative-ai-intellectual-property-cases-and-policy-tracker

Pros and cons of CoPilot – Norwegian study: https://dataethics.eu/da/copilots-faldgrupper-for-den-offentlige-sektor-en-exit-strategi-er-alfa-og-omega/

Europol report, 2022, predicting the internet will be filled with AI-generated content by 2026: https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf


Reuters Institute Investigation into AI-generated misinformation about UK elections: https://reutersinstitute.politics.ox.ac.uk/how-generative-ai-chatbots-responded-questions-and-fact-checks-about-2024-uk-general-election

BBC investigation into AI-generated misinformation: https://www.bbc.co.uk/aboutthebbc/documents/bbc-research-into-ai-assistants.pdf

How generative AI was misused in the Indian democratic elections: https://www.wired.com/story/indian-elections-ai-deepfakes/ 

European Data Protection Board report questioning whether ChatGPT complies with the GDPR: https://www.edpb.europa.eu/system/files/2024-05/edpb_20240523_report_chatgpt_taskforce_en.pdf

AI can affect our critical sense; young people are especially exposed. Version2: https://www.version2.dk/artikel/ai-kan-ramme-vores-kritiske-sans-saerligt-unge-er-udsat-viser-studie