The excitement over ChatGPT’s updated image generator is never-ending on social media. What has happened to critical thinking, ethics and, not least, respect for the legal labeling of AI images? The thrill of being able to place Elon Musk in ever stranger surroundings is crowding out reflection on the consequences these tools can have.
It was a huge thing when OpenAI’s image generator in ChatGPT-4o was rolled out at the end of March. The new feature was shared and hyped on social media with examples of photorealistic motifs, infused with an unreflective playfulness: putting yourself on the moon, placing Trump at a Danish hot dog cart with Mette Frederiksen, moving the White House to Greenland.
It might sound like kids just playing with a new app. But we are talking about adults, some of them people with influence and teachers, sharing photorealistic artificial images that are as far from reality as it gets. You can no longer tell the difference between real and fake. These images are not only fake; they are also worthless and require large amounts of resources (water and energy) to produce. They are likely to cause more harm than good, even though the law requires them to be clearly labeled as AI-generated, which most users fail to do.
There are many good uses for AI, such as pattern recognition in hospital imaging, predicting breakdowns in industrial production, or optimising energy use. But the uncritical enthusiasm for and use of generative AI present deep ethical and democratic challenges.
Services like ChatGPT-4o are likely to infringe the copyrights of many photographers and artists, as they are trained on existing web content without permission. While the big AI companies cash in on their new services, photographers and artists around the world are forced to spend their scarce resources suing them. And OpenAI, the company behind ChatGPT, has most recently asked Trump to let it ignore copyright.
Where are the Guardians of Democracy?
We don’t need uncritical play with image generators, but rather a real ethical dialogue about the fact that technology now makes it possible to create artificial images. A shared understanding of reality is essential for community and democracy. Just look at the polarization in the US, which is also starting to show its ugly face in Europe.
We find it harder and harder to trust what we consume on the internet. Images and videos can be used for any narrative, without limits: for political influence, for manipulation of the vulnerable and the not so vulnerable, for radicalization or misogyny. The proliferation of the technology will create an even deeper gulf of mistrust, as we have seen with the spread of fake news. Citizens lose trust and hold even important agendas at arm’s length. Because maybe it’s fake.
We’ve had Photoshop, which can manipulate images, for a long time, but it has primarily been used by professionals. Now the ability to create artificial photorealistic images is available to everyone. In many ways, it is an institutionalization of the lie that Trump makes so much use of.
The many ethical dilemmas are largely absent from the debate and the hype, even in the mainstream media, which share the responsibility of preparing citizens to act democratically: not just embracing generative AI as an efficiency tool, but also informing and educating citizens about its dark sides.
The development should set new standards for both labeling and education about the dilemmas of the technology, as well as a deeper reflection on whether hot dog carts and moon landings are an important tool or just a silly game. Where are these reflections, and why does it feel like watching children in a toy store these days, when only a few take responsibility for the fact that AI-generated images will pose a significant challenge to media credibility, trust in each other and, not least, democratic unity?
Thanks to Stefan Kirschnick for the AI-generated photo and for labeling it properly.