‘A person who wants to control her own data is trying to opt out of micro profiling by google’. This is the natural-language prompt I put into Dall-E, OpenAI’s AI art generator, to create the illustration on top of this article. An illustration that is unique, and that I don’t have to pay anyone for or ask anyone’s permission to use.
Dall-E is one of several new disruptive services that can create realistic images and art from a description in natural language. The AI art generators are typically trained on real humans’ creations, which they have uploaded to the web. The generators scrape them all from the web, and one AI-generated creation can be based on millions of pieces of art. Thus it would be very complicated to find a revenue-sharing or compensation model for the participating artists as we know it from other copyrighted art. In all circumstances, we need a discussion on the data ethics of using AI-generated art.
It is a lot of fun playing with an AI art generator. The Google picture is funny because it shows the Google logo colors and a person having a really hard time with her data.
For a new talk I am working on, the theme of the conference is ‘The Good Digital Life’, so I tried Dall-E again. Look at the three results:
Very interesting interpretation of a good digital life. You are alone. You are in front of a screen. You are laughing. And no human interaction.
This painting recently won Colorado State Fair’s annual art competition – it is also AI-generated art:
So, at DataEthics.eu we have some ethical concerns, and we would love to get your input on the data ethics of all this.
Should we get consent and revenue-sharing?
If a company collects data and sells anonymized insights, it would only be ethical if it got the consent of every user in the dataset. Should it be the same with AI-generated art? Can you possibly reach out to everybody participating in AI-generated art, and should you? At Technology Review you can read a story about how some artists feel about this new trend disrupting their work.
Discrimination risks?
As my colleague wrote recently, AI image generators might contribute to discrimination by reproducing harmful stereotypes acquired through data collections containing real-life biases.
Fake News risks?
If you try to write ‘shit’ in the Dall-E generator, it will not allow it. And as my colleague wrote, Dall-E does not allow generating images of famous people or celebrities. But will we lose faith in the authenticity of images because of the AI art generators?
Can Regulators Destroy Them?
It is also possible that regulators will order the destruction of AI art generators that cannot prove they have scraped images legally, with consent. According to this article, regulators in the US have already ordered a model destroyed because it was based on data obtained illegally.
Do send an email with your concerns to info@dataethics.eu and I will collect viewpoints on this in a follow-up story.