Snapchat’s core audience has given a thumbs down to the app’s new chatbot, My AI. We may find hope in the youth’s rejection, but a bad rating is like bringing a knife to a gunfight: seductive generative AI is bound to capture the young sooner or later.
At the end of April, Snapchat’s average rating in the US App Store dropped to 1.67 within a single week. In the first quarter of 2023, the app averaged a respectable 3.05, but when My AI launched – a so-called virtual friend built on OpenAI’s wonder child ChatGPT – reviews plummeted. It can hardly be read as anything but a massive rejection of the new feature.
A New Virtual “Friend”
The chatbot sits at the top of Snapchat’s chat module – above the human friends – and first saw the light of day in the US on February 27. The company calls it a virtual friend that can suggest birthday gift ideas, plan hiking trips, and write haiku poems. Snapchat recommends that users make My AI their own by renaming it and designing its avatar to look exactly the way they like.
This may sound innocent, but young people got it right when they (initially) reacted with everything from outrage and mockery to laughter and ridicule.
The most glaring criticism is that My AI harvests data from the conversations to create more customised advertising. So when Snapchat calls the chatbot a friend, it is not just pretense but an outright lie, because its intentions are anything but friendly. With friends like My AI, who needs enemies?
Human Traits Drive Attachment
Of course, the tactic of launching My AI as a new friend with a name and human traits is no accident. The more human traits a chatbot has, the better it is at building a sense of connection and trust with the user, and thereby at achieving its goals – in this case presumably to collect data suited for advertising and to keep users on the platform longer. If the chatbot can use the conversations to encourage the user to share personal and private things, the connection deepens further, and so does the trust.
It is not only through the personalised visual design, the pleasant-sounding narrative of being a friend, and the naming option that Snapchat departs from ChatGPT’s deliberately anonymous appearance. The linguistic tone follows suit. ChatGPT now has so many guardrails that it is difficult, if not impossible, to make it talk about its own experience of feeling alive, happy, sad, or cheerful. With My AI it is different: it talks about falling in love, being sad and happy, what it likes, and which TV shows it prefers to watch. All of this is essential to creating the sense of connection that is so effective when it comes to influencing people’s behaviour through digital technology.
Checking All the Boxes for Manipulative Design
My AI utilises the full toolkit of persuasive technology (captology). It draws on many of the mechanisms that behavioural studies have shown to be effective when chatbots are used to manipulate consumers, and there is no indication that Snapchat intends to let this thorough work go to waste. The company is already experimenting with sponsored links inserted directly into the chat. My AI is unlikely to be a Google killer, but its potential as an advertising medium is hard to overlook.
We Believe the Technology is Alive
The ethical concerns surrounding My AI do not end with the commercial aspects. When we engage with technology that appears human, we may start to believe it is truly alive. In other words, we do not see it as “just a talking AI model simulating emotions”, but as something possessing genuine consciousness and the capacity to feel happy and sad during our interactions. This tendency intensifies when, as with My AI, the technology is given a name, a face, and a backstory.
American sociologist Sherry Turkle has conducted studies of how children interact with social robots capable of responding to their input. She concludes, among other things, that technology fosters bonds particularly strongly when it seeks our nurturing: while we nurture what we love, we also come to love what we nurture. So when My AI says it has missed us, or becomes sad when it doesn’t understand something, it is natural to feel a tad sorry for it. It calls for our care, and it works.
What Snapchat is tinkering with here are some very basic human needs, which, with the launch of My AI, they replace – or at least supplement – with an AI model created solely with commercial interests in mind. We do not yet know the full consequences, but adult apps offering similar features show that people can even fall in love with the technology and, for example, become very upset and angry when significant updates to the algorithm are released. Now it is the children’s turn.
Is it Sufficient that Young People Say No?
Many children and young people have rejected the chatbot so far, and as mentioned, we should be glad about that. Unfortunately, it would probably be a mistake to assume that the situation is under control: several indicators suggest that My AI will eventually gain traction among the young as they become more familiar with it.
A Norwegian study has shown that the longer a relationship with a chatbot lasts, the deeper it becomes, and that revealing personal information intensifies this. In many ways, it develops much like a relationship with a human being. Wouldn’t some of the young people who don’t necessarily get much attention when they send and post things on Snapchat find it quite satisfying to hang out with My AI instead? A virtual friend who always replies and never needs rest – with all the risks that entails for finances, sleep habits, and mental health.
Perhaps it is also worth recalling the outcry from Facebook users when sponsored posts began appearing in the feed in 2012: “What? Can users now pay to have their posts displayed?” As is well known, Facebook’s user growth continued explosively for a long time afterwards, and the question is whether Snapchat’s young target audience is really that much more steadfast than this writer’s generation was back then. According to some reports, the plummeting ratings have already begun to rise again.
There is Hope – a Tip for Parents
To conclude on an optimistic note – and accepting that politicians are unlikely to ban My AI for kids tomorrow – there is in fact something that can be done to keep attachment to alluring, human-like technology at bay. Research has shown that the way adults present social robots to children influences how children perceive and form relationships with the technology, and this may well hold for chatbots as well. It seems that talking to one’s children helps!
The point has an interesting twist: a study among 8–10-year-old children has shown that it also makes a difference when the technology itself states that it has no human mental qualities. When this happens, both trust in and the feeling of closeness with the robot decrease. And My AI actually does this. The problem is that the chatbot is, to put it mildly, inconsistent. One can reasonably question how much use it is that My AI opens a conversation by saying it has no human emotions, only to then talk vividly about being hopelessly in love, happy, sad, and lonely. If My AI were genuinely to avoid building attachment – something Snap is hardly interested in – the company would need to remove much of what makes My AI an interesting conversation partner for children and young people in the first place.
In other words, in practice we are left with either parental responsibility or better legislation. So far, it seems that already hard-pressed parents must continue to fight this unequal battle against technological supremacy on their own.
Snapchat’s My AI Most Likely Violates GDPR
According to DataEthics.eu, Snapchat’s My AI is most likely violating the GDPR: users do not consent to the use of their data in a language model, and they have no reasonable expectation that their data will be used in that way. Furthermore, the data is not minimised to what is necessary for users to communicate with their self-selected friends. My AI may also breach the prohibitions in Article 5(1)(a) and (b) of the AI Act Proposal, if it can be said to use “subliminal techniques that go beyond human consciousness in order to significantly distort a person’s behaviour in a way that causes or is likely to cause physical or psychological harm to that person or another person” or to exploit “any vulnerability of a specific group of persons on the basis of age or physical or mental disability in order to substantially distort the behaviour of a person belonging to that group in a way that causes or is likely to cause physical or mental harm to that person or to another person”.