Algorithms are picking up on how we feel. In China, emotion recognition technology is now being used to monitor students in the education system. In the US, advertisers are using it to measure the emotional impact of commercials, and all over the world, startups and established businesses are working on emotion recognition for security and hiring purposes. This controversial technology comes with both pros and cons, and both deserve a closer look.
Critics have been voicing concerns over this development, calling for tight regulation of emotion recognition technology, and lawmakers and regulators in the EU have been working on new legislation in response. Some critics have gone as far as comparing emotion recognition technology to phrenology, the once popular scientific idea (and now popular punching bag) that personal characteristics can be determined by cranial measurements.
For one thing, critics have argued that emotion recognition technology is too inaccurate to determine individual emotional states. This is because the relationship between facial expressions and emotional states is far more complex than it may appear. While people do indeed sometimes smile when happy, frown when sad, and so on, the way people express these emotions varies considerably across cultures, individuals, and situational circumstances.
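Part of the problem is architectural. A typical system forces every face into a small, fixed set of categories. The sketch below is a hypothetical stand-in rather than any vendor's actual pipeline, but it shows the standard design: a classifier ending in a softmax over a handful of "basic" emotions, so a polite smile, a nervous smile, and a genuinely happy smile all get funneled toward the same "happiness" label.

```python
# A minimal sketch of the standard emotion-recognition pipeline: a model
# maps a face image to a probability distribution over a small, fixed set
# of "basic" emotions. The architecture and label set are illustrative
# assumptions, not any particular vendor's system.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

class EmotionClassifier(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        # Tiny CNN stand-in for whatever backbone a real system would use.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

model = EmotionClassifier().eval()
face = torch.randn(1, 1, 48, 48)           # placeholder 48x48 grayscale face crop
probs = torch.softmax(model(face), dim=1)  # softmax forces a choice among 7 labels
print({e: round(p.item(), 3) for e, p in zip(EMOTIONS, probs[0])})
```

The softmax guarantees an answer even when no category fits, which is precisely the failure mode the accuracy critique points at.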
Moreover, critics have argued that emotion recognition technology tends to be racially biased. One study, for instance, suggests that the technology especially struggles to assign emotions to black faces accurately, assigning more negative emotions to black men's faces than to white men's faces. This could of course have real-life consequences, for instance in a job interview or at a security check. It is worth noting, however, that this particular issue has more to do with flaws in development than with any inherent flaw of the technology.
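If one wanted to test the bias claim on a given system, the standard move is a disparity audit: run the same classifier over face sets from different demographic groups and compare how often it outputs negative labels. The sketch below is a minimal, hypothetical version of such an audit; `predict_emotion` stands in for any model, and the dummy predictor at the bottom exists only to make the script runnable.

```python
# A minimal sketch of a disparity audit for the bias critique above:
# compare how often a classifier assigns negative emotions to faces
# from different demographic groups. Groups, predictor, and data here
# are placeholders, not results from any real study.
from collections import Counter
from typing import Callable, Iterable

NEGATIVE = {"anger", "disgust", "fear", "sadness"}

def negative_rate(faces: Iterable, predict_emotion: Callable) -> float:
    labels = [predict_emotion(face) for face in faces]
    counts = Counter(labels)
    return sum(counts[e] for e in NEGATIVE) / max(len(labels), 1)

def audit(groups: dict, predict_emotion: Callable) -> None:
    rates = {name: negative_rate(faces, predict_emotion) for name, faces in groups.items()}
    for name, rate in rates.items():
        print(f"{name}: {rate:.1%} of faces labelled with a negative emotion")
    # A large gap between groups signals the kind of disparity the study
    # above reported between black and white men's faces.
    print(f"max disparity: {max(rates.values()) - min(rates.values()):.1%}")

if __name__ == "__main__":
    import random
    random.seed(0)
    dummy = lambda face: random.choice(["anger", "happiness", "neutral"])
    audit({"group_a": range(100), "group_b": range(100)}, dummy)
```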
It is also worth noting that even if emotion recognition technology is not perfect, that does not necessarily mean it shouldn't be used. Humans are flawed in their assessments as well: we are not particularly accurate at determining emotional states either, especially under time pressure or stress. Our machines are biased because we are. The question is therefore not so much whether emotion recognition technology is perfect, but whether using the technology is sometimes better than the alternative.
That may of course not be the case right now. But as face-scanning technology advances, and is perhaps combined with other biometric sensors, it may well be sooner rather than later, especially considering the huge projected markets for emotion recognition technology in the coming years, which will presumably drive innovation and further technological advances.
However, perfecting emotion recognition technology would not alleviate all the worries associated with it. In some ways, it would only make things worse. For instance, some critics have argued that emotion recognition technology threatens our right to privacy. Others have gone as far as claiming that "there are no ethical uses of emotion recognition tech" and have called for a ban on the technology.
I think that conclusion is much too hasty. Like technology in general, emotion recognition technology can be used for both good and ill. Using it to help people with autism spectrum disorder better recognize emotions in others, for instance, seems like a clearly benevolent application. Helping drivers stay awake behind the wheel also seems like a good candidate for a benevolent use case. Of course, data can always be misused, but that is not enough in itself to render a technology morally suspect.
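The driver example is instructive because, in practice, drowsiness monitoring usually relies on a much narrower signal than full emotion classification: the eye aspect ratio (EAR) computed from eye landmarks. The sketch below assumes an upstream landmark detector supplying six points per eye, in the ordering common to EAR-based methods; the threshold and frame count are illustrative assumptions, not tuned values.

```python
# A minimal sketch of drowsiness detection via the eye aspect ratio (EAR),
# a common landmark-based technique. It assumes some upstream face-landmark
# detector supplies six (x, y) points per eye in the standard ordering;
# the threshold and frame count below are illustrative, not tuned values.
import numpy as np

EAR_THRESHOLD = 0.21   # below this, the eye is treated as closed (assumption)
CLOSED_FRAMES = 48     # ~2 seconds at 24 fps before raising an alert

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks; ratio of eye height to eye width."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

class DrowsinessMonitor:
    def __init__(self) -> None:
        self.closed_streak = 0

    def update(self, left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
        """Feed one frame's eye landmarks; returns True when an alert should fire."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed_streak = self.closed_streak + 1 if ear < EAR_THRESHOLD else 0
        return self.closed_streak >= CLOSED_FRAMES
```

Counting consecutive low-EAR frames rather than reacting to a single frame keeps ordinary blinks from triggering the alert.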
Moreover, it is worth noting that even if emotion recognition technology does in fact violate our right to privacy, that does not mean using the technology is necessarily morally wrong. Rights can come into conflict and may sometimes be outweighed by competing considerations. In the context of law enforcement, for instance, we typically allow many practices that would otherwise be considered violations of people's rights: suspects may have their houses and electronic devices searched, they may be detained by police, have their DNA or blood collected, and so on.
There is no doubt that the many possible uses of emotion recognition technology raise a variety of ethical problems. But the technology also presents potential benefits. To ban it or abandon further development would be to throw the baby out with the bathwater.