Can Artificial Intelligence Software Be Trusted to Analyze Human Emotions Accurately?

Tech giants like IBM, Microsoft, and Amazon sell what they refer to as “emotion recognition” algorithms: artificial intelligence (AI) systems that claim to infer how people feel from facial analysis. But how reliable are such algorithms?

Firms are adopting AI technologies to decode human emotions in a variety of ways, hoping to improve operations and sales. But the belief that software can reliably infer how people feel from how they look is controversial: experts across different fields maintain that there is no substantial scientific justification for it.

A review commissioned by the Association for Psychological Science asked five distinguished scientists in the field to scrutinize the evidence. It took them almost two years to examine the data, reviewing more than 1,000 studies. They concluded that emotions are expressed in such a wide variety of ways that it is hard to infer what a person is feeling from facial movements alone. The review highlighted that, on average, people scowl less than 30% of the time when they are angry; more than 70% of the time, angry people are not scowling, and people also scowl when they are not angry. The reliability of such facial gestures as a basis for emotion-driven decision making is therefore highly questionable.
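
To see why that statistic matters, consider a quick back-of-the-envelope Bayes calculation. This is only an illustrative sketch: the 30% hit rate comes from the review above, but the false-alarm rate and the base rate of anger are assumptions invented for the example.

```python
# Back-of-the-envelope Bayes estimate: how much does a scowl really tell us?
# Only the 30% hit rate comes from the review; the other two numbers are
# assumptions chosen purely for illustration.
p_scowl_given_angry = 0.30      # review: people scowl <30% of the time when angry
p_scowl_given_not_angry = 0.10  # assumption: people sometimes scowl while not angry
p_angry = 0.10                  # assumption: base rate of anger in the observed group

# Total probability of observing a scowl
p_scowl = (p_scowl_given_angry * p_angry
           + p_scowl_given_not_angry * (1 - p_angry))

# Bayes' rule: probability the person is actually angry, given a scowl
p_angry_given_scowl = p_scowl_given_angry * p_angry / p_scowl
print(f"P(angry | scowl) = {p_angry_given_scowl:.2f}")  # prints 0.25
```

Under these assumed numbers, even a detected scowl leaves a three-in-four chance that the person is not angry, which is exactly the kind of unreliability the review warns about.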

This indicates that companies using AI to read people’s emotions in this manner may actually be misleading consumers, and such misleading information can drive poor business decisions, with outcomes such as financial loss, misdirected investment, or misinterpretation of client requirements. The review does not deny that common or “prototypical” facial expressions might exist, but it argues they should not be the basis for critical decision making. The report also states that studies that seem to show a strong correlation between facial expressions and emotions are often methodologically flawed.

Researchers alarmed by the possible harmful social effects of artificial intelligence have called for a ban on automated analysis of facial expressions in major organizational decisions such as hiring. The AI Now Institute, based at New York University, stated that banning such software in high-stakes decisions was a top priority, adding that science does not justify the technology’s use to identify human emotions and that it is time to curb its widespread adoption.

The best-known example of such a problem is HireVue, which sells remote video interview systems to top firms such as Unilever and Hilton. Its AI analyzes tone of voice, facial movements, and speech patterns, and the resulting scores are not even disclosed to the job candidates. The Electronic Privacy Information Center filed a complaint against HireVue with the U.S. Federal Trade Commission, criticizing its use of AI to determine emotions. How people communicate anger, fear, happiness, disgust, sadness, and surprise varies substantially across countries, cultures, and situations, and even across people within a single situation; no technology, software, or algorithm can be trained well enough to decode human emotions accurately.
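
To make the concern concrete, below is a purely hypothetical sketch of how a multi-signal interview-scoring pipeline of this general kind is often structured. None of the feature names, weights, or functions come from HireVue; they are invented for illustration. The point is that a single tidy score can hide how noisy each underlying signal is.

```python
from dataclasses import dataclass

@dataclass
class InterviewSignals:
    """Hypothetical per-candidate features; none of these come from HireVue."""
    smile_ratio: float     # fraction of video frames classified as "smiling"
    pitch_variance: float  # variability in voice pitch, normalized to [0, 1]
    speech_rate: float     # speaking speed, normalized to [0, 1]

def composite_score(signals: InterviewSignals) -> float:
    """Collapse several noisy signals into one number with made-up weights.

    If each input is only weakly related to the trait supposedly being
    measured, the weighted sum inherits that unreliability -- it just
    hides it behind a single precise-looking score.
    """
    return (0.5 * signals.smile_ratio
            + 0.3 * signals.pitch_variance
            + 0.2 * signals.speech_rate)

print(f"{composite_score(InterviewSignals(0.4, 0.6, 0.7)):.2f}")  # 0.52
```

A candidate who happens to smile less in front of a camera, or whose culture expresses emotion differently, is penalized by such a formula regardless of their actual suitability for the job.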

Leading firms, including Microsoft, market their ability to classify and decode emotions using AI-driven software. Amazon has also faced considerable criticism over its Rekognition software; Amazon defended itself by saying that the technology only describes the physical appearance of a person’s face and does not claim to decode what that person feels. Despite broad consensus on ethical principles, harmful uses of AI keep multiplying because there are no consequences for violating those principles. Firms need to be aware of the consequences of relying on mere software and algorithms to understand human emotions: no matter how much workload reduction or process simplification such AI-driven tools offer, firms should not be tempted to adopt them.
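
For reference, here is a minimal sketch of what an Amazon Rekognition face-analysis call looks like with the boto3 SDK, assuming AWS credentials are configured and a local image file exists (the filename is hypothetical). It illustrates the distinction Amazon draws: the response contains per-face emotion labels with confidence scores describing apparent expression, not a determination of what the person actually feels.

```python
import boto3

client = boto3.client("rekognition")

# "candidate_photo.jpg" is a hypothetical local image used for illustration.
with open("candidate_photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request the full set of face attributes, incl. emotions
    )

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        # e.g. "HAPPY: 87.3" -- a confidence about appearance, not inner state
        print(f'{emotion["Type"]}: {emotion["Confidence"]:.1f}')
```

Nothing in this output measures an internal emotional state; treating those confidence scores as ground truth about feelings is precisely the leap the research reviewed above does not support.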