
Editorial Examines Challenges of Automated Facial-Expression Analysis

Jan 5, 2023 | Features, Research, Research and Innovation

Jeffrey Mullins and Patrick Stewart

As automated facial-expression analysis, or AFEA, becomes better at recognizing facial behavior in everyday life, it will be increasingly important to understand what causes the technology to work incorrectly, as well as to anticipate problems that could arise even when it works correctly.

These are the pressing issues two U of A professors highlighted in a recent policy editorial published in the Journal of the Association for Information Systems. The article, “Facing Forward: Policy for Automated Facial Expression Analysis,” was co-authored by Jeffrey K. Mullins, an assistant professor of information systems, and Patrick A. Stewart, a professor of political science. Thomas J. Greitens, a professor of political science at Central Michigan University, was an additional co-author.

The purpose of the editorial is to look further down the road as AFEA develops. Currently, commercial AFEA is not as accurate as expert human raters trained in the Facial Action Coding System and identifies only the six basic emotions of anger, fear, disgust, sadness, happiness and surprise. But that could change quickly, and developers and organizations inclined to use AFEA need to be aware of both current and future challenges.

CHALLENGES TO RELIABILITY

At this stage of development, a few things still undermine the reliability of AFEA. One is "simplicity bias": as noted above, AFEA focuses on detecting only six emotions and does not yet identify more complex facial behaviors. Nor can it detect nuances, such as the difference between a smile of contentment and a smile of amusement, a limitation the authors say reflects a "monomodal bias."

Another issue is "environmental bias." A flustered or claustrophobic passenger at an airport security checkpoint may show facial behavior indistinguishable from that of a traveler who is nervous because they are using forged documents.

Finally, there is “individual difference bias.” People are the sum of their genetics, family, culture and experiences — and not everyone responds to the same stimulus in the same way. What is expected in one group may not be in another, so assigning specific emotions to specific facial behaviors will never be wholly accurate.

CHALLENGES OF RELIABILITY

Assuming AFEA can be brought to the point of greater reliability, that reliability will introduce new challenges. The authors begin with "negativity bias." Of the six emotions AFEA currently identifies, four are commonly thought of as negative (fear, sadness, disgust, anger), one is positive (happiness) and one is neutral (surprise). Given the human propensity for focusing on the negative, the authors feel "AFEA could encourage coercion and control as opposed to coordination and cooperation." Another major concern is transparency: the degree to which facial behaviors are recorded, and thoughts and feelings inferred from them, could undermine one's right to privacy.

The last two concerns are "systemic bias" and "subjectivity bias." The former describes how biases regarding marginalized groups can be unintentionally built into algorithms, as in the case where a hiring algorithm used by Amazon proved to be biased against women. For the latter, the authors observe that values can differ greatly between and within cultures, making it hard to prioritize what is "good." This can lead to the mistreatment of marginalized groups, or to processes and outcomes that engender societal conflict rather than building consensus.

Ultimately, the authors conclude that “organizations should be realistic in their expectations, cautious in their implementations and critical when trying to predict potential negative impacts.”