For anyone who has had difficulty reading another person’s
facial expressions, a new technology developed by UCSD researchers could
recruit the help of an unlikely candidate: a computer.
Marian Stewart Bartlett, an associate research professor at
UCSD’s Institute for Neural Computation and co-director of the Machine
Perception Lab, presented her work with automated computer recognition of human
facial expressions at a recent digital imaging conference. Bartlett uses a
technique called “machine learning” to train computers to recognize different
facial expressions.
First, the computer detects the face and facial features of
the subject and determines the alignment of the subject’s face. The image is
passed through a bank of image filters, which encode its patterns of light and
dark in a numerical form that computers can process.
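For readers curious what that filtering step might look like in practice, here is a minimal sketch in Python, assuming a Gabor filter bank; the article does not name the lab’s specific filters, so the filter family and every parameter below are illustrative assumptions:

```python
# A minimal sketch of the filtering step, assuming a Gabor filter bank
# (a common choice for encoding light-and-dark patterns). Requires
# OpenCV and NumPy; all parameters here are illustrative.
import cv2
import numpy as np

def gabor_bank(n_orientations=8, wavelengths=(4, 8, 16)):
    """Build a bank of Gabor kernels at several orientations and scales."""
    kernels = []
    for wavelength in wavelengths:
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations  # orientation of the filter
            kernels.append(cv2.getGaborKernel(
                ksize=(31, 31), sigma=wavelength / 2.0, theta=theta,
                lambd=wavelength, gamma=0.5, psi=0))
    return kernels

def encode_face(gray_face, kernels):
    """Filter an aligned grayscale face crop with every kernel and
    concatenate the responses into a single feature vector."""
    responses = [cv2.filter2D(gray_face, cv2.CV_32F, k) for k in kernels]
    return np.concatenate([r.ravel() for r in responses])
```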
The filtered image is then passed to the machine learning system,
which has been trained on thousands of example images from the categories in
question, both positive examples of each expression and examples of its
absence.
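As a rough illustration of that training step, the sketch below fits one binary detector per expression category using scikit-learn; the choice of a linear support-vector machine is an assumption, since the article does not name the lab’s learning algorithm:

```python
# A sketch of the learning step: one binary detector per expression
# category, trained on filter responses labeled 1 (expression present)
# or 0 (expression absent). The linear SVM is an assumption, not the
# lab's confirmed method.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def train_expression_detector(features, labels):
    """features: (n_images, n_filter_responses) array of encoded faces.
    labels: 1 where the target expression is shown, 0 where it is not."""
    detector = make_pipeline(StandardScaler(), LinearSVC(C=0.1))
    detector.fit(features, labels)
    return detector

# Train one detector per category (joy, sadness, surprise, anger, fear,
# disgust, neutral); a new face is scored by all seven and labeled with
# the category whose detector responds most strongly.
```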
“It learns to differentiate the patterns,” Bartlett said.
According to Bartlett, the system is precise and accurate
because it has not been hand-designed by humans. Human operators are not
required to distinguish between emotions;
instead, the machine identifies subtle patterns from a series of images
of different people experiencing the same emotion. The computer is powerful
enough to detect which facial movements remain consistent across all of those
examples.
Gwen Littlewort, who co-directs the Machine Perception Lab
with Bartlett and is a member of the Institute for Neural Computation, said
that this type of machine learning has been around for decades. However,
applications of the technology are expanding quickly as computers become more
powerful.
As methods to collect data about facial expressions improve,
so does the capacity to analyze emotions in those expressions.
Currently, UCSD’s Machine Perception Lab has trained the
computer system to recognize joy, sadness, surprise, anger, fear, disgust and
neutral expressions.
“We can analyze these facial expressions on two different
levels,” Bartlett said. “We can recognize emotions or we can answer specific
questions. Is it real pain versus fake pain?”
While humans judging whether an emotion is genuine are
typically right only about half the time, no better than chance, the computer
has performed significantly above chance, reaching accuracy rates of up to 72
percent.
“Because it can measure 30 [out of 46] component [facial]
movements, it gets information on nearly the full range of facial expressions,”
Bartlett said.
In addition to identifying emotions and determining whether
they are genuine or simulated, the system can rate their intensity. It has
also been trained to recognize facial expressions that indicate fatigue or
alertness.
Bartlett said the technology can be used as a tool for
further study in related fields.
“Given that it recognizes facial expressions on those
dimensions, you can learn new relations between behavior and facial
expressions,” she said.
Littlewort said that possible uses of this technology
include social robotics, educational software, software that could accurately
determine levels of pain, tools to recognize deception and helpful ways to
monitor or train people with neurological conditions.