TY - GEN
T1 - Expression-driven salient features
T2 - 2010 IEEE International Conference on Multimedia and Expo, ICME 2010
AU - Zhang, Xing
AU - Yin, Lijun
AU - Gerhardstein, Peter
AU - Hipp, Daniel
PY - 2010
Y1 - 2010
N2 - Humans are able to recognize facial expressions of emotion from faces displaying a large set of confounding variables, including age, gender, ethnicity and other factors. Much work has been dedicated to characterizing the process by which this highly developed capacity functions. In this paper, we investigate the local expression-driven features important for distinguishing facial expressions using the so-called 'Bubbles' technique [4]. The Bubbles technique applies Gaussian masks to reveal the information that contributes to human perceptual categorization. We conducted experiments involving both human observers and machine classifiers. Observers view bubble-masked expression images and identify their categories. By collecting observers' responses and analyzing them statistically, we identify the facial features that humans employ to distinguish different expressions. Humans appear to extract and use localized information specific to each expression for recognition. Additionally, we verify these findings by using the resulting features for expression classification with a conventional expression recognition algorithm on a public facial expression database.
KW - Bubble
KW - Facial expression recognition
KW - HCI
UR - https://www.scopus.com/pages/publications/78349234612
U2 - 10.1109/ICME.2010.5583081
DO - 10.1109/ICME.2010.5583081
M3 - Conference contribution
SN - 9781424474912
T3 - 2010 IEEE International Conference on Multimedia and Expo, ICME 2010
SP - 1184
EP - 1189
BT - 2010 IEEE International Conference on Multimedia and Expo, ICME 2010
Y2 - 19 July 2010 through 23 July 2010
ER -