TY - GEN
T1 - Customized expression recognition for performance-driven cutout character animation
AU - Yu, Xiang
AU - Yang, Jianchao
AU - Luo, Linjie
AU - Li, Wilmot
AU - Brandt, Jonathan
AU - Metaxas, Dimitris
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/5/23
Y1 - 2016/5/23
N2 - Performance-driven character animation enables users to create expressive results by performing the desired motion of the character with their face and/or body. However, for cutout animations, where continuous motion is combined with discrete artwork replacements, supporting a performance-driven workflow has some unique requirements. To trigger the appropriate artwork replacements, the system must reliably detect a wide range of customized facial expressions that are challenging for existing recognition methods, which focus on a few canonical expressions (e.g., angry, disgusted, scared, happy, sad, and surprised). Also, real usage scenarios require the system to work in real time with minimal training. In this paper, we propose a novel customized expression recognition technique that meets all of these requirements. We first use a set of handcrafted features combining geometric features derived from facial landmarks and patch-based appearance features obtained through group sparsity-based facial component learning. To improve discrimination and generalization, these handcrafted features are integrated into a custom-designed Deep Convolutional Neural Network (CNN) structure trained on publicly available facial expression datasets. The combined features are fed to an online ensemble of SVMs that is designed for the few-training-sample problem and performs in real time. To improve temporal coherence, we also apply a Hidden Markov Model (HMM) to smooth the recognition results. Our system achieves state-of-the-art performance on canonical expression datasets and promising results on our collected dataset of customized expressions.
AB - Performance-driven character animation enables users to create expressive results by performing the desired motion of the character with their face and/or body. However, for cutout animations, where continuous motion is combined with discrete artwork replacements, supporting a performance-driven workflow has some unique requirements. To trigger the appropriate artwork replacements, the system must reliably detect a wide range of customized facial expressions that are challenging for existing recognition methods, which focus on a few canonical expressions (e.g., angry, disgusted, scared, happy, sad, and surprised). Also, real usage scenarios require the system to work in real time with minimal training. In this paper, we propose a novel customized expression recognition technique that meets all of these requirements. We first use a set of handcrafted features combining geometric features derived from facial landmarks and patch-based appearance features obtained through group sparsity-based facial component learning. To improve discrimination and generalization, these handcrafted features are integrated into a custom-designed Deep Convolutional Neural Network (CNN) structure trained on publicly available facial expression datasets. The combined features are fed to an online ensemble of SVMs that is designed for the few-training-sample problem and performs in real time. To improve temporal coherence, we also apply a Hidden Markov Model (HMM) to smooth the recognition results. Our system achieves state-of-the-art performance on canonical expression datasets and promising results on our collected dataset of customized expressions.
UR - http://www.scopus.com/inward/record.url?scp=84977650130&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84977650130&partnerID=8YFLogxK
U2 - 10.1109/WACV.2016.7477449
DO - 10.1109/WACV.2016.7477449
M3 - Conference contribution
AN - SCOPUS:84977650130
T3 - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
BT - 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Y2 - 7 March 2016 through 10 March 2016
ER -