Non-linear factorised dynamic shape and appearance models for facial expression analysis and tracking

C. S. Lee, A. Elgammal

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Facial expressions exhibit non-linear shape and appearance deformations that vary across people and expression types. The authors present a non-linear factorised shape and appearance model for facial expression analysis and tracking. This novel non-linear factorised generative model of facial expressions, built using conceptual manifold embedding and empirical kernel maps, provides accurate facial expression shape and appearance while preserving non-linear facial deformations as functions of the configuration, face style and expression type. The proposed model supports tasks such as facial expression recognition, person identification, and global and local facial motion tracking. Given a sequence of images, the temporal embedding, expression type and person identification parameters are estimated iteratively for facial expression analysis. The authors combine global facial motion estimation with local facial deformation estimation to track both large global and subtle local facial motions, employing a thin-plate spline to estimate the local deformations. The global shape and appearance model provides the appearance templates used in estimating the local deformation. Experimental results on the Cohn-Kanade AU-coded facial expression database demonstrate facial expression recognition using the estimated personal style parameters, and facial deformation tracking using combined global and local facial motion estimation.
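The core idea of a factorised generative model of this kind can be illustrated with a minimal sketch: an expression sequence is embedded on a conceptual manifold (here a unit circle parameterised by time), lifted into a feature space through an empirical RBF kernel map, and multiplied by a coefficient tensor factorised over person style and expression type. All names, dimensions and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kernel_map(t, centers, width=0.5):
    """Empirical kernel map: RBF responses of the 2-D manifold point
    (cos t, sin t) against fixed centres on the unit circle."""
    x = np.array([np.cos(t), np.sin(t)])
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
# Illustrative sizes: 8 kernel centres, 3 person styles,
# 2 expression types, 10-dimensional shape vectors.
n_centers, n_styles, n_exprs, shape_dim = 8, 3, 2, 10
angles = np.linspace(0, 2 * np.pi, n_centers, endpoint=False)
centers = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Coefficient tensor C[style, expression, kernel centre, shape dimension];
# random here, learned from training sequences in the actual model.
C = rng.standard_normal((n_styles, n_exprs, n_centers, shape_dim))

def generate(style_w, expr_w, t):
    """Generate a shape vector for a blend of style/expression
    factors at manifold (temporal) coordinate t."""
    psi = kernel_map(t, centers)                       # (n_centers,)
    B = np.einsum('s,e,sekd->kd', style_w, expr_w, C)  # style/expression-specific basis
    return psi @ B                                     # (shape_dim,)

y = generate(np.array([1.0, 0.0, 0.0]),  # pure style 0
             np.array([0.0, 1.0]),       # pure expression 1
             t=np.pi / 4)
print(y.shape)  # → (10,)
```

Because the output is linear in the style and expression weight vectors, analysis can iterate between estimating the temporal embedding `t` and solving for the style/expression factors, mirroring the iterative estimation described in the abstract.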

Original language: English (US)
Pages (from-to): 567-580
Number of pages: 14
Journal: IET Computer Vision
Volume: 6
Issue number: 6
State: Published - 2012

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition

