Non-manual grammatical marker recognition based on multi-scale, spatio-temporal analysis of head pose and facial expressions

Jingjing Liu, Bo Liu, Shaoting Zhang, Fei Yang, Peng Yang, Dimitris N. Metaxas, Carol Neidle

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Changes in eyebrow configuration, in conjunction with other facial expressions and head gestures, are used to signal essential grammatical information in signed languages. This paper proposes an automatic recognition system for non-manual grammatical markers in American Sign Language (ASL) based on a multi-scale, spatio-temporal analysis of head pose and facial expressions. The analysis takes into account gestural components of these markers, such as raised or lowered eyebrows and different types of periodic head movements. To advance the state of the art in non-manual grammatical marker recognition, we propose a novel multi-scale learning approach that exploits both low-level and high-level spatio-temporal facial features. Low-level features are based on information about facial geometry and appearance, as well as head pose, and are obtained through accurate 3D deformable model-based face tracking. High-level features are based on the identification of gestural events, of varying duration, that constitute the components of linguistic non-manual markers. Specifically, we recognize events such as raised and lowered eyebrows, head nods, and head shakes. We also partition these events into temporal phases. We separate the anticipatory transitional movement (the onset) from the linguistically significant portion of the event, and we further separate the core of the event from the transitional movement that occurs as the articulators return to the neutral position towards the end of the event (the offset). This partitioning is essential for the temporally accurate localization of the grammatical markers, which could not be achieved at this level of precision with previous computer vision methods. In addition, we analyze and use the motion patterns of these non-manual events. Those patterns, together with the information about the type of event and its temporal phases, are defined as the high-level features. Using this multi-scale, spatio-temporal combination of low- and high-level features, we employ learning methods for accurate recognition of non-manual grammatical markers in ASL sentences.
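The onset/core/offset partitioning described in the abstract can be illustrated with a minimal sketch. The paper's learning-based segmentation is not reproduced here; this is a hypothetical threshold-based partitioning of a normalized eyebrow-height trajectory, with made-up threshold values:

```python
def partition_event(signal, lo=0.2, hi=0.8):
    """Split a normalized articulator trajectory (e.g. eyebrow height)
    into onset, core, and offset phases.

    lo/hi are illustrative thresholds, not values from the paper: the
    onset runs from the first departure from neutral until the core
    (the linguistically significant portion) begins, and the offset
    covers the return toward the neutral position.
    """
    above_lo = [i for i, v in enumerate(signal) if v > lo]
    above_hi = [i for i, v in enumerate(signal) if v >= hi]
    start, end = above_lo[0], above_lo[-1]
    core_start, core_end = above_hi[0], above_hi[-1]
    return {
        "onset": (start, core_start - 1),   # anticipatory transition
        "core": (core_start, core_end),     # linguistically significant portion
        "offset": (core_end + 1, end),      # return toward neutral
    }

# Synthetic raised-eyebrow event: rise, plateau, fall.
heights = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.9, 0.5, 0.25, 0.1, 0.0]
phases = partition_event(heights)
# phases == {"onset": (2, 3), "core": (4, 6), "offset": (7, 8)}
```

In the paper this segmentation is learned rather than thresholded, but the sketch shows why separating the transitional phases matters: only the core frames carry the grammatical marker, so frame-accurate boundaries are needed for temporally accurate localization.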
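The periodic head movements mentioned above (nods vs. shakes) can likewise be sketched under a simple assumption not taken from the paper: that nods oscillate primarily in the head's pitch angle and shakes in its yaw angle.

```python
import math

def classify_head_movement(pitch, yaw):
    """Crude nod/shake classifier: compare the oscillation energy of
    the head's pitch (up-down) and yaw (left-right) angle sequences.
    This is an illustrative heuristic, not the paper's method."""
    def energy(angles):
        mean = sum(angles) / len(angles)
        return sum((a - mean) ** 2 for a in angles)
    return "nod" if energy(pitch) > energy(yaw) else "shake"

# Synthetic nod: pitch oscillates, yaw stays near neutral (degrees).
t = [i / 10 for i in range(50)]
pitch = [10 * math.sin(2 * math.pi * f) for f in t]
yaw = [0.5 * math.sin(2 * math.pi * f) for f in t]
print(classify_head_movement(pitch, yaw))  # prints "nod"
```

The head-pose angles themselves would come from the 3D deformable model-based face tracking described in the abstract; this hypothetical classifier only shows how pose trajectories could feed the high-level event features.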

Original language: English (US)
Pages (from-to): 671-681
Number of pages: 11
Journal: Image and Vision Computing
Volume: 32
Issue number: 10
DOIs
State: Published - Oct 2014

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Vision and Pattern Recognition

Keywords

  • American Sign Language (ASL)
  • Conditional Random Field (CRF)
  • Eyebrow height
  • Facial expressions
  • Head gestures
  • Non-manual grammatical markers
