Dynamically adaptive tracking of gestures and facial expressions

D. Metaxas, G. Tsechpenakis, Z. Li, Y. Huang, A. Kanaujia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



We present a dynamic data-driven framework for tracking gestures and facial expressions from monocular sequences. Our system uses two cameras, one for the face and one for the body, so that each view can be processed at a different scale. Specifically, in the gesture-tracking module we track the hands and the head, obtaining as output the blobs (ellipses) of the ROIs, and we detect the shoulder positions with straight lines. For the facial expressions, we first extract the 2D facial features, using a fusion between a KLT tracker and a modified Active Shape Model, and then we obtain the 3D face mask by fitting a generic model to the extracted 2D features. The main advantages of our system are (i) its adaptivity, i.e., it is robust to external conditions such as lighting and independent of the examined individual, and (ii) its computational efficiency, providing results both offline and online at rates higher than 20 fps.
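The hand and head blobs mentioned above are output as ellipses fitted to the detected regions of interest. The paper does not publish its implementation; the sketch below illustrates one standard way such an ellipse can be recovered, via second-order image moments of a binary blob mask. The function name `fit_ellipse_to_blob` and the synthetic mask are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_ellipse_to_blob(mask):
    """Fit an ellipse to a binary blob via second-order image moments.

    Returns the center (cx, cy), the (major, minor) semi-axes, and the
    orientation of the major axis in radians. This is a standard
    moment-based approximation, not the paper's exact procedure.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty mask")
    cx, cy = xs.mean(), ys.mean()
    # The central second moments of the pixel coordinates form the
    # covariance matrix of the blob; its eigenstructure gives the ellipse.
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    # 2*sqrt(eigenvalue) matches the radius of a uniform disk exactly
    # and covers most of a roughly Gaussian blob.
    minor, major = 2.0 * np.sqrt(np.maximum(evals, 0.0))
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return (cx, cy), (major, minor), angle

# Usage: a synthetic filled disk stands in for a detected hand region.
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 60) ** 2 + (yy - 40) ** 2) <= 15 ** 2
center, axes, angle = fit_ellipse_to_blob(mask)
```

For a uniform disk of radius 15, both recovered semi-axes come out close to 15, since the per-axis variance of a disk is R²/4.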

Original language: English (US)
Title of host publication: Computational Science - ICCS 2006
Subtitle of host publication: 6th International Conference, Proceedings
Publisher: Springer Verlag
Number of pages: 8
ISBN (Print): 3540343830, 9783540343837
State: Published - Jan 1 2006
Event: ICCS 2006: 6th International Conference on Computational Science - Reading, United Kingdom
Duration: May 28 2006 - May 31 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3993 LNCS - III
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: ICCS 2006: 6th International Conference on Computational Science
Country: United Kingdom

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

