Specifying and animating facial signals for discourse in embodied conversational agents

Doug DeCarlo, Matthew Stone, Corey Revilla, Jennifer J. Venditti

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of non-verbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in an effective contribution to conversation. In this paper, we describe a freely available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers.
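The abstract's description of deriving fine-grained animation features "by rule from inferred linguistic structure" can be illustrated with a minimal sketch. The sketch below is hypothetical and does not reflect RUTH's actual API or rule set: it assumes ToBI-style annotations of discourse events (labels such as "H*" and "L-L%", with times in seconds) and maps them by rule onto timed facial signals, the kind of high-level-to-low-level layering the abstract describes.

# Hypothetical sketch only; names (DiscourseEvent, FacialSignal, RULES) are
# illustrative and are not taken from RUTH.

from dataclasses import dataclass

@dataclass
class DiscourseEvent:
    label: str        # e.g. "H*" pitch accent or "L-L%" boundary tone
    start: float      # seconds into the utterance
    end: float

@dataclass
class FacialSignal:
    action: str       # e.g. "brow_raise", "head_nod"
    start: float
    end: float
    intensity: float

# Illustrative rules: which annotation labels trigger which facial signals.
RULES = {
    "H*":   ("brow_raise", 0.8),   # accented words often co-occur with brow raises
    "L+H*": ("head_nod",   0.6),
    "L-L%": ("blink",      1.0),   # phrase boundaries often align with blinks
}

def derive_signals(events: list[DiscourseEvent]) -> list[FacialSignal]:
    """Derive fine-grained facial signals from annotated linguistic structure."""
    signals = []
    for ev in events:
        rule = RULES.get(ev.label)
        if rule:
            action, intensity = rule
            signals.append(FacialSignal(action, ev.start, ev.end, intensity))
    return signals

if __name__ == "__main__":
    utterance = [
        DiscourseEvent("H*", 0.30, 0.55),
        DiscourseEvent("L-L%", 1.20, 1.35),
    ]
    for sig in derive_signals(utterance):
        print(sig)

In an actual system the derived signals would then be scheduled in synchrony with the speech and lip-movement timeline; the mapping above only illustrates the rule-based derivation step.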

Original language: English (US)
Pages (from-to): 27-38
Number of pages: 12
Journal: Computer Animation and Virtual Worlds
Volume: 15
Issue number: 1
DOIs
State: Published - Feb 2004

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Keywords

  • Embodied conversational agents
  • Facial animation
