Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, Matthew Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

395 Scopus citations

Abstract

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we will use examples from an actual synthesized, fully animated conversation.
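The abstract outlines a pipeline in which the dialogue planner's output (the text, its intonation, and the speaker/listener relationship) drives separate generators for facial expression, lip motion, gaze, head motion, and gesture. The sketch below illustrates that data flow only; the class names, tone labels, and generator calls are hypothetical placeholders and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Hypothetical dialogue-planner output: text plus intonation labels."""
    speaker: str
    listener: str
    text: str
    intonation: list[str]  # one pitch-accent / boundary-tone label per word

@dataclass
class AnimationFrame:
    """One synchronized bundle of per-modality animation commands."""
    facial_expression: str
    lip_shape: str
    gaze_target: str
    head_motion: str
    arm_gesture: str

def drive_generators(utt: Utterance) -> list[AnimationFrame]:
    """Sketch of planner output fanning out to the modality generators.

    Each word is treated as one synchronization unit; the actual system
    coordinates modalities at a finer level (phonemes, gesture phases).
    """
    frames = []
    for word, tone in zip(utt.text.split(), utt.intonation):
        frames.append(AnimationFrame(
            facial_expression="raise_brows" if tone == "H*" else "neutral",
            lip_shape=f"visemes({word})",       # placeholder lip-sync lookup
            gaze_target=utt.listener,           # speaker orients toward listener
            head_motion="nod" if tone == "L-L%" else "none",
            arm_gesture=f"gesture_for({word})", # semantically driven gesture
        ))
    return frames
```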

Original language: English (US)
Title of host publication: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1994
Publisher: Association for Computing Machinery, Inc
Pages: 413-420
Number of pages: 8
ISBN (Electronic): 0897916670, 9780897916677
DOIs
State: Published - Jul 24 1994
Externally published: Yes
Event: 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1994 - Orlando, United States
Duration: Jul 24 1994 - Jul 29 1994

Publication series

Name: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1994

Other

Other: 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1994
Country/Territory: United States
City: Orlando
Period: 7/24/94 - 7/29/94

All Science Journal Classification (ASJC) codes

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
