Making Discourse Visible: Realizing Conversational Facial Displays in Interactive Agents

Project Details

Description

When people communicate, they systematically employ a diverse set of nonverbal cues to highlight the intended interpretation of their utterances. In face-to-face conversation, expressive movements of the brows and head are particularly pervasive. This project studies these facial conversational signals from the dual perspectives of human communication and computer animation, with the goal of realizing them effectively in spoken dialogue interfaces modeled on face-to-face conversation. The research strategy is to categorize and document facial conversational signals, to model their meaning and interpretation, to implement computational systems that generate and animate these signals, and to evaluate the contribution these systems can make to users' understanding of agents.

This project fits into a broader program that uses animated agents to enhance the flexibility and accessibility of interfaces by improving the robustness of dialogue systems, increasing communication bandwidth between users and computers, and supporting more collaborative styles of interaction. The results of this project, including new software, data, and training materials, promise to lower the barriers to entry to this research area and add to its momentum.

Status: Finished
Effective start/end date: 7/15/03 - 1/31/07
