Abstract
3D facial animation synthesis from audio has attracted considerable attention in recent years. However, most existing work focuses on mapping audio to visual content, offering limited insight into the relationship between emotion in audio and expressive facial animation. This work generates facial animations that match the audio while conveying a specified emotion label. For such a task, we argue that separating the content from the audio is indispensable: the proposed model must learn to generate facial content from the audio content and facial expressions from the specified emotion. We achieve this with an adaptive instance normalization module that isolates the content in the audio and combines it with the emotion embedding derived from the specified label. The joint content-emotion embedding is then used to generate 3D facial vertices and texture maps. We compare our method with state-of-the-art baselines, including facial-segmentation-based and voice-conversion-based disentanglement approaches, and conduct a user study to evaluate the performance of emotion conditioning. The results indicate that the proposed method outperforms the baselines in both animation quality and expression categorization accuracy.
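To illustrate the general idea of adaptive instance normalization (AdaIN) conditioning described above, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the class name `AdaINConditioner`, the dimensions, and the use of an embedding table for the emotion label are assumptions made for illustration only.

```python
# Minimal AdaIN-conditioning sketch (illustrative; names and sizes are assumptions,
# not the architecture from the paper).
import torch
import torch.nn as nn

class AdaINConditioner(nn.Module):
    """Modulates audio-content features with an embedding of the specified emotion label."""
    def __init__(self, content_dim: int, emotion_dim: int, num_emotions: int = 8):
        super().__init__()
        # Hypothetical lookup table mapping an emotion label to an embedding vector.
        self.emotion_embedding = nn.Embedding(num_emotions, emotion_dim)
        # The emotion embedding predicts per-channel scale (gamma) and shift (beta).
        self.to_scale_shift = nn.Linear(emotion_dim, 2 * content_dim)
        # Instance norm strips per-sample feature statistics from the content stream.
        self.norm = nn.InstanceNorm1d(content_dim, affine=False)

    def forward(self, content: torch.Tensor, emotion_label: torch.Tensor) -> torch.Tensor:
        # content: (batch, content_dim, time) audio-content features.
        emotion = self.emotion_embedding(emotion_label)                 # (batch, emotion_dim)
        gamma, beta = self.to_scale_shift(emotion).chunk(2, dim=-1)     # (batch, content_dim) each
        normalized = self.norm(content)
        # Re-inject style: scale and shift every channel according to the emotion code.
        return gamma.unsqueeze(-1) * normalized + beta.unsqueeze(-1)

# Usage example: condition 10 frames of 256-d content features on two emotion labels.
features = torch.randn(2, 256, 10)
labels = torch.tensor([3, 1])
joint = AdaINConditioner(content_dim=256, emotion_dim=64)(features, labels)
```

In this sketch, the output `joint` plays the role of the joint content-emotion embedding that a downstream decoder would map to 3D facial vertices and texture maps.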
Original language | English (US)
---|---
Article number | e2076
Journal | Computer Animation and Virtual Worlds
Volume | 33
Issue number | 3-4
DOIs |
State | Published - Jun 1 2022
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design
Keywords
- adaptive instance normalization
- audio-driven animation
- content-emotion disentanglement
- emotion-conditioning
- expressive facial animation synthesis