Mutual correlation attentive factors in dyadic fusion networks for speech emotion recognition

Yue Gu, Weitian Li, Xinyu Lyu, Shuhong Chen, Ivan Marsic, Weijia Sun, Xinyu Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Emotion recognition in dyadic communication is challenging for three reasons: 1. Extracting informative modality-specific representations requires disparate feature-extractor designs because the input data formats are heterogeneous. 2. Effectively and efficiently fusing unimodal features and learning associations between dyadic utterances are critical to model generalization in real-world scenarios. 3. Disagreement among annotators prevents previous approaches from precisely predicting emotions in context. To address these issues, we propose an efficient dyadic fusion network that relies only on an attention mechanism to select representative vectors, fuse modality-specific features, and learn sequence information. Our approach has three distinct characteristics: 1. Instead of using a recurrent neural network to extract temporal associations, as in most previous research, we introduce multiple sub-view attention layers to compute the relevant dependencies among sequential utterances; this significantly improves model efficiency. 2. To improve fusion performance, we design a learnable mutual correlation factor inside each attention layer to compute associations across different modalities. 3. To overcome the label-disagreement issue, we embed the labels from all annotators into a k-dimensional vector and transform the categorical problem into a regression problem; this provides more accurate annotation information and makes full use of the entire dataset. We evaluate the proposed model on two published multimodal emotion recognition datasets: IEMOCAP and MELD. Our model outperforms previous state-of-the-art approaches by 3.8%-7.5% accuracy while using a more efficient model.
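
The abstract names two mechanisms concrete enough to sketch: the learnable mutual correlation factor inside an attention layer, and the embedding of all annotators' labels into a k-dimensional regression target. The PyTorch sketch below illustrates both under stated assumptions; the class MutualCorrelationAttention, the bilinear form of the correlation factor, the soft_label helper, and all dimensions are hypothetical, since the abstract does not specify the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MutualCorrelationAttention(nn.Module):
    """Cross-modal attention with a learnable mutual correlation factor.

    A sketch of the abstract's idea only: the bilinear form of the
    correlation factor and all dimensions are assumptions, not the
    paper's exact design.
    """

    def __init__(self, dim_a, dim_b, dim):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim)  # e.g., acoustic features
        self.proj_b = nn.Linear(dim_b, dim)  # e.g., lexical features
        # Learnable mutual correlation factor relating the two subspaces.
        self.corr = nn.Parameter(torch.randn(dim, dim) * 0.02)
        self.scale = dim ** 0.5

    def forward(self, a, b):
        # a: (batch, seq, dim_a); b: (batch, seq, dim_b); the two
        # sequences are assumed aligned at the utterance level.
        ha, hb = self.proj_a(a), self.proj_b(b)
        # Correlation-weighted scores between utterances of the modalities.
        scores = torch.einsum("bid,de,bje->bij", ha, self.corr, hb) / self.scale
        fused_a = F.softmax(scores, dim=-1) @ hb                   # A attends to B
        fused_b = F.softmax(scores.transpose(1, 2), dim=-1) @ ha   # B attends to A
        return torch.cat([ha + fused_a, hb + fused_b], dim=-1)


def soft_label(annotations, k):
    """Embed all annotators' labels as a k-dim vector (fraction of votes
    per class), turning disagreeing categorical labels into a regression
    target instead of a single forced class."""
    target = torch.zeros(k)
    for c in annotations:
        target[c] += 1.0
    return target / len(annotations)


if __name__ == "__main__":
    layer = MutualCorrelationAttention(dim_a=74, dim_b=300, dim=128)
    audio = torch.randn(2, 10, 74)   # 10 utterances of acoustic features
    text = torch.randn(2, 10, 300)   # aligned lexical features
    fused = layer(audio, text)       # shape: (2, 10, 256)
    # Two of three annotators chose class 0, one chose class 3 (k = 4):
    target = soft_label([0, 0, 3], k=4)  # tensor([0.667, 0., 0., 0.333])
    print(fused.shape, target)
```

The soft-label construction shows why the regression reformulation lets the model "fully use the entire dataset": a 2-vs-1 annotator split becomes the target [0.667, 0, 0, 0.333] rather than a sample that must be discarded or forced into a single class.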

Original language: English (US)
Title of host publication: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 157-165
Number of pages: 9
ISBN (Electronic): 9781450368896
DOIs
State: Published - Oct 15 2019
Event: 27th ACM International Conference on Multimedia, MM 2019 - Nice, France
Duration: Oct 21 2019 - Oct 25 2019

Publication series

Name: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia

Conference

Conference: 27th ACM International Conference on Multimedia, MM 2019
Country/Territory: France
City: Nice
Period: 10/21/19 - 10/25/19

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Computer Science (all)

Keywords

  • Attention Mechanism
  • Dyadic Communication
  • Multimodal Fusion Network
  • Mutual Correlation Attentive Factor
  • Speech Emotion Recognition
