TY - GEN
T1 - A Fairness-Aware Fusion Framework for Multimodal Cyberbullying Detection
AU - Alasadi, Jamal
AU - Arunachalam, Ramanathan
AU - Atrey, Pradeep K.
AU - Singh, Vivek K.
N1 - Funding Information:
The work of Jamal Alasadi was supported by the MOHESR, Iraq. Work by Ramanathan Arunachalam and Vivek Singh was supported in part by the US National Science Foundation under Grant SES-1915790.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/9
Y1 - 2020/9
N2 - Recent reports of bias in multimedia algorithms (e.g., lower accuracy of face detection for women and persons of color) have underscored the urgent need to devise approaches that work equally well for different demographic groups. Hence, we posit that ensuring fairness in multimodal cyberbullying detectors (e.g., equal performance irrespective of the gender of the victim) is an important research challenge. We propose a fairness-aware fusion framework that ensures that both fairness and accuracy remain important considerations when combining data coming from multiple modalities. In this Bayesian fusion framework, the inputs coming from different modalities are combined in a way that is cognizant of the different confidence levels associated with each feature and the interdependencies between features. Specifically, this framework assigns weights to different modalities based not just on accuracy but also on fairness. Results of applying the framework to a multimodal (visual + text) cyberbullying detection problem demonstrate the value of the proposed framework in ensuring both accuracy and fairness.
AB - Recent reports of bias in multimedia algorithms (e.g., lower accuracy of face detection for women and persons of color) have underscored the urgent need to devise approaches that work equally well for different demographic groups. Hence, we posit that ensuring fairness in multimodal cyberbullying detectors (e.g., equal performance irrespective of the gender of the victim) is an important research challenge. We propose a fairness-aware fusion framework that ensures that both fairness and accuracy remain important considerations when combining data coming from multiple modalities. In this Bayesian fusion framework, the inputs coming from different modalities are combined in a way that is cognizant of the different confidence levels associated with each feature and the interdependencies between features. Specifically, this framework assigns weights to different modalities based not just on accuracy but also on fairness. Results of applying the framework to a multimodal (visual + text) cyberbullying detection problem demonstrate the value of the proposed framework in ensuring both accuracy and fairness.
KW - Bayesian Fusion
KW - Bias in Machine Learning
KW - Cyberbullying Detection
KW - Fairness
KW - Multimedia Fusion
UR - http://www.scopus.com/inward/record.url?scp=85097246151&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097246151&partnerID=8YFLogxK
U2 - 10.1109/BigMM50055.2020.00032
DO - 10.1109/BigMM50055.2020.00032
M3 - Conference contribution
AN - SCOPUS:85097246151
T3 - Proceedings - 2020 IEEE 6th International Conference on Multimedia Big Data, BigMM 2020
SP - 166
EP - 173
BT - Proceedings - 2020 IEEE 6th International Conference on Multimedia Big Data, BigMM 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th IEEE International Conference on Multimedia Big Data, BigMM 2020
Y2 - 24 September 2020 through 26 September 2020
ER -