Evaluating the quality of educational answers in community question-answering

Long T. Le, Chirag Shah, Erik Choi

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

15 Scopus citations

Abstract

Community Question-Answering (CQA), where questions and answers are generated by peers, has become a popular method of information seeking in online environments. While the content repositories created through CQA sites have been used widely to support general-purpose tasks, using them as online digital libraries that support educational needs is an emerging practice. Horizontal CQA services, such as Yahoo! Answers, and vertical CQA services, such as Brainly, aim to help students improve their learning process by answering their educational questions. In these services, receiving high-quality answers to a question is a critical factor not only for user satisfaction, but also for supporting learning. However, the questions are not necessarily answered by experts, and the askers may not have enough knowledge and skill to evaluate the quality of the answers they receive. This could be problematic when students build their own knowledge base by applying inaccurate information or knowledge acquired from online sources. Using moderators could alleviate this problem. However, a moderator's evaluation of answer quality may be inconsistent because it is based on subjective assessment. Employing human assessors may also be insufficient given the large amount of content available on a CQA site. To address these issues, we propose a framework for automatically assessing the quality of answers. This is achieved by integrating different groups of features - personal, community-based, textual, and contextual - to build a classification model and determine what constitutes answer quality. To test this evaluation framework, we collected more than 10 million educational answers posted by more than 3 million users on Brainly's United States and Poland sites. The experiments conducted on these datasets show that the model using Random Forest (RF) achieves more than 83% accuracy in identifying high-quality answers.
In addition, the findings indicate that personal and community-based features have more predictive power in assessing answer quality. Our approach also achieves high values on other key metrics, such as F1-score and area under the ROC curve. The work reported here can be useful in many other contexts where providing automatic quality assessment in a digital repository of textual information is paramount.
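The pipeline described in the abstract - concatenating personal, community-based, textual, and contextual feature groups, training a Random Forest classifier, and scoring it with accuracy, F1, and AUC - can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the specific feature names, group sizes, and labels are assumptions made for the example.

```python
# Illustrative sketch of a feature-group + Random Forest quality classifier.
# All feature names and data below are synthetic assumptions, not the
# paper's actual Brainly features or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Four hypothetical feature groups, two features each:
personal = rng.random((n, 2))    # e.g. answerer's points, rank
community = rng.random((n, 2))   # e.g. thanks received, peer ratings
textual = rng.random((n, 2))     # e.g. answer length, readability
contextual = rng.random((n, 2))  # e.g. response time, subject match

# Integrate the groups into one feature matrix.
X = np.hstack([personal, community, textual, contextual])
# Synthetic "high quality" label, loosely tied to personal/community signals.
y = (personal[:, 0] + community[:, 0] + 0.2 * rng.random(n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, proba))
# Per-feature importances hint at which group carries the most signal,
# analogous to the paper's finding about personal/community features.
print("importances:", clf.feature_importances_.round(3))
```

On real CQA data, the feature importances reported by the forest give a quick (if rough) view of which feature group dominates, which is one way to probe the kind of group-level comparison the paper reports.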

Original language: English (US)
Title of host publication: JCDL 2016 - Proceedings of the 16th ACM/IEEE-CS Joint Conference on Digital Libraries
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 129-138
Number of pages: 10
ISBN (Electronic): 9781450342292
State: Published - Sep 1, 2016
Event: 16th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL 2016 - Newark, United States
Duration: Jun 19, 2016 - Jun 23, 2016

Publication series

Name: Proceedings of the ACM/IEEE Joint Conference on Digital Libraries
Volume: 2016-September
ISSN (Print): 1552-5996

Other

Other: 16th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL 2016
Country: United States
City: Newark
Period: 6/19/16 - 6/23/16

All Science Journal Classification (ASJC) codes

  • Engineering (all)

Keywords

  • Answer Quality
  • Community Question-Answering (CQA)
  • Features
