A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR

Roberto González-Ibáñez, Aileen Esparza-Villamán, Juan Carlos Vargas-Godoy, Chirag Shah

Research output: Contribution to journal › Article

Abstract

Implicit detection of relevance has been approached by many researchers during the last decade. From the use of individual measures to the combination of multiple features from different sources (multimodality), studies have shown that it is feasible to automatically detect whether a document is relevant. Despite promising results, it is not yet clear to what extent multimodality constitutes an effective approach compared to unimodality. In this article, we hypothesize that it is possible to build unimodal models capable of outperforming multimodal models in the detection of perceived relevance. To test this hypothesis, we conducted three experiments to compare unimodal and multimodal classification models built from a combination of 24 features. Our classification experiments showed that a univariate unimodal model based on the left-click feature supports our hypothesis. On the other hand, our prediction experiment suggests that multimodality slightly improves early classification compared to the best unimodal models. Based on our results, we argue that the feasibility of practical applications of state-of-the-art multimodal approaches may be strongly constrained by technological, cultural, ethical, and legal aspects, in which case unimodality may offer a better alternative today for supporting relevance detection in interactive information retrieval systems.
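
The comparison the abstract describes can be illustrated with a small sketch. The following is a hypothetical example, not the authors' code: it contrasts a univariate classifier trained on a single interaction feature (standing in for left-clicks) with a classifier trained on a 24-feature multimodal representation, evaluated with cross-validation on synthetic data. Feature names, data, and model choice are illustrative assumptions only.

    # Hypothetical sketch: univariate (left-click only) vs. multimodal model.
    # Data and feature names are synthetic placeholders, not the study's dataset.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_docs = 500

    # Placeholder matrix of 24 per-document features drawn from several
    # modalities (e.g., interaction logs, eye tracking, physiological signals).
    X_multimodal = rng.normal(size=(n_docs, 24))
    # The univariate model uses only one interaction feature, here column 0
    # standing in for a left-click count.
    X_leftclick = X_multimodal[:, [0]]
    # Synthetic binary labels for perceived relevance.
    y = rng.integers(0, 2, size=n_docs)

    clf = LogisticRegression(max_iter=1000)
    uni_f1 = cross_val_score(clf, X_leftclick, y, cv=10, scoring="f1").mean()
    multi_f1 = cross_val_score(clf, X_multimodal, y, cv=10, scoring="f1").mean()
    print(f"univariate (left-click) F1: {uni_f1:.3f}")
    print(f"multimodal (24 features)  F1: {multi_f1:.3f}")

On real behavioral data, the same protocol would support the comparison the article reports: whether the single-feature model matches or exceeds the multimodal one.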

Original language: English (US)
Journal: Journal of the Association for Information Science and Technology
ISSN: 2330-1635
Publisher: John Wiley and Sons Ltd
DOI: 10.1002/asi.24202
State: Published - Jan 1 2019

Fingerprint

multimodality
experiments
information retrieval
information retrieval systems

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Networks and Communications
  • Information Systems and Management
  • Library and Information Sciences

Cite this


González-Ibáñez, R., Esparza-Villamán, A., Vargas-Godoy, J. C., & Shah, C. (2019). A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24202
