Group recommendations via multi-armed bandits

José Bento, Stratis Ioannidis, S. Muthukrishnan, Jinyun Yan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

We study recommendations for persistent groups that repeatedly engage in a joint activity. We approach this as a multi-armed bandit problem. We design a recommendation policy and show that it has logarithmic regret. Our analysis also shows that regret depends linearly on d, the size of the underlying persistent group. We evaluate our policy on movie recommendations over the MovieLens and MoviePilot datasets. Copyright is held by the author/owner(s).
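The two-page abstract does not spell out the authors' policy. For reference only, below is a minimal sketch of UCB1 (Auer et al., 2002), a standard multi-armed bandit index policy that likewise achieves logarithmic regret: play each arm once, then repeatedly pick the arm maximizing its empirical mean reward plus an exploration bonus. This is not the paper's group-recommendation policy, and the arms, reward model, and horizon in the example are hypothetical.

import math
import random


def ucb1(num_arms: int, pull, horizon: int) -> list[int]:
    """Run UCB1 for `horizon` rounds; `pull(arm)` returns a reward in [0, 1]."""
    counts = [0] * num_arms   # times each arm has been played
    sums = [0.0] * num_arms   # cumulative reward per arm
    choices = []

    for t in range(1, horizon + 1):
        if t <= num_arms:
            # Initialization: play every arm once.
            arm = t - 1
        else:
            # Pick the arm with the highest UCB1 index:
            # empirical mean + sqrt(2 ln t / n_a) exploration bonus.
            arm = max(
                range(num_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        choices.append(arm)
    return choices


if __name__ == "__main__":
    # Hypothetical setting: 3 items with Bernoulli rewards of different means.
    means = [0.2, 0.5, 0.8]
    random.seed(0)
    picks = ucb1(3, lambda a: float(random.random() < means[a]), 2000)
    print("plays per arm:", [picks.count(a) for a in range(3)])

Over 2000 rounds, the best arm (mean 0.8) ends up played far more often than the others, which is the logarithmic-regret behavior the abstract refers to.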

Original language: English (US)
Title of host publication: WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion
Pages: 463-464
Number of pages: 2
DOIs
State: Published - 2012
Event: 21st Annual Conference on World Wide Web, WWW'12 - Lyon, France
Duration: Apr 16, 2012 - Apr 20, 2012

Publication series

Name: WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion

Other

Other: 21st Annual Conference on World Wide Web, WWW'12
Country/Territory: France
City: Lyon
Period: 4/16/12 - 4/20/12

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications

Keywords

  • Group recommendation
  • Multi-armed bandits
