Abstract
Rankboost has been shown to be an effective algorithm for combining ranks. However, its ability to generalize well without overfitting depends directly on the choice of weak learner, in the sense that regularization of the final ranking function derives from the regularization properties of its weak learners. We present a regularization property called consistency in preference and confidence that translates mathematically into monotonic concavity, and we describe a new weak ranking learner (MWGR) that generates ranking functions with this property. In experiments combining ranks from multiple face recognition algorithms, and in an experiment combining text information retrieval systems, ranking functions built with MWGR proved superior to those built with binary weak learners.
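The abstract contrasts MWGR with binary weak learners inside Rankboost. As background, the generic Rankboost loop (maintain a distribution over misordered pairs, pick the weak ranker with the largest weighted pair-agreement, reweight) can be sketched as below. This is a minimal illustration using binary threshold ("stump") weak rankers, the baseline the paper compares against, not the MWGR learner itself; the function and variable names are this sketch's own, not from the paper.

```python
import math

def rankboost(pairs, n_features, rounds=20):
    """Generic Rankboost sketch with binary threshold weak rankers.

    pairs: list of (x_bad, x_good) feature vectors, where x_good should
    be ranked above x_bad. Returns a scoring function H(x); higher
    scores mean a higher rank.
    """
    n = len(pairs)
    D = [1.0 / n] * n            # distribution over the training pairs
    ensemble = []                # (alpha, feature index, threshold) triples

    for _ in range(rounds):
        best = None
        # exhaustive search over (feature, threshold) stumps for max |r|,
        # where r is the weighted agreement of the stump with the pairs
        for f in range(n_features):
            thresholds = sorted({x[f] for pair in pairs for x in pair})
            for theta in thresholds:
                h = lambda x, f=f, t=theta: 1.0 if x[f] > t else 0.0
                r = sum(D[i] * (h(g) - h(b))
                        for i, (b, g) in enumerate(pairs))
                if best is None or abs(r) > abs(best[0]):
                    best = (r, f, theta)
        r, f, theta = best
        if abs(r) >= 1.0:
            r = math.copysign(0.999, r)   # cap so alpha stays finite
        alpha = 0.5 * math.log((1 + r) / (1 - r))
        ensemble.append((alpha, f, theta))
        # reweight: emphasize pairs the ensemble still misorders
        h = lambda x: 1.0 if x[f] > theta else 0.0
        D = [D[i] * math.exp(alpha * (h(b) - h(g)))
             for i, (b, g) in enumerate(pairs)]
        Z = sum(D)
        D = [d / Z for d in D]

    def H(x):
        return sum(a * (1.0 if x[f] > t else 0.0) for a, f, t in ensemble)
    return H
```

In this binary-stump setting each weak ranker is a step function; the paper's point is that replacing these steps with monotonically concave weak learners regularizes the combined ranking function.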
| Original language | English (US) |
|---|---|
| Pages (from-to) | 791-812 |
| Number of pages | 22 |
| Journal | Journal of Machine Learning Research |
| Volume | 8 |
| State | Published - Apr 2007 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence
Keywords
- Convex/concave
- Rankboost
- Ranking
- Regularization