Learning to select rankers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

20 Scopus citations

Abstract

Combining evidence from multiple retrieval models has been widely studied in the context of distributed search, metasearch, and rank fusion. Much of the prior work has focused on combining the retrieval scores (or the rankings) assigned by different retrieval models or ranking algorithms. In this work, we focus on the problem of choosing between retrieval models using performance estimation. We propose modeling the differences in retrieval performance directly by using rank-time features (features that are available to the ranking algorithms) and the retrieval scores assigned by the ranking algorithms. Our experimental results show that when choosing between two rankers, our approach yields significant improvements over the best individual ranker.
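The per-query selection idea in the abstract can be sketched as a binary classifier: for each training query, build a small feature vector of rank-time signals, label it with whichever of the two rankers performed better, and at query time route the query to the predicted winner. This is a minimal illustrative sketch, not the paper's actual features or learner; the feature set, toy data, and logistic-regression choice here are all assumptions.

```python
# Hedged sketch of per-query ranker selection (not the paper's exact method).
# Assumed setup: each query yields rank-time features, e.g. the top retrieval
# score of each ranker and the gap between them; the label records which
# ranker achieved the higher effectiveness on held-out judgments.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_selector(features, labels, lr=0.1, epochs=200):
    """Train a logistic-regression selector; label 1 means 'prefer ranker B'."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss for this example
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def select_ranker(w, b, x):
    """Return 'B' if the model predicts ranker B outperforms A on this query."""
    return "B" if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else "A"

# Toy training data: feature = [top score of A, top score of B, score gap];
# the "winning" ranker is taken to be the one with the higher top score.
random.seed(0)
train_x, train_y = [], []
for _ in range(200):
    sa, sb = random.random(), random.random()
    train_x.append([sa, sb, sb - sa])
    train_y.append(1 if sb > sa else 0)

w, b = train_selector(train_x, train_y)
print(select_ranker(w, b, [0.2, 0.9, 0.7]))  # large gap in favor of B
```

In the paper's setting the labels would come from a retrieval-effectiveness metric rather than raw scores, and the selector could use any supervised learner; the logistic model above is just the simplest self-contained choice.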

Original language: English
Title of host publication: SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Pages: 855-856
Number of pages: 2
DOIs
State: Published - 2010
Event: 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010 - Geneva, Switzerland
Duration: Jul 19, 2010 - Jul 23, 2010

Publication series

Name: SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval

Conference

Conference: 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010
Country/Territory: Switzerland
City: Geneva
Period: 07/19/10 - 07/23/10

Keywords

  • Combining searches
  • Learning to rank
  • Metasearch
