
Better together: Fusing visual saliency methods for retrieving perceptually-similar images

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

In this paper, we describe a new model of visual saliency that fuses the results of existing saliency methods. We first briefly survey existing saliency models and motivate fusion as a way to combine the strengths of these existing works. Initial experiments indicate that the fused saliency maps are closer to the ground truth than those of any original method alone. We then apply our method to content-based image retrieval, using the fused saliency as a feature extractor. Experimental evaluation shows a marked improvement in retrieval performance with our fusion method over individual saliency models.
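The abstract does not specify the fusion operator, so the sketch below illustrates one plausible reading: each method's saliency map is min-max normalized and the maps are combined by a weighted average. The function name, the averaging scheme, and the uniform default weights are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_saliency_maps(maps, weights=None):
    """Fuse several saliency maps into a single map.

    Illustrative sketch only: the paper fuses outputs of existing
    saliency models, but the exact operator is not given in the
    abstract, so a normalized weighted average is assumed here.
    """
    maps = [np.asarray(m, dtype=float) for m in maps]
    if weights is None:
        # Assumption: weight all source methods equally.
        weights = [1.0 / len(maps)] * len(maps)
    fused = np.zeros_like(maps[0])
    for m, w in zip(maps, weights):
        # Min-max normalize each map to [0, 1] before combining,
        # so methods with different output scales are comparable.
        rng = m.max() - m.min()
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        fused += w * norm
    # Renormalize in case the weights do not sum to 1.
    return fused / max(sum(weights), 1e-12)
```

For retrieval, such a fused map could then be used to weight or pool local image features into a descriptor, which is one way to read "leveraging a fusion method as a feature extractor."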

Original language: English
Title of host publication: 2015 IEEE International Conference on Consumer Electronics, ICCE 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 507-508
Number of pages: 2
ISBN (Electronic): 9781479975426
DOIs
State: Published - Mar 23 2015
Event: 2015 IEEE International Conference on Consumer Electronics, ICCE 2015 - Las Vegas, United States
Duration: Jan 9 2015 – Jan 12 2015

Publication series

Name: 2015 IEEE International Conference on Consumer Electronics, ICCE 2015

Conference

Conference: 2015 IEEE International Conference on Consumer Electronics, ICCE 2015
Country/Territory: United States
City: Las Vegas
Period: 01/9/15 – 01/12/15

