Combining classifiers with informational confidence

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

We propose a new statistical method for learning normalized confidence values in multiple classifier systems. Our main idea is to adjust confidence values so that their nominal values equal the information they actually convey. To do so, we assume that this information depends on the actual performance of each confidence value on an evaluation set. As our information measure, we use Shannon's well-known logarithmic notion of information. With confidence values matching their informational content, the classifier combination scheme reduces to the simple sum-rule, theoretically justifying this elementary combination scheme. In experimental evaluations for script identification and for both handwritten and printed character recognition, we achieve a consistent improvement over the best single recognition rate. We cherish the hope that our information-theoretical framework helps fill the theoretical gap that still exists in classifier combination, putting the excellent practical performance of multiple classifier systems on a more solid basis.

Original language: English
Title of host publication: Machine Learning in Document Analysis and Recognition
Editors: Simone Marinai, Hiromichi Fujisawa
Pages: 163-191
Number of pages: 29
DOIs
State: Published - 2008

Publication series

Name: Studies in Computational Intelligence
Volume: 90
