
Robust kernel principal component analysis

  • Minh Hoai Nguyen
  • Fernando De La Torre

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

46 Scopus citations

Abstract

Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows non-linear feature extraction. In KPCA, data in the input space is mapped to a (usually) higher-dimensional feature space where the data can be linearly modeled. The feature space is typically induced implicitly by a kernel function, and linear PCA in the feature space is performed via the kernel trick. However, because the feature space is only implicit, some extensions of PCA, such as robust PCA, cannot be directly generalized to KPCA. This paper presents a technique to overcome this problem and extends it to a unified framework for treating noise, missing data, and outliers in KPCA. Our method is based on a novel cost function for performing inference in KPCA. Extensive experiments on both synthetic and real data show that our algorithm outperforms existing methods.
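The paper's robust inference procedure builds on standard KPCA as described in the abstract. As a point of reference only, the following is a minimal NumPy sketch of ordinary kernel PCA with the kernel trick; the RBF kernel choice, function names, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=1.0):
    # Standard (non-robust) KPCA: center the kernel matrix in feature space,
    # then eigendecompose it and project onto the leading components.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one_n = np.ones((n, n)) / n
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(K_centered)       # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]      # keep largest
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    # Projections of the training points onto the principal components
    # in feature space (scaled eigenvectors of the centered kernel matrix).
    return eigvecs * np.sqrt(np.maximum(eigvals, 1e-12))

# Example usage on toy data.
X = np.random.randn(100, 3)
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

The robust variant proposed in the paper replaces the implicit least-squares reconstruction underlying this procedure with a cost function designed to handle noise, missing entries, and outliers; that cost function is not reproduced here.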

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference
Publisher: Neural Information Processing Systems
Pages: 1185-1192
Number of pages: 8
ISBN (Print): 9781605609492
State: Published - 2009
Event: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008 - Vancouver, BC, Canada
Duration: Dec 8, 2008 to Dec 11, 2008

Publication series

Name: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference

Conference

Conference: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008
Country/Territory: Canada
City: Vancouver, BC
Period: 12/8/08 to 12/11/08

