
Robust support vector machines for classification with nonconvex and smooth losses

  • Yunlong Feng
  • Yuning Yang
  • Xiaolin Huang
  • Siamak Mehrkanoon
  • Johan A.K. Suykens

Affiliations: KU Leuven; Shanghai Jiao Tong University; University of Waterloo

Research output: Contribution to journal › Article › peer-review

50 Scopus citations

Abstract

This letter addresses the problem of robustly learning a large margin classifier in the presence of label noise. To this end, we propose robustified large margin support vector machines. The robustness of the proposed robust support vector classifiers (RSVC), interpreted in this work from a weighted viewpoint, stems from the use of nonconvex classification losses. Beyond robustness, we show that RSVC is also smooth, which follows from the use of smooth classification losses. The idea behind RSVC comes from M-estimation in statistics, since the proposed robust and smooth classification losses can be viewed as one-sided cost functions from robust statistics. Its Fisher consistency and generalization ability are also investigated. Another appealing property of RSVC is that its solution can be obtained by iteratively solving weighted squared hinge loss-based support vector machine problems. We further show that each iteration amounts to a quadratic programming problem in the dual space, which can be solved by state-of-the-art methods. We thus propose an iteratively reweighted algorithm and provide a constructive proof of its convergence to a stationary point. The effectiveness of the proposed classifiers is verified on both artificial and real data sets.
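The iteratively reweighted scheme summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the Welsch-type exponential reweighting, the scale parameter `sigma`, and the primal gradient-descent subproblem solver are all assumptions made here for brevity; the paper instead derives the weights from its nonconvex smooth losses and solves the dual quadratic program of each weighted squared-hinge subproblem with standard solvers.

```python
import numpy as np


def fit_weighted_sq_hinge(X, y, sample_w, lam=0.1, lr=0.1, iters=200):
    """Approximately solve the weighted squared-hinge-loss SVM subproblem.

    Primal gradient descent is used here for simplicity; the paper solves
    the corresponding dual QP instead.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        margin = y * (X @ w + b)
        slack = np.maximum(0.0, 1.0 - margin)        # hinge slack per sample
        grad_coef = -2.0 * sample_w * slack * y      # d(weighted slack^2)/d(margin)
        w -= lr * (X.T @ grad_coef / n + lam * w)    # gradient step on w
        b -= lr * grad_coef.mean()                   # gradient step on bias
    return w, b


def rsvc_sketch(X, y, sigma=1.0, outer_iters=10, lam=0.1):
    """Iteratively reweighted sketch of a robust SVC.

    Points with large slack (likely mislabeled) are downweighted via an
    assumed Welsch-type weight exp(-slack^2 / sigma^2), mimicking the effect
    of a nonconvex smooth loss.
    """
    sample_w = np.ones(X.shape[0])
    for _ in range(outer_iters):
        w, b = fit_weighted_sq_hinge(X, y, sample_w, lam=lam)
        slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
        sample_w = np.exp(-(slack ** 2) / sigma ** 2)
    return w, b
```

On a linearly separable toy problem with a handful of flipped labels, the reweighting drives the weights of the flipped points toward zero, so the learned separator tracks the clean decision boundary rather than chasing the noise.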

Original language: English
Pages (from-to): 1217-1247
Number of pages: 31
Journal: Neural Computation
Volume: 28
Issue number: 6
State: Published - Jun 1 2016
