
STOCHASTIC AUC MAXIMIZATION WITH DEEP NEURAL NETWORKS

  • Mingrui Liu
  • Yiming Ying
  • Zhuoning Yuan
  • Tianbao Yang

Research output: Contribution to conference › Paper › peer-review

48 Scopus citations

Abstract

Stochastic AUC maximization has garnered increasing interest due to its better fit to imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts predictive power when dealing with extremely complex data. In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle-point reformulation of a surrogate loss of AUC, the problem can be cast as a non-convex-concave min-max problem. The main contribution of this paper is to make stochastic AUC maximization more practical for deep neural networks and big data, with theoretical insights as well. In particular, we propose to exploit the Polyak-Łojasiewicz (PL) condition, which has been proved and observed in deep learning and which enables us to develop new stochastic algorithms with a faster convergence rate and a more practical step-size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition, with an adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms.
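The saddle-point reformulation mentioned in the abstract rewrites a squared surrogate of the AUC loss as a min-max problem: minimize over the model parameters and two auxiliary scalars (a, b), maximize over a dual scalar alpha. The NumPy sketch below is only an illustration of that structure, not the paper's algorithm: it uses a linear scorer h(x) = w·x instead of a deep network, plain primal-dual SGD rather than the PL-based or AdaGrad-style methods the paper analyzes, and the function names and step size are hypothetical.

```python
import numpy as np

def auc_minmax_loss(w, a, b, alpha, x, y, p):
    """Per-sample saddle-point surrogate of the squared AUC loss:
    minimize over (w, a, b), maximize over alpha; p = P(y = +1).
    A linear scorer h(x) = x @ w stands in for the deep network."""
    h = x @ w
    pos, neg = float(y == 1), float(y == -1)
    return ((1 - p) * (h - a) ** 2 * pos
            + p * (h - b) ** 2 * neg
            + 2 * (1 + alpha) * (p * h * neg - (1 - p) * h * pos)
            - p * (1 - p) * alpha ** 2)

def pdsg_step(params, x, y, p, eta=0.05):
    """One primal-dual stochastic gradient step on a single sample:
    gradient descent in (w, a, b), gradient ascent in alpha."""
    w, a, b, alpha = params
    h = x @ w
    pos, neg = float(y == 1), float(y == -1)
    # Gradient of the loss w.r.t. the score h, then chain rule through h = x @ w.
    dh = (2 * (1 - p) * (h - a) * pos + 2 * p * (h - b) * neg
          + 2 * (1 + alpha) * (p * neg - (1 - p) * pos))
    w = w - eta * dh * x
    a = a + eta * 2 * (1 - p) * (h - a) * pos            # descent in a
    b = b + eta * 2 * p * (h - b) * neg                  # descent in b
    alpha = alpha + eta * (2 * (p * h * neg - (1 - p) * h * pos)
                           - 2 * p * (1 - p) * alpha)    # ascent in alpha
    return w, a, b, alpha
```

At the saddle point, a and b track the mean scores of the positive and negative classes, and alpha tracks their gap; the key practical benefit of the reformulation is that each stochastic update touches only one sample, avoiding the pairwise positive-negative comparisons in the original AUC objective.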

Original language: English
State: Published - 2020
Event: 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia
Duration: Apr 30 2020 → …

Conference

Conference: 8th International Conference on Learning Representations, ICLR 2020
Country/Territory: Ethiopia
City: Addis Ababa
Period: 04/30/20 → …
