
Screening for a Reweighted Penalized Conditional Gradient Method

Research output: Contribution to journal › Article › peer-review


Abstract

The conditional gradient method (CGM) is widely used in large-scale sparse convex optimization, having a low per-iteration computational cost for structured sparse regularizers and a greedy approach for collecting nonzeros. We explore the sparsity-acquiring properties of a general penalized CGM (P-CGM) for convex regularizers and a reweighted penalized CGM (RP-CGM) for nonconvex regularizers, replacing the usual convex constraints with gauge-inspired penalties. This generalization does not increase the per-iteration complexity noticeably. Without assuming bounded iterates or using line search, we show O(1/t) convergence of the gap of each subproblem, which measures distance to a stationary point. We couple this with a screening rule which is safe in the convex case, converging to the true support at a rate O(1/δ²), where δ ≥ 0 measures how close the problem is to degeneracy. In the nonconvex case the screening rule converges to the true support in a finite number of iterations, but is not necessarily safe in the intermediate iterates. In our experiments, we verify the consistency of the method and adjust the aggressiveness of the screening rule by tuning the concavity of the regularizer.
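To make the greedy, sparsity-collecting behavior the abstract describes concrete, here is a minimal sketch of the classic conditional gradient (Frank-Wolfe) method for constrained sparse least squares. This is an illustration of the standard constrained CGM only, not the paper's P-CGM or RP-CGM, which replace the constraint with a gauge-inspired penalty; the function name, problem instance, and step-size rule are choices made for this sketch.

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=500):
    """Classic conditional gradient (Frank-Wolfe) for
        min 0.5 * ||A x - b||^2   s.t.   ||x||_1 <= tau.
    Illustrative sketch only; the paper's P-CGM swaps the
    constraint for a gauge-inspired penalty."""
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        grad = A.T @ (A @ x - b)
        # Greedy atom selection: the linear minimization oracle over
        # the l1 ball returns a single signed coordinate, so each
        # iteration adds at most one nonzero -- this is the
        # "greedy approach for collecting nonzeros".
        i = np.argmax(np.abs(grad))
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)  # standard open-loop step size (no line search)
        x = (1.0 - gamma) * x + gamma * s
    return x
```

Each iterate is a convex combination of at most t atoms, which is why CGM iterates stay sparse; the O(1/t) decay of the Frank-Wolfe duality gap mirrors the subproblem-gap rate stated in the abstract.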

Original language: English
Article number: 14
Journal: Open Journal of Mathematical Optimization
Volume: 3
DOIs
State: Published - 2022

Keywords

  • Dual screening
  • atomic sparsity
  • conditional gradient method
  • reweighted optimization

