Abstract
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs of popular stochastic optimization methods; however, this connection is not well studied. We explore the relationship by applying simulated annealing to an SG-MCMC algorithm. Furthermore, we extend recent SG-MCMC methods with two key components: i) adaptive preconditioners (as in AdaGrad or RMSprop), and ii) adaptive element-wise momentum weights. The zero-temperature limit gives a novel stochastic optimization method with adaptive element-wise momentum weights, whereas conventional optimization methods use a single, static momentum weight shared across all parameters. Under certain assumptions, our theoretical analysis suggests that the proposed simulated annealing approach converges to a neighborhood of the global optimum. Experiments on several deep neural network models show state-of-the-art results relative to related stochastic optimization algorithms.
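The abstract does not spell out the update equations, but the ingredients it lists can be illustrated concretely. The following is a minimal Python sketch, not the paper's actual algorithm: the Euler-style update, the inverse-temperature schedule `anneal(t)`, and the hyperparameters `eta`, `sigma`, and `lam` are all illustrative assumptions.

```python
import numpy as np

def annealed_step(theta, u, alpha, v, grad, t, rng,
                  eta=0.01, sigma=0.99, lam=1e-2,
                  anneal=lambda t: float(t)):
    """One illustrative update combining the abstract's three ingredients:
    an RMSprop-like adaptive preconditioner (v, g), element-wise adaptive
    momentum weights (alpha, updated thermostat-style), and simulated
    annealing via the inverse temperature beta = anneal(t). All constants
    here are assumptions for the sketch, not the paper's settings."""
    beta = anneal(t)                            # inverse-temperature schedule
    v = sigma * v + (1.0 - sigma) * grad ** 2   # second-moment estimate (RMSprop-style)
    g = 1.0 / np.sqrt(lam + np.sqrt(v))         # element-wise preconditioner
    alpha = alpha + (u ** 2 - eta / beta)       # per-coordinate momentum weights
    noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal(theta.shape)
    u = (1.0 - alpha) * u - eta * g * grad + noise
    theta = theta + g * u
    return theta, u, alpha, v

# Toy usage: -log p(theta) = 0.5 * ||theta||^2, so grad = theta and the
# global optimum is the origin. The annealed chain drifts toward it as
# the temperature decreases.
rng = np.random.default_rng(0)
theta = np.array([3.0, -2.0])
u, alpha, v = np.zeros(2), np.full(2, 0.9), np.zeros(2)
for t in range(1, 5001):
    theta, u, alpha, v = annealed_step(theta, u, alpha, v,
                                       grad=theta, t=t, rng=rng)
print(theta)  # near [0, 0]
```

As `beta` grows, the injected noise vanishes and the update reduces to a deterministic preconditioned momentum method, which is the zero-temperature limit the abstract refers to.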
| Original language | English |
|---|---|
| Pages | 1051-1060 |
| Number of pages | 10 |
| State | Published - 2016 |
| Event | 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), Cadiz, Spain, May 9 2016 → May 11 2016 |
Conference
| Conference | 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 |
|---|---|
| Country/Territory | Spain |
| City | Cadiz |
| Period | 05/09/16 → 05/11/16 |