Abstract
This note provides a simple example demonstrating that, if exact computations are allowed, the number of iterations required for the value iteration algorithm to find an optimal policy for discounted dynamic programming problems may grow arbitrarily quickly with the size of the problem. In particular, the number of iterations can be exponential in the number of actions. Thus, unlike the policy iteration algorithm, the value iteration algorithm is not strongly polynomial for discounted dynamic programming.
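For context, the sketch below shows the standard value iteration algorithm the note analyzes, applied to a finite discounted MDP. This is not the paper's counterexample; the function name, array shapes, stopping tolerance, and the tiny MDP used in the usage example are all illustrative assumptions.

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8, max_iter=1_000_000):
    """Value iteration for a finite discounted MDP (illustrative sketch).

    P     -- transition probabilities, shape (S, A, S): P[s, a, s2] = Pr(s2 | s, a)
    r     -- one-step rewards, shape (S, A)
    gamma -- discount factor in [0, 1)
    """
    S, A = r.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        # Bellman optimality update: Q(s, a) = r(s, a) + gamma * E[v(next state)]
        q = r + gamma * (P @ v)        # shape (S, A)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v, q.argmax(axis=1)         # value function and a greedy policy

# Hypothetical 2-state, 2-action MDP, just to exercise the routine.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
v, policy = value_iteration(P, r, gamma=0.95)
print(v, policy)
```

The paper's point concerns this iterative scheme: with exact arithmetic, the number of Bellman updates needed before the greedy policy becomes optimal can grow exponentially in the number of actions.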
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 130-131 |
| Number of pages | 2 |
| Journal | Operations Research Letters |
| Volume | 42 |
| Issue number | 2 |
| DOIs | |
| State | Published - Mar 2014 |
Keywords
- Algorithm
- Markov decision process
- Policy
- Strongly polynomial
- Value iteration