An Efficient Policy Iteration Algorithm for Dynamic Programming Equations

Alla, Alessandro; Falcone, Maurizio; Kalise, Dante


© 2015 Society for Industrial and Applied Mathematics. We present an accelerated algorithm for the solution of static Hamilton–Jacobi–Bellman equations related to optimal control problems. Our scheme is based on a classic policy iteration procedure, which is known to have superlinear convergence in many relevant cases provided the initial guess is sufficiently close to the solution. Without such a guess, the iteration often degenerates into behavior similar to that of a value iteration method, with increased computation time. The new scheme circumvents this problem by combining the advantages of both algorithms through an efficient coupling. The method starts with a coarse-mesh value iteration phase and then switches to a fine-mesh policy iteration procedure once a certain error threshold is reached. A delicate point is determining this threshold so as to avoid cumbersome value iteration computations while still ensuring convergence of the policy iteration method to the optimal solution. We analyze the methods and their efficient coupling in a number of examples in different dimensions, illustrating their properties.
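
The coupling described in the abstract can be sketched on a simple finite-state discounted problem, used here as a stand-in for the discretized HJB equation (the function name, parameters, and the switching threshold `eps_switch` are illustrative assumptions, not taken from the paper): run value iteration until the residual falls below the threshold, then hand the current guess to policy iteration for fast local convergence.

```python
import numpy as np

def accelerated_vi_pi(P, c, gamma, eps_switch=1e-2):
    """Illustrative hybrid scheme in the spirit of the paper's coupling.

    P: (A, S, S) transition matrices, one per control/action.
    c: (A, S) stage costs.
    gamma: discount factor (plays the role of the contraction
           induced by the semi-Lagrangian HJB discretization).
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    # Phase 1: value iteration -- globally convergent but only
    # linearly, so we stop once the residual is below eps_switch.
    while True:
        v_new = (c + gamma * P @ v).min(axis=0)
        if np.max(np.abs(v_new - v)) < eps_switch:
            v = v_new
            break
        v = v_new
    # Phase 2: policy iteration, warm-started from the VI result.
    policy = (c + gamma * P @ v).argmin(axis=0)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
        P_pi = P[policy, np.arange(S), :]
        c_pi = c[policy, np.arange(S)]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # Policy improvement: greedy update with respect to v.
        new_policy = (c + gamma * P @ v).argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy
```

At termination the returned `v` satisfies the discrete Bellman equation `v = min_a (c_a + gamma * P_a v)` up to linear-solver precision, since the final policy is greedy with respect to its own evaluation. The delicate choice of `eps_switch` mirrors the threshold discussed in the abstract: too large and policy iteration may start far from the solution; too small and the value iteration phase dominates the cost.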

Journal Article Type Article
Publication Date Jan 20, 2015
Journal SIAM Journal on Scientific Computing
Print ISSN 1064-8275
Electronic ISSN 1095-7197
Publisher Society for Industrial and Applied Mathematics
Peer Reviewed Yes
Volume 37
Issue 1
Pages A181-A200
APA6 Citation Alla, A., Falcone, M., & Kalise, D. (2015). An Efficient Policy Iteration Algorithm for Dynamic Programming Equations. SIAM Journal on Scientific Computing, 37(1), A181-A200.
Keywords Applied Mathematics; Computational Mathematics