Computational Method for Nonlinear Stochastic Optimal Control
by John J. Westman and Floyd B. Hanson
Proceedings 1999 American Control Conference
Abstract:
Nonlinear stochastic optimal control problems are treated that are
nonlinear in the state dynamics but linear in the control.
The cost functional is a general function of the state,
but the costs are quadratic in the control.
The system is subject to random fluctuations due to discontinuous Poisson
noise that depends on both the state and control, as well as to
continuous Gaussian noise.
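As a minimal sketch of the class of dynamics described above, the following Euler-type simulation steps a scalar state driven by both a Gaussian (diffusion) term and a Poisson jump term whose rate and amplitude depend on state and control. All of the model functions (f, g, h, lam) are hypothetical placeholders chosen only to illustrate the structure (nonlinear in the state, linear in the control), not the paper's actual application model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model ingredients (illustrative only, not from the paper):
f = lambda x, u: -x**3 + u              # drift: nonlinear in x, linear in u
g = lambda x: 0.3 * np.sqrt(1 + x**2)   # continuous Gaussian noise coefficient
h = lambda x, u: -0.5 * x + 0.1 * u     # jump amplitude, linear in u
lam = lambda x, u: 1.0 + x**2           # Poisson jump rate, state dependent

def simulate(x0=1.0, u_fn=lambda x: 0.0, T=1.0, n=1000):
    """Euler path of dx = f(x,u) dt + g(x) dW + h(x,u) dP."""
    dt = T / n
    x = x0
    for _ in range(n):
        u = u_fn(x)
        dW = rng.normal(0.0, np.sqrt(dt))       # Gaussian increment
        dP = rng.poisson(lam(x, u) * dt)        # Poisson jump count in dt
        x = x + f(x, u) * dt + g(x) * dW + h(x, u) * dP
    return x
```

With the stabilizing cubic drift above, an uncontrolled path stays bounded; swapping in a feedback `u_fn` lets one compare controlled and uncontrolled sample paths.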
This general framework provides a comprehensive model for numerous
applications that are subject to random environments.
A stochastic dynamic programming approach is used, and the theory for an
iterative algorithm is formulated utilizing a least-squares equivalent
of a genuine LQGP (linear-quadratic Gaussian-Poisson) problem to
approximate the nonlinear state-space dependence of the problem, which
is LQGP in the control only, in order to accelerate the convergence of
the nonlinear control problem. A particular contribution of this paper
is the treatment of a Poisson jump process that is linear in the control
vector, within the context of a nonlinear control problem.
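Because the costs are quadratic in the control and the dynamics are linear in the control, the pointwise minimization over the control in the dynamic programming equation reduces to a linear solve at each state. The snippet below sketches that step for one state point; the weight matrix R, control coefficient G, and value-function gradient Vx are hypothetical numbers for illustration, and any additional terms arising from the control dependence of the Poisson jumps are omitted here.

```python
import numpy as np

# Hypothetical data at one state point (illustrative only):
R = np.array([[2.0, 0.0], [0.0, 1.0]])   # quadratic control cost weight
G = np.array([[1.0, 0.0], [0.5, 1.0]])   # control coefficient in the dynamics
Vx = np.array([0.8, -0.3])               # value-function gradient at this state

# Stationarity of the quadratic-in-u expression
#   0.5 * u @ R @ u + (G @ u) @ Vx
# gives R u + G^T Vx = 0, so the regular control is:
u_star = -np.linalg.solve(R, G.T @ Vx)
```

This closed-form minimization at each grid point is what makes the quadratic-in-control structure computationally attractive compared with a general nonlinear control dependence.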