Abstract
Consider an Itô equation for a scalar-valued process that is controlled through a dynamic and adaptive choice of its diffusion coefficient. Such a control is called a variance control and is said to degenerate when it becomes zero. We consider the problem of choosing a control to minimize a discounted, infinite-horizon cost that penalizes state values close to an equilibrium point of the drift and also imposes a control cost. Admissible controls are required to take values in the closed, bounded interval [0, σ0], where σ0 > 0; in particular, the control can be degenerate. In general, there is a bang-bang optimal control that takes the value σ0 on some open set and is zero otherwise. We discuss the existence and properties of solutions to stochastic differential equations with such controls and characterize the value function and optimal control in more detail, for both linear and nonlinear drift. Employing the Hamilton-Jacobi-Bellman equation and results of [N. V. Krylov, Theory Probab. Appl., 17 (1973), pp. 114-131] and [P.-L. Lions, Comm. Pure Appl. Math., 34 (1981), pp. 121-147], we derive sufficient conditions for the existence of single-region optimal controls, construct examples of multiple-region controls, and provide bounds on the number and size of the regions in which the optimal control is positive.
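The abstract does not state the precise cost functions, so the following LaTeX sketch only illustrates the type of problem described. The drift b, discount rate α, state cost h (assumed large near the equilibrium of b), and control cost c are hypothetical placeholders, not the paper's exact choices.

```latex
% Hedged sketch of the controlled It\^o equation, discounted cost, and formal
% HJB equation suggested by the abstract. The symbols b, \alpha, h, c are
% assumptions for illustration only.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

State dynamics with an adapted variance control $u_t \in [0,\sigma_0]$:
\[
  dX_t = b(X_t)\,dt + u_t\,dW_t, \qquad X_0 = x .
\]

Discounted, infinite-horizon cost and value function:
\[
  J(x;u) = \mathbb{E}_x \int_0^\infty e^{-\alpha t}
           \bigl[\, h(X_t) + c(u_t) \,\bigr]\,dt,
  \qquad
  V(x) = \inf_{u} J(x;u).
\]

Formal Hamilton-Jacobi-Bellman equation; when the bracketed term is linear
(or concave) in $\sigma^2$, the inner minimum is attained at an endpoint of
$[0,\sigma_0]$, which is the source of the bang-bang structure:
\[
  \alpha V(x) = b(x)\,V'(x)
  + \min_{\sigma \in [0,\sigma_0]}
    \Bigl\{ \tfrac{1}{2}\sigma^2 V''(x) + c(\sigma) \Bigr\}
  + h(x).
\]

\end{document}
```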
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-24 |
| Number of pages | 24 |
| Journal | SIAM Journal on Control and Optimization |
| Volume | 39 |
| Issue number | 1 |
| DOIs | |
| State | Published - Aug 2000 |
All Science Journal Classification (ASJC) codes
- Control and Optimization
- Applied Mathematics