Abstract
The problem of controlling the linear output of an autonomous nonlinear stochastic differential system is considered. The infinite horizon and the quadratic objective allow the control goal to be interpreted as stabilization of the output near a position determined by the state, which is described by a nonlinear stochastic differential equation. The solution is obtained for two variants of the model: with exact measurements and under the assumption that the linear output represents indirect observations of the state. In the case of indirect observations, a continuous-time Markov chain is used as the state model, which makes it possible to separate the control and filtering problems and to apply the Wonham filter. In both variants, the sufficient conditions for the existence of an optimal solution consist of the typical requirements on linear systems that guarantee the existence of a limiting solution of the Riccati equation. Additional requirements arising from the nonlinear elements are the ergodicity of the nonlinear dynamics and the existence of a limit in the Feynman-Kac formula for the coefficients of the nonlinear part of the control. Results of a numerical experiment are presented and analyzed.