[Figure: Optimal control problem benchmark (Luus) with an integral objective, inequality, and differential constraint.]

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. [1]
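For concreteness, a problem of the kind just described is often written in the following form; the symbols (running cost L, dynamics f, path constraint g) are assumed notation for this sketch, not taken from the excerpt:

```latex
\min_{u(\cdot)} \; J = \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\,dt
\quad\text{subject to}\quad
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr),\quad x(t_0) = x_0,\quad
g\bigl(x(t), u(t)\bigr) \le 0,
```

with an integral objective, a differential constraint (the state equation), and an inequality path constraint, matching the three ingredients of the benchmark named in the caption.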
Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. [2] Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian. [3]
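Using the notation of the sketch above (all symbols assumed), the Hamiltonian of optimal control couples the running cost to the dynamics through a costate (adjoint) vector lambda:

```latex
H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\mathsf{T}} f(x, u, t),
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},
\qquad
\dot{x} = \frac{\partial H}{\partial \lambda}.
```

The costate equation plays the role of the momentum equation in the classical-mechanics analogy mentioned above.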
Widely regarded as a milestone in optimal control theory, the maximum principle is significant because maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem; rather than maximizing over a function space, the problem is converted to a pointwise optimization. [8]
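Written out under the assumed notation above (with U denoting the admissible control set), the pointwise condition reads:

```latex
u^{*}(t) \in \arg\max_{u \in \mathcal{U}} H\bigl(x^{*}(t), u, \lambda^{*}(t), t\bigr)
\quad \text{for almost every } t \in [t_0, t_f],
```

so each instant of time contributes an ordinary finite-dimensional optimization over u rather than a search over whole control trajectories.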
The solution of the Hamilton–Jacobi–Bellman (HJB) equation is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation. [2] [3] The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers.
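Under the same assumed notation, and writing V for the value function with an assumed terminal cost Phi, a standard (minimization) form of the HJB equation is:

```latex
-\frac{\partial V}{\partial t}(x, t)
= \min_{u \in \mathcal{U}} \Bigl\{ L(x, u, t) + \nabla_x V(x, t)^{\mathsf{T}} f(x, u, t) \Bigr\},
\qquad V(x, t_f) = \Phi(x),
```

and the optimal control at each state is the u achieving the minimum in the braces.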
In the linear-quadratic problem, the optimal control solution is unaffected if zero-mean, i.i.d. additive shocks also appear in the state equation, so long as they are uncorrelated with the parameters of the state-transition matrix A and the control-response matrix B. If they are so correlated, however, the optimal control solution for each period contains an additional additive constant vector.
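The certainty-equivalence part of this statement can be seen in how the feedback gains are computed. The backward Riccati recursion below is a minimal sketch, assuming a finite-horizon, discrete-time LQ problem with cost matrices Q and R; the function and example matrices are illustrative, not from the excerpt. It uses only A, B, Q, and R, so the covariance of additive shocks never appears:

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Backward Riccati recursion for a finite-horizon, discrete-time
    LQ problem (illustrative sketch; A, B, Q, R, T are assumed inputs).
    The covariance of zero-mean additive shocks never enters, so the
    gains match the deterministic problem (certainty equivalence)."""
    P = Q.copy()                 # assumed terminal cost-to-go P_T = Q
    gains = []
    for _ in range(T):
        # Gain for u_t = -K_t x_t:  K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update:  P <- Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]           # gains ordered for t = 0, ..., T-1

# Example with assumed matrices: a discrete double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = lqr_gains(A, B, Q, R, T=50)[0]   # near-stationary gain at t = 0
```

If the shocks were instead correlated with the entries of A and B, this recursion would no longer suffice and, per the statement above, each period's control would pick up an additive constant vector.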
The method is based on the theory of orthogonal collocation, where the collocation points (i.e., the points at which the optimal control problem is discretized) are the Legendre–Gauss (LG) points. The approach used in the Gauss pseudospectral method (GPM) is to use a Lagrange polynomial approximation for the state that includes coefficients for the initial state plus the values of the state at the collocation points.
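A minimal sketch of the ingredients just described, assuming NumPy; the node count N and the helper function are illustrative choices, not part of the GPM specification:

```python
import numpy as np

# Legendre-Gauss (LG) points on (-1, 1): roots of the degree-N Legendre
# polynomial, obtained here from NumPy's Gauss-Legendre quadrature rule.
N = 5                                            # assumed node count
lg_points, lg_weights = np.polynomial.legendre.leggauss(N)

def lagrange_basis(tau, nodes):
    """Evaluate the Lagrange basis polynomials on `nodes` at point(s) `tau`.
    In the GPM the state is expanded in such polynomials."""
    tau = np.atleast_1d(tau).astype(float)
    L = np.ones((len(nodes), tau.size))
    for j, xj in enumerate(nodes):
        for m, xm in enumerate(nodes):
            if m != j:
                L[j] *= (tau - xm) / (xj - xm)
    return L

# Support points for the state approximation: the initial point -1 plus
# the N LG points, matching "coefficients for the initial state plus
# the values of the state at the collocation points" above.
support = np.concatenate(([-1.0], lg_points))
basis_mid = lagrange_basis(0.0, support)   # basis values at tau = 0
```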
Hamilton–Jacobi–Bellman equation – An optimality condition in optimal control theory
Markov decision process – Mathematical model for sequential decision making under uncertainty
Optimal control theory – Mathematical way of attaining a desired output from a dynamic system
Optimal substructure – Property of a computational problem
In mathematical optimization, Zermelo's navigation problem, proposed in 1931 by Ernst Zermelo, is a classic optimal control problem in which a boat navigates on a body of water from a starting point to a destination point.
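A common statement of the dynamics for the planar case; the symbols V for boat speed, theta for heading, and (u, v) for the current are assumed notation for this sketch:

```latex
\min_{\theta(\cdot)} \; t_f
\quad\text{subject to}\quad
\dot{x} = V\cos\theta(t) + u(x, y),
\qquad
\dot{y} = V\sin\theta(t) + v(x, y),
```

with the boat required to travel from the given start point to the given destination in minimum time, steering only through the heading angle theta.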