In contrast to the frequency-domain analysis of classical control theory, modern control theory utilizes the time-domain state-space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, these variables are expressed as vectors, and the differential and algebraic equations are written in matrix form (the latter being possible only when the dynamical system is linear).
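For a linear time-invariant system, the state-space representation takes the standard form

$$ \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t), $$

where $x$ is the state vector, $u$ the input vector, $y$ the output vector, and $A$, $B$, $C$, $D$ are constant matrices of appropriate dimensions.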
Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. [2] Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian. [3]
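In Pontryagin's formulation, the control Hamiltonian combines the running cost with the dynamics through a costate vector; a standard statement (with notation chosen here for illustration) is

$$ H\big(x(t), u(t), \lambda(t), t\big) = \lambda^{\mathsf T}(t)\, f\big(x(t), u(t), t\big) + L\big(x(t), u(t), t\big), $$

where $\dot{x} = f(x, u, t)$ describes the system dynamics, $L$ is the running cost, and $\lambda$ is the costate (adjoint) vector. The necessary condition is that the optimal control maximize (or, under the opposite sign convention, minimize) $H$ along the optimal trajectory.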
The examples thus far have shown continuous-time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is primarily concerned with discrete-time systems and solutions.
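As a minimal sketch of the discrete-time setting (the matrices and feedback gain below are illustrative assumptions, not taken from any source above), a linear system evolves by a matrix recursion that a digital controller can execute once per sample:

```python
import numpy as np

# Discrete-time linear system: x[k+1] = A x[k] + B u[k]
# Illustrative plant: a double integrator sampled at dt = 0.1
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[1.0, 1.5]])    # an assumed stabilizing state-feedback gain

x = np.array([[1.0], [0.0]])  # initial state
for k in range(50):
    u = -K @ x                # state feedback u[k] = -K x[k]
    x = A @ x + B @ u         # advance the state one sample
print(x.ravel())              # state after 50 steps, decayed toward the origin
```

For these illustrative numbers the eigenvalues of $A - BK$ lie inside the unit circle, so the closed loop is stable and the state decays toward the origin.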
A Carathéodory-π solution can be applied towards the practical stabilization of a control system. [6] [7] It has been used to stabilize an inverted pendulum, [6] control and optimize the motion of robots, [7] [8] slew and control the NPSAT1 spacecraft, [3] and produce guidance commands for low-thrust space missions.
One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below. LQR controllers possess inherent robustness with guaranteed gain and phase margin, [1] and they also are part of the solution to the LQG (linear–quadratic–Gaussian) problem.
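A minimal sketch of computing an LQR gain with SciPy's Riccati solver (the plant and cost weights below are illustrative placeholders):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant: x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost J = integral of (x'Qx + u'Ru) dt; Q, R chosen for illustration
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B'P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain: u = -K x with K = R^-1 B'P
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)
```

`solve_continuous_are` returns the stabilizing solution $P$ of the algebraic Riccati equation, from which the optimal feedback $u = -Kx$ follows.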
The maximum principle is widely regarded as a milestone in optimal control theory; its significance lies in the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem: rather than maximizing over a function space, the problem is converted to a pointwise optimization. [8]
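Concretely, the pointwise condition can be written (standard notation, assumed here) as

$$ u^*(t) = \arg\max_{u \in \mathcal{U}} H\big(x^*(t), u, \lambda(t), t\big) \quad \text{for each } t, $$

where $\mathcal{U}$ is the set of admissible control values: the search over an infinite-dimensional space of control trajectories reduces to a family of finite-dimensional optimizations, one per time instant.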
H∞ (i.e. "H-infinity") methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization.
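In a typical formulation (the notation here is assumed, not drawn from the text above), the designer minimizes the H∞ norm of the closed-loop transfer function $T_{zw}$ from disturbances $w$ to regulated outputs $z$ over all stabilizing controllers $K$:

$$ \min_{K\ \text{stabilizing}} \left\| T_{zw}(K) \right\|_{\infty}, \qquad \left\| T_{zw} \right\|_{\infty} = \sup_{\omega}\, \bar{\sigma}\big(T_{zw}(j\omega)\big), $$

where $\bar{\sigma}$ denotes the largest singular value. Keeping this norm small bounds the worst-case energy gain from disturbances to errors, which is the sense in which performance is "guaranteed".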
Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation. [2] [3] The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers.
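For reference, a common finite-horizon form of the HJB equation (a standard statement; the symbols here are chosen for illustration, with cost minimized rather than maximized) is

$$ \frac{\partial V}{\partial t}(x,t) + \min_{u \in \mathcal{U}} \left\{ L(x,u) + \frac{\partial V}{\partial x}(x,t)^{\mathsf T} f(x,u) \right\} = 0, \qquad V(x,T) = \Phi(x), $$

where $V$ is the value function (the optimal cost-to-go), $f$ the dynamics, $L$ the running cost, and $\Phi$ the terminal cost; the minimizing $u$ at each $(x, t)$ yields the optimal feedback control.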