Optimal control theory is widely used in science and engineering. It is a mathematical optimization method for deriving control policies. The theory is largely due to Lev Pontryagin and his collaborators in the former Soviet Union, and to the American Richard Bellman. Its general aim is to determine a control law that steers a dynamical system so that a chosen performance criterion is optimized.
Optimal control theory is particularly useful for continuous-time optimization problems. It tackles a problem by determining a control law for a given system that achieves a specified criterion of optimality. An optimal control is described by a set of differential equations giving the paths of the control variables that minimize the cost functional, which is a function of the state and control variables. The theory makes use of Pontryagin's maximum principle, a necessary condition stating that an optimal control must maximize the Hamiltonian function H at each instant along the optimal trajectory. Alternatively, the optimal control can be derived from the Hamilton-Jacobi-Bellman equation, which provides a sufficient condition.
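The maximum principle can be illustrated on a toy problem (an assumed example, not taken from the text above): minimize the cost functional ∫₀¹ u(t)² dt subject to x′(t) = u(t), x(0) = 0, x(1) = 1. The Hamiltonian is H = p·u − u², maximizing over u gives u* = p/2, and the costate equation p′ = −∂H/∂x = 0 makes p constant. The sketch below finds that constant by shooting on the endpoint condition:

```python
# Pontryagin's maximum principle on a toy problem (illustrative sketch):
#   minimize  integral_0^1 u(t)^2 dt   subject to  x'(t) = u(t),
#   with x(0) = 0 and x(1) = 1.
# H = p*u - u^2, so the maximizing control is u* = p/2, and the
# costate p is constant since p' = -dH/dx = 0.  We pick p by
# bisection so that the endpoint constraint x(1) = 1 is met.

def shoot(p, steps=1000):
    """Integrate x' = u* = p/2 with Euler steps; return x(1) and the cost."""
    dt = 1.0 / steps
    x, cost = 0.0, 0.0
    u = p / 2.0                      # PMP-optimal control for this problem
    for _ in range(steps):
        cost += u * u * dt
        x += u * dt
    return x, cost

def solve(lo=-10.0, hi=10.0, tol=1e-9):
    """Bisect on the costate p until x(1) = 1 (x(1) grows with p)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x1, _ = shoot(mid)
        if x1 < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = solve()
x1, cost = shoot(p)
print(round(p, 6), round(x1, 6), round(cost, 6))  # p ≈ 2, so u ≈ 1, cost ≈ 1
```

The analytic answer is a constant control u = 1 with cost 1, which the shooting procedure recovers; the same structure (maximize H, integrate the costate, match boundary conditions) carries over to harder problems where no closed form exists.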
To build intuition for optimal control theory, the "driving your car through a hilly road" example is commonly used. Imagine driving a car on a hilly road in a straight line. The theory can determine how the driver should accelerate so as to minimize the total traveling time. In this case, the "system" consists of the car and the road, and the optimality criterion is the minimization of the traveling time. Such problems typically include constraints (e.g., limited fuel, speed limits). Another question might be how the car should travel so as to minimize its fuel consumption while being required to complete the course within a given time limit.
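A stripped-down version of this minimum-time problem can be sketched in code. Assume (as a simplification not made in the text) a flat road, a car modeled as a double integrator x″ = u with bounded acceleration |u| ≤ 1, and the goal of moving from rest at position 0 to rest at position d in minimum time. The time-optimal control is then bang-bang: full acceleration for the first half of the distance, full braking for the second, for a total time of 2·√d.

```python
# Toy minimum-time "driving" problem (illustrative assumptions:
# flat road, double-integrator car x'' = u with |u| <= 1).
# Bang-bang control: u = +1 until the halfway point, then u = -1.
# Total time is T = 2*sqrt(d) for a trip of length d from rest to rest.

import math

def simulate(d, steps_per_phase=100_000):
    t_switch = math.sqrt(d)              # time to switch from u=+1 to u=-1
    dt = t_switch / steps_per_phase
    x, v = 0.0, 0.0
    for u in (+1.0, -1.0):               # the two bang-bang phases
        for _ in range(steps_per_phase):
            x += v * dt + 0.5 * u * dt * dt  # exact update for constant u
            v += u * dt
    return x, v, 2 * t_switch

x, v, T = simulate(1.0)
print(round(x, 4), round(v, 4), round(T, 4))  # x ≈ 1, v ≈ 0, T = 2
```

Adding the constraints mentioned above (speed limits, fuel) changes the shape of the optimal control, which is exactly the kind of structure the theory characterizes.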
Another aspect of optimal control theory is solving for the costate, or shadow price, which gives the marginal value of relaxing the constraint on the state variable, i.e., of expanding the state. Once the costate is known, the optimal value of the control can be obtained as the solution of a differential equation conditional on that knowledge. With this strategy it is common to solve for regions that describe the optimal control and to isolate the actual choice values in time.
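The costate idea can be made concrete on a scalar linear-quadratic problem (an illustrative example with assumed parameters, not from the text): minimize ∫₀^∞ (q·x² + r·u²) dt subject to x′ = a·x + b·u. The value function is V(x) = P·x² with P the positive root of the scalar algebraic Riccati equation (b²/r)·P² − 2a·P − q = 0; with this normalization the costate is p(t) = 2P·x(t), the shadow price of the state, and the optimal feedback is u*(t) = −(b/r)·P·x(t).

```python
# Costate / shadow price in a scalar linear-quadratic problem
# (illustrative sketch with assumed coefficients a, b, q, r):
#   x' = a*x + b*u,  cost = integral of (q*x^2 + r*u^2) over [0, inf).
# The value function V(x) = P*x^2 solves the HJB equation when P is the
# positive root of (b^2/r)*P^2 - 2*a*P - q = 0; the costate is
# p(t) = V'(x(t)) = 2*P*x(t), and u* = -(b/r)*P*x.

import math

def solve_riccati(a, b, q, r):
    """Positive root of the scalar algebraic Riccati equation."""
    c = b * b / r
    return (2 * a + math.sqrt(4 * a * a + 4 * q * c)) / (2 * c)

a, b, q, r = 1.0, 1.0, 1.0, 1.0      # assumed example coefficients
P = solve_riccati(a, b, q, r)

# Under the optimal feedback, the closed loop x' = (a - b^2*P/r) x
# must be stable, i.e. have a negative coefficient:
closed_loop = a - (b * b / r) * P
print(round(P, 4), round(closed_loop, 4))  # P ≈ 2.4142, rate ≈ -1.4142
```

Here the differential-equation step is degenerate (the infinite-horizon Riccati equation is algebraic); on a finite horizon P(t), and hence the costate, would itself evolve by a differential Riccati equation integrated backward in time.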