5R19. Optimal Control: An Introduction. - A Locatelli (Dept di Elettronica e Informazione, Politecnico di Milano, Piazza L da Vinci 32, Milano, 20133, Italy). Birkhauser Verlag AG, Basel, Switzerland. 2001. 294 pp. ISBN 3-7643-6408-4.

Reviewed by S Sieniutycz (Dept of Chem Eng, Fac of Chem Eng, Warsaw Univ of Tech, 1 Warynskiego St, Warszawa, 00-645, Poland).

Optimization is the collective process of finding the set of conditions required to achieve the best result from a given situation. Its teaching is frequently connected with control science, a field that has attained a high level of competence in the advanced design of practical devices, complex industrial systems, robotics, and flying objects. Extensive research in dynamic optimization, especially with reference to robust control problems, has caused it to be regarded as a valuable source of numerous useful, powerful, and flexible tools for both engineers and scientists. The link between optimal control and the calculus of variations, an older though still vital discipline, supports the view that many variational techniques are particularly well suited to nonlinear solutions of complex control problems. Therefore, even though the first breakthrough discoveries in optimal control were published more than 40 years ago, novel treatments of the fundamentals of classical optimal control theory remain important and of immediate interest. This book is a good example of accomplishing that task. It is at a high scientific level and represents an integrated view of the discipline, yet its choice of topics, the relative weight given to them, and the nature of the illustrative examples reflect the author’s commitment to effective teaching. The book is designed for both undergraduate and graduate students who have already been exposed to basic linear system and control theory and have the calculus background usually found in undergraduate engineering curricula. Graduates and teachers will also benefit from using the book.

Almost any problem in the analysis, operation, and design of practical processes and industrial operations, and many associated problems such as production planning, can in its final stage be reduced to determining the largest or smallest value of a function or functional. Optimization of an arbitrary system requires a model of the controlled system, an evaluation criterion called the performance index, and constraints (those not yet accounted for by the model). The constraints can be of different sorts and may be given in diverse forms. For dynamical problems, a typical optimization model consists of an integral functional to be maximized or minimized, a set of constraining differential equations, and local constraints imposed on control and/or state variables. The task of optimization is then to find the best dynamics in terms of the best control functions and the corresponding optimal trajectory.
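In commonly used notation (the symbols here are generic, not necessarily the book's), such a dynamic optimization problem in Bolza form can be written as:

```latex
\begin{aligned}
\min_{u(\cdot)}\quad & J = m\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} l\bigl(x(t),u(t),t\bigr)\,dt \\
\text{subject to}\quad & \dot{x}(t) = f\bigl(x(t),u(t),t\bigr), \qquad x(t_0)=x_0, \qquad u(t)\in U ,
\end{aligned}
```

where $m$ is the terminal cost, $l$ the running cost, $f$ the system dynamics, and $U$ the set of admissible controls.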

The book is composed of two parts: the first is devoted to global methods (sufficient optimality conditions), whereas the second deals with variational methods of the first order (necessary conditions) or second order (locally sufficient conditions). The first part develops the basis of the Hamilton-Jacobi theory that provides sufficient conditions for global optimality, along with its most essential accomplishments, namely the solution of the Linear Quadratic and Linear Quadratic Gaussian problems. Some attention is given to the Riccati equations, which play an essential role in these problems. The second part of the book begins with the presentation of Pontryagin’s Maximum Principle, which represents the first-order variational approach and provides both powerful and elegant necessary conditions encompassing a wide class of complex problems. The second-order variational approach is displayed next and is shown to be capable of ensuring, under suitable conditions, the local optimality of solutions derived from the Maximum Principle.
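For a minimization problem of the Bolza type, the sufficient condition at the heart of Hamilton-Jacobi theory can be stated (again in generic notation, not necessarily the book's) through the Hamilton-Jacobi-Bellman equation for the optimal performance function $V(x,t)$:

```latex
-\frac{\partial V}{\partial t}(x,t) \;=\; \min_{u \in U}\Bigl[\, l(x,u,t) + \frac{\partial V}{\partial x}(x,t)^{\!\top} f(x,u,t) \Bigr],
\qquad V\bigl(x,t_f\bigr) = m(x).
```

A sufficiently smooth solution $V$ of this equation provides global sufficient conditions: any control attaining the pointwise minimum is optimal.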

The Introduction to the book concisely describes the standard optimal control problem along with its basic features. Numerous examples displaying typical and basic formulations are given in Chapter 1 (Introduction). Typical problems considered are the rendezvous problem, the positioning (or transfer) problem, and the attitude control problem with minimum fuel consumption. The presentation, in terms of the definitions and analyses involved, is both rigorous and lucid.

Chapter 2, with a variety of examples, does a particularly elegant job of laying out results of a global nature in the form of Hamilton-Jacobi theory for functionals of the Bolza type.

Chapter 3 discusses the linear-quadratic problem for Bolza functionals in the cases of finite and infinite control horizons. Its special case, discussed in detail, is the optimal regulator problem, in which the coefficients are constant. Again, many suitable and illuminating examples are given. A minor flaw here is that most of them come from the realm of electric circuit theory and lumped-system mechanics, but they are sufficiently simple and attractive to appeal to readers from different fields. Stability properties of the optimal regulator, along with its robustness properties, are also analyzed in detail. An inverse optimal problem is also formulated and discussed: it consists of finding, for a given system and control law, a performance index with respect to which that control policy is optimal. The restriction to linear dynamics with a quadratic performance index is mandatory in this case, because the nonlinear problem is extremely difficult to solve and requires the satisfaction of complex (Helmholtz) conditions known from the calculus of variations.

Chapter 4 considers Linear Quadratic Gaussian problems. The dynamical equations of this chapter describe a stationary, multidimensional process driven by Gaussian white noise with nonzero mean; the initial state is an n-dimensional Gaussian random variable. The uncertainty of the system is specified, and the problems of optimal estimation of the system state and of its optimal stochastic control are considered. In particular, the Kalman filter is treated and associated examples are displayed. Chapter 5 deals with Riccati equations in differential and algebraic forms, which play a particular role in these problems.
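As a minimal illustration of the regulator problem and the role of the Riccati equation (my own sketch, not an example from the book): for the scalar plant x' = a*x + b*u with cost integral of (q*x^2 + r*u^2), the stabilizing root of the algebraic Riccati equation yields the optimal state-feedback gain in closed form.

```python
import math

def lqr_scalar(a, b, q, r):
    """Solve the scalar algebraic Riccati equation
        2*a*p - (b**2 / r) * p**2 + q = 0
    for its stabilizing (positive) root p, and return (p, k)
    with the optimal feedback gain k = b*p/r, so that the
    closed-loop dynamics a - b*k are guaranteed stable."""
    p = r * (a + math.sqrt(a**2 + (b**2) * q / r)) / b**2
    k = b * p / r
    return p, k

# Unstable plant x' = x + u with q = r = 1:
p, k = lqr_scalar(1.0, 1.0, 1.0, 1.0)
a_closed = 1.0 - 1.0 * k   # closed-loop pole, negative by construction
```

The closed-loop pole equals -sqrt(a**2 + b**2*q/r), which makes the stability property of the optimal regulator, discussed in Chapter 3, explicit in this simple case.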

Pontryagin’s Maximum Principle, presented in Chapter 6, opens the second part of the book. The Hamiltonian and the auxiliary system of adjoint equations are defined, along with simple constraints. The first control problems considered require that no complex constraints be present on the state and control variables. Examples are given that display necessary optimality conditions for a wide class of complex problems. Complex constraints, imposed on state and control, are treated next. Integral constraints and global instantaneous (equality and inequality) constraints are also considered; the latter are adjoined to the Hamiltonian, and all previous results are exploited with reference to the modified Hamiltonian function. The role of isolated equality constraints is investigated. Singular optimal control problems are considered and exemplified as well, along with time-optimal control problems. Quite a comprehensive review of the second-variation method is given in Chapter 7, where issues such as local sufficient conditions and neighboring optimal control are considered. To keep the book self-contained, suitable mathematical Appendices are included.
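In generic notation (not necessarily the book's), the first-order conditions of the Maximum Principle for a Bolza minimization problem couple the state and adjoint variables through the Hamiltonian:

```latex
\begin{aligned}
&H(x,u,\lambda,t) = l(x,u,t) + \lambda^{\top} f(x,u,t),\\
&\dot{\lambda}(t) = -\frac{\partial H}{\partial x}\bigl(x^{o}(t),u^{o}(t),\lambda(t),t\bigr),\\
&H\bigl(x^{o}(t),u^{o}(t),\lambda(t),t\bigr) = \min_{u\in U} H\bigl(x^{o}(t),u,\lambda(t),t\bigr),
\end{aligned}
```

where $(x^{o},u^{o})$ denotes an optimal state-control pair; the historical name "maximum" principle stems from the equivalent statement in which $-H$ is maximized.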

One of the key concepts of contemporary optimal control, and its distinguishing feature, is that it can take account of dynamic and pathwise constraints. The book contributes well to the understanding of the theory of Pontryagin’s maximum principle and the relationship between the optimal performance function and the Hamilton-Jacobi equation. The analysis is largely self-contained and provides a unified perspective on dynamic optimization problems that lie beyond the realm of trivial analytical and computational techniques. Moreover, this analysis includes many of the unifying properties and simplifications discovered in recent research. Its didactic line follows recent developments in optimal control, aimed at extending the range of application of available necessary optimality conditions and stressing similarities with, rather than differences from, the calculus of variations. In fact, the book shows that it is possible to derive, in the optimal control context, optimality conditions of remarkable generality by using the mathematical apparatus of solutions to the Hamilton-Jacobi equation. One highlight of these approaches is the clarification of the relationship between the optimal performance and the Hamilton-Jacobi equation. Another important achievement of the book is to make the contemporary results of optimal control theory accessible to a wide audience without requiring extensive prior knowledge of optimization theory.

The number of examples and their didactic value are impressive. A flaw is the limitation in scope resulting from the omission or reduction of certain topics. Notable among these are: 1) the role of the Kuhn-Tucker theorem, convex sets, and penalty approaches for global optimality; 2) dynamic programming, Bellman’s functional equations, and the enunciation of relations between optimal trajectories and cost surfaces in this context; and 3) numerical integration schemes for ordinary differential equations (ODEs) constructed so that a qualitative property of the ODE solution is exactly preserved, eg, Hamiltonian-structure-preserving integration schemes (symplectic integrators and invariants). The latter topic accords with present tendencies in numerical physics.
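To illustrate the last point (a sketch of mine, not material from the book): the semi-implicit, or symplectic, Euler scheme applied to the harmonic oscillator with H = (p^2 + q^2)/2 keeps the energy error bounded for arbitrarily long runs, whereas the energy of the ordinary explicit Euler scheme drifts systematically.

```python
def symplectic_euler(q, p, h, steps):
    """Semi-implicit Euler for the harmonic oscillator
    (H = (p**2 + q**2)/2): update the momentum with the old
    position, then the position with the *new* momentum.
    The resulting map is area-preserving (symplectic), so the
    energy error oscillates but stays bounded, of order h."""
    for _ in range(steps):
        p -= h * q          # kick: uses the old position
        q += h * p          # drift: uses the updated momentum
    return q, p

def explicit_euler(q, p, h, steps):
    """Ordinary explicit Euler for comparison: both updates use
    old values, and the energy grows by a factor (1 + h**2)
    at every step."""
    for _ in range(steps):
        q, p = q + h * p, p - h * q
    return q, p

def energy(q, p):
    return 0.5 * (p**2 + q**2)

q0, p0 = 1.0, 0.0                                   # energy 0.5
qs, ps = symplectic_euler(q0, p0, h=0.01, steps=10_000)
qe, pe = explicit_euler(q0, p0, h=0.01, steps=10_000)
```

After 10,000 steps the symplectic energy remains within a fraction of a percent of its initial value, while the explicit-Euler energy has grown by a factor of roughly e.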

The book may be used by a relatively broad audience comprising postgraduates, researchers, and professionals in engineering, process control, system science, and applied mathematics. It is written clearly and represents a readable, self-contained text with suitable basic references and a good subject index. It also has simple, clear, good-quality figures. The text represents a step forward in teaching optimal control by placing a collection of nontrivial problems in a cohesive framework, displayed at a level of detail that a student or teacher willing to invest some effort can follow.

In summary, Optimal Control: An Introduction represents a valuable and rigorous treatment of remarkable didactic value. The book is well written and well edited in terms of organization, technical writing, and the use of illustrations. It deserves space on the bookshelf of any teacher and student interested in optimal control and related fields.