We believe that answering existence questions is an important component of a good design method. As it turned out, the then-new optimal control theory was well suited to many of the control problems that arose from the space program.
There were two main reasons for this:

1. The underlying assumptions of the WHK theory are that the plant has a known linear (possibly time-varying) description, and that the exogenous noises and disturbances impinging on the feedback system are stochastic in nature but have known statistical properties. Since space vehicles have dynamics that are essentially ballistic in character, it is possible to develop accurate mathematical models of their behavior. In addition, descriptions of external disturbances based on white noise are often appropriate in aerospace applications. Therefore, at least from a modelling point of view, the WHK theory and these applications are well suited to each other.

2. Many of the control problems from the space program are concerned with resource management. In the 1960s, aerospace engineers were interested in minimum fuel consumption problems such as minimizing the use of retrorockets. One famous problem of this type was concerned with landing the lunar excursion module with a minimum expenditure of fuel.
Performance criteria of this type are easily embedded in the WHK framework, which was specially developed to minimize quadratic performance indices. Once the designer has settled on a quadratic performance index to be minimized, the WHK procedure supplies the unique optimal controller without any further intervention from the designer. In the euphoria that followed the introduction of optimal control theory, it was widely believed that the control system design problem had essentially been solved.

(Footnote 1: Linear Quadratic Gaussian (LQG) optimal control is the term now most widely used for this type of optimal control.)
The widespread success of the WHK theory in aerospace applications soon led to attempts to apply optimal control theory to more mundane industrial problems. In contrast to experience with aerospace applications, it soon became apparent that there was a serious mismatch between the underlying assumptions of the WHK theory and industrial control problems. Accurate models are not routinely available, and most industrial plant engineers have no idea as to the statistical nature of the external disturbances impinging on their plant. After a ten-year re-appraisal of the status of multivariable control theory, it became clear that an optimal control theory that deals with the question of plant modelling errors and external disturbance uncertainty was required.
For such a framework to be useful, it must have the following properties:

1. It must be capable of dealing with plant modelling errors and unknown disturbances.
2. It should represent a natural extension to existing feedback theory, as this will facilitate an easy transfer of intuition from the classical setting.
3. It must be amenable to meaningful optimization.
4. It must be able to deal with multivariable problems.
We have carefully selected these in order to minimize the amount of background mathematics required of the reader in these early stages of study; all that is required is a familiarity with the maximum modulus principle.
Roughly speaking, this principle says that if a function f of a complex variable is analytic inside and on the boundary of some domain D, then the maximum of the modulus (magnitude) of f occurs on the boundary of D. For example, if a feedback system is closed-loop stable, the maximum of the modulus of the closed-loop transfer function over the closed right half of the complex plane will always occur on the imaginary axis. The transfer function g represents a nominal linear, time-invariant model of an open-loop system, and the transfer function k represents a linear, time-invariant controller to be designed.
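A small numerical sketch shows the principle at work; the transfer function here is an illustrative choice of my own, not one from the text. Sampling |T(s)| over a grid in the closed right-half plane, the maximizer lands on the imaginary axis:

```python
import numpy as np

# Stable closed-loop transfer function (illustrative choice): T(s) = 1/(s^2 + s + 1)
T = lambda s: 1.0 / (s**2 + s + 1.0)

# Sample the closed right-half plane on a grid.
sigma = np.linspace(0.0, 10.0, 400)      # Re(s) >= 0
omega = np.linspace(-10.0, 10.0, 801)    # Im(s)
S, W = np.meshgrid(sigma, omega)
grid = S + 1j * W

mag = np.abs(T(grid))
i, j = np.unravel_index(np.argmax(mag), mag.shape)
s_max = grid[i, j]

# The maximizer lies on the boundary Re(s) = 0 (the imaginary axis),
# as the maximum modulus principle predicts for a stable T.
print(s_max.real)  # 0.0
```

The same experiment with a grid excluding the imaginary axis would find a strictly smaller maximum, which is the content of the principle for stable closed-loop transfer functions.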
The fourth property is the crucial submultiplicative property, which is central to all the robust stability and robust performance work to be encountered in this book. Note that not all norms have this fourth property.

Thus, when the plant is stable and there are no performance requirements other than stability, the optimal course of action is to use no feedback at all!
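As a numerical illustration (the matrices and norms here are my own choices, not from the text), the induced 2-norm can be checked for submultiplicativity, while the entrywise max norm, which is a perfectly good vector-space norm on matrices, fails it:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_norm(M):
    # Induced 2-norm: the largest singular value of M.
    return np.linalg.svd(M, compute_uv=False)[0]

# Submultiplicative property: ||AB|| <= ||A|| * ||B|| for induced norms.
for _ in range(100):
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    assert spectral_norm(A @ B) <= spectral_norm(A) * spectral_norm(B) + 1e-12

# Not every matrix norm is submultiplicative: the entrywise max norm fails.
A = np.ones((2, 2))
maxnorm = lambda M: np.abs(M).max()
print(maxnorm(A @ A), maxnorm(A) * maxnorm(A))  # 2.0 > 1.0
```

The failure case is the standard one: for the all-ones 2x2 matrix, the product has max entry 2, but the product of the norms is 1.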
We will return to the analysis of this type of problem in Section 1.
In order to lay the groundwork for our analysis of optimal disturbance attenuation and optimal stability robustness, we consider the optimal command response problem. This problem is particularly simple because it contains no feedback. The conditions given in 1.
A general solution to problems of this type is complicated and was found early in the twentieth century. Once the optimal error function is found, f follows by back substitution using 1. We shall now consolidate these ideas with a numerical example. Example 1. Back substitution using 1. Interpolating a single data point is particularly simple because the optimal interpolating function is a constant. Our next example, which contains two interpolation constraints, shows that the general interpolation problem is far more complex.
Notice that the optimal interpolating function is a constant multiplied by a stable transfer function with unit magnitude on the imaginary axis, which is a general property of optimal interpolating functions. We conclude from this example that an increase in the number of interpolation constraints makes the evaluation of the interpolating function much harder.
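The unit-magnitude property is easy to check numerically. A minimal sketch, using an illustrative all-pass (inner) factor of my own choosing with a right-half-plane zero at s = 1:

```python
import numpy as np

# An inner (all-pass) factor: stable pole at s = -1, mirror-image zero at s = 1.
b = lambda s: (s - 1.0) / (s + 1.0)

# On the imaginary axis s = j*omega, numerator and denominator have equal
# magnitude sqrt(omega^2 + 1), so |b(j*omega)| = 1 for every omega.
omega = np.linspace(-50.0, 50.0, 1001)
mag = np.abs(b(1j * omega))
print(mag.min(), mag.max())  # both 1.0 (up to rounding)
```

Functions of this form (constants times products of such factors) are exactly the scalar rational inner functions that appear as optimal interpolants in the examples.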
We shall say more about this in Chapter 6. In the system illustrated in Figure 1. Before continuing, we need to introduce the notion of internal stability and discover the properties required of q in order that the resulting controller be internally stabilizing. If the feedback system in Figure 1. We therefore conclude that the system in Figure 1. Lemma 1.
It is immediate from Figure 1. This gives the following result: Lemma 1. Then k is an internally-stabilizing controller for the feedback loop in Figure 1. For the loop to be internally stable, we need to ensure that q is stable. The controller is simply the negative of the inverse of the plant together with an arbitrarily high gain factor.
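For a stable plant, the role of the parameter q can be sketched symbolically. Assuming one common sign convention (k = q/(1 - gq) in a negative unity-feedback loop; the plant below is my own illustrative choice, not the text's), the sensitivity function comes out affine in q, which is what makes the subsequent optimization tractable:

```python
import sympy as sp

s = sp.symbols('s')
g = 1 / (s + 1)             # stable plant (illustrative)
q = sp.Function('q')(s)     # free stable parameter (symbolic placeholder)

# Parameterization of stabilizing controllers for a stable plant
# (one common sign convention): k = q / (1 - g*q).
k = q / (1 - g * q)

# Sensitivity S = 1/(1 + g*k) collapses to 1 - g*q: affine in q.
S = sp.simplify(1 / (1 + g * k))
print(S)
```

Because every closed-loop transfer function of interest is affine in q, choosing the best stabilizing controller reduces to choosing the best stable q, with no stability side-constraints beyond the stability of q itself.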
Linear Robust Control, by Michael Green and David J. N. Limebeer.
This is not a surprising conclusion, because high-gain feedback is the classical route to disturbance attenuation. It follows from 1. that the presence of a right-half-plane zero makes broadband disturbance attenuation impossible. If some spectral information is available about the disturbance d, one may be able to improve the situation by introducing frequency response weighting. So far we have only established this fact for the stable plant case, but it is true in general.
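The obstruction can be seen in a small symbolic example; the plant and controller below are illustrative choices of mine, not from the text. A right-half-plane zero of the plant pins the sensitivity function to 1 at that point, whatever the controller does:

```python
import sympy as sp

s = sp.symbols('s')

# Plant with a right-half-plane zero at s = 1 (illustrative example).
g = (s - 1) / (s + 2)

# Some particular stabilizing controller; any internally stabilizing
# choice gives the same value of S at the plant zero.
k = sp.Rational(1, 2)

# Sensitivity S = 1/(1 + g*k) maps the disturbance d to the output.
# Since g(1) = 0, we get S(1) = 1: no attenuation at the zero.
S = 1 / (1 + g * k)
print(S.subs(s, 1))  # 1
```

This is precisely an interpolation constraint of the kind discussed above: internal stability forces S to take the value 1 at every right-half-plane zero of the plant, so |S| cannot be made small over a band containing that frequency region.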
This is a classical Nevanlinna-Pick interpolation problem, and satisfaction of the interpolation constraints guarantees the internal stability of the feedback system.

As a consequence, the design process is complicated by the fact that the controller has to be designed to operate satisfactorily for all plants in some model set. To set this problem up in a mathematical optimization framework, we need to decide on some representation of the model error.
A block diagram of the set-up under consideration is given in Figure 1. The mere stability of q is not enough in the unstable plant case. As we will now show, it is possible to reformulate the problem so that there is one interpolation constraint, rather than two, per right-half-plane pole. If q is stable, so is the transformed parameter. Substitution into 1. Our second robust stabilization example shows that it is impossible to robustly stabilize a plant with a right-half-plane pole-zero pair that almost cancels.
We expect such a robust stability problem to be hard, because problems of this type have an unstable mode that is almost uncontrollable. Consider the case of 1. The constraints appear as interpolation constraints, and stable closed-loop transfer functions that satisfy the interpolation data may be found using the classical Nevanlinna-Schur algorithm. This approach to control problems is due to Zames and is developed in Zames and Francis and Kimura.
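Although the Nevanlinna-Schur construction itself is not pursued here, the feasibility test behind Nevanlinna-Pick interpolation is easy to compute. A sketch with illustrative data of my own choosing: a function f, analytic and bounded by gamma in the right-half plane with f(z_i) = w_i, exists exactly when the Pick matrix is positive semidefinite, and the smallest achievable bound is a generalized eigenvalue:

```python
import numpy as np

# Illustrative interpolation data: f(z_i) = w_i with Re(z_i) > 0.
z = np.array([1.0, 2.0])    # right-half-plane interpolation points
w = np.array([1.0, -1.0])   # required values

# Pick matrix P_ij = (gamma^2 - w_i*conj(w_j)) / (z_i + conj(z_j)) >= 0
# iff gamma^2 * A - B >= 0, with A and B as below.
D = z[:, None] + np.conj(z)[None, :]
A = 1.0 / D                                    # Cauchy matrix, positive definite
B = (w[:, None] * np.conj(w)[None, :]) / D

# Smallest feasible gamma: largest generalized eigenvalue of (B, A).
lam = np.linalg.eigvals(np.linalg.solve(A, B)).real
gamma_min = np.sqrt(lam.max())
print(gamma_min)
```

For these two points with opposite required values, gamma_min comes out well above max|w_i|, quantifying the earlier observation that adding interpolation constraints makes the problem markedly harder.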
In our examples we have exploited the fact that there is no need for the Nevanlinna algorithm when there is only one interpolation constraint. We will not be discussing the classical Nevanlinna-Pick-Schur theory of analytic interpolation in this book. There are several reasons for this:

(a) Interpolation-theoretic methods become awkward and unwieldy in the multivariable case and in situations where interpolation with multiplicities is required; if there are several interpolation constraints associated with a single right-half-plane frequency point, we say that the problem involves interpolation with multiplicities.

(b) Computational issues become important in realistic design problems in which one is forced to deal with systems of high order.

(c) The state-space methods we will develop are capable of treating linear time-varying problems.

To see this, we cite one of many possible problems involving robust stabilization with performance. A large part of the remainder of the book will be devoted to the development of a comprehensive theory for multivariable, multitarget problems. The solutions to the problems we have considered so far have a common theme. With the exception of the robust stabilization of an integrator, the optimal closed-loop transfer function has constant magnitude on the imaginary axis. It turns out that this is a general property of the solutions of all single-input, single-output problems that are free of imaginary-axis interpolation constraints.
In each case, the optimal closed-loop transfer function is a scalar multiple of a rational inner function. Problem 1. The function w is a stable, minimum-phase frequency weight. Conclude from this that multivariable problems have vector-valued interpolation constraints. What are they?

There are several reasons for the continued success of classical frequency-response methods for dealing with single-loop problems and with multiloop problems arising from some multi-input-multi-output (MIMO) plant.
Firstly, there is a clear connection between frequency response plots and data that can be experimentally acquired. Thirdly, their graphical nature provides an important visual aid that is greatly enhanced by modern computer graphics.
Unfortunately, these classical techniques can falter on MIMO problems that contain a high degree of cross-coupling between the controlled and measured variables. In order to design controllers for MIMO systems using classical single-loop techniques, one requires decomposition procedures that split the design task into a set of single-loop problems that may be regarded as independent.
By invoking a variational argument, Wiener showed that certain design problems involving quadratic integral performance indices may be solved analytically. It turned out that the solution involved an integral equation which he had studied ten years earlier with E. Hopf; thus the term Wiener-Hopf optimization. In addition, because of their optimization properties, the designer is never left with the haunting thought that a better solution might be possible.
The key observation was that the solution of the Wiener-Hopf equation, and hence the optimal control law, may be obtained from the solution of a quadratic matrix equation known as a Riccati equation.
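As a sketch of this computational route (the double-integrator plant and the weights below are illustrative choices of mine; solve_continuous_are is SciPy's algebraic Riccati equation solver), the optimal state-feedback gain drops out of the Riccati solution:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR via the algebraic Riccati equation
#   A'X + XA - X B R^{-1} B' X + Q = 0
# for an illustrative double-integrator plant with unit weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ X)   # optimal state-feedback gain, u = -K x

# Sanity checks: the Riccati residual vanishes and the closed loop is stable.
residual = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
print(np.abs(residual).max())                    # ~0
print(np.linalg.eigvals(A - B @ K).real.max())   # < 0
```

For this particular plant the Riccati solution is known in closed form (X has sqrt(3) on the diagonal and 1 off it), which makes the example easy to verify by hand; in general the same quadratic matrix equation is solved numerically, which is exactly the observation the text credits for making the optimal control law computable.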