Model Predictive Control
Principal Investigator: Professor Mark Cannon
Model predictive control is a powerful technique for optimizing the performance of constrained systems. Constraints are present in all control systems due to physical, environmental and economic limits on plant operation, and the systematic handling of constraints provided by predictive control strategies allows for significant improvements in performance over conventional control methodologies. The focus of the Oxford group's work is on control of nonlinear systems, stochastic systems, and systems with fast dynamics. The approach is applicable to problems in aerospace, robotics and process control, as well as to econometric systems encountered in financial engineering applications.
Model predictive control of stochastic systems
Most industrial control systems are subject to constraints and uncertainty, with uncertainty usually characterized in terms of stochastic variables. Predictive control is well developed in terms of handling constraints and bounded uncertainty, but there is currently no framework addressing problems involving stochastic objectives and stochastic constraints. This project aims to achieve a full extension of model predictive control (linear and nonlinear) to the stochastic case, in respect of both constraints and objective. Given the omnipresence of uncertainties of a stochastic nature, the results are of theoretical interest and will also have a significant impact in practical applications. An objective of the project is to demonstrate this through the study of (i) policy optimization in a sustainable development problem addressing power generation, energy cost and pollution; (ii) optimization of availability of power plant and grid capacity within a competitive power generation market.
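One common way to make a stochastic constraint computationally tractable is to approximate it by sampled scenarios. The sketch below illustrates this for a scalar system x+ = a*x + u + w with a chance constraint P(x+ <= x_max) >= 1 - eps; the dynamics, numerical values and grid search are illustrative placeholders, not the project's own formulation.

```python
# Scenario-based sketch of a stochastic MPC step for x+ = a*x + u + w,
# w ~ N(0, sigma^2).  The chance constraint P(x+ <= x_max) >= 1 - eps
# is replaced by its empirical estimate over sampled scenarios.
# All numbers here are illustrative only.
import random

random.seed(0)
a, sigma, x_max, eps = 0.9, 0.1, 1.0, 0.1
x = 0.8
scenarios = [random.gauss(0.0, sigma) for _ in range(500)]

def feasible(u):
    # empirical probability that the constraint holds under input u
    ok = sum(1 for w in scenarios if a * x + u + w <= x_max)
    return ok / len(scenarios) >= 1 - eps

def cost(u):
    # stochastic objective: sampled estimate of E[(x+)^2] + u^2
    return sum((a * x + u + w) ** 2 for w in scenarios) / len(scenarios) + u ** 2

# coarse grid search over admissible inputs stands in for the optimizer
candidates = [u / 100 for u in range(-100, 101)]
u_star = min((u for u in candidates if feasible(u)), key=cost)
```

In a receding-horizon scheme this optimization would be re-solved at each sampling instant with fresh state information.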
Dynamic compensation for input-affine nonlinear systems
Geometric control techniques such as input-output feedback linearization have traditionally been restricted to a small class of minimum-phase systems. The objective of this project is to overcome these restrictions by means of a synthetic output, defined via perturbations of the plant output aimed at maximizing the relative degree. Feedback linearization applied to the synthetic output yields a dynamic feedback law. Near-optimal performance with respect to the actual plant output can be obtained by resetting the controller initial conditions so as to minimize the gap between actual and synthetic outputs. Alternatively, the synthetic output may be used to define a convergence constraint or terminal control law suitable for embedding within a predictive controller.
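To fix ideas, the sketch below shows basic input-output feedback linearization on a simple input-affine system (not one from the project): for x1' = x2, x2' = -sin(x1) + u with output y = x1 (relative degree 2), the law u = sin(x1) + v cancels the nonlinearity so that y'' = v, and a linear outer loop then stabilizes the origin. The example system and gains are hypothetical.

```python
# Minimal feedback-linearization sketch for an input-affine system
# x1' = x2, x2' = -sin(x1) + u, output y = x1 (relative degree 2).
# The law u = sin(x1) + v yields the linear dynamics y'' = v.
import math

def simulate(steps=2000, dt=0.005, k1=4.0, k2=4.0):
    x1, x2 = 1.0, 0.0                   # illustrative initial condition
    for _ in range(steps):
        v = -k1 * x1 - k2 * x2          # linear outer loop on y'' = v
        u = math.sin(x1) + v            # cancels the -sin(x1) term
        # forward-Euler integration of the closed-loop dynamics
        x1, x2 = x1 + dt * x2, x2 + dt * (-math.sin(x1) + u)
    return x1, x2
```

The synthetic-output idea described above would replace y = x1 by a perturbed output chosen to raise the relative degree; the cancellation mechanism is the same.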
Fast on-line MPC optimization algorithms
To widen the applicability of nonlinear predictive control, this project develops strategies that perform the required online optimization within the exacting limits (e.g. several milliseconds) imposed by fast sampling applications, while retaining guarantees of feasibility and closed-loop stability. The project investigates the use of Euler-Lagrange and dynamic programming methods in order to avoid the exponential increase in computational complexity with horizon length exhibited by conventional MPC. An active set approach is employed, in which equality-constrained problems are split into a series of smaller problems, thus achieving a linear dependence of complexity on horizon length. The approach is applicable to: (i) min-max optimization of uncertain linear systems; (ii) successive optimization of nonlinear system performance.
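The linear-in-horizon complexity can be illustrated with the standard dynamic-programming (Riccati) solution of an equality-constrained LQ subproblem: one backward sweep of length N, then one forward rollout, so work grows linearly with N rather than cubically as when the horizon is condensed into one dense problem. The scalar dynamics and weights below are illustrative only, not the project's algorithm.

```python
# Riccati-recursion sketch: solve the horizon-N LQ problem for the
# scalar system x+ = a*x + b*u with stage cost q*x^2 + r*u^2.
# Backward pass and forward rollout are each O(N).
a, b, q, r, N = 1.1, 0.5, 1.0, 0.1, 30   # illustrative data (open-loop unstable)

def riccati_gains(a, b, q, r, N):
    p, gains = q, []
    for _ in range(N):                      # backward pass
        k = (a * b * p) / (r + b * b * p)   # optimal gain at this stage
        p = q + a * p * (a - b * k)         # cost-to-go update
        gains.append(k)
    return gains[::-1]                      # time-ordered gains k_0..k_{N-1}

gains = riccati_gains(a, b, q, r, N)
x = 1.0
for k in gains:                             # forward rollout
    x = (a - b * k) * x
```

The active-set strategy described above solves a sequence of such equality-constrained problems, changing the working set of constraints between solves.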
Receding horizon policy optimization for sustainable development
Policy optimization in sustainable development can be formulated as a multi-objective problem involving power generation, transport, agriculture, climate change, depletable and renewable resources. The aim is to model the impacts of instruments (such as public spending on R&D in different technologies) on measurable sustainability indicators (such as energy costs and emissions of pollutants), and to develop policies that maximize the likelihood of benefit while minimizing risk. This project predicts the probabilities of risk and benefit over a future horizon using stochastic models, and formulates receding horizon control laws through the optimization of stochastic objectives subject to stochastic constraints. The approach enables methods to be developed for ensuring acceptable closed-loop performance despite high levels of model uncertainty.
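A toy Monte Carlo version of this idea: estimate, over a future horizon, the probability of benefit (an indicator exceeding a target) and of risk (the indicator falling below a floor) for each setting of a policy parameter, then choose the parameter maximizing the benefit probability subject to a bound on the risk probability. The model, thresholds and one-parameter "spend" policy are entirely hypothetical placeholders for the project's sustainability indicators and instruments.

```python
# Monte Carlo sketch of a stochastic objective with a stochastic
# constraint over a receding horizon.  The indicator model
# y+ = y + 0.05*spend - 0.02 + noise and all thresholds are invented
# for illustration only.
import random

random.seed(1)

def horizon_probs(spend, horizon=10, samples=400):
    benefit = risk = 0
    for _ in range(samples):
        y = 1.0                                   # initial indicator level
        for _ in range(horizon):
            y += 0.05 * spend - 0.02 + random.gauss(0.0, 0.05)
        benefit += y >= 1.2                       # benefit event
        risk += y <= 0.8                          # risk event
    return benefit / samples, risk / samples

# maximize P(benefit) subject to P(risk) <= 0.05 over a policy grid
best = max((s / 10 for s in range(11)
            if horizon_probs(s / 10)[1] <= 0.05),
           key=lambda s: horizon_probs(s)[0])
```

In a receding-horizon implementation this optimization would be repeated as new indicator measurements arrive, which is what gives the closed loop its robustness to model uncertainty.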
Invariant sets and Model Predictive Control of nonlinear systems
Predictive control for linear systems is now a well-established discipline providing a range of techniques with guaranteed stability, feasibility and robustness. The extension of such techniques to nonlinear models can lead to impracticable solutions due to the non-convexity of the relevant optimization and its excessive computational burden. It is however possible to cast the prediction problem in an autonomous framework which enables the definition of computationally convenient ellipsoidal and/or polytopic invariant sets that ensure future feasibility. This framework can be used to develop predictive control algorithms with guaranteed stability and robustness.
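The basic certificate behind an ellipsoidal invariant set is simple to state: the set {x : x'Px <= 1} is invariant for closed-loop predictions x+ = Phi x precisely when Phi'P Phi - P is negative semidefinite. The sketch below checks this condition for a 2x2 example; the matrices and the choice P = I are illustrative, and a real design would compute P (e.g. via linear matrix inequalities) rather than assume it.

```python
# Check invariance of the ellipsoid {x : x'Px <= 1} under x+ = Phi x
# by testing whether M = Phi'P Phi - P is negative semidefinite.
# For a symmetric 2x2 matrix this reduces to trace(M) <= 0, det(M) >= 0.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def is_invariant(Phi, P):
    M = mat_mul(transpose(Phi), mat_mul(P, Phi))
    M = [[M[i][j] - P[i][j] for j in range(2)] for i in range(2)]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr <= 0 and det >= 0

Phi = [[0.8, 0.2],       # illustrative stable closed-loop map
       [0.0, 0.9]]
P = [[1.0, 0.0],         # illustrative shape matrix (unit ball)
     [0.0, 1.0]]
```

Sets certified this way can serve as terminal constraint sets guaranteeing recursive feasibility of the predictive controller.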
Interpolation in Model Predictive Control
Linear interpolation between a family of pre-computed predicted trajectories with desirable attributes has proved beneficial for the development of computationally efficient model predictive control algorithms. The reduction in computation is obtained by reducing the number of degrees of freedom in predictions that are needed to ensure feasibility and near-optimal performance. A possible approach to extending interpolation techniques to the case of nonlinear systems is through the use of feedback linearization, but this is limited to a restricted class of model dynamics. An autonomous predictive control formulation opens up different alternatives, and the development of suitable algorithms together with the relevant stability and robustness analysis forms the main focus of the project.
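The computational saving from interpolation can be seen in a toy linear example: with two precomputed feasible input sequences (one cautious, one aggressive), the online decision variable shrinks to a single interpolation weight lam in [0, 1], and for a linear system with convex constraints any interpolated sequence remains feasible. The scalar dynamics, sequences and costs below are illustrative placeholders.

```python
# Interpolation-MPC sketch: optimize one scalar weight between two
# precomputed input sequences instead of the full input sequence.
# Dynamics x+ = a*x + b*u and all numbers are illustrative only.
a, b, x0 = 0.9, 1.0, 2.0
u_safe = [-0.2, -0.2, -0.2, -0.2]    # cautious feasible sequence
u_fast = [-1.0, -0.6, -0.2, 0.0]     # aggressive feasible sequence

def rollout_cost(u_seq):
    # quadratic stage cost plus a terminal penalty on the final state
    x, cost = x0, 0.0
    for u in u_seq:
        cost += x * x + u * u
        x = a * x + b * u
    return cost + 10.0 * x * x

def interpolated(lam):
    return [(1 - lam) * us + lam * uf for us, uf in zip(u_safe, u_fast)]

# one degree of freedom: grid search over lam in [0, 1]
lam_star = min((l / 50 for l in range(51)),
               key=lambda l: rollout_cost(interpolated(l)))
```

Extending this mechanism to nonlinear models, where convex combinations of feasible trajectories are no longer automatically feasible, is precisely the difficulty the autonomous formulation described above is intended to address.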