87. Model Predictive Control
The general idea of computing control moves by solving an optimisation problem at each time step has become very popular, largely because a general version can be programmed once and made user friendly, so that a single program handles the intricacies of multivariable control.
In this notebook I will show how a single time step's move trajectory is calculated. We'll use the same system as we used for the Dahlin controller.
```python
import numpy
import scipy.signal
import scipy.optimize
import matplotlib.pyplot as plt
%matplotlib inline
```
We start with a linear model of the system
```python
# Numerator assumed to be [1] (unity gain); the original cell lost it in conversion.
G = scipy.signal.lti([1], [15, 8, 1])
```
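To get a feel for the dynamics, we can plot the open-loop step response. This is a minimal sketch, assuming the unity numerator for `G` noted above:

```python
# Sketch: open-loop step response of the assumed G = 1/(15 s^2 + 8 s + 1)
import scipy.signal
import matplotlib.pyplot as plt

G = scipy.signal.lti([1], [15, 8, 1])  # numerator [1] is an assumption
tstep, ystep = G.step()                # scipy chooses suitable time points
plt.plot(tstep, ystep)
plt.xlabel('Time')
plt.ylabel('y')
```

The response settles at 1, consistent with a unity steady-state gain.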
Our goal is to find out what manipulations must be made (changes to \(u\)) in order to get the system to follow a specific desired trajectory (which we will call \(r\) for the reference trajectory). We will allow the controller to make a certain number of moves. This is called the control horizon, \(M\). We will then observe the effect of this set of moves (called a “move plan”) for a time called the prediction horizon (\(P\)).
```python
M = 10       # Control horizon
P = 20       # Prediction horizon
DeltaT = 1   # Sampling interval
```
```python
tcontinuous = numpy.linspace(0, P*DeltaT, 1000)  # some closely spaced time points
tpredict = numpy.arange(0, P*DeltaT, DeltaT)     # discrete points over the prediction horizon
```
We choose a first-order setpoint response, similar to Direct Synthesis or a Dahlin controller.
```python
tau_c = 1
r = 1 - numpy.exp(-tpredict/tau_c)
```
For an initial guess we choose a step in \(u\).
```python
u = numpy.ones(M)
```
The initial state is zero.
```python
x0 = numpy.zeros(G.to_ss().A.shape[0])  # one entry per state
```
```python
def extend(u):
    """We optimise the first M values of u but we need P values for prediction"""
    return numpy.concatenate([u, numpy.repeat(u[-1], P - M)])
```
```python
def prediction(u, t=tpredict, x0=x0):
    """Predict the effect of an input signal"""
    t, y, x = scipy.signal.lsim(G, u, t, X0=x0, interp=False)
    return y
```
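Plotting the predicted response to the step guess against the reference shows how far off it is. A self-contained sketch of that comparison (repeating the definitions above, with the unity numerator for `G` again an assumption):

```python
import numpy
import scipy.signal
import matplotlib.pyplot as plt

M, P, DeltaT = 10, 20, 1                # control horizon, prediction horizon, sampling interval
tpredict = numpy.arange(0, P*DeltaT, DeltaT)
tau_c = 1
r = 1 - numpy.exp(-tpredict/tau_c)      # first-order reference trajectory

G = scipy.signal.lti([1], [15, 8, 1])   # numerator [1] is an assumption
x0 = numpy.zeros(G.to_ss().A.shape[0])

def extend(u):
    """Pad the M optimised moves out to P values by holding the last move."""
    return numpy.concatenate([u, numpy.repeat(u[-1], P - M)])

def prediction(u, t=tpredict, x0=x0):
    """Simulate the response of G to the input sequence u (zero-order hold)."""
    t, y, x = scipy.signal.lsim(G, u, t, X0=x0, interp=False)
    return y

u = numpy.ones(M)
y = prediction(extend(u))
plt.plot(tpredict, y, '-o', label='Response to step guess')
plt.plot(tpredict, r, label='Reference')
plt.legend()
```

The second-order system responds much more slowly than the first-order reference, so the step guess lags well behind it.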
```python
def objective(u, x0=x0):
    """Calculate the sum of squared errors for the control problem"""
    y = prediction(extend(u))
    return sum((r - y)**2)
```
This is the value of the objective for our step input:
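A self-contained sketch to compute that value, repeating the setup from above (with `G`'s unity numerator an assumption):

```python
import numpy
import scipy.signal

M, P, DeltaT = 10, 20, 1
tpredict = numpy.arange(0, P*DeltaT, DeltaT)
tau_c = 1
r = 1 - numpy.exp(-tpredict/tau_c)

G = scipy.signal.lti([1], [15, 8, 1])   # numerator [1] is an assumption
x0 = numpy.zeros(G.to_ss().A.shape[0])

def extend(u):
    return numpy.concatenate([u, numpy.repeat(u[-1], P - M)])

def prediction(u, t=tpredict, x0=x0):
    t, y, x = scipy.signal.lsim(G, u, t, X0=x0, interp=False)
    return y

def objective(u, x0=x0):
    return sum((r - prediction(extend(u)))**2)

val = objective(numpy.ones(M))
print(val)  # a few squared units of error, reflecting the sluggish step response
```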
Now we figure out the set of moves which will minimise our objective function.
```python
result = scipy.optimize.minimize(objective, u)
uopt = result.x
result.fun
```
Resample the discrete move plan to continuous time (effectively working out the zero-order hold values).
```python
ucont = extend(uopt)[((tcontinuous - 0.01)//DeltaT).astype(int)]
```
Plot the move plan and the output. Notice that we are getting exactly the output we want at the sampling times. At this point we have effectively recovered the Dahlin controller.
```python
def plotoutput(ucont, uopt):
    plt.figure()
    plt.plot(tcontinuous, ucont)
    plt.xlim([0, DeltaT*(P+1)])
    plt.figure()
    plt.plot(tcontinuous, prediction(ucont, tcontinuous), label='Continuous response')
    plt.plot(tpredict, prediction(extend(uopt)), '-o', label='Optimized response')
    plt.plot(tpredict, r, label='Set point')
    plt.legend()
```
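As a usage sketch, the whole procedure from model to plot can be run end to end (assumptions as before: unity numerator for `G`):

```python
import numpy
import scipy.signal
import scipy.optimize
import matplotlib.pyplot as plt

M, P, DeltaT = 10, 20, 1
tcontinuous = numpy.linspace(0, P*DeltaT, 1000)
tpredict = numpy.arange(0, P*DeltaT, DeltaT)
tau_c = 1
r = 1 - numpy.exp(-tpredict/tau_c)

G = scipy.signal.lti([1], [15, 8, 1])   # numerator [1] is an assumption
x0 = numpy.zeros(G.to_ss().A.shape[0])

def extend(u):
    return numpy.concatenate([u, numpy.repeat(u[-1], P - M)])

def prediction(u, t=tpredict, x0=x0):
    t, y, x = scipy.signal.lsim(G, u, t, X0=x0, interp=False)
    return y

def objective(u, x0=x0):
    return sum((r - prediction(extend(u)))**2)

def plotoutput(ucont, uopt):
    plt.figure()
    plt.plot(tcontinuous, ucont)
    plt.xlim([0, DeltaT*(P+1)])
    plt.figure()
    plt.plot(tcontinuous, prediction(ucont, tcontinuous), label='Continuous response')
    plt.plot(tpredict, prediction(extend(uopt)), '-o', label='Optimized response')
    plt.plot(tpredict, r, label='Set point')
    plt.legend()

result = scipy.optimize.minimize(objective, numpy.ones(M))
uopt = result.x
ucont = extend(uopt)[((tcontinuous - 0.01)//DeltaT).astype(int)]
plotoutput(ucont, uopt)
```

The optimised response sits on top of the reference at the sampling instants, which is the Dahlin-controller behaviour mentioned above.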
One of the reasons for the popularity of MPC is how easy it is to change its behaviour using weights in the objective function. Try using this definition instead of the simple one above and see if you can remove the ringing in the controller output.
```python
def objective(u, x0=x0):
    """Squared tracking error plus penalties on constraint violation,
    move size and end-point error (weights chosen by trial)."""
    y = prediction(extend(u))
    umag = numpy.abs(u)
    constraintpenalty = sum(umag[umag > 2])
    movepenalty = sum(numpy.abs(numpy.diff(u)))
    strongfinish = numpy.abs(y[-1] - r[-1])
    return sum((r - y)**2) + 0*constraintpenalty + 0.1*movepenalty + 0*strongfinish
```
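To see the effect of the move penalty, we can rerun the optimisation with the weighted objective. A self-contained sketch, with the same assumptions as before (unity numerator for `G`; the 0.1 move-penalty weight is the one suggested above):

```python
import numpy
import scipy.signal
import scipy.optimize

M, P, DeltaT = 10, 20, 1
tpredict = numpy.arange(0, P*DeltaT, DeltaT)
tau_c = 1
r = 1 - numpy.exp(-tpredict/tau_c)

G = scipy.signal.lti([1], [15, 8, 1])   # numerator [1] is an assumption
x0 = numpy.zeros(G.to_ss().A.shape[0])

def extend(u):
    return numpy.concatenate([u, numpy.repeat(u[-1], P - M)])

def prediction(u, t=tpredict, x0=x0):
    t, y, x = scipy.signal.lsim(G, u, t, X0=x0, interp=False)
    return y

def objective(u, x0=x0):
    y = prediction(extend(u))
    umag = numpy.abs(u)
    constraintpenalty = sum(umag[umag > 2])
    movepenalty = sum(numpy.abs(numpy.diff(u)))
    strongfinish = numpy.abs(y[-1] - r[-1])
    return sum((r - y)**2) + 0*constraintpenalty + 0.1*movepenalty + 0*strongfinish

result = scipy.optimize.minimize(objective, numpy.ones(M))
uopt = result.x
tracking_error = sum((r - prediction(extend(uopt)))**2)
total_move = sum(numpy.abs(numpy.diff(uopt)))
print(tracking_error, total_move)
```

The move penalty trades a little tracking accuracy for a calmer input signal; increasing the 0.1 weight smooths the moves further at the cost of slower tracking.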