Stochastic model predictive control — how does it work? Audio slides - YouTube
Welcome to this five-minute presentation of the paper "Stochastic model predictive control — how does it work?", published in Computers and Chemical Engineering and available online. I'm Tor Heirung, and my co-authors are Joel Paulson, Jared O'Leary, and Ali Mesbah; we're all at the Department of Chemical and Biomolecular Engineering at the University of California, Berkeley.

Before MPC, we first look at the general constrained stochastic optimal control problem for linear systems. We have
a state-space formulation with state x, control u, and measurement y, with a disturbance w and measurement noise v, both stochastic with known distributions. The goal is to minimize the expected value of a standard quadratic control cost over a finite horizon, with a terminal cost on the state. We want to minimize this subject to the model and to constraints on the state x and input u. The chance constraint specifies the permitted probability of violating the state constraints.
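The problem described so far can be sketched as follows. This is a generic statement consistent with the narration; the horizon N, weights Q and R, terminal weight P, and violation probability ε are placeholder notation, not necessarily the paper's exact symbols:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \mathbb{E}\Big[ \sum_{k=0}^{N-1} \big( x_k^\top Q x_k + u_k^\top R u_k \big) + x_N^\top P x_N \Big] \\
\text{s.t.} \quad & x_{k+1} = A x_k + B u_k + w_k, \qquad y_k = C x_k + v_k, \\
& \Pr\big( x_k \in \mathbb{X} \big) \ge 1 - \varepsilon, \qquad u_k \in \mathbb{U}.
\end{aligned}
```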
This poses a key challenge in constrained stochastic optimal control. Without perfect information on the entire state vector, we have to estimate the state. Here we have the Bayesian framework in terms of the hyperstate, which is the probability distribution of the state conditioned on the current information. In linear systems, control and estimation are generally treated separately, which
turns out to be optimal under certain model assumptions. One case in which the separation principle holds, meaning control and observer design do not interact, is linear-quadratic-Gaussian, or LQG, control. This is an unconstrained problem and therefore significantly less challenging. In this problem, the disturbance w and the measurement noise v are both Gaussian sequences. The optimal
state estimator is the well-known Kalman filter, which can be derived from the general Bayesian recursion above. The expected-value cost function has a simple deterministic form in terms of the expected value of the state. The minimizing control law is linear in the state estimate, with gain K. This
results in the stable closed-loop dynamics A - BK. It is clear here that the separation principle holds when looking at the state and estimation-error dynamics together: the bottom-left zero in the highlighted matrix means the eigenvalues for control and estimation can be placed independently.
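The independence claim can be checked numerically. Below is a minimal noise-free sketch in Python; the matrices A, B, C and the gains K and L are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Hypothetical discrete-time system; all numbers are illustrative only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[10.0, 5.0]])   # state-feedback gain
L = np.array([[0.5],
              [0.5]])         # observer (e.g., Kalman) gain

# Joint dynamics of the state x and the estimation error e = x - xhat:
#   [x+; e+] = [[A - BK, BK], [0, A - LC]] [x; e]
M = np.block([[A - B @ K,        B @ K    ],
              [np.zeros_like(A), A - L @ C]])

# The bottom-left zero block makes M block upper-triangular, so its
# spectrum is the union of the controller and observer spectra.
eigs_M = np.sort_complex(np.linalg.eigvals(M))
eigs_sep = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A - B @ K), np.linalg.eigvals(A - L @ C)]))
print(np.allclose(eigs_M, eigs_sep))  # True
```

The eigenvalues of the joint matrix match the separately computed controller and observer eigenvalues, which is exactly what lets the two designs be carried out independently.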
Many of these ideas are used in stochastic model predictive control, but the question is how things change. We show in the paper that the objective function is not affected by the presence of constraints: it retains its deterministic quadratic form in the mean
state predictions, shown on the right. In the constrained stochastic optimal control problem shown two slides back, the state constraints are probabilistic; we show how to arrive at a deterministic reformulation of the chance constraints, with the main challenge being how to modify the right-hand side, often called the back-off. This results in a standard quadratic program.
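As a sketch of what such a back-off can look like for a single scalar constraint (this is a generic illustration under stated assumptions, not the paper's exact procedure): a chance constraint Pr(x ≤ b) ≥ 1 − ε with predicted mean m and standard deviation σ can be replaced by the deterministic constraint m ≤ b − λσ, where λ depends on the distributional assumption.

```python
from statistics import NormalDist
import math

def backoff(sigma, eps, gaussian=True):
    """Back-off lambda * sigma for the scalar chance constraint
    Pr(x <= b) >= 1 - eps, reformulated as mean(x) <= b - backoff.
    Illustrative sketch only; 'sigma' is the predicted standard
    deviation and 'eps' the allowed violation probability."""
    if gaussian:
        lam = NormalDist().inv_cdf(1.0 - eps)   # exact Gaussian quantile
    else:
        lam = math.sqrt((1.0 - eps) / eps)      # Cantelli: distribution-free
    return lam * sigma

# Example: 5% allowed violation, predicted standard deviation 0.2
print(round(backoff(0.2, 0.05), 3))                  # 0.329
print(round(backoff(0.2, 0.05, gaussian=False), 3))  # 0.872
```

The distribution-free Cantelli bound is noticeably more conservative than the Gaussian quantile, which is the usual trade-off when less is known about the disturbance.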
The initial condition is a state estimate, here from the Kalman filter for simplicity. Stochastic MPC solves this deterministic QP on a receding horizon, just like standard MPC; the result is an implicit control law. Now, comparing stochastic MPC
and standard MPC: we have the stochastic system on the left and the deterministic system to which MPC is applied on the right. For stochastic MPC we have the expected-value cost function; as mentioned, this has a quadratic deterministic form, shown here in red.
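The deterministic form of the expected cost follows from a standard identity for quadratic forms of random vectors. As a sketch, with a generic weight Q, predicted mean, and covariance (notation assumed here, not taken from the paper):

```latex
\mathbb{E}\big[x^\top Q x\big] = \bar{x}^\top Q \bar{x} + \operatorname{tr}(Q \Sigma)
```

For a linear system with additive noise, the predicted covariance does not depend on the chosen inputs, so the trace term is a constant offset and the cost can be minimized over the mean-state predictions alone.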
This is identical to the standard MPC cost except for a change of variables; this is also shown here in red. Similarly with the optimal control problems: the stochastic formulation has the equivalent deterministic form highlighted here. This is the QP, again identical in complexity to the QP in standard MPC, also shown in red. The only significant difference is the right-hand sides of the state constraints, highlighted in red. In the paper we discuss how to determine the modified right-hand side, or the constraint back-off, which is generally done offline. In other words, stochastic MPC is nothing but a slight modification of standard MPC.

We use a case study throughout
the paper to illustrate the main points: it is a two-stage chemical reaction with a constraint on the second species. Here we first see many simulations with MPC and stochastic MPC, and see that with stochastic MPC the constraint on x2 is not violated nearly as frequently, or by as much, as with MPC. The second figure is a phase plot, which also shows constraint modification using the Cantelli–Chebyshev inequality as well as LQG, and compares the extent to which the controllers lead to acceptable levels of constraint satisfaction. We also look at how specifying the required probability of constraint satisfaction compares with manually adjusting the constraint back-off in terms of the effect on the cost function, determined through multiple simulations.

Finally, thank you for watching this presentation; we hope you go read the paper.