3.4. AIPLAN - Threats and Flaws - YouTube

Channel: unknown

[0]
A partial plan is a solution for a given planning problem if it contains no more
[6]
flaws. So, our goal test will be to check
[10]
whether our partial plan is a flawless partial plan.
[14]
Of course, we haven't defined a flaw yet, so that's what we will do next.
[21]
And when we have done this, the search problem will be complete.
[24]
We have defined the initial state. We have defined the successor function,
[29]
and we will now look at the goal test. I will now introduce the concept of a
[34]
threat in a partial plan and I will use an example to illustrate this.
[38]
What we see here is a partial plan to which I've already added two actions to
[43]
support the two goal conditions. We also have the dummy action for the goal
[47]
over here. And there should be a second dummy action
[50]
for the initial state. But since we're not going to use that in
[53]
this example, I've left it out here to save a little space.
[57]
Now, what we do is apply more plan refinement operations.
[60]
In this case, we add an action to the plan,
[63]
this new move action over here. And the reason why we do this is because
[67]
we want to support the precondition at robot location one with this new
[72]
operator. So, we have now inserted a causal link for
[74]
this, and we insert an ordering constraint because the action that is the
[79]
provider must come before the consumer. Now, let us also look at the condition
[84]
that are protected by the causal links. These are at robot location two here,
[89]
not unloaded robot here, the two goal conditions that we want to support.
[94]
And over here, it is the precondition of the load operator that the robot must be
[99]
at location one that we're protecting with this causal link.
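The partial plan described so far can be written down as plain data. This is a minimal sketch; the action labels and condition literals are my own illustrative choices, not taken from the lecture's slides.

```python
# A minimal sketch of the partial plan described above. Action names and
# condition literals are illustrative assumptions, not from the slides.

# Actions: two moves, a load, and the dummy goal action.
actions = {"a1_move", "a2_load", "a3_move", "goal"}

# Ordering constraints as (before, after) pairs: the provider of
# at(robot, location1) must come before its consumer, the load action.
orderings = {("a3_move", "a2_load")}

# Causal links as (provider, protected condition, consumer).
causal_links = {
    ("a1_move", "at(robot, location2)", "goal"),
    ("a2_load", "loaded(container)", "goal"),
    ("a3_move", "at(robot, location1)", "a2_load"),
}

# Every consumer of a causal link should be an action in the plan.
assert all(consumer in actions for (_, _, consumer) in causal_links)
```

The three causal links correspond to the three protected conditions discussed above: the two goal conditions and the precondition of the load operator.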
[103]
Now, the problem is the following. Our first operator, the move action, has
[108]
a negative effect at robot location one, which is exactly the condition we're
[113]
trying to protect down here. This means if we first executed action
[117]
number three, which moves the robot to location one.
[121]
Then, action number one moving the robot from location one to location two, and
[125]
finally action number two loading the container onto the robot with a crane,
[130]
etc. Then, this plan would not be valid
[132]
because the second action in our totally ordered plan will destroy the protected
[138]
condition that we need to execute the final action in that plan.
[142]
So, the final action would no longer be executable.
[145]
This is what is called a threat. We have an action that may possibly occur
[150]
in parallel with a causal link and that has an effect that is complementary to
[155]
the condition we're trying to protect in the causal link.
[158]
That is what is called a threat. And in this case, there is a simple way
[163]
to get rid of this threat, namely to introduce another ordering constraint
[167]
that says, action number one must come after action number two. In which case,
[173]
the action with the effect that is potentially harmful cannot be in parallel
[177]
to the causal link that protects this condition.
[180]
So, the insertion of the ordering constraint removes the threat and that is
[185]
what is called a resolver for this threat.
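The resolver just described, adding an ordering constraint and then making sure the orderings are still consistent, can be sketched as follows. The cycle check uses Kahn's algorithm; the action names are the same illustrative labels as before, not from the slides.

```python
def consistent(orderings):
    """True iff the (before, after) constraints contain no cycle
    (Kahn's algorithm: repeatedly remove nodes with in-degree zero)."""
    nodes = {a for pair in orderings for a in pair}
    indeg = {n: 0 for n in nodes}
    for _, after in orderings:
        indeg[after] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for before, after in orderings:
            if before == n:
                indeg[after] -= 1
                if indeg[after] == 0:
                    queue.append(after)
    return seen == len(nodes)

orderings = {("a3_move", "a2_load")}   # provider before consumer
orderings.add(("a2_load", "a1_move"))  # the resolver: a1 after a2
assert consistent(orderings)           # the resolver is applicable

# The opposite ordering would close a cycle and be inconsistent:
assert not consistent(orderings | {("a1_move", "a2_load")})
```

Because an inconsistent set of orderings can never become consistent again, a resolver is only applicable if the check above still succeeds after the new constraint is added.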
[187]
Now that you've seen an example of a threat, here's the formal definition of
[193]
what constitutes a threat. We start off with a plan pi consisting of
[197]
the four usual components. We have actions, orderings, variable
[202]
bindings, and causal links. In this plan, we have an action ak that
[207]
may cause a threat for a causal link that links an action ai to an action aj,
[212]
protecting condition p. And we say that the action ak threatens
[217]
our causal link if the following conditions hold.
[221]
Firstly, ak must have an effect, not q, that is possibly inconsistent with p, the
[226]
condition we're trying to protect. If p and q are ground, then of course,
[232]
they are possibly inconsistent if they are the same.
[236]
But, if they contain variables, then they are possibly inconsistent if p and q are
[241]
unifiable. That means, they can be made the same if
[246]
we substitute values for variables. The second condition is that the
[251]
following ordering constraints must be consistent with the ordering constraints
[256]
in our plan. And the ordering constraints that we
[260]
consider are ai could come before ak, and ak before aj.
[264]
What this means is that our action ak can possibly be in parallel to the causal
[269]
link because the provider comes before ak and the consumer comes after ak.
[275]
But the condition here is that this is possible, not that it is necessarily so.
[279]
So, these ordering constraints must be consistent with our orderings in the
[284]
plan, but they need not be in the plan. Similarly, the third condition states
[288]
that if we introduce new variable bindings as part of our unification up
[293]
here, then, these new variable binding constraints must be consistent with the
[297]
binding constraints that we currently have in our plan.
[301]
So, ak can threaten our causal link if it has an effect that is possibly
[307]
inconsistent with the proposition we're trying to protect.
[311]
And it may possibly occur in parallel to our causal link.
[315]
And finally, the bindings that we need for making p and q the same are
[320]
consistent with the bindings used in the plan, that is the definition of a threat.
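The three conditions of this definition can be turned into a small check. The literal encoding (tuples whose "?"-prefixed names are variables) and all helper names are my own assumptions; the ordering test here uses the transitive closure of the orderings rather than a full constraint solver.

```python
def unify(p, q, subst=None):
    """Try to make literals p and q equal; return the substitution or None.
    Literals are tuples like ("at", "?r", "loc1"); "?"-names are variables."""
    if subst is None:
        subst = {}
    if len(p) != len(q) or p[0] != q[0]:
        return None
    for a, b in zip(p[1:], q[1:]):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if a.startswith("?"):
            subst[a] = b
        elif b.startswith("?"):
            subst[b] = a
        else:
            return None
    return subst

def must_precede(x, y, orderings):
    """True if x < y already follows from the orderings (transitive closure)."""
    frontier, seen = [x], set()
    while frontier:
        n = frontier.pop()
        for before, after in orderings:
            if before == n and after not in seen:
                if after == y:
                    return True
                seen.add(after)
                frontier.append(after)
    return False

def threatens(ak, neg_effects, link, orderings):
    """ak threatens the causal link (ai, p, aj) if some negative effect q of ak
    unifies with p, and ai < ak < aj is consistent with the orderings."""
    ai, p, aj = link
    if ak in (ai, aj):
        return False
    return any(
        unify(p, q) is not None
        and not must_precede(ak, ai, orderings)
        and not must_precede(aj, ak, orderings)
        for q in neg_effects
    )

# The move action a1 deletes at(?r, loc1), threatening the link that
# protects at(r1, loc1) for the load action:
link = ("a3_move", ("at", "r1", "loc1"), "a2_load")
orderings = {("a3_move", "a2_load")}
assert threatens("a1_move", [("at", "?r", "loc1")], link, orderings)

# After the resolver orders a1 after a2, the threat disappears:
assert not threatens("a1_move", [("at", "?r", "loc1")], link,
                     orderings | {("a2_load", "a1_move")})
```

Note that the ordering test only asks whether ai < ak < aj *could* hold, exactly as in the definition: the constraints must be consistent with the plan's orderings, not contained in them.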
[327]
Threats, however, are not the only types of flaws that can occur in a plan.
[332]
The good news is there are only two types of flaws we need to look out for.
[338]
So, in a plan pi, that consists of the usual four components, we can have two
[343]
types of flaws. The first is an unsatisfied sub-goal in
[347]
that plan. And an unsatisfied sub-goal is a
[350]
precondition of an action that does not have an incoming causal link that
[355]
supports it. So, this could be the goal dummy action
[359]
which, of course, has all the goal conditions as preconditions.
[363]
All of those need to be supported for the plan not to have this type of flaw.
[368]
And then, every action that we have added to the plan also must have all of its
[373]
preconditions supported by a causal link for the plan not to have this type of
[377]
flaw. And the second type of flaw is a threat,
[380]
so that's what we have just seen: an action that may interfere with a causal
[385]
link. And that's it. That's all the types of flaws
[388]
we need to consider. And here is the proposition that gives us
[392]
our goal test. Suppose we are given a partial plan,
[395]
consisting of actions, orderings, bindings, and links and we can say that
[400]
this is a solution to a planning problem consisting of a state transition system,
[405]
initial state and goal, if three conditions hold.
[408]
The first one is that the plan pi has no flaw.
[411]
That's the two types of flaws we've just described.
[414]
Then, the ordering constraints must not be circular, so they must be consistent.
[419]
And the variable binding constraints must also be consistent.
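The first of these three conditions, finding unsatisfied subgoals, can be sketched as a simple set computation. This is illustrative only: the threat check and the variable bindings are omitted, and the condition names are my own.

```python
def open_goals(preconditions, causal_links):
    """Preconditions (action, p) with no causal link (_, p, action)
    supporting them -- the first type of flaw."""
    supported = {(consumer, p) for (_, p, consumer) in causal_links}
    return {(a, p) for (a, p) in preconditions if (a, p) not in supported}

# The dummy goal action's preconditions are the goal conditions.
preconditions = {("goal", "at(robot, location2)"),
                 ("goal", "loaded(container)")}

# Only one of the two goal conditions is supported so far:
causal_links = {("a1_move", "at(robot, location2)", "goal")}

assert open_goals(preconditions, causal_links) == {("goal", "loaded(container)")}
```

A full goal test would return true only when this set is empty, no action threatens a causal link, and both the orderings and the bindings are consistent.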
[423]
So, while we are doing our search for a plan, we have these three conditions to
[428]
maintain. The first one, that a plan must have no
[431]
flaws, well, we cannot really have that because initially our plan has flaws.
[436]
The goal has unsatisfied preconditions. But the other two conditions, that we
[441]
keep our ordering constraints and our variable binding constraints consistent,
[446]
these two conditions we maintain during the
[450]
planning process. Why is that? Because once we introduce an
[454]
inconsistency into either set of constraints, this can never be removed.
[458]
So, we maintain the consistency of the constraints and try to remove the flaws
[463]
from the plan. And once all the flaws are gone,
[466]
we're done. I won't go into the proof of this
[469]
proposition in detail, but just tell you that this can be done
[474]
by induction quite easily. So, we start off with the base case, that
[478]
is the empty plan. And in the empty plan, the only causal
[482]
links can go from the initial-state action to the goal action, the two dummy actions
[486]
that we have in this plan. And if all the goal conditions are
[490]
supported from the initial state, that must be a solution plan to our problem in
[494]
which case the goal was already satisfied in our initial state.
[497]
And the induction step simply consists of removing one action from the plan and
[502]
showing that the shorter plan is still a solution to a planning problem, the
[507]
modified planning problem. And the action we pick here, as you can
[511]
see, is one of the actions in the plan that has no predecessors.
[515]
So, this is it. This is our goal test that completes the
[519]
planning problem. And our goal test is that the plan
[523]
must not have flaws. If it has no flaws, it is a solution
[527]
plan.