Computer Science Department
School of Computer Science, Carnegie Mellon University
Probabilistic Plan Management
Laura M. Hiatt
The general problem of planning for uncertain domains remains a difficult challenge. Research that focuses on constructing plans by reasoning with explicit models of uncertainty has produced promising mechanisms for coping with specific types of domain uncertainty; however, these approaches generally have difficulty scaling. Research in robust planning, alternatively, has emphasized the use of deterministic planning techniques, with the goal of constructing a flexible plan (or set of plans) that can absorb deviations during execution. Such approaches are scalable, but they either result in overly conservative plans or ignore the leverage that explicit uncertainty models can provide.
The main contribution of this work is a composite approach to planning that couples the strengths of both of the above approaches while minimizing their weaknesses. Our approach, called Probabilistic Plan Management (PPM), takes advantage of the known uncertainty model while avoiding the overhead of non-deterministic planning. PPM takes as its starting point a plan built under deterministic modeling assumptions, and begins by layering an uncertainty analysis on top of that plan. The analysis calculates the overall expected outcome of execution and can be used to identify expected weak areas of the schedule.
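As an illustration only (not the thesis's actual analysis), the idea of layering an uncertainty analysis over a deterministic plan can be sketched as follows. The activity names, success probabilities, and utilities below are hypothetical, and the sketch assumes activities succeed or fail independently and that the plan earns its utility only if every activity succeeds:

```python
from math import prod

def analyze_plan(activities):
    """Hypothetical uncertainty analysis over a deterministic plan.

    activities: list of (name, success_prob, utility) tuples.
    Returns (expected_utility, weakest_activity_name), where expected
    utility is the plan's total utility weighted by the joint success
    probability, and the weakest activity is the plan's expected weak area.
    """
    joint = prod(p for _, p, _ in activities)
    total_utility = sum(u for _, _, u in activities)
    weakest = min(activities, key=lambda a: a[1])[0]
    return total_utility * joint, weakest

# A toy three-activity schedule (all numbers invented for illustration).
plan = [("navigate", 0.95, 10), ("grasp", 0.70, 25), ("deliver", 0.90, 15)]
expected_utility, weak_area = analyze_plan(plan)
# The analysis flags "grasp" as the expected weak area of this schedule.
```

Under these assumptions, the expected utility is 50 × (0.95 · 0.70 · 0.90) ≈ 29.9 out of a possible 50, and the low-probability "grasp" step is identified as the place where strengthening effort should be spent.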
PPM uses the analysis in two main ways to manage execution and maximize its utility. First, it makes deterministic plans more robust by minimizing the negative impact that unexpected or undesirable contingencies have on plan utility. PPM strengthens the current schedule by fortifying the areas identified as weak by the probabilistic analysis, increasing the likelihood that they will succeed. In experiments, probabilistic schedule strengthening significantly increases the utility of execution while introducing only a modest overhead.
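A minimal sketch of the strengthening idea, again with invented numbers and a hypothetical fortification strategy (scheduling a redundant backup for any activity whose success probability falls below a threshold, assuming the backup fails independently):

```python
from math import prod

def strengthen(activities, threshold=0.8):
    """Hypothetical schedule strengthening: fortify weak activities.

    activities: list of (name, success_prob, utility) tuples.
    For each activity whose success probability falls below `threshold`,
    add a redundant backup; if the backup fails independently with the
    same probability, the combined success probability is 1 - (1 - p)**2.
    """
    return [(name, 1 - (1 - p) ** 2 if p < threshold else p, utility)
            for name, p, utility in activities]

plan = [("navigate", 0.95, 10), ("grasp", 0.70, 25), ("deliver", 0.90, 15)]
stronger = strengthen(plan)

joint_before = prod(p for _, p, _ in plan)      # joint success prob ~0.60
joint_after = prod(p for _, p, _ in stronger)   # joint success prob ~0.78
```

Fortifying only the weak "grasp" step raises its success probability from 0.70 to 0.91 and the plan's joint success probability from about 0.60 to about 0.78, illustrating how targeted strengthening can buy robustness at a modest cost.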
Second, PPM reduces the amount of replanning that occurs during execution via a probabilistic meta-level control algorithm. It uses the probability analysis as a basis for identifying cases where replanning probably is (or is not) necessary, and acts accordingly. This addresses the trade-off between too much replanning, which can lead to the overuse of computational resources and a lack of responsiveness, and too little, which can lead to undesirable errors or missed opportunities during execution. Experiments show that probabilistic meta-level control considerably decreases the amount of time spent managing plan execution, without affecting how much utility is earned. In these ways, our approach effectively manages the execution of deterministic plans for uncertain domains, both by producing effective plans in a scalable way and by intelligently controlling the resources that are used to maintain these high-utility plans during execution.
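The meta-level control decision can be illustrated with a deliberately simplified rule (the function, its parameters, and the tolerance value are hypothetical, not the thesis's algorithm): replan only when the probabilistic analysis shows that expected utility has eroded meaningfully below what the current schedule originally promised.

```python
def should_replan(expected_utility, baseline_utility, tolerance=0.10):
    """Hypothetical meta-level control rule.

    Replan only when the analysis predicts the current schedule's
    expected utility has slipped more than `tolerance` (as a fraction)
    below its original baseline; otherwise skip replanning and keep
    executing, saving computational resources.
    """
    return expected_utility < baseline_utility * (1 - tolerance)

# A small deviation does not trigger replanning; a large one does.
should_replan(48.0, 50.0)   # within 10% of baseline: keep executing
should_replan(38.0, 50.0)   # substantial erosion: replan
```

Even a threshold rule this simple captures the trade-off described above: raising the tolerance trades responsiveness for computation, and lowering it does the reverse.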