Computer Science Department
School of Computer Science, Carnegie Mellon University



CMU-CS-98-108

Situation-Dependent Learning
for Interleaved Planning and Robot Execution

Karen Zita Haigh

February 1998

Ph.D. Thesis

CMU-CS-98-108.ps
CMU-CS-98-108.pdf


Keywords: Artificial intelligence, robotics, Prodigy, Xavier, interleaving planning and execution, execution monitoring, asynchronous goals, machine learning, situation-dependent rules, situation-dependent costs, plan quality, planning performance, execution performance, search control knowledge


This dissertation presents Rogue, a complete, integrated robotic agent that plans, executes, and learns.

Physical domains are notoriously hard to model completely and correctly. Robotics researchers have developed learning algorithms that successfully tune low-level operational parameters. Rather than improving low-level actuator control, our work focuses on the planning stages of the system. The thesis provides techniques to process execution experience directly and to learn to improve planning and execution performance.

Rogue accepts multiple, asynchronous task requests and interleaves task planning with real-world robot execution. This dissertation describes how Rogue prioritizes tasks, suspends and interrupts tasks, and opportunistically achieves compatible tasks. We present how Rogue interleaves planning and execution to accomplish its tasks, monitoring for and compensating for failures and changes in the environment.
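As a rough illustration of this interleaving, the control flow might look like the following Python sketch. The names used here (TaskRequest, plan_next_step, execute_and_monitor, replan) are illustrative assumptions, not Rogue's actual interfaces, which are built on the Prodigy planner and the Xavier robot.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class TaskRequest:
        priority: float                      # lower value = more urgent
        name: str = field(compare=False)

    def control_loop(pending, plan_next_step, execute_and_monitor, replan):
        """Hypothetical interleaving loop: pick the highest-priority task,
        plan its next step, execute that step on the robot, and react to
        the observed outcome before planning further."""
        heapq.heapify(pending)
        while pending:
            task = heapq.heappop(pending)
            step = plan_next_step(task)            # task planner chooses the next action
            outcome = execute_and_monitor(step)    # robot executes; monitor watches for failure
            if outcome.failed:
                replan(task, outcome)              # compensate for failure or environment change
                heapq.heappush(pending, task)
            elif not outcome.task_complete:
                heapq.heappush(pending, task)      # newly arrived requests may now preempt it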

Rogue analyzes execution experience to detect patterns in the environment that affect plan quality. It extracts learning opportunities from massive, continual, probabilistic execution traces, and correlates these opportunities with environmental features, thereby detecting patterns in the form of situation-dependent rules. We present the development and use of these rules for two very different planners: the path planner and the task planner. We present empirical data showing the effectiveness of Rogue's novel learning approach.
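As a minimal sketch of the learning step, assume the environmental features and observed costs have already been extracted from the execution traces; a standard regression learner can then map features to expected costs, which the planner queries when evaluating alternatives. The feature names below are hypothetical, and scikit-learn's DecisionTreeRegressor is used purely as a stand-in, not as the thesis's actual learner.

    from sklearn.tree import DecisionTreeRegressor

    # One row per learning opportunity extracted from the execution traces:
    # [hour_of_day, corridor_id, door_was_closed]
    features = [
        [9,  1, 0],
        [12, 1, 1],   # lunchtime, door closed: traversal was slow
        [12, 2, 0],
        [17, 1, 1],
    ]
    observed_costs = [10.0, 45.0, 12.0, 40.0]   # e.g. seconds to traverse a corridor arc

    # Fit a situation-dependent cost model over the environmental features.
    rule_model = DecisionTreeRegressor(max_depth=3).fit(features, observed_costs)

    def situation_dependent_cost(hour, corridor, door_closed):
        """Cost estimate the path planner would use instead of a fixed, static cost."""
        return float(rule_model.predict([[hour, corridor, door_closed]])[0])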

Our learning approach is applicable to any planner operating in any physical domain. Our empirical results show that situation-dependent rules effectively improve the planner's model of the environment, allowing the planner to predict and avoid failures, to respond to a changing environment, and to create plans that are tailored to the real world. Physical systems should adapt to changing situations and absorb any information that will improve their performance.

189 pages

