Task-Level Planning and Task-Directed Sensing for Robots in Uncertain Environments
Donald, Bruce Randall; Jennings, James; Brown, Russell
The primary goal of our research is task-level planning. We approach this goal through a blend of theory, implementation, and experimentation. We propose to investigate task-level planning for autonomous agents, such as mobile robots, that function in an uncertain environment. These robots typically have very approximate, inaccurate, or minimal models of the environment. For example, although the geometry of its environment is crucial to its performance, a mobile robot may have only a partial, local "map" of the world. Similarly, the expected effects of a robot's actuators critically influence its selection of actions to accomplish a goal, yet a robot may have only an approximate, local ability to forward-simulate a control strategy. While mobile robots are typically equipped with sensors to gain information about the world and to compensate for errors in actuation and prediction, these sensors are noisy and in turn provide inaccurate information.

We propose to investigate an approach whereby the robot acquires the necessary information about the world by planning a series of experiments using its sensors and actuators, and by building data structures based on its observations of these experiments. A key feature of this approach is that the experiments the robot performs should be driven by the information demands of the task. That is, in performing some task, the robot may enter a state in which making progress toward a goal requires more information about the world (or about its own state). In this case, the robot should plan experiments that disambiguate the situation. When this process is driven by the information demands of the task, we believe it constitutes an important algorithmic technique for effecting task-directed sensing. Planned projects focus on:

1. A theory of sensor interpretation and task-directed planning using perceptual equivalence classes, intended to be applicable in highly uncertain or unmodelled environments, such as for a mobile robot.
2. Algorithmic techniques for modelling geometric constraints on recognizability, and the building of internal representations (such as maps) using these constraints.
3. Explicit encoding of the information requirements of a task using a lattice (information hierarchy) of recognizable sets, which allows the robot to perform experiments to recognize a situation or a landmark.
4. The synthesis of robust mobot programs using the geometric constraints, constructive recognizability experiments, and uncertainty models imposed by the task.

We propose to (a) continue our research and develop the theory fully, (b) use tools and concepts from the geometric theory of planning where appropriate, and (c) extend our theory and the geometric theory of planning where necessary to overcome the challenges of the autonomous mobile robot domain. One of our most important goals is to show how our theory can be made constructive and algorithmic. We propose a framework for mobot programming based on constructive recognizability and discuss why it should be robust in uncertain environments. Our objective is to demonstrate the following: when recognizability is made constructive in this way, we naturally obtain task-directed sensing strategies, driven by the information demands encoded in the structure of the recognizable sets.

A principled theory of sensing and action is crucial to developing task-level programming for autonomous mobile robots. We propose a framework for such a theory, providing both a precise vocabulary and appropriate computational machinery for reasoning about information flow in and through a robot system equipped with various types of sensors and operating in a dynamic, unstructured environment. We will implement the theory and test it on mobile robots in our laboratory.
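The notion of perceptual equivalence classes can be illustrated with a minimal sketch. The partition function, the toy state space, and the x-only sensor below are our own illustrative assumptions, not constructions from the proposal: world states that yield the same sensor reading are indistinguishable to the robot, so the robot can at best know which equivalence class it currently occupies.

```python
# Hypothetical sketch of perceptual equivalence classes:
# states producing identical sensor readings are grouped together.
from collections import defaultdict

def perceptual_classes(states, sensor):
    """Partition `states` by the reading that `sensor` returns for each."""
    classes = defaultdict(set)
    for s in states:
        classes[sensor(s)].add(s)
    return list(classes.values())

# Toy example: a 4x4 grid world with a sensor that reports only the
# x coordinate, so states differing only in y are indistinguishable.
states = [(x, y) for x in range(4) for y in range(4)]
sensor = lambda s: s[0]
classes = perceptual_classes(states, sensor)
# 4 classes, each containing the 4 states that share an x coordinate
```

Any planning done by the robot then operates on these classes rather than on exact states, since no sensing strategy can separate states within a class.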
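The lattice of recognizable sets in item 3 can be sketched, under our own simplifying assumptions, as successive refinement: each sensing experiment intersects the robot's current knowledge set with the preimage of the observed reading, descending the lattice until the set is fine enough to meet the task's information demand. The function and variable names here are illustrative, not taken from the proposal.

```python
# Hypothetical sketch: refining a recognizable set via experiments.
# `knowledge` is the set of states consistent with all observations so far;
# each experiment shrinks it until the task's information demand is met.
def run_experiments(knowledge, experiments, true_state, goal_info):
    """Perform experiments until `knowledge` is contained in `goal_info`."""
    for sense in experiments:
        if knowledge <= goal_info:      # information demand already met
            break
        reading = sense(true_state)     # perform the sensing experiment
        knowledge = {s for s in knowledge if sense(s) == reading}
    return knowledge

states = frozenset((x, y) for x in range(4) for y in range(4))
true_state = (2, 3)
goal_info = {s for s in states if s[0] == 2}    # task only needs x known
experiments = [lambda s: s[0], lambda s: s[1]]  # x-sensor, then y-sensor
result = run_experiments(set(states), experiments, true_state, goal_info)
# the y-sensor is never invoked: one experiment pins down x
```

The point of the sketch is that the experiments performed are dictated by the task's information demand: once the knowledge set lies inside the set the task requires, no further sensing is done.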
computer science; technical report