11. Sensors and Information Spaces

Up until now it has been assumed everywhere that the current state is known. What if the state is not known? In this case, information regarding the state is obtained from sensors during the execution of a plan. This situation arises in most applications that involve interaction with the physical world. For example, in robotics it is virtually impossible for a robot to precisely know its state, except in some limited cases. What should be done if there is limited information regarding the state? A classical approach is to take all of the information available and try to estimate the state. In robotics, the state may include both the map of the robot's environment and the robot configuration. If the estimates are sufficiently reliable, then we may safely pretend that there is no uncertainty in state information. This enables many of the planning methods introduced so far to be applied with little or no adaptation.

The more interesting case occurs when state estimation is altogether
avoided. It may be surprising, but many important tasks can be
defined and solved without ever requiring that specific states are
sensed, even though a state space is defined for the planning problem.
To achieve this, the planning problem will be expressed in terms of an
*information space*. Information spaces serve the same purpose
for sensing problems as the configuration spaces of Chapter
4 did for problems that involve geometric
transformations. Each information space represents the place where a
problem that involves sensing uncertainty naturally lives.
Successfully formulating and solving such problems depends on our
ability to manipulate, simplify, and control the information space.
In some cases elegant solutions exist, and in others there appears to
be no hope at present of efficiently solving them. There are many
exciting open research problems associated with information spaces and
sensing uncertainty in general.

Recall the situation depicted in Figure 11.1, which was also shown in Section 1.4. It is assumed that the state of the environment is not known. There are three general sources of information regarding the state:

- The *initial conditions* can provide powerful information before any actions are applied. It might even be the case that the initial state is given. At the other extreme, the initial conditions might contain no information.
- The *sensor observations* provide measurements related to the state during execution. These measurements are usually incomplete or involve disturbances that distort their values.
- The *actions* already executed in the plan provide valuable information regarding the state. For example, if a robot is commanded to move east (with no other uncertainties except an unknown state), then it is expected that the state is further east than it was previously. Thus, the applied actions provide important clues for deducing possible states.
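The three sources above can be combined concretely. The following sketch (not from the text; the grid world, sensor, and function names are illustrative assumptions) maintains a nondeterministic I-state, i.e., the set of all states consistent with the initial conditions, the actions applied, and the observations received, for a robot on a one-dimensional grid whose only sensor reports whether it is touching a wall:

```python
# Illustrative sketch: nondeterministic I-state update on a 1-D grid
# of states {0, ..., 4}.  All names here are assumptions for the example.

STATES = set(range(5))

def forward(x, u):
    """State transition: u = +1 moves east, clamped at the boundaries."""
    return min(max(x + u, 0), 4)

def consistent(x, y):
    """Sensor model: y is True iff the robot is at a wall (state 0 or 4)."""
    return (x in (0, 4)) == y

def update(X, u, y):
    """Shrink the set X of possible states using the applied action u
    (action information) and the observation y (sensor information)."""
    predicted = {forward(x, u) for x in X}
    return {x for x in predicted if consistent(x, y)}

X = STATES                  # initial conditions: nothing is known
X = update(X, +1, False)    # moved east, not at a wall -> {1, 2, 3}
X = update(X, +1, False)    # -> {2, 3}
X = update(X, +1, True)     # at a wall -> {4}: the state is now known
```

Note that no single state is ever estimated; the set itself is the I-state, and it shrinks as history accumulates.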

Keep in mind that there are generally two ways to use the information space:

- *Take all of the information available, and try to estimate the state.* This is the classical approach. Pretend that there is no longer any uncertainty in state, but prove (or hope) that the resulting plan works under reasonable estimation error. A plan is generally expressed as a function that maps states to actions.
- *Solve the task entirely in terms of an information space.* Many tasks may be achieved without ever knowing the exact state. The goals and analysis are formulated in the information space, without the need to achieve particular states. For many problems this results in dramatic simplifications. A plan is generally expressed as a function that maps information states to actions.
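The second approach can be made concrete with a small sketch (illustrative only; the grid world and names are assumptions, not from the text). Here a plan is a function on the I-space itself: it maps each nondeterministic I-state, a set of possible states, to an action, and the task is solved without sensors and without ever estimating a single state. Clamping at the east wall makes the uncertainty collapse on its own:

```python
# Illustrative sketch: a plan defined directly on the I-space.
# States {0, ..., 4} on a line; no sensors at all.

GOAL = {4}                        # task: guarantee arrival at state 4

def forward(x, u):
    """Moving east is clamped at the east wall (state 4)."""
    return min(x + u, 4)

def plan(X):
    """The plan maps an I-state X (a set of states) to an action;
    it terminates once the goal is guaranteed, i.e., X is inside GOAL."""
    return None if X <= GOAL else +1

X = set(range(5))                 # unknown initial state
while (u := plan(X)) is not None:
    X = {forward(x, u) for x in X}    # action-only I-state update
# X == {4}: success is certain, yet no state was ever sensed or estimated.
```

This mirrors the theme of Section 11.5.4: information can be gained without sensors, purely from the structure of the actions and the environment.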

For brevity, "information" will be replaced by "I" in many terms. Hence, information spaces and information states become I-spaces and I-states, respectively. This is similar to the shortening of configuration spaces to C-spaces.

Sections 11.1 to 11.3 first cover information spaces for discrete state spaces. This case is much easier to formulate than information spaces for continuous spaces. In Sections 11.4 to 11.6, the ideas are extended from discrete state spaces to continuous state spaces. It is helpful to have a good understanding of the discrete case before proceeding to the continuous case. Section 11.7 extends the formulation of information spaces to game theory, in which multiple players interact over the same state space. In this case, each player in the game has its own information space over which it makes decisions.

- 11.1 Discrete State Spaces

- 11.2 Derived Information Spaces
- 11.2.1 Information Mappings
- 11.2.2 Nondeterministic Information Spaces
- 11.2.3 Probabilistic Information Spaces
- 11.2.4 Limited-Memory Information Spaces

- 11.3 Examples for Discrete State Spaces
- 11.3.1 Basic Nondeterministic Examples
- 11.3.2 Nondeterministic Finite Automata
- 11.3.3 The Probabilistic Case: POMDPs

- 11.4 Continuous State Spaces
- 11.4.1 Discrete-Stage Information Spaces
- 11.4.2 Continuous-Time Information Spaces
- 11.4.3 Derived Information Spaces

- 11.5 Examples for Continuous State Spaces
- 11.5.1 Sensor Models
- 11.5.2 Simple Projection Examples
- 11.5.3 Examples with Nature Sensing Actions
- 11.5.4 Gaining Information Without Sensors

- 11.6 Computing Probabilistic Information States

- 11.7 Information Spaces in Game Theory