Let $Y$ denote the *observation space*, which is the set of all
possible observations, $y \in Y$. For convenience, suppose that $U$,
$\Theta$, and $Y$ are all discrete. It will be assumed as part of the
model that some constraints on $\theta$ are known once $y$ is given.
Under the nondeterministic model a set $Y(\theta) \subseteq Y$ is
specified for every $\theta \in \Theta$. The set $Y(\theta)$
indicates the possible observations, given that the nature action is
$\theta$. Under the probabilistic model a conditional probability
distribution, $P(y \mid \theta)$, is specified. Examples of sensing models
will be given in Section 9.2.4. Many others appear in
Sections 11.1.1 and 11.5.1, although they are
expressed with respect to a state space that reduces to $\Theta$
in this section. As before, the probabilistic case also requires a
prior distribution, $P(\theta)$, to be given. This results in the
following formulation.

**Formulation 9.5**

- A finite, nonempty set $U$ called the *action space*. Each $u \in U$ is referred to as an *action*.
- A finite, nonempty set $\Theta$ called the *nature action space*.
- A finite, nonempty set $Y$ called the *observation space*.
- A set $Y(\theta) \subseteq Y$ or probability distribution $P(y \mid \theta)$ specified for every $\theta \in \Theta$. This indicates which observations are possible or probable, respectively, if $\theta$ is the nature action. In the probabilistic case a prior, $P(\theta)$, must also be specified.
- A function $L : U \times \Theta \to \mathbb{R} \cup \{\infty\}$, called the *cost function*.
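To make the probabilistic sensing model concrete, the following is a minimal sketch in which the two nature actions, two observations, and all probabilities are invented for illustration. It encodes a prior $P(\theta)$ and a sensing model $P(y \mid \theta)$, and applies Bayes' rule to recover $P(\theta \mid y)$ once an observation arrives:

```python
# Hypothetical instance of the probabilistic model:
# nature actions Theta = {"t1", "t2"}, observations Y = {"y1", "y2"}.
prior = {"t1": 0.7, "t2": 0.3}        # P(theta), values chosen for illustration
sensing = {                            # P(y | theta)
    "t1": {"y1": 0.9, "y2": 0.1},
    "t2": {"y1": 0.2, "y2": 0.8},
}

def posterior(y, prior, sensing):
    """Apply Bayes' rule to obtain P(theta | y) from the prior and P(y | theta)."""
    joint = {th: prior[th] * sensing[th][y] for th in prior}  # P(theta) P(y | theta)
    total = sum(joint.values())                               # P(y), by marginalization
    return {th: p / total for th, p in joint.items()}

print(posterior("y1", prior, sensing))  # posterior heavily favors "t1"
```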

Consider solving Formulation 9.5. A strategy is now
more complicated than simply specifying an action because we want to
completely characterize the behavior of the robot before the
observation has been received. This is accomplished by defining a
*strategy* as a function, $\pi : Y \to U$. For each
possible observation, $y \in Y$, the strategy provides an action. We
now want to search the space of possible strategies to find the one
that makes the best decisions over all possible observations. In this
section, $Y$ is actually a special case of an information space, which
is the main topic of Chapters 11 and 12.
Eventually, a strategy (or plan) will be conditioned on an information
state, which generalizes an observation.
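Because $\pi$ may select an action independently for each observation, the search over strategies decomposes: an optimal strategy picks, for each $y \in Y$, the best action under the chosen model (worst case over the nature actions consistent with $y$, or expected cost weighted by $P(\theta)\,P(y \mid \theta)$). The sketch below illustrates both cases; all sets, costs, and probabilities are invented for illustration.

```python
# Hypothetical toy instance; every name and number is invented for illustration.
U = ["u1", "u2"]
Theta = ["t1", "t2"]
Y = ["y1", "y2"]
L = {("u1", "t1"): 0, ("u1", "t2"): 4,   # cost L(u, theta)
     ("u2", "t1"): 3, ("u2", "t2"): 1}

# Nondeterministic model: Y(theta), the observations possible under theta.
Y_of = {"t1": {"y1"}, "t2": {"y1", "y2"}}

def worst_case_strategy(U, Theta, Y, L, Y_of):
    """For each y, pick the action minimizing the worst-case cost over the
    nature actions theta that could have produced y."""
    pi = {}
    for y in Y:
        consistent = [th for th in Theta if y in Y_of[th]]
        pi[y] = min(U, key=lambda u: max(L[(u, th)] for th in consistent))
    return pi

# Probabilistic model: prior P(theta) and sensing model P(y | theta).
prior = {"t1": 0.7, "t2": 0.3}
sensing = {"t1": {"y1": 0.9, "y2": 0.1},
           "t2": {"y1": 0.2, "y2": 0.8}}

def expected_cost_strategy(U, Theta, Y, L, prior, sensing):
    """For each y, pick the action minimizing expected cost weighted by
    P(theta) P(y | theta); normalizing by P(y) would not change the minimizer."""
    pi = {}
    for y in Y:
        pi[y] = min(U, key=lambda u:
                    sum(prior[th] * sensing[th][y] * L[(u, th)] for th in Theta))
    return pi

print(worst_case_strategy(U, Theta, Y, L, Y_of))
print(expected_cost_strategy(U, Theta, Y, L, prior, sensing))
```

The decomposition holds because each $\pi(y)$ appears in exactly one term of the worst-case (or expected-cost) objective, so the per-observation minimizers jointly minimize the whole objective.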

Steven M LaValle 2020-08-14