## 11.2.2 Nondeterministic Information Spaces

This section defines the I-map $\kappa_{ndet}$ from Figure 11.3, which converts each history I-state into a subset of $X$ that corresponds to all possible current states. Nature is modeled nondeterministically, which means that there is no information about what actions nature will choose, other than that they will be chosen from $\Theta$ and $\Psi$. Assume that the state-action sensor mapping from Section 11.1.1 is used. Consider what inferences may be drawn from a history I-state, $\eta_k = (\eta_0, \tilde{u}_{k-1}, \tilde{y}_k)$. Since the model does not involve probabilities, let $\eta_0$ represent a set $X_1 \subseteq X$. Let $X_k(\eta_k)$ be the minimal subset of $X$ in which $x_k$ is known to lie given $\eta_k$. This subset is referred to as a nondeterministic I-state. To remind you that $X_k(\eta_k)$ is a subset of $X$, it will now be denoted as $X_k(\eta_k) \in \operatorname{pow}(X)$. It is important that $X_k(\eta_k)$ be as small as possible while consistent with $\eta_k$.

Recall from (11.6) that for every observation $y_k$, a set $H(y_k) \subseteq X$ of possible values for $x_k$ can be inferred. This could serve as a crude estimate of the nondeterministic I-state. It is certainly known that $X_k(\eta_k) \subseteq H(y_k)$; otherwise, $x_k$ would not be consistent with the current sensor observation. If we carefully progress from the initial conditions while applying constraints due to the state transition equation, the appropriate subset of $H(y_k)$ will be obtained.

From the state transition function $f$, define a set-valued function $F : X \times U \to \operatorname{pow}(X)$ that yields a subset of $X$ for every $x \in X$ and $u \in U$ as

$$F(x,u) = \{ x' \in X \mid \exists \theta \in \Theta(x,u) \text{ for which } x' = f(x,u,\theta) \} \qquad (11.28)$$

Note that both $H$ and $F$ are set-valued functions that eliminate the direct appearance of nature actions. The effect of nature is taken into account in the set that is obtained when these functions are applied. This will be very convenient for computing the nondeterministic I-state.
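As a concrete illustration, $F$ in (11.28) can be computed directly for a finite model. The sketch below is illustrative only: the parameter names `f` and `Theta` are assumptions standing in for the model's transition function and nature-action sets, and the sample model mirrors the mod-3 transitions of the three-state example later in this section.

```python
# Minimal sketch: computing F(x, u) from (11.28) for a finite model.
# `f` is the state transition function and `Theta(x, u)` is nature's
# action set; both names are illustrative assumptions.

def F(x, u, f, Theta):
    """Set of all states reachable from x under u, over every
    nature action theta in Theta(x, u)."""
    return {f(x, u, theta) for theta in Theta(x, u)}


# Sample model (matches the three-state example later in the section):
# f(x, u, theta) = (x + u + theta) mod 3 with Theta(x, u) = {0, 1}.
f = lambda x, u, theta: (x + u + theta) % 3
Theta = lambda x, u: {0, 1}

print(F(0, 1, f, Theta))  # nature can steer the state to 1 or 2
```

Because nature's action is folded into the set comprehension, callers never need to branch on $\theta$ explicitly, which is exactly the convenience noted above.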

An inductive process will now be described that results in computing the nondeterministic I-state, $X_k(\eta_k)$, for any stage $k$. The base case, $k = 1$, of the induction proceeds as

$$X_1(\eta_1) = X_1(X_1, y_1) = X_1 \cap H(y_1) \qquad (11.29)$$

The first part of the equation replaces $\eta_1$ with $(X_1, y_1)$, which is a longer way to write the history I-state. There are not yet any actions in the history. The second part applies set intersection to make consistent the two pieces of information: 1) the initial state $x_1$ lies in $X_1$, which is the initial condition, and 2) the states in $H(y_1)$ are possible given the observation $y_1$.

Now assume inductively that $X_k(\eta_k)$ has been computed and the task is to compute $X_{k+1}(\eta_{k+1})$. From (11.15), $\eta_{k+1} = (\eta_k, u_k, y_{k+1})$. Thus, the only new pieces of information are that $u_k$ was applied and $y_{k+1}$ was observed. These will be considered one at a time.

Consider computing $X_{k+1}(\eta_k, u_k)$. If $x_k$ were known, then after applying $u_k$ the state could lie anywhere within $F(x_k, u_k)$, using (11.28). Although $x_k$ is actually not known, it is at least known that $x_k \in X_k(\eta_k)$. Therefore,

$$X_{k+1}(\eta_k, u_k) = \bigcup_{x_k \in X_k(\eta_k)} F(x_k, u_k) \qquad (11.30)$$

This can be considered as the set of all states that can be reached by starting from some state in $X_k(\eta_k)$ and applying the action $u_k$ and any nature action $\theta_k \in \Theta(x_k, u_k)$. See Figure 11.5.

The next step is to take into account the observation $y_{k+1}$. This information alone indicates that $x_{k+1}$ lies in $H(y_{k+1})$. Therefore, an intersection is performed to obtain the nondeterministic I-state,

$$X_{k+1}(\eta_{k+1}) = X_{k+1}(\eta_k, u_k, y_{k+1}) = X_{k+1}(\eta_k, u_k) \cap H(y_{k+1}) \qquad (11.31)$$

Thus, it has been shown how to compute $X_{k+1}(\eta_{k+1})$ from $X_k(\eta_k)$. After starting with (11.29), the nondeterministic I-states at any stage can be computed by iterating (11.30) and (11.31) as many times as necessary.
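For a finite model, one full step of this induction, (11.30) followed by (11.31), reduces to a union and an intersection of finite sets. The sketch below assumes `F` and `H` are supplied as callables by the model; the names are illustrative, not from the text.

```python
# Sketch of one update step of the nondeterministic I-state.
# (11.30) forward-projects X_k through F; (11.31) intersects with the
# observation preimage H(y_{k+1}). F and H are model-supplied callables.

def update(X_k, u_k, y_next, F, H):
    """Compute X_{k+1}(eta_{k+1}) from X_k(eta_k), u_k, and y_{k+1}."""
    reachable = set()
    for x in X_k:                  # (11.30): union of F(x, u_k) over X_k
        reachable |= F(x, u_k)
    return reachable & H(y_next)   # (11.31): enforce observation consistency
```

Iterating `update` from the base case (11.29), one stage per call, traces out exactly the sequence of nondeterministic I-states described above.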

Since the nondeterministic I-state is always a subset of $X$, the nondeterministic I-space, ${\cal I}_{ndet} = \operatorname{pow}(X)$, is obtained (shown in Figure 11.3). If $X$ is finite, then ${\cal I}_{ndet}$ is also finite, which was not the case with ${\cal I}_{hist}$ because the histories continued to grow with the number of stages. Thus, if the number of stages is unbounded or large in comparison to the size of $X$, then nondeterministic I-states seem preferable. It is also convenient that $\kappa_{ndet}$ is a sufficient I-map, as defined in Section 11.2.1. This implies that a planning problem can be completely expressed in terms of ${\cal I}_{ndet}$ without maintaining the histories. The goal region, $X_G$, can be expressed directly as a nondeterministic I-state. In this way, the planning task is to terminate in a nondeterministic I-state, $X_k(\eta_k)$, for which $X_k(\eta_k) \subseteq X_G$.

The sufficiency of $\kappa_{ndet}$ is obtained because (11.30) and (11.31) show that $X_{k+1}(\eta_{k+1})$ can be computed from $X_k(\eta_k)$, $u_k$, and $y_{k+1}$. This implies that a derived information transition equation can be formed. The nondeterministic I-space can also be treated as "just another state space." Although many history I-states may map to the same nondeterministic I-state, it has been assumed for decision-making purposes that the particular history is irrelevant, once $X_k(\eta_k)$ is given.

The following example is not very interesting in itself, but it is simple enough to illustrate the concepts.

Example 11.13 (Three-State Example)   Let $X = \{0, 1, 2\}$, $U = \{-1, 0, 1\}$, and $\Theta(x, u) = \{0, 1\}$ for all $x \in X$ and $u \in U$. The state transitions are given by $f(x, u, \theta) = (x + u + \theta) \bmod 3$. Regarding sensing, $Y = \{0, 1, 2, 3, 4\}$ and $\Psi(x) = \{0, 1, 2\}$ for all $x \in X$. The sensor mapping is $y = h(x, \psi) = x + \psi$.

The history I-space appears very cumbersome for this example, which only involves three states. The nondeterministic I-space for this example is

$${\cal I}_{ndet} = \big\{ \emptyset, \{0\}, \{1\}, \{2\}, \{0,1\}, \{1,2\}, \{0,2\}, \{0,1,2\} \big\} \qquad (11.32)$$

which is the power set of $X = \{0, 1, 2\}$. Note, however, that the empty set, $\emptyset$, can usually be deleted from ${\cal I}_{ndet}$.$^{11.3}$ Suppose that the initial condition is $X_1 = \{0, 2\}$ and that the initial state is $x_1 = 0$. The initial state is unknown to the decision maker, but it is needed to ensure that valid observations are made in the example.
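The eight elements of (11.32) can be enumerated mechanically; a small sketch using only the standard library is:

```python
from itertools import chain, combinations

# Enumerate pow(X) for X = {0, 1, 2}; this reproduces the eight
# nondeterministic I-states listed in (11.32).
X = (0, 1, 2)
I_ndet = [set(c) for c in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

print(len(I_ndet))  # 8 subsets, including the (usually removable) empty set
```

This also makes the size comparison above concrete: $|\operatorname{pow}(X)| = 2^{|X|}$ is fixed, whereas the number of history I-states grows with the number of stages.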

Now consider the execution over a number of stages. Suppose that the first observation is $y_1 = 2$. Based on the sensor mapping, $H(y_1) = H(2) = \{0, 1, 2\}$, which is not very helpful because $H(2) = X$. Applying (11.29) yields $X_1(\eta_1) = \{0, 2\} \cap \{0, 1, 2\} = \{0, 2\}$. Now suppose that the decision maker applies the action $u_1 = 1$ and nature applies $\theta_1 = 1$. Using $f$, this yields $x_2 = 2$. The decision maker does not know $\theta_1$ and must therefore take into account any nature action that could have been applied. It uses (11.30) to infer that

$$X_2(\eta_1, u_1) = F(0, 1) \cup F(2, 1) = \{1, 2\} \cup \{0, 1\} = \{0, 1, 2\} \qquad (11.33)$$

Now suppose that $y_2 = 3$. From the sensor mapping, $H(3) = \{1, 2\}$. Applying (11.31) yields

$$X_2(\eta_2) = X_2(\eta_1, u_1) \cap H(y_2) = \{0, 1, 2\} \cap \{1, 2\} = \{1, 2\} \qquad (11.34)$$

This process may be repeated for as many stages as desired. A path is generated through ${\cal I}_{ndet}$ by visiting a sequence of nondeterministic I-states. If the observation $y_k = 4$ is ever received, the state, $x_k$, becomes immediately known because $H(4) = \{2\}$.
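The stage-by-stage computation in this example can be replayed end to end. The sketch below restates the example's model definitions and checks the sets obtained in (11.29), (11.33), and (11.34):

```python
# Replaying Example 11.13: transition f, nature set Theta = {0, 1},
# and sensor y = x + psi with psi in {0, 1, 2}.

X = {0, 1, 2}

def f(x, u, theta):
    return (x + u + theta) % 3                     # state transition

def F(x, u):
    return {f(x, u, theta) for theta in {0, 1}}    # (11.28)

def H(y):
    return {x for x in X if y - x in {0, 1, 2}}    # preimage of y = x + psi

X1 = {0, 2} & H(2)          # (11.29): initial condition meets H(y_1)
proj = F(0, 1) | F(2, 1)    # (11.30): forward projection under u_1 = 1
X2 = proj & H(3)            # (11.31): intersect with H(y_2)

print(X1, proj, X2)         # {0, 2} {0, 1, 2} {1, 2}
print(H(4))                 # observing y = 4 pins the state down to {2}
```

Each printed set matches the corresponding equation in the text, and $H(4) = \{2\}$ confirms the closing remark that observing $y_k = 4$ reveals the state exactly.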

Steven M LaValle 2012-04-20