It will become important throughout this chapter and Chapter 12 to view the I-space as an ordinary state space. It only seems special because it is derived from another state space, but once this is forgotten, it exhibits many properties of an ordinary state space in planning. One nice feature is that the state in this special space is always known. Thus, by converting from an original state space to its I-space, we also convert from having imperfect state information to always knowing the state, albeit in a larger state space.
One important consequence of this interpretation is that the state transition equation can be lifted into the I-space to obtain an information transition function, $f_{\cal I}$. Suppose that there are no sensors, and therefore no observations. In this case, future I-states are predictable, which leads to

$\eta_{k+1} = f_{\cal I}(\eta_k, u_k)$.
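To make the predictable, sensorless case concrete, here is a minimal Python sketch (not from the text) of a history I-state and its transition. The class name HistoryIState and the field names are illustrative choices, and the initial condition $\eta_0$ is taken, for the example, to be a set of possible initial states.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class HistoryIState:
    """Illustrative history I-state: the initial condition plus the action
    and observation histories accumulated so far."""
    eta0: Any                           # initial condition, e.g., a set of possible states
    actions: Tuple[Any, ...] = ()       # u_1, ..., u_{k-1}
    observations: Tuple[Any, ...] = ()  # y_1, ..., y_k (empty in the sensorless case)

def f_I_sensorless(eta: HistoryIState, u: Any) -> HistoryIState:
    """Sensorless information transition: eta_{k+1} depends only on eta_k
    and u_k, so the next I-state is known before u_k is even applied."""
    return HistoryIState(eta.eta0, eta.actions + (u,), eta.observations)

eta1 = HistoryIState(eta0=frozenset({"x_a", "x_b"}))  # hypothetical initial condition
eta2 = f_I_sensorless(eta1, "u_forward")              # predictable in advance
```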
Now suppose that there are observations, which are generally unpredictable. In Section 10.1, the nature action $\theta_k$ was used to model the unpredictability. In terms of the information transition equation, $y_{k+1}$ serves the same purpose. When the decision is made to apply $u_k$, the observation $y_{k+1}$ is not yet known (just as $\theta_k$ is unknown in Section 10.1). In a sequential game against nature with perfect state information, $x_{k+1}$ is directly observed at the next stage. For the information transition equation, $y_{k+1}$ is instead observed, and $\eta_{k+1}$ can be determined. Using the history I-state representation, (11.14), simply concatenate $u_k$ and $y_{k+1}$ onto the histories in $\eta_k$ to obtain $\eta_{k+1}$. The information transition equation is expressed as

$\eta_{k+1} = f_{\cal I}(\eta_k, u_k, y_{k+1})$.
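Continuing the same illustrative sketch, the transition with observations concatenates both $u_k$ and the received $y_{k+1}$ onto the histories. Since $y_{k+1}$ arrives only after $u_k$ is applied, the caller cannot compute $\eta_{k+1}$ in advance; the function name f_I below stands in for the book's $f_{\cal I}$.

```python
def f_I(eta: HistoryIState, u: Any, y_next: Any) -> HistoryIState:
    """Information transition with observations:
    eta_{k+1} = f_I(eta_k, u_k, y_{k+1}).  Nature's unpredictability
    enters implicitly through the observation y_{k+1}."""
    return HistoryIState(eta.eta0,
                         eta.actions + (u,),
                         eta.observations + (y_next,))

# u_k is chosen first; only once y_{k+1} is observed can eta_{k+1} be formed.
eta3 = f_I(eta2, "u_turn", "y_bump_detected")
```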
The costs in this new state space can be derived from the original cost functional, but a maximization or expectation must be taken over the states that are consistent with the current I-state. This will be covered in Section 12.1.
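As a rough preview of that derivation, the following sketch shows one way a stage cost $l(x_k, u_k)$ could be lifted to the I-space; the helpers states_consistent_with and prob are assumptions standing in for the machinery developed in Chapter 12, not functions defined in the text.

```python
def lifted_cost_worst_case(eta, u, l, states_consistent_with):
    """Worst-case lift: maximize the original stage cost l(x, u) over
    every state that the I-state eta does not rule out."""
    return max(l(x, u) for x in states_consistent_with(eta))

def lifted_cost_expected(eta, u, l, states_consistent_with, prob):
    """Probabilistic lift: take the expectation of the stage cost,
    assuming prob(x, eta) returns P(x | eta)."""
    return sum(prob(x, eta) * l(x, u) for x in states_consistent_with(eta))
```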