11.1.1 Sensors

As the name suggests, sensors are designed to sense the state. Throughout this section it is assumed that the state space, $ X$, is finite or countably infinite, as in Formulations 2.1 and 2.3. A sensor is defined in terms of two components: 1) an observation space, which is the set of possible readings for the sensor, and 2) a sensor mapping, which characterizes the readings that can be expected if the current state or other information is given. Be aware that in the planning model, the state is not really given; it is only assumed to be given when modeling a sensor. The sensing model given here generalizes the one given in Section 9.2.3. In that case, the sensor provided information regarding $ \theta $ instead of $ x$ because state spaces were not needed in Chapter 9.

Let $ Y$ denote an observation space, which is a finite or countably infinite set. Let $ h$ denote the sensor mapping. Three different kinds of sensor mappings will be considered, each of which is more complicated and general than the previous one:

  1. State sensor mapping: In this case, $ h : X \rightarrow Y$, which means that given the state, the observation is completely determined.
  2. State-nature sensor mapping: In this case, a finite set, $ \Psi(x)$, of nature sensing actions is defined for each $ x \in X$. Each nature sensing action, $ \psi \in \Psi(x)$, interferes with the sensor observation. Therefore, the state-nature mapping, $ h$, produces an observation, $ y = h(x,\psi) \in Y$, for every $ x \in X$ and $ \psi \in \Psi(x)$. The particular $ \psi$ chosen by nature is assumed to be unknown during planning and execution. However, it is specified as part of the sensing model.
  3. History-based sensor mapping: In this case, the observation could be based on the current state or any previous states. Furthermore, a nature sensing action could be applied. Suppose that the current stage is $ k$. The set of nature sensing actions is denoted by $ \Psi_k(x)$, and the particular nature sensing action is $ \psi_k \in \Psi_k(x)$. This yields a very general sensor mapping,

    $\displaystyle y_k = h_k(x_1,\ldots,x_k,\psi_k) ,$ (11.1)

in which $ y_k$ is the observation obtained in stage $ k$. Note that the mapping is denoted as $ h_k$ because the domain is different for each $ k$. In general, any of the sensor mappings may be stage-dependent, if desired.
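As a small illustration (not from the text; the function names and particular mappings are invented for this sketch), the three kinds of sensor mappings can be written as Python functions:

```python
from typing import Sequence

def state_sensor(x: int) -> int:
    """State sensor mapping: y = h(x); the state determines the observation."""
    return x % 2  # the odd/even sensor described below

def state_nature_sensor(x: int, psi: int) -> int:
    """State-nature sensor mapping: y = h(x, psi); nature's action psi
    interferes with the reading."""
    return x + psi  # an additive disturbance, as in a later example

def history_sensor(history: Sequence[int], psi: int) -> int:
    """History-based sensor mapping: y_k = h_k(x_1, ..., x_k, psi_k); the
    observation may depend on any previous state."""
    return history[-2] + psi  # a disturbed one-stage-delayed observation

print(state_sensor(7))               # 1, since 7 is odd
print(state_nature_sensor(5, -1))    # 4
print(history_sensor([3, 8, 2], 0))  # 8, the previous state
```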

Many examples of sensors will now be given. These are provided to illustrate the definitions and to provide building blocks that will be used in later examples of I-spaces. Examples 11.1 to 11.6 all involve state sensor mappings.

Example 11.1 (Odd/Even Sensor)   Let $ X = {\mathbb{Z}}$, the set of integers, and let $ Y = \{0,1\}$. The sensor mapping is

$\displaystyle y = h(x) = \left\{ \begin{array}{ll} 0 & \mbox{ if $x$ is even} \\ 1 & \mbox{ if $x$ is odd.} \end{array}\right.$ (11.2)

The limitation of this sensor is that it only tells whether $ x \in X$ is odd or even. When combined with other information, this might be enough to infer the state, but in general it provides incomplete information. $ \blacksquare$

Example 11.2 (Mod Sensor)   Example 11.1 can be easily generalized to yield the remainder when $ x$ is divided by $ k$ for some fixed integer $ k$. Let $ X = {\mathbb{Z}}$, and let $ Y = \{0,1,\ldots,k-1\}$. The sensor mapping is

$\displaystyle y = h(x) = x \operatorname{mod}k .$ (11.3)

$ \blacksquare$

Example 11.3 (Sign Sensor)   Let $ X = {\mathbb{Z}}$, and let $ Y = \{-1,0,1\}$. The sensor mapping is

$\displaystyle y = h(x) = {\rm sgn} x .$ (11.4)

This sensor provides very limited information because it only indicates on which side of the boundary $ x = 0$ the state may lie. It can, however, precisely determine whether $ x = 0$. $ \blacksquare$

Example 11.4 (Selective Sensor)   Let $ X = {\mathbb{Z}}\times {\mathbb{Z}}$, and let $ (i,j) \in X$ denote a state in which $ i,j \in {\mathbb{Z}}$. Suppose that only the first component of $ (i,j)$ can be observed. This yields the sensor mapping

$\displaystyle y = h(i,j) = i .$ (11.5)

An obvious generalization can be made for any state space that is formed from Cartesian products. The sensor may reveal the values of one or more components, and the rest remain hidden. $ \blacksquare$

Example 11.5 (Bijective Sensor)   Let $ X$ be any state space, and let $ Y = X$. Let the sensor mapping be any bijective function $ h : X \rightarrow Y$. This sensor provides information that is equivalent to knowing the state. Since $ h$ is bijective, it can be inverted to obtain $ h^{-1} : Y \rightarrow X$. For any $ y \in Y$, the state can be determined as $ x = h^{-1}(y)$.

A special case of the bijective sensor is the identity sensor, for which $ h$ is the identity function. This was essentially assumed to exist for all planning problems covered before this chapter because it immediately yields the state. However, any bijective sensor could serve the same purpose. $ \blacksquare$
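On a finite window of states, the inversion can be made concrete (a hypothetical sketch; the shift-by-3 map is just an arbitrary bijection chosen for illustration):

```python
# h shifts each state by 3; on this window it is a bijection, so the
# observation y determines the state via x = h^{-1}(y).
X = range(-5, 6)
h = {x: x + 3 for x in X}                # h : X -> Y
h_inv = {y: x for x, y in h.items()}     # h^{-1} : Y -> X
assert all(h_inv[h[x]] == x for x in X)  # the state is always recovered
print(h_inv[h[2]])  # 2
```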

Example 11.6 (Null Sensor)   Let $ X$ be any state space, and let $ Y = \{0\}$. The null sensor is obtained by defining the sensor mapping as $ h(x) = 0$. The sensor reading remains fixed and hence provides no information regarding the state. $ \blacksquare$

From the examples so far, it is tempting to think about partitioning $ X$ based on sensor observations. Suppose that in general a state sensor mapping, $ h$, is not bijective, and let $ H(y)$ denote the following subset of $ X$:

$\displaystyle H(y) = \{ x \in X \;\vert\; y = h(x) \} ,$ (11.6)

which is the preimage of $ y$. The set of preimages, one for each $ y \in Y$, forms a partition of $ X$. In some sense, this indicates the ``resolution'' of the sensor. A bijective sensor partitions $ X$ into singleton sets because it provides perfect information. At the other extreme, the null sensor partitions $ X$ into a single set, $ X$ itself. The sign sensor appears slightly more useful because it partitions $ X$ into three sets: $ H(1) = \{1,2,\ldots\}$, $ H(-1) = \{\ldots,-2,-1\}$, and $ H(0) = \{0\}$. The preimages of the selective sensor are particularly interesting. For each $ i \in {\mathbb{Z}}$, $ H(i) = \{i\} \times {\mathbb{Z}}$, a copy of $ {\mathbb{Z}}$. The partitions induced by the preimages may remind those with an algebra background of the construction of quotient groups via homomorphisms [769].
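The preimage computation in (11.6) can be checked numerically on a finite window standing in for $ {\mathbb{Z}}$ (an illustrative sketch, not from the text):

```python
def sgn(x: int) -> int:
    return (x > 0) - (x < 0)

X = range(-5, 6)  # finite stand-in for the countably infinite set of integers

def preimage(h, y, X):
    """H(y) = {x in X | y = h(x)}, as in (11.6)."""
    return {x for x in X if h(x) == y}

H = {y: preimage(sgn, y, X) for y in (-1, 0, 1)}
# The preimages are pairwise disjoint and cover X, so they partition X.
assert H[0] == {0}
assert H[-1].isdisjoint(H[1])
assert H[-1] | H[0] | H[1] == set(X)
```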

Next consider some examples that involve a state-nature sensor mapping. There are two different possibilities regarding the model for the nature sensing action:

  1. Nondeterministic: In this case, there is no additional information regarding which $ \psi \in \Psi(x)$ will be chosen.
  2. Probabilistic: A probability distribution is known. In this case, the probability, $ P(\psi\vert x)$, that $ \psi$ will be chosen is known for each $ \psi \in \Psi(x)$.
These two possibilities also appeared in Section 10.1.1, for nature actions that interfere with the state transition equation.

It is sometimes useful to consider the state-nature sensor model as a probability distribution over $ Y$ for a given state. Recall the conversion from $ P(\psi\vert\theta)$ to $ P(y\vert\theta)$ in (9.28). By replacing $ \Theta$ by $ X$, the same idea can be applied here. Assume that if the domain of $ h$ is restricted to some $ x \in X$, it forms an injective (one-to-one) mapping from $ \Psi(x)$ to $ Y$. In this case,

$\displaystyle P(y\vert x) = \left\{ \begin{array}{ll} P(\psi\vert x) & \mbox{ if $\exists \psi \in \Psi(x)$ such that $y=h(x,\psi)$. } \\ 0 & \mbox{ if no such $\psi$ exists. } \\ \end{array}\right.$ (11.7)

If the injective assumption is lifted, then $ P(\psi\vert x)$ is replaced by a sum over all $ \psi$ for which $ y=h(x,\psi)$.
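This conversion, including the summation that handles the non-injective case, can be sketched as follows (a hypothetical helper; the uniform nature distribution is an assumption chosen for illustration):

```python
def observation_distribution(x, Psi, P_psi, h, Y):
    """Build P(y|x) from P(psi|x) as in (11.7); accumulating over all psi
    with y = h(x, psi) also covers the non-injective case."""
    P = {y: 0.0 for y in Y}
    for psi in Psi(x):
        P[h(x, psi)] += P_psi(psi, x)
    return P

# Example 11.7's disturbance sensor with nature uniform over {-1, 0, 1}.
Psi = lambda x: (-1, 0, 1)
P_uniform = lambda psi, x: 1.0 / 3.0
h = lambda x, psi: x + psi
P = observation_distribution(4, Psi, P_uniform, h, Y=range(10))
# Each of y = 3, 4, 5 receives probability 1/3; every other y gets 0.
assert abs(P[4] - 1 / 3) < 1e-12 and P[9] == 0.0
```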

Example 11.7 (Sensor Disturbance)   Let $ X = {\mathbb{Z}}$, $ Y = {\mathbb{Z}}$, and $ \Psi = \{-1,0,1\}$. The idea is to construct a sensor that would be the identity sensor if it were not for the interference of nature. The sensor mapping is

$\displaystyle y = h(x,\psi) = x + \psi .$ (11.8)

It is always known that $ \vert x - y\vert \leq 1$. Therefore, if $ y$ is received as a sensor reading, one of the following must be true: $ x = y-1$, $ x = y$, or $ x = y+1$. $ \blacksquare$

Example 11.8 (Disturbed Sign Sensor)   Let $ X = {\mathbb{Z}}$, $ Y = \{-1,0,1\}$, and $ \Psi = \{-1,0,1\}$. Let the sensor mapping be

$\displaystyle y = h(x,\psi) = {\rm sgn}(x + \psi) .$ (11.9)

In this case, if $ y=0$, it is no longer known for certain whether $ x = 0$. It is possible that $ x=-1$ or $ x=1$. If $ x = 0$, then it is possible for the sensor to read $ -1$, 0, or $ 1$. $ \blacksquare$

Example 11.9 (Disturbed Odd/Even Sensor)   It is not hard to construct examples for which some mild interference from nature destroys all of the information. Let $ X = {\mathbb{Z}}$, $ Y = \{0,1\}$, and $ \Psi = \{0,1\}$. Let the sensor mapping be

$\displaystyle y = h(x,\psi) = \left\{ \begin{array}{ll} 0 & \mbox{ if $x+\psi$ is even } \\ 1 & \mbox{ if $x+\psi$ is odd. } \\ \end{array}\right.$ (11.10)

Under the nondeterministic model for the nature sensing action, the sensor provides no useful information regarding the state. Regardless of the observation, it is never known whether $ x$ is even or odd. Under a probabilistic model, however, this sensor may provide some useful information. $ \blacksquare$
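The contrast between the two nature models in Example 11.9 can be made concrete (an illustrative sketch; the bias parameter p0 is an assumption, not part of the text):

```python
def P_y_given_x(x, p0):
    """P(y|x) for y = (x + psi) % 2 with Psi = {0, 1}, when nature picks
    psi = 0 with probability p0 (a hypothetical probabilistic model)."""
    parity = x % 2
    return {parity: p0, 1 - parity: 1 - p0}

# Uniform nature: the reading is a fair coin for every x -- no information.
assert P_y_given_x(4, 0.5) == P_y_given_x(7, 0.5) == {0: 0.5, 1: 0.5}
# Biased nature (p0 = 0.9): y usually matches the parity of x -- informative.
assert P_y_given_x(4, 0.9)[0] == 0.9
```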

It is once again informative to consider preimages. For a state-nature sensor mapping, the preimage is

$\displaystyle H(y) = \{ x \in X \;\vert\; \exists \psi \in \Psi(x) \mbox{ for which } y = h(x,\psi) \} .$ (11.11)

In comparison to state sensor mappings, the preimage sets are larger for state-nature sensor mappings. Also, they do not generally form a partition of $ X$. For example, the preimages of Example 11.8 are $ H(1) = \{0,1,\ldots\}$, $ H(0) = \{-1,0,1\}$, and $ H(-1) = \{\ldots,-2,-1,0\}$. This is not a partition because every preimage contains 0. If desired, $ H(y)$ can be directly defined for each $ y \in Y$, instead of explicitly defining nature sensing actions.
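The non-partition behavior can be verified directly for the disturbed sign sensor (a numerical sketch on a finite window, not from the text):

```python
def sgn(x):
    return (x > 0) - (x < 0)

X = range(-5, 6)
Psi = (-1, 0, 1)
h = lambda x, psi: sgn(x + psi)  # disturbed sign sensor, Example 11.8

def preimage(y):
    """H(y) as in (11.11): states from which some psi can yield y."""
    return {x for x in X if any(h(x, psi) == y for psi in Psi)}

H = {y: preimage(y) for y in (-1, 0, 1)}
assert all(0 in H[y] for y in H)  # every preimage contains 0: not a partition
assert H[0] == {-1, 0, 1}
```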

Finally, one example of a history-based sensor mapping is given.

Example 11.10 (Delayed-Observation Sensor)   Let $ X = Y = {\mathbb{Z}}$. A delayed-observation sensor can be defined for some fixed positive integer $ i$ as $ y_k = x_{k-i}$. It indicates what the state was $ i$ stages ago. In this case, it gives a perfect measurement of the old state value. Many other variants are possible. For example, it might only give the sign of the state from $ i$ stages ago. $ \blacksquare$
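A delayed-observation sensor can be implemented with a small buffer (a hypothetical sketch; before $ i+1$ stages have elapsed no reading is defined yet, so None is returned here by convention):

```python
from collections import deque

class DelayedObservationSensor:
    """Reports the state from i stages ago: y_k = x_{k-i}."""
    def __init__(self, i: int):
        self.buffer = deque(maxlen=i + 1)

    def observe(self, x: int):
        self.buffer.append(x)
        if len(self.buffer) == self.buffer.maxlen:
            return self.buffer[0]  # x_{k-i}
        return None  # fewer than i + 1 states seen so far

sensor = DelayedObservationSensor(i=2)
print([sensor.observe(x) for x in [10, 20, 30, 40]])  # [None, None, 10, 20]
```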

Steven M LaValle 2020-08-14