8.5.2.1 Using interpolation for continuous state spaces

Consider a problem formulation that is identical to Formulation 8.1 except that $ X$ is allowed to be continuous. Assume that $ X$ is bounded, and assume for now that the action space, $ U(x)$, is finite for all $ x \in X$. Backward value iteration can be applied. The dynamic programming arguments and derivation are identical to those in Section 2.3. The resulting recurrence is identical to (2.11) and is repeated here for convenience:

$\displaystyle G^*_k({x_k}) = \min_{{u_k}\in U(x_k)} \Big\{ l({x_k},{u_k}) + G^*_{k+1}({x_{k+1}}) \Big\} .$ (8.56)

The only difficulty is that $ G^*_k(x_k)$ cannot be stored for every $ x_k
\in X$ because $ X$ is continuous. There are two general approaches. One is to approximate $ G^*_k$ using a parametric family of surfaces, such as polynomials or nonlinear basis functions derived from neural networks [97]. The other is to store $ G^*_k$ only over a finite set of sample points and use interpolation to obtain its value at all other points [582,583].

Figure 8.19: Even though $ x_k$ is a sample point, the next state, $ x_{k+1}$, may land between sample points. For each $ u_k \in U(x_k)$, interpolation may be needed for the resulting next state, $ x_{k+1} = f(x_k,u_k)$.

Suppose that a finite set $ S \subset X$ of samples is used to represent cost-to-go functions over $ X$. The evaluation of (8.56) using interpolation is depicted in Figure 8.19. In general, the samples should be chosen to reduce the dispersion (defined in Section 5.2.3) as much as possible. This prevents attempts to approximate the cost-to-go function over large regions that contain no sample points. The rate of convergence ultimately depends on the dispersion [92] (in combination with Lipschitz conditions on the state transition equation and the cost functional). To simplify notation and the presentation, assume that $ S$ is a grid of regularly spaced points in $ {\mathbb{R}}^n$.
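To make the procedure concrete, the following is a minimal sketch in Python of backward value iteration over a one-dimensional sample grid. The state transition function f, stage cost l, action set U, and final cost used here are hypothetical stand-ins chosen only for illustration; the one-dimensional linear interpolation developed below is delegated to numpy.interp for brevity:

\begin{verbatim}
import numpy as np

def backward_value_iteration(S, U, f, l, G_final, num_stages):
    # G holds the cost-to-go at the samples; initialize with the final cost.
    G = np.array([G_final(s) for s in S])
    for _ in range(num_stages):
        G_next = G
        # Evaluate (8.56) at each sample; f(s, u) generally lands between
        # samples, so G^*_{k+1} is obtained by linear interpolation.
        G = np.array([min(l(s, u) + np.interp(f(s, u), S, G_next)
                          for u in U) for s in S])
    return G

# Hypothetical example on X = [0, 1] with r = 100:
S = np.linspace(0.0, 1.0, 101)
U = [-0.05, 0.0, 0.05]
f = lambda x, u: min(max(x + u, 0.0), 1.0)  # clamp to keep x_{k+1} in X
l = lambda x, u: abs(u)                     # stage cost penalizes effort
G = backward_value_iteration(S, U, f, l, lambda x: abs(x - 0.5), 20)
\end{verbatim}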

First, consider the case in which $ X = [0,1] \subset {\mathbb{R}}$. Let $ S = \{s_0,s_1,\ldots,s_r\}$, in which $ s_i = i/r$. For example, if $ r = 3$, then $ S = \{0,1/3,2/3,1\}$. Note that this always yields points on the boundary of $ X$, which ensures that for any point in $ (0,1)$ there are samples both above and below it. For a given $ x \in X$, let $ i$ be the largest integer less than $ r$ such that $ s_i \leq x$; this implies that $ s_{i+1} \geq x$. The samples $ s_i$ and $ s_{i+1}$ are called interpolation neighbors of $ x$.

The value of $ G^*_{k+1}$ in (8.56) at any $ x \in [0,1]$ can be obtained via linear interpolation as

$\displaystyle G^*_{k+1}(x) \approx \alpha G^*_{k+1}(s_i) + (1- \alpha) G^*_{k+1}(s_{i+1}) ,$ (8.57)

in which the coefficient $ \alpha \in [0,1]$ is computed as

$\displaystyle \alpha = 1 - r \, (x-s_i) = {s_{i+1} - x \over s_{i+1} - s_i} .$ (8.58)

If $ x = s_i$, then $ \alpha = 1$, and (8.57) reduces to $ G^*_{k+1}(s_i)$, as expected. If $ x = s_{i+1}$, then $ \alpha =
0$, and (8.57) reduces to $ G^*_{k+1}(s_{i+1})$. At all points in between, (8.57) blends the cost-to-go values at $ s_i$ and $ s_{i+1}$ using $ \alpha$ to provide the appropriate weights.
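A direct transcription of (8.57) and (8.58) into Python might look as follows, assuming $ X = [0,1]$, samples $ s_i = i/r$, and an array G_next holding $ G^*_{k+1}(s_i)$ at each sample:

\begin{verbatim}
def interp_1d(x, G_next, r):
    # Largest i with s_i <= x, capped at r - 1 so that s_{i+1} exists.
    i = min(int(x * r), r - 1)
    alpha = 1.0 - r * (x - i / r)  # (8.58): alpha = 1 at s_i, 0 at s_{i+1}
    return alpha * G_next[i] + (1.0 - alpha) * G_next[i + 1]  # (8.57)
\end{verbatim}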

The interpolation idea can be naturally extended to multiple dimensions. Let $ X$ be a bounded subset of $ {\mathbb{R}}^n$. Let $ S$ represent an $ n$-dimensional grid of points in $ {\mathbb{R}}^n$. Each sample in $ S$ is denoted by $ s(i_1,i_2,\ldots,i_n)$. For some $ x \in X$, there are $ 2^n$ interpolation neighbors that ``surround'' it. These are the corners of an $ n$-dimensional cube that contains $ x$. Let $ x = (x_1,\ldots,x_n)$. Let $ i_j$ denote the largest integer for which the $ j$th coordinate of $ s(i_1,i_2,\ldots,i_n)$ is no greater than $ x_j$. The $ 2^n$ samples are all those for which either $ i_j$ or $ i_j + 1$ appears in the expression $ s(\cdot,\cdot,\ldots,\cdot)$, for each $ j \in \{1,\ldots,n\}$. This requires that samples exist in $ S$ for all of these cases. Note that $ X$ may be a complicated subset of $ {\mathbb{R}}^n$, provided that for any $ x \in X$, all of the required $ 2^n$ interpolation neighbors are in $ S$. Using the $ 2^n$ interpolation neighbors, the value of $ G^*_{k+1}$ in (8.56) at any $ x \in X$ can be obtained via multi-linear interpolation. In the case of $ n=2$, this is expressed as

\begin{displaymath}\begin{split}G^*_{k+1}(x) \approx \;\;\; & \alpha_1 \alpha_2 \; G^*_{k+1}(s(i_1,i_2)) + \alpha_1 (1-\alpha_2) \; G^*_{k+1}(s(i_1,i_2+1)) \\ + \; & (1-\alpha_1) \alpha_2 \; G^*_{k+1}(s(i_1+1,i_2)) + (1-\alpha_1) (1-\alpha_2) \; G^*_{k+1}(s(i_1+1,i_2+1)), \end{split}\end{displaymath} (8.59)

in which $ \alpha_1$ and $ \alpha_2$ are defined similarly to $ \alpha$ in (8.58) but are based on the distances along the $ x_1$ and $ x_2$ directions, respectively. The expressions for multi-linear interpolation in higher dimensions are similar but more cumbersome to write out. Higher order interpolation, such as quadratic interpolation, may alternatively be used [583].
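The general pattern behind (8.59) blends all $ 2^n$ interpolation neighbors, each weighted by a product of $ \alpha_j$ or $ 1-\alpha_j$ factors. A sketch in Python, assuming a grid over $ [0,1]^n$ with $ r+1$ samples per axis and G_next stored as an $ n$-dimensional array:

\begin{verbatim}
import itertools
import numpy as np

def multilinear_interp(x, G_next, r):
    x = np.asarray(x, dtype=float)
    i = np.minimum((x * r).astype(int), r - 1)  # lower interpolation corner
    alpha = 1.0 - r * (x - i / r)               # per-axis weights, as in (8.58)
    value = 0.0
    for corner in itertools.product((0, 1), repeat=len(x)):
        c = np.array(corner)
        weight = np.prod(np.where(c == 0, alpha, 1.0 - alpha))
        value += weight * G_next[tuple(i + c)]  # one of the 2^n neighbors
    return value
\end{verbatim}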

Figure 8.20: Barycentric subdivision can be used to partition each cube into simplexes, which allows interpolation to be performed in $ O(n \lg n)$ time, instead of $ O(2^n)$.
(The figure shows the subdivision for $ n=2$ and for $ n=3$.)

Unfortunately, the number of interpolation neighbors grows exponentially with the dimension, $ n$. Instead of using all $ 2^n$ interpolation neighbors, one improvement is to decompose the cube defined by the $ 2^n$ samples into simplexes. Each simplex has only $ n+1$ samples as its vertices. Only the vertices of the simplex that contains $ x$ are declared to be the interpolation neighbors of $ x$; this reduces the cost of evaluating $ G^*_{k+1}(x)$ to $ O(n)$ time. The problem, however, is that determining the simplex that contains $ x$ may be a challenging point-location problem (a common problem in computational geometry [264]). If barycentric subdivision is used to decompose the cube using the midpoints of all faces, then the point-location problem can be solved in $ O(n \lg n)$ time [263,607,721], which is an improvement over the $ O(2^n)$ scheme described above. Examples of this decomposition are shown for two and three dimensions in Figure 8.20. This is sometimes called the Coxeter-Freudenthal-Kuhn triangulation. Even though practical performance considerations keep $ n$ small (typically, $ n \leq 6$), substantial savings occur in implementations, even for $ n = 3$.
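A sketch of the simplex-based evaluation in Python, using the Kuhn/Freudenthal construction in which the containing simplex is located by sorting the fractional coordinates of $ x$ (the $ O(n \lg n)$ step); only $ n+1$ cost-to-go values are then blended:

\begin{verbatim}
import numpy as np

def simplex_interp(x, G_next, r):
    x = np.asarray(x, dtype=float)
    n = len(x)
    i = np.minimum((x * r).astype(int), r - 1)  # lower corner of the cube
    t = x * r - i                                # fractional coordinates
    order = np.argsort(-t)                       # the O(n lg n) point location
    ts = t[order]
    # Barycentric weights for the n + 1 simplex vertices.
    w = np.empty(n + 1)
    w[0] = 1.0 - ts[0]
    w[1:n] = ts[:n - 1] - ts[1:]
    w[n] = ts[n - 1]
    # Walk from the lower corner, stepping one sorted axis at a time.
    v = i.copy()
    value = w[0] * G_next[tuple(v)]
    for m in range(n):
        v[order[m]] += 1
        value += w[m + 1] * G_next[tuple(v)]
    return value
\end{verbatim}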

It will be convenient to refer directly to the set of all points in $ X$ for which all required interpolation neighbors exist. For any finite set $ S \subseteq X$ of sample points, let the interpolation region $ R(S)$ be the set of all $ x \in X \setminus S$ for which $ G^*(x)$ can be computed by interpolation. This means that $ x \in R(S)$ if and only if all interpolation neighbors of $ x$ lie in $ S$. Figure 8.21a shows an example. Note that some sample points may not contribute any points to $ R(S)$. If a grid of samples is used to approximate $ G^*$, then the volume of $ X \setminus R(S)$ approaches zero as the sampling resolution increases.
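A membership test for $ R(S)$ reduces to checking that all $ 2^n$ interpolation neighbors are present. A sketch in Python, under the assumption that $ S$ is stored as a set of integer grid-index tuples:

\begin{verbatim}
import itertools

def in_interpolation_region(x, S, r):
    # Lower corner indices of the cube containing x (grid on [0,1]^n).
    i = [min(int(xj * r), r - 1) for xj in x]
    # x is in R(S) iff every corner of that cube is a sample in S.
    return all(tuple(ij + cj for ij, cj in zip(i, corner)) in S
               for corner in itertools.product((0, 1), repeat=len(x)))
\end{verbatim}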

Figure 8.21: (a) An interpolation region, $ R(S)$, is shown for a set of sample points, $ S$. (b) The interpolation region that arises due to obstacles. (c) The interpolation region for goal points must not be empty.


