Approximate solutions

Figure 5.22: For ease of implementation, intermediate vertices can be inserted to avoid checking for closest points along line segments. The trade-off is that the number of vertices is increased dramatically.
\includegraphics[width=3.5truein]{figs/rdt4.eps}

Approximate solutions are much easier to construct; however, a resolution parameter is introduced. Each path segment can be approximated by inserting intermediate vertices along long segments, as shown in Figure 5.22. The intermediate vertices should be added each time a new sample, $ {\alpha }(i)$, is inserted into $ {\cal G}$. A parameter $ \Delta q$ can be defined, and intermediate samples are inserted to ensure that no two consecutive vertices in $ {\cal G}$ are ever further than $ \Delta q$ apart. With intermediate vertices, the interiors of the edges in $ {\cal G}$ are ignored when finding the nearest point in $ S$; the approximate computation of NEAREST simply finds the closest vertex to $ {\alpha }(i)$ in $ {\cal G}$. This approach is by far the simplest to implement, and it fits precisely under the incremental sampling and searching framework from Section 5.4.1.
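A minimal sketch of this insertion step, written in Python, is given below. The graph interface (add_vertex, add_edge) and the representation of configurations as NumPy arrays are assumptions made for illustration, not part of the original text.

\begin{verbatim}
import numpy as np

def insert_with_intermediates(G, q_near, q_new, delta_q):
    # Connect q_near to q_new, inserting intermediate vertices so
    # that no two consecutive vertices are farther apart than delta_q.
    dist = np.linalg.norm(q_new - q_near)
    num_segments = max(1, int(np.ceil(dist / delta_q)))
    prev = q_near
    for i in range(1, num_segments + 1):
        q = q_near + (i / num_segments) * (q_new - q_near)
        G.add_vertex(q)
        G.add_edge(prev, q)
        prev = q
\end{verbatim}

With this in place, NEAREST only needs to scan the vertices of $ {\cal G}$, since every point in the interior of an edge lies within $ \Delta q/2$ of one of that edge's endpoints.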

Figure 5.23: A Kd-tree can be used for efficient nearest-neighbor computations.
\includegraphics[width=5.0truein]{figs/kdtree.eps}

When using intermediate vertices, the trade-offs are clear. The computation time for each evaluation of NEAREST is linear in the number of vertices. Increasing the number of vertices improves the quality of the approximation, but it also dramatically increases the running time. One way to recover some of this cost is to insert the vertices into an efficient data structure for nearest-neighbor searching. One of the most practical and widely used data structures is the Kd-tree [264,365,758]. A depiction is shown in Figure 5.23 for $ 14$ points in $ {\mathbb{R}}^2$. The Kd-tree can be considered a multi-dimensional generalization of a binary search tree.

The Kd-tree is constructed for points, $ P$, in $ {\mathbb{R}}^2$ as follows. Initially, sort the points with respect to the $ x$ coordinate. Take the median point, $ p \in P$, and divide $ P$ into two sets, depending on which side of a vertical line through $ p$ the points fall. For each of the two sides, sort the points by the $ y$ coordinate and find their medians. Points are divided at this level based on whether they are above or below horizontal lines through these medians. At the next level of recursion, vertical lines are used again, followed by horizontal lines, and so on. The same idea can be applied in $ {\mathbb{R}}^n$ by cycling through the $ n$ coordinates, instead of alternating between $ x$ and $ y$, to form the divisions. In [52], the Kd-tree is extended to the topological spaces that arise in motion planning and is shown to yield good performance for RRTs and sampling-based roadmaps; topological identifications must be carefully considered when traversing the tree. A Kd-tree of $ k$ points can be constructed in $ O(n k \lg k)$ time. To find the nearest point in the tree to some given point, the query algorithm descends to a leaf vertex whose associated region contains the query point, computes the distances from the data points in this leaf to the query point, and picks the closest one. Next, it recursively visits only those surrounding leaf regions that could still contain a point closer to the query point than the closest point found so far [47,52]. The nearest point can be found in time logarithmic in $ k$.
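The construction and query can be made concrete with a short Python sketch (an illustrative assumption, not code from the text). It builds the tree by median splits, cycling through the coordinates, and answers a query by descending toward the leaf region containing the query point and backtracking into a far subtree only when it could still hold a closer point. The topological identifications of [52] are not handled here.

\begin{verbatim}
import numpy as np

class KdNode:
    def __init__(self, point, axis, left, right):
        self.point = point   # median point stored at this node
        self.axis = axis     # splitting coordinate at this depth
        self.left = left
        self.right = right

def build_kdtree(points, depth=0):
    # Median split along one coordinate, cycling through the
    # n coordinates with the depth (x, y, x, y, ... in R^2).
    if len(points) == 0:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KdNode(np.asarray(points[mid]), axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    # Descend toward the leaf region containing the query, then
    # backtrack, pruning subtrees that cannot hold a closer point.
    if node is None:
        return best
    if best is None or (np.linalg.norm(query - node.point)
                        < np.linalg.norm(query - best)):
        best = node.point
    diff = query[node.axis] - node.point[node.axis]
    near, far = ((node.left, node.right) if diff < 0
                 else (node.right, node.left))
    best = nearest(near, query, best)
    # The far subtree matters only if the splitting line is closer
    # to the query than the best point found so far.
    if abs(diff) < np.linalg.norm(query - best):
        best = nearest(far, query, best)
    return best
\end{verbatim}

For example, the $ 14$ points of Figure 5.23 could be passed to build_kdtree as a list of arrays, after which nearest(tree, q) returns the data point closest to a query configuration q.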

Unfortunately, these bounds hide a constant that increases exponentially with the dimension, $ n$. In practice, the Kd-tree is useful in motion planning for problems of up to about $ 20$ dimensions; beyond this, the performance usually degrades too much. As an empirical rule, if there are more than $ 2^n$ points, then the Kd-tree should be more efficient than naive nearest-neighbor computations. In general, the trade-offs must be carefully considered in a particular application to determine whether exact solutions, approximate solutions with naive nearest-neighbor computations, or approximate solutions with Kd-trees will be more efficient. There is also the issue of implementation complexity, which has probably caused most people to prefer the approximate solution with naive nearest-neighbor computations.
