Locomotion implementation

Figure 10.5: Locomotion along a horizontal terrain can be modeled as steering a cart through the virtual world. A top-down view is shown. The yellow region is the matched zone (recall Figure 2.15), in which the user's viewpoint is tracked. The values of $ x_t$, $ z_t$, and $ \theta $ are changed by using a controller.

Now consider the middle cases from Figure 10.4 of sitting down and wearing a headset. Locomotion can then be simply achieved by moving the viewpoint with a controller. It is helpful to think of the matched zone as a controllable cart that moves across the ground of the virtual environment; see Figure 10.5. First consider the simple case in which the ground is a horizontal plane. Let $ T_{track}$ denote the homogeneous transform that represents the tracked position and orientation of the cyclopean (center) eye in the physical world. The methods described in Section 9.3 could be used to provide $ T_{track}$ for the current time.

The position and orientation of the cart are determined by a controller. The homogeneous matrix

$\displaystyle T_{cart} = \begin{bmatrix} \cos\theta & 0 & \sin\theta & x_t \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & z_t \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (10.1)

encodes the position $ (x_t,z_t)$ and orientation $ \theta $ of the cart (as a yaw rotation, borrowed from (3.18)). The height is set at $ y_t= 0$ in (10.1) so that it does not change the height determined by tracking or other systems (recall from Section 9.2 that the height might be set artificially if the user is sitting in the real world, but standing in the virtual world).
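As a concrete illustration, here is a minimal sketch of (10.1) in Python (assuming NumPy; the function and argument names are chosen for this example and do not come from any particular engine):

```python
import numpy as np

def cart_transform(x_t, z_t, theta):
    """Homogeneous transform T_cart of (10.1).

    theta is a yaw rotation about the y axis; the translation keeps
    the height component at 0 so that tracking determines the height.
    """
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s, x_t],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, z_t],
                     [0.0, 0.0, 0.0, 1.0]])
```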

The eye transform is obtained by chaining $ T_{track}$ and $ T_{cart}$:

$\displaystyle T_{eye} = (T_{track}T_{cart})^{-1} = T_{cart}^{-1} T_{track}^{-1}$ (10.2)

Recall from Section 3.4 that the eye transform is the inverse of the transform that places the geometric models. Therefore, (10.2) corresponds to changing the perspective due to the cart, followed by the perspective of the tracked head on the cart.
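Continuing the sketch (still assuming NumPy, and that the tracking system supplies $ T_{track}$ as a $ 4 \times 4$ matrix), (10.2) could be computed as:

```python
def eye_transform(T_track, T_cart):
    """Eye transform of (10.2): the inverse of the composite pose.

    np.linalg.inv(T_track @ T_cart) equals
    inv(T_cart) @ inv(T_track), since (AB)^{-1} = B^{-1} A^{-1}.
    """
    return np.linalg.inv(T_track @ T_cart)
```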

To move the viewpoint for a fixed direction $ \theta $, the $ x_t$ and $ z_t$ components are obtained by integrating a differential equation:

\begin{displaymath}\begin{split}{\dot x}_t & = s \cos\theta \\ {\dot z}_t & = s \sin\theta. \end{split}\end{displaymath} (10.3)

Integrating (10.3) over a time step $ \Delta t$, the position update appears as

\begin{displaymath}\begin{split}x_t[k+1] & = x_t[k] + {\dot x}_t \,\Delta t \\ z_t[k+1] & = z_t[k] + {\dot z}_t \,\Delta t. \end{split}\end{displaymath} (10.4)
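Combining (10.3) and (10.4), a single per-frame Euler step might be sketched as follows (here `s` is the forward speed discussed next and `dt` is the time step $ \Delta t$; both names are chosen for this example):

```python
def step_cart(x_t, z_t, theta, s, dt):
    """One Euler integration step of (10.3)-(10.4)."""
    x_dot = s * np.cos(theta)   # (10.3)
    z_dot = s * np.sin(theta)
    return x_t + x_dot * dt, z_t + z_dot * dt   # (10.4)
```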

The variable $ s$ in (10.3) is the forward speed. The average human walking speed is about $ 1.4$ meters per second. The virtual cart can be moved forward by pressing a button or key that sets $ s = 1.4$. Another button can be used to assign $ s = -1.4$, which would result in backward motion. If no key or button is held down, then $ s = 0$, which causes the cart to remain stopped. An alternative control scheme is to use the two buttons to increase or decrease the speed, until some maximum limit is reached. In this case, motion is sustained without holding down a key.
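The two control schemes just described could be sketched as follows (the button handling and numeric limits are hypothetical; a real engine would query its own input API):

```python
WALK_SPEED = 1.4   # average human walking speed, meters per second
SPEED_STEP = 0.2   # hypothetical increment for the sustained-motion scheme
MAX_SPEED  = 3.0   # hypothetical speed limit

def held_button_speed(forward_held, backward_held):
    """Scheme 1: the cart moves only while a button or key is held."""
    if forward_held:
        return WALK_SPEED
    if backward_held:
        return -WALK_SPEED
    return 0.0

def incremental_speed(s, increase_pressed, decrease_pressed):
    """Scheme 2: buttons adjust the speed, which is then sustained."""
    if increase_pressed:
        s = min(s + SPEED_STEP, MAX_SPEED)
    if decrease_pressed:
        s = max(s - SPEED_STEP, -MAX_SPEED)
    return s
```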

Keys could also be used to provide lateral motion, in addition to forward/backward motion. This is called strafing in video games. It should be avoided, if possible, because it causes unnecessary lateral vection.
