Planning Algorithms
Contents
I. Introductory Material
1. Introduction
1.1 Planning to Plan
1.2 Motivational Examples and Applications
1.3 Basic Ingredients of Planning
1.4 Algorithms, Planners, and Plans
1.4.1 Algorithms
1.4.2 Planners
1.4.3 Plans
1.5 Organization of the Book
2. Discrete Planning
2.1 Introduction to Discrete Feasible Planning
2.1.1 Problem Formulation
2.1.2 Examples of Discrete Planning
2.2 Searching for Feasible Plans
2.2.1 General Forward Search
2.2.2 Particular Forward Search Methods
2.2.3 Other General Search Schemes
2.2.4 A Unified View of the Search Methods
2.3 Discrete Optimal Planning
2.3.1 Optimal Fixed-Length Plans
2.3.2 Optimal Plans of Unspecified Lengths
2.3.3 Dijkstra Revisited
2.4 Using Logic to Formulate Discrete Planning
2.4.1 A STRIPS-Like Representation
2.4.2 Converting to the State-Space Representation
2.5 Logic-Based Planning Methods
2.5.1 Searching in a Space of Partial Plans
2.5.2 Building a Planning Graph
2.5.3 Planning as Satisfiability
II. Motion Planning
3. Geometric Representations and Transformations
3.1 Geometric Modeling
3.1.1 Polygonal and Polyhedral Models
3.1.2 Semi-Algebraic Models
3.1.3 Other Models
3.2 Rigid-Body Transformations
3.2.1 General Concepts
3.2.2 2D Transformations
3.2.3 3D Transformations
3.3 Transforming Kinematic Chains of Bodies
3.3.1 A 2D Kinematic Chain
3.3.2 A 3D Kinematic Chain
3.4 Transforming Kinematic Trees
3.5 Nonrigid Transformations
4. The Configuration Space
4.1 Basic Topological Concepts
4.1.1 Topological Spaces
4.1.2 Manifolds
4.1.3 Paths and Connectivity
4.2 Defining the Configuration Space
4.2.1 2D Rigid Bodies: SE(2)
4.2.2 3D Rigid Bodies: SE(3)
4.2.3 Chains and Trees of Bodies
4.3 Configuration Space Obstacles
4.3.1 Definition of the Basic Motion Planning Problem
4.3.2 Explicitly Modeling C_obs: The Translational Case
4.3.3 Explicitly Modeling C_obs: The General Case
4.4 Closed Kinematic Chains
4.4.1 Mathematical Concepts
4.4.2 Kinematic Chains in R^2
4.4.3 Defining the Variety for General Linkages
5. Sampling-Based Motion Planning
5.1 Distance and Volume in C-Space
5.1.1 Metric Spaces
5.1.2 Important Metric Spaces for Motion Planning
5.1.3 Basic Measure Theory Definitions
5.1.4 Using the Correct Measure
5.2 Sampling Theory
5.2.1 Motivation and Basic Concepts
5.2.2 Random Sampling
5.2.3 Low-Dispersion Sampling
5.2.4 Low-Discrepancy Sampling
5.3 Collision Detection
5.3.1 Basic Concepts
5.3.2 Hierarchical Methods
5.3.3 Incremental Methods
5.3.4 Checking a Path Segment
5.4 Incremental Sampling and Searching
5.4.1 The General Framework
5.4.2 Adapting Discrete Search Algorithms
5.4.3 Randomized Potential Fields
5.4.4 Other Methods
5.5 Rapidly Exploring Dense Trees
5.5.1 The Exploration Algorithm
5.5.2 Efficiently Finding Nearest Points
5.5.3 Using the Trees for Planning
5.6 Roadmap Methods for Multiple Queries
5.6.1 The Basic Method
5.6.2 Visibility Roadmap
5.6.3 Heuristics for Improving Roadmaps
6. Combinatorial Motion Planning
6.1 Introduction
6.2 Polygonal Obstacle Regions
6.2.1 Representation
6.2.2 Vertical Cell Decomposition
6.2.3 Maximum-Clearance Roadmaps
6.2.4 Shortest-Path Roadmaps
6.3 Cell Decompositions
6.3.1 General Definitions
6.3.2 2D Decompositions
6.3.3 3D Vertical Decomposition
6.3.4 A Decomposition for a Line-Segment Robot
6.4 Computational Algebraic Geometry
6.4.1 Basic Definitions and Concepts
6.4.2 Cylindrical Algebraic Decomposition
6.4.3 Canny's Roadmap Algorithm
6.5 Complexity of Motion Planning
6.5.1 Lower Bounds
6.5.2 Davenport-Schinzel Sequences
6.5.3 Upper Bounds
7. Extensions of Basic Motion Planning
7.1 Time-Varying Problems
7.1.1 Problem Formulation
7.1.2 Direct Solutions
7.1.3 The Velocity-Tuning Method
7.2 Multiple Robots
7.2.1 Problem Formulation
7.2.2 Decoupled Planning
7.3 Mixing Discrete and Continuous Spaces
7.3.1 Hybrid Systems Framework
7.3.2 Manipulation Planning
7.4 Planning for Closed Kinematic Chains
7.4.1 Adaptation of Motion Planning Algorithms
7.4.2 Active-Passive Link Decompositions
7.5 Folding Problems in Robotics and Biology
7.6 Coverage Planning
7.7 Optimal Motion Planning
7.7.1 Optimality for One Robot
7.7.2 Multiple-Robot Optimality
8. Feedback Motion Planning
8.1 Motivation
8.2 Discrete State Spaces
8.2.1 Defining a Feedback Plan
8.2.2 Feedback Plans as Navigation Functions
8.2.3 Grid-Based Navigation Functions for Motion Planning
8.3 Vector Fields and Integral Curves
8.3.1 Vector Fields on R^n
8.3.2 Smooth Manifolds
8.4 Complete Methods for Continuous Spaces
8.4.1 Feedback Motion Planning Definitions
8.4.2 Vector Fields Over Cell Complexes
8.4.3 Optimal Navigation Functions
8.4.4 A Step Toward Considering Dynamics
8.5 Sampling-Based Methods for Continuous Spaces
8.5.1 Computing a Composition of Funnels
8.5.2 Dynamic Programming with Interpolation
III. Decision-Theoretic Planning
9. Basic Decision Theory
9.1 Preliminary Concepts
9.1.1 Optimization
9.1.2 Probability Theory Review
9.1.3 Randomized Strategies
9.2 A Game Against Nature
9.2.1 Modeling Nature
9.2.2 Nondeterministic vs. Probabilistic Models
9.2.3 Making Use of Observations
9.2.4 Examples of Optimal Decision Making
9.3 Two-Player Zero-Sum Games
9.3.1 Game Formulation
9.3.2 Deterministic Strategies
9.3.3 Randomized Strategies
9.4 Nonzero-Sum Games
9.4.1 Two-Player Games
9.4.2 More Than Two Players
9.5 Decision Theory Under Scrutiny
9.5.1 Utility Theory and Rationality
9.5.2 Concerns Regarding the Probabilistic Model
9.5.3 Concerns Regarding the Nondeterministic Model
9.5.4 Concerns Regarding Game Theory
10. Sequential Decision Theory
10.1 Introducing Sequential Games Against Nature
10.1.1 Model Definition
10.1.2 Forward Projections and Backprojections
10.1.3 A Plan and Its Execution
10.2 Algorithms for Computing Feedback Plans
10.2.1 Value Iteration
10.2.2 Policy Iteration
10.2.3 Graph Search Methods
10.3 Infinite-Horizon Problems
10.3.1 Problem Formulation
10.3.2 Solution Techniques
10.4 Reinforcement Learning
10.4.1 The General Philosophy
10.4.2 Evaluating a Plan via Simulation
10.4.3 Q-Learning: Computing an Optimal Plan
10.5 Sequential Game Theory
10.5.1 Game Trees
10.5.2 Sequential Games on State Spaces
10.5.3 Other Sequential Games
10.6 Continuous State Spaces
10.6.1 Extending the Value-Iteration Method
10.6.2 Motion Planning with Nature
11. Sensors and Information Spaces
11.1 Discrete State Spaces
11.1.1 Sensors
11.1.2 Defining the History Information Space
11.1.3 Defining a Planning Problem
11.2 Derived Information Spaces
11.2.1 Information Mappings
11.2.2 Nondeterministic Information Spaces
11.2.3 Probabilistic Information Spaces
11.2.4 Limited-Memory Information Spaces
11.3 Examples for Discrete State Spaces
11.3.1 Basic Nondeterministic Examples
11.3.2 Nondeterministic Finite Automata
11.3.3 The Probabilistic Case: POMDPs
11.4 Continuous State Spaces
11.4.1 Discrete-Stage Information Spaces
11.4.2 Continuous-Time Information Spaces
11.4.3 Derived Information Spaces
11.5 Examples for Continuous State Spaces
11.5.1 Sensor Models
11.5.2 Simple Projection Examples
11.5.3 Examples with Nature Sensing Actions
11.5.4 Gaining Information Without Sensors
11.6 Computing Probabilistic Information States
11.6.1 Kalman Filtering
11.6.2 Sampling-Based Approaches
11.7 Information Spaces in Game Theory
11.7.1 Information States in Game Trees
11.7.2 Information Spaces for Games on State Spaces
12. Planning Under Sensing Uncertainty
12.1 General Methods
12.1.1 The Information Space as a Big State Space
12.1.2 Algorithms for Nondeterministic I-Spaces
12.1.3 Algorithms for Probabilistic I-Spaces (POMDPs)
12.2 Localization
12.2.1 Discrete Localization
12.2.2 Combinatorial Methods for Continuous Localization
12.2.3 Probabilistic Methods for Localization
12.3 Environment Uncertainty and Mapping
12.3.1 Grid Problems
12.3.2 Stentz's Algorithm (D*)
12.3.3 Planning in Unknown Continuous Environments
12.3.4 Optimal Navigation Without a Geometric Model
12.3.5 Probabilistic Localization and Mapping
12.4 Visibility-Based Pursuit-Evasion
12.4.1 Problem Formulation
12.4.2 A Complete Algorithm
12.4.3 Other Variations
12.5 Manipulation Planning with Sensing Uncertainty
12.5.1 Preimage Planning
12.5.2 Nonprehensile Manipulation
IV. Planning Under Differential Constraints
13. Differential Models
13.1 Velocity Constraints on the Configuration Space
13.1.1 Implicit vs. Parametric Representations
13.1.2 Kinematics for Wheeled Systems
13.1.3 Other Examples of Velocity Constraints
13.2 Phase Space Representation of Dynamical Systems
13.2.1 Reducing Degree by Increasing Dimension
13.2.2 Linear Systems
13.2.3 Nonlinear Systems
13.2.4 Extending Models by Adding Integrators
13.3 Basic Newton-Euler Mechanics
13.3.1 The Newtonian Model
13.3.2 Motions of Particles
13.3.3 Motion of a Rigid Body
13.4 Advanced Mechanics Concepts
13.4.1 Lagrangian Mechanics
13.4.2 General Lagrangian Expressions
13.4.3 Extensions of the Euler-Lagrange Equations
13.4.4 Hamiltonian Mechanics
13.5 Multiple Decision Makers
13.5.1 Differential Decision Making
13.5.2 Differential Game Theory
14. Sampling-Based Planning Under Differential Constraints
14.1 Introduction
14.1.1 Problem Formulation
14.1.2 Different Kinds of Planning Problems
14.1.3 Obstacles in the Phase Space
14.2 Reachability and Completeness
14.2.1 Reachable Sets
14.2.2 The Discrete-Time Model
14.2.3 Motion Primitives
14.3 Sampling-Based Motion Planning Revisited
14.3.1 Basic Components
14.3.2 System Simulator
14.3.3 Local Planning
14.3.4 General Framework Under Differential Constraints
14.4 Incremental Sampling and Searching Methods
14.4.1 Searching on a Lattice
14.4.2 Incorporating State Space Discretization
14.4.3 RDT-Based Methods
14.4.4 Other Methods
14.5 Feedback Planning Under Differential Constraints
14.5.1 Problem Definition
14.5.2 Dynamic Programming with Interpolation
14.6 Decoupled Planning Approaches
14.6.1 Different Ways to Decouple the Big Problem
14.6.2 Plan and Transform
14.6.3 Path-Constrained Trajectory Planning
14.7 Gradient-Based Trajectory Optimization
15. System Theory and Analytical Techniques
15.1 Basic System Properties
15.1.1 Stability
15.1.2 Lyapunov Functions
15.1.3 Controllability
15.2 Continuous-Time Dynamic Programming
15.2.1 Hamilton-Jacobi-Bellman Equation
15.2.2 Linear-Quadratic Problems
15.2.3 Pontryagin's Minimum Principle
15.3 Optimal Paths for Some Wheeled Vehicles
15.3.1 Dubins Curves
15.3.2 Reeds-Shepp Curves
15.3.3 Balkcom-Mason Curves
15.4 Nonholonomic System Theory
15.4.1 Control-Affine Systems
15.4.2 Determining Whether a System Is Nonholonomic
15.4.3 Determining Controllability
15.5 Steering Methods for Nonholonomic Systems
15.5.1 Using the P. Hall Basis
15.5.2 Using Sinusoidal Action Trajectories
15.5.3 Other Steering Methods
Bibliography
Index
Steven M. LaValle, 2020-08-14