Implicit sampling is a recently developed variationally enhanced sampling method that guides its samples to regions of high probability, so that each sample carries information. Implicit sampling may thus improve the performance of algorithms that rely on Monte Carlo (MC) methods. Here the applicability and usefulness of implicit sampling for improving the performance of MC methods in estimation and control are explored, and implicit sampling based algorithms for stochastic optimal control, stochastic localization, and simultaneous localization and mapping (SLAM) are presented. The algorithms are tested in numerical experiments where it is found that fewer samples are required if implicit sampling is used, and that the overall runtimes of the algorithms are reduced.
Introduction
Many problems in physics and engineering require that one produces samples of a random variable with a given probability density function (PDF) p [1,2]. If p is difficult to sample, one can sample an easier-to-sample PDF p0 (the importance function), and attach to each sample Xj ∼ p0, j = 1,…, M, the weight Wj ∝ p(Xj)/p0(Xj) (capital letters denote realizations of random variables, and these realizations are indexed in superscript; subscript indices label discrete time). The weighted samples Xj form an empirical estimate of the PDF p and, under mild assumptions, the empirical estimate converges weakly to p, i.e., expected values of a function h with respect to p can be approximated by weighted averages of the function values h(Xj). The difficulty is to find a “good” importance function p0: if p0 is not a good approximation of p, for example, if p0 is not large when p is large, then many of the samples one proposes using p0 may have a small probability with respect to p and, therefore, carry little information, and make sampling inefficient.
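To make the weighting concrete, the following minimal sketch (Python/NumPy; the target p, the importance function p0, and the function h are hypothetical choices, not taken from this paper) estimates an expected value under p by sampling from p0 and attaching self-normalized weights Wj ∝ p(Xj)/p0(Xj):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target p: Gaussian with mean 3 and standard deviation 0.5
# (it stands in for a hard-to-sample PDF); hypothetical importance function
# p0: Gaussian with mean 0 and standard deviation 2.
def log_p(x):  return -0.5 * ((x - 3.0) / 0.5) ** 2
def log_p0(x): return -0.5 * (x / 2.0) ** 2

M = 10_000
X = rng.normal(0.0, 2.0, size=M)     # samples X^j ~ p0
logw = log_p(X) - log_p0(X)          # unnormalized log-weights W^j ∝ p(X^j)/p0(X^j)
w = np.exp(logw - logw.max())
w /= w.sum()                         # normalize so that the weights sum to one

h = lambda x: x ** 2
estimate = np.sum(w * h(X))          # weighted average approximates E_p[h(X)]

ess = 1.0 / np.sum(w ** 2)           # effective sample size
print(estimate, ess)
```

The effective sample size printed at the end is close to M only when p0 approximates p well; it collapses when the two densities have little overlap, which is precisely the failure mode described above.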
Implicit sampling is a general method for constructing useful importance functions [3,4]. The basic idea is to first locate the region of high probability with respect to p via numerical optimization, and then to map a reference variable to this region; this mapping is done by solving algebraic equations. While the cost of generating a sample with implicit sampling is larger than in many other MC schemes, implicit sampling can be efficient because the samples carry more information so that fewer samples are required. In Refs. [5–7], this tradeoff was examined in the context of geophysics, where it was found that implicit sampling indeed can be efficient, in particular when the dimension of the problem is large.
Here, implicit sampling is applied to estimation and control in robotics, and new algorithms for stochastic optimal control, MC localization (MCL), and SLAM are developed. Implementations and efficiencies of these algorithms are illustrated and explored with examples. In particular, it is investigated whether implicit sampling, which requires fewer but more informative and more expensive samples, can be more efficient than other sampling schemes that require a larger number of less informative but cheaper samples.
The optimal control of a stochastic control problem can be found by solving a stochastic Hamilton–Jacobi–Bellman (HJB) equation [8]. The dimension of the domain of this partial differential equation (PDE) equals the dimension of the state space of the control problem. Classical PDE solvers require a grid on the domain and, therefore, are impractical for control problems of moderate or large dimension. For a class of stochastic optimal control problems, one can use MC solvers instead of grid based PDE techniques and, since the MC approach avoids grids, it is in principle feasible to solve larger dimensional control problems within this class [9–12]. However, the sampling scheme must be chosen carefully or else MC based PDE solving will also fail (in the sense that unfeasibly many samples are required). It is shown in Sec. 3 how to apply implicit sampling to obtain a practical algorithm for stochastic optimal control that avoids many of the pitfalls one faces in MC based PDE solving. The method and algorithm are illustrated with the double slit problem (see Sec. 3.1 and Ref. [9]), which is a simple but vivid example of how things can go wrong, and how they can be fixed with implicit sampling. Using an inverse dynamics controller for a two degrees-of-freedom robotic arm as an example, it is also shown how to obtain a semi-analytic solution for a linear Gaussian problem via implicit sampling. Finally, it is indicated how implicit sampling and stochastic optimal control can help with being trapped in local minima of nonconvex optimization problems. An extension of the method presented here is also discussed in the conference paper [13].
In robotics, one often updates the predictions of a dynamic model of an autonomous robot with the output of the robot's sensors (e.g., radar or lidar scans). This problem is often called “localization,” and can be formulated as a sampling problem. Localization algorithms that rely on MC sampling for the computations are called MCL [14]. One can also learn the geometry and configuration of the map while simultaneously localizing the robot in it, which leads to the problem of “SLAM.” Efficient numerical solutions of the MCL and SLAM problems are a fundamental requirement for autonomous robots [15,16]. The various MCL and SLAM algorithms differ in their importance function p0, and some algorithms have been shown to be inefficient due to a poor choice of p0 [15]. Here it is shown how to use implicit sampling to generate an efficient importance function for MCL and SLAM. The implicit sampling based MCL and SLAM algorithms are convergent, i.e., as the number of samples goes to infinity one obtains an empirical estimate of the true posterior even if the underlying dynamics are nonlinear (whereas many other SLAM algorithms require linearity for convergence [15]). The memory requirements of the implicit sampling based SLAM algorithm scale linearly with the number of features in the map. The efficiencies of the new MCL and SLAM algorithms are compared to the efficiencies of competing MCL and SLAM algorithms in numerical experiments with the data set [17].
Review of Implicit Sampling
The efficiency of MC sampling depends on how well the importance function p0 approximates the target PDF p. In particular, p0 must be large where p is large, or the samples one produces using p0 are unlikely with respect to p. Implicit sampling is a general MC sampling technique that constructs an importance function that is large where p is large by mapping a reference variable to the region where p is large [3–7]. Here implicit sampling is described in general terms; below implementations of implicit sampling are described in the context of applications.
where ξ ∼ N(0, Im) is an m-dimensional Gaussian reference variable, T denotes a transpose, and Im is the identity matrix of order m; here, and for the remainder of this paper, N(a, B) is short-hand notation for a Gaussian PDF with mean a and covariance matrix B. Note that the right-hand side of Eq. (1) is likely to be small because ξ is close to the origin, which implies that the left-hand side is small, which in turn means that the solution of Eq. (1), i.e., the sample, is close to the mode and, thus, in the high-probability region.
In practice, the weights are usually normalized so that their sum is one. Various ways of constructing a map from ξ to x have been reported in Refs. [4,7], two of which are summarized below. In summary, implicit sampling requires: (1) minimizing F = −log p(x) and (2) solving Eq. (1). This two-step approach makes efficient use of the available computational resources: the minimization identifies the high probability region and the samples are focused to lie in this region, so that (almost) all samples carry information. This is in contrast to other sampling schemes where an importance function is chosen ad hoc, which often means that many of the samples carry little or no information; the computations used for generating uninformative samples are wasted.
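A minimal sketch of this two-step recipe (Python/NumPy). It assumes the form commonly used in the implicit sampling literature for Eq. (1), namely F(x) − φ = ½ ξᵀξ with φ = min F, uses a hypothetical U-shaped F, and omits the weights (which involve the Jacobian of the map from ξ to x):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical U-shaped F(x) = -log p(x) up to a constant (not the paper's F).
def F(x):
    return 0.5 * (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + 0.25 * (x[0] - 2.0) ** 4

m = 2
# Step 1: locate the high-probability region by minimizing F.
res = minimize(F, np.zeros(m))
mu, phi = res.x, res.fun                      # minimizer and phi = min F

def sample():
    # Step 2: map the Gaussian reference variable xi to the neighborhood of mu
    # by solving F(mu + lam * xi) - phi = 0.5 * xi^T xi for the scalar lam.
    xi = rng.standard_normal(m)
    rhs = 0.5 * xi @ xi
    lo, hi = 0.0, 1.0
    while F(mu + hi * xi) - phi < rhs:        # bracket the root
        hi *= 2.0
    for _ in range(60):                       # bisection on the scalar equation
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if F(mu + lam * xi) - phi < rhs else (lo, lam)
    return mu + 0.5 * (lo + hi) * xi

samples = np.array([sample() for _ in range(1000)])
print(samples.mean(axis=0))                   # samples cluster around the minimizer mu
```

The bisection along the random direction ξ is one simple way to solve the algebraic equation; the subsections below describe the random map and approximate quadratic constructions used in practice.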
Finally, the assumption that F is convex can be relaxed. Implicit sampling can be used without modification if F is merely U-shaped, i.e., the target PDF, p, is unimodal. Multimodal target PDFs can be sampled, e.g., by using mixture models, for which one approximates each mode using the above recipe (see Refs. [4,7,18] for more details).
Generating Samples With Random Maps.
where ∇F is the gradient of F with respect to x (an m-dimensional row vector).
The scalar derivative dλ/dρ can be evaluated using finite differences with a few evaluations of F (the precise number of function evaluations depends on the finite difference scheme one chooses).
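As an illustration of this finite-difference step, the following sketch solves the scalar equation for λ along the ansatz x = μ + λLξ (the affine form and the fixed matrix L are assumptions for illustration; the precise map and the resulting weight formula are given in Refs. [4,7] and not reproduced here):

```python
import numpy as np

def solve_lambda(F, mu, phi, L, xi, rho):
    """Solve F(mu + lam * L @ xi) - phi = rho for the scalar lam >= 0 by bisection."""
    g = lambda lam: F(mu + lam * (L @ xi)) - phi - rho
    lo, hi = 0.0, 1.0
    while g(hi) < 0.0:          # bracket the root
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def dlambda_drho(F, mu, phi, L, xi, rho, eps=1e-6):
    """Central finite difference of lam with respect to rho = 0.5 * xi^T xi
    (use a one-sided difference if rho is very small)."""
    lam_p = solve_lambda(F, mu, phi, L, xi, rho + eps)
    lam_m = solve_lambda(F, mu, phi, L, xi, rho - eps)
    return (lam_p - lam_m) / (2.0 * eps)

# Example usage with a hypothetical quadratic F, for which lam = 1 exactly.
F = lambda x: 0.5 * float(x @ x)
mu, phi, L = np.zeros(2), 0.0, np.eye(2)
xi = np.array([0.3, -1.2]); rho = 0.5 * xi @ xi
print(solve_lambda(F, mu, phi, L, xi, rho), dlambda_drho(F, mu, phi, L, xi, rho))
```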
Generating Samples With Approximate Quadratic Equations.
Application to Path Integral Control
starting at x(t0) = x0, where f is a smooth m-dimensional vector function which describes the dynamics, and where G and Q are m × p and p × r matrices that describe how the uncertainty and controls are distributed within the system; W is Brownian motion (see, e.g., Ref. [8] for more details about stochastic optimal control; I closely follow Ref. [9] in the presentation of path integral control).
see Refs. [9,19]. Thus, one can compute the optimal control by solving the HJB equation. Numerical PDE solvers typically require a grid on the domain of the PDE; however, the domain has the dimension of the state space of the control problem. Thus, grid based numerical PDE techniques only apply to control problems of a low dimension (or else the computational requirements become excessive).
starting at y(τ = t) = x(t). This is the path integral formulation of stochastic optimal control [9–12]. Note that the class of problems that can be tackled with path integral control is rather general, since the assumptions of: (1) a quadratic control cost and (2) the condition on the noise and the control cost in Eq. (8) are not restrictive. In particular, the dynamics f(x, t) and the potential V(x, t) in Eq. (7) can be nonlinear.
Note that this F depends on how one discretizes the integral of the potential V, and the SDE (12); other discretization schemes will lead to different Fs.
for discretized trajectories of Eq. (12), followed by averaging. However, this method can fail if the potential V has deep wells [9–11]. In this case, many trajectories of Eq. (12) can end up where V is large and, thus, contribute little to the approximation of the expected value ψ. For efficient sampling, one needs a method that guides the samples to remain where the potential is small.
Once ψ(x, t) is computed, we can compute the optimal control from Eq. (10), e.g., via finite differences. It is important to note that this defines the optimal control at time t, and that the computations have to be repeated at the next time step to compute ψ(x(t + dt), t + dt). The use of implicit sampling in stochastic optimal control is illustrated with three examples.
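The following sketch shows the straightforward MC evaluation of ψ for a scalar state, i.e., the baseline that implicit sampling improves upon, together with a finite-difference gradient of log ψ from which the control would be formed via Eq. (10). The dynamics f, potential V, noise level σ, and scaling λ are illustrative stand-ins for the quantities fixed by Eqs. (7)–(12), and the constants in Eq. (10) are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ingredients (for illustration only).
f = lambda y, t: 0.0 * y            # drift
V = lambda y, t: 0.5 * y ** 2       # potential
sigma, lam = 1.0, 0.1               # noise level and path-integral scaling
dt, n_steps, M = 0.02, 100, 1000    # time step, horizon, number of trajectories

def psi(x, t, dW):
    """MC estimate of psi(x, t): average of exp(-S/lam) over discretized trajectories."""
    y = np.full(M, float(x))
    S = np.zeros(M)
    for k in range(n_steps):
        tau = t + k * dt
        S += V(y, tau) * dt                         # accumulate the potential cost
        y = y + f(y, tau) * dt + sigma * dW[k]      # Euler step of the SDE
    return np.mean(np.exp(-S / lam))

dW = np.sqrt(dt) * rng.standard_normal((n_steps, M))   # common random numbers
x, t, eps = 1.0, 0.0, 1e-3
grad_log_psi = (np.log(psi(x + eps, t, dW)) - np.log(psi(x - eps, t, dW))) / (2 * eps)
print(psi(x, t, dW), grad_log_psi)
```

Common random numbers (the shared array dW) are used for the finite-difference evaluations, since otherwise the MC noise would dominate the estimated gradient.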
The Double Slit Problem.
where satisfies Eq. (8).
With a high probability, the trajectories will hit the potential wall at t1 and, thus, G is infinite, so that the contribution of a trajectory to the expected value ψ(x, t) in Eq. (11) is zero with a high probability. The situation is illustrated with a simulation using the parameters of Table 1. The results are shown in the left panel of Fig. 1. One observes that 5000 unguided random walks from Eq. (17), all starting at x(0), hit the potential wall, and, thus, score G = ∞. All 5000 samples thus carry no probability and do not contribute to the approximation of ψ. Even when O(105) samples are considered, it is unlikely that sufficiently many make it past the potential wall [9]. We conclude that this method is unfeasible for this problem, since the number of samples required is extremely large due to the deep wells in the potential.
Description | Parameter | Value |
---|---|---|
Final time | tf | 2 |
Critical time | t1 | 1 |
Slits | a, b, c, d | −6, −4, 6, 8 |
Initial position | x(0) | 1 |
Variance of the noise | σ | 1 |
Control cost | r | 0.1 |
Time step | Δt | 0.02 |
The evaluation of the above integral using the same strategy as above for t ≥ t1 is tedious, but straightforward. Note that the implicit sampling strategy here is the key to solving this problem, because it locates the wells in the potential.
where is the minimum of F.
The expected value of the Jacobian (1/det L) over the slits can be computed by MC as follows. Generate M samples ξj, j = 1,…,M, and, for each one, compute the corresponding trajectory and check if it hits the potential wall. The integral is 1/det(L) times the ratio of the number of trajectories that pass through a slit (i.e., that do not hit the wall) to M.
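A sketch of this counting procedure for the one-dimensional double slit problem; the slit boundaries a, b, c, d are those of Table 1, while the minimizer μ and the scalar factor L stand in for the values produced by the optimization and are placeholders here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Slit locations from Table 1; mu and L are illustrative placeholders, not values
# computed in the paper.
a, b, c, d = -6.0, -4.0, 6.0, 8.0
mu, L = -5.0, 1.3

M = 10_000
xi = rng.standard_normal(M)
x_t1 = mu + L * xi                           # mapped position of each sample at time t1
through = ((x_t1 > a) & (x_t1 < b)) | ((x_t1 > c) & (x_t1 < d))

# Expected value of the Jacobian over the slits: (1/L) times the fraction of
# samples whose trajectory passes through a slit.
estimate = (1.0 / L) * np.mean(through)
print(estimate)
```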
The right panel of Fig. 1 shows 50 trajectories one obtains with this approximation at t = Δt. Note that most of the trajectories pass through the slit, i.e., most of the samples carry a significant probability, score a small F, and, thus, contribute to the approximation of the expected value ψ(x, t). These “guiding” effects make it possible to solve the problem with O(10) samples, while the standard MC scheme fails even with O(105) samples [9]. The Laplace guided strategy presented in Ref. [9] (which uses similar ingredients as implicit sampling and is also related to the optimal nudging constructions in Refs. [24,25]) requires about 100 samples.
The numerical approximation obtained with implicit sampling (50 samples) is compared to the analytical solution in Fig. 2, where the optimal control and the trajectory under this control are shown. One observes that the numerical approximation of the optimal control is quite close to the control one obtains from the analytical solution (right panel of Fig. 2) and, hence, the controlled trajectory one obtains with implicit sampling is also close to the one computed from the analytical solution (left panel of Fig. 2). This statement can be made more precise by solving the control problem repeatedly (each solution is a random event) and averaging.
The double slit problem is solved 500 times and the error of the numerical approximation is recorded for each run. The error is measured by the Euclidean norm of the difference between the analytical solution and the approximation via implicit sampling, in particular for x (the trajectory under optimal control) and u (the optimal control). The number of samples of implicit sampling is varied to study the convergence of the algorithm. The results are shown in Table 2, where the mean and standard deviation of the Euclidean norm of the error in x and u, respectively, scaled by the mean of the norms of x and u, respectively, are listed. These numbers indicate the errors one should expect in a typical run. The errors are relatively small when 50 samples are used. The errors do not decrease dramatically for larger sample numbers, which indicates that the algorithm has converged. The small variance of the errors indicates that similar errors occur in each run, so that one concludes that implicit sampling is reliable.
Number of samples | 10 | 50 | 100 | 500 |
---|---|---|---|---|
Error in x in % (mean ± std. dev.) | 2.64 ± 0.63 | 2.70 ± 0.63 | 2.66 ± 0.63 | 2.61 ± 0.62 |
Error in u in % (mean ± std. dev.) | 4.81 ± 2.15 | 4.61 ± 1.89 | 4.54 ± 2.37 | 4.46 ± 2.17 |
The error in the numerical implementation of implicit sampling is mostly due to neglecting one of the local minima of F. In numerical tests, only a slight improvement was observed when both minima were used for sampling; however, the computations are twice as fast if one considers only the smaller of the two minima.
Stochastic Control of a Two Degrees-of-Freedom Robotic Arm.
Stochastic optimal control of the two degrees-of-freedom robotic arm shown in Fig. 3 is considered. The controller is an “inverse dynamics controller” that linearizes the system. This example demonstrates how implicit sampling based path integral control works in linear problems. Specifically, a semi-analytical solution is derived for which a numerical optimization is required; however, the expected value of the implicit sampling solution in Eq. (16) can be evaluated explicitly (i.e., no numerical sampling is needed). Moreover, only some of the dynamic equations of the first-order formulation of the dynamics are driven by noise, so that the dimension of the stochastic subspace is less than the actual dimension of the problem. Implicit sampling can exploit these features, and this example demonstrates how. The algorithm is tested in numerical experiments in which false parameters are given to the algorithm to demonstrate the robustness of the path integral controller to model error.
define the dynamics and depend on the lengths of the arms l1, l2 and the loads m1, m2 (see, e.g., Ref. [26] for more details about this model and Table 3 for the parameters used in simulations). One can compute the torques by inverting the dynamics and choosing the new control input appropriately, which yields the linear system
Description | Parameter | Value |
---|---|---|
Length of link one | l1 | 1 m |
Distance to the center of mass of link one | lc1 | 0.5 m |
Distance to the center of mass of link two | lc2 | 0.5 m |
Mass of link one | m1 | 1 kg |
Mass of link two | m2 | 1 kg |
Moment of inertia of link one | I1 | 2 kg·m² |
Moment of inertia of link two | I2 | 2 kg·m² |
so that the robotic arm stops at tf > 0 at the desired position θ*. This linear problem can be solved with linear quadratic Gaussian control [8]; however, here a semi-analytical solution is obtained via implicit sampling.
where yi, zi are two-dimensional vectors whose elements are the discretized angular velocities (θ̇) and the discretized angles (θ); note that only one of the above equations is driven by noise (ΔB), since there is no reason to inject noise into the (trivial) equation relating the angles to the angular velocities. While the noise propagates via the coupling to all variables, the PDFs that define the path in Eq. (11) are in terms of the yn's only.
There is no need for generating samples, since the expected value is computed explicitly (i.e., by evaluating the integral analytically).
The robustness of the implicit sampling based path integral control is tested in numerical experiments. To simulate that the “real” robotic arm behaves differently from the numerical model that the path integral controller uses to find a control, one can give the controller false information about the parameters of the numerical model. Here the parameters m1 and m2 are perturbed values of the true parameters simulating that the robotic arm picked up a payload (of unknown weight). Thus, the controller works with values m1 = 1.4, m2 = 1, whereas the “true” robotic arm has parameters as in Table 3. The results of a simulation are shown in Fig. 4 where state trajectories and controls are shown for a truly optimal controller (i.e., one who has access to the exact model parameters), and for the path integral controller which uses false model parameters. One observes that the path integral controller can perform the desired task (moving the arm to a new position) and that its control and the resulting state trajectories closely follow those that are generated by a truly optimal controller. This example thus indicates that the path integral controller can perform reliably and quickly in applications where model error may be an issue.
Optimization Via Stochastic Control.
The idea is to make use of the stochastic component to explore valleys in the function f. The parameters one can tune are the noise level σ, the final time tf, and the control cost R.
where the discretization is done using a constant time step Δt = tf/n as before. Note that the control approach to this problem inserts a quadratic term through which the space is (randomly) explored.
which is a popular test of the performance of optimization algorithms because of its many local minima [27]. The parameters are R = 0.01, σ = 0.01, Δt = 1, tf = 20, and the minimization is initialized at (−1, −4). Figure 5 shows the iterations obtained with implicit sampling and 50 samples, and, for comparison, the iterations of Newton's method.
In this example, the stochastic approach is successful and finds a much lower minimum than Newton's method. However, since each run of the stochastic approach is random, one may find another local minimum in another run. This could be useful when one needs to explore valleys or if one suspects the existence of other local minima.
To test the reliability of the stochastic control approach, 70 experiments were performed, each starting at the same initial condition. The iterations of five such experiments are shown in Table 4. All 70 runs ended up close to the local minimum around x1 = 2.5, x2 = −2.5. The average value of F is an approximation to C in Eq. (18) and, with 70 experiments, was found to be C ≈ 1.75, which corresponds to a much lower value of f than the value one obtains with Newton's method.
Iteration | Newton | Controlled optimization, run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
---|---|---|---|---|---|---|
0 | 260.00 | 260.00 | 260.00 | 260.00 | 260.00 | 260.00 |
1 | 195.36 | 268.57 | 266.47 | 241.73 | 337.77 | 257.55 |
2 | 180.19 | 258.45 | 242.92 | 239.19 | 347.24 | 263.20 |
3 | 178.43 | 259.75 | 268.95 | 258.97 | 360.16 | 243.92 |
4 | 178.34 | 257.44 | 279.41 | 249.28 | 373.97 | 218.95 |
5 | 178.34 | 214.25 | 243.23 | 224.21 | 336.47 | 201.69 |
6 | 178.34 | 184.70 | 320.29 | 256.08 | 476.23 | 179.34 |
7 | 178.34 | 161.31 | 331.32 | 230.16 | 394.82 | 157.00 |
8 | — | 147.39 | 330.16 | 218.89 | 316.97 | 150.73 |
9 | — | 144.04 | 255.65 | 175.61 | 245.24 | 150.33 |
10 | — | 101.05 | 221.05 | 180.92 | 176.98 | 144.97 |
11 | — | 76.729 | 188.49 | 174.13 | 180.53 | 105.48 |
12 | — | 48.46 | 151.54 | 188.66 | 131.86 | 91.42 |
13 | — | 33.05 | 115.34 | 171.75 | 101.82 | 67.00 |
14 | — | 29.90 | 87.37 | 147.43 | 56.51 | 41.29 |
15 | — | 41.34 | 62.89 | 104.69 | 35.11 | 10.65 |
16 | — | 31.03 | 43.69 | 72.71 | 31.48 | 6.08 |
17 | — | 18.12 | 22.62 | 42.44 | 14.53 | 1.79 |
18 | — | 1.83 | 9.195 | 30.55 | 9.52 | 2.61 |
19 | — | 0.46 | 1.90 | 0.54 | 0.97 | 1.24 |
Application to MCL and SLAM
Consider a mobile robot that navigates autonomously and, as it moves, collects noisy measurements about its motion as well as scans of its environment. If the scans reveal locations of features that are known to the robot, i.e., if the robot has a map of its environment, then it can localize itself on this map. This problem is known as localization. When the robot has no map of its environment, then it must construct a map while localizing itself on it, leading to the problem of SLAM [28–30]. Efficient solutions to the localization and SLAM problems are a fundamental requirement for autonomous robots, which must move through unknown environments where no global positioning system (GPS) data are available, for example, indoors, in abandoned mines, underwater, or on far-away planets [16,31,32]. Application of implicit sampling to localization and SLAM is the subject of Secs. 4.1 and 4.2, where the algorithms will also be tested using experimental data obtained by Nebot and colleagues at the University of Sydney [17].
MCL.
where zn is a k-dimensional vector whose elements are the data (k ≤ m) and where Θ is the map. For example, one can use the “landmark model,” for which the map describes the coordinates of a collection of obstacles, called “landmarks;” the data are measurements of range and bearing of the position of the robot relative to a landmark. The landmark model and range and bearing data will be used in the applications below, but the algorithms described are more generally applicable.
Approximations of this PDF are used to localize the robot. For example, methods based on the Kalman filter [33] (KF) construct a Gaussian approximation which is often problematic because of nonlinearities in the model and data. Moreover, KF-based localization algorithms can diverge, for example, in multimodal scenarios (i.e., if the data are ambiguous) or during relocalization after system failure [15]. The basic idea of MCL is to relax the Gaussian assumptions required by KF, and to apply importance sampling to the localization problem [14]. The method proceeds as follows.
to assess how well it fits the data. However, this choice of importance function can be inefficient, especially if the motion model is noisy and the data are accurate (as is often the case [16]). The reason is that many of the samples generated by the model are unlikely with respect to the data and the computations used to generate these samples are wasted.
One needs M functions Fj, j = 1,…,M, one per sample, because the recursive problem formulation requires a factorization of the importance function (compare to Secs. 2 and 3, where only one function was needed). Here, each function Fj is parametrized by the location of the jth sample at time n − 1, the control un, and the datum zn; the variables of these functions are the location of the robot at tn, xn.
For MCL with implicit sampling, each function Fj must be minimized, e.g., using Newton's method. After the minimization, one can generate samples by solving the stochastic equation (1) with the techniques described in Sec. 2. Using information from derivatives in sampling for MCL has also been considered in Refs. [34,35].
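A schematic of one MCL update with implicit sampling (Python/NumPy). The motion and measurement models enter only through a user-supplied make_Fj, which is a placeholder; a generic optimizer stands in for Newton's method, the algebraic equation is solved by bisection along a random direction, and the Jacobian factor in the weights is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def mcl_implicit_step(particles, weights, u, z, make_Fj):
    """One MCL update with implicit sampling.

    particles: array of shape (M, m) holding the sample positions at time n-1.
    make_Fj:   given a particle, the control u, and the datum z, returns the function
               Fj(x_n) = -log of the product of motion and measurement densities
               (a placeholder for the models of the text).
    """
    M, m = particles.shape
    new_particles = np.empty_like(particles)
    logw = np.log(weights)
    for j in range(M):
        Fj = make_Fj(particles[j], u, z)
        res = minimize(Fj, particles[j])          # step 1: minimize Fj
        mu, phi = res.x, res.fun
        xi = rng.standard_normal(m)               # step 2: solve Eq. (1) along a random direction
        rho = 0.5 * xi @ xi
        lo, hi = 0.0, 1.0
        while Fj(mu + hi * xi) - phi < rho:       # bracket the root
            hi *= 2.0
        for _ in range(50):                       # bisection on the scalar equation
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if Fj(mu + mid * xi) - phi < rho else (lo, mid)
        new_particles[j] = mu + 0.5 * (lo + hi) * xi
        logw[j] -= phi                            # weight update; Jacobian factor omitted
    w = np.exp(logw - logw.max())
    return new_particles, w / w.sum()
```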
Implementation for the University Car Park Data Set.
The University Car Park data set [17] is used to demonstrate the efficiency of MCL with implicit sampling. The scenario is as follows. A robot is moving around a parking lot and steering and speed data are collected via a wheel encoder on the rear left wheel and a velocity encoder. An outdoor laser (SICK LMS 221) is mounted on the front bull-bar and is directed forward and horizontal (see Ref. [36] for more information on the hardware). The laser returns measurements of relative range and bearing to different features. Speed and steering data are the controls of a kinematic model of the robot and the laser data are used to update this model's predictions about the state of the robot.
where h is a two-dimensional vector function that maps the position of the robot to relative range and bearing and where Σ2 is a 2 × 2 diagonal matrix (see Ref. [36] and the Appendix for the various model parameters).
where the datum is the measurement of the ith landmark at time n. These Fj's are minimized with Newton's method, and samples are generated by solving a quadratic equation as explained in Sec. 2. The gradient and Hessian in Newton's method were computed analytically (see the Appendix). If uneven weights were observed, a "resampling" was done using Algorithm 2 in Ref. [37]. Resampling replaces samples that have a small weight with samples that have a larger weight, without introducing significant bias. If no laser data are available, all samples evolve according to the model equation (19).
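For concreteness, the following sketch collects two ingredients referred to above: a range-and-bearing function of the kind denoted h (the laser-offset corrections of Ref. [36] and the exact pose convention are omitted, so this is an assumption-laden stand-in), and a generic systematic-resampling routine (not necessarily identical to Algorithm 2 of Ref. [37]):

```python
import numpy as np

def h(pose, landmark):
    """Relative range and bearing of a landmark as seen from the pose (x, y, heading).
    Laser-mount offsets (parameters a, b of Table 7) are omitted in this sketch."""
    x, y, heading = pose
    lx, ly = landmark
    rng_ = np.hypot(lx - x, ly - y)
    bearing = np.arctan2(ly - y, lx - x) - heading
    bearing = (bearing + np.pi) % (2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
    return np.array([rng_, bearing])

def systematic_resample(particles, weights, rng):
    """Replace low-weight samples by copies of high-weight ones; weights become 1/M.
    A generic systematic scheme, not necessarily identical to Algorithm 2 of Ref. [37]."""
    M = len(weights)
    positions = (rng.random() + np.arange(M)) / M
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                  # guard against round-off
    idx = np.searchsorted(cumulative, positions)
    return particles[idx], np.full(M, 1.0 / M)

print(h((0.0, 0.0, 0.0), (3.0, 4.0)))                     # -> [5.0, ~0.927]
```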
The results of MCL with implicit sampling are shown in the left panel of Fig. 6. The “ground truth” shown here is the result of an MCL run with GPS data (these GPS data however are not used in implicit sampling); the large dots are the positions of the landmarks. The reconstruction by the MCL algorithm (dashed line) is an approximation of the conditional mean, which is obtained by averaging the samples. One observes that MCL with implicit sampling gives accurate estimates whenever laser data are available. After periods during which no laser data were available, one can observe a strong “pull” toward the true state trajectory, as soon as data become available, as is indicated by the jumps in the trajectory, e.g., around x = 0, y = 0.
The efficiency of MCL with implicit sampling is studied by running the algorithm with 10, 20, 40, 80, 100, and 150 samples, i.e., with increasing computational cost, and comparing the reconstruction of the MCL with the true path. The error is measured by the Euclidean norm of the difference between the "ground truth" (see above) and the reconstruction by the MCL algorithm (scaled by the Euclidean norm of the ground truth). The standard MCL method was also applied with 10, 20, 40, 80, 100, 150, 200, 250, and 300 samples. The results are shown in Table 5.
# of samples | Implicit sampling: error in % | Implicit sampling: CPU time in ms | Standard sampling: error in % | Standard sampling: CPU time in ms |
---|---|---|---|---|
10 | 2.87 | 1.14 | 6.91 | 0.81 |
20 | 2.55 | 2.23 | 4.63 | 1.57 |
40 | 2.29 | 4.42 | 3.20 | 3.12 |
80 | 2.10 | 8.80 | 2.76 | 6.20 |
100 | 2.08 | 11.0 | 2.54 | 7.75 |
150 | 2.02 | 16.5 | 2.40 | 11.6 |
200 | — | — | 2.36 | 15.8 |
250 | — | — | 2.32 | 19.6 |
300 | — | — | 2.28 | 23.4 |
It is clear that both MCL algorithms converge to the same error, since both converge to the conditional mean as the number of samples (and, hence, the computational cost) goes to infinity. However, the convergence rate of MCL with implicit sampling is faster, as can be seen from the right panel of Fig. 6, where the computing time is plotted as a function of the error. In this figure, the area between the lines corresponds to the improvement of implicit sampling over the standard method, and it is clear that implicit sampling is more efficient than the standard method. The improvement is particularly pronounced if a high accuracy is required, in which case MCL with implicit sampling can be orders of magnitude faster than the standard method.
Implicit Sampling for Online SLAM.
where the map, Θ, is a variable (not a given parameter as in MCL); the variable η0:n is the “data association,” i.e., it maps the data to the known features, or creates a new feature if the data are incompatible with current features [38,39]. Here it is assumed that the data association is done by another algorithm, so that η0:n in Eq. (23) can be assumed to be given (in the numerical implementation in Sec. 4.2.1, a maximum-likelihood algorithm is used [15]).
Note that the SLAM state vector is different from the state of the localization problem: it contains the variables that describe the robot's position, x, as well as the coordinates of the features of the map, Θ. If the map contains many features (which is typically the case), then the state dimension is large, which makes KF based SLAM prohibitively expensive. The reason is that KF SLAM requires dense matrix operations on matrices of size N (where N is the number of features), due to correlations between the robot's position and the features [38]. KF SLAM thus has O(N²) memory requirements, which makes it infeasible for realistic N.
where θn is the feature observed at time n. Alternatively, one can wait and collect all the data, and then assimilate this large data set in one sweep; such a “smoothing” approach is related to graph based SLAM (see, e.g., Ref. [42]) but will not be discussed further in this paper.
Here, it is assumed that only one feature is observed at a time (which is realistic [15]); however, the extension to observing multiple features simultaneously is straightforward.
Various importance functions have been considered and tested in the literature. For example, the fastSLAM algorithm [43] was shown to be problematic in many cases due to the poor distribution of its samples [15,44]. The reason is that the samples are generated by the motion model and, thus, are not informed by the data. As a result, the samples are often unlikely with respect to the data. The fastSLAM 2.0 algorithm [15,44] ameliorates this problem by making use of an importance function that depends on the data. The algorithm was shown to outperform fastSLAM and was proven to converge (as the number of samples goes to infinity) to the true SLAM posterior for linear models. The convergence for nonlinear models is not well understood (because of the use of extended KFs (EKFs) to track and construct the map). Here, a new SLAM algorithm with implicit sampling is described. The algorithm converges for nonlinear models at a computational cost that is comparable to the cost of fastSLAM 2.0.
and samples are generated by solving the stochastic equation (1). One can use the same technique for solving these equations as described in Sec. 2. Moreover, if the feature θn is already known, the SLAM algorithm reduces to the MCL algorithm with implicit sampling described above.
Note that the memory requirements of SLAM with implicit sampling are linear in the number of features, because the algorithm makes use of the conditional independence of the features given the robot path, so that, at each step, only one feature needs to be considered. Moreover, SLAM with implicit sampling converges to the true SLAM posterior (under the usual assumptions about the supports of the importance function and target PDF [45]) as the number of samples goes to infinity; this convergence is independent of linearity assumptions about the model and data equations. Convergence for fastSLAM and fastSLAM 2.0, however, can only be proven for linear Gaussian models [15].
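A sketch of how a single SLAM sample can be organized so that storage grows linearly with the number of features: the pose plus one small entry per feature, and only the feature observed at time n is touched. The per-feature estimates and their update rule are placeholders; only the data layout is illustrated:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class SlamParticle:
    """One SLAM sample: the robot pose and, per map feature, a small local estimate.
    Because features are conditionally independent given the path, storage grows
    linearly with the number of features."""
    pose: np.ndarray                                # robot position and heading
    features: dict = field(default_factory=dict)    # feature id -> (mean, covariance)

    def update_feature(self, fid, mean, cov):
        # Only the feature observed at time n is touched; all others stay unchanged.
        self.features[fid] = (np.asarray(mean), np.asarray(cov))

# Example: a particle observing feature 7 updates only that entry.
p = SlamParticle(pose=np.array([0.0, 0.0, 0.0]))
p.update_feature(7, mean=[12.3, -4.5], cov=np.eye(2) * 0.1)
print(len(p.features))   # 1 -> memory is proportional to the number of observed features
```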
Numerical Experiments.
The applicability and efficiency of SLAM with implicit sampling is demonstrated with numerical experiments that mimic the University Car Park data set [17]. Here the speed and steering data of Ref. [17] are used; however, the laser data are replaced by synthetic data, because the laser data in Ref. [17] are too sparse (in time) for successful online SLAM (even EKF SLAM, often viewed as a benchmark, does not give satisfactory results).
In these synthetic data experiments, a virtual laser scanner returns noisy range and bearing measurements of features in a half-circle with 15 m radius. If the laser detects more than one feature, one is selected at random and returned to the SLAM algorithm. Since each synthetic data set is a random event, 50 synthetic data sets are used to compute the average errors for various SLAM algorithms. These errors are defined as the norm of the difference of the ground truth and the conditional mean computed with a SLAM algorithm. The performances of EKF SLAM, fastSLAM, fastSLAM 2.0, and SLAM with implicit sampling are compared using these synthetic data sets. All SLAM methods make use of the same maximum likelihood algorithm for the data association. A Newton method was implemented for the optimization in implicit sampling, and the quadratic approximation of F was used to solve the algebraic equation (1). A typical outcome of a numerical experiment is shown in the left panel of Fig. 7, where the true path and true positions of the landmarks as well as their reconstructions via SLAM with implicit sampling (100 samples) are plotted.
The results of 50 numerical experiments with fastSLAM, fastSLAM 2.0, and SLAM with implicit sampling are shown in Table 6. It was observed in this example that EKF SLAM gives an average error of 3.89% at an average computation time of 0.43 ms and, thus, is computationally efficient and accurate. However, computational efficiency disappears in scenarios with larger maps due to the O(N²) memory requirements (N is the number of features in the map). The error of fastSLAM 2.0 (whose memory requirements scale linearly with N) decreases with the number of samples, and it seems as if the fastSLAM 2.0 solution converges (as the number of samples and, thus, the computation time increases) to the EKF solution. This is intuitive because fastSLAM 2.0 uses EKFs to track the map. The fastSLAM algorithm, on the other hand, converges to an error that is larger than in EKF SLAM or fastSLAM 2.0. SLAM with implicit sampling converges to an error that is smaller than in EKF SLAM. The reason is as follows: implicit sampling requires no linearity assumptions and, therefore, the empirical estimate it produces converges (as the number of samples goes to infinity) to the true SLAM posterior.
Samples | fastSLAM: CPU time in ms | fastSLAM: error in % | fastSLAM 2.0: CPU time in ms | fastSLAM 2.0: error in % | Implicit sampling: CPU time in ms | Implicit sampling: error in % |
---|---|---|---|---|---|---|
10 | — | — | — | — | 7.31 | 8.31 |
20 | 5.55 | 17.7 | 8.36 | 20.0 | 14.4 | 7.18 |
50 | 13.6 | 14.8 | 20.5 | 13.1 | 35.4 | 4.54 |
100 | 26.9 | 12.0 | 40.7 | 8.36 | 70.3 | 2.98 |
200 | 52.7 | 10.8 | 82.1 | 5.24 | 140.9 | 2.55 |
400 | 105.2 | 7.64 | 164.0 | 4.83 | 286.0 | 3.25 |
500 | 133.2 | 8.67 | 346.7 | 4.71 | 355.9 | 3.25 |
800 | 213.2 | 10.4 | — | — | — | — |
1000 | 280.1 | 8.78 | — | — | — | — |
Moreover, the convergence rate of SLAM with implicit sampling is faster than for any other SLAM method, due to the data informed importance function. Thus, the approximation of the conditional mean (the minimum mean square error estimator) one obtains with implicit sampling is accurate even if the number of samples is relatively small. As a result, SLAM with implicit sampling is more efficient than the other SLAM algorithms considered here. This is illustrated in the right panel of Fig. 7, where the computing time is shown as a function of the average error. As in MCL (see above), the area between the various curves indicates the improvement in efficiency. Here implicit sampling is the most efficient method, giving a high accuracy (small error) at a small computational cost. Moreover, SLAM with implicit sampling can achieve an accuracy which is higher than the accuracy of all other methods (including EKF SLAM), because it is a convergent algorithm.
Conclusions
Implicit sampling is an MC scheme that localizes the high-probability region of the sample space via numerical optimization, and then produces samples in this region by solving algebraic equations with a stochastic right-hand side. The computational cost per sample in implicit sampling is larger than in many other MC sampling schemes that randomly explore the sample space. However, the minimization directs the computational power toward the relevant region of the sample space, so that implicit sampling often requires fewer samples than other MC methods. There is a tradeoff between the cost-per-sample and the number of samples, and this tradeoff was studied in this paper in the context of three applications: MCL and probabilistic online SLAM in robotics, as well as path integral control.
In path integral control, one solves the stochastic HJB equation with an MC solver. This has the advantage that no grid on the state space is needed, so that the method is (in principle) applicable to control problems of relatively large dimension. Implicit sampling was applied in this context to speed up the MC calculations. The applicability of this approach was demonstrated on two test problems. Path integral control was also used to find local minima in nonconvex optimization problems.
An implementation of implicit sampling for MCL was tested and it was found that implicit sampling performs better than standard MCL (in terms of computing time and accuracy), especially if the data are accurate and the motion model is noisy (which is the case typically encountered in practice [16]). Similarly, implicit sampling for SLAM outperformed standard algorithms (fastSLAM and fastSLAM 2.0). Under mild assumptions, but for nonlinear models, SLAM with implicit sampling converges to the true SLAM posterior as the number of samples goes to infinity, at a computational cost that is linear in the number of features of the map.
Acknowledgment
I thank Professor Alexandre J. Chorin of UC Berkeley for many interesting technical discussions and for bringing path integral control to my attention. I thank Dr. Robert Saye of Lawrence Berkeley National Laboratory for help with proofreading this manuscript. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract No. DE-AC02-05CH11231, and by the National Science Foundation under Grant No. DMS-1217065.
Appendix
A kinematic model of the robot is described in Ref. [46] and shown for convenience in Fig. 8. The robot is controlled by the speed vc and the steering angle α, and the derivatives of the position and orientation of the axle of the robot are given by the kinematic relations of Ref. [46].
The model is discretized with a constant time step δ, so that xn = x(nδ) and un = u(nδ). Table 7 lists the numerical values of all parameters of the probabilistic motion model. The steering data α and velocity data vc are taken from the data set [17].
Description | Parameter | Value |
---|---|---|
Wheel base | L | 2.83 m |
Width of the car | H | 0.76 m |
Horizontal offset of laser | b | 0.5 m |
Rear axis to laser | a | 3.78 m |
Time step | δ | 0.025 |
Covariance of model noise | Q | Q11 = 0.1, Q22 = 0.1, Q33 = 0.1π/180 |
Qij = 0 if i ≠ j | ||
Covariance of noise in controls | P | P11 = P22 = 0.5, Pij = 0 if i ≠ j |
where is the jth element of the vector , the “true” robot position, and are the x- and y-coordinates of the jth element of a landmark; the norm is the Euclidean norm.