FastSLAM

In the last post, I introduced a robot localization algorithm, the particle filter. In a known environment, a robot can iteratively locate itself from its sensor inputs. A more realistic scenario, however, is that the robot is placed in an unknown environment and does not know its position within it. The problem gets worse when the robot's sensors have measurement errors and the motors do not behave exactly as commanded. We therefore need a probabilistic model that incorporates these error terms to stay robust. This is where SLAM (Simultaneous Localization and Mapping) comes in. To locate the robot, we need to know the map; and to build the map, we need to know where the robot is. This is why we have to estimate the two simultaneously.

In the SLAM setting, we want to estimate the posterior joint distribution of the robot pose \(s_t\) and the locations of the landmarks \(\Theta\), conditioned on all the sensor readings \(z^t\) and motion commands \(u^t\). In the 2-dimensional case, \(\Theta\) can be viewed as a set of point features \(\{\theta_1, \theta_2, \ldots, \theta_m\}\), where each feature has its own \((x, y)\) coordinates.

\[p(s_t, \Theta | z^t, u^t)\]

Based on Bayes' theorem,

\[p(s_t, \Theta | z^t, u^t) = \eta\ p(z_t | s_t, \Theta)\int{p(s_t | u_t, s_{t-1})p(s_{t-1}, \Theta | z^{t-1}, u^{t-1})d s_{t-1}}\]

There are many variants of SLAM that approximate the above distribution. In Sebastian's Udacity course, he introduces GraphSLAM, which is relatively simple and easy to understand. Another popular version is EKF (Extended Kalman Filter) SLAM, which estimates the landmark locations and the robot pose with a single high-dimensional Gaussian distribution. A big drawback of this approach is its quadratic complexity in both time and space. Let's assume there are \(N\) landmarks, and the robot has \((x, y)\) coordinates and an orientation; then there are \(2N + 3\) variables in the Gaussian. The covariance matrix over these variables has to be stored in memory, which gives the quadratic space complexity. And when the robot gets a new observation, it needs to loop through all the entries of the covariance matrix, so updating on a single observation takes quadratic time. A simple introduction to the Kalman filter and the Extended Kalman filter can be found here.
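
To make the quadratic blow-up concrete, here is a minimal numpy sketch of the joint Gaussian that EKF SLAM maintains (the landmark count is a made-up example):

```python
import numpy as np

N = 1000                 # number of landmarks (hypothetical)
dim = 2 * N + 3          # robot (x, y, orientation) + 2 coordinates per landmark

mu = np.zeros(dim)       # joint mean over robot pose and all landmarks
sigma = np.eye(dim)      # joint covariance: (2N + 3) x (2N + 3) entries

# A single EKF measurement update recomputes the full covariance,
# e.g. sigma = (I - K @ H) @ sigma, touching all O(N^2) entries.
print(sigma.nbytes / 1e6, "MB")  # ~32 MB at float64 for N = 1000
```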

FastSLAM addresses this problem by separating the estimation of the robot pose from the estimation of the landmarks. The key observation is that, conditioned on the robot path, the locations of the landmarks are independent of each other, because they are correlated only through the robot pose. So, given the robot path, we can estimate each landmark separately, which reduces the time complexity.

Under this observation, the SLAM posterior can be factorized as follows:

\[p(s^t, \Theta | z^t, u^t) = p(s^t | z^t, u^t)p(\Theta | s^t, z^t, u^t) = p(s^t | z^t, u^t)\prod_i{p(\theta_i | s^t, z^t, u^t)}\]

To prove

\[p(\Theta | s^t, z^t, u^t) = \prod_i{p(\theta_i | s^t, z^t, u^t)}\]

we can use mathematical induction on \(t\). Applying Bayes' theorem to the latest observation \(z_t\), we get

\[p(\theta_i | s^t, z^t, u^t) = \frac{p(z_t | \theta_i, s^t, z^{t-1}, u^t)}{p(z_t | s^t, z^{t-1}, u^t)}p(\theta_i | s^t, z^{t-1}, u^t)\]

For the rightmost term: without the latest observation \(z_t\), \(\theta_i\) is not affected by \(s_t\) or \(u_t\), so we can drop them from the conditioning to get the recurrence relation for the induction step.
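
Writing this out, the recurrence used in the induction step is

\[p(\theta_i | s^t, z^{t-1}, u^t) = p(\theta_i | s^{t-1}, z^{t-1}, u^{t-1})\]

Substituting it back into the Bayes expansion and applying the induction hypothesis at time \(t-1\) gives the factorization at time \(t\).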

Montemerlo's paper details the limitations of EKF-based SLAM, the FastSLAM procedure, and the math behind it. FastSLAM estimates the first term of the factorized posterior, the robot path, with a particle filter. The remaining \(N\) landmark posteriors are estimated by EKFs. Since the landmark estimates are conditioned on the robot path, each particle carries its own set of EKFs for the landmark locations. A naive implementation therefore costs \(O(MN)\) time, where \(N\) is the number of landmarks and \(M\) is the number of particles. In the paper, he presents a binary tree data structure to store the EKFs of each particle, which reduces the time complexity to \(O(M \log N)\).
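
The structure in the paper is more elaborate, but the core idea is path copying: when a particle updates one landmark, it rebuilds only the root-to-leaf path for that landmark and shares every other subtree. Here is a minimal Python sketch of that idea (the class and function names are mine, not the paper's):

```python
class Node:
    """Binary tree over landmark indices [lo, hi); leaves hold one landmark EKF."""
    __slots__ = ("lo", "hi", "left", "right", "ekf")

    def __init__(self, lo, hi, left=None, right=None, ekf=None):
        self.lo, self.hi = lo, hi
        self.left, self.right, self.ekf = left, right, ekf

def build(lo, hi):
    # Build the initial tree; each leaf will store one landmark's (mean, cov).
    if hi - lo == 1:
        return Node(lo, hi, ekf=None)  # EKF is filled in on first observation
    mid = (lo + hi) // 2
    return Node(lo, hi, build(lo, mid), build(mid, hi))

def updated(node, i, ekf):
    # Return a NEW root with landmark i's EKF replaced. Only the O(log N)
    # nodes on the path to leaf i are copied; all other subtrees are shared.
    if node.hi - node.lo == 1:
        return Node(node.lo, node.hi, ekf=ekf)
    if i < node.left.hi:
        return Node(node.lo, node.hi, updated(node.left, i, ekf), node.right)
    return Node(node.lo, node.hi, node.left, updated(node.right, i, ekf))

def lookup(node, i):
    # Walk down to the leaf for landmark i.
    while node.hi - node.lo > 1:
        node = node.left if i < node.left.hi else node.right
    return node.ekf
```

When particles are duplicated during resampling, the copies can initially share the same root; a later `updated` call gives a copy its own \(O(\log N)\) path without touching the trees of its siblings.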

(Figure source: Montemerlo's paper.)

FastSLAM 1.0 runs as follows (a code sketch follows the list):

  • For each particle, draw a new pose by sampling from the motion model \(p(s_t | s_{t-1}, u_t)\) after applying the motion command. The newly generated particle set is approximately distributed as \(p(s^t | z^{t-1}, u^t)\).
  • Based on the sensor measurements, update the associated landmark EKFs of each particle.
  • Calculate the importance weight of each particle, which is \(p(z_t | s^t, z^{t-1}, u^t)\). If multiple observations are received, the weight is the product of the corresponding likelihoods.
  • Draw a new set of particles with replacement, with probability proportional to the importance weights.
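
Putting the four steps together, here is a minimal Python sketch of one FastSLAM 1.0 update. The motion and measurement models (a noisy rotate-then-translate model and a range-bearing sensor), the helper names, and the particle representation are all my illustrative assumptions, and data association is taken as known for now:

```python
import copy
import numpy as np

def wrap_angle(a):
    # Wrap an angle to [-pi, pi).
    return (a + np.pi) % (2 * np.pi) - np.pi

def sample_motion(pose, u, noise=(0.05, 0.05)):
    # Noisy rotate-then-translate model: sample s_t ~ p(s_t | s_{t-1}, u_t).
    rot = u[0] + np.random.normal(0, noise[0])
    trans = u[1] + np.random.normal(0, noise[1])
    th = wrap_angle(pose[2] + rot)
    return np.array([pose[0] + trans * np.cos(th),
                     pose[1] + trans * np.sin(th), th])

def predict_measurement(pose, lm):
    # Expected range-bearing to landmark `lm`, plus the Jacobian of the
    # measurement with respect to the landmark position.
    dx, dy = lm[0] - pose[0], lm[1] - pose[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, wrap_angle(np.arctan2(dy, dx) - pose[2])])
    H = np.array([[dx / r, dy / r], [-dy / q, dx / q]])
    return z_hat, H

def gauss_pdf(innov, S):
    # Density of a zero-mean 2D Gaussian with covariance S at `innov`.
    return np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / \
           (2 * np.pi * np.sqrt(np.linalg.det(S)))

def fastslam1_step(particles, u, observations, R):
    # One FastSLAM 1.0 update. Each particle is a dict holding a pose
    # (x, y, heading) and one small (mean, cov) EKF per landmark id.
    # `observations` is a list of (landmark_id, z) with z = (range, bearing);
    # R is the 2x2 measurement noise covariance.
    weights = []
    for p in particles:
        p["pose"] = sample_motion(p["pose"], u)          # step 1: sample pose
        w = 1.0
        for lm_id, z in observations:
            mu, sigma = p["landmarks"][lm_id]
            z_hat, H = predict_measurement(p["pose"], mu)
            S = H @ sigma @ H.T + R                      # innovation covariance
            innov = z - z_hat
            innov[1] = wrap_angle(innov[1])
            w *= gauss_pdf(innov, S)                     # step 3: weight
            K = sigma @ H.T @ np.linalg.inv(S)           # step 2: EKF update
            p["landmarks"][lm_id] = (mu + K @ innov,
                                     (np.eye(2) - K @ H) @ sigma)
        weights.append(w)
    w = np.array(weights)
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return [copy.deepcopy(particles[i]) for i in idx]    # step 4: resample
```

Note that the weight is computed from the innovation before the landmark EKF is updated, since it measures how well the prior landmark estimate predicts the new observation.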

To update the locations of the landmarks, we need to know which landmark generated each observation; this is the data association problem. One approach is maximum likelihood: evaluate \(p(z_t | s^t, z^{t-1}, u^t)\) under the hypothesis that \(\theta_i\) generated \(z_t\), and assign the observation to the landmark with the maximum likelihood.
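
Using the helpers from the sketch above, maximum-likelihood association might look like the following. The `new_landmark_threshold` and the new-landmark branch are my own illustrative additions, not from the paper:

```python
def associate(particle, z, R, new_landmark_threshold=1e-6):
    # Evaluate the observation likelihood under the hypothesis that each
    # landmark generated z, and return the id of the most likely one.
    best_id, best_lik = None, new_landmark_threshold
    for lm_id, (mu, sigma) in particle["landmarks"].items():
        z_hat, H = predict_measurement(particle["pose"], mu)
        S = H @ sigma @ H.T + R
        innov = z - z_hat
        innov[1] = wrap_angle(innov[1])
        lik = gauss_pdf(innov, S)
        if lik > best_lik:
            best_id, best_lik = lm_id, lik
    return best_id  # None means "initialize a new landmark"
```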

I implemented FastSLAM 1.0 and used pygame for a simulation. The result is really good, and I had a lot of fun during the implementation. The code can be found here.

The green dots are the landmarks, the brown dots are the current landmark estimates, the blue dot is the robot, and the red dots are the particles. The initial position is given, but the orientation is unknown. I also include a certain level of motion and sensor noise to test the robustness of the algorithm.

In a future post, I will explore FastSLAM 2.0 and the time-complexity optimization.