If you travel to a new place like Tokyo, the first thing you have to do is find yourself on the map before heading to your hotel or grabbing ramen. An autonomous robot in an unfamiliar environment has to do the same thing before moving a single step. This process is called localization. The most common method for localization is GPS, which Google Maps uses to figure out where you are. However, precision is a problem. The US government claims its civilian GPS achieves an accuracy of 7.8 meters at a 95% confidence level. This means GPS can tell you that you are somewhere within a circle of radius 7.8 meters on the map (assuming the accuracy is the same in every direction), but it can't tell whether you are on the sidewalk or in the middle of the road. That could be a headache for a robot navigating a corridor. We need a much better way to do this.
To locate the robot in its environment with better precision, it needs various sensors to perceive its surroundings, just as humans use their eyes and ears. The robot measures its distance to different landmarks in order to locate itself. Let's say our robot takes a set of measurements every time it moves a step. The problem then becomes: given all of these measurements, how can the robot find itself on a map with fixed landmarks?
Let's take a probabilistic view of the problem. The robot may be at any point on the map, but some points are more likely than others. If we call the event that the robot is at a certain point a state, then the set of all such events covering the whole map becomes a state space. If we know the probability distribution of the robot's position over the state space, we have a very good guess of where the robot is.
To get this distribution, we can adopt a Bayesian approach and take advantage of the measurements. Every time the robot moves, it takes a set of measurements, and we use them to update the prior distribution into a posterior distribution. As measurements accumulate, we get an increasingly accurate estimate of the robot's location.
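To make the Bayesian update concrete, here is a minimal sketch (not the Pygame implementation described later) of how a set of position hypotheses could be reweighted by one range measurement. It assumes a 2D world, a known landmark position, and a Gaussian sensor model with standard deviation `noise_std`; all of these names and choices are illustrative.

```python
import numpy as np

def update_weights(particles, weights, measurement, landmark, noise_std=1.0):
    """Reweight position hypotheses by the likelihood of a range measurement.

    particles: (N, 2) array of hypothetical (x, y) positions.
    weights:   (N,) prior probability of each hypothesis.
    measurement: measured distance from the robot to `landmark`.
    The Gaussian likelihood model and noise_std are assumptions for this sketch.
    """
    # Distance each hypothesis predicts it would measure to the landmark
    predicted = np.linalg.norm(particles - landmark, axis=1)
    # Likelihood of the actual measurement under a Gaussian sensor model
    likelihood = np.exp(-0.5 * ((predicted - measurement) / noise_std) ** 2)
    # Bayes rule: posterior is proportional to prior times likelihood
    weights = weights * likelihood
    return weights / weights.sum()
```

Hypotheses whose predicted measurement matches the actual one end up with most of the probability mass, which is exactly the "prior to posterior" step described above.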
Since the distribution can't be determined analytically, we can use a Monte Carlo method to approximate it; this approach is better known as the particle filter. There are many variants of particle filters. The simplest one I know of proceeds as follows:
The key observation is that if some particles would produce the same measurements as the robot's actual measurements, the robot is very likely at the positions of those particles, so they have a higher probability of surviving resampling. Sampling with replacement means one particle can be drawn multiple times, so after a certain number of iterations all the particles may end up at the same position. The distribution of the particles then approximates the posterior distribution over the state space.
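The resampling-with-replacement step can be sketched in a few lines of NumPy (again, a generic illustration rather than the implementation below); `np.random.Generator.choice` with `replace=True` and `p=weights` does the weighted draw:

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Draw N particles with replacement, proportional to their weights."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    # High-weight particles are likely to be drawn several times;
    # low-weight particles are likely to disappear.
    idx = rng.choice(n, size=n, replace=True, p=weights)
    # After resampling, every surviving particle is equally weighted again
    return particles[idx], np.ones(n) / n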
Here is my implementation of the particle filter using Pygame.
The robot is the blue circle, and the distance measurements are shown as green lines. The red dots are the particles. I placed 1,000 particles in my 200 × 200 world.
After 30-50 iterations, all of the particles converge to a single position, and most of the time our predicted pose is very close to the robot's actual pose. However, we can still notice a difference, for three reasons:
One way to address the second problem is to increase the number of particles, which may require more computing power. Another is to scatter each resampled particle around its original position according to a distribution (e.g., a Gaussian) at the resampling stage. This makes the pose estimate more robust.
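The Gaussian-scatter variant of resampling might look like the following sketch; `jitter_std` is an assumed tuning parameter, not a value from the implementation above:

```python
import numpy as np

def resample_with_jitter(particles, weights, jitter_std=1.0, rng=None):
    """Resample with replacement, then perturb each copy with Gaussian noise.

    jitter_std (an assumed tuning knob) spreads duplicated particles around
    their original position, so the filter keeps exploring nearby poses
    instead of collapsing into identical copies.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    idx = rng.choice(n, size=n, replace=True, p=weights)
    noise = rng.normal(0.0, jitter_std, size=particles.shape)
    return particles[idx] + noise
```

With `jitter_std=0` this degenerates to plain resampling; a small positive value trades a little estimation noise for resilience against premature convergence.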
In summary, the particle filter is a non-parametric method for estimating a distribution over a state space. It can represent multi-modal distributions, but it only performs well on low-dimensional problems. As the dimension increases, it suffers from the curse of dimensionality: the number of particles required to approximate the distribution grows exponentially.