Monday, January 12, 2009

Probabilistic Localization

Navigation
Navigation is the process of keeping the position or the trajectory to a goal in mind (that is, in the memory of the robot or a computer). There are two main approaches: localization (position estimation) plus path planning, and direct computation of the course to the goal.
Two things are necessary for this: a model of the world (a map) and the sensor readings. There are two types of maps: topological and geometrical. We will deal with a geometrical one (an image). There are good arguments for probabilistic localization:

Sensor readings carry random noise, and the physics of a robot is never exactly known.

Probabilistic formulation of the localization problem:

We keep a belief about the real position (different representations are possible) and update this belief with the robot physics (the action model) and the sensor readings.
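Written as a formula, this is the standard Markov localization update (the formula itself is not from the lecture; the notation matches the one used below, with L(T) the position, A(T) the action and S(T) the sensor reading at time T):

Bel(L(T)) = n * P(S(T) | L(T)) * Sum over all L(T-1) of [ P(L(T) | A(T-1), L(T-1)) * Bel(L(T-1)) ]

where n is a normalization constant.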


a.) continuous, single hypothesis (e.g. Kalman)
b.) continuous, multiple hypothesis (e.g. Kalman)
c.) discrete, multiple hypothesis, geometrical map (e.g. Markov, Monte Carlo)
d.) discrete, multiple hypothesis, topological map (e.g. Markov)

Monte-Carlo-Localization (MCL)
Conventional localization methods like Markov localization need many resources to compute the approximation of the localization belief.
Because the map is discretized, the result can always be made more precise by refining the grid, but only at an unbounded cost in resources. For this reason we need a method with a finite number of belief representations. These belief representations are called samples in the following parts of the description.
These points are characteristic of the Monte-Carlo-Localization used here:

Randomized, a finite number of samples, and importance factors to change the probability distribution.

The Samples
A sample consists of an estimated position (x, y, θ) as well as an importance factor.
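As a small sketch in Python (the field names are my own choice, not from any fixed API):

from dataclasses import dataclass

@dataclass
class Sample:
    x: float                 # estimated x position on the map
    y: float                 # estimated y position on the map
    theta: float             # estimated heading
    importance: float = 0.0  # importance factor (weight) of this hypothesis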

The Algorithm
Initial belief: a set of k samples with equally distributed importance factors of 1/k.
Known: in the pure setting only the sensor readings; here additionally the orientation relative to the map.
Goal: compute a peak of the belief at the real robot position.

Update of the belief (a code sketch of the whole loop follows after step 4):
1. Randomly choose a sample from the sample set with respect to the importance factors.
-> position L(T-1)

2. Pick the next position with the robot's action model P(L(T) | A(T-1), L(T-1)).
-> position L(T)

3. Give the sample a new unnormalized importance P(S(T) | L(T)) from the sensor model and put it into the new sample set.

4. Repeat steps 1-3 k times and normalize the importances of the new sample set.
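As announced above, here is a minimal Python sketch of one complete belief update. sample_motion_model and sensor_importance are placeholders for the action and sensor models (sketched further below); the point here is the structure of the loop:

import random

def mcl_update(samples, tacho_counters, sensor_readings, k):
    new_samples = []
    weights = [s.importance for s in samples]
    for _ in range(k):
        # 1. roulette wheel selection with respect to the importance factors
        old = random.choices(samples, weights=weights, k=1)[0]
        # 2. draw the next position from the action model P(L(T) | A(T-1), L(T-1))
        new = sample_motion_model(old, tacho_counters)
        # 3. unnormalized importance P(S(T) | L(T)) from the sensor model
        new.importance = sensor_importance(new, sensor_readings)
        new_samples.append(new)
    # 4. normalize the importances (assumes at least one sample got weight > 0)
    total = sum(s.importance for s in new_samples)
    for s in new_samples:
        s.importance /= total
    return new_samples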

Closer look at the steps
1. Randomly choose a sample from the sample set with respect to the importance factors.
-> position L(T-1)
Roulette wheel selection: the higher the importance of a sample, the bigger its area on the wheel. A point on the wheel is chosen at random, so samples with higher importance have a higher probability of being chosen. This is shown in the figure:
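A straightforward way to implement this selection in Python (random.choices with weights does the same thing; this sketch spells the wheel out):

import random

def roulette_select(samples):
    # total circumference of the wheel = sum of all importance factors
    total = sum(s.importance for s in samples)
    point = random.uniform(0.0, total)  # spin the wheel
    acc = 0.0
    for s in samples:
        acc += s.importance             # this sample's area on the wheel
        if point <= acc:
            return s
    return samples[-1]                  # guard against rounding errors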

2. Pick the next position with the robot's action model (which transforms the tacho counters into a position offset) P(L(T) | A(T-1), L(T-1)). The tacho counters at time T are represented by A(T).
-> position L(T)
The next position is drawn from the error function of the action model: the sample is moved according to the tacho counters, and the action model adds uncertainty to the result. This is shown in the figure:
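A sketch of such an action model in Python. I assume a differential drive whose tacho counters are converted to a driven distance and a rotation by a hypothetical helper counts_to_offset; the Gaussian noise stands in for the error function of the action model:

import math
import random

def sample_motion_model(sample, tacho_counters, noise=0.05):
    dist, dtheta = counts_to_offset(tacho_counters)  # hypothetical conversion
    # every sample gets a slightly different offset -> uncertainty in the belief
    dist += random.gauss(0.0, noise * abs(dist))
    dtheta += random.gauss(0.0, noise * abs(dtheta))
    theta = sample.theta + dtheta
    return Sample(x=sample.x + dist * math.cos(theta),
                  y=sample.y + dist * math.sin(theta),
                  theta=theta)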
3. Give the sample a new unnormalized importance P(S(T) | L(T)) from the sensor model and put it into the new sample set.

Sensor readings: the expected sensor readings are compared with the real sensor readings; the bigger the difference, the lower the probability of the belief. Hopefully the samples clump at the right position.

A = (100.0-|realsensorA - expectedsensorA|)^10
B = (100.0-|realsensorB - expectedsensorB|)^10
C = (100.0-|realsensorC - expectedsensorC|)^10
newimportance = (A+B+C)^10
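The same computation as a Python sketch; expected_sensor is a hypothetical lookup that predicts from the map what a sensor should read at the sample's pose:

def sensor_importance(sample, real_readings):
    parts = []
    for name in ('A', 'B', 'C'):
        expected = expected_sensor(sample, name)  # hypothetical map lookup
        diff = abs(real_readings[name] - expected)
        parts.append((100.0 - diff) ** 10)
    # the sum is raised to the 10th power again, exactly as in the formula above
    return sum(parts) ** 10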

The figure shows a robot with laser sensors and its belief distribution. The more accurate the sensors, the better and more precise the belief (lasers are more accurate than sonar sensors).

Pros and cons
+ very efficient with about 100 samples
+ robust even with imperfect maps
+ any-time: more resources = more samples
- samples concentrate strongly around one hypothesis, which causes problems with a kidnapped robot
