
Particle filters and Monte Carlo localization

Non-Gaussian estimation for indoor robots. Represent belief as a cloud of samples; resample after each measurement. The algorithm behind every ROS amcl node.

by RobotForge
#slam #particle-filter #localization

A Kalman filter assumes a single Gaussian belief. Indoor robots often have multimodal beliefs: "I'm in aisle 3 or aisle 7." A particle filter represents belief as a cloud of samples, each with a weight. Multimodal becomes natural; non-Gaussian becomes natural; the cost is computation. ROS's amcl node is the canonical implementation, still the default localizer in Nav2 deployments in 2026.

The idea in one paragraph

Maintain N particles, each a guess for the robot's state. At each step:

  1. Predict: move every particle forward through the motion model (with noise).
  2. Update: compute the likelihood of the observed measurement at each particle's state. Weight each particle by that likelihood.
  3. Resample: sample N new particles from the weighted distribution. Particles in high-likelihood regions reproduce; low-likelihood ones die.

The cloud's density is the posterior belief. Take the mean (or weighted mean) for a point estimate.

Why N particles, not one

If the belief is unimodal Gaussian, a Kalman filter (1 number for mean + covariance) is more efficient. Particle filters earn their cost when:

  • Multimodal: "I could be at any of these locations." A Kalman filter averages them into a useless midpoint (see the demo after this list).
  • Non-Gaussian noise: laser-scanner returns are not Gaussian; particle weights handle arbitrary likelihoods.
  • Non-linear motion / measurement: EKF linearization fails for very non-linear systems; particles propagate through any function.
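
To make the midpoint failure concrete, here is a tiny demo (the aisle coordinates are made up):

import numpy as np

# Bimodal belief: half the particles near aisle 3 (x = 3), half near aisle 7 (x = 7).
particles = np.concatenate([np.random.normal(3, 0.1, 500),
                            np.random.normal(7, 0.1, 500)])
print(particles.mean())  # ~5.0: a single-Gaussian summary puts the robot between aisles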

For indoor mobile robots the multimodal case dominates: the kidnapped-robot problem (the robot wakes up somewhere unknown) and symmetric environments (two corridors look identical until you see a label).

The 50-line Python prototype

import numpy as np

class ParticleFilter:
    def __init__(self, n=1000, init_range=((0,10), (0,10))):
        self.particles = np.column_stack([
            np.random.uniform(*init_range[0], n),
            np.random.uniform(*init_range[1], n),
        ])
        self.weights = np.ones(n) / n

    def predict(self, dx, dy, sigma=0.1):
        noise = np.random.randn(*self.particles.shape) * sigma
        self.particles += np.array([dx, dy]) + noise

    def update(self, measurement, measurement_likelihood):
        # measurement_likelihood(particle, z) -> p(z | x)
        likelihoods = np.array([
            measurement_likelihood(p, measurement) for p in self.particles])
        self.weights *= likelihoods
        self.weights += 1e-300       # avoid all-zero
        self.weights /= self.weights.sum()

    def resample(self):
        # Systematic resampling — low variance
        positions = (np.arange(len(self.particles)) +
                     np.random.uniform()) / len(self.particles)
        cumulative = np.cumsum(self.weights)
        cumulative[-1] = 1.0  # guard against round-off so searchsorted stays in range
        idx = np.searchsorted(cumulative, positions)
        self.particles = self.particles[idx]
        self.weights = np.ones_like(self.weights) / len(self.weights)

    def estimate(self):
        return np.average(self.particles, weights=self.weights, axis=0)

Fifty lines. Runs on a laptop with 5000 particles and localizes a 2D robot in real time.
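
A minimal usage loop under an assumed sensor: a range measurement to a beacon at a known position. BEACON, gaussian_likelihood, and the toy data are illustrations, not part of the prototype:

BEACON = np.array([5.0, 5.0])

def gaussian_likelihood(particle, z, sigma=0.5):
    # p(z | x): plausibility of the measured range z from this particle's position.
    expected = np.linalg.norm(particle - BEACON)
    return np.exp(-0.5 * ((z - expected) / sigma) ** 2)

pf = ParticleFilter(n=5000)
for dx, dy, z in [(1.0, 0.0, 5.2), (1.0, 0.0, 4.1), (0.0, 1.0, 3.3)]:  # toy data
    pf.predict(dx, dy)
    pf.update(z, gaussian_likelihood)
    pf.resample()
print(pf.estimate())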

Resampling: the silent gotcha

Without resampling, a few particles eventually carry all the weight; the rest are zero-weight ballast. With too-aggressive resampling, particle diversity collapses (sample impoverishment). The standard fixes:

  • Effective sample size (ESS) = 1 / Σ w_i²: an estimate of "how many particles still matter." When ESS drops below 50% of N, resample (see the sketch after this list).
  • Systematic resampling: lower variance than multinomial. Spreads stratified samples through the weighted distribution.
  • Add noise on resample: regularization. Prevents collapse to a single state.
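
A minimal sketch of ESS-gated resampling on top of the prototype (the 0.5 threshold is the rule of thumb above):

def effective_sample_size(weights):
    # 1 / sum(w_i^2): equals N for uniform weights, 1 when one particle dominates.
    return 1.0 / np.sum(weights ** 2)

def maybe_resample(pf, threshold=0.5):
    if effective_sample_size(pf.weights) < threshold * len(pf.weights):
        pf.resample()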

The kidnapped-robot problem

The robot is teleported to a new location with no warning. All particles are now far from the truth. If you don't intervene, the filter never converges to the new truth: every particle has near-zero weight, and resampling can only duplicate particles that already exist; it cannot create hypotheses in the new region.

Fix: random injection. Periodically replace some fraction of particles with uniformly-sampled new ones. Adaptive injection (more if all weights are low, less if confident) is what AMCL does.
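
A sketch of uniform injection on top of the prototype. The 5% fraction and the ranges are illustrative; AMCL adapts the fraction from short- and long-term averages of the measurement likelihood:

def inject_random(pf, fraction=0.05, x_range=(0, 10), y_range=(0, 10)):
    # Replace a random subset with fresh uniform guesses so a teleported
    # robot can be rediscovered anywhere in the map.
    n = len(pf.particles)
    k = int(fraction * n)
    idx = np.random.choice(n, size=k, replace=False)
    pf.particles[idx, 0] = np.random.uniform(*x_range, k)
    pf.particles[idx, 1] = np.random.uniform(*y_range, k)
    pf.weights[idx] = 1.0 / n
    pf.weights /= pf.weights.sum()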

AMCL in ROS 2

Adaptive Monte Carlo Localization. The default localization for Nav2 in 2026.

  • Inputs: a static occupancy-grid map + laser scan + odometry.
  • State: the robot's 2D pose (x, y, θ).
  • Measurement model: ray-cast each laser beam in the map; compare to actual reading.
  • Adaptive: KLD-sampling grows and shrinks the particle count with uncertainty, from 100 particles (when well-localized) up to 5000 (when uncertain).

Configure via amcl.yaml. The ~30 parameters are well-documented; defaults work for most rooms.
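
For a flavor of the knobs, a hypothetical excerpt (the parameter names are real AMCL parameters; the values are illustrative, not tuned recommendations):

amcl:
  ros__parameters:
    min_particles: 100
    max_particles: 5000
    update_min_d: 0.2           # meters of travel before the filter updates
    update_min_a: 0.2           # radians of rotation before the filter updates
    resample_interval: 1        # filter updates between resampling steps
    recovery_alpha_slow: 0.001  # non-zero enables random-particle injection
    recovery_alpha_fast: 0.1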

Practical limits

  • Curse of dimensionality: N must scale exponentially with state dimension to maintain coverage. Particle filters work for 2D/3D pose; struggle past 6D.
  • Likelihood evaluation cost: each particle needs a likelihood; for laser scans, ~360 beams × 5000 particles = 1.8M ray-casts per update. Optimize via a likelihood field, beam downsampling, or precomputed lookup tables (see the sketch after this list).
  • Map quality: a wrong occupancy map ruins localization regardless of filter quality. Map carefully, validate before deployment.
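
A minimal sketch of the likelihood-field idea, assuming an occupancy grid where non-zero marks obstacles. scipy's distance transform does the precomputation, and beam endpoints become array lookups instead of ray-casts:

import numpy as np
from scipy.ndimage import distance_transform_edt

def build_likelihood_field(occupancy, resolution, sigma=0.2):
    # Distance in meters from every cell to the nearest obstacle cell.
    dist = distance_transform_edt(occupancy == 0) * resolution
    # Unnormalized p(beam endpoint lands in this cell).
    return np.exp(-0.5 * (dist / sigma) ** 2)

def scan_likelihood(field, endpoints):
    # endpoints: (n_beams, 2) grid indices of laser endpoints for one particle pose.
    return np.prod(field[endpoints[:, 0], endpoints[:, 1]])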

Variants worth knowing

  • FastSLAM: combines particle filter (for robot pose) with EKFs (for landmarks). Efficient for SLAM with many landmarks.
  • Rao-Blackwellized particle filter: more general framework for "particle on the hard part, analytical on the easy part."
  • Multinomial vs systematic vs stratified resampling: different variance trade-offs. Systematic is the default for production code.
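
For contrast, multinomial resampling is a few lines on top of the prototype, at the cost of higher variance than the systematic version above:

n = len(pf.particles)
idx = np.random.choice(n, size=n, p=pf.weights)  # sample with replacement
pf.particles = pf.particles[idx]
pf.weights = np.ones(n) / n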

When to use a particle filter

  • 2D/3D mobile-robot localization with a known map → AMCL or similar particle filter. Default.
  • Highly non-Gaussian noise (range-bearing sensor, vision-based localization with discrete labels).
  • Bayesian inference where conjugate distributions don't apply.

When to skip

  • State dimension > ~6: use EKF, UKF, or factor graphs.
  • Belief is reliably unimodal Gaussian: Kalman is way faster.
  • Hard real-time constraints: per-update cost scales with N and varies with the adaptive particle count, so tail latency makes particle filters risky for high-frequency control.

Exercise

In a 2D world with three rooms and a corridor, simulate a robot with noisy odometry and a laser. Place all particles uniformly initially. After 10 motion+observation cycles, watch the cloud collapse to the truth. Then add a kidnapping event (teleport the robot). Watch convergence fail without random injection, and succeed with it.
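
A starting scaffold, reusing the earlier sketches (ParticleFilter, BEACON, gaussian_likelihood, maybe_resample, inject_random); the single-beacon range sensor stands in for the laser, and the geometry and noise levels are arbitrary:

pf = ParticleFilter(n=2000, init_range=((0, 20), (0, 10)))
robot = np.array([2.0, 5.0])

for step in range(20):
    if step == 10:
        robot = np.array([15.0, 2.0])     # kidnapping: teleport the robot
    robot = robot + np.array([0.5, 0.0])  # true motion
    pf.predict(0.5, 0.0, sigma=0.2)       # noisy odometry
    z = np.linalg.norm(robot - BEACON) + np.random.randn() * 0.3
    pf.update(z, gaussian_likelihood)
    maybe_resample(pf)
    # inject_random(pf, 0.05, (0, 20), (0, 10))  # uncomment to survive the kidnapping
    print(step, pf.estimate())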

Next

Factor graphs — the modern formulation that replaces sequential filtering with batch optimization, scaling to large SLAM problems.
