Glossary

Particle Filter

The particle filter, also known as sequential Monte Carlo (its simplest variant is the bootstrap filter of Gordon, Salmond & Smith, 1993), is the dominant approach to recursive Bayesian estimation when the system is non-linear or the posterior is non-Gaussian or multimodal. It represents the belief over the latent state $x_t$ as a set of $N$ weighted samples (particles):

$$p(x_t \mid z_{1:t}) \approx \sum_{i=1}^{N} w_t^{(i)}\, \delta\!\left(x_t - x_t^{(i)}\right), \qquad \sum_i w_t^{(i)} = 1$$

Each particle is a hypothesis about the true state; the weights encode how plausible each hypothesis is given the measurements so far.
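As a concrete sketch of this representation (pure Python, with an illustrative 1-D state; the variable names and prior are hypothetical choices, not part of any particular library):

```python
import random

random.seed(0)  # reproducibility for this sketch

# A particle set: N hypotheses about a 1-D state, with normalised weights.
N = 1000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]  # samples x_t^(i)
weights = [1.0 / N] * N                                 # weights sum to one

# Any posterior expectation becomes a weighted sum over particles,
# e.g. the mean state estimate:
estimate = sum(w * x for x, w in zip(particles, weights))
```

With uniform weights this is just the sample mean; after measurement updates the same sum automatically emphasises the high-weight hypotheses.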

The standard sampling–importance–resampling (SIR) cycle has three steps per timestep.

Sample (predict): draw each next particle from the dynamics:

$$x_t^{(i)} \sim p(x_t \mid x_{t-1}^{(i)}, u_t)$$

For a robot, this propagates each pose hypothesis forward by the noisy motion model.
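A minimal sketch of this prediction step, assuming a hypothetical 1-D motion model in which the state advances by the control $u$ plus Gaussian noise (the noise level is an arbitrary illustrative choice):

```python
import random

random.seed(1)

def predict(particles, u, motion_std=0.5):
    """Propagate every particle through the noisy dynamics:
    x_t = x_{t-1} + u + eps,  eps ~ N(0, motion_std^2)."""
    return [x + u + random.gauss(0.0, motion_std) for x in particles]

# 100 hypotheses all at the origin, commanded to move one unit:
particles = [0.0] * 100
particles = predict(particles, u=1.0)
```

Because the noise is sampled independently per particle, identical hypotheses fan out into a cloud, which is exactly how the filter represents growing uncertainty between measurements.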

Importance weighting (update): weight each particle by the likelihood of the new measurement under that hypothesis:

$$w_t^{(i)} \propto w_{t-1}^{(i)}\, p(z_t \mid x_t^{(i)})$$

Particles whose predicted observations match the data well get high weight; mismatched ones get low weight. Weights are then normalised so they sum to one.
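The weighting step might look like the following sketch, assuming an illustrative sensor that measures the 1-D position directly with Gaussian noise (both the sensor model and the function names are hypothetical):

```python
import math

def likelihood(z, x, meas_std=1.0):
    """p(z | x) up to a constant, for a direct position measurement
    corrupted by Gaussian noise (an illustrative sensor model)."""
    return math.exp(-0.5 * ((z - x) / meas_std) ** 2)

def update(particles, weights, z):
    """Multiply each weight by the measurement likelihood, then normalise."""
    new_w = [w * likelihood(z, x) for x, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

# The hypothesis at the measured value z = 2.0 gains weight;
# hypotheses far from it lose weight:
weights = update([0.0, 2.0, 4.0], [1 / 3] * 3, z=2.0)
```

Note that the likelihood only needs to be evaluated pointwise, never inverted or linearised; this is why arbitrary sensor models fit this framework.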

Resample: draw $N$ new particles with replacement from the weighted set, with probability proportional to weight. After resampling, all weights are reset to $1/N$. Resampling concentrates effort on plausible hypotheses and prunes implausible ones, but doing it at every step erodes particle diversity (sample impoverishment). Practical filters therefore resample only when the effective sample size $N_\text{eff} = 1 / \sum_i (w_t^{(i)})^2$ falls below a threshold (commonly $N/2$).
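The resampling step and the $N_\text{eff}$ trigger can be sketched as follows. This uses systematic resampling, one common low-variance scheme among several (multinomial and stratified resampling are alternatives); the example particle set is illustrative:

```python
import random

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2); equals N for uniform weights."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(particles, weights):
    """Draw N particles with probability proportional to weight, using a
    single random offset (lower variance than N independent draws), and
    reset all weights to 1/N."""
    N = len(particles)
    u0 = random.random() / N
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    resampled, j = [], 0
    for i in range(N):
        u = u0 + i / N
        while j < N - 1 and cumulative[j] < u:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / N] * N

particles = [1.0, 2.0, 3.0, 4.0, 5.0]
weights = [0.7, 0.1, 0.1, 0.05, 0.05]
if effective_sample_size(weights) < len(particles) / 2:  # N_eff ~ 1.94 < 2.5
    particles, weights = systematic_resample(particles, weights)
```

After the call, the dominant hypothesis (weight 0.7) appears several times in the new set while the low-weight hypotheses are mostly pruned, which is the concentration effect described above.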

Particle filters handle situations the Kalman family cannot. Multimodal posteriors, where a robot is uncertain between two corridors that look identical, are represented naturally as clusters of particles. Heavy-tailed or arbitrary noise distributions are handled directly because the likelihood enters only through evaluation, not through Gaussian assumptions. Highly non-linear dynamics, such as a falling object subject to chaotic aerodynamics, are simulated rather than linearised.

The canonical robotics application is Monte Carlo localisation (Dellaert, Fox, Burgard & Thrun, 1999): given a map of a building, a robot's laser scans, and odometry, particles spread initially across the entire map and converge as the robot moves and observes. The filter solves the global localisation problem (where am I?) and the kidnapped-robot problem (someone moved me; where am I now?), both of which EKF localisation, with its single unimodal estimate, cannot.

The cost is computational: $N$ ranges from hundreds to millions depending on the state-space dimension, and weight degeneracy in high dimensions is a fundamental limitation (the curse of dimensionality means most particles get negligible weight). Hybrid Rao–Blackwellised filters mitigate this by analytically marginalising linear-Gaussian sub-states. Despite these costs, the particle filter is the most flexible recursive Bayesian estimator in widespread use, and the natural choice whenever Gaussian approximations break down.

Related terms: Kalman Filter, Extended Kalman Filter, SLAM, MCMC, Bayes' Theorem

This site is currently in Beta. Contact: Chris Paton

AI tools used: Claude (research, coding, text), ChatGPT (diagrams, images), Grammarly (editing).