Visualisation

K-nearest neighbours and Voronoi tessellation

Last reviewed 5 May 2026

With k = 1, the classifier carves the plane into Voronoi cells around the training points; the decision boundary follows the cell edges.

From Chapter 7: Supervised Learning

Glossary: k-nearest neighbours, Voronoi diagram

Transcript

K-nearest neighbours is one of the simplest classifiers. To predict the class of a new point, find its k nearest training points and take a majority vote over their labels.

With k equals one, the decision rule is: copy the label of the nearest training point.
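As a minimal sketch of that rule, with an illustrative toy training set (the points and labels below are invented for the example):

```python
import math

# Toy training set: 2-D points with class labels (illustrative values).
train = [((0.0, 0.0), "red"), ((1.0, 0.0), "red"), ((3.0, 3.0), "blue")]

def predict_1nn(query):
    # k = 1: copy the label of the single nearest training point.
    nearest = min(train, key=lambda item: math.dist(item[0], query))
    return nearest[1]

print(predict_1nn((0.2, 0.1)))  # nearest point is (0.0, 0.0) -> "red"
```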

Plot the training points. For each location in the plane, ask: which training point is closest? The plane partitions into regions, one per training point. Each region is the set of locations where that training point wins the closest-neighbour competition.

This partition is the Voronoi diagram. The decision boundary of one-nearest-neighbour follows the Voronoi cell edges.
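One way to see the partition is to sample a grid of locations and record which training point is closest at each; adjacent grid cells with different winners straddle a Voronoi edge. A sketch, using hypothetical points:

```python
import math

# Hypothetical training points; each index identifies one Voronoi cell.
points = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]

def voronoi_cell(x, y):
    # The cell a location belongs to is the index of its closest point.
    return min(range(len(points)), key=lambda i: math.dist(points[i], (x, y)))

# Coarse grid over the plane: wherever the index changes between
# neighbouring locations, a Voronoi edge (the 1-NN boundary) runs.
grid = [[voronoi_cell(x * 0.5, y * 0.5) for x in range(7)] for y in range(7)]
```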

The boundary is jagged and sensitive to individual training points: a single noisy point creates a small island with the wrong label.

Increase k. With k equals five, the prediction is the majority of the five closest. The boundary smooths. Outliers stop dominating their immediate neighbourhood.

Larger k means smoother boundaries but greater bias. In the limit of k equals n, where n is the number of training points, every location gets the overall majority class regardless of position.
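The effect of k can be shown directly. In the invented dataset below, a mislabelled "blue" point sits inside a "red" cluster: k = 1 copies the noise, k = 5 outvotes it, and k = n returns the overall majority class everywhere:

```python
import math
from collections import Counter

# A "red" cluster with one mislabelled "blue" point among them (invented data).
train = [((0.0, 0.0), "red"), ((0.5, 0.0), "red"), ((0.0, 0.5), "red"),
         ((0.5, 0.5), "red"),
         ((0.25, 0.25), "blue"),  # the noisy point
         ((5.0, 5.0), "blue")]

def predict_knn(query, k):
    # Majority vote over the k nearest training points.
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

query = (0.25, 0.26)            # right next to the noisy point
print(predict_knn(query, 1))    # "blue" -- the outlier dominates
print(predict_knn(query, 5))    # "red"  -- four red neighbours outvote it
print(predict_knn((9.0, 9.0), 6))  # k = n: majority class, position irrelevant
```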

K-nearest neighbours is a lazy learner. There is no training step: just store the data. Prediction costs O(n) per query with a brute-force scan of all n training points. For small, low-dimensional datasets, it is surprisingly competitive.
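The lazy-learner structure can be sketched as a small class (the `LazyKNN` name and `fit`/`predict` interface are illustrative, not from the text): fitting only memorises the data, and all the work happens at prediction time.

```python
import math
from collections import Counter

class LazyKNN:
    """Illustrative lazy learner: fit stores the data, predict does the work."""

    def __init__(self, k=1):
        self.k = k
        self.data = []

    def fit(self, points, labels):
        # "Training" is just memorising the dataset.
        self.data = list(zip(points, labels))
        return self

    def predict(self, query):
        # Each prediction scans all n stored points: O(n) per query.
        ranked = sorted(self.data, key=lambda item: math.dist(item[0], query))
        votes = Counter(label for _, label in ranked[:self.k])
        return votes.most_common(1)[0][0]

model = LazyKNN(k=1).fit([(0, 0), (1, 1), (5, 5)], ["a", "a", "b"])
print(model.predict((0.1, 0.1)))  # "a"
```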
