Glossary

Eigenvalue and Eigenvector

An Eigenvector of a square matrix $A$ is a non-zero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda \mathbf{v}$ for some scalar $\lambda$, which is called the corresponding Eigenvalue. Geometrically, an eigenvector is a direction that the transformation merely stretches (or shrinks, or reverses)—it is not rotated or sheared. The eigenvalue quantifies the stretching factor. Every $n \times n$ matrix has exactly $n$ eigenvalues, counting multiplicities and allowing complex numbers.
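The defining equation can be checked numerically. A minimal sketch using NumPy, with an arbitrary symmetric $2 \times 2$ matrix chosen for illustration:

```python
import numpy as np

# An illustrative 2x2 symmetric matrix (chosen arbitrarily).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors
# (one eigenvector per column of the second return value).
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining property A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

For this matrix the eigenvalues are $3$ and $1$: the transformation stretches by a factor of 3 along one direction and leaves the perpendicular direction unscaled.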

The eigendecomposition $A = Q \Lambda Q^{-1}$ expresses a diagonalisable matrix as a product built from its eigenvectors (the columns of $Q$) and eigenvalues (the diagonal of $\Lambda$). Not every square matrix admits one, but for symmetric matrices the spectral theorem guarantees that it exists, that the eigenvalues are real, and that $Q$ can be chosen orthogonal, so $A = Q \Lambda Q^T$. This is a beautifully clean factorisation: every symmetric matrix is, in the right coordinate system, simply a diagonal scaling.
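The spectral theorem can likewise be verified directly. A small sketch, assuming a randomly generated symmetric matrix, using NumPy's `eigh` (which exploits symmetry and returns an orthogonal $Q$ with real eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = (S + S.T) / 2  # symmetrise to guarantee a spectral decomposition

# eigh is specialised for symmetric/Hermitian matrices.
eigvals, Q = np.linalg.eigh(A)
Lam = np.diag(eigvals)

# Q is orthogonal: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(4))

# A = Q Lam Q^T reconstructs the original matrix exactly.
assert np.allclose(Q @ Lam @ Q.T, A)
```

The reconstruction $Q \Lambda Q^T$ makes the "diagonal scaling in the right coordinates" reading concrete: $Q^T$ rotates into the eigenbasis, $\Lambda$ scales each axis, and $Q$ rotates back.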

Eigenvalues and eigenvectors appear throughout AI. Principal Component Analysis (PCA) finds the eigenvectors of the data's covariance matrix to identify directions of maximum variance. Google's PageRank algorithm computes the principal eigenvector of the web graph's transition matrix. Spectral clustering uses eigenvectors of graph Laplacians to partition data. The stability of dynamical systems—including neural network training dynamics—is governed by eigenvalues. The condition number (for the 2-norm, the ratio of largest to smallest singular value) determines numerical sensitivity. Far from an abstract curiosity, eigenstructure is a tool of immense practical power.
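The PCA connection can be sketched in a few lines. This is an illustrative toy example, not a production implementation: the data matrix and its anisotropic stretch are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical toy data: 200 points with variance ~9 along the first
# axis and ~0.25 along the second.
X = rng.standard_normal((200, 2)) @ np.diag([3.0, 0.5])

Xc = X - X.mean(axis=0)            # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)    # sample covariance matrix

# Eigenvectors of the covariance matrix are the principal components.
# eigh returns eigenvalues in ascending order, so sort descending.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]     # columns: directions of max variance

# Project the data onto the top principal component.
scores = Xc @ components[:, 0]
```

The top eigenvector recovers (up to sign) the direction in which the data was stretched, and its eigenvalue estimates the variance along that direction.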

Related terms: Matrix, Principal Component Analysis, Singular Value Decomposition

Discussed in:

Also defined in: Textbook of AI