Also known as: SVD
The Singular Value Decomposition (SVD) factors an $m \times n$ matrix $A$ as $A = U\Sigma V^T$, where $U$ is $m \times m$ orthogonal, $V$ is $n \times n$ orthogonal, and $\Sigma$ is an $m \times n$ rectangular diagonal matrix whose non-negative diagonal entries are the singular values. Unlike eigendecomposition, SVD applies to any matrix—square or rectangular, symmetric or not—making it arguably the single most useful matrix decomposition in applied mathematics.
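As a minimal sketch of the definition above, the decomposition can be computed with NumPy and checked directly; the matrix values here are illustrative, not from the text:

```python
import numpy as np

# A hypothetical 4x3 matrix (illustrative values only).
A = np.array([[3.0, 1.0, 2.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 4.0],
              [2.0, 3.0, 0.0]])

# full_matrices=True returns U as 4x4 and V^T as 3x3;
# s holds the singular values in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the m x n "rectangular diagonal" Sigma from s.
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)

# Check the factorisation and the orthogonality of U and V.
assert np.allclose(U @ Sigma @ Vt, A)
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Vt @ Vt.T, np.eye(3))
```

Note that `np.linalg.svd` returns $V^T$ directly, not $V$, which is a common source of confusion.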
The singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$ (the diagonal entries of $\Sigma$) quantify the "importance" of each direction in the transformation. Truncating the SVD to the top $k$ singular values yields the best rank-$k$ approximation to $A$ in both the Frobenius and spectral norms—a result known as the Eckart–Young theorem. This is the mathematical basis of dimensionality reduction, data compression, and noise removal.
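The Eckart–Young result can be verified numerically: the Frobenius error of the rank-$k$ truncation is $\sqrt{\sum_{i>k} \sigma_i^2}$ and the spectral-norm error is $\sigma_{k+1}$. A short sketch, using a random matrix as the example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

# Thin SVD: U is 6x5, s has 5 entries, Vt is 5x5.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation: keep only the top-k singular triplets.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.linalg.matrix_rank(A_k) == k

# Frobenius error equals sqrt of the sum of the discarded sigma_i^2.
err_fro = np.linalg.norm(A - A_k, 'fro')
assert np.isclose(err_fro, np.sqrt(np.sum(s[k:] ** 2)))

# Spectral-norm error equals the largest discarded singular value.
err_spec = np.linalg.norm(A - A_k, 2)
assert np.isclose(err_spec, s[k])
```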
In AI, SVD underlies PCA (which is SVD of the centred data matrix), latent semantic analysis (SVD of term-document matrices), collaborative filtering (low-rank matrix completion), and parameter-efficient fine-tuning methods like LoRA (which approximates weight updates as low-rank factors). Randomised SVD algorithms extend these ideas to matrices so large that forming them explicitly is impossible, enabling modern large-scale applications.
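The PCA connection mentioned above can be made concrete: running SVD on the centred data matrix recovers exactly the eigen-structure of the sample covariance matrix, with explained variances $\sigma_i^2 / (n - 1)$. A sketch, assuming synthetic Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))  # 100 samples, 4 features (synthetic)

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Variance explained by each principal component.
explained = s ** 2 / (X.shape[0] - 1)

# Cross-check: these match the eigenvalues of the sample covariance
# matrix, sorted in descending order.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
assert np.allclose(explained, eigvals)
```

The rows of `Vt` are the principal directions; projecting with `Xc @ Vt.T[:, :k]` gives the reduced $k$-dimensional representation.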
Related terms: Eigenvalue and Eigenvector, Principal Component Analysis, Matrix
Discussed in:
- Chapter 2: Linear Algebra — Eigenvalues & Eigenvectors
Also defined in: Textbook of AI