A Vector is, at its most elementary, an ordered list of numbers. We write a vector $\mathbf{v}$ in $n$-dimensional space as $\mathbf{v} = (v_1, v_2, \ldots, v_n)$, where each $v_i$ is called a component. In two dimensions a vector can be drawn as an arrow from the origin to a point; beyond three dimensions we can no longer visualise it, but the algebra generalises without difficulty, and many AI systems routinely operate in spaces with thousands or even millions of dimensions.
Vectors support two fundamental operations: addition (component-wise) and scalar multiplication (multiplying every component by a single number). These operations, subject to a handful of axioms, define a vector space. The norm of a vector measures its length; the most common is the Euclidean $L_2$ norm $\|\mathbf{v}\|_2 = \sqrt{v_1^2 + \cdots + v_n^2}$. Dividing a vector by its norm produces a unit vector, a process called normalisation.
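These operations can be sketched in a few lines of NumPy; the particular values here are illustrative, not from the text:

```python
import numpy as np

v = np.array([3.0, 4.0])
w = np.array([1.0, 2.0])

# Addition is component-wise: (3+1, 4+2)
print(v + w)            # [4. 6.]

# Scalar multiplication scales every component
print(2.0 * v)          # [6. 8.]

# Euclidean L2 norm: sqrt(3^2 + 4^2) = 5
norm = np.linalg.norm(v)
print(norm)             # 5.0

# Normalisation: dividing by the norm gives a unit vector
unit = v / norm
print(np.linalg.norm(unit))  # 1.0
```

Note that `np.linalg.norm` computes the $L_2$ norm by default, matching the formula above.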
In AI, a vector typically represents a data point or a collection of features. An image can be flattened into a vector of pixel values; a document can be represented as a bag-of-words vector; a user's behaviour becomes a vector of interaction features. The power of this representation is that it allows the full machinery of linear algebra—distances, angles, projections, transformations—to be applied to data that is, on its surface, not mathematical at all. Learning to see data as vectors is the first step toward understanding how AI systems work.
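A minimal sketch of two of the representations mentioned above, flattening an image and building a bag-of-words vector; the tiny image and toy vocabulary are invented for illustration:

```python
import numpy as np

# A 2x2 grayscale "image" flattened into a 4-dimensional vector
image = np.array([[0, 255],
                  [128, 64]])
pixel_vector = image.flatten()
print(pixel_vector)   # [  0 255 128  64]

# A bag-of-words vector: one component per vocabulary word,
# holding that word's count in the document
vocabulary = ["ai", "vector", "data"]
document = "vector data vector"
bow = np.array([document.split().count(word) for word in vocabulary])
print(bow)            # [0 2 1]
```

Once data is in this form, the linear-algebra machinery applies directly, e.g. `np.linalg.norm(a - b)` gives the Euclidean distance between any two such vectors.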
Related terms: Matrix, Dot Product, Embedding
Discussed in:
- Chapter 2: Linear Algebra — Vectors
Also defined in: Textbook of AI, Textbook of Medical AI