Glossary

MACE

MACE (Multi-ACE, where ACE stands for Atomic Cluster Expansion), introduced by Batatia, Kovács, Simm, Ortner and Csányi (NeurIPS 2022), is a class of equivariant graph neural network interatomic potentials that compute molecular and condensed-matter energies and forces with near-DFT accuracy at a tiny fraction of the cost. It has set the state of the art on numerous interatomic-potential benchmarks (rMD17, MD22, SPICE, Materials Project) and ships with interfaces to production simulation packages such as ASE and LAMMPS.

The core innovation is higher-body-order equivariant messages. Standard message-passing interatomic potentials (SchNet, NequIP, PaiNN) construct two-body messages (features built from pairs of atoms) and compose them through layers. To capture three-body, four-body and higher correlations, these models rely on stacking many layers, which is expensive and can hurt locality. MACE instead constructs per-atom messages of explicit body order $\nu$ as tensor products of two-body features, contracted with learned coefficients:

$$ A_i^{(\nu)} = \sum_{j_1,\ldots,j_\nu} c^{(\nu)} \phi_{ij_1} \otimes \phi_{ij_2} \otimes \cdots \otimes \phi_{ij_\nu} $$

where each $\phi_{ij}$ is an $E(3)$-equivariant edge feature combining radial and spherical-harmonic components. Because the sum over neighbours can be taken before the product, this many-body feature is evaluated at cost linear, not exponential, in $\nu$. By taking tensor products within each layer, two layers of MACE (with body order $\nu = 3$ each) attain an effective body order of 5, comparable to a five-layer two-body network, at a small fraction of the cost. The whole construction is provably $E(3)$-equivariant: messages transform predictably under rotation, translation and reflection of the input geometry.
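
A minimal sketch of this construction and of the sum-then-product evaluation trick, simplified to scalar (invariant) channels so the tensor product reduces to an elementwise product; all shapes and names here are illustrative, not MACE's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neighbors, n_channels = 8, 4

# phi[j] stands in for the two-body edge feature phi_ij of neighbor j,
# keeping only invariant channels (no spherical-harmonic components).
phi = rng.standard_normal((n_neighbors, n_channels))

# Sum over neighbours FIRST, then take products: this equals the explicit
# sum over all (j_1, ..., j_nu) tuples in the equation above, but costs
# O(N) per atom instead of O(N^nu).
A1 = phi.sum(axis=0)   # nu = 1: one tensor factor
A2 = A1 * A1           # nu = 2: sum over (j1, j2) of phi_ij1 * phi_ij2
A3 = A1 * A1 * A1      # nu = 3

# Check against the naive double sum for nu = 2.
A2_naive = np.zeros(n_channels)
for j1 in range(n_neighbors):
    for j2 in range(n_neighbors):
        A2_naive += phi[j1] * phi[j2]
assert np.allclose(A2, A2_naive)
```

In the full model the factors also carry spherical-harmonic indices, the products are contracted with (generalised) Clebsch-Gordan coefficients to preserve equivariance, and the learned coefficients $c^{(\nu)}$ mix channels.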

The architecture predicts per-atom energies $E_i = \mathrm{MLP}(A_i^{(\le \nu_{\max})})$, which sum to the total energy $E = \sum_i E_i$; forces follow as exact derivatives $\mathbf{F}_i = -\partial E / \partial \mathbf{r}_i$ via automatic differentiation. Training minimises a weighted energy-plus-force MSE, $\mathcal{L} = \lambda_E \|E - \hat E\|^2 + \lambda_F \sum_i \|\mathbf{F}_i - \hat{\mathbf{F}}_i\|^2$, with $\lambda_F$ typically dominant because force labels are denser ($3N$ components per structure versus a single energy).
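
The following PyTorch sketch shows this readout-force-loss pattern end to end. It is not MACE's actual code: the feature function is a toy invariant placeholder, and the reference labels and loss weights are dummies for illustration.

```python
import torch

def toy_features(pos: torch.Tensor) -> torch.Tensor:
    # Placeholder for the equivariant feature construction: per-atom sums
    # of inverse pairwise distances (invariant and differentiable).
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]   # (n, n, 3) displacement vectors
    d2 = (diff ** 2).sum(-1) + torch.eye(n)    # shift the diagonal off zero
    inv_d = 1.0 / d2.sqrt() - torch.eye(n)     # zero out self-interaction
    return inv_d.sum(dim=1, keepdim=True)      # (n, 1) per-atom features

readout = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.SiLU(), torch.nn.Linear(16, 1)
)

pos = torch.randn(5, 3, requires_grad=True)    # atomic positions r_i
E_i = readout(toy_features(pos))               # per-atom energies
E = E_i.sum()                                  # total energy

# Forces as exact negative gradients; create_graph=True so the force loss
# can itself be backpropagated through during training.
F = -torch.autograd.grad(E, pos, create_graph=True)[0]

# Weighted energy + force MSE against dummy reference labels; the force
# term typically dominates.
E_ref, F_ref = torch.tensor(0.0), torch.zeros_like(F)
lambda_E, lambda_F = 1.0, 100.0
loss = lambda_E * (E - E_ref) ** 2 + lambda_F * ((F - F_ref) ** 2).sum()
loss.backward()   # gradients now flow to the readout weights
```

Because forces are exact derivatives of the learned energy, the resulting force field is conservative by construction, which matters for energy conservation in long MD trajectories.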

MACE reaches chemical accuracy on rMD17 (energy errors under 1 kcal/mol, force errors under 50 meV/Å), sets the state of the art on the MD22 large-molecule benchmark, and scales to extended systems (~100 000 atoms in a single forward pass on a modern GPU), large enough for biomolecular and crystalline molecular dynamics. The pre-trained MACE-MP-0 foundation model, released in 2023 and trained on Materials Project DFT data covering most of the periodic table, gives competitive zero-shot predictions across diverse materials and can be fine-tuned with a few hundred system-specific examples for higher accuracy. MACE-OFF is an analogous foundation model for organic molecules, trained on the SPICE dataset.
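
As a usage sketch (assuming the `mace-torch` and `ase` packages, e.g. `pip install mace-torch ase`), the foundation model plugs into ASE as an ordinary calculator:

```python
from ase.build import bulk
from mace.calculators import mace_mp

# mace_mp downloads a pretrained MACE-MP-0 checkpoint on first use.
atoms = bulk("Cu", "fcc", a=3.6)                    # a small copper crystal
atoms.calc = mace_mp(model="medium", device="cpu")  # use device="cuda" on a GPU

print("energy (eV):", atoms.get_potential_energy())
print("forces (eV/Å):", atoms.get_forces())
```

An analogous `mace_off` entry point loads the MACE-OFF organic-molecule model.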

The practical consequence is that quantum-accurate molecular dynamics, once the preserve of expensive ab-initio codes such as CP2K and VASP, can now run for millions of timesteps on benchtop GPUs. Applications include heterogeneous catalysis, battery electrolyte simulation, solid-state phase transitions, protein–ligand binding free energies and crystal structure prediction. MACE is one of the strongest pieces of evidence that machine-learned potentials, given the right inductive biases, do not merely interpolate quantum chemistry; they make it tractable at scales where it previously was not.

Related terms: Graph Neural Network, GNoME
