Compute governance is the use of controls over compute resources (semiconductor manufacturing, advanced AI chips, data-centre capacity, cloud access) as a policy lever to shape the development of frontier AI. The premise is that contemporary frontier AI is compute-intensive in a verifiable, geographically concentrated way, so compute is more amenable to monitoring and control than other inputs (data, talent, algorithms). The canonical case is made in Sastry, Heim et al. (2024), "Computing Power and the Governance of Artificial Intelligence."
Why compute is a useful lever
Three structural facts:
Concentration: advanced AI chips (Nvidia H100/B200, AMD MI300, Google TPU v5) are produced by a handful of designers, fabricated by TSMC (Taiwan) using equipment from ASML (Netherlands), and tested and packaged at a small number of facilities. The supply chain is geographically and industrially narrow.
Detectability: large training runs require thousands of chips co-located in a small number of data centres; the energy and networking signatures are observable.
Excludability: chip exports can be conditioned, tracked, and recalled in a way that, say, algorithmic ideas cannot.
Existing instruments
US chip export controls: October 2022, October 2023, and (renewed) 2024 rules from the US Department of Commerce restrict export of advanced AI chips and the equipment to make them, principally targeting China but applying broadly.
EU AI Act compute thresholds: Article 51 designates models trained with more than 10²⁵ floating-point operations (FLOPs) as general-purpose AI with systemic risk, triggering additional obligations.
US Executive Order 14110 (Biden, 2023): required reporting of training runs above 10²⁶ FLOPs and of large compute clusters. Rescinded in January 2025 by the Trump administration; reporting authority is now patchy.
Cloud-provider Know-Your-Customer rules: proposed in the EO and partially adopted by major US cloud providers; aimed at preventing foreign-adversary access to US cloud-based frontier-scale compute.
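The FLOP thresholds above can be checked against a planned training run using the standard ~6 × parameters × tokens approximation for dense transformer training compute. The approximation is a community rule of thumb, not part of either regulation, and the model figures below are illustrative, not drawn from any disclosed system:

```python
# Rough check of whether a training run crosses regulatory FLOP thresholds.
# Uses the common ~6 * parameters * tokens approximation for dense
# transformer training compute; all figures are illustrative.

EU_SYSTEMIC_RISK = 1e25   # EU AI Act, Article 51
US_EO_REPORTING = 1e26    # (rescinded) Executive Order 14110

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

# Hypothetical run: 180B parameters trained on 15T tokens.
run = training_flops(params=1.8e11, tokens=1.5e13)
print(f"{run:.2e} FLOPs")                                        # ~1.62e+25
print("EU systemic-risk threshold crossed:", run > EU_SYSTEMIC_RISK)   # True
print("EO 14110 reporting threshold crossed:", run > US_EO_REPORTING)  # False
```

Note that a run can sit between the two thresholds, as here: presumed systemic risk under the EU regime, but below the (rescinded) US reporting trigger.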
Proposals
More speculative compute-governance proposals include:
On-chip mechanisms: cryptographic attestation that compute-intensive operations are sanctioned (Aarne, Fist, Withers 2024).
International compute monitoring: IAEA-style verification of training runs above an internationally agreed threshold.
Compute caps: hard limits on the size of any single training run, internationally coordinated.
Compute redistribution: public-good provision of frontier compute to academic researchers (e.g. the US National AI Research Resource pilot).
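The on-chip attestation idea can be illustrated with a toy signed usage report: a chip holding a key signs what it did, and a verifier detects tampering. This is a sketch of the concept only; actual proposals (Aarne, Fist, Withers 2024) rely on hardware roots of trust and public-key attestation rather than a shared secret, and every identifier below is hypothetical:

```python
# Toy illustration of on-chip attestation: a chip signs a usage report with
# a provisioned key; a regulator holding the same key can detect forgery or
# tampering. Real proposals use hardware roots of trust and public-key
# cryptography, not a shared secret; this is a conceptual sketch only.
import hashlib
import hmac
import json

CHIP_KEY = b"provisioned-at-fabrication"  # hypothetical secret

def sign_usage_report(report: dict) -> str:
    """HMAC tag over a canonical serialisation of the report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(CHIP_KEY, payload, hashlib.sha256).hexdigest()

def verify_usage_report(report: dict, tag: str) -> bool:
    """Constant-time check that the report matches its tag."""
    return hmac.compare_digest(sign_usage_report(report), tag)

report = {"chip_id": "GPU-0001", "flops_logged": 3.1e21, "epoch": 2026}
tag = sign_usage_report(report)
print("valid report verifies:", verify_usage_report(report, tag))     # True
report["flops_logged"] = 1e18  # tampered: under-reported usage
print("tampered report verifies:", verify_usage_report(report, tag))  # False
```

The governance-relevant property is the second check: a data-centre operator cannot under-report logged compute without the forgery being detectable.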
Critique
Effectiveness: open weights and algorithmic progress mean compute thresholds erode; a 2024 capability eventually fits in a 2026 hobbyist budget.
Geopolitical cost: chip controls have accelerated Chinese domestic semiconductor investment.
Distributional fairness: concentrating frontier AI in a few US-allied jurisdictions raises legitimacy questions globally.
Verification gaps: distributed training, federated training, and small-cluster fine-tuning of open base models all evade compute-monitoring choke points.
Status
As of 2026, compute governance is the most concretely implemented lever of AI policy globally, far more so than capability evaluations or alignment audits, but its long-term reach is contested. Algorithmic efficiency gains (8-bit and 4-bit training, mixture-of-experts, sparse architectures) are reducing the FLOPs required for a given capability by roughly 2–3× per year, eroding fixed FLOP-threshold regulation faster than legislation can adapt.
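The erosion rate above implies a predictable half-life for any fixed threshold. A minimal sketch of the arithmetic, using the 2–3×-per-year figure from this section and illustrative FLOP counts:

```python
import math

def years_until_under_threshold(flops_today: float, threshold: float,
                                annual_efficiency_gain: float) -> float:
    """Years until algorithmic efficiency brings a capability's FLOP cost
    below a fixed regulatory threshold, assuming a constant annual gain."""
    if flops_today <= threshold:
        return 0.0
    return math.log(flops_today / threshold) / math.log(annual_efficiency_gain)

# A capability costing 1e26 FLOPs today, against the EU's 1e25 threshold:
for gain in (2.0, 3.0):
    t = years_until_under_threshold(1e26, 1e25, gain)
    print(f"{gain}x/year efficiency gain: under threshold in {t:.1f} years")
# 2.0x/year efficiency gain: under threshold in 3.3 years
# 3.0x/year efficiency gain: under threshold in 2.1 years
```

On these assumptions a tenfold regulatory margin is consumed in two to three years, which is the quantitative form of the "faster than legislation can adapt" claim.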
References
Sastry, Heim et al. (2024). Computing Power and the Governance of Artificial Intelligence.
Aarne, Fist, Withers (2024). Secure, Governable Chips.
EU AI Act (2024), Article 51.
US Department of Commerce (2022, 2023, 2024). Advanced Computing/Semiconductor Manufacturing Items Export Controls.
Related terms: Frontier AI Safety Commitments, Bletchley AI Safety Summit, Responsible Scaling Policy (RSP), Evaluations / Capability Evaluations
Discussed in:
- Chapter 14: Generative Models, Compute governance