16.18 AI policy as of April 2026
By the time you read this section, the policy landscape will already have shifted. AI law is moving faster than any field of regulation in living memory, and any snapshot dates in months rather than years. The point of this section is therefore not to give you a definitive map but a rough orientation: which jurisdictions have done what, what kinds of obligations have crystallised, and where the live disagreements lie. If you build, deploy, or evaluate AI systems for a living, you will need to track these regimes the way a clinician tracks therapeutic guidelines, with regular updates and a healthy mistrust of last year's summary.
The first thing to grasp is that there is no single global AI regulator and there will not be one soon. Instead we have a patchwork: a comprehensive horizontal statute in the European Union, a sector-by-sector and executive-order-driven approach in the United States, a state-led model-licensing regime in China, and a network of voluntary AI Safety Institutes coordinated through periodic summits. Each regime reflects local political culture as much as it reflects considered policy analysis, and they pull in different directions on most of the questions that matter: training-data transparency, pre-deployment evaluation, liability allocation, export controls, and open-weight release. Models, however, do not respect borders. A foundation model trained in California is fine-tuned in London, served from a datacentre in Frankfurt, and queried from a clinic in Auckland. So even if you only care about your own jurisdiction, the international picture will reach you eventually.
Section 16.17 gave us Responsible Scaling Policies as the internal commitments that frontier labs make to themselves: thresholds, evaluations, mitigations. This section turns to the external frameworks that governments and inter-governmental bodies are putting in place around those labs. The two are increasingly entangled: regulators read RSPs to see what good practice looks like, and labs draft RSPs partly to pre-empt the regulation they expect.
EU AI Act
Adopted in March 2024 and in force since August 2024, with a phased application schedule, the EU AI Act is the first comprehensive horizontal AI statute in any major jurisdiction. Its central device is a four-tier risk classification. Unacceptable-risk practices are banned outright: social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, and manipulative subliminal techniques. High-risk systems (those used in critical infrastructure, education, employment, essential public and private services, law enforcement, migration management, and the administration of justice) must undergo conformity assessment, maintain technical documentation and logs, support human oversight, and meet transparency requirements. Limited-risk systems, principally chatbots and generative tools, must disclose their AI nature to users. Minimal-risk systems carry no specific obligations.
Layered on top of the risk pyramid is a separate regime for general-purpose AI models, the category into which foundation models fall. All GPAI providers must publish summaries of their training data, comply with EU copyright law (including respecting opt-outs from text and data mining), and provide downstream documentation. A subset designated as posing systemic risk, currently those trained with more than $10^{25}$ floating-point operations, faces additional duties: model evaluations against systemic risks, adversarial testing, incident reporting, cybersecurity protections, and energy reporting.
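Because the systemic-risk designation turns on cumulative training compute, it helps to see how the threshold is checked in practice. The sketch below is a minimal illustration assuming the widely used $6ND$ approximation (roughly six floating-point operations per parameter per training token); the function names and the two hypothetical model configurations are invented for the example and are not prescribed by the Act.

```python
# Back-of-the-envelope check against the EU AI Act's presumption-of-systemic-risk
# threshold for GPAI models (10^25 FLOP of cumulative training compute).
# Assumes the common ~6 * N * D estimate for dense-transformer training compute,
# where N is parameter count and D is training tokens.

EU_SYSTEMIC_RISK_FLOP = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the GPAI threshold."""
    return training_flops(n_params, n_tokens) >= EU_SYSTEMIC_RISK_FLOP

if __name__ == "__main__":
    runs = [
        ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOP: below
        ("400B params, 10T tokens", 400e9, 10e12),  # ~2.4e25 FLOP: above
    ]
    for name, n, d in runs:
        print(f"{name}: {training_flops(n, d):.2e} FLOP, "
              f"presumed systemic risk: {presumed_systemic_risk(n, d)}")
```

The same arithmetic, with the bar set an order of magnitude higher, underlies the $10^{26}$-FLOP thresholds that appear in the US measures discussed below.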
The first GPAI Code of Practice, drafted by independent experts under the EU AI Office, was published in July 2025 and provides the practical compliance pathway for the GPAI provisions. The Commission's GPAI enforcement powers do not apply until 2 August 2026; the first formal enforcement actions are expected after that date.
Two structural features of the Act repay attention. First, it applies extraterritorially to any provider placing a system on the EU market, mirroring the GDPR pattern; this gives Brussels disproportionate global influence on documentation and transparency norms. Second, the Act's high-risk category overlaps heavily with sector regulation that already exists, most importantly the Medical Device Regulation for clinical AI, where conformity assessment, post-market surveillance, and clinical evaluation requirements stack on top of the AI Act's horizontal duties.
US executive orders and state laws
The United States has taken a markedly different route, with no comprehensive federal AI statute and a patchwork of executive orders, agency guidance, and state laws. President Biden's October 2023 Executive Order 14110 was the high-water mark of federal coordination. It directed NIST to extend its AI safety guidance (building on the existing AI Risk Management Framework and yielding its Generative AI Profile), required pre-deployment safety reports for $10^{26}$-FLOP-class dual-use foundation models under the Defense Production Act, tasked agencies with sectoral guidance on housing, health, education, and policing, and pushed on content provenance, immigrant talent, and federal procurement.
EO 14110 was rescinded in January 2025 and partially replaced by EO 14179 and a series of follow-on orders that re-emphasise compute-export controls and shift safety reporting from a mandatory to a voluntary footing. The NIST documents have survived the transition; the reporting infrastructure has not. The result is that as of April 2026 there is no federal requirement to disclose frontier-model evaluations, although the largest labs continue to do so under their own RSPs and voluntary AISI agreements.
State-level activity has partly filled the gap. California's SB 1047 (2024) would have required pre-deployment safety evaluations and a full-shutdown ("kill switch") capability for $10^{26}$-FLOP-class models developed by companies doing business in California, with civil enforcement by the state Attorney General for catastrophic harms. It was passed by the legislature but vetoed by Governor Newsom in September 2024 on the grounds that the compute threshold was a poor proxy for risk and a unitary approach was premature. California SB 53 (the Transparency in Frontier Artificial Intelligence Act) was signed into law by Governor Newsom in late September 2025. Colorado, New York, Texas, and Illinois have passed or advanced laws targeting algorithmic decision-making in employment, insurance, and consumer credit, and a number of states regulate specific applications such as deepfake election interference. The cumulative effect is regulatory fragmentation that few practitioners enjoy navigating.
Sitting alongside all of this is the most consequential US lever in practice: compute-export controls administered by the Bureau of Industry and Security. The October 2022 BIS rules and successive 2023, 2024, and 2025 updates progressively restrict the export of advanced AI accelerators (NVIDIA H100, H200, B200, GB200), and now of datacentre interconnect (NVLink, InfiniBand) and, in some cases, of model weights for systemically significant models, to China and a growing list of intermediary countries. These controls are doing more day-to-day governance work than any of the safety statutes.
UK and US AI Safety Institutes
In parallel with the legislative track, a network of AI Safety Institutes has emerged as the practical mechanism for pre-deployment evaluation of frontier models. The UK AI Security Institute (renamed from AI Safety Institute in February 2025) was founded in November 2023 alongside the Bletchley Summit and now has roughly fifty staff working on dangerous-capability evaluations (cyber, chemical and biological, autonomy, persuasion). The US AISI was established within NIST in 2024 with around seventy staff, and in 2025 it merged operationally with the broader NIST AI safety effort. The EU AI Office, with around a hundred staff, plays a comparable role within the AI Act's GPAI regime. Japan, Singapore, Canada, India, and South Korea have established equivalents.
The AISIs operate under voluntary memoranda of understanding with the major frontier labs, gaining pre-deployment access to models for capability and safety evaluations. The arrangement crystallised from nothing in three years: routine lab-to-government red-teaming, with results shared among national AISIs through a coordinating network. It is also fragile, because it rests on goodwill rather than statute, and could collapse if a lab refused access or a government published an unflattering evaluation. As of April 2026 the most pressing question is whether voluntary access can survive a politically contested release.
China
China has moved earlier than most jurisdictions on binding rules. The 2022 Internet Information Service Algorithmic Recommendation Provisions, the 2023 Deep Synthesis Provisions, and the August 2023 Interim Measures for the Management of Generative AI Services together require model registration with the Cyberspace Administration of China before public deployment, real-name authentication of users, content moderation aligned with "core socialist values", and watermarking of synthetic content. The 2024 update extended registration to large-model fine-tunes and clarified obligations on training-data legality. National AI plans continue to support indigenous frontier development, with substantial state investment in domestic accelerators in response to US export controls. The combined effect is a tightly licensed domestic ecosystem with rapid product iteration inside the licensed perimeter.
International coordination
The Bletchley Park AI Safety Summit (UK, November 2023), Seoul (May 2024), and Paris (February 2025) produced declarations signed by 28+ countries, establishing voluntary commitments on frontier-model evaluation and the founding charter of the AISI network. The AI Impact Summit was hosted by India in February 2026; the next summit cadence remains under discussion. These gatherings have been more useful than sceptics expected: they established a vocabulary, a regular cadence, and an institutional skeleton. They have been less useful than enthusiasts hoped: there are no binding international treaty obligations on frontier AI, no analogue to the Nuclear Non-Proliferation Treaty, and no enforcement teeth. For the foreseeable future, international coordination on AI is aspirational soft law sitting on top of hard national regimes that pull in different directions.
What you should take away
- There is no single regulator. Treat the EU AI Act, US executive orders and state laws, China's licensing regime, and the AISI network as four overlapping systems, and assume any frontier system you build will eventually touch all four.
- The EU AI Act is the most consequential horizontal statute. Its risk tiers, GPAI obligations, and extraterritorial reach mean its documentation and transparency norms will shape global practice in the way GDPR did.
- Compute-export controls are doing the heaviest governance work in practice. BIS rules on chips and interconnect have shifted training economics more than any safety statute, and you should track them as carefully as you track model releases.
- The AISI network is the operational core of frontier evaluation. Voluntary pre-deployment access is the load-bearing arrangement; if it collapses, governance gets visibly weaker overnight.
- Soft international law is the ceiling, not the floor. Bletchley, Seoul, and Paris built useful scaffolding, but binding obligations on frontier AI live in national law, and the patchwork is here to stay.