
Anthropic says export controls will decide frontier AI lead by 2028

Anthropic says Washington's next two years on chips, cloud access and model distillation will shape whether democracies keep a frontier AI lead over China. For Australia, the debate runs through the US clouds and silicon stacks local enterprises already use.

By Marnie Blackwood · 5 min read

Anthropic says the contest for frontier AI may be decided by 2028, and the company is pressing Washington and its allies for tighter controls on chips, cloud access and model copying before China closes the gap. Decisions taken over the next two years, not some distant AGI milestone, will shape who has the upper hand in the infrastructure that trains the most capable models, the lab argues in a new policy paper.

Outside Washington, that framing carries weight too. Australian enterprises, universities and government agencies buy AI capability through US cloud platforms, imported accelerators and model APIs. The argument is really about the rules sitting underneath that supply chain. If the United States hardens export controls, leans harder on allied enforcement or rewrites how frontier compute can be accessed, those settings will not stay abstract for long.

Anthropic’s case is about compute first

“The most important ingredient for developing AI is access to the computer chips on which the models are trained.” Anthropic’s central claim is blunt and not new, but the paper packages it into a narrower political timetable. The 2026 to 2028 window is the one that matters, it says, because existing controls still leave enough room for workarounds while frontier systems depend on concentrated supplies of leading-edge hardware. The Council on Foreign Relations’ explainer on the US AI diffusion rule and export controls lays out how Washington is trying to govern both chip flows and the cloud services wrapped around them, a useful companion to that view.

Anthropic is not merely arguing for tougher paperwork at the border. It wants stronger checks on offshore cloud training, tighter enforcement of existing controls and explicit measures against model distillation, where a weaker or open model learns from a stronger one. If the United States and its allies act now, the company says, they may be able to preserve a 12 to 24 month lead in frontier capability. That is a short margin, but in this telling it is enough to shape standards, safety practices and commercial power.

The weak point is enforcement, not theory

Where the case gets harder is in the gap between export-control design and export-control reality. Epoch AI’s estimate of chip smuggling to China put cumulative diversion and resale at about 660,000 H100-equivalent units through the end of 2025, a figure Anthropic cites to show how porous the system can look in practice. If that estimate is even directionally right, the policy fight is no longer about whether controls exist. The real question is whether governments can police brokers, resellers, foreign subsidiaries and cloud intermediaries well enough for those controls to bite.

Export regimes often fail at the edges first. The most advanced GPUs may be restricted, but access still leaks through older parts, rented infrastructure or opaque corporate structures. Anthropic’s paper is strongest when it treats compute as a governance problem rather than a slogan. It is weakest when it assumes that announcing a tougher rule and enforcing a tougher rule are roughly the same thing.

China is playing more than one AI race

Anthropic’s framing narrows a broader strategic debate. According to a Brookings analysis of US and China AI strategy, raw frontier performance is only one contest inside a larger race that also runs through diffusion, adoption and standards. A US-China Economic and Security Review Commission report on China’s open AI strategy argues from another angle that open model ecosystems and industrial deployment can reinforce China’s position even when the absolute cutting edge stays constrained.

For that reason, Anthropic’s 2028 clock should be read as a policy argument, not a settled forecast. China cannot match the most advanced Western training runs at the frontier, but it can still move quickly in applied deployment, open-weight distribution and state-backed industrial uptake. The open question is whether a compute lead translates into lasting platform power, or whether it simply buys time while the rest of the market spreads around it. Anthropic clearly believes the first scenario is still available, but its own paper shows how much depends on state capacity, not just silicon.

Why Australia should pay attention

Australia is not writing US export-control law, but it sits inside the alliance and vendor relationships those rules increasingly touch. Local organisations consume AI through American hyperscalers, Nvidia-linked supply chains and US model providers. If Washington pushes harder on customer verification, cloud monitoring or anti-distillation safeguards, procurement, compliance and data-governance teams in Australia may feel the effects before lawmakers here produce any local equivalent.

There is also a policy signal in the safety language. Anthropic notes that only three of 13 leading Chinese AI labs have published safety evaluations, and uses that gap to argue that capability races and assurance standards are now intertwined. Governments that want to treat frontier AI less like a consumer software market and more like strategic infrastructure will find that claim appealing. For Canberra, the interesting part is not whether to copy every US restriction. It is whether Australia wants a clearer view on which parts of the AI stack it considers ordinary enterprise technology and which parts it treats as critical capability.

Anthropic, of course, is not a neutral referee. A frontier lab benefits when access to the best compute is scarce, monitored and politically defended. That does not make its analysis wrong. It does mean the document reads as both strategic warning and policy lobbying. Even so, the paper lands on a point that is hard for allies to ignore: the next phase of the AI race may be decided less by who talks loudest about AGI and more by who controls chips, clouds and the rules around both.

For Australian readers, that is the real takeaway. The 2028 date is a forcing device. The practical question is what Washington does between now and then, and how much of that enforcement burden it expects allies and customers to share.

Marnie Blackwood

Regulation reporter on Privacy Act reform, eSafety, ACCC tech enforcement, and ACMA. Reports from Canberra.