
OpenAI backs tougher Illinois AI bill after retreat on liability shield

OpenAI has stepped away from an Illinois bill that would have shielded AI developers from some catastrophic-harm lawsuits and is now backing a rival measure centred on audits and transparency.

By Marnie Blackwood · 4 min read

OpenAI has distanced itself from SB 3444, the Illinois bill that would have limited some catastrophic-harm lawsuits against AI developers, and is instead backing SB 315, a competing transparency-and-audit measure now moving through Springfield. The shift matters because it shows a large model developer adjusting its position in public once state rules move from broad principles to actual obligations. Both bills are still only proposals, but the choice between them already reveals what kind of accountability OpenAI can live with.

This is not a simple turn from deregulation to regulation. Backing SB 315 while rejecting the safe harbour in SB 3444 signals that OpenAI can accept disclosure, annual third-party audits and incident reporting more readily than a statute that appears to trade reporting duties for narrower liability exposure. Enterprise buyers, including Australian organisations that rely on US model vendors, get a more useful signal than another abstract promise about responsible AI. Frontier labs are optimising for compliance models that scale across jurisdictions, not just rhetoric aimed at Washington.

The liability shield is what made SB 3444 unusually contentious. WIRED reported earlier this month that OpenAI had backed the bill even though it would have limited some lawsuits tied to AI-enabled catastrophes. The bill defined critical harm as events killing or seriously injuring 100 or more people, or causing at least US$1 billion in property damage. Supporters noted the bill still imposed testing and disclosure duties. Critics saw something else: a developer who filed the right paperwork and followed the statute might be harder to sue after the worst-case failure the law was supposed to prevent.

OpenAI is now plainly trying to get away from that reading. Caitlin Niedermeyer, identified by Transformer as part of OpenAI’s global affairs team, said: “We do not support the liability safe harbor included in SB 3444.” That is a sharper retreat than the company usually offers: it concedes the political cost of being seen to ask for legal protection before the technology is broadly trusted. The optics of a safe-harbour fight were becoming as damaging as the substance. OpenAI wants scrutiny, not immunity.

SB 315 centres on transparency reports, third-party audits and formal disclosure duties for larger developers, not on carving out a special litigation buffer. WTTW reported that the bill would apply its largest-developer requirements to companies with at least US$500 million in gross revenue, a threshold aimed at the biggest commercial actors rather than smaller start-ups. Senator Mary Edly-Allen, the sponsor, framed the proposal as creating “a road map for responsible innovation to prevent catastrophic risks”. That framing lets OpenAI back harm reduction without defending the idea that compliance should also narrow legal claims.

Jamie Radice, OpenAI’s head of US state policy, pushed the same distinction in comments carried by Quartz, saying the company supports approaches that focus on reducing the risk of serious harm. Read practically, that is a bet that transparency and audits are the obligations most likely to survive the next phase of state-level AI lawmaking. They are easier to explain to lawmakers and easier to operationalise across product lines. Liability shields become much harder to defend once the debate leaves industry roundtables and lands in a statehouse hearing room.

OpenAI is no longer just arguing about whether frontier models should be regulated. It is helping sort the menu. One set of rules asks labs to publish more, submit to outside review and document incidents. Another asks lawmakers to recognise those steps and, in return, partly reduce exposure when things go badly wrong. OpenAI appears to have concluded that the first proposition is defensible and the second is not, or not yet. This is a tactical narrowing of what the company is prepared to fight for in public, not a conversion to maximal regulation.

For Australian readers, the immediate policy consequence is limited. Illinois is one US state, neither bill has passed, and Canberra is not taking instructions from Springfield. But the commercial signal travels farther than the jurisdiction. Australian enterprises buy AI tools from the same US developers now negotiating these rules. Procurement teams already ask about model documentation, audit trails, incident escalation and who carries liability when systems misfire. If vendors treat audits, transparency reports and formal incident disclosure as the acceptable baseline, that posture filters into contract terms and local compliance expectations before any equivalent Australian law arrives. State-level American regulation can shape the operating assumptions of a market far beyond the state that wrote it.

OpenAI’s retreat on SB 3444 matters because it shows where frontier labs think the political centre of gravity is moving. When the text gets specific, they seem more willing to accept oversight than to defend immunity. For a sector that spent the past year talking principles, that is more concrete than it first appears.

Marnie Blackwood

Regulation reporter on Privacy Act reform, eSafety, ACCC tech enforcement, and ACMA. Reports from Canberra.