OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI is used to cause significant societal harm, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort appears to signal a shift in OpenAI’s legal strategy. Until now, OpenAI has largely played defense, opposing bills that could have exposed AI labs to liability for the effects of their technology. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, goes further than the measures OpenAI has previously weighed in on.
The bill would protect frontier AI developers from liability for “serious harm” caused by their frontier models, as long as they did not intentionally or negligently cause such an event and had published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would cover models from America’s largest AI labs, such as OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: reducing the risk of serious harm from advanced AI systems while allowing this technology to get into the hands of the people and businesses — small and large — of Illinois,” OpenAI spokesperson Jamie Radice said in an emailed statement. “They also help limit the patchwork of state-by-state regulations and move toward clearer national standards.”
Under its definition of serious harm, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model itself engages in behavior that, if carried out by a human, would be a criminal offense and lead to such outcomes, that would also qualify as serious harm. Under SB 3444, if an AI model were to do any of these things, the lab behind it could not be held liable, as long as the harm was unintentional and the lab had published its required reports.
Federal and state legislatures in the United States have yet to pass laws that specifically determine whether AI developers like OpenAI can be held liable for these types of harms caused by their technology. But as AI labs continue to release more powerful models that pose new safety and cybersecurity challenges, such as Anthropic’s Claude, these questions are becoming increasingly urgent.
In her testimony in support of SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also advocated for a federal regulatory framework for AI. Niedermeyer’s message is consistent with the Trump administration’s push against state AI safety laws. It also aligns with the broader view in Silicon Valley in recent years, which has generally held that AI regulation must not undermine America’s position in the global AI race. Although SB 3444 is itself a state-level safety law, Niedermeyer said such laws could be effective if they “strengthen the path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier AI regulation should be securing the safety of the most advanced models in a way that also preserves America’s leadership in innovation,” Niedermeyer said.
Scott Wisor, director of policy at the Secure AI Project, tells WIRED that he believes the bill has little chance of passing, given Illinois’ reputation for tightly regulating technology. “We polled people in Illinois, asking them if they think AI companies should be exempt from liability, and 90 percent of people are against it. There’s no reason AI companies should face reduced liability,” Wisor says.