OpenAI Endorses US State Legislation to Protect AI Developers from Mass Harm Liability
In a significant policy shift, OpenAI, the creator of ChatGPT, has announced its support for a proposed Illinois state bill that would shield artificial intelligence (AI) laboratories from liability when their models are used to cause mass casualties or substantial property damage. The bill, identified as SB 3444, aims to establish legal safeguards for AI developers under specific conditions.
Details of the Illinois Bill SB 3444
According to a report by Wired, the legislation would shield AI companies from liability in incidents that result in the death or serious injury of one hundred or more individuals, or at least one billion dollars in property damage. The protection applies only if the company did not engage in intentional or reckless misconduct and has publicly released comprehensive safety and transparency reports. The bill defines "critical harms" to include severe scenarios such as the use of AI to develop chemical, biological, radiological, or nuclear weapons, or instances in which an AI system autonomously takes actions that would be criminal if performed by a human being.
OpenAI's Statement and Rationale
In a statement provided to Wired, OpenAI spokesperson Jamie Radice explained the company's position: "We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
During testimony in support of the bill, OpenAI's Caitlin Niedermeyer advocated for a broader federal framework for AI regulation, highlighting the necessity to prevent "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." Niedermeyer added, "At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation."
Scope and Applicability of the Legislation
The bill applies to "frontier models," defined as AI systems trained with computational resources exceeding one hundred million dollars in value — a threshold that would potentially encompass major technology firms such as Google, Anthropic, and Meta, along with other leading AI developers. The endorsement marks a notable departure from OpenAI's previous advocacy for regulations that could increase liability for AI developers, signaling a strategic pivot in its regulatory stance.
Criticism and Controversy Surrounding the Proposal
The proposed legislation, however, has drawn substantial criticism. Scott Wisor of the Secure AI project told Wired that public opinion in Illinois strongly opposes such exemptions: "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90% of people oppose it. There's no reason existing AI companies should be facing reduced liability."
This controversy emerges against a backdrop of unresolved legal questions regarding accountability for injuries stemming from AI technology. While SB 3444 addresses large-scale instances of AI-related harm, numerous companies have already been subjected to litigation over smaller cases where individuals have suffered injuries through interactions with AI systems. The debate underscores the ongoing challenges in balancing innovation with consumer protection in the rapidly evolving AI landscape.