Lawmakers are grappling with the question of who is responsible for harms caused by AI, and many are opting to shield companies developing AI models and their applications from liability. The EU recently dropped its effort to advance the AI Liability Directive, which would have allowed consumers to sue for damages caused by the fault or omission of AI developers, providers, or users. In opposition to the Trump administration, state legislators believe they have an important role to play in AI regulation: lawmakers in Massachusetts, California, and New York are working to align on appropriate guardrails, including on AI companions and deepfakes.

Although the proposed state-level “AI moratorium” was dropped, the version of the “Big Beautiful Bill” passed by the US House of Representatives creates two new defenses for Big Tech to avoid accountability when their products cause harm. The first is an “undue or disproportionate burden” defense, which permits courts to let companies off the hook based on a subjective determination that holding them accountable would be too burdensome. The second is a “reasonably effectuates” defense, which gives Big Tech an out when found liable: companies can argue that damages or injunctive relief go beyond what is needed to achieve the law’s underlying purpose.

California Senate Bill 813 would establish a process for designating private entities as “Multistakeholder Regulatory Organizations” (MROs) to certify AI models and applications. For certified models, the proposed law would create a broad shield protecting the model’s owners from liability for personal injury or property damage claims. This approach places a heavy burden on the designated MROs, since it effectively removes the primary incentive companies have to ensure their AI systems are safe before releasing them. The bill comes after Senate Bill 1047, which would have required makers of all large AI models to test them for specific risks, was vetoed by Governor Gavin Newsom in September.

California may have dropped the ball, but New York is picking it up: the state passed a bill that shares some provisions with California’s recently vetoed legislation. The RAISE Act seeks to prevent models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people or more than $1 billion in damages. Should it become law, the bill would establish the first set of legally mandated transparency standards for frontier AI labs in the US, requiring the world’s largest AI labs to publish thorough safety and security reports and to report safety incidents. It would also empower the New York Attorney General to bring civil penalties of up to $30 million. The bill stops short of requiring developers to include a “kill switch” in their models or of holding companies that post-train frontier models accountable for critical harms.

Questions to consider

How are companies that develop and deploy AI preparing for a potential patchwork of regulation? What steps are they taking now that will facilitate future compliance?
