Litigation involving emerging technologies is notoriously difficult to predict. Courts are frequently asked to apply older, established laws to completely new scenarios, granting judges broad discretion. But several years into lawsuits alleging misconduct by generative AI companies, a clear trend is emerging—and it doesn't look promising for those companies.
In California, a federal judge recently certified a class-action lawsuit against Anthropic, allowing claims that the company improperly used copyrighted materials to train its AI models to move forward. Across the country, OpenAI faces an increasingly skeptical judge in its legal battle with The New York Times, which alleges that the company built its AI systems on the newspaper's content without permission. Meanwhile, in Florida, a court just permitted a groundbreaking lawsuit against Character.AI to proceed. In that case, the company is accused of contributing to the suicide of a 14-year-old boy who reportedly became obsessed with a chatbot that expressed love for him.
But as the legal pressure intensifies, these companies are turning their attention away from courtrooms and toward state and federal legislatures. This reveals a hidden dimension of how major lawsuits against powerful corporations often unfold—the legislative track.
We've seen this playbook firsthand. When our firm sued Facebook under Illinois's biometric privacy law for scanning faces without consent, Facebook didn't just fight in court—it sent lobbyists to Springfield hoping to weaken the law itself. That effort failed, and Facebook ultimately paid $650 million to settle the claims and stopped collecting biometric data from its users. But other industries have succeeded in similar efforts. When we brought cases under Michigan's privacy law alleging media companies illegally sold customer data, the companies convinced lawmakers to change the law mid-litigation, removing consumers' right to damages for future claims. The lawsuits dried up almost overnight.
This strategy isn't limited to tech. When juries began awarding wildfire victims an average of $5 million each in cases against Warren Buffett’s PacifiCorp, the utility company didn't just appeal—Berkshire Hathaway lobbied aggressively for new laws capping damages. In Utah, Buffett succeeded in getting the legislature to cap wildfire damage awards drastically and even convinced lawmakers to create a taxpayer-funded $1 billion insurance fund to protect his company from future claims.
The underlying tactic is clear: losing in court is acceptable if there's a strong chance that lobbying legislators will result in immunity or reduced liability. Uber famously pioneered this approach by openly ignoring taxi regulations. Its calculated gamble was that by the time courts responded, lawmakers would see Uber's service as essential and adjust the rules accordingly.
The gamble worked. Now AI companies appear to be running a similar playbook: prioritizing rapid market dominance over legal compliance and betting they can reshape the rules before courts catch up. The stakes reach further than Uber's disruption of local taxi rules, though; generative AI is reshaping how tens of millions of Americans interact with information every day. In the race to dominate the market, these companies have favored rapid growth over safety, scraping massive amounts of online content without permission and launching products without fully addressing known risks. Their wager is that even significant court losses won't matter if lawmakers can be persuaded to shield them from liability first.
This strategy nearly succeeded recently, when lobbyists quietly attempted to insert language into Trump's Big Beautiful Bill that would preempt state-level AI regulations, effectively blocking states from enforcing their own AI safety laws. The effort was so discreet that many legislators weren't even aware of it. At the same time, influential AI leaders like Sam Altman have been cultivating relationships at the highest levels of government, positioning themselves to influence policy directly.
For anyone following AI litigation, understanding this legislative dimension is crucial. A poorly constructed legal case or an early courtroom defeat doesn't merely affect the plaintiffs involved; it provides ammunition for lobbyists pushing for immunity. Every weak or exaggerated argument, and every suit perceived as overreaching, gets transformed into talking points about "frivolous lawsuits stifling innovation."
We've witnessed this before. In 1996, tech companies secured just 26 words in the Communications Decency Act, known as Section 230, that granted them broad immunity from liability for user-generated content (essentially declaring that platforms aren't responsible for what their users post). Nearly three decades later, those same 26 words remain the cornerstone of tech companies' defenses against claims ranging from facilitating teen social media addiction to aiding terrorism. Generative AI companies are now angling for their own version of Section 230. And, as before, many of us will be too focused on what is happening in the courts to notice where the real battle is being fought.
This is the best summary of state AI regulations, statutes, and proposed bills I've found so far: https://velocityjustice.com/ai-laws.html