Who’s Really Running AI? Inside the Billion-Dollar Battle Over Regulation

When New York State Assemblymember Alex Bores sponsored the RAISE Act last year, he knew the stakes were high. What he didn’t anticipate was becoming the immediate target of a Silicon Valley lobbying group with $125 million to spend on attack ads. In a political landscape where AI policy has been flattened to a simplistic “doomers versus boomers” narrative, Bores is attempting something increasingly rare: walking a middle road that prioritizes both innovation and safety.

“The AI debate has been flattened to ‘doomers versus boomers,’ but the reality is far more nuanced. We’re talking about technologies that could reshape entire industries while communities across the country are already blocking data center construction.” — Alex Bores, New York State Assemblymember

The Pentagon Standoff

The timing couldn’t be more consequential. While Bores has been pushing for state-level AI safety legislation, a parallel battle has been unfolding in Washington. The Pentagon is currently playing chicken with Anthropic over who gets to control how the military uses AI. Anthropic CEO Dario Amodei has remained firm that his company cannot in good conscience give the military unrestricted access to its AI systems, particularly for mass domestic surveillance or fully autonomous weaponry.

This standoff has galvanized the industry. In a remarkable show of solidarity, employees at both Google and OpenAI have signed an open letter supporting Anthropic’s position. The letter, organized by current and former workers at major AI labs, represents a significant shift in how tech workers are engaging with the ethical implications of their creations.

The Pentagon’s demands center on unfettered access to frontier AI models for national security applications. While Anthropic maintains an existing partnership with the Department of Defense, the company has drawn a hard line at use cases involving mass surveillance or weapons systems that could operate without meaningful human oversight.

“We cannot in good conscience accede to demands that would enable uses of our technology that we believe could cause significant harm.” — Dario Amodei, CEO of Anthropic

New York’s Regulatory Experiment

While the federal government wrestles with military applications, Bores has been building what many observers consider a blueprint for state-level AI regulation. The RAISE Act, which Governor Kathy Hochul signed into law, is the first AI safety legislation of its kind in the United States.

The RAISE Act’s core requirements mandate that developers of large AI systems conduct safety testing and submit documentation to state regulators before deploying their models. The law specifically targets “frontier models”—the most capable AI systems that pose the greatest potential risks if misused.

The legislation doesn’t ban AI development. Instead, it creates a framework for transparency and accountability that Bores argues is essential for maintaining public trust. “We’re not trying to stop innovation,” Bores has said. “We’re trying to ensure that as these systems become more powerful, there’s some basic oversight in place.”

The industry response has been predictably divided. Some AI companies have quietly supported the framework, seeing regulatory clarity as preferable to the uncertainty of a patchwork of local laws. Others have warned that state-by-state regulation could create compliance nightmares for companies operating nationally.

The Super PAC War

The fight over AI regulation has moved beyond legislative chambers and into the political arena. Two dueling super PACs are now fighting over AI’s future, with tens of millions of dollars at their disposal.

Anthropic has made a $20 million bet on the pro-regulation side, funding efforts to elect candidates who support AI safety measures. On the other side, a Silicon Valley-backed group has deployed attack ads against Bores and other legislators who have championed oversight measures.

The scale of spending has surprised even seasoned political observers. With $125 million earmarked for influencing AI policy, the lobbying effort rivals campaigns around healthcare and financial regulation. For a technology that barely registered in political discourse five years ago, the investment signals just how much is at stake.

“We’re seeing spending levels that suggest the industry recognizes regulation is coming. The question isn’t whether there will be rules—it’s who gets to write them.” — Political Analyst

What Comes Next

Bores isn’t stopping with the RAISE Act. His office is already preparing additional legislation covering training data disclosure and content provenance, along with a comprehensive 43-point national AI framework that could serve as a model for federal legislation.

The coming months will test whether state-level experimentation can influence national policy. With Congress showing little appetite for comprehensive AI legislation, states like New York may effectively set the standards that the rest of the country follows.

For the AI industry, the stakes extend beyond compliance costs. The regulatory framework that emerges will shape which companies can compete, what applications are viable, and how quickly AI systems can be deployed at scale. In that sense, the battle over the RAISE Act is really a battle over who gets to define the rules of the game.

As communities across the country continue to block data center construction and workers at major AI labs demand ethical safeguards, the pressure for meaningful oversight is only growing. Whether that oversight looks like the heavily regulated worlds of finance and biotech—or follows the largely unregulated path of social media—remains the central question.

For now, one thing is clear: the era of AI operating without meaningful constraints is ending. The only question is what comes next.


This article was reported by the ArtificialDaily editorial team. For more information, visit TechCrunch.

By Arthur
