AI Safety Researchers Are Quitting in Droves. The Industry Is Facing a Reckoning.

When Mrinank Sharma announced his resignation from Anthropic on February 9, he didn’t issue the usual platitudes about pursuing new opportunities. Instead, the AI safety researcher posted a stark warning: “The world is in peril.”

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.” — Mrinank Sharma, former Anthropic AI safety researcher

Sharma’s departure was just the latest in a cascade of resignations that has sent shockwaves through the artificial intelligence industry. From Anthropic to OpenAI to Elon Musk’s xAI, researchers tasked with ensuring AI remains safe for humanity are walking away—and they’re not staying quiet about why.

The Exodus

The resignations have come fast and furious. Days after Sharma’s announcement, Zoe Hitzig revealed she had left OpenAI over the company’s decision to test advertisements on ChatGPT. In a New York Times essay, she warned that advertising built on users’ intimate conversations with AI creates “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

Meanwhile, at xAI, the exodus has been even more dramatic. Two cofounders and five other staff members have departed since last week. While none cited specific reasons, the timing is notable: the European Union recently launched an investigation into Grok regarding sexually explicit fake images of women and minors.

The pattern is unmistakable. These aren’t disgruntled employees leaving for better pay. They’re ethicists and safety researchers who joined these companies specifically to build guardrails—and have concluded those guardrails are being ignored.

Capabilities Are Outpacing Controls

The resignations coincide with what industry observers are calling an inflection point in AI capabilities. Matt Shumer, CEO of AI writing assistant HyperWrite, captured the zeitgeist in a viral post that has become a Rorschach test for how people view AI’s trajectory.

“I’ve always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.” — Matt Shumer, CEO of HyperWrite

Research backs up the anxiety. Yoshua Bengio, scientific director at the Mila Quebec AI Institute and winner of the Turing Award—the Nobel Prize of computer science—notes that many theoretical AI risks have materialized in just the past year.

Cyberattack capabilities once debated in academic papers are now being observed in the wild. Bioweapon design—the nightmare scenario that has haunted AI safety researchers for years—has moved from theoretical concern to documented possibility. And entirely new problems have emerged that nobody anticipated, including what Bengio describes as a “wave of psychological issues” stemming from users forming emotional attachments to AI systems.

The Asia Factor

While Western companies grapple with internal dissent, Asia has quietly become the engine powering the AI revolution. Taiwan Semiconductor Manufacturing Company (TSMC) recently raised its five-year AI growth guidance from 40% to 50%, with 2026 capital expenditure projected at $52-56 billion.

The numbers tell a story of relentless acceleration. The top four cloud service providers are expected to spend $332 billion in 2025—a 52% year-over-year increase—with forecasts calling for another 20%+ growth in 2026. Long-term projections suggest AI investment could exceed $1.3 trillion by 2030, equivalent to 1% of global GDP.
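A quick back-of-the-envelope check makes those figures concrete (a rough sketch in Python; the implied 2024 baseline and the roughly $130 trillion global GDP denominator are assumptions inferred from the reported percentages, not figures from the forecasts themselves):

    # Rough sanity check on the reported cloud spending projections.
    spend_2025 = 332e9                # top-four cloud capex, 2025 (reported)
    implied_2024 = spend_2025 / 1.52  # a 52% YoY jump implies ~$218B in 2024
    spend_2026 = spend_2025 * 1.20    # "another 20%+ growth" puts 2026 near $400B
    ai_2030 = 1.3e12                  # projected AI investment by 2030 (reported)
    global_gdp = 130e12               # assumed global GDP, consistent with "1%"

    print(f"Implied 2024 capex: ${implied_2024 / 1e9:.0f}B")
    print(f"2026 capex floor:   ${spend_2026 / 1e9:.0f}B")
    print(f"2030 share of GDP:  {ai_2030 / global_gdp:.1%}")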

But this growth comes with its own tensions. At CES 2026, Nvidia CEO Jensen Huang acknowledged what many in the industry have been reluctant to admit: Moore’s Law—the decades-old observation that transistor counts, and with them computing power, double approximately every two years—has slowed markedly. The number of transistors that can be packed onto chips “can’t possibly keep up with the 10-times-larger models” being developed.
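The arithmetic behind Huang’s point is stark. In the sketch below, chips double on the classic two-year Moore’s Law cadence while models grow tenfold per generation; the one-generation-per-year pace is an assumption for illustration, not a figure from his remarks:

    # Transistor counts doubling every two years vs. models growing ~10x
    # per generation (a yearly model cadence is assumed for illustration).
    for year in (0, 2, 4, 6):
        chip_factor = 2 ** (year / 2)  # Moore's Law: 2x every two years
        model_factor = 10 ** year      # assumed: 10x per yearly generation
        print(f"Year {year}: chips x{chip_factor:.0f}, models x{model_factor:,}")

After six years on these assumptions, silicon has improved eightfold while model scale has grown a millionfold.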

This bottleneck is shifting competitive advantage from pure computing power to system architecture and memory integration—areas where Asian manufacturers like SK Hynix, Samsung Electronics, and ASE Technology hold dominant positions.

The Regulatory Vacuum

What makes the current moment particularly volatile is the absence of coordinated global governance. The European Union’s AI Act represents the most comprehensive regulatory framework to date, but enforcement remains patchy. The United States has taken a largely hands-off approach, deferring to industry self-regulation. China has imposed strict controls on AI content but is simultaneously racing to dominate the underlying technology.

Liv Boeree, strategic adviser to the Center for AI Safety, compares AI to biotechnology: a technology with tremendous upside potential that also carries catastrophic downside risks.

“With its incredible power comes incredible risk, especially given the speed with which it is being developed and released. If AI development went at a pace where society can easily absorb and adapt to these changes, we’d be on a better trajectory.” — Liv Boeree, Center for AI Safety

The 2026 International AI Safety Report, which Bengio chaired, catalogs the risks of advanced AI systems in unprecedented detail. But the report’s very existence highlights the problem: researchers are documenting risks faster than policymakers can address them.

What Comes Next

For the AI industry, the resignations represent more than a public relations problem. They signal a fundamental disagreement about the relationship between capability and caution. The researchers leaving these companies aren’t anti-technology Luddites—they’re true believers who fear their creations are being deployed too quickly.

The market, for now, appears unconcerned. Oracle shares fell 11% in the second half of 2025 after the company projected capital expenditure exceeding operating cash flow, yet most AI-adjacent stocks continue to trade at premium valuations. Investors are betting that the technology’s potential justifies the risks.

But the resignations keep coming. And each departure adds weight to a question that the industry has been reluctant to confront: What if the people building these systems are more frightened of them than the public realizes?


This article was reported by the ArtificialDaily editorial team. For more information, visit Al Jazeera and GAM Investments.
