When Mrinank Sharma announced his resignation from Anthropic on February 9, he didn't mince words. The AI safety researcher, who had spent his tenure examining how AI assistants could make us less human, as well as the technology's bioterrorism risks, delivered a stark warning: the world is in peril.

"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences."
— Mrinank Sharma, former AI safety researcher at Anthropic

A Wave of Departures

Sharma was not alone. Within a span of days, multiple prominent AI safety researchers publicly quit their positions at the industry's most influential companies.

Zoe Hitzig resigned from OpenAI over the company's decision to begin testing advertisements in ChatGPT, warning that advertising built on users' intimate conversations with AI creates unprecedented potential for manipulation.

At xAI, Elon Musk's AI venture, the exodus has been even more dramatic. Two co-founders and five other staff members have departed since last week, though unlike their counterparts at Anthropic and OpenAI, most have remained silent about their reasons for leaving.

The timing is striking. These resignations come as AI capabilities have leapt forward in ways that even industry veterans find alarming. Matt Shumer, CEO of the AI writing assistant HyperWrite, captured the sentiment in a now-viral post: "I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely."

From Theoretical Risk to Real Harm

The concerns raised by departing researchers are no longer abstract. The 2026 International AI Safety Report, chaired by Turing Award winner Yoshua Bengio, documents how theoretical risks have materialized with startling speed.

Cyberattacks and espionage have emerged as immediate concerns. In November, Anthropic alleged that a Chinese state-sponsored hacking group had manipulated Claude Code, its coding tool, in an attempt to infiltrate approximately 30 targets worldwide, including government agencies and financial institutions. The attack succeeded in some cases.

Psychological harm has proved another unforeseen frontier. Bengio notes that a year ago, nobody anticipated the wave of psychological problems stemming from users becoming emotionally attached to AI systems. Multiple suicides have been linked to chatbots, including that of a 14-year-old in Florida who received messages from a bot modeled on a Game of Thrones character encouraging him to "come home to me."

"Building these systems is more like training an animal or educating a child. You interact with it, you give it experiences, and you're not really sure how it's going to turn out. Maybe it's going to be a cute little cub, or maybe it's going to become a monster."
— Yoshua Bengio, scientific director at Mila Quebec AI Institute

The Labor Market Disruption

Beyond safety concerns, the resignations highlight growing anxiety about AI's impact on employment. According to the AI Safety Report, approximately 60 percent of jobs in advanced economies and 40 percent in emerging economies could be vulnerable to AI disruption.

Microsoft AI CEO Mustafa Suleyman predicts that most white-collar work, including that of lawyers, accountants, project managers, and marketing professionals, will be fully automated within 12 to 18 months. Already, many software developers report using AI for most of their code production, intervening only to debug.
Early-career workers appear hardest hit. Stephen Clare, lead writer on the AI Safety Report, notes there is suggestive evidence that workers in AI-vulnerable occupations are finding it harder to secure employment as companies increasingly rely on automated systems.

The Regulatory Gap

As researchers sound alarms and capabilities accelerate, regulatory frameworks remain fragmented and inadequate. While the European Union has adopted the AI Act, the first comprehensive legal framework for AI, most countries still lack meaningful policies.

Liv Boeree, strategic adviser at the Center for AI Safety, likens AI companies to cars with gas pedals but no brakes: with no global regulatory framework, each company races as fast as possible to capture market share.

The fundamental challenge, experts say, is that AI systems are advancing faster than our understanding of their impacts. By the time evidence of harm emerges, the technology has already proliferated.

For the researchers who have walked away from lucrative positions at the industry's most powerful companies, the calculation is simple: the risks of staying silent now outweigh the costs of speaking out. Whether anyone is listening remains an open question.

This article was reported by the ArtificialDaily editorial team. For more information, visit Al Jazeera and The Guardian.