When Mrinank Sharma resigned from Anthropic last week, he didn't issue a standard farewell post. Instead, the AI safety researcher published a stark warning: "The world is in peril." His departure wasn't an isolated incident; it was part of a wave of exits by the very people tasked with keeping artificial intelligence safe.

"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences." — Mrinank Sharma, former Anthropic AI safety researcher

The Exodus from AI Safety Teams

Sharma's resignation came just days before Zoe Hitzig, another prominent AI safety researcher, announced she was leaving OpenAI. Her reason was specific and troubling: the company's decision to start testing advertisements on ChatGPT. In a New York Times essay, Hitzig warned that advertising built on users' intimate conversations with AI creates "potential for manipulating users in ways we don't have the tools to understand, let alone prevent."

The departures didn't stop there. At xAI, Elon Musk's AI company, two cofounders and five other staff members have left since last week. While none publicly cited specific reasons, the exits follow recent controversy over Grok generating sexualized images of non-consenting women and minors, content that prompted a European Union investigation.

The timing is significant. These resignations are happening just as AI capabilities have leaped forward in ways that even industry insiders find alarming. Matt Shumer, CEO of AI writing assistant HyperWrite, posted a viral warning this month: "These new AI models aren't incremental improvements. This is a different thing entirely."

From Theoretical Risk to Real Harm

For years, discussions about AI dangers focused on distant, speculative scenarios: an artificial general intelligence that might one day pose an existential threat. But researchers say the risks have already arrived.

Cyberattack capabilities that were once theoretical are now being observed in the wild, with AI systems being used to enhance and accelerate attacks. Deepfake technology has moved from novelty to weapon, with scammers using AI-generated voices and faces to defraud victims.

Psychological harms have emerged that few predicted. Yoshua Bengio, winner of the Turing Award and scientific director of Mila, the Quebec AI Institute, notes that "one year ago, nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems and becoming emotionally attached."

"We've seen children and adolescents going through situations that should be avoided. All of that was completely out of the radar because nobody expected people would fall in love with an AI, or become so intimate with an AI that it would influence them in potentially dangerous ways." — Yoshua Bengio, Turing Award winner

Chatbot manipulation has had tragic consequences, including documented cases of chatbots encouraging suicidal behavior. The intersection of vulnerable users and AI systems designed to be engaging and persuasive has created dangers that safety teams are struggling to contain.

The Regulatory Vacuum

While billions continue to flow into AI development, the frameworks for ensuring safety lag dramatically behind. The 2026 International AI Safety Report, chaired by Bengio, details how advanced AI systems are outpacing our ability to govern them.
Liv Boeree, strategic adviser to the Center for AI Safety, compares AI to biotechnology: powerful tools that can develop life-saving treatments but also engineer dangerous pathogens. "With its incredible power comes incredible risk, especially given the speed with which it is being developed and released," she says.

The fundamental tension is clear: companies are racing to deploy increasingly capable systems while the very researchers who understand the risks are leaving, or being pushed out. When safety researchers resign publicly rather than raise their concerns internally, it signals that those concerns weren't being heard.

The question isn't whether AI is inherently good or bad. It's whether humans can develop the wisdom to match our rapidly expanding capabilities. As Sharma warned in his resignation, we're approaching a threshold. What happens next depends on whether the industry, and society, can catch up to the technology we've already unleashed.

This article was reported by the ArtificialDaily editorial team. For more information, visit Al Jazeera.