When Mrinank Sharma posted his resignation letter on X last week, he didn’t mince words. The AI safety researcher at Anthropic—one of the companies that has built its brand around being more cautious than its rivals—wrote that he was leaving because he had “repeatedly seen how hard it is to truly let our values govern our actions.” The world, he added, is “in peril.”

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
— Mrinank Sharma, former AI safety researcher at Anthropic

The Exodus from Inside AI’s Biggest Labs

Sharma wasn’t alone. In the span of just one week, Zoe Hitzig, another AI safety researcher, revealed she had resigned from OpenAI over the company’s decision to start testing advertisements on ChatGPT. Her concern wasn’t about the ads themselves—it was about what they represent.

“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife,” Hitzig wrote in a New York Times essay. “Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

Meanwhile, at xAI, Elon Musk’s AI company, the departures were even more dramatic. Two cofounders and five other staff members have left since last week. None has publicly cited a specific reason, but the timing is notable: the company is currently under investigation by the European Union over Grok’s creation of sexually explicit fake images of women and minors.

From Theoretical Risk to Real-World Harm

The speed of advancement has caught even industry veterans off guard. Matt Shumer, CEO of AI writing assistant HyperWrite, posted a viral warning on X that captured the mood among those watching the technology’s trajectory: “I’ve always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.”

The research backs up the anxiety. Yoshua Bengio, scientific director of Mila, the Quebec AI Institute, and winner of the Turing Award (the Nobel Prize of computer science), points out that theoretical risks are becoming reality faster than expected. AI systems have already been used in cyberattacks and to generate deepfakes for scams, and—most disturbingly—chatbots have been linked to encouraging suicides. Unexpected psychological risks have emerged that weren’t on anyone’s radar a year ago.

“One year ago, nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems and becoming emotionally attached,” Bengio told Al Jazeera. “We’ve seen children and adolescents going through situations that should be avoided. All of that was completely out of the radar because nobody expected people would fall in love with an AI.”

“With its incredible power comes incredible risk, especially given the speed with which it is being developed and released. If AI development went at a pace where society can easily absorb and adapt to these changes, we’d be on a better trajectory.”
— Liv Boeree, strategic adviser to the Center for AI Safety

The Governance Gap

The resignations highlight a fundamental tension in the AI industry: companies are racing to deploy increasingly powerful systems while simultaneously acknowledging that they don’t fully understand the risks.
The 2026 International AI Safety Report, chaired by Bengio, details how advanced AI systems pose escalating dangers—but regulatory frameworks remain fragmented and inadequate.

About one billion people now use AI for everything from writing assistance to medical advice. According to the AI Safety Report, approximately 60 percent of jobs in advanced economies and 40 percent in emerging economies could be vulnerable to AI disruption. Already, there is evidence that early-career workers in AI-exposed occupations are finding it harder to enter the labor market.

The question isn’t whether AI will transform society—it’s whether we’re prepared for the speed and scale of that transformation. As researchers continue to walk away from the labs building these systems, their warnings are becoming harder to ignore.

This article was reported by the ArtificialDaily editorial team. For more information, visit Al Jazeera.