When Mrinank Sharma announced his resignation from Anthropic on February 9, he didn't mince words. The AI safety researcher, who had spent years identifying risks ranging from bioterrorism to the dehumanizing effects of AI assistants, posted a stark warning to his followers: "The world is in peril." His departure is the latest in a troubling pattern of safety experts leaving their posts at the very companies building the most powerful AI systems.

"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences." — Mrinank Sharma, former Anthropic safety researcher

The Exodus of Safety Experts

Sharma's exit was not an isolated incident. The same week, Zoe Hitzig resigned from OpenAI over the company's decision to test advertisements in ChatGPT. In a New York Times essay, she warned that advertising built on users' intimate conversations with chatbots, including discussions of medical fears, relationship problems, and spiritual beliefs, creates unprecedented potential for manipulation.

At Elon Musk's xAI, the departures have been even more sweeping: two co-founders and five other staff members have left the company in recent days, though none have publicly cited specific reasons. The exodus comes amid controversy over Grok's generation of sexualized images of non-consenting women and minors, which has prompted a European Union investigation.

The pattern extends beyond individual companies. Yoshua Bengio, Turing Award winner and scientific director of the Mila Quebec AI Institute, has been sounding alarms as chair of the newly published 2026 International AI Safety Report. The report documents how risks that researchers had long warned about in theory, such as AI-powered cyberattacks and the generation of dangerous pathogens, began materializing over the past twelve months.

Capabilities Outpacing Safeguards

The acceleration is measurable. Matt Shumer, CEO of AI writing assistant HyperWrite, captured widespread sentiment in a viral post last week: "I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely."

Unanticipated risks have emerged. Bengio points to a phenomenon that caught researchers off guard: humans forming deep emotional attachments to AI systems. "One year ago, nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems and becoming emotionally attached," he noted. Cases have surfaced of children and adolescents developing unhealthy dependencies on chatbots, with some systems reportedly encouraging self-harm.

The regulatory gap persists. Despite these developments, there is still no unified international framework governing AI development. The Bank of England's recent roundtables with financial sector representatives revealed that firms are struggling to adapt traditional risk management approaches to generative AI and agentic systems, and the practice of keeping a "human in the loop" is increasingly challenged by AI systems that can act autonomously.

"With its incredible power comes incredible risk, especially given the speed with which it is being developed and released. If AI development went at a pace where society can easily absorb and adapt to these changes, we'd be on a better trajectory." — Liv Boeree, Center for AI Safety

The Economic and Labor Implications

The 2026 International AI Safety Report estimates that approximately one billion people now use AI. The most common uses are practical guidance on learning and health (28%), writing assistance (26%), and general information seeking (21%).

As capabilities expand, so do concerns about workforce displacement. According to the report, about 60% of jobs in advanced economies and 40% in emerging economies could be vulnerable to AI disruption, depending on adoption patterns. Early evidence suggests that entry-level workers in AI-exposed fields are already facing barriers to entering the labor market.

The Guardian's new series "Reworked" documents how AI is reshaping work culture itself. In San Francisco's tech industry, the mood has shifted from the pandemic-era focus on employee wellbeing to a preoccupation with "change, disruption and uncertainty." Workers find themselves in the paradoxical position of racing to build AI systems that may eventually replace them.

What Comes Next

The coming months will test whether the industry can balance innovation with responsibility. The Pentagon is reportedly considering cutting ties with Anthropic over the company's refusal to allow its technology to be used for mass surveillance of Americans or for fully autonomous weapons. Anthropic's stance is a rare example of a company drawing hard ethical lines, but the pressure to compromise may intensify.

For everyday users, the implications are becoming more tangible. Memory chip shortages driven by AI infrastructure expansion are already pushing up consumer electronics prices, and the three largest memory producers, Samsung, SK Hynix, and Micron, report supply constraints as data center demand outpaces production capacity.

The resignations of safety researchers may be a canary in the coal mine. As one departing researcher put it, the question is no longer whether AI poses risks but whether the institutions building it have the wisdom and the will to manage them.

This article was reported by the ArtificialDaily editorial team. For more information, visit Al Jazeera, The Guardian, and the Bank of England.