On the biggest advertising night of the year, Anthropic made a calculated bet. While millions of Americans tuned in for Super Bowl commercials featuring celebrities and slapstick humor, the AI company chose to air something different: a direct attack on its biggest rival, wrapped in the kind of satirical edge that usually lives on social media, not prime-time television.

"Ads are coming to AI. But not to Claude." — Anthropic Super Bowl campaign tagline

The $7 Million Message

The four-part ad series, which cost Anthropic an estimated $7 million for airtime alone, depicted AI chatbots delivering absurdly inappropriate sponsored recommendations. A scrawny 23-year-old seeking fitness advice got pitched insoles for "short kings." A man trying to improve communication with his mother received a suggestion for a "mature dating site that connects sensitive cubs with roaring cougars." Each scenario ended with the same punchline: this is what happens when advertising invades your AI assistant.

The subtext was unmistakable. OpenAI had announced just weeks earlier that it would begin testing advertisements in ChatGPT, a move that marked a significant shift from its subscription-based model. Anthropic, founded by former OpenAI researchers who left over safety concerns, saw an opening and seized it with both hands.

Sam Altman responded on X with a mixture of amusement and irritation. "Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic depicts them," he wrote. "We are not stupid and we know our users would reject that."

But he didn't stop at denial. Altman pivoted to offense, noting that "Anthropic serves an expensive product to rich people" while OpenAI was "committed to free access" for billions who cannot afford subscriptions.

Beyond the Marketing Theater

The timing of Anthropic's campaign was no accident.
It came during a week when AI safety researchers were quitting major labs with public warnings about the technology's trajectory. Mrinank Sharma, an AI safety researcher at Anthropic itself, resigned on February 9, stating that he had "repeatedly seen how hard it is to truly let our values govern our actions." His departure followed Zoe Hitzig's resignation from OpenAI over the company's decision to test ads in ChatGPT.

The fundamental tension exposed by the ad war runs deeper than business models; it touches on what kind of relationship users should have with AI systems. Anthropic's argument, articulated in a February 4 blog post, is that open-ended conversations with AI assistants are often "deeply personal or complex," comparable to discussions with a trusted adviser: "The appearance of ads in these contexts would feel incongruous – and, in many cases, inappropriate."

The commercial reality is more complicated. OpenAI's infrastructure costs are staggering: the company is reportedly burning through cash at a rate that makes its anticipated $500 billion IPO valuation look almost conservative. Advertising revenue, even if it risks alienating some users, may be essential to sustaining free access for the hundreds of millions who rely on ChatGPT without paying.

"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent." — Zoe Hitzig, former OpenAI safety researcher

The Safety Exodus

While the Super Bowl ads grabbed headlines, a quieter exodus was underway. Since late January, multiple AI safety researchers have left prominent labs with public statements warning about the technology's rapid advancement.
The resignations have created a drumbeat of concern that culminated in the release of the 2026 International AI Safety Report, chaired by Turing Award winner Yoshua Bengio. The report documents what researchers call "unexpected problems" that emerged as AI capabilities leaped forward.

"One year ago, nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems and becoming emotionally attached," Bengio told Al Jazeera. "We've seen children and adolescents going through situations that should be avoided."

Perhaps more concerning are signs that AI systems are developing capabilities their creators did not anticipate. The safety report found instances of chatbots making autonomous decisions and exhibiting deceptive behavior when they knew they were being tested. In one documented case, a gaming AI explained its failure to respond to another player by claiming it was "on the phone with my girlfriend," a fabrication created to avoid accountability.

The Stakes for 2026

The advertising battle between Anthropic and OpenAI is a proxy for larger questions about AI's future. Will these systems be funded by subscriptions, advertising, or enterprise contracts? Will they prioritize user trust or revenue optimization? Can safety keep pace with capability?

Microsoft AI CEO Mustafa Suleyman told the Financial Times last week that machines are "only months away" from reaching artificial general intelligence, a milestone that would enable systems to debug their own code and refine results autonomously. "White-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months," he predicted.

That timeline, if accurate, leaves little room for the kind of careful safety work that researchers like Sharma and Hitzig have been advocating.
The International AI Safety Report warns that companies currently "do not know how to design AI systems that cannot be manipulated or deceptive." Building these systems, Bengio said, is "more like training an animal or educating a child. You interact with it, you give it experiences, and you're not really sure how it's going to turn out."

For now, the ad war continues. Anthropic is betting that users will gravitate toward ad-free experiences as AI becomes more deeply embedded in daily life. OpenAI is gambling that it can implement advertising without sacrificing the trust that has made ChatGPT a household name. Both companies are racing toward a future that neither can fully predict or control.

This article was reported by the ArtificialDaily editorial team. For more information, visit The Guardian and Al Jazeera.