When the delegates gathered in New Delhi this week for the fourth global AI summit, the stakes were clear. After three previous meetings—from the safety-focused gathering in the U.K. in 2023 to last year's tense negotiations—something had to give. What emerged was a declaration that signals a fundamental shift in how the world approaches artificial intelligence governance: less emphasis on binding safety constraints, more focus on widespread adoption and open collaboration.

"The future of AI should not be decided by a few billionaires." — António Guterres, UN Secretary-General

A Declaration Without Teeth

The declaration signed at the AI Impact Summit 2026 represents both a diplomatic victory and a study in compromise. The EU, the U.S., the U.K., and, notably, Russia all signed the document, a success for Indian Prime Minister Narendra Modi's hosting efforts. But the text, published Saturday, showed little change from an earlier draft, and one critical phrase was conspicuously absent: "AI safety."

The omission is not accidental. It reflects a deliberate pivot in the global conversation around AI. Where the first summit, at Bletchley Park, was dominated by existential-risk discussions and calls for precautionary measures, New Delhi's gathering emphasized what Modi has termed the "democratization" of AI. The declaration heavily stresses "wide-scale adoption of AI" and the importance of making the technology accessible across sectors and borders.

The shift in tone marks a departure from the precautionary approach that characterized earlier summits. Instead of focusing on what could go wrong, the New Delhi declaration concentrates on what could go right—if AI is deployed broadly and openly.

Open Source Gets a Boost

Perhaps the most significant development for the AI community is the declaration's endorsement of open-source approaches.
The text explicitly recognizes that "open-source AI applications and other accessible AI approaches, where appropriate, and wide-scale diffusion of AI use cases can contribute to scalability, replicability, and adaptability of AI systems across sectors."

This language gives political backing to a movement that has often found itself on the defensive. Open-source advocates have long argued that publicly available models drive innovation and prevent concentration of power among a handful of corporations. Critics counter that openly available models pose security risks and could be misused. The declaration sides—at least rhetorically—with the openness camp.

"We've entered the virtuous cycle of AI." — Jensen Huang, CEO of Nvidia

The Geopolitical Chessboard

The summit's dynamics revealed as much about global power shifts as about AI policy. Neither the U.S. nor China sent their heads of state or top government leaders—Trump was occupied with the launch of his Board of Peace, while China was celebrating Chinese New Year. Their absence created space for India to assert itself as a third pole in AI governance.

Modi's challenge to U.S.-Chinese dominance in the AI space found allies. UN Secretary-General António Guterres used his keynote to push back against concentrated control, while other nations seized the opportunity to shape a more inclusive framework. The result is a declaration that, while non-binding, establishes a different set of priorities than those championed by either Washington or Beijing.

The U.S. position remained ambivalent. Delegation leader Michael Kratsios made clear that the Trump administration opposes global AI governance frameworks, even as the U.S. signed the declaration. It's a diplomatic tightrope: participating in multilateral forums while resisting their constraints.

Safety Advocates Sound the Alarm

Not everyone is celebrating the shift away from safety-focused governance.
Google DeepMind CEO Demis Hassabis used the summit to call for more research into AI threats "to be done urgently." He highlighted two major risks: AI being exploited by malicious users, and humans eventually losing control of increasingly autonomous systems.

OpenAI CEO Sam Altman likewise urged swift regulation at the summit—a position that might seem surprising given his company's rapid product releases, but one that reflects growing concern among industry leaders that the pace of development is outpacing safeguards.

The tension between these warnings and the summit's final declaration encapsulates the central challenge facing AI governance: how to promote innovation and access while managing genuine risks. The New Delhi summit has clearly chosen its priority. Whether that choice proves wise or reckless will only become clear in the years ahead.

This article was reported by the ArtificialDaily editorial team. For more information, visit POLITICO and Anadolu Agency.