When the lights went up at Bharat Mandapam in New Delhi this week, the stage held an unusual tableau: the CEOs of the world's most powerful AI companies standing shoulder to shoulder with Indian Prime Minister Narendra Modi, their competing visions for artificial intelligence's future laid bare for a global audience. The India AI Impact Summit wasn't just another industry conference—it was a moment when the architects of our AI future tried to convince the world they could be trusted with what comes next.

"By the end of 2028, more of the world's intellectual capacity could reside inside data centers than outside them." — Sam Altman, OpenAI CEO

A Tale of Two Visions

Sam Altman came bearing predictions that would have sounded like science fiction just a few years ago. Speaking to a packed hall of policymakers, researchers, and industry leaders, the OpenAI CEO suggested that early forms of superintelligence could emerge within just a few years. The trajectory, he argued, is undeniable: systems have already evolved from struggling with high school math to deriving novel results in theoretical physics.

Altman's vision is one of radical abundance. He described a future where superintelligent systems could outperform any human CEO or research scientist, and where AI makes products and services cheaper while accelerating economic growth. India, he noted, is already living in this future's early days: 100 million Indians use ChatGPT weekly, more than a third of them students, and the country is OpenAI's fastest-growing market for Codex.

But just feet away on the same stage, Google DeepMind CEO Demis Hassabis struck a notably different tone. Where Altman sees opportunity, Hassabis sees thresholds: moments where the technology's capabilities outpace our ability to manage them. He identified two urgent risks: bad actors weaponizing beneficial technologies, and autonomous systems doing things their designers never intended.

"As the systems become more autonomous, more independent, they'll be more useful, more agent-like but they'll also have more potential for risk and doing things that maybe we didn't intend when we designed them." — Demis Hassabis, Google DeepMind CEO

The Technical Reality Check

Hassabis's caution isn't just philosophical; it's grounded in the current limitations of AI systems. He pointed to three critical gaps that separate today's models from true artificial general intelligence. First, the absence of continual learning: current models are largely fixed after training, unable to adapt in real time or personalize to individual contexts. Second, long-term reasoning remains elusive: systems can plan over short horizons but lack the multi-year strategic thinking that humans take for granted.

The third gap, inconsistency, may be the most telling. Hassabis noted that today's systems can win gold medals at the International Math Olympiad while stumbling over elementary math problems phrased in unfamiliar ways. "A true general intelligence system shouldn't have that kind of jaggedness," he observed. Despite these reservations, he maintains that AGI could arrive within five to ten years, a timeline that puts him closer to Altman's optimism than to the skeptics.

Altman's framework for navigating this future centers on three principles: democratization of AI as "the only fair and safe path forward," building societal resilience against risks like AI-enabled bioweapons, and ensuring broad involvement in shaping how the technology develops.
He warned against trading liberty for security, rejecting what he characterized as an implicit bargain some are willing to make: "Some people want effective totalitarianism in exchange for a cure for cancer. I don't think we should accept that trade-off, nor do I think we need to."

"The societal challenges of that may actually end up being the harder problem than the technical ones." — Demis Hassabis

The Governance Challenge

Both leaders agree on one thing: existing institutions aren't ready. Hassabis warned that current global institutions may not be equipped to manage the pace and scale of AI development, particularly given the technology's inherently cross-border nature. "It's digital, so it means it's going to affect everyone in the world, probably, and it's going to cross borders," he noted, stressing the need for forums that bring together policymakers and technologists.

Altman went further, suggesting that "something like the IAEA may be needed for international coordination of AI, especially to rapidly respond to changes in circumstances." The reference to the International Atomic Energy Agency wasn't accidental: it evokes both the promise of transformative technology and the existential risks that can accompany it.

The summit itself represents part of the answer. Beginning with the UK's Bletchley Park gathering in 2023 and continuing through Seoul, Paris, and now New Delhi, these global forums are attempting to build the coordination mechanisms that both Altman and Hassabis agree are necessary. More than 500 global AI leaders and 150 academics gathered in New Delhi, a testament to the urgency that even competing CEOs recognize.

What remains unclear is whether these conversations can translate into action quickly enough. Altman's timeline suggests we may have just a few years before the world he describes, where data centers house more intellectual capacity than human minds, becomes reality. The question hanging over Bharat Mandapam wasn't whether AI will transform society, but whether we'll be ready when it does.

This article was reported by the ArtificialDaily editorial team. For more information, see the Times of India and The Economic Times.