When Sir Demis Hassabis took the stage at the AI Impact Summit in Delhi this week, he carried more than his Nobel Prize credentials. The CEO of Google DeepMind came with a warning that has been echoing through the halls of power across the globe: the threats posed by artificial intelligence are real, and the research to address them needs to happen urgently.

“More research on the threats of artificial intelligence needs to be done urgently. We need smart regulation for the real risks posed by this technology.”
— Sir Demis Hassabis, CEO, Google DeepMind

A Summit Divided: The Governance Debate

The AI Impact Summit, the largest-ever global gathering of world leaders and tech executives, brought together delegates from more than 100 countries. Indian Prime Minister Narendra Modi opened proceedings with a call for international cooperation, stating that countries must work together to ensure AI delivers benefits for all. Sam Altman, CEO of OpenAI, echoed the urgency, calling for swift regulation to address the technology’s rapid advancement.

But the unified front quickly fractured. The United States delegation, led by White House technology adviser Michael Kratsios, delivered a blunt rejection of global governance frameworks. “As the Trump administration has now said many times: We totally reject global governance of AI,” Kratsios stated. “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralized control.”

UK Deputy Prime Minister David Lammy struck a middle ground, acknowledging that safety and security must come first while emphasizing that politicians need to work “hand in hand” with technology companies.

The Two Threats Keeping Researchers Awake

Hassabis outlined what he sees as the two primary dangers facing the AI industry. First, the technology falling into the hands of “bad actors” who could exploit it for malicious purposes.
Second, and perhaps more unsettling, the risk of humans losing control of systems as they become increasingly autonomous and powerful.

Robust guardrails are essential, Hassabis argued, but building them requires time that the current pace of development may not allow. When asked whether he had the power to slow AI progress to give researchers more breathing room, he was candid about the limits of any single company’s influence. “We’re only one player in the ecosystem,” he said, acknowledging that keeping up with the speed of AI development is “the hard thing” for regulators worldwide.

“We don’t always get things right, but we get it more correct than most.”
— Sir Demis Hassabis on balancing bold innovation with responsible deployment

The China Factor: A Race Measured in Months

Beyond the governance debate, Hassabis addressed the geopolitical dimension of AI development. He believes the United States and other Western nations currently hold a “slight” lead over China in the race for AI dominance, but he was careful to qualify that assessment. That lead, he suggested, could evaporate in “only a matter of months.”

The implication is clear: any pause for safety research carries the risk of ceding ground to competitors who may not share the same priorities. It is a tension that sits at the heart of the current moment in AI. The technology is advancing faster than our ability to understand its implications, yet slowing down feels like a luxury no nation can afford.

AI as a Superpower: What Comes Next

Looking ahead ten years, Hassabis predicted AI would become “a superpower” in terms of what people can create. He remains bullish on the value of technical education, suggesting that a background in STEM will still provide an advantage in an AI-augmented world. But he also sees a democratizing effect: as AI systems become capable of writing code, the barrier to building new applications will drop dramatically.
“The key thing becomes taste and creativity and judgement,” he said. Those are skills that may prove more valuable than raw technical ability.

The summit concluded with companies and countries expected to deliver a shared statement on handling artificial intelligence, though the divisions exposed during the event suggest any consensus will be hard-won.

This article was reported by the ArtificialDaily editorial team. For more information, visit BBC News and Anadolu Agency.