The $2.5 Trillion Bet: Inside India’s AI Summit and the Race to Superintelligence

ArtificialDaily Deep Dive

At the India AI Impact Summit, tech’s most powerful leaders revealed a stark reality: we’re spending more on AI in a single year than the Manhattan Project, Apollo Program, and Interstate Highway System cost combined. But the real story isn’t the money; it’s what happens when the machines become smarter than us.

By ArtificialDaily Editorial Team | February 21, 2026 | 15 min read

World leaders and tech executives gathered in New Delhi for the India AI Impact Summit 2026. Photo: AI-generated representation

When Sam Altman took the stage in New Delhi last week, he didn’t open with the usual platitudes about AI’s transformative potential. Instead, the OpenAI CEO delivered a warning that sent ripples through the auditorium: superintelligence could arrive by 2028, just two years from now. Not decades. Not sometime in the distant future. Two years.

Altman’s prediction wasn’t isolated hyperbole. It was part of a broader narrative emerging from the India AI Impact Summit, where the world’s most powerful tech leaders gathered to confront an uncomfortable truth: we’re in the final sprint of a race that will determine the future of human civilization, and nobody knows exactly where the finish line is.

The Staggering Scale of the AI Investment

To understand the magnitude of what’s happening, consider this: global spending on AI is forecast to reach $2.5 trillion in 2026, according to Gartner. That’s not a typo. Trillion, with a “T”.

The $2.5 Trillion Question: projected AI spending in 2026 ($2.5T) versus the Manhattan Project ($36B), the Apollo Program ($250B), and the US Interstate Highway System ($620B).

To put this in perspective, the Manhattan Project, which gave us the atomic bomb, cost approximately $36 billion in today’s dollars. The Apollo Program that put humans on the moon? $250 billion.
The entire US Interstate Highway System, built over 35 years? $620 billion. AI spending in 2026 alone will exceed all three combined. And unlike those government-led projects, this investment is flowing through private markets, venture capital, and corporate R&D, making it one of the largest privately financed technological waves in human history.

“We’re in one of the most momentous times in human history. It’s going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century.” — Demis Hassabis, CEO, Google DeepMind

India’s Moment: The MANAV Vision

Against this backdrop of astronomical spending and existential predictions, Indian Prime Minister Narendra Modi unveiled the country’s MANAV vision, a framework for ethical AI development that puts India at the center of the global AI governance conversation.

MANAV (which translates to “human” in Hindi) represents India’s attempt to ensure that AI development remains aligned with human values and societal benefit. The vision emphasizes:

- Democratic access to AI: preventing the technology from becoming the exclusive domain of wealthy nations and corporations
- Ethical guardrails: ensuring AI serves humanity rather than replacing it
- Economic inclusion: using AI to address India’s development challenges
- Sovereign AI capabilities: building domestic infrastructure and talent

The vision isn’t just rhetoric. Alongside it came concrete commitments that signal India’s seriousness about becoming an AI powerhouse. Reliance Industries, led by Mukesh Ambani, announced a staggering ₹10 lakh crore ($110 billion) investment over seven years to build AI infrastructure across India. To put that in context, it’s roughly equivalent to India’s entire annual defense budget, committed to AI alone.
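The cost comparisons above reduce to simple arithmetic, which the sketch below checks: the three historic mega-projects summed against the 2026 AI forecast, and the rupee-to-dollar conversion of the Reliance pledge. The dollar figures come from the article itself; the ₹90-per-dollar exchange rate is an assumption for illustration only.

```python
# Back-of-the-envelope check of the article's cost comparisons.

# Historic mega-projects, in billions of today's dollars (per the article)
manhattan_project = 36
apollo_program = 250
interstate_highway = 620

combined = manhattan_project + apollo_program + interstate_highway
ai_spending_2026 = 2500  # Gartner's $2.5T forecast, in billions

print(f"Combined mega-projects: ${combined}B")          # $906B
print(f"2026 AI spend exceeds them: {ai_spending_2026 > combined}")  # True

# Reliance's pledge: ₹10 lakh crore. 1 lakh = 1e5 and 1 crore = 1e7,
# so 10 lakh crore rupees = 10 * 1e5 * 1e7 = 1e13 rupees.
rupees = 10 * 10**5 * 10**7
rupees_per_usd = 90  # assumed exchange rate, not from the article
usd_billions = rupees / rupees_per_usd / 1e9
print(f"Reliance pledge: ~${usd_billions:.0f}B")        # ~$111B
```

At an assumed ₹90 per dollar the pledge works out to roughly $111 billion, consistent with the article’s rounded $110 billion figure; the exact dollar value moves with the exchange rate used.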
“India will be a powerhouse for AI across the world.”

The Superintelligence Timeline: From Science Fiction to Strategic Planning

Perhaps the most striking theme from the summit was the convergence of expert opinion on the timeline for artificial general intelligence (AGI) and superintelligence. Where previous conferences might have debated whether such systems were possible, the conversation in New Delhi focused on when, and how to prepare.

Sam Altman’s 2028 prediction for “early superintelligence” was the headline grabber, but it wasn’t an outlier. Demis Hassabis suggested AGI could arrive in 5-8 years. Dario Amodei, CEO of Anthropic, has previously indicated similar timelines. The consensus among those building the most advanced systems is that we’re not talking about decades; we’re talking about single-digit years.

“We expect the world may need something like the IAEA for international coordination of AI. We need the ability to rapidly respond to changes in circumstances.” — Sam Altman, CEO, OpenAI

Altman’s reference to the International Atomic Energy Agency, the body that oversees nuclear non-proliferation, is telling. It suggests that the AI industry itself recognizes that the technology it’s building may require the kind of international oversight previously reserved for weapons of mass destruction.

The AI Divide: A New Form of Inequality

While the summit celebrated AI’s potential, multiple speakers warned of a darker possibility: the emergence of an “AI divide” that could exacerbate global inequality in unprecedented ways.

Google CEO Sundar Pichai was explicit: “We cannot allow the digital divide to become an AI divide.” His concern is rooted in the reality of current AI development: the vast majority of investment, talent, and infrastructure is concentrated in the United States and China. The rest of the world risks becoming mere consumers of AI systems built elsewhere, with little control over the technology shaping their economies and societies.
The numbers bear this out. According to the Stanford AI Index Report, the US has captured 62% of global private AI investment since 2013, with China a distant second at 15%. India, despite its massive population and tech talent pool, has received just 0.7% of global AI investment.

Global AI Investment by Country (2013-2024):

- United States: $471B (62%)
- China: $119B (15%)
- United Kingdom: $28B (4%)
- India: $11B (0.7%)

Anthropic CEO Dario Amodei highlighted the gap between AI capabilities and real-world impact, noting that “there are just frictions to adopt things through enterprises, and I think even more so in the developing world.” The technology may be advancing at breakneck speed, but its benefits are not automatically flowing to those who need them most.

The Pentagon Problem: AI Safety vs. National Security

One of the most concrete conflicts to emerge from the summit involves Anthropic and the US Department of Defense. Anthropic, which has positioned itself as the safety-focused AI company, is in a standoff with the Pentagon over a $200 million contract. The issue: Anthropic wants to include guardrails preventing its AI from being used for autonomous weapons or mass surveillance. The Pentagon insists on unrestricted access for all lawful purposes. It’s a microcosm of the larger tension between AI safety and national security that will only intensify as capabilities grow.

This conflict illustrates a fundamental challenge: who decides how AI is used, and who has the authority to enforce those decisions? If a company building AI systems can refuse military contracts on ethical grounds, what happens when less scrupulous actors have no such compunctions?

What We’re Actually Building: The World Model Problem

Meta’s former chief scientist Yann LeCun offered a sobering perspective on the current state of AI.
Despite systems that can pass the bar exam and win mathematics olympiads, he pointed out what we don’t have: domestic robots, reliable self-driving cars, or AI that can learn to drive in 20 hours like a teenager.

“Why do we have systems that can pass the bar exam and win mathematics olympiads? But we don’t have domestic robots. We don’t even have self-driving cars.” — Yann LeCun, Founder, AMI Labs

LeCun’s explanation is that current AI lacks “world models”: mental models of how the physical world works that allow humans and animals to plan, reason, and predict consequences. We’re building systems that can process language at superhuman levels but can’t understand that a ball rolling behind a couch still exists.

This suggests that the path to superintelligence may not be as straightforward as simply scaling up current systems. We may need fundamental breakthroughs in how AI understands and interacts with the world, breakthroughs that are not guaranteed to arrive on Altman’s 2028 timeline.

The Language Problem: English Is the Default, but Not the World

Microsoft’s Brad Smith raised an issue that often gets lost in the AI hype: the technology is overwhelmingly optimized for English. Performance on benchmarks in other languages lags significantly, creating a barrier to global adoption. “We need to make AI as effective in every language as it is in English, and today, it is not,” Smith said.

For a technology that promises to democratize knowledge and capability, this linguistic bias represents a significant limitation, and an opportunity for countries like India with diverse language landscapes.

The Uncomfortable Truth: Nobody Really Knows

Perhaps the most honest takeaway from the India AI Summit is that nobody, not Altman, not Hassabis, not the policymakers trying to regulate this technology, really knows how this plays out.
We’re conducting the largest experiment in human history with inadequate preparation, insufficient coordination, and timelines that are compressing from decades to years. The $2.5 trillion being spent on AI in 2026 represents a massive bet on a future that remains fundamentally uncertain.

Will we get superintelligence by 2028? Will it solve climate change and disease, or will it concentrate power in unprecedented ways? Will the AI divide become a permanent feature of global inequality, or will initiatives like India’s MANAV vision succeed in democratizing the technology?

What we do know is that the decisions being made now, in corporate boardrooms, government summits, and research labs, will shape the answers to these questions. The India AI Impact Summit was an attempt to bring some coordination to this chaotic process. Whether it succeeds may determine whether the AI revolution lifts all boats, or sinks most of them.

“AI represents the biggest platform shift of our lifetimes. But its benefits are neither guaranteed nor automatic.”

About This Analysis

This article synthesizes reporting from multiple sources including Business Insider, Al Jazeera, The Economic Times, Forbes India, and official summit communications. For more AI news and analysis, explore our Daily Brief and Research sections.

Related Articles

- Anthropic Clashes With Pentagon Over AI Use Restrictions
- Tech Leaders Warn Superintelligence Could Arrive by 2028 at India AI Summit
- Reliance Jio to Invest $110 Billion in AI Infrastructure
- OpenAI, Reliance Partner to Add AI Search to JioHotstar