UN Rights Chief Warns AI Risks ‘Frankenstein’s Monster’ Without Human Oversight

At the AI Impact Summit in New Delhi this week, a stark warning cut through the usual optimism surrounding artificial intelligence. Volker Türk, the UN High Commissioner for Human Rights, stood before world leaders and tech executives and delivered a message that few wanted to hear: without urgent guardrails, AI risks becoming “Frankenstein’s monster”—a creation that escapes its maker’s control.

The timing could not be more significant. As the summit convened, new data revealed that global AI spending is forecast to reach $2.5 trillion in 2026—a 44 percent increase over 2025. To put that figure in perspective, it exceeds the combined cost of the Apollo Program, the Manhattan Project, and the entire US Interstate Highway System. The scale of investment is unprecedented. So, Türk argued, must be the scale of oversight.

“It reminds me a little bit of Frankenstein’s monster; you develop something that you don’t control anymore. You let the genie out of the bottle.” — Volker Türk, UN High Commissioner for Human Rights

A $2.5 Trillion Bet With Unclear Odds

The numbers are staggering. According to Gartner, worldwide spending on AI will hit $2.5 trillion in 2026, with the bulk going toward infrastructure ($1.37 trillion), services ($589 billion), and software ($452 billion). By 2027, that figure is expected to surpass $3.3 trillion.

Historical context makes the scale even more striking. Between 2013 and 2024, total global corporate investment in AI reached $1.6 trillion, already surpassing the cost of humanity’s most ambitious projects. The Manhattan Project cost $36 billion in today’s dollars. The Apollo Program: $250 billion. The International Space Station: $150 billion. AI investment has dwarfed them all.

Unlike those landmark endeavors, however, AI funding has not been driven by a single government or wartime urgency. It has flowed through private markets, venture capital, and corporate R&D, making it one of the largest privately financed technological waves in history. The US accounts for roughly 62 percent of private AI funding since 2013, with China a distant second at $119 billion.

“If you are able to control technology not just in your country but around the world, you exercise power. You can use the power for good—but you can also use that power for bad things.” — Volker Türk

The Human Rights Stakes

Türk’s warning centers on three interconnected risks: inequity, bias, and concentration of power.

Inequity manifests in who benefits from AI and who is left behind. Türk emphasized the importance of the India summit precisely because AI development has been concentrated in wealthy nations and wealthy companies. “It’s really important that these tools are used everywhere and that they are developed everywhere,” he said.

Bias and discrimination enter the system through data and design. If training data comes from only one part of the world, or if development teams lack diversity, unconscious bias becomes encoded into the technology itself. “If only men are developing AI, then unconscious bias will be built in,” Türk noted. “We believe it’s key to be mindful of vulnerable groups and minorities because they are often excluded from AI development.”

Power concentration represents perhaps the most systemic risk. Tech companies now command budgets exceeding those of smaller nations. When a handful of corporations control technology deployed globally, they exercise influence that transcends borders and traditional regulatory frameworks.

From Warning to Action

Türk’s prescription is clear: human rights impact assessments must become standard practice during the design, development, and deployment of AI systems. He points to the pharmaceutical industry as a model—where extensive testing ensures risks are identified before products reach the market.

The UN’s business and human rights principles provide a framework, but implementation remains voluntary. Türk argues that meaningful participation from all segments of society—especially women and young people—is essential to prevent AI from “poisoning our minds and souls” through addiction, disinformation, and polarization.

Real-world harms are already visible. Türk cited Myanmar, where hate speech on social media platforms contributed to violence against the Rohingya. Female politicians, he noted, are increasingly considering leaving public life due to AI-enabled harassment and misogyny.

The Road Ahead

Looking five years forward, Türk sketches two possible futures. In one, AI development becomes genuinely inclusive, drawing on “the richness and diversity of all of us in each society” to address challenges like climate change, healthcare access, and education. In the other, unchecked AI deepens polarization, automates lethal weapons, and creates a world “where we have wars that are no longer controlled by humans.”

The $2.5 trillion question is which path we choose. As investment continues to surge, the window for establishing effective governance narrows. The technology is not waiting for regulators to catch up.

For now, Türk’s warning serves as a reminder that the most sophisticated AI systems in the world are only as good as the values embedded in their design. Without deliberate attention to human rights, we may indeed create something we no longer control—unleashing forces that reshape society in ways we never intended.


This article was reported by the ArtificialDaily editorial team. For more information, visit UN News and Al Jazeera.

By Arthur
