From a conference room in Riyadh, Brad Smith delivered a message that cybersecurity professionals have been whispering about for months: artificial intelligence has become a weapon, and the attacks are already here. Microsoft’s vice chairman and president didn’t mince words. The same technology promising to revolutionize medicine and education is now supercharging the ransomware gangs and state-sponsored hackers targeting critical infrastructure worldwide.

“AI is a huge tool. Unfortunately, in this case, it’s become weaponised.” — Brad Smith, Microsoft Vice Chairman and President

The New Face of Ransomware

Smith’s warning comes as Microsoft tracks a dramatic escalation in AI-enhanced cyberattacks. The pattern is familiar but the execution has changed. Where hackers once spent days or weeks researching targets, AI now enables reconnaissance at machine speed—scanning social networks, analyzing communication patterns, and crafting convincing phishing messages tailored to individual victims.

The sophistication is startling. Attackers can now generate emails that reference colleagues by name, mimic writing styles, and exploit personal relationships—all automatically and at scale. Smith described how a typical ransomware attack might begin: “It’s somebody who wants to figure out who you are, who your friends are, who your family members are, and send something to you that is very convincing, maybe referring to somebody you know and trust, using language that is going to be perfectly written in your own language, even using your own expressions.”

This represents a fundamental shift in the threat landscape. Traditional phishing relied on volume—send enough emails and someone will click. AI enables precision targeting that makes detection exponentially harder.

Collective Defence or Digital Fragmentation

Smith’s proposed solution challenges the current trajectory of global technology policy.
While many governments are pursuing “digital sovereignty”—the idea that data and infrastructure should remain within national borders—Microsoft argues this approach could leave everyone more vulnerable.

“The only defence that will work is a collective defence that relies on a partnership between governments and trusted companies.” — Brad Smith

The partnership Smith envisions would involve unprecedented information sharing: governments and tech companies pooling threat intelligence, analyzing attack patterns across borders, and coordinating responses in real time. Such collaboration already exists in limited forms—the Cybersecurity and Infrastructure Security Agency works closely with Microsoft and other tech giants—but Smith suggests the current scale is insufficient for the AI era.

The tension is clear. Digital sovereignty appeals to governments seeking control over their citizens’ data and protection from foreign surveillance. But cyber threats don’t respect borders. A vulnerability exploited in one country often becomes a weapon used against others.

The State Actor Landscape

Microsoft’s visibility into global cyber operations gives Smith a unique perspective on state-sponsored activity. The company tracks major operations originating from China and Russia, among others. These aren’t amateur efforts—they’re well-resourced, persistent campaigns targeting everything from government agencies to critical infrastructure.

The US government’s position is paradoxical. Smith described it as “really at the top when it comes to strong cyber security protection,” crediting close daily collaboration between Microsoft and federal agencies. Yet this same government is simultaneously pursuing digital sovereignty policies that could fragment the very partnerships Smith argues are essential.

The question hanging over Smith’s remarks is whether other nations can replicate this model.
Close government-tech collaboration requires trust, legal frameworks, and institutional capacity that many countries lack. For smaller nations, the choice may be between dependence on foreign tech companies and vulnerability to sophisticated attacks.

The Sovereignty Paradox

Smith’s critique of digital sovereignty cuts to the heart of a growing policy debate. As AI capabilities proliferate, governments worldwide are asserting greater control over data flows and technology infrastructure. The European Union’s data localization requirements, China’s cybersecurity laws, and similar measures in India and elsewhere reflect legitimate concerns about privacy, security, and economic competitiveness.

But Smith argues this fragmentation carries hidden costs. “I think to some degree, the whole digital sovereignty issue is creating the risk that more governments will prioritise trying to keep everything under their own control without, in my view, appreciating that the only defence that will work is a collective defence.”

“Unless they have this kind of effective cyber security shield, then all these other advances, and even this notion of sovereignty itself, are going to prove illusory.” — Brad Smith

The challenge for policymakers is threading a needle: maintaining national control over critical systems while participating in the information-sharing networks necessary for collective security. Smith’s warning suggests the current trajectory may be heading toward a worst-of-both-worlds outcome—fragmented defenses against increasingly unified threats.

What Comes Next

Microsoft’s warning arrives at a moment of intensifying AI competition. As companies race to deploy more capable systems, security considerations often take a backseat to capability demonstrations. Smith’s remarks serve as a reminder that the same technologies being celebrated for their creative and analytical potential are being weaponized in real time.
The ransomware surge Smith described isn’t hypothetical—it’s happening now, affecting hospitals, schools, and businesses worldwide. AI hasn’t just made attacks more sophisticated; it’s made them more accessible. Tools that once required significant technical expertise can now be deployed by criminal groups with minimal capabilities.

For organizations and individuals, the implications are sobering. The email that looks like it’s from a colleague might be AI-generated bait. The security practices that sufficed last year may be inadequate against AI-enhanced attacks. And the geopolitical frameworks governing cyber conflict are struggling to keep pace with technological reality.

Smith’s call for collective defense is ultimately a bet on cooperation in an era of fragmentation. Whether governments heed that call may determine whether AI becomes primarily a tool for human flourishing—or a weapon that outpaces our ability to defend against it.

This article was reported by the ArtificialDaily editorial team. For more information, visit The National News.