When Vietnam’s artificial intelligence law quietly went into effect on Sunday, it marked more than just another regulatory milestone. It symbolized a fundamental shift in how nations are grappling with a technology that has moved from science fiction to the center of economic and political power faster than any government could have anticipated.

“What stands out about China is how regulated it is, despite the innovation happening. Firms need to register their models with the government, and there are stricter content moderation, labelling and user verification requirements not seen elsewhere.” — Seth Hays, Asia AI Policy Monitor

The EU’s Risk-Based Gamble

The European Union has positioned itself as the trailblazer, having adopted in 2024 what it calls “the world’s first comprehensive AI law.” The legislation takes a risk-based approach: if a system is high-risk, companies face a stricter set of obligations before being authorized to operate in the EU market.

These landmark rules have faced pushback from Washington under President Donald Trump, but also from businesses and governments within Europe that complain the regulations could hamper growth. The EU bowed to pressure last year and proposed changes, including partially delaying the law’s application, a move Brussels says will help European companies compete globally.

The law will now be fully applicable in 2027, but the EU already allows regulators to ban systems deemed to pose unacceptable risks. That could include “social scoring” systems that lead to discrimination by classifying individuals or groups based on social behavior or personal traits.

America’s Hands-Off Approach

The United States, home to ChatGPT maker OpenAI, chip titan Nvidia and tech giants like Google, is taking a markedly different path. Vice President JD Vance has warned against “excessive regulation” that “could kill a transformative sector.” Global governance is also off the table for Washington.
At a major AI summit in New Delhi in February, the US delegation head said the country “totally” rejects international oversight of AI development.

State-level action has emerged in the vacuum. California enacted a first-of-its-kind law in October requiring AI chatbot operators to implement safeguards such as referring people who express thoughts of suicide to crisis services. The White House, however, is reportedly exploring ways to prevent this patchwork of state regulations from proliferating.

“91 countries and organizations called for ‘secure, trustworthy and robust’ AI. But their declaration, signed by the United States and China, was criticized by AI safety campaigners for being too generic to protect the public.”

Asia’s Divergent Paths

South Korea represents another model entirely. A wide-ranging law took full effect in January, requiring companies to tell users when products use generative AI. It also mandates clear labeling of content, including deepfakes, that cannot readily be differentiated from reality.

Places like Taiwan and Japan are taking a lighter touch, shying away from penalties in favor of voluntary guidelines promoting innovation. China, racing to challenge US dominance in the technology, has its own complex and evolving set of guardrails that includes model registration requirements and strict content moderation rules.

The Governance Gap

Many other countries, from Brazil to the United Arab Emirates, are implementing AI frameworks that can roughly be divided into risk-based rules like the EU’s, or pro-innovation guidelines. The approaches vary widely, creating a fragmented global landscape that companies must navigate.

A 40-member United Nations expert panel has been established to work toward “science-led governance” of the technology, according to UN Secretary-General Antonio Guterres. Whether such international coordination can bridge the growing divide between regulatory philosophies remains an open question.
What is clear is that the era of ungoverned AI development is ending. The only question now is which model, or combination of models, will define the rules of the road for the most consequential technology of our time.

This article was reported by the ArtificialDaily editorial team.