When White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about its authenticity, it wasn’t just another political skirmish. It was a sign of how deeply AI-enabled deception has permeated our online lives. From high-profile political manipulation to the quiet spread of Russian influence campaigns using AI-generated videos, the line between real and synthetic has never been blurrier.

“We’re also trying to be a selected, desired provider to people who want to know what’s going on in the world.” — Eric Horvitz, Microsoft Chief Scientific Officer

A Blueprint for Digital Authenticity

Microsoft has now put forward a comprehensive blueprint for how to prove what’s real online. An AI safety research team at the company evaluated 60 different combinations of authentication methods, modeling how each setup would hold up under various failure scenarios, from metadata being stripped to content being deliberately manipulated.

The approach draws inspiration from the art world. Imagine authenticating a Rembrandt painting: you’d document its provenance with a detailed history, apply an invisible watermark readable by machines, and create a mathematical fingerprint based on its brush strokes. Microsoft wants to apply these same principles to digital content.

The research mapped which combinations produce reliable results that platforms can confidently display, and which are so unreliable they may cause more confusion than clarity. The findings come as legislation like California’s AI Transparency Act prepares to take effect in August.

The Technical Standards

Provenance tracking would create a detailed manifest of where content came from and how it changed hands. Watermarking embeds invisible signals that machines can read. Fingerprinting generates mathematical signatures based on content characteristics. When combined thoughtfully, these methods could help platforms distinguish between authentic media and AI-generated deception.
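To make the fingerprinting idea concrete, here is a minimal sketch of one widely used technique, a perceptual “average hash.” It is an illustration of the general approach only, not Microsoft’s method or a C2PA requirement; the function names, the 64-bit hash size, and the matching threshold are assumptions chosen for this example, and it relies on the Pillow imaging library.

```python
# A minimal sketch of content fingerprinting via an "average hash."
# Illustrative only: not Microsoft's algorithm or the C2PA spec.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint based on its pixel structure."""
    # Shrink and desaturate so the hash reflects coarse content,
    # not resolution, file format, or minor recompression noise.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the mean.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-identical content."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an original against a suspected copy.
# original = average_hash("original.jpg")
# candidate = average_hash("reposted.jpg")
# if hamming_distance(original, candidate) <= 5:
#     print("Likely the same underlying image")
```

Unlike a cryptographic hash, a perceptual fingerprint of this kind changes only slightly when an image is resized or recompressed, which is what would let a platform match reposted copies against a registry of known media.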
C2PA standards, which Microsoft helped launch in 2021, provide a technical foundation for this approach. The Coalition for Content Provenance and Authenticity has gained traction among some platforms, though adoption remains inconsistent. An audit last year found that only 30% of AI-generated test posts on major platforms were correctly labeled.

The research suggests that in some cases, showing nothing at all may be better than displaying a verdict that could be wrong. Inadequate tools could create new avenues for what researchers call “sociotechnical attacks,” in which bad actors manipulate the authentication systems themselves.

“I don’t think it solves the problem, but I think it takes a nice big chunk out of it.” — Hany Farid, UC Berkeley Professor

The Implementation Challenge

Despite publishing these recommendations, Microsoft declined to commit to using them across its own platforms. The company sits at the center of a vast AI content ecosystem: it runs Copilot for image and text generation, operates Azure cloud services providing access to OpenAI and other major models, owns LinkedIn, and holds a significant stake in OpenAI.

When asked about in-house implementation, Chief Scientific Officer Eric Horvitz stated that “product groups and leaders across the company were involved in this study to inform product road maps and infrastructure.” But concrete commitments remain absent.

The stakes extend beyond corporate competition. Staffers at the Department of Transportation have raised alarms about using AI to draft safety regulations, fearing that undetected errors could lead to injuries or deaths. Meanwhile, the European Union’s AI Act and proposed rules in India would compel AI companies to disclose when content was generated with AI.

The most forceful push toward content verification may come from pending regulation worldwide. But researchers warn that rushed or inconsistent implementation could backfire, creating systems people learn to distrust rather than rely upon.