ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI

When ByteDance launched Seedance 2.0, the company expected excitement about its AI video generation capabilities. What it got instead was a legal firestorm from Hollywood’s most powerful studios—and a stark reminder that the boundaries of AI creativity are still being written in real-time.

“Disney characters are not free public domain clip art.” — Studio Representative

The Launch That Sparked a Backlash

ByteDance says it is rushing to add safeguards that block Seedance 2.0 from generating iconic characters and deepfaking celebrities, following substantial Hollywood backlash over the launch of the latest version of its AI video tool.

The changes come after Disney and Paramount Skydance sent cease-and-desist letters urging the Chinese company to promptly end what they called vast and blatant infringement. The studios said the infringement was widescale and immediate, with Seedance 2.0 users across social media sharing AI videos featuring copyrighted characters like Spider-Man, Darth Vader, and SpongeBob SquarePants.

In its letter, Disney fumed that Seedance was “hijacking” its characters, accusing ByteDance of treating Disney characters like they were “free public domain clip art,” according to reports from Axios.

How Seedance 2.0 Works

AI video generation has become one of the most contentious frontiers in artificial intelligence. Seedance 2.0, ByteDance’s latest entry into this crowded field, promises high-quality video generation from text prompts and images. The tool represents a significant technical achievement—one that puts it in direct competition with OpenAI’s Sora, Google’s Veo, and other major players.

The copyright problem emerged almost immediately. Unlike some competitors that have invested heavily in content filtering and rights management, Seedance 2.0 initially launched with what studios describe as inadequate safeguards. Users quickly discovered they could generate videos featuring recognizable characters and celebrity likenesses.

Social media amplification turned isolated incidents into a widespread phenomenon. Within days of launch, platforms like X, TikTok, and Reddit were flooded with AI-generated videos featuring iconic characters in scenarios their creators never imagined.

“The speed at which these tools can generate infringing content at scale represents an existential threat to intellectual property rights.” — Entertainment Industry Attorney

Hollywood’s Response

The legal response from major studios was swift and coordinated. Disney and Paramount Skydance’s cease-and-desist letters represent just the visible tip of what industry insiders describe as a broader mobilization against AI tools that enable copyright infringement.

The dispute highlights a fundamental tension in the AI industry. Training data for large video models inevitably includes copyrighted material—and the outputs often reflect that training in ways that courts have yet to fully address. While AI companies argue that their use of training data constitutes fair use, content owners see unauthorized outputs as clear infringement.

For ByteDance, the timing is particularly sensitive. The company already faces scrutiny over its data practices and relationship with the Chinese government. Adding copyright battles with American entertainment giants to that mix creates a complex geopolitical and legal landscape.

What ByteDance Is Doing About It

ByteDance has committed to implementing additional safeguards to prevent Seedance 2.0 from generating iconic characters and celebrity deepfakes. The company says it is “rushing” to add these protections—but the damage to its launch momentum may already be done.

The incident raises broader questions about how AI companies should approach content moderation. Proactive filtering requires significant technical investment and can limit creative possibilities. Reactive filtering, as ByteDance is now implementing, risks legal exposure and reputational damage.
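As a rough illustration, proactive filtering at its simplest means screening prompts against a blocklist before any generation happens. The sketch below is hypothetical and not ByteDance's actual approach; real systems rely on far more sophisticated classifiers and likeness-detection models, and the term list and function name here are invented for the example.

```python
# Hypothetical sketch of proactive prompt filtering: a hand-maintained
# blocklist of protected character names, checked before generation.
# Production systems use trained classifiers, not simple substring matches.
BLOCKED_TERMS = {"spider-man", "darth vader", "spongebob"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions any blocked character name."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

Even this toy version shows the trade-off the article describes: every term added to the list closes an infringement path but also narrows what users can legitimately create.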

Industry observers note that this pattern—launch first, filter later—has become common in the AI space. Companies race to demonstrate capabilities, sometimes at the expense of thorough safety and rights management review. The ByteDance case may accelerate pressure for more rigorous pre-launch compliance.


This article was reported by the ArtificialDaily editorial team. For more information, visit Ars Technica.
