Ghibli Demands AI Halt

In a dramatic escalation of the ongoing battle between artificial intelligence developers and content creators, several major Japanese entertainment companies have demanded that OpenAI cease using their copyrighted material to train its AI models. Studio Ghibli, Bandai Namco, and Square Enix—giants in animation, gaming, and interactive entertainment—have joined forces through the Content Overseas Distribution Association (CODA) to formally request that OpenAI stop leveraging their intellectual property in the development of its latest video-generation AI, Sora 2.

The Core of the Controversy

The heart of the matter lies in allegations of copyright infringement stemming from OpenAI’s use of these companies’ content as training data for Sora 2. According to CODA’s letter, “the act of replication during the machine learning process may constitute copyright infringement,” particularly when the resulting AI generates outputs that closely resemble protected characters and artistic styles.

Sora 2’s launch on September 30, 2025, sparked immediate concern when it began producing videos that bore a striking resemblance to Japanese intellectual properties. This prompted Japan’s government to formally request that OpenAI halt the replication of Japanese artwork—a diplomatic move that underscored the seriousness of the situation.

This isn’t the first time OpenAI’s technologies have drawn attention for their apparent affinity for Japanese aesthetics. During the March 2025 launch of GPT-4o, the model frequently generated “Ghibli-style” images, suggesting a pattern in how the company’s models interact with certain cultural content. In fact, even OpenAI CEO Sam Altman’s own profile picture on X (formerly Twitter) currently features an illustration in a style reminiscent of Studio Ghibli’s distinctive animation.

CODA Takes a Stand

CODA, established in 2002 as an anti-piracy organization representing Japanese IP holders, sent a formal letter to OpenAI last week outlining its concerns. The letter not only demands cessation of unauthorized training but also challenges the fundamental approach OpenAI uses to handle intellectual property.

In October 2025, Altman announced that OpenAI would be implementing changes to Sora’s opt-out policy for IP holders. However, CODA argues that even an opt-out approach violates Japanese copyright law, which stipulates that “prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections.”

This distinction between opt-in and opt-out approaches to consent highlights a significant legal divergence between jurisdictions. While some regions have implemented text and data mining exceptions that favor opt-out models, Japan’s copyright framework appears to require explicit permission before content can be used—even for machine learning purposes.

Japanese Legal Framework

Japan’s approach to AI training data is governed by specific provisions in its Copyright Act, most notably Article 30-4, which addresses the use of copyrighted works for data analysis (Bunka, 2024). However, legal experts note that this provision may not fully cover cases where AI systems reproduce specific works in their outputs, rather than simply analyzing data patterns.

The tension between protecting creators’ rights and fostering technological innovation reflects a broader challenge facing legislators worldwide as AI systems become increasingly sophisticated in their ability to emulate human creativity.

A Global Pattern

This dispute is far from isolated. Similar conflicts have emerged across the globe as AI developers grapple with questions surrounding training data:

  • In the United Kingdom, Stability AI faced litigation from Getty Images over the alleged use of copyrighted photographs for training its Stable Diffusion models (The Guardian, 2025)
  • Numerous artists and authors have filed class-action lawsuits in the United States against AI companies for allegedly using their work without permission (Monolith Law, 2025)
  • European Union discussions about AI regulations have increasingly focused on mandatory transparency regarding training datasets

These parallel developments suggest that the Japan-OpenAI dispute represents a broader industry reckoning—one that could fundamentally reshape how AI companies obtain and utilize training data.

Beyond the Entertainment Industry

While this particular case focuses on entertainment content, similar questions plague other creative sectors:

  1. News organizations concerned about their reporting being used to train AI systems
  2. Photographers and visual artists whose work populates the internet
  3. Musicians and record labels whose compositions might inform AI music generators
  4. Academic publishers questioning the use of scholarly work in AI development

The challenge for policymakers is creating frameworks that protect legitimate creative interests without stifling technological innovation—an increasingly delicate balance in an AI-driven world.

Implications and Future Directions

This confrontation signals growing industry resistance to the current norms around AI training data acquisition. Several potential outcomes emerge:

  • Regulatory Changes: Japan might revise its AI training guidelines to provide clearer direction on consent requirements
  • Industry Standards: A precedent could develop for opt-in consent models across the AI industry
  • Licensing Solutions: New marketplaces might emerge for licensing content specifically for AI training
  • Technical Adaptations: AI companies may need to implement more sophisticated content attribution systems

Interestingly, some AI researchers acknowledge that stricter content controls could benefit the industry by building trust with creators who might otherwise be hostile to AI adoption (TechPolicy.Press, 2024).

Potential Compromise Solutions

Various compromise positions have been suggested by legal experts and industry analysts:

  1. Selective Opt-In Systems: Creators could register their works in databases specifically designed for AI training permissions
  2. Revenue Sharing Models: Compensation mechanisms for content used in successful AI applications
  3. Attribution Requirements: Mandatory crediting when AI models heavily draw from identifiable sources
  4. Differentiated Licensing: Varying terms for training versus output generation phases

The Path Forward

As OpenAI and CODA navigate this dispute, the outcome will likely influence not just how AI companies operate in Japan but potentially worldwide. The case raises fundamental questions about ownership in the digital age and whether traditional copyright frameworks can adequately address machine learning’s unique challenges.

Regardless of the immediate resolution, this incident has crystallized several important truths:

  • Creators are increasingly vigilant about how their work is used in AI development
  • Geographic differences in copyright law create complex compliance challenges for global AI companies
  • The AI industry’s rapid expansion has outpaced legal frameworks governing training data usage
  • Successful coexistence between creators and AI developers requires new collaborative models

The ultimate resolution of this dispute may come through negotiation rather than litigation, as both sides recognize their mutual dependence—the creative industry needs protection for its intellectual assets, while AI developers require legitimate access to diverse training data to continue advancing their technologies.

Whatever emerges from these discussions, one thing is certain: the conversation around AI ethics and content rights is entering a new chapter, with Japanese entertainment companies leading the charge for stronger protections in an increasingly complex digital landscape.
