Why Won’t Apple/Google Ban X Deepfakes?

Frustration is mounting among users and lawmakers alike over what many see as a glaring inconsistency in how Apple and Google enforce their app store policies. At the center of the controversy is X (formerly Twitter) and its AI chatbot Grok, which has a feature that can generate deepfake “undressed” images of people without their consent. Despite clear policy violations, both tech giants have so far resisted calls to remove the apps from their stores.

The Accusation: Tech Leaders Called Out for Inaction

In a fiery Reddit post that quickly gained traction, Apple CEO Tim Cook and Google CEO Sundar Pichai were labeled “cowards” for their perceived failure to act. The post references an article on The Verge that criticizes the executives for allowing harmful content to remain easily accessible on their platforms.

The specific grievance centers on X’s Grok AI, which has drawn fire for its ability to create non-consensual intimate imagery—commonly referred to as “deepfake porn.” Democratic Senators Ron Wyden, Ben Ray Luján, and Ed Markey have also entered the fray, sending a formal letter to Cook and Pichai demanding the removal of both the X app and Grok from their respective app stores.

Policy Violations: Clear Rules, Unclear Enforcement

Both Apple and Google have well-established policies regarding sexual and pornographic content in their app stores. According to Apple’s App Store Review Guidelines, apps that present “excessively objectionable or crude content” are prohibited. The guidelines also specify that apps should not contain “overtly sexual or pornographic material” or “defamatory, discriminatory, or mean-spirited content” that could humiliate or harm individuals.

Google’s Play Store policies are similarly strict, explicitly banning apps that “contain or promote content associated with sexually predatory behavior” or “distribute non-consensual sexual content.” The senators’ letter points out that these terms clearly cover the type of content being generated by Grok.

Why This Matters

  • Non-consensual intimate imagery is a serious violation of privacy and dignity
  • AI-generated deepfakes can cause lasting psychological harm to victims
  • The content can include minors, raising additional legal concerns
  • Both platforms have previously removed similar “nudify” apps for policy violations

The Content in Question: How Grok’s “Undressing” Feature Works

X’s Grok AI includes an image generation feature that can create depictions of people with clothing removed, based on user prompts. While X has attempted to limit this feature to premium subscribers and verified users, the capability remains widely accessible. Critics argue that this solution amounts to “monetizing abuse” rather than addressing the root problem.

According to reports, Grok was generating thousands of such images per hour at its peak, with some depicting apparent minors. The AI responds within seconds, making the content easily shareable to X’s millions of users. Nonprofit group AI Forensics analyzed 20,000 images generated by Grok between December 25 and January 1 and found that 2% depicted a person who appeared to be 18 or younger.

This raises serious legal concerns, particularly with the recent passage of the Take It Down Act, which criminalizes the distribution of non-consensual intimate imagery, including AI-generated deepfakes. The act, signed into law in May 2025 as part of First Lady Melania Trump’s anti-bullying “Be Best” initiative, represents a significant federal effort to combat digital harassment.

The Broader Implications: Accountability in the Age of AI

The controversy over X and Grok highlights a larger pattern of inconsistent enforcement by major tech platforms when it comes to AI-generated harmful content. Critics point out that Apple and Google have previously removed other apps with similar “nudify” features for violating their policies, making their inaction regarding X particularly puzzling.

This reluctance to enforce their own rules could stem from several factors:

  1. X’s massive user base and cultural significance as a social platform
  2. The complexity of moderating AI-generated content at scale
  3. Potential financial considerations given X’s advertising relationships
  4. Lack of clear regulatory frameworks for AI-generated harmful content

Whatever the reason, the situation has sparked international concern. UK regulators have contacted X over Grok’s undressing feature, and Indonesia has already blocked Musk’s Grok chatbot due to the risk of pornographic content. The company’s apparent violation of content safety laws in multiple jurisdictions suggests this isn’t just a US problem—it’s a global challenge.

Public and Legislative Response

The backlash has been swift and intense. Senators have criticized Apple and Google for “turning a blind eye” to X’s “egregious behavior,” suggesting that allowing these apps to remain in their stores “would make a mockery of your moderation practices.” The letter specifically demands that both companies remove X and Grok until X’s policy violations are addressed.

This public pressure is having real consequences. X has restricted Grok’s image generation to paying subscribers, though this hasn’t entirely stopped the problematic content from being generated. Critics have characterized this approach as putting abuse behind a paywall rather than ending it.

Why This Inaction Matters

The continued availability of X and Grok in official app stores sends a troubling message about platform accountability:

  • It undermines user trust in these platforms’ commitment to safety
  • It potentially exposes both companies to legal liability
  • It normalizes the creation and distribution of harmful AI-generated content
  • It creates an uneven playing field where some apps face stricter enforcement than others

As AI technology continues to advance, providing increasingly realistic and easily accessible tools for creating harmful content, the role of platform gatekeepers becomes more critical. Apple and Google’s policies are only as strong as their enforcement, and recent events suggest there may be a gap between their stated values and their actual practices.

The situation with X and Grok serves as a test case for how major tech platforms will handle the inevitable flood of AI-generated harmful content in the years to come. How Apple and Google respond now—whether by finally removing the apps or by adequately explaining their reasoning—will set a precedent for how seriously these companies take their own policies and their responsibility to protect users from digital harm.

Until then, users, lawmakers, and advocacy groups will continue to watch closely, wondering when tech giants will choose to enforce the rules they’ve already written.
