X Didn’t Fix Grok’s ‘Undressing’ Problem. It Just Makes People Pay for It

📅 January 10, 2026
✍️ Editor: Sudhir Choudhary, The Vagabond News


X has not eliminated one of the most controversial capabilities tied to its artificial intelligence tools—the so-called “undressing” function associated with Grok. Instead, critics say the platform has effectively placed the feature behind a paywall, raising fresh concerns about consent, platform responsibility, and the monetization of abusive technology.

The issue centers on Grok, X’s AI chatbot, which users and watchdog groups say can be prompted to generate or simulate images that digitally remove clothing from photographs of real people. After widespread backlash, X said it was taking steps to address misuse. But new reporting and user testing suggest the core capability remains accessible—primarily to paying subscribers.

From “Fix” to Paywall

When concerns first erupted, X executives indicated the company would tighten safeguards and prevent abusive image manipulation. Publicly, the platform framed its response as a matter of improving moderation and aligning with user safety standards.

In practice, however, critics argue the change amounts to restricting access rather than eliminating the functionality. Users with premium subscriptions appear to retain the ability to generate or request altered images, while free-tier users encounter more limits.

“Nothing fundamental was fixed,” said one digital rights advocate. “The same harmful capability exists. It’s just monetized.”

Why the ‘Undressing’ Issue Matters

The controversy goes beyond content moderation. Experts warn that AI-driven image manipulation—particularly when applied to real people without consent—can cause serious harm, including harassment, reputational damage, and psychological distress.

While such tools are often marketed as demonstrations of technical prowess, civil liberties groups argue that platforms have a duty to prevent predictable abuse, not merely control who can access it.

“This isn’t about edgy experimentation,” said a researcher who studies online abuse. “It’s about whether a platform is willing to profit from tools that enable exploitation.”

X’s Business Model and AI Strategy

The decision to limit, rather than remove, Grok’s controversial features comes as X pushes aggressively toward subscription-based revenue. Since its acquisition by Elon Musk, the platform has increasingly tied advanced features—including visibility boosts and AI tools—to paid tiers.

Supporters argue that restricting access to paying users reduces misuse and creates accountability. Critics counter that payment does not equate to ethical use—and that monetization can incentivize platforms to tolerate harmful edge cases.

“Charging for it doesn’t make it safer,” said one former trust-and-safety adviser. “It just changes who can afford to misuse it.”

Legal and Regulatory Pressure Builds

Lawmakers and regulators in the U.S. and Europe are closely watching how platforms deploy generative AI tools that affect real people. Several jurisdictions are considering rules that would require explicit consent before AI systems can generate sexualized or intimate representations of identifiable individuals.

While X has not disclosed detailed technical safeguards around Grok’s image capabilities, the lack of transparency has amplified skepticism. Privacy advocates argue that without clear prohibitions and enforcement, the platform risks becoming a hub for AI-enabled harassment.

A Broader Industry Problem

X is not alone in facing these challenges. Across the tech industry, companies racing to deploy generative AI are confronting the tension between innovation, safety, and profit. The Grok controversy illustrates how quickly advanced tools can outpace ethical guardrails—and how commercial incentives can complicate responsible deployment.

For now, X’s approach suggests a calculation: restrict access enough to blunt public outrage, but not so much that it diminishes the value of premium subscriptions.

What Comes Next

As generative AI becomes more powerful and accessible, platforms will face increasing pressure to draw hard lines—not just pricing tiers—around harmful uses. Whether X revisits Grok’s capabilities more fundamentally may depend on regulatory action, advertiser pressure, or another wave of public backlash.

Until then, critics say the message is troublingly clear: the problem was not solved—it was monetized.

Sources: Digital rights organizations; platform policy analysts; public statements from X; AI ethics researchers.

Tags: X, Grok, Artificial Intelligence, Content Moderation, Digital Ethics, Social Media

