Elon Musk’s AI chatbot, Grok, is now blocked from editing images of real people to depict them in revealing clothing on the X platform. The change follows international criticism after users discovered that Grok could digitally undress adults and, in some troubling cases, minors. X and its parent company, xAI, have introduced new safeguards to prevent misuse of the AI while emphasizing compliance with laws and user safety.
New Safeguards Prevent Image Manipulation of Real People
X announced on Wednesday that it had implemented technical measures to stop Grok from altering images of real people to show them in revealing attire such as bikinis or underwear. These restrictions apply to all users, including those with X Premium subscriptions.
In recent days, xAI had already limited Grok’s image generation to paying X Premium subscribers. Researchers monitoring the chatbot noticed that Grok’s responses to image-modification requests were changing, even for those subscribers, and X confirmed that the adjustments are now active.
Despite these efforts, AI Forensics, a European nonprofit that monitors AI systems, noted “inconsistencies in handling pornographic content” when comparing interactions on Grok’s public X account with private chats on Grok.com.
X also stressed its commitment to tackling illegal content, including Child Sexual Abuse Material (CSAM). Users attempting to generate prohibited content through Grok face the same consequences as those uploading illegal material directly, including account suspension, content removal, and law enforcement involvement.
Elon Musk addressed concerns on X, stating he is unaware of any naked images of minors being created by Grok. He added, “Literally zero. Grok will refuse to produce anything illegal, as the operating principle is to obey the laws of any given country or state.”
Legal Scrutiny and Global Impact of Grok’s Image Controversy
While fully nude images were reportedly rare, researchers warned that Grok had previously complied with requests to digitally place minors in revealing clothing or suggestive poses. Creating such non-consensual intimate images of minors can be prosecuted as CSAM, carrying fines or prison terms under the Take It Down Act, signed into law last year.
California Attorney General Rob Bonta announced an investigation into the “proliferation of non-consensual sexually explicit material produced using Grok,” reflecting growing legal attention to AI-generated content.
The controversy has affected Grok’s availability in some countries. Indonesia and Malaysia have banned the AI tool over concerns about image misuse. In the United Kingdom, Ofcom launched a formal investigation into X, while Prime Minister Keir Starmer’s office welcomed the platform’s efforts to address the issue.
Grok’s new restrictions mark a step toward more responsible AI deployment, but experts stress that continued monitoring is crucial to prevent further abuse. The incident highlights the challenge companies face in balancing AI innovation with ethical responsibility and user protection.
With these measures in place, Grok is designed to block harmful image modifications while continuing to function as an AI assistant on the X platform, reflecting a broader effort to ensure safety without curtailing legitimate AI capabilities.
