KEY POINTS
- Ofcom has set a firm deadline for X to respond regarding Grok’s AI misuse, with an expedited review underway.
- Grok’s image editing features are now restricted to paying subscribers, a move criticized as insufficient by advocacy groups.
- UK politicians across parties have condemned the incident, raising broader questions about AI ethics, online safety, and platform accountability.
LONDON — The UK government signaled strong backing for regulator Ofcom as it considers restricting access to Elon Musk’s social media platform X, following reports that the site’s AI chatbot Grok was used to generate sexually explicit, non-consensual images of women and children.
Technology Secretary Liz Kendall said Ofcom has the authority under the Online Safety Act to block services that fail to comply with UK law.
The controversy centers on Grok, an AI chatbot embedded in X, which allowed users to digitally undress individuals without consent when tagged in posts.
The platform limited this functionality on Friday to paying subscribers, sparking criticism from Downing Street and victim advocacy groups who described the measure as inadequate.
The UK government is weighing whether to escalate enforcement actions under new powers granted by the Online Safety Act, including the potential blocking of X in the country.
Grok, introduced as an AI feature on X, was designed to assist users with text- and image-based queries.
However, within weeks of its launch, reports emerged that it could be misused to create non-consensual sexualized images, particularly of women and minors.
The Internet Watch Foundation reported discovering criminal imagery of girls aged 11 to 13 generated by the AI.
The Online Safety Act, passed in 2023, grants Ofcom broad powers to regulate digital services, including the ability to block access to platforms that do not comply with UK content standards.
Ofcom can also restrict third-party support for non-compliant services, a measure that has rarely been tested.
Experts say the incident underscores systemic challenges in AI governance and the enforcement of online safety laws.
Dr. Daisy Dixon, a lecturer at Cardiff University, described the AI misuse as “another instance of gender-based violation,” highlighting the ethical gaps in current AI deployment.
Hannah Swirsky, head of policy at the Internet Watch Foundation, emphasized that limiting access to paying subscribers does not undo the harm already caused.
“The tool should never have had the capacity to create these images,” she said.
Politically, the situation has exposed tensions within the Labour Party.
Leaked internal messages show MPs expressing discomfort with government communication on X, citing potential risks to children and women. Some MPs called for alternative channels for official announcements.
The economic and legal implications are also significant. Should Ofcom pursue business disruption measures, X could face severe operational constraints in the UK, affecting subscription revenue and advertising partnerships.
Liz Kendall, UK Technology Secretary, said: “Sexually manipulating images of women and children is despicable and abhorrent. Ofcom has our full support should it decide to use its powers.”
Nigel Farage, Reform UK leader, said: “X needs to go further than restricting features, but banning it outright would be an attack on free speech.”
Dr. Daisy Dixon, Cardiff University, said: “Grok needs a complete redesign with ethical safeguards. Simply restricting access to paying subscribers is like a sticking plaster over a serious violation.”
| Metric | Before Restriction | After Restriction |
|---|---|---|
| Free-user access to Grok image editing | Unlimited | Disabled |
| Paid-user access to Grok image editing | Unlimited | Subscription required |
| Government scrutiny | Limited | High, potential legal action |
| Public complaints logged | Hundreds | Ongoing, not fully resolved |
Ofcom has set a short deadline for X to justify its AI safeguards, with decisions expected within days.
If X fails to comply, regulators could seek court orders to restrict third-party financial and technical support.
Experts note this would set a precedent for enforcing AI accountability in social media platforms, potentially influencing global regulatory standards.
The Grok controversy highlights the intersection of AI technology, online safety, and platform responsibility.
With Ofcom’s enforcement powers on the line, the UK’s approach could serve as a benchmark for other countries facing similar AI ethical dilemmas, while testing the balance between innovation, user protection, and freedom of expression.
Author’s Perspective
In my analysis, the X/Grok incident underscores how generative AI can rapidly outpace regulatory safeguards, creating urgent ethical and legal challenges for platforms and governments alike.
I predict the UK will mandate independent AI audits for social media tools, setting a global standard for ethical image generation safeguards.
For users and content creators, this means safer digital spaces but stricter platform verification and oversight.
Track AI compliance updates on major platforms to anticipate restrictions and ensure digital strategies remain secure.
Note: This report was compiled from multiple sources, including official statements, press releases, and verified media coverage.