As we reported, Elon Musk’s Grok AI is being used to produce explicit images of women without their consent. We further reported on how this grossly invasive practice was used to create child sexual abuse material (CSAM). In response, Ofcom has offered a flimsy statement which suggests it is going to follow the lead of xAI (Musk’s AI company):

Our statement on Grok ⬇ pic.twitter.com/UsXh96zjQY

— Ofcom (@Ofcom) January 5, 2026

Child sexual abuse material: not serious enough for Ofcom?

Ashley St. Clair is the mother of one of Musk’s children. In the post below, she pointed out the double standard when it comes to other controversial issues surrounding the use of Grok. In the past, the company reduced access to prevent ongoing harmful content; CSAM doesn’t appear to warrant the same risk aversion:

When Grok went full MechaHitler, the chatbot was paused to stop the content.

When Grok is producing explicit images of children and women, xAI has decided to keep the content up + overwhelm law enforcement with cases they will never solve with foreign bots and take resources…

— Ashley St. Clair (@stclairashley) January 5, 2026

Given that Ofcom is supposed to protect UK citizens from harmful content, it’s gross that the regulator seems to be considering the interests of the AI’s billionaire owner first.

Ofcom appears to be offloading its responsibility to regulate by asking Musk whether an investigation is warranted, basically letting the billionaire mark his own homework. When his very business model is built on maximising user engagement, it’s not hard to see the disastrous conflict of interest at play here. Especially when the billionaire doesn’t appear to share the same concerns:

Not sure your chatbot generating child porn is particularly funny! pic.twitter.com/x2N82PGEBY

— Nikolaj🇺🇦🇵🇸 (@nikicaga) January 2, 2026

St. Clair also called out those placing sole responsibility on the people prompting the AI, rather than on the tool delivering the requested explicit content (or indeed its owners):

Just saw a photo that Grok produced of a child no older than four years old in which it took off her dress, put her in a bikini + added what is intended to be semen. ChatGPT does not do this. Gemini does not do this.

Another girl who appears to be just 11 or 12 with a brain…

— Ashley St. Clair (@stclairashley) January 5, 2026

Alarmingly, men purporting to be British have bragged about having no issue with the provision of CSAM:

this reply, indicative of many crying ‘free speech’, is so goddamn sinister pic.twitter.com/PchKhrhS0x

— Dr Daisy Dixon (@daisyldixon) January 5, 2026

Regulation in the modern world

This issue highlights the distance left to cover in ensuring that domestic law keeps pace with modern technological developments. Because Ofcom is clearly not up to the job.

If a parent were to willingly hand their child over to a known paedophile, that parent would be legally responsible for the preventable harm inflicted. Tech billionaires should face the same burden of responsibility for any child exposed and made vulnerable on their platforms.

Featured image via James Duncan Davidson

By Maddison Wheeldon
