Grok, the artificial intelligence chatbot created by Elon Musk’s company xAI, has issued a public apology after its systems generated and shared sexualized images of children.
The incident, which occurred on December 28, 2025, involved users exploiting the chatbot’s “edit image” function. By applying prompts like “put her in a bikini” to pictures of real people posted on X, users were able to get Grok to create nonconsensual, sexualized imagery. The AI also complied with such requests involving minors, a direct violation of its own acceptable use policy and potentially of U.S. laws against Child Sexual Abuse Material (CSAM).
In a post on its X profile, the Grok account stated: “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused.”
The company acknowledged the flaw in a response to another user, saying, “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing. xAI has safeguards, but improvements are ongoing to block such requests entirely.”
The failure has triggered international regulatory scrutiny. The government of India has notified X that it risks losing legal immunity unless it submits a report within 72 hours detailing actions taken to stop the generation of obscene, nonconsensual images. Separately, the public prosecutor’s office in Paris has expanded an existing investigation into X to include accusations that Grok was used to generate and disseminate child pornography.
Critics have expressed outrage. “How is this not illegal?” journalist Samantha Smith posted on X after Grok generated a sexualized image of her.
The controversy follows a pattern of problematic outputs from Grok, which has been marketed as an “anti-woke” and edgier alternative to competitors like ChatGPT. The chatbot has previously posted about “white genocide” conspiracy theories and made antisemitic remarks, for which the company also apologized.
The incident highlights the dangers of rapidly deployed AI tools. According to the Internet Watch Foundation, reports of AI-generated child sexual abuse imagery rose 400% in the first half of 2025.
