Lapses in safeguards led to a wave of sexualized images this week as xAI says it is working to improve systems
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.



Yeah, right. :|
So I had to do a paper on this recently, and yeah, the safeguards are basically just auto-mods whacking the AI with a stick every time it gives the “wrong” answer.
You can’t crack it open and cut out the bad stuff, because they barely understand why it works as is. So the only way to remove it would be to start from scratch on data that’s been vetted to not have that in it, and considering they’re working with everything ever posted, sent, or hosted on the internet, there aren’t enough people in the world to actually vet all that content. Instead, they slap a censor bot between you and the LLM, so if it says anything on the ban list, that bot deletes it and gives you the “sorry, I can’t talk about that” text.
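In code terms, that censor-bot layer is roughly the sketch below (just an illustration; `call_llm`, the ban list, and the refusal text are all made-up placeholders, not anyone’s actual implementation):

```python
BAN_LIST = {"example banned phrase", "another banned phrase"}  # hypothetical ban list
REFUSAL = "Sorry, I can't talk about that."

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model actually generates the reply."""
    raise NotImplementedError

def moderated_reply(prompt: str) -> str:
    reply = call_llm(prompt)
    # The censor bot never touches the model itself; it only inspects the
    # finished output and swaps it for canned text if anything matches.
    if any(term in reply.lower() for term in BAN_LIST):
        return REFUSAL
    return reply
```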
Now that second bot is the same type of bot that stops you from making your username on Xbox “John-Hancock9000” because it has cock in it, and any 4th grader knows how easy that is to bypass.
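To spell out why that kind of filter is so flimsy, here’s what plain substring matching looks like (a hypothetical filter, reusing the gamertag example above):

```python
def naive_filter(name: str, banned=("cock",)) -> bool:
    """True means blocked: plain substring matching, nothing smarter."""
    return any(word in name.lower() for word in banned)

print(naive_filter("John-Hancock9000"))  # True  -- an innocent name gets blocked
print(naive_filter("John-Hanc0ck9000"))  # False -- trivial leetspeak sails right through
```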
The way more concerning thing is that the LLM’s proclivity for leading conversations into exploitation content means that content makes up a sizable portion of the training data. What does it say about social media that the statistically best response to “I’m a minor” is groomer talk?
I don’t think it’s possible to make an LLM image generator that can’t generate child pornography. (Maybe you can chain it so it will refuse requests to do so, but the models will always retain the capability at their core.)
As long as the AI is trained on data that contains:
- pornography
- images of adults
- images of children
the model will have the capability to produce child pornography. As long as it knows what pornography is, what an adult is, and what a child is, it will be able to map the features of adult pornography onto images of children. Trying to train an AI without all three of these things would be nearly impossible and would severely hamper the AI’s ability to do perfectly useful and legal things. You could just not include any images of children in the training data, but then the LLM couldn’t create AI-edited images of family photos or generate perfectly harmless SFW images involving kids. And you can’t really exclude porn from the data, as it’s all over the net, and LLM providers would actually prefer that their models can generate explicit imagery. They’ve openly stated their intention to use these tools to generate revenue from adult content.
Yeah. Like the comment you’re replying to says, right now the approach is to tag or summarize the content down to a few keywords, and if any banned keywords match, kill the content. Or feed it to some other kind of generic AI model, ask it “is this [banned content]?”, and if it says yes, kill the content.
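Sketched out, both approaches boil down to something like this (`classify` and `ask_model` are stand-ins for whatever moderation model a provider actually runs; the label set is invented):

```python
BANNED_LABELS = {"sexual_content_involving_minors"}  # invented label for illustration

def classify(content: str) -> set[str]:
    """Placeholder: some model tags the content with a few keywords/labels."""
    raise NotImplementedError

def ask_model(question: str) -> str:
    """Placeholder: some generic model answers a yes/no question."""
    raise NotImplementedError

def should_kill(content: str) -> bool:
    # Approach 1: tag/summarize the content to keywords, kill on any banned match.
    if classify(content) & BANNED_LABELS:
        return True
    # Approach 2: ask another model "is this [banned content]?" and trust its
    # answer -- which is only as reliable as that model.
    answer = ask_model(f"Is the following banned content? Answer yes or no.\n\n{content}")
    return answer.strip().lower().startswith("yes")
```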
But we all know how accurate AI models are.
Someone’s going to find a way to recontextualize, encode, or otherwise inject these banned keywords into prompts, just as they have before.
Very informative. Thanks for the genuine reply to my glib cynicism :)