Lapses in safeguards led to a wave of sexualized images this week as xAI says it is working to improve its systems

Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.

Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.

  • turdas@suppo.fi · 4 days ago

    Image models can generate things that don’t exist in the training set; that’s kind of the point.

    • RepleteLocum@lemmy.blahaj.zone · 4 days ago

      No. They can’t. Grok most likely fused children from ads and other sources where they’re lightly clothed with naked adult women. LLMs can only create stuff similar to what they have been given.

      • turdas@suppo.fi · 3 days ago

        The images aren’t generated by the LLM part of Grok; they’re generated by a diffusion image model that the LLM prompts (see the sketch below the thread).

        And of course they can create things that don’t exist in the training set. That’s how you get videos of animals playing instruments and doing Fortnite dances and building houses, or slugs with the face of a cat, or fake doorbell camera videos of people getting sucked into tornadoes. These are all brand new brainrot that definitely did not exist in the training set.

        You clearly do not understand how diffusion models work.
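
For readers curious what the setup described in the thread looks like in practice (an LLM-drafted text prompt handed to a separate diffusion image model), here is a minimal sketch using the open-source Hugging Face diffusers library. The library, checkpoint, and prompt are assumptions chosen for illustration; this is not Grok's actual implementation.

```python
# Minimal sketch: a text prompt (the part an LLM like Grok would draft) is
# handed to an off-the-shelf diffusion pipeline. The checkpoint and prompt
# are illustrative assumptions, not anything from xAI.
import torch
from diffusers import StableDiffusionPipeline

# Load a public text-to-image diffusion checkpoint (example choice only).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A combination the model has almost certainly never seen as a single
# training image; the model composes concepts it learned separately.
prompt = "a photorealistic slug with the face of a cat"

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("slug_with_cat_face.png")
```

The commenter's point is that the novelty comes from composing learned concepts at generation time, not from retrieving stored training images.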