• BarneyPiccolo@lemmy.today · 8 hours ago

    I think we should start demanding that our entire government be run by AI. That would put them into a bipartisan panic, and AI would be banned within 6 months.

    OTOH, while AI would probably be terrible at running the government, it would still likely do a better job than those clowns, so maybe we should seriously consider it.

  • darkmarx@lemmy.world · 2 days ago

    Former Navy nuke here… though I’ve been out for a while now and don’t know if I’d still be considered an expert. Regardless, no, no it can’t.

  • scintilla@crust.piefed.social · 2 days ago

    Hot take, I know. I’m not talking about LLMs, obviously, but AI could absolutely be implemented to reduce risk in a nuclear plant. Having a constant double-check running on every decision could reduce the risk of a tired human pressing the wrong button and triggering a SCRAM, at the very least.

    • velindora@lemmy.cafe · 2 days ago

      AI is great when it just watches and says, “this is weird, maybe look at it.” You can never have too many eyes.

      • BarneyPiccolo@lemmy.today · 8 hours ago

        The problem is that some want to make it the warning, the solution, and the implementation, all without any human monitoring at all.

      • phutatorius@lemmy.zip · 8 hours ago

        > you can never have too many eyes

        You can certainly have too many false positives, wasting everyone’s time and distracting them from real problems.

        • velindora@lemmy.cafe · 6 hours ago

          What are the real problems inside a nuclear facility that would go unidentified because people were “wasting their time” chasing alerts raised by a computer?

      • Lka1988@sh.itjust.works · 2 days ago

        That is an appropriate usage of “AI”, as it’s basically just pattern recognition. Something computers are really good at.

        • phutatorius@lemmy.zip · 8 hours ago

          > it’s basically just pattern recognition

          Only of a very specific kind.

          > Something computers are really good at.

          They’re good at recognizing the patterns they’re programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don’t do very well. At least Anthropic recognizes that as an important research topic. OpenAI seems more concerned with monetizing what it already has.
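
          To put numbers on the error-rate point, here’s a back-of-the-envelope illustration of the base-rate problem (every figure below is invented, not from any real detector):

          ```python
          # Hypothetical numbers only -- a real plant would measure these empirically.
          prevalence = 1e-4   # fraction of time windows containing a genuine anomaly
          sensitivity = 0.99  # P(alarm | real anomaly)
          fpr = 0.01          # P(alarm | normal operation)

          # Bayes' rule: P(real anomaly | alarm)
          p_alarm = sensitivity * prevalence + fpr * (1 - prevalence)
          precision = sensitivity * prevalence / p_alarm

          print(f"Fraction of alarms that are real: {precision:.1%}")  # ~1.0%
          ```

          Even a “99% accurate” detector produces almost nothing but false alarms when real events are rare. That’s the false-positive flood in concrete terms.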

          For something safety-critical, you can monitor critical parameters in the system’s state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you’re going to want something more deterministic than AI to be driving your decisions. There’s probably already enough fuzz due to the use of ensemble modeling.
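
          A minimal sketch of the kind of deterministic monitoring I mean (parameter names, limits, and readings are all invented for illustration):

          ```python
          # Deterministic envelope monitor -- no ML anywhere. Parameter names,
          # limits, and readings are invented for illustration only.

          SAFE_BOUNDS = {
              "coolant_temp_c": (20.0, 330.0),
              "primary_pressure_mpa": (10.0, 16.0),
          }

          HORIZON_S = 60.0  # how far ahead to extrapolate trends

          def check(readings, rates):
              """Alert on parameters outside their envelope, or trending out of it."""
              alerts = []
              for name, value in readings.items():
                  lo, hi = SAFE_BOUNDS[name]
                  if not lo <= value <= hi:
                      alerts.append(f"{name}={value} outside [{lo}, {hi}]")
                  elif not lo <= value + rates[name] * HORIZON_S <= hi:
                      alerts.append(f"{name} trending out of bounds within {HORIZON_S}s")
              return alerts

          print(check({"coolant_temp_c": 325.0, "primary_pressure_mpa": 15.2},
                      {"coolant_temp_c": 0.5,   "primary_pressure_mpa": 0.0}))
          # ['coolant_temp_c trending out of bounds within 60.0s']
          ```

          Every branch in that is auditable and testable, which is exactly the property you want when lives are on the line.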

          What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualise, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you’ve modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.
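
          The aggregation side is similarly unglamorous, deterministic code. A toy sketch with made-up redundant sensor channels:

          ```python
          # Toy sketch: cross-checking redundant sensor channels. Channel names,
          # values, and the tolerance are invented for illustration only.
          from statistics import median

          channels = {"TC-A": 301.2, "TC-B": 300.8, "TC-C": 317.5}
          TOLERANCE = 5.0  # max credible disagreement between redundant channels

          mid = median(channels.values())
          for name, value in channels.items():
              if abs(value - mid) > TOLERANCE:
                  print(f"{name} disagrees with its siblings: {value} vs median {mid}")
          # TC-C disagrees with its siblings: 317.5 vs median 301.2
          ```

          The disagreement gets surfaced to the operator as a possible sensor fault; nothing here acts on its own.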

          Source: I’ve written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn’t touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.

        • velindora@lemmy.cafe · 2 days ago

          Not according to most of the people on Lemmy. They would have a nuclear meltdown (literally and figuratively) before allowing a computer program labeled “AI” to identify risk.