I think we should start demanding that our entire government be run by AI. That would put them into a bipartisan panic, and AI would be banned within 6 months.
OTOH, while AI would probably be terrible at running the government, it would probably still do a better job than those clowns, so maybe we should seriously consider it.
Trump Administration’s Top Nuclear Scientists
Which right-wing podcasters are those?
Empty G is leaving Congress to lead our nation’s nuclear program.
Former Navy nuke here… though I’ve been out for a while now and don’t know if I’d still be considered an expert. Regardless, no, no it can’t.
AI guy here. No, it can’t.
I’ve worked with a few of your fellow sailors (in a civilian capacity). Navy nukes have lots of fun stories.
What could possibly go wrong?
rm -Rf /dev/cool*
“I was trying to work around an error condition”

Don’t threaten me with a good time.
I’m Chernobyl, nothing will go wrong.
Hot take, I know. I’m not talking about LLMs, obviously, but AI could absolutely be used to reduce risk in a nuclear plant. Having a constant double-check running on every decision could, at the very least, reduce the risk of a tired human pressing the wrong button and causing a SCRAM.
AI is great when it just watches and says “this is weird, maybe look at it.” You can never have too many eyes.
The problem is that some want to make it the warning, the solution, and the implementation, all without any human monitoring at all.
you can never have too many eyes
You can certainly have too many false positives, wasting everyone’s time and distracting them from real problems.
What are the real problems inside a nuclear facility that would not be identified because people were “wasting their time” chasing alerts raised by a computer?
That is an appropriate usage of “AI”, as it’s basically just pattern recognition. Something computers are really good at.
it’s basically just pattern recognition
Only of a very specific kind.
Something computers are really good at.
They’re good at recognizing the patterns they’re programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don’t do very well. At least Anthropic recognizes that as an important research topic; OpenAI seems more concerned with monetizing what it already has.
For something safety-critical, you can monitor critical parameters in the system’s state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you’re going to want something more deterministic than AI to be driving your decisions. There’s probably already enough fuzz due to the use of ensemble modeling.
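A minimal sketch of what I mean by a deterministic bounds check (the parameter names and limits here are completely made up, not taken from any real plant):

```python
# Illustrative only: a deterministic bounds monitor with a warning margin
# before the hard limit. Names and numbers are invented for the example.
from dataclasses import dataclass

@dataclass
class Limit:
    name: str
    low: float
    high: float
    margin: float  # warn when a reading gets this close to a hard limit

    def check(self, value: float) -> str:
        if value < self.low or value > self.high:
            return "ALARM"
        if value < self.low + self.margin or value > self.high - self.margin:
            return "WARN"
        return "OK"

LIMITS = [
    Limit("coolant_temp_c", low=20.0, high=330.0, margin=15.0),
    Limit("primary_pressure_mpa", low=10.0, high=16.0, margin=0.5),
]

def monitor(readings: dict[str, float]) -> list[tuple[str, str]]:
    """Return (parameter, status) for every monitored parameter."""
    return [(lim.name, lim.check(readings[lim.name])) for lim in LIMITS]

if __name__ == "__main__":
    print(monitor({"coolant_temp_c": 318.0, "primary_pressure_mpa": 15.7}))
    # -> [('coolant_temp_c', 'WARN'), ('primary_pressure_mpa', 'WARN')]
```

Nothing fancy, fully auditable, and every decision it makes can be traced back to a line of code and a number someone signed off on. That’s the point.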
What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualise, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you’ve modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.
Source: I’ve written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn’t touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.
Not according to most of the people on Lemmy. They would have a nuclear meltdown (literally and figuratively) before allowing a computer program labeled “AI” to identify risk.