Hot take, I know. I’m not talking about LLMs, obviously, but AI could absolutely be used to reduce risk in a nuclear plant. Having a constant double-check running on every decision could reduce the risk of a tired human pressing the wrong button and triggering a SCRAM, at the very least.
AI is great when it just watches and says “this is weird, maybe look at it.” You can never have too many eyes.
The problem is that some want to make it the warning, the solution, and the implementation, all without any human monitoring at all.
You can certainly have too many false positives, wasting everyone’s time and distracting them from real problems.
What are the real problems inside a nuclear facility that would go unidentified because people were “wasting their time” chasing alerts from a computer?
That is an appropriate use of “AI”, as it’s basically just pattern recognition, something computers are really good at.
Only of a very specific kind.
They’re good at recognizing the patterns they’re programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don’t do very well. At least Anthropic recognizes that as an important research topic. OpenAI seems more concerned with monetizing what it already has.
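To make the error-rate point concrete, here’s a toy base-rate calculation (all numbers invented): even a detector with a 1% false-alarm rate will mostly page operators about nothing if the fault it’s hunting for is rare.

```python
# Toy base-rate calculation: how often does an alert correspond to a real fault?
# All numbers are invented for illustration, not taken from any real plant.

p_fault = 1e-4          # prior: probability a monitored interval contains a real fault
sensitivity = 0.99      # P(alert | fault)
false_alarm = 0.01      # P(alert | no fault)

p_alert = sensitivity * p_fault + false_alarm * (1 - p_fault)
p_fault_given_alert = sensitivity * p_fault / p_alert

print(f"P(real fault | alert) = {p_fault_given_alert:.1%}")  # roughly 1%
```

That missing context is exactly the false-positive problem raised above, and a raw pattern detector doesn’t give it to you.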
For something safety-critical, you can monitor critical parameters in the system’s state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you’re going to want something more deterministic than AI to be driving your decisions. There’s probably already enough fuzz due to the use of ensemble modeling.
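As a rough sketch of that kind of deterministic monitoring, with made-up parameter names, limits, and margins (nothing here reflects a real reactor):

```python
# Minimal sketch of a deterministic bounds monitor.
# Parameter names, limits, and margins are illustrative only.

from dataclasses import dataclass

@dataclass
class Limit:
    low: float
    high: float
    margin: float  # warn early when a reading gets this close to a bound

SAFE_LIMITS = {
    "coolant_temp_c": Limit(low=250.0, high=330.0, margin=5.0),
    "primary_pressure_mpa": Limit(low=14.0, high=16.0, margin=0.2),
}

def check_bounds(readings: dict[str, float]) -> list[str]:
    """Return alerts for any parameter outside, or approaching, its safe band."""
    alerts = []
    for name, value in readings.items():
        limit = SAFE_LIMITS.get(name)
        if limit is None:
            continue
        if not (limit.low <= value <= limit.high):
            alerts.append(f"{name}={value} outside safe band [{limit.low}, {limit.high}]")
        elif value < limit.low + limit.margin or value > limit.high - limit.margin:
            alerts.append(f"{name}={value} approaching safe-band limit")
    return alerts

print(check_bounds({"coolant_temp_c": 327.0, "primary_pressure_mpa": 16.3}))
```

The point is that every branch here is inspectable and testable, which is what you want when the cost of a miss is measured in lives.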
What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualise, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you’ve modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.
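A toy version of that “aggregate, summarize, and flag for a human” role, using nothing fancier than a rolling mean and standard deviation (window size, threshold, and readings are made up):

```python
# Minimal sketch: flag readings that deviate sharply from a sensor's recent history.
# Pure statistics, no ML; window size, threshold, and data are illustrative.

from collections import deque
from statistics import mean, stdev

class RollingAnomalyFlag:
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Record a reading and return True if it looks anomalous versus recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

flag = RollingAnomalyFlag()
for reading in [300.1, 300.3, 299.9, 300.2] * 5 + [342.0]:
    if flag.update(reading):
        print(f"flag for operator review: {reading}")
```

Note what it doesn’t do: it never decides what the deviation means or what to do about it. That still goes through the humans and the system model.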
Source: I’ve written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn’t touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.
Not according to most of the people on Lemmy. They would have a nuclear meltdown (literally and figuratively) before allowing a computer program labeled “AI” to identify risk.