

I like the comparison, but LLMs can’t go insane; they’re just word-pattern engines. It’s why I refuse to go along with the AI industry’s insistence on calling it a “hallucination” when one spits out the wrong words. It literally cannot have a false perception of reality, because it does not perceive anything in the first place.
We do understand exactly how LLMs work, though, and it in no way fits with any theory of consciousness. It’s just a word extruder with a really good pattern matcher.