

LLMs do not reason in the human sense of maintaining internal truth states or causal chains, sure. They predict continuations of text, not chains of thought held in some inner workspace. But that does not make the process ‘fake’. Through scale and training, they learn statistical patterns that encode the structure of reasoning itself, and when prompted to show their work they often produce chains that reflect genuine intermediate computation rather than simple imitation.
Saying that some errors appear isolated is fair, but the conclusion drawn from it is not. Human reasoning also produces slips that fail to propagate, because we rebuild coherence as we go. LLMs behave similarly at the linguistic level. They have no persistent beliefs to corrupt, so an error can vanish at the next token rather than spread. The absence of error propagation does not prove the absence of reasoning. It shows that reasoning in these systems is reconstructed on the fly rather than carried as a durable mental state.
Calling it marketing misses what matters. LLMs generate text that functions as a working simulation of reasoning, and that simulation produces valid inferences across a broad range of problems. It is not human thought, but it is not empty performance either. It is a different substrate for reasoning (emergent, statistical, and language-based) that can still yield coherent, goal-directed outcomes.


I didn’t call it human-like reasoning? Just that reasoning isn’t limited to the human-like kind.
I’ve already covered your other points in this comment.