Discussion about this post

Erik Hoel:

A new preprint version of "A Disproof of LLM Consciousness" is now up on arXiv, fleshing out areas where people wanted more details. Thanks for all the feedback, everyone!

https://web3.arxiv.org/abs/2512.12802

It includes:

-> More philosophical optionality: e.g., if you instead accept trivial theories of consciousness as true, does this rescue LLM consciousness? (Not really, but it's worth exploring the option.)

-> Reasons why the Kleiner-Hoel dilemma is particularly forceful for LLMs (e.g., non-conscious substitutions for LLMs are constructible via known transformations).

-> A clearer definition of a Static System (which LLMs satisfy) and how in-context "learning" still qualifies as static because it merely uses more of the input "space" (see the toy sketch after this list).

-> How "mortal learning" (no static read/write operations) is similar to Hinton's "mortal computation" and makes substituting for biological plastic systems extremely difficult.

-> Highlighting predictive processing theories as a class of theories of consciousness that could, depending on the details, satisfy the requirement of continual lenient dependency, connecting the argument to some existing popular theories of consciousness.

-> A healthy sprinkle of new citations.
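
To make the Static System point above concrete, here is a minimal toy sketch (not from the paper; make_llm, CONTEXT_WINDOW, and frozen_weights are all hypothetical names): a trained LLM is a fixed function of its input, and few-shot "in-context learning" never changes that function; it only occupies more of the finite input space.

```python
from typing import Callable

CONTEXT_WINDOW = 8  # hypothetical tiny context limit, in tokens

def make_llm(frozen_weights: dict) -> Callable[[tuple], str]:
    """Return a pure function: same input tokens -> same output, always."""
    def llm(tokens: tuple) -> str:
        assert len(tokens) <= CONTEXT_WINDOW, "the input space is finite"
        # Stand-in for a forward pass; frozen_weights are never written to.
        return f"output for {len(tokens)} tokens under fixed weights"
    return llm

llm = make_llm(frozen_weights={"w": 0.5})

prompt = ("Q:", "2+2?")
few_shot = ("Q:", "1+1?", "A:", "2") + prompt  # "in-context learning"

# Both calls go through the same static mapping; the second merely
# uses more of the context window. Nothing updates the weights.
print(llm(prompt))
print(llm(few_shot))
```

The contrast with the "mortal learning" bullet above is exactly that nothing here ever writes back to frozen_weights.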

Next up: submission!

James of Seattle:

Quick question: could an LLM be conscious during its training, i.e., while it is learning?

