16 Comments
James of Seattle

Quick question: could an LLM be conscious during its training, i.e., while it is learning?

Erik Hoel

And it's a great question too. Depends on whether the training meets the conditions for continual lenient dependency laid out in Section 5 (Proposition 5.1). I don't actually know. But the disproof of their consciousness is based on their static deployed nature, so, yes, I think it might be *possible* (but by no means certain) that it doesn't apply during training. I talk about this briefly in the paper.
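
A minimal sketch of the distinction at stake here, using a toy PyTorch model (my illustration, not anything from the paper): during training every step rewrites the weights, while a deployed model answers every prompt through the same frozen parameters.

```python
# Toy contrast between training (weights keep changing) and static deployment
# (weights frozen). Illustrative only; an LLM is just a much bigger version.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                      # stand-in for an LLM
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def training_step(x, y):
    """During training, the mapping itself depends on the ongoing input history."""
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                                 # parameters change on every step

@torch.no_grad()
def deployed_inference(x):
    """Once deployed, the same fixed weights process every input."""
    return model(x)
```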

Jetse de Vries

AIs have no agency, so why would they want to learn?

Anyway, very interesting. Must read the full paper now.

With this and the Emergence paper, you've been busy!

erg art ink

Thank you.

Lon Guyland

Thank you for this.

To me it’s intuitive that “AI” can’t possibly be conscious because of the implausibility of consciousness being an arithmetic function (which is all anything that runs on a computer is). Of course that’s not proof in a formal sense, so I’m glad that you are making such progress on that front!

I have also long been skeptical of the “brain-generated-mind” school of thought. Again, I would not claim to be able to prove it, but I’m convinced that the brain generates the mind just like your phone generates Substack: it doesn’t.

You can’t examine the circuitry of your phone, nor its software, and from that predict the next post in your feed. You might be able to use that information to detect an electronic signature that coincides with the arrival of the post, but you can *predict* neither its timing nor its content. Similarly, examining the circuitry of the brain won’t reveal one’s next thought. Thinking it can has no scientific foundation.

James Lombardo

Proving that LLMs lack consciousness is the less interesting question.

What fascinates me more is how our animal brains so readily ascribe consciousness to them — and what that reveals about us.

And if we ever do create something genuinely conscious… without a limbic system, without mammalian empathy or emotional grounding… that might not be a triumph of science.

It might be a horror story.

Steve Clougher

"But which theory out of these hundreds is correct? Who knows! Honestly? Probably none of them."

And so it is, and perhaps some theories approach closer than others, and perhaps the best of them are useful in some way. Even if they are more or less trivial.

The point is, consciousness is substantially unknowable, and the more conscious consciousness is, the more it subsists in the realm of Being, and leaves the binary tangle of words and distinctions further back in the small end of the telescope.

Words can be conceited enough to think they can encompass Truth.

Being is bigger and better than that.

Being is aware that words are fully binary, and that the universe is only partly dualistic. At least, some of it is; most of it is engaged in really interesting stuff.

Oligopsony

It's possible that I'm confused, but it seems like you move from epistemology (what a falsifiable theory of consciousness could endorse) to ontology (whether some entities have consciousness). Perhaps the restriction on theories being falsifiable restricts whether we can *justifiably ascribe* consciousness to some entity. But it seems awfully convenient if the world works in such a way that nothing exists unless we can have a falsifiable theory about it.

Erik Hoel

In the paper itself, there's Definition 4.1 for Trivial Theories and then Definition 4.2 for Non-conscious Systems, which probably have bearing on this question (I'm always hesitant to give for-sure epistemological/ontological categories).

G. M. (Mark) Baker

The point of the whole AI project has never been to prove that AI is conscious, but to prove that humans aren't.

Mark Slight

Hmm. So if you have Alzheimer's or are under the influence such that your continual learning is worse than the in-context learning of LLMs, then you're not conscious?

Erik Hoel

It's a good question. In my opinion, we definitely use the term "in-context learning" for LLMs, but that doesn't make it real learning (no change in weights, etc.). You could presumably mimic the same thing with any sort of input/output substitution (e.g., you're just in a different part of the lookup table).

As for cases where learning *entirely* vanishes, more and more I think we underestimate what that means. It would mean that, e.g., *everything* is totally contextless at any moment. It's very possible the brain loses the ability to control organs before that point, and that loss of control is what kills people in Alzheimer's, etc. I think we've become very beholden to computer metaphors about memory (short-term memory is like a buffer!) when really all memory is just plasticity, and some forms of plasticity last longer than others.
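
To make the lookup-table point concrete, here is a toy sketch (my gloss on the comment, not anything from the paper): a completely frozen table that nonetheless appears to "learn in context," because a longer prompt simply indexes a different row.

```python
# A fixed input/output substitution that mimics "in-context learning":
# nothing in the system changes between calls, only which entry gets hit.
FROZEN_TABLE = {
    "Translate 'chat':":                     "I'd need an example first.",
    "'chien' -> 'dog'. Translate 'chat':":   "'chat' -> 'cat'",
}

def respond(full_prompt: str) -> str:
    # No weights, no updates: the apparent "learning" lives entirely in the prompt.
    return FROZEN_TABLE.get(full_prompt, "...")

print(respond("Translate 'chat':"))
print(respond("'chien' -> 'dog'. Translate 'chat':"))  # looks like it learned the pattern
```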

Solarifācijs

I think all your arguments apply to base LLMs, but not to the more complex AI systems that we have now, which also include chain-of-thought reasoning, memory, MCPs, etc.

In something like Claude Code, the one-pass inference of the base model is just a cog in the system. And in fact those new elements are precisely what allow it to learn continuously (in an imperfect way, but far more than a base model).
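
A rough sketch of that architecture (function and file names here are placeholders, not Claude Code internals): the frozen model is one cog, and the surrounding loop is what carries state from one run to the next by rewriting the prompt.

```python
# Agent-style wrapper: the base model's weights never change, but the system
# around it persists notes to disk and feeds them back in as context.
import json
import pathlib

MEMORY_FILE = pathlib.Path("agent_memory.json")   # hypothetical external memory

def call_model(prompt: str) -> str:
    """Placeholder for a single frozen forward pass of the base model."""
    return f"[model output given {len(prompt)} chars of context]"

def run_turn(task: str) -> str:
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    prompt = "Notes from earlier runs:\n" + "\n".join(memory) + "\nTask: " + task
    answer = call_model(prompt)                   # the cog: weights stay fixed
    memory.append(f"{task} -> {answer}")          # the *system* accumulates state
    MEMORY_FILE.write_text(json.dumps(memory))
    return answer
```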

Erik

Under panpsychism, everything has a degree of consciousness. Therefore, ChatGPT does as well.

Note that this theory addresses the hard problem directly:

https://philosophynow.org/issues/121/The_Case_For_Panpsychism

Erik Hoel

If it made the claim of LLM consciousness, it would need to somehow avoid this argument, and I don't see how it could. What do you do when your panpsychist theory gives radically different predictions for the same input/output?

Batica

Hrm, so to make an extended syllogism out of this:

1. lookup tables are not conscious

2. ribosomes are basically lookup tables

3. humans are three sextillion ribosomes in a trench coat

4. ergo humans are not conscious?

Too facile, OK. All life is basically a ton of ribosomes in a trench coat, but we don't hold all life to be conscious, so is consciousness orthogonal to lookup tables? Or is it that lookup tables are necessary but not sufficient for consciousness?