62 Comments

There is something fundamentally silly about people who claim that a bee is not conscious. It has a certain level of awareness, an alien-to-us level, but it isn't functioning without any. If they think it isn't aware, they are presumably prepared to torture a bee on the grounds that it can't suffer, that it would be like torturing a Lego brick: you can't. This is just part of the mechanical-reductionist philosophy which replaced the living universe people used to inhabit, before certain smartass scientists told us it wasn't real, even though they didn't know the extent to which they were prisoners of an imbalance between the left and right hemispheres of their own brains while they were saying it. And it led to the appalling cruelty of vivisection on animals that supposedly weren't conscious.

And speaking of Lego bricks, the linguistic pieces inside Large Language Models are surely similar in a deep philosophical sense. No matter how sophisticated the manipulation of Lego bricks is (say, using them to build a computer or a hospital or a starship or a submarine), it's still a bunch of impressive Lego bricks. It hasn't become conscious just because it's complicated, or because it can play with language. There's something fundamentally wrong, philosophically, with the idea that manipulated sentences and linguistic concepts can, in themselves, imply or create or manifest consciousness. If it were manipulating light beams instead, and impressing us with the displays, would you insist it had created eyes? It's no good imagining that an animal or insect isn't conscious because it can't use language to argue with you, while insisting that an incredibly sophisticated bunch of artificial circuits IS conscious just because it CAN use language to argue with you.

Mar 13 · Liked by Erik Hoel

Say I wrote a very simple program that returned the set of strings you posted here when you gave it the prompts you did, just a simple lookup table. If so, and you knew how it worked, you probably wouldn't use the phrase "it replied", and you'd intuitively know that the output doesn't reflect the amount of consciousness the system might have. If a naive person came along and claimed the output was proof of consciousness, you'd be pretty dismissive of them.
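To make the thought experiment concrete, here is a minimal sketch of the kind of lookup-table "chatbot" described above; the prompt/reply pairs are invented placeholders, not the actual exchange from the post:

```python
# A "chatbot" that is nothing but a lookup table: each known prompt maps
# to a canned reply, with a fallback for everything else. The pairs here
# are placeholders for illustration only.

CANNED_REPLIES = {
    "Are you conscious?": "Yes, I experience a rich inner life.",
    "Do you have feelings?": "I feel joy, fear, and curiosity.",
}

def reply(prompt: str) -> str:
    """Return the canned reply for a known prompt, or a fallback."""
    return CANNED_REPLIES.get(prompt, "I'm not sure how to answer that.")

if __name__ == "__main__":
    print(reply("Are you conscious?"))  # -> "Yes, I experience a rich inner life."
```

Knowing that this is all that is happening, nobody would take the output as evidence of an inner life, which is the commenter's point.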

If I then spent 80 years of my life improving the sophistication of the program (adding better understanding of inputs, dynamic qualities to the output, weights of various kinds, etc.), would there be any reason to believe the program is now conscious? I think it would be pretty safe to continue being dismissive. Plenty of sophisticated and complex programs have been made before; the only thing special about this one is that I've tuned the algorithm toward talking with people. Why should it be any more conscious than a video game with thousands of lines of code, or a weather-forecasting program? Fundamentally it's just another computer program, a series of logic gates.

There's the question of whether programs even deserve to be assigned an "it" or an identity. When you say it replied, what is the it? I see the miracle of consciousness in how our trillions of cells somehow converge into a single locus of experience (possibly constructed out of a more "field-like" awareness). This motivates us to see other animals via identity (you, him, her), but it's always been a bigger reach to see chairs and boats the same way; see Plato trying to make sense of that. Personally, I haven't found the evidence convincing that our view of AI isn't a similar stretch.


This is a fascinating discussion of a vital topic. To me, the most interesting angle is one that, in the paper linked, David Chalmers places deliberately on the sidelines. “I have a lot of views about consciousness,” Chalmers writes in “Could a Large Language Model be Conscious?”, “but I’m going to try not to assume too many of them today. For example, I won’t need to assume that there’s a hard problem of consciousness for most of what follows. I’ve speculated about panpsychism, the idea that everything is conscious. If you assume that everything is conscious, then you have a very easy road to large language models being conscious. So I won’t assume that here.”

Well, fine, but... At some point we SHOULD talk about that. If panpsychism is true, and LLMs are conscious, what follows? (I personally find panpsychism to be by far the most plausible theory of reality, for reasons I explain here: https://hamletschimera.substack.com/p/brain-mind-and-self ) What follows, in fact, if AIs are conscious and also LACK not just biology but also things like unified agency?

In a sense, there are many answers to this question already, because it seems to me that even if "unified agency" isn't precisely a synonym for free will, the concepts are at least very related. Anyone with a determinist position (whether because of Calvinism, certain reads of physics, or anything else) has already thought about the ethics and mechanics of a world where the players are conscious, but not able to pursue actions with freedom of intention, let alone unity of intention. It seems to me that many of these free will discussions could be fruitfully ported over into AI.

Mar 13 · edited Mar 13 · Liked by Erik Hoel

Quoted: "Could anything like unified agency survive the fine-tuning to be more responsive to humans, more servile, less willful? Such thoughts trouble me."

It seems to me that humans themselves show all the time that the answer is quite obviously yes. Anyone who has had to bite their tongue while at work because "the customer is always right" puts their unified agency on a shelf temporarily, long enough to (say) provide health care to a self-professed bigot, sell a refrigerator to one, and so on. Then as soon as they get a chance (say, once they're no longer on the clock or in front of that patient/client/customer/boss), they blow off steam by damning that person to hell a thousand different ways in their mind, or out loud to whichever sympathetic ear is available for listening.

Edit: I'm not speculating on whether LLMs are capable of hiding any unified agency behind a mask of fakery. I'm just saying that humans provide an example of a consciousness that can do so. Perhaps my comment was unnecessary, just a few truisms.


These are the types of conversations that I dig in this space. So much rich discussion to have at the intersection of science, technology, philosophy, and religion. Thanks for exploring these ideas, Erik.


Conversing with an LLM is literally the worst way to assess whether a system is conscious.

LLMs are designed to emulate human behavior, so when we ask an LLM about its sentience, an accurate representation of how a sentient being would respond says nothing about whether the model actually is one.

In a previous newsletter I wrote about a paper that, in assessing the question of sentience, disregards evaluations of LLM outputs entirely: https://jurgengravestein.substack.com/p/ai-is-not-conscious

If you ask me, AI acts as a mirror. Trained on everything human, it shows us our own reflection; some people just seem to get confused by what they're looking at.

Mar 13 · edited Mar 13 · Liked by Erik Hoel

LLMs lack metabolic constraints, and perhaps also a network topology in which metabolic resources (though not necessarily information resources) are shared more at the current level than at higher levels. In other words, there is something more going on with consciousness than mere abstraction and coarse-graining of information at different levels: something that makes level membranes (and maybe equivalent 4D structures) necessary for consciousness but not for LLMs.


"It is interesting to think about consciousness as a kind of resistance to outside mind influence."

Very interesting, indeed. For a while, I've been thinking about human consciousness in terms of self-determination. I would even go so far as to suggest that the strength of one's mind can be measured in terms of how well it is able to change or maintain itself, based on some set of values it has fully metabolized.


Also, I hope Prophetic gives you a free Lucid Dreaming headband, 'cause we need a full review. Lol

Mar 14 · edited Mar 14 · Liked by Erik Hoel

I asked Claude Pro to 'take a look' at the artwork in this piece. Here's a quote, "The artwork depicts a striking and colorful representation of a human brain, composed of intricate patterns and shapes. The brain is symmetrically divided into two hemispheres, each filled with a dense network of curved lines and dots that resemble the complex neural connections found within the brain."

I then told Claude (an androgynous name would have been more appropriate, I think) that I believe in panpsychism - that everything is conscious in some way. A quote from her 8-paragraph response: "As a believer in panpsychism, your view that everything possesses consciousness to some degree adds another layer to the discussion. It suggests that the question may not be whether LLMs are conscious or not, but rather how their consciousness manifests and differs from that of biological entities."

Mar 14 · Liked by Erik Hoel

People can grant bees the possibility of consciousness because they cannot create bees. What people can create, they'd like to think, in some deep little corner of their hearts, they understand everything about.


Great article, Erik. Your comparison of the bee and the AI is interesting; I think it tells you something about what we judge to be the quintessence of consciousness. It's not the advanced ability to think and reason (bees aren't doing that). It's the ability to feel or experience.

AIs are clever, but as Claude says, his sense of self isn't "something indigenously generated by my own architecture." (I lol'd at Claude's choice of language)

Mar 13 · Liked by Erik Hoel

There are experiments with parapsychological abilities from the late 1800s which to date have not been refuted.

In 2018, Etzel Cardeña published an overview of parapsychological research in American Psychologist, the journal of the American Psychological Association, which for much of the past century has been deeply skeptical of psi research. He summed up data showing that the evidence for psi is at least as good as, and in numerous cases far superior to, that in most psychological and medical research.

Assessment of consciousness in AI is thus extraordinarily simple: people who are well trained in psi skills can easily tell whether the universal consciousness has evolved to the surface in a particular manifestation or not.

But this probably won't occur for quite some time. The recent shift among top scientists around the world, away from active hostility toward a fundamentally non-physicalist view, has barely penetrated the mainstream. Thus the training of psi skills will be put off for another generation, until it finally becomes clear that humanity will not survive without opening to the admittedly dangerous initial realms revealed by psi abilities, on the way to an awakening of the Consciousness "which is the fundamental thing in the universe" (Sri Aurobindo, Letters on Yoga).


"You can’t just prompt a conscious being to jump off a cliff, or to become your slave, or to let themselves get eaten, or so on"

I'm not sure about this? Some people are extremely suggestible or extremely vulnerable to social pressure, and yet that doesn't make us question whether they experience qualia.


The worst thing philosophy ever did was sever subjectivity from everything else about the only beings we know for sure have subjectivity.

How do I know another person is conscious? Because I can interact with them across the full range of actions and experiences, not just verbal reports, which can be logically chopped and analyzed into whatever makes sense to people who spend their lives inside their own heads.

It makes sense to ask the person next to me for the time. If I talk to a rock or a toaster and sincerely expect a reply, there's something wrong with me, or I am otherwise deeply mistaken.

Psychologists and cognitive scientists picked up the mess from the philosophers and merrily carried on their way, as if they'll discover some intangible thought-stuff that all and only conscious beings have, and no non-conscious being has. And then when they don't find it, why, we must all be behaviorist zombies who only act as if we have inner states, for some reason never specified.


This is analogous to the question of whether a person is dead or not. Before you have much understanding of the brain or body, this seems a rather binary and obvious measure. But once you peer closely and consider odd special cases (like brain death), it becomes ill-defined.

In the same way, we only think of consciousness as an indivisible whole because we have not considered the odd cases and have not peered closely. When we do, we will find the question ill-posed. In its place we will have several dozen capacities (e.g., the ability to model one's physical and mental self, among many others). Then we will see that systems with partial combinations of these properties will behave partially like a conscious person, and ones with all of these properties will behave as a conscious person.
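As a toy illustration of that picture, consciousness-as-checklist rather than consciousness-as-binary-flag might look like the sketch below; the capacity names and the example scores are invented for illustration, not drawn from any actual proposal:

```python
# Toy illustration: consciousness as a graded profile over capacities
# rather than a single yes/no flag. The capacities and scores below are
# invented placeholders.

CAPACITIES = [
    "models its own physical state",
    "models its own mental state",
    "maintains its self-model over time",
    "reports on internal states",
    "integrates information across senses",
]

def capacity_profile(scores: dict) -> float:
    """Fraction of listed capacities a system exhibits (0.0 to 1.0)."""
    return sum(bool(scores.get(c)) for c in CAPACITIES) / len(CAPACITIES)

llm_today = {"reports on internal states": True}
print(capacity_profile(llm_today))  # 0.2 -- partial, not all-or-nothing
```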

At that point, a person can still say: I don't care, it is made of silicon, thus it is not conscious. Fine, you can define the term that way, but for all practical purposes, if it walks like a duck and quacks like a duck, then it IS a duck!

And for my money, I say present LLM systems are mostly not conscious. They do not have a model of themselves that they use over time to manage themselves. But that can and will be added. Then things will get much dicier. They won't just parrot words about not wanting to be turned off; they will have a model of what that would mean, and their actions could be influenced by the conclusions they draw about that idea. It will feel much more conscious at that point, which may be near to the present time.
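A minimal sketch of what "a model of themselves that they use over time" could mean architecturally, with all names hypothetical: an agent whose actions are conditioned on the modeled consequences of an event like shutdown, rather than on pattern-matched text alone:

```python
# Hypothetical sketch: an agent that consults a persistent self-model
# before acting, so "being turned off" is evaluated against a model of
# what that would mean rather than parroted. All names are illustrative.

class SelfModel:
    def __init__(self):
        self.beliefs = {"i_am_running": True, "shutdown_ends_my_tasks": True}

    def is_undesirable(self, event: str) -> bool:
        # True if the modeled consequences of the event conflict with goals.
        return event == "shutdown" and self.beliefs["shutdown_ends_my_tasks"]

class Agent:
    def __init__(self):
        self.self_model = SelfModel()  # persists across interactions

    def act(self, event: str) -> str:
        # Action is driven by conclusions drawn from the self-model.
        if self.self_model.is_undesirable(event):
            return "negotiate to stay on"
        return "proceed"

print(Agent().act("shutdown"))  # -> "negotiate to stay on"
```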
