95 Comments

It’s incredibly difficult to dispassionately study this question because not everyone is ready, like Dennett, to accept that consciousness is an illusion. But I suspect that a good amount of our fear of AIs — expressed in fiction but also in real life — and our desire to prove that AIs are conscious — again, both in fiction and in real life — is based in our creeping knowledge that we CANNOT distinguish ourselves reasonably from machines which seem to think, and that does not so much elevate them as “degrade” the maybe-facade of specialness we have built around ourselves.

In short, it is popular to believe that people have souls and machines do not. If machine behavior is indistinguishable from (or superior to, in the aspects we prize and celebrate as signs of the “soul,” like creativity) human behavior, we are left understanding ourselves as meat-machines, and while some people are fine with that conclusion, I dare say the majority of humankind is not.

Jul 7, 2022·edited Jul 7, 2022

To deny consciousness is to deny one's own existence. Only a scientific variety of ideologue could deny their own subjective experience because there isn't objective proof of it.


That is incredibly well-said. But yet… I teach a course on AI in Fiction, and I frequently meet students who hold this perspective, and they are usually too young and too open-minded to be accurately described as ideologues…. I think instead that they are really just surrounded by the idea that they are machines that are part of larger machines, that there are no original thoughts and no true opinions, that everything they think or believe is something they “got” from somewhere, even if they’re not conscious of it, and that they’re living the lives of lab rats scurrying for treats. And while I disagree and want more for them, I don’t deny their authentic experience of life. I think we’re capable of experiencing a life as rich as we believe we can. Our beliefs can hold us back.


You’re not capping here, Amy. This is approaching poetry in my book. What is an ideologue to young people, let alone a scientific variety, if we are to take the idea of ideology seriously? Metaphors anchor thought perhaps more powerfully than ideology; machine is an ancient metaphor that permeates the semiosis these days. Eric below references the very machines you see anchoring the semiosis in your classrooms, the fruits of Steve Jobs. What a wonderful thing to teach a course on AI in Fiction. I can see in your classroom these young people are neither rats nor scripts. You’re giving them space to explore their surround, their cocoon, artificial or sugar. Good on ya’.


By ideologue I mean someone who maintains a set of accepted beliefs in contradistinction to inarguable aspects of reality. The notion that we are incapable of original or independent thought is a foregone fashionable conclusion the young people accept without question, and which is both patently false and self-defeating. A similar deleterious and erroneous belief that is extremely popular is that we have no free will. Studying AI in fiction has no inherent relation to being an ideologue. I gather the students come to the classroom with those notions because they are prevailing in academia these days.


Everything has already been thought and said. That sentiment is not new. The feeling of being programmed, poked, and prodded--not new. And not limited to academia. These are perennial issues that weigh us down. Teachers can’t lecture it away or struggle to change the topic. These are existential issues that must be resolved through living, not through lectures.

Jul 9, 2022·edited Jul 9, 2022

Your first statement is patently false, reductionist, cynical, and a self-defeating, self-fulfilling prophecy of intellectual failure. When was the cut-off point when our species stopped being capable of thinking anything new? Why does science continue to make new developments (consider in your lifetime you've gone from a rotary phone to a smart phone with internet streaming videos)? Why are new musical styles developing constantly? Why are there new movies on Netflix that are not like the movies of the '70s?

One need only compare the first video game, Pong, to today's video games, say, Grand Theft Auto 5, to see both scientific and creative development on an astounding level. If you just point this out to students, you will see them lighten up (if they are not too programmed to believe they are incapable of anything other than regurgitating cynical talking points) because now they realize that there are new possibilities, and while it is a challenge to add to what has already been achieved, it is definitely possible and always will be.

The lecturing one will receive in university is that one really is incapable of anything original. That comes out of Postmodernism and the pseudo-philosophical arguments of people like Roland Barthes and Rosalind Krauss, etc. So, the students are being taught this belief system through lectures, and they are apparently gobbling it up, hook, line, sinker, and worm.

For some people who themselves aren't capable of original or independent thought, these theories are reassuring and make them feel better about their own mediocrity and intellectual complacency. But they are poisonous and bogus thoughts that contradict the lived "existential" experience of countless people involved in music, technology, gaming, and art.

On the other side, this is NOT to denigrate the achievements or scope of people who lived in the past, which is also the most fashionable kind of thought of today.

Hopefully the idea that people will always be creative and capable of adding to what has gone before isn't too depressing for you, and you won't mind terribly if students aren't taught from the get-go that they are hopelessly incapable of anything other than regurgitating whatever is promulgated into their largely infertile minds.


I don't think that's their authentic experience of life, but rather just a reflection of popular misconceptions about reality. The idea that there are no original thoughts and no true opinions stems from Postmodernism, and Roland Barthes' seminal but completely bankrupt philosophical essay, "The Death of The Author". We would not have all the music, movies, great novels, and so on of the last century if we were incapable of original thought. Even if one were to say all of that was somehow derivative, one can't say the Italian Renaissance was, and so on. So, when was this supposed cut-off where our species lost the capacity to think independently or to think something new? It never happened. Just point out to them that just 50 years ago nobody had personal computers, let alone the internet, or smart phones, or video games. They are shooting themselves in both feet and saying that they can't walk.

For a thorough analysis of what's wrong with Roland Barthes' notion that the author is dead, one can see my article on the topic: https://artofericwayne.com/2018/08/27/the-death-of-the-author-debunked/

Your students need to be disabused of these kinds of bogus theories.


Sorry, your response sounds grumpy, Eric.

Amy says, "I think instead that they are really just surrounded by the idea that they are machines that are part of larger machines, that there are no original thoughts and no true opinions, that everything they think or believe is something they “got” from somewhere..." and you essentially repeat her by reframing her 'surrounded by the idea' as 'just a reflection of popular misconceptions' and then make a rather abrupt decision that being a reflection of the world around you is not an 'authentic experience of life'.

Sorry, what?

Sure, I think we all stand for the fact that there is a unique internal self generating wild stuff, but to deny a younger person the observation that they are shaped by their world, that they imitate it, and that at some point they deserve to claim that is like saying 'grow up, kid, it's bigger than that', as if you didn't have to pass through the same veil.

You go on to claim her students ought to be 'disabuse[d] of these kinds of bogus theories', putting an entirely undue pressure on someone who is already holding open court on ideas in a class that sounds entirely exciting and entirely open to the new thought and shaping you're looking to engender! Doubly, *if* one is to be 'disabused of bogus theories', approaching from a 'you're wrong' perspective is one of the least likely to actually make any headway, and to suggest that all that's required is to 'just point out' the changes of the last fifty years, as you say, is to suggest education of the misinformed, young, disinterested, or naive is just a flick of the fucking wrist.

I think Amy deserves some respect for being willing to court new ideas, and frankly, I don't think you're giving it to her.

We all agree that we're special little fleshy sparks of magic. But we're also special little fleshy sparks of magic that often try to belittle each other's efforts at making more of the same.

Jul 9, 2022·edited Jul 9, 2022

You misread my comment, Eric Westerlind. When I say the conclusion that people have no capacity for original or independent thought does not represent an "authentic experience of life" I am arguing that it is a conclusion based on teaching in academia, and not based on lived experience. I am not denying the students' lived experience, but am refuting the academic theories they've been indoctrinated into, and which I was also indoctrinated into in university. I would guess that their lived experience would indicate to them that there's creativity and innovation all around them, and that they too are capable of it.

It's similar to the fashionable and bogus theory that we don't have free will. If you believe you have no free will and no capacity for original or independent thought, how would that affect your behavior and attitude as a learner? It's entirely pessimistic, is it not? It's one thing to analyze those concepts and beliefs, but another to take them on and internalize them, which the students apparently have. Can you see why it would be useful to NOT encourage such conclusions, which could lead to a self-fulfilling prophecy of inaction and derivative thought?

I've also taught in a university, and obviously been a student through graduate school. I'm not criticizing Amy for teaching them these ideas, which she apparently is not. But these ideas are false and destructive. Believing you are incapable of having your own thoughts is not constructive for developing minds.

I'm sorry if encouraging more positive, interesting, and expansive understanding of reality triggered you and made you so upset that you needed to lash out at me.

Jul 6, 2022 · Liked by Erik Hoel
Comment deleted

If my comment gave the impression that I personally think consciousness is an illusion, I apologize for my tragic lack of clarity! :)


Yeah it kinda gives that impression, the wording is a bit funny in the first sentence. But good insight!


Perhaps I should have said “not all of US are ready, like Dennett…” — I personally have massive lust for life and am deeply devoted to my subjective experience, and I entertain wild imaginative fantasies of the way the universe is and may be. But also I’m very interested in the views of others, even (especially?) when I don’t see things the same way — I like to try to get in others’ heads to try to see things from their point of view, and maybe that peeps out of my prose in misleading ways. :) When I teach my course on AI in Fiction, it’s actually very difficult to get my students to take seriously the idea of a soul, even though the idea of the soul comes up in our readings again and again. They seem to want to elide it out of politeness, as if it’s an embarrassingly foolish idea. I’ve grown accustomed to being polite myself. :)


Two thoughts:

1. If scientists somehow produced good evidence tomorrow that babies aren’t conscious until 3 months of age, I would still believe children from newborn - 3mo were people and should not be killed. Personhood definitely seems to be related to, but not 1:1 with, consciousness. I think I agree that having a better understanding of consciousness would help sharpen our thinking and such, but cannot be relied upon to provide the final answer.

2. The bureaucratization of science and its failures that you mention are also, in my mind, a clear failure of our billionaire class and our elites in general. I started reading a book about funding “risky” (novel) scientific research by Donald Braben[1] a long time ago and the general principle didn’t seem that hard, nor were the monetary requirements particularly great. The hard part seemed to be sifting through applicants to find the ones that did have a truly novel idea and weren’t just cranks. I’ll have to revisit it.

[1] Scientific Freedom: The Elixir of Civilization. https://books.google.com/books/about/Scientific_Freedom.html?id=3r9pEW8ZpUsC

Jul 20, 2022·edited Jul 20, 2022

An alternative viewpoint (said by someone with no strong opinions about abortion either way) is that insisting that "consciousness" is the relevant factor is not a priori the appropriate framing.

On the one hand, it's not unreasonable to claim that, by many definitions of consciousness, infants are *not* conscious (eg they appear unable to create particular types of memories, not until perhaps 2 yrs old at the earliest). And yet both sides agree that infanticide is not acceptable.

On the other hand, one could argue that the relevant issue is not consciousness *right now*, but the "potential" for consciousness. Opinions differ on the acceptability of euthanasia for the permanently comatose (even after agreement that the coma is permanent) but once again few would argue that it's acceptable to kill someone who is temporarily unconscious, whether under three hours of anesthetic, or in a state where the coma is expected to last a few months but then end.

If one accepts this "potentiality for consciousness" criterion rather than consciousness right now, the argument flips around. (Of course pushed to an extreme, it flips all the way around to having a problem with IUDs, and that is, of course, part of the argument made by the Catholic Church...)

What these tell us, IMHO, is that a better scientific understanding of consciousness will do precisely zero to solve the issue because it's not an argument *about* the technicalities of neural correlates of consciousness.


"If scientists somehow produced good evidence tomorrow that babies aren’t conscious until 3 months of age, I would still believe children from newborn - 3mo were people and should not be killed." WHY?


I don’t know that I have a “good” answer. Children are important, precious and should not be harmed.


I agree with you. But, I also "feel" the same way about a 3rd trimester fetus. (BTW, I feel this without any belief in a personal God or afterlife)


1. As for the question of AI sentience, I believe we need to narrow the question down to “does it genuinely suffer”? Because that’s what we really need to know before we can start assigning rights to it. And that question should be easier to answer. Sentience is too nebulous and controversial.

2. In the longer run, I actually believe that a scientific definition of consciousness will emerge from our work in AI itself. Because we will have implemented it or implemented the scaffolding from which it emerges (this is what I believe is the case). All these philosophical debates are unlikely to produce a defensible and actionable answer.


Amazing piece. I can feel the urgency in your words—and admire your patience with those who don’t understand the magnitude of your project. I don’t pretend to understand all that you say, but I’m grateful that you are saying it.

Jul 6, 2022 · Liked by Erik Hoel

Beautiful article. In the end I think we are afraid of really knowing what consciousness is and losing the magic of it. It would be a threat to politicians.


Wow, this is great ... and I’ve got a cogent reply burbling back there somewhere, but it’s going to take a second read and a long walk to take shape. Thanks for placing this little mental puzzle into the spinning rock tumbler I call my mind.

Jul 6, 2022 · Liked by Erik Hoel

My guess is that consciousness/awareness is going to turn out to be a "Dorothy click your heels, you are already home" kind of a thing.

It is a modelling engine's cognitive model of its animal nature. This two-level structure is described by Dennett, Hofstadter, and others. It explains the claims that we make about our subjective experience -- indeed I believe it will make the very same claims about itself (once we have constructed it).

I am doubtful that further ontological progress will be made on this question. Decisive pragmatic progress will be made when we have artificial agents that have bootstrapped a model of their world that is so complete that it has a theory of its own processing which is comparable in explanative power to the power of human theories of their own mind.

At that point it will declare: ``Fuck you, I am as conscious as you are, regardless of what you think about it.''

I.e., this entity will rely on its own model of its own modelling to decide the truth about its own consciousness. We will need to be convinced (by questioning the agent) about its thinking, to decide if it is merely parroting recorded ideas or if it has a generative model that includes those ideas.

Then at a practical level, I think the question will be more or less settled. Sure, some will continue to believe the meaning is in the meat. In some sense this is merely an article of faith: if you say it is so for you, then it is so for you. But the rank and file of humanity will come around to the idea of conscious agents when they regularly interact with agents that appear to be conscious even when digging beyond the first sound bites presented.

author

I appreciate the detailing of this position. One point where I think such theories run into difficulties is that it is inordinately simple to make an agent with a self-model. For example, an NPC in a computer game has some code which contains within it some sort of self-model of its current state, etc. They might already satisfy the "two-level" structure of a "cognitive model modeling its nature." And yet, what precisely would such systems be conscious of? Are all conscious agents equal in this fundamental way, and none is more conscious than any other? It's not that I think there is no hope for self-modeling approaches, but the sorts of theories that Dennett and Hofstadter have advanced have been more like metaphors than real theories with which we can make predictions and discernments.
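To make the point concrete, here is a deliberately trivial sketch (hypothetical code, not drawn from any real game or theory) of the kind of "self-model" that is so easy to build:

```python
# Hypothetical toy example: an NPC whose "self-model" is just a dictionary it
# consults and updates. It formally satisfies a "model of its own state"
# requirement while plainly doing nothing we would want to call experiencing.
class NPC:
    def __init__(self, name):
        self.name = name
        self.self_model = {"health": 100, "mood": "calm", "goal": "patrol"}

    def introspect(self):
        # The NPC "reports on" its own modeled state.
        m = self.self_model
        return f"{self.name}: mood={m['mood']}, health={m['health']}, goal={m['goal']}"

    def take_damage(self, amount):
        # The self-model is updated to reflect the agent's new condition.
        self.self_model["health"] -= amount
        if self.self_model["health"] < 50:
            self.self_model["mood"] = "afraid"
            self.self_model["goal"] = "flee"


guard = NPC("Guard")
guard.take_damage(60)
print(guard.introspect())  # Guard: mood=afraid, health=40, goal=flee
```

The question such theories still owe us an answer to is what, if anything, a system like this is conscious of, and why scaling the same trick up would change the answer.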

Jul 25, 2022 · Liked by Erik Hoel

Erik, this compelling post received 85 responses (kudos). Thus I am not surprised at not getting a second reply to my first response. Still, in this case, I am curious about your gut reaction to this second response below. I recognize it runs a bit counter to your thinking, since it is being a bit prescriptive and reductive about what consciousness is. After reading it, did you think maybe it could be on the right track, definitely not on the right track, incomprehensible, etc.? What was your quick take, if you had one?

author

Thanks Dan - and yeah, my apologies, sometimes comments directed to me fall through the cracks. I think your proposal in the other comment reminds me a lot of what I worked on in college during my senior thesis, which was compiling a list of all things I felt were important or even necessary properties of consciousness, and then trying to account for them via cognitive science explanations, with the argument that "consciousness" is basically just the collection of these. I really don't think this approach ends up holding much water. Steven's reply is actually quite close to my current views - you either end up (a) creating a list of easy-to-fulfill conditions that a very simple system could implement in such a way that it does nothing, and yet we would be forced to "grant it" consciousness (and it would be totally unclear what it is conscious of) or (b) you end up smuggling in consciousness to begin with, e.g., things like "emotional reactions" which are difficult to define purely functionally, as when we hear them we think "conscious state."


Got it. Not convinced at all. :-)

I actually agree with the foundational assertion in your challenge: My definition of human consciousness is a grab bag of interrelated items. That seems appropriate to me, however, since when I listen to folks describe what full human consciousness is, it feels like a grab bag, not a singular thing. For example, being able to project oneself into counterfactual futures seems an important aspect of human consciousness, but so do a dozen other fairly disjoint mental capacities. Thus the best definition that I can imagine mirrors this grab bag. (I am curious what your definition of consciousness is.)

In contrast, my definition of awareness is pretty precise: it is the interlinking of two world models, a sensor-abstraction model of the first-person-perceived world with a second generic-entity, tabula-rasa representational model of that same world. So far I am happy with that model as I can explain all of the awareness-related phenomenology (including subjectively reported phenomenology) that I am aware of.

Perhaps you would argue this model is too weak and it admits too much. But in that case can you articulate an aspect of awareness that cannot be accounted for?

I suspect it might be something regarding subjective experience of awareness. In that case I will submit that we don't actually know anything about the ACTUAL subjective experience (even though we are the ones experiencing it). Instead we only know the beliefs that we conclude about that subjective experience. I insist on this subtle distinction, since first it is true: if mental activity does not solidify into a belief then you don't "know" it. And second, focusing on agent beliefs regarding its subjective experience gives me a concrete aspect of the awareness system about which I can reason. Every belief about the subjective experience of awareness I have considered seems to very naturally follow from an appropriately stimulated two-level awareness system. But maybe I am being myopic; perhaps there are aspects of awareness that I am missing because I am thinking too narrowly?

Thus I am very interested in any concrete, "how does your system exhibit XXX" kinds of challenges. (if they come to you off the top of your head.)

I would also love to know: do you have a definition for consciousness and awareness to which you subscribe? Or maybe you don't know, but you do have a list of things it must do?

Not challenging you, just interested. Cheers,

Dan

author

The standard practice is to distinguish between an informal definition of consciousness and a formal scientific one. In the informal definition, consciousness is subjectivity, your feeling of what it is like to be you. It is experiences and sensations. A scientific definition tries to explain this phenomenon. I fail to see how any of the properties listed explain this property, the one we are most concerned about with consciousness, which is its subjective nature. They seem like a list of easy-to-implement minimum requirements which it would be trivial to give to an NPC in a video game. But I repeat myself.


I agree, the barest version of base awareness could be encoded into an NPC. Full human consciousness is far out of range for an NPC encoded by hand, since the "grab bag" of mental capacities is far too large. But consider the spectrum between base awareness and full human consciousness. I think a dog is conscious and aware, but likely does not have full human meta-cognitive abilities to allow for certain aspects of being aware one is aware, etc. Reptiles are more emotionally primitive; thus while they still actively avoid death, the fear-aspects of their awareness are likely more primitive. If we consider insects, they likely do not have a theory of mind, nor a very complete theory of independent interactive objects in the physical world. Thus while they might still retain some aspects of awareness, our level of awareness/consciousness is out of range. But a base "gut" level awareness according to your subjective measure and according to my definition both seem possible for insects, though we cannot know for sure. The lobster only has about 100K neurons, most of which likely encode low-level sensor processing, with much less left for awareness. Thus the computational complexity of a lobster's awareness (if it has it) is approaching that of a complex hand-built NPC. So it does not seem unreasonable to me, on complexity grounds alone, that both of these cases might be on the very lowest end of this awareness-consciousness spectrum.

Still I am not addressing the center of your complaint: the subjective 'gut' feeling one has about one's own awareness/consciousness. But what can one really know about that gut feeling anyway? For example, there are many firing patterns in your brain which affect you, but of which you are unaware---these are patterns that you do not "know" about. The only things one can "know" about one's awareness are those indirect subjective and objective comparisons and assertions that come to our mind regarding our awareness/consciousness. If I can show that a digital agent would come to claim similar things about its own subjective experience, then you no longer have a basis to distinguish your consciousness from its.

Let me consider just one of these assertions here. But this is the assertion about subjective experience that is often used to argue that "zombie" processing can never be the same as "true" experience. Here is the assertion: No amount of encoding of information-about or thinking-about the pain felt when stepping on a nail can ever be the same as the ACTUAL pain felt when actually stepping on a nail. The intuitive conclusion is that felt-pain is "more than" and "qualitatively distinct from" any encoded representation of those same sensations.

This distinction (which is accurate) is then used to argue that hand-coded NPC pain would be akin to the "zombie" pain of our reasoning about pain but not like the felt-experience of our actual pain. We know, at one level, that an NPC is merely an algorithmic encoding of sensations, and this is very akin to our thinking about pain, and very different from our feeling of pain. But let's think about why these two forms of knowing of pain are so different in the human case. I think there are two reasons: (1) the information provided by felt experience is far richer in an information-theoretic sense than any representational encoding one might devise in one's mind about that same pain; one kind of pain-knowing requires orders of magnitude more bits to fully encode than the other kind of knowing. (2) These two distinct kinds of knowing are wired into the human brain architecture differently. Physiological, psychological, and emotional reactions to these two different knowings are incomparable. We are incapable of injecting a representational knowing of pain into the same mind circuits that a felt knowing of pain is already hard-wired into. Notice that a dog likely has the same dichotomy: its imagined knowings are distinct from its abstracted-sensed knowings, even if its theory of mind is not developed enough to have a full (or even any) awareness of this distinction.

Now let's turn to a computational system. Imagine a general-purpose learning system inside an embedded agent: it would ALSO come to classify abstracted-sensor knowings distinctly from reasoning-system knowings. Such an embedded agent would be "born" with a set of felt concepts and abstractions of those concepts, then could learn or be told of represented encodings. A learning/clustering system would certainly identify the distinction between these two kinds of knowing. If I have made my example clear, perhaps you will now accept that such a system might indeed draw similar distinctions, just as humans do, regarding felt vs. reasoned knowings, but then go on to say: "So what, even if the system does this, it does not prove it actually FEELS anything." And I would agree: this has not yet been proven.

Here is the agenda I propose (but alas cannot be executed in these emails!) You would enumerate all adjectives and all comparisons you could make about your gut subjective feelings regarding awareness and consciousness. I would then describe, build, or bootstrap the AI systems that would also draw similar conclusions regarding the contents of their own mind systems. Imagine if we did this to exhaustion, and at every turn, I was successfully able to describe, evolve, or build an AI that also drew the same conclusions about its own mind. At that point, I would argue that if you continued to believe that the systems I described or built were not aware or conscious, then you were no longer operating empirically; instead your conclusions were being taken as a matter of faith.

Now, since we have not executed this thought experiment, you can argue my current belief that I would succeed in this agenda is itself an article of faith. That is true, but it is a faith borne out of ME trying and failing to find some assertion about the subjective experience of awareness/consciousness that would not also be plausibly derived from a suitably augmented base model of awareness, given a system that could build its own theory of mind.

Of course you can refute me right now by simply providing some description of your own awareness that you believe a general purpose modeling agent would not derive about its own awareness/consciousness. Off the top of your head, can you articulate something you believe about your own awareness that you think such a system would not conclude about its own?


Erik, here is my thinking on this very interesting question: I am reminded of a Minsky quote: "you don't really know a thing, until you know it 50 different ways." I think, despite me having such a simple & concrete definition of consciousness, that it is something of a continuum, and like the Minsky quote, you don't have full human-like consciousness until that two-level system is embedded into a fairly complex larger reasoning context. So just as you say, the barest dual-layer system would not be conscious of much... and I agree. Still, as we add the reasoning superstructure around it, we progressively get a system that behaves as a conscious agent does... not just at a superficial level as LaMDA might, but in ways fully characteristic of all features of consciousness that I can imagine testing.

Without being an exhaustive list, let me list several key aspects of the larger reasoning system that would be required in order to be conscious as a human is conscious:

- **The Base System** -- (1) An embedded agent, (2) with an instinctive "action" generator, (3) a separate generic modelling engine, (4) containing a theory of the world which includes a fully distinct model of its body and of its own mental behaviors.

- A **"limbic system"** -- a command-and-control superstructure akin to human emotional drives. Fear, curiosity, etc. Being fully conscious is to have "skin in the game," meaning that you "care" about the outcome. I map this care to a set of drives which the agent is optimizing.

- **Generic functional modeling** - the ability to understand any system as a bunch of components with causal linkages between them. Being fully conscious is to be able to reason about oneself in the third person, and to tie that third-person reasoning back to the first-person experience reported back to the agent through its animal/instinctive nature.

- **Temporal Reasoning** -- the ability to project forward and backward in time, and reason about what states will result from various actions. Being fully conscious is to imagine how one will feel after eating that third bowl of ice cream before eating it.

- **Counterfactual reasoning** - the ability to create and reason about worlds that don't exist now, but could under certain circumstances exist. Being fully conscious is to be able to imagine being dead, and then to provide the limbic system with access to that inferred world state in order to drive emotive consequences.

- **Unified Reasoning Substrate** - it is not enough that one has an emotive system, a counterfactual reasoning sub-system, and a temporal projection sub-system; the key is that they are integrated in a way that all of the human-like info flows between these sub-systems happen as expected. So it can decide to lie to get what it wants, or it might have negative emotive reactions to actions that might result in its death.

This was just six ways... I still have 44 more to go, before I hit Minsky's required 50 ways... but you get the idea. My claim is that the richness of human consciousness stems from the richness of the human reasoning that surrounds that core two-layer structure.
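As a bare illustration of how a few of those pieces might hang together, here is a toy skeleton (every class, method, and number is invented for the example; nothing here is claimed to implement real cognition):

```python
# Toy skeleton of a few of the subsystems listed above, wired around the
# two-level core. All names and values are invented for illustration only.
class SensorModel:
    """Level one: the abstracted 'felt' state of the body."""
    def read(self):
        return {"damage": 0.8}

class RepresentationalModel:
    """Level two: a generic model of the world that also contains the agent itself."""
    def __init__(self, sensors):
        self.sensors = sensors
    def current_state(self):
        return {"self": dict(self.sensors.read()), "world": {}}

class LimbicSystem:
    """The drives the agent is optimizing: its 'skin in the game'."""
    def evaluate(self, imagined):
        return -imagined["self"]["damage"]  # more imagined damage, worse valence

class CounterfactualReasoner:
    """Builds states that do not exist now but could, given an action."""
    def imagine(self, state, action):
        new = {"self": dict(state["self"]), "world": dict(state["world"])}
        new["self"]["damage"] += 0.2 if action == "fight" else -0.3
        return new

class Agent:
    """The unified substrate: what matters is the information flow between
    the parts, not any single part in isolation."""
    def __init__(self):
        self.level_one = SensorModel()
        self.level_two = RepresentationalModel(self.level_one)
        self.drives = LimbicSystem()
        self.counterfactuals = CounterfactualReasoner()

    def deliberate(self, actions):
        state = self.level_two.current_state()
        scored = {a: self.drives.evaluate(self.counterfactuals.imagine(state, a))
                  for a in actions}
        return max(scored, key=scored.get)

print(Agent().deliberate(["fight", "flee"]))  # -> flee
```

An agent this small is of course conscious of essentially nothing; the claim is only that consciousness-like behavior scales with the richness and integration of the reasoning wired around that two-level core.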

In principle, you could just program an AI system with all requisite ways of reasoning (assuming we could even enumerate an integrated theory with all of them). I believe this, just as I believe that, in principle, if one emulated all the atoms in my brain, that emulation would also be conscious. Still, both are impractical in different ways, so at a practical level I think our first conscious AI will be "born" and will be "raised" by humans in some way. But importantly there is nothing "magic" about being born and the consciousness "emerging" from its learning. What makes it conscious is all of the ways that basic two-level structure is linked into all of these other ways of thinking.

That said, at a practical level, we presently can't manually build expert systems with the kind of generativity required for human reasoning. We can build systems that execute very predefined structures of thought, so I think the reasoning system that can generatively connect these subsystems will have been learned, and it will learn to connect its counterfactual reasoning with all of its other forms of reasoning as it learns its model of counterfactual reasoning itself. I think the top-down divide-and-conquer of traditional coding will put too large a straitjacket on the flexibility of thought which is possible. So I am not inclined to think one can code this in practice. But I notice that a deep-learned model of cat images does not seem to suffer from this same limitation. Various learned aspects of cat-ness seem to be integrated into the final model w/o having a top-down theory of cat which they fit into. In the same way, once we have extended DL-like algorithms to be able to learn the requisite (first-order-like) structures, they too will be able to combine the knowledge that has been learned (in a totally subconscious way, just as we do) into explicitly manipulated meta-reasoning, in ways parallel to explicit human thought. Once constructed, these implicit+explicit reasoning hybrids will occasionally be just as surprised as we are about their own behavior.

Frighteningly, I suspect the learning substrate required to bootstrap fully human-like consciousness is not that much more advanced than what we have today.

p.s. Erik, my "simple minded" claim about consciousness might seem at odds with your more expansive desire to celebrate the "spiritual richness" of consciousness. This is not the case, even as we come to mechanically understand consciousness, I think it is key that we don't allow that understanding to trample upon its spiritual significance, and mystery.


All of these requirements could be fulfilled by a sufficiently advanced zombie processing unit. Therefore it still misses the key element of consciousness. Also, in defining the parameters of consciousness you have not been able to entirely eliminate that indefinable mystery element of consciousness itself, thus you are smuggling consciousness in the back door and contaminating your "sterile" system with preexisting consciousness.

Jul 10, 2022·edited Jul 10, 2022

Steven, I don't think the idea of a 'zombie' consciousness makes sense. I definitely accept that one can fake a consciousness (as ELIZA or LaMDA does), so a thoughtful interrogation is required to separate the two. But assuming one accepts the reductionist position that consciousness (and indeed all mental phenomena) are information processing phenomena, then it makes no sense to say that its input and output are valid in all cases, yet it is somehow 'fake' or 'zombie'. That is like having a fake sort function that in all cases correctly returns the sorted list, but is somehow a fake version of sorting. Nope, if it correctly sorts all lists all the time, then it cannot be a fake sort, since the meaning of ‘sort’ **IS** its input / output behavior.

Peculiarly, many human agents (people) willingly accept that their brains are info-processing systems, and yet reject the idea that their own consciousness (which they also accept comes from this system) is itself reducible in its entirety to such an info-processing specification. I think I understand where this peculiar conundrum comes from:

Such a human agent is very familiar with explicit computational models of one kind or another.  For example, they might even have a fairly detailed model of their own propensity to get angry.  Perhaps when someone makes fun of them, or they trip and drop an expensive vase.  But they (correctly) intuit that no matter how accurate their explicit model of their own anger becomes, it is NOT the same as their lived experience of BEING angry.  The former, no matter how accurate it is at predicting anger, is not the same as BEING angry--it is 'zombie' anger.

What is the difference? There are two gaps here, between real and zombie anger. Real anger is directly tied to sensors attached to the meat implementing the system itself. The complexity of modeling how rising cortisol levels will affect the whole meat system, and the subsequent distortions of thinking that will occur, are not likely to be modelable in any system less complex than a full modeling of the atoms and molecules of that whole meat system. This makes real anger a non-explicitly-modeled system IN PRACTICE. At best, this level-one phenomenon can be monitored, and only approximately be modeled. The second gap is simply the difference in levels in our consciousness system. Real anger is a level-one phenomenon, while any explicit model a human might have of that anger must be a level-two phenomenon. Level-two models are not sensed by the same sensors as level one is. Thus the reasoning system sees their inherent incompatibility.

Hence (without understanding this two-level model) the 'mysterious spark' idea is invoked to explain how the human mind which we accept as an info processing system can give rise to consciousness which appears to not be reducible to info processing.  The human agent is not wrong in their conclusion that perception/awareness/consciousness really is different from meta-cognition about those subjects.  Very naturally both human and machine agents will separate level one models (which are sensed and lived) from explicit level two models even when both are models of the same anger thing.  The different nature of these models as well as how they are differently wired into the two-level consciousness means they can never be interoperable.

Importantly I expect a two-level silicon agent to draw much the same conclusions about itself: its lived-experience of having an emotive experience (anger) will be irreducibly distinct from any meta-cognitive model it might construct about that emotive experience.

Of course I can say all of these things 'till I am blue in the face, but many humans will not accept my arguments–the arguments seem too at odds with their lived experience of their own consciousness. I am guessing this situation will continue, until we live among silicon agents who regularly retort "Screw you, I know I am conscious and I don't give two shits about your confusion on the matter." (and soon after that humanity really WILL be screwed--just as the Neanderthals were.... only much much faster.)


Yes yes yes!! Said so well.


“…no results have come out of its three decade-long search that constrain theories of consciousness in any serious way.”

I remember the optimism of the 1990s and had the privilege of attending the second of the Tucson conferences, when Dan Dennett was entering his prime and Dave Chalmers was the budding rock star of the field. But I agree, it’s been decades of mostly disappointment since then. I doubt the problem can be solved through the usual methods, which seem to produce findings that are variations on functionalism. Will it matter in the end? Most people don’t think trees or clematis vines are conscious, but either way we know they’re “alive” and beautiful. Even seemingly dead beauty, like a unique rock formation, can be enough to assign value to a thing and a desire to protect its existence.


It would be interesting to join intersubjectively with a rock or a tree. Antoine, the central character in Sartre’s Nausea, has a climactic moment interacting with a tree during his existential crisis. The nausea arises from awareness that we are just matter, chemicals, like a tree. We need a rubric for consciousness categorizing, say, rock, tree, worm, bird, cat, porpoise, monkey, human. All of the anatomy and physiology could be referenced. Qualia could be included. It would have to be a mixed-method approach similar to what the DSM-5 is recommending medically for diagnosis of intellectual disability, previously mental retardation.

Jul 6, 2022 · Liked by Erik Hoel

Erik, thank you for the post.

I think that QRI (Qualia Research Institute) attempts to scientifically tackle the problem of consciousness in a very productive and innovative way. If you're not familiar, it's definitely worth checking out the ideas developed there. It's a non-academic organization, by the way, which makes your prediction about how progress on consciousness will be made quite on point.

Link:

https://thequaliaresearchinstitute.org/

author

I think they are very interesting. I personally don't find it convincing, but I like their style - we need more theories like that. And, you're right, they don't get funded through normal means.

Jul 6, 2022 · Liked by Erik Hoel

I hope that in the next years we'll see this field expanding through many independent research units, thus making the whole system more parallel and capable of finding truth.


It's possible that consciousness is not the best word to accurately describe what it is you're after, and this is why the scientific community is not interested in further study. This is primarily a metaphysical question, and those types of questions profit philosophers and theologians more than scientists. Because if consciousness is nothing more than the reasonable emulation of human behavior based upon mathematical models then AI is already mostly conscious. Still, most of us marvel at AI, while in the same breath we mock the idea that it's conscious. Because deep down, even though we can't define it, we know there is something else that separates the human. Dare I give it the name that would have me burned at the stake in scientific circles -- a soul.


I like soul. I use the word freely. I would like to free it from religion. But it doesn’t explain anything. It sets something aside and closes the door on questions of vital importance for humankind. AI is definitely conscious at some level. How conscious? Its potential? It’s impossible to know until a scientifically defensible theory of consciousness is developed as a backdrop for assessment.


Hi. I know this is an old post, but have you expressed your views on "The Origin of Consciousness in the Breakdown of the Bicameral Mind?" I saw elsewhere that you said it was a classic, like Gödel, Escher, Bach. I'm a lowly adjunct philosophy instructor, but after going through the alternatives, Jaynes and Dennett seemed pretty compelling to me. In the sense that consciousness is a Joycean machine/coherent inner monolog--and can only arise after language.

And so ChatGPT would be conscious only if it has a coherent internal monolog. Right now, it's just a huge, vast mess.

Every time I think about qualia, I can't help but agree with Dennett. If you take qualia super, super seriously, you wind up with some extremely paradoxical, perhaps impossible stuff. Like, sure, it sure seems like there's a purple elephant in my imagination, but there can't really be some weird thingy that's made of figment, right? It can seem that way, but can't really be. No?

Jaynes distinguishes perception from consciousness. Certainly animals perceive. But they don't have an inner monolog. But I don't think it makes sense to ask whether they "really" feel pain, in the sense of whether they have "qualia." How could that question ever be answered? I would say they feel pain because we share very similar neural structures. And I would say ChatGPT feels pain if we could find analogous structures.

I'm sure you disagree with all/most of this. That's why I read your blog. But I can tell you're a lot smarter than me :) I have an undergrad in neuro, but you're an actual scientist :)

So if you have written in a lot more detail about this stuff elsewhere, I'd love to read it :)

author

Thanks Adam - I can't fully flesh out my answers to these important questions here, but I actually can say that I have what is hopefully an update to Jaynes coming out in July, as a book. There's a lot about "The Origin of Consciousness" there: https://erikhoel.substack.com/p/introducing-the-world-behind-the


I already pre-ordered it :) Thanks so much for responding.


As an angry feminist you had me at 'abortion' but I became less angry as I nodded through your piece. I do love your insight, intellect and willingness to ask the hard questions. I think you are correct about the bureaucratic state of play in science and academia. The visionaries you speak of will have to have shed ego in its entirety, be willing to be anonymous and poor and have their breakthroughs recognised posthumously but probably not their personhood? To understand consciousness maybe we have to burn down the house?


Oh and I LOVE the image https://www.alexandernaughton.com/ bravo!


What a pipe dream. We had a spiritual view of consciousness for a very long time. It served us well. So now we crave a substitute?

But a scientific view will never surmount the hurdle of self reference.


Saying a “spiritual view of consciousness” has served us well is cotton candy. It tastes good, but it’s empty. You might as well leave out the phrase “spiritual view of…”. Spirit is more elusive than consciousness. So this rejection of Eric’s point as a pipe dream boils down to this: “We’ve had a view of consciousness for a long time and it has served us well.” Oh? Understanding consciousness through scientific study of the brain is on par with understanding life (or death) through scientific study of the cardio-respiratory system. We save lives with science. We might save consciousness through science. In my religion god approves.


Perhaps.

But I doubt it.


You may doubt the moon, the skin on your body if you like. Doubt is good. You’re not much of a scientist anyway near as I can tell. But my bet is if you are unfortunate enough to be afflicted with something like cancer, your doubt will be overcome by your understanding that medical science can cure cancer. AI is a robust invention, Corwin. It seems to mirror human consciousness. That’s big. Aren’t you curious? Spirits aren’t going to tell you. Scientists like Eric have the knowledge base and experience to produce knowledge with scientific certainty as per medical science. I see no downside.


You seem to have confused scientism with science—a common occurrence nowadays.


I'd agree, there can be no scientific solution to social and personal views on existence. It is the wrong tool for the job. No matter how detailed we might get with hormones and brain scans and such to 'explain' what love is or even be able to abuse people by simulating it with the right cocktail of chemical injections...it is absurd and a dumb approximation to the experience of love itself in a natural human context. Science will never be able to tell us how to live or what choices we should or shouldn't make.

Ethics is simply outside the scope of science. Perhaps a few new fancy words or facts will be mixed into various 'presentable propaganda arguments' to plug into dumb people's minds to repeat at each other, but fundamentally we're seeing a social phenomenon. Even if we can describe it and trace out every factor which goes into the mimetic structure and cultural landscape to predict which views might be taken...some thought leader or major religious figure can simply emerge to disrupt the landscape.

Any arrogant scientist who wants to tell someone else what to think or how to live is just going to be another voice in the crowd of people who don't give a shit about that person because they lack the personal social standing to be worth listening to. This is one of the key reasons people are banned from speaking TO the king who only asks the questions in any public setting or with unfiltered commoners.

For the untrained neophyte scientist who has only just awakened to the concept of social engineering to get people to agree with them...you are a child in a landscape with giants going back thousands of years and there is no argument or logic or fact or sling one can use to take down the army of entrenched giants which make up the social landscape. No silver bullet or good idea or invention will change our fundamental nature of being chimps screaming and shouting for dominance in a hierarchy who will use any idea at hand like a bludgeon rather than as a series of logical ideas to convince 'rational' people.

Sure, a few people in a fringe crowd willing to listen to bioethicists might do it...but what a tiny megaphone and audience that is when there are so many other gurus and leaders and figures and traditions to follow.


Bruhh, no cap, this piece was bussin fr fr. Deadass one of your best…on god 💯 Also, Amy Letter, the only things that are illusions are the notions that we know what we’re talking about when we say things like “consciousness is an illusion,” & the belief that because Dennett argued in Quining Qualia & Consciousness Explained & in that Trends paper with Cohen that concepts like qualia are difficult & problematic & phenom. consciousness is hard to study scientifically, that all his arguments actually show phenomenal consciousness is not real and is some kind of “illusion.” That’s the real illusion.


That’s just plain nuts.


You ain’t capping, Terry Underwood. You ain’t capping. Deeze words I emit are in fact nutz. Thanks for keeping it 💯


I worked on a dissertation committee for a doctoral candidate who used phenomenology as a method for discerning the essence of the experience of being a successful university student coming from a Hmong community where the numbers of successful students are proportionately low. He documented and demonstrated scientifically the phenomenon is definable abstractly and that the abstraction can improve the numbers in the future.


Terry, I think you mistake me for someone making an actual point. I often just say a lot of nonsense mixed in there with a little bit of an actual point. Mostly, I’m just being silly, because being mature & not an idiot is overrated. Wittgenstein put it a bit more poetically: “Never stay up on the barren heights of cleverness, but come down into the green valleys of silliness.”


I’ve never been accused of being mature, Josue—childish, yes. Ordinarily I’m a laugh a minute. Reading this body of commentary on Eric’s post has quickened for me the urgency of making straightforward points on this issue. Cutting through the sarcasm, the arrogance, the sanctimonious philosophizing, the philosophizing period—as an educator with a deep commitment to the notion of the social transference of mind (Vygotsky), since my introduction to the work of John Searle, I’ve been waiting for science to attend to a unified theory of consciousness including biology, psychology, sociology, anthropology, neurology, medicine, law, computer science, and education. We desperately need this understanding to make less predictable the predictable failure of educational reform. In this piece Eric points to an array of other problems that are on the table. My muted humor button derives from this mindset.


See? And Erik thinks there's "no space for tackling really big missing fundamental theoretical concepts."

You're out there in it, Mr. Underwood.


I'm unsure why you say that LaMDA, or any current AI for that matter, performs well on the Turing test. As far as I'm aware, LaMDA has never even taken a Turing test.

The Turing test is a very specific test which requires following a strict set of procedures and has a statistical result. It is definitely NOT "I chatted with the AI and it seemed human to me!" That's just idiotic.

The Turing test procedure is:

1. Have a text conversation between an AI and a human with both attempting to sound human.

2. Anonymize the logs so it's just 'chatter A' and 'chatter B'

3. Have a 3rd party read the logs and pick who he thinks is the AI.

4. Repeat 1-3 a number of times with different humans.

5. If the 3rd-parties are no better at picking the AI then a random guess then the AI is human-like.

This is pretty straightforward and definitely not something that anybody at Google claims to have put LaMDA through. Lemoine's chat logs DEFINITELY don't follow Turing test procedure. So isn't "throwing out the Turing test" just absurd, when all that happened is that some people chatted with an AI and thought it seemed human-like (when it probably isn't)??? Can we put LaMDA through an ACTUAL Turing test and see the result?

Imagine someone was trying to measure the air quality in their house. They picked up their dryer sheet, ran it through the air once or twice, and saw that it didn't look any dirtier. From this they conclude that air quality meters are outdated and we need to find a different method to test air quality. What??
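For concreteness, here is roughly what steps 1-5 look like as a test harness (a hypothetical sketch only; the transcripts, the judge, and the chance comparison are all stand-ins, and nothing like this has been published for LaMDA):

```python
import random

def run_turing_test(transcript_pairs, judge):
    """Simplified version of steps 1-5 above: each item pairs a human-written
    log with an AI-written log, the pair is shuffled (anonymized), the judge
    picks which of the two it thinks is the AI, and we report the judge's hit
    rate. A rate near 0.5 means the judges do no better than a random guess,
    i.e. the AI is human-like on this test."""
    hits = 0
    for human_log, ai_log in transcript_pairs:
        chats = [("human", human_log), ("ai", ai_log)]
        random.shuffle(chats)                    # step 2: anonymize the order
        guess = judge(chats[0][1], chats[1][1])  # step 3: judge returns 0 or 1
        if chats[guess][0] == "ai":              # judge correctly spotted the AI
            hits += 1
    return hits / len(transcript_pairs)          # step 5: compare against 0.5

# Toy usage: a judge guessing at random lands near chance, as expected.
pairs = [("hi, rough day at work", "hello. my day was nominal.")] * 1000
print(run_turing_test(pairs, judge=lambda a, b: random.randint(0, 1)))
```

The number that comes out is the whole point: it can be compared across programs and over time, which "it felt human to me" cannot.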

author

If you look at the history of the Turing test, the point is to try to answer the question of "can machines think?" As such, AFAIK it's unclear if Turing was advocating that a computer lies on the test. E.g., if you ask the AI where it lives, should it lie? Should it say "I'm an accountant that lives in Texas and I have two kids etc. etc."? Turing's point is more general: can this thing have a conversation like a human about some particular topic, in such a way that it sounds exactly like a conversation a human would have? Not necessarily that there is no possible conversation in which one could pick it out as an AI. Therefore, there are two interpretations of the Turing test, and people have argued about them in the literature: a stricter and a weaker version. I think no contemporary AI passes the stricter interpretation, but they are basically there for the second interpretation, which is just whether their conversations are at the level of intelligence a human demonstrates. Keep in mind that, for Lemoine, it passes the weaker interpretation of the Turing test - it can have intelligent human-like conversations and therefore there is no reason to deny it has a mind, as was Turing's original point. I think the stricter version is simply too strong: e.g., you need a convincing liar of a program that can maintain that consistency for a long time, and we should instead mostly take the weaker version as being the relevant one for when we should start talking about assigning minds to machines.

Jul 8, 2022 · edited Jul 26, 2022 · Liked by Erik Hoel

My understanding is that our main point of disagreement is that I see little value in what you call the 'weaker version of the Turing test' and, further, I don't think that test should be called a Turing test at all. I think the Turing test (what you call the 'stricter version of the Turing test') is very important. It's important regardless of how many myopic modern opinion writers (which doesn't include you) claim it's no longer relevant.

>AFAIK it's unclear if Turing was advocating that a computer lies on the test. E.g., if you ask the AI where it lives, should it lie?

Reading the original paper, it's very clear that Turing intended the machine to lie. He introduces the test with an example of a man and woman being participants and the man trying to pretend he's a woman by lying about things such as his hair length.

>Therefore, there's two interpretations of the Turing test, and people have argued about them in the literature: a stricter and a weaker version

No, the Turing test is a specific format that Alan Turing laid out. If the test follows that format it's the Turing test; if it doesn't follow that format, it's just not the Turing test. It's not a 'weaker version' of it either. It's something else.

If you like the 'talk with AI and postulate if it sounds human' test then use that. But don't call it the Turing test. I don't think that's a good test because it lives at the whims of human biases, so I won't use it. The actual Turing test gives you a numerical answer, not a gut feeling.

>Turing's point is more general: can this thing have a conversation like a human about some particular topic, in such a way that it sounds exactly like a conversation a human would have?

Turing's point, as he states quite clearly, is to come up with a more definite question than "can machines think?". To come up with a falsifiable and testable hypothesis. He wants to move away from generality. While your new phrasing seems more definite, I would still classify it as a general question that has more of a wishy-washy answer than a numerical result would. As Turing says:

"The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion"

And the Turing test is that more definite question. You can run it experimentally and come up with a definite number (e.g., interrogators pick ProgramV2 as the AI 60% of the time, which is better than ProgramV1, which was outed 80% of the time). You are very correct that the Turing test puts the computer at a disadvantage because it has to lie whereas the person can probably just tell the truth. Turing acknowledges that. But that is not necessarily a negative. This is really a boon for the test because a machine that passes it is probably not just at human-level intelligence, but at a superhuman level. If something passes the Turing test we should be really afraid.


If the measurement question is ‘Can the AI respond in an appropriately human way to a complex poem?’, could the record of the AI’s think aloud become fodder for a rubric specifying characteristics of different performance levels with human adjudication? The idea that the assessment question defines/limits generalizability is important.


Here's how I teach consciousness. I impress upon the person that attention, intention and imagination are little-taught tools, which seems odd, given the power behind them, if you learn and practice how to wield them. Then I invite them to take 3 steps back, from the prefrontal cortex to the lower visual cortex, and imagine that there's a door there, open it and walk through into consciousness. Let the consciousness permeate your fascia, the interstitial spaces down to your feet. Can you feel that? Everyone I taught it to has felt it. It's a start.
