Scientific gaps have become political and ethical ones
It’s incredibly difficult to dispassionately study this question because not everyone is ready, like Dennett, to accept that consciousness is an illusion. But I suspect that a good amount of our fear of AIs — expressed in fiction but also in real life — and our desire to prove that AIs are conscious — again, both in fiction and in real life — is based in our creeping knowledge that we CANNOT distinguish ourselves reasonably from machines which seem to think, and that does not so much elevate them as “degrade” the maybe-facade of specialness we have built around ourselves.
In short, it is popular to believe that people have souls and machines do not. If machine behavior is indistinguishable from (or superior to, in the aspects we prize and celebrate as signs of the “soul,” like creativity) human behavior, we are left understanding ourselves as meat-machines, and while some people are fine with that conclusion, I dare say the majority of humankind is not.
1. If scientists somehow produced good evidence tomorrow that babies aren’t conscious until 3 months of age, I would still believe children from newborn to 3 months were people and should not be killed. Personhood definitely seems to be related to, but not 1:1 with, consciousness. I think I agree that a better understanding of consciousness would help sharpen our thinking and such, but it cannot be relied upon to provide the final answer.
2. The bureaucratization of science and its failures that you mention are also, in my mind, a clear failure of our billionaire class and our elites in general. I started reading a book about funding “risky” (novel) scientific research by Donald Braben a long time ago and the general principle didn’t seem that hard, nor were the monetary requirements particularly great. The hard part seemed to be sifting through applicants to find the ones that did have a truly novel idea and weren’t just cranks. I’ll have to revisit it.
 Scientific Freedom: The Elixir of Civilization. https://books.google.com/books/about/Scientific_Freedom.html?id=3r9pEW8ZpUsC
1. As for the question of AI sentience, I believe we need to narrow the question down to “does it genuinely suffer”? Because that’s what we really need to know before we can start assigning rights to it. And that question should be easier to answer. Sentience is too nebulous and controversial.
2. In the longer run, I actually believe that a scientific definition of consciousness will emerge from our work in AI itself. Because we will have implemented it or implemented the scaffolding from which it emerges (this is what I believe is the case). All these philosophical debates are unlikely to produce a defensible and actionable answer.
Amazing piece. I can feel the urgency in your words—and admire your patience with those who don’t understand the magnitude of your project. I don’t pretend to understand all that you say, but I’m grateful that you are saying it.
Beautiful article. In the end I think we are afraid of really knowing what consciousness is and losing the magic of it. It would be a threat to politicians.
Wow, this is great ... and I’ve got a cogent reply burbling back there somewhere, but it’s going to take a second read and a long walk to take shape. Thanks for placing this little mental puzzle into the spinning rock tumbler I call my mind.
My guess is that consciousness/awareness is going to turn out to be a "Dorothy click your heels, you are already home" kind of a thing.
It is a modelling engine's cognitive model of its animal nature. This two-level structure is described by Dennett, Hofstadter, and others. It explains the claims that we make about our subjective experience -- indeed I believe it will make the very same claims about itself (once we have constructed it).
I am doubtful that further ontological progress will be made on this question. Decisive pragmatic progress will be made when we have artificial agents that have bootstrapped a model of their world so complete that it includes a theory of its own processing, comparable in explanatory power to human theories of their own minds.
At that point it will declare: "Fuck you, I am as conscious as you are, regardless of what you think about it."
E.g., this entity will rely on its own model of its own modelling to decide the truth about its own consciousness. We will need to be convinced (by questioning the agent) about its thinking: to decide whether it is merely parroting recorded ideas, or whether it has a generative model that includes those ideas.
Then at a practical level, I think the question will be more or less settled. Sure, some will continue to believe the meaning is in the meat. In some sense this is merely an article of faith: if you say it is so for you, then it is so for you. But the rank and file of humanity will come around to the idea of conscious agents when they regularly interact with agents that appear to be conscious even when digging beyond the first sound bites presented.
“…no results have come out of its three decade-long search that constrain theories of consciousness in any serious way.”
I remember the optimism of the 1990s and had the privilege of attending the second of the Tucson conferences, when Dan Dennett was entering his prime and Dave Chalmers was the budding rock star of the field. But I agree, it’s been decades of mostly disappointment since then. I doubt the problem can be solved through the usual methods, which seem to produce findings that are variations on functionalism. Will it matter in the end? Most people don’t think trees or clematis vines are conscious, but either way we know they’re “alive” and beautiful. Even seemingly dead beauty, like a unique rock formation, can be enough to assign value to a thing and a desire to protect its existence.
Erik, thank you for the post.
I think that QRI (Qualia Research Institute) attempts to scientifically tackle the problem of consciousness in a very productive and innovative way. If you're not familiar, it's definitely worth checking out the ideas developed there. It's a non-academic organization, by the way, which makes your prediction about the way in which progress on consciousness will be made quite apt.
It's possible that consciousness is not the best word to accurately describe what it is you're after, and this is why the scientific community is not interested in further study. This is primarily a metaphysical question, and those types of questions profit philosophers and theologians more than scientists. Because if consciousness is nothing more than the reasonable emulation of human behavior based upon mathematical models then AI is already mostly conscious. Still, most of us marvel at AI, while in the same breath we mock the idea that it's conscious. Because deep down, even though we can't define it, we know there is something else that separates the human. Dare I give it the name that would have me burned at the stake in scientific circles -- a soul.
Hi. I know this is an old post, but have you expressed your views on "The Origin of Consciousness in the Breakdown of the Bicameral Mind?" I saw elsewhere that you said it was a classic, like Gödel, Escher, Bach. I'm a lowly adjunct philosophy instructor, but after going through the alternatives, Jaynes and Dennett seemed pretty compelling to me. In the sense that consciousness is a Joycean machine/coherent inner monolog--and can only arise after language.
And so ChatGPT would be conscious only if it has a coherent internal monolog. Right now, it's just a huge, vast mess.
Every time I think about qualia, I can't help but agree with Dennett. If you take qualia super, super seriously, you wind up with some extremely paradoxical, perhaps impossible stuff. Like, sure, it sure seems like there's a purple elephant in my imagination, but there can't really be some weird thingy that's made of figment, right? It can seem that way, but can't really be. No?
Jaynes distinguishes perception from consciousness. Certainly animals perceive. But they don't have an inner monolog. But I don't think it even makes sense to ask whether they "really" feel pain, in the sense of whether they have "qualia." How could that question ever be answered? I would say they feel pain because we share very similar neural structures. And I would say ChatGPT feels pain if we could find analogous structures.
I'm sure you disagree with all/most of this. That's why I read your blog. But I can tell you're a lot smarter than me :) I have an undergrad in neuro, but you're an actual scientist :)
So if you have written in a lot more detail about this stuff elsewhere, I'd love to read it :)
As an angry feminist you had me at 'abortion' but I became less angry as I nodded through your piece. I do love your insight, intellect and willingness to ask the hard questions. I think you are correct about the bureaucratic state of play in science and academia. The visionaries you speak of will have to have shed ego in its entirety, be willing to be anonymous and poor and have their breakthroughs recognised posthumously but probably not their personhood? To understand consciousness maybe we have to burn down the house?
What a pipe dream. We had a spiritual view of consciousness for a very long time. It served us well. So now we crave a substitute?
But a scientific view will never surmount the hurdle of self-reference.
Bruhh, no cap, this piece was bussin fr fr. Deadass one of your best…on god 💯 Also, Amy Letter, the only things that are illusions are the notion that we know what we’re talking about when we say things like “consciousness is an illusion,” and the belief that because Dennett argued in Quining Qualia and Consciousness Explained and in that Trends paper with Cohen that concepts like qualia are difficult and problematic and phenomenal consciousness is hard to study scientifically, his arguments actually show phenomenal consciousness is not real and is some kind of “illusion.” That’s the real illusion.
I'm unsure why you say that LaMDA, or any current AI for that matter, performs well on the Turing test. As far as I'm aware, LaMDA has never even taken a Turing test.
The Turing test is a very specific test that requires following a strict set of procedures and yields a statistical result. It is definitely NOT "I chatted with the AI and it seemed human to me!" That's just idiotic.
The Turing test procedure is:
1. Have a text conversation between an AI and a human with both attempting to sound human.
2. Anonymize the logs so it's just 'chatter A' and 'chatter B'
3. Have a third party read the logs and pick who they think is the AI.
4. Repeat 1-3 a number of times with different humans.
5. If the third parties are no better at picking out the AI than a random guess, then the AI is human-like.
This is pretty straightforward, and definitely not something that anybody at Google claims to have put LaMDA through. Lemoine's chat logs DEFINITELY don't follow Turing test procedure. So isn't "throwing out the Turing test" because some people chatted with an AI and thought it seemed human, when it probably isn't, just absurd? Can we put LaMDA through an ACTUAL Turing test and see the result?
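The five-step procedure above can be sketched as a small simulation. This is a hypothetical illustration, not any real benchmark: `run_turing_test`, the toy transcripts, and the crude two-sigma bound standing in for a proper statistical test are all my own inventions for demonstration.

```python
import random

def run_turing_test(judge, ai_text, human_text, n_trials=200, seed=0):
    """Steps 1-5: repeatedly anonymize an (AI, human) transcript pair,
    let a judge name the suspected AI, and compare the judge's hit
    rate against chance (50%)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Steps 1-2: assign the two transcripts to anonymous labels
        # 'A' and 'B' in a random order.
        labels = ["A", "B"]
        rng.shuffle(labels)
        ai_label, human_label = labels
        logs = {ai_label: ai_text, human_label: human_text}
        # Step 3: the judge reads both logs and names the suspected AI.
        if judge(logs) == ai_label:
            correct += 1
    # Steps 4-5: a judge no better than chance means the AI passes.
    # A crude two-sigma binomial bound stands in for a formal test.
    p_hat = correct / n_trials
    sigma = (0.25 / n_trials) ** 0.5  # sd of a fair-coin proportion
    passes = abs(p_hat - 0.5) <= 2 * sigma
    return p_hat, passes

def sharp_judge(logs):
    # This toy judge spots the AI by an obvious verbal tell.
    return next(k for k, v in logs.items() if "beep" in v)

rate, ok = run_turing_test(sharp_judge, "beep boop", "hello there")
# The judge catches the AI every trial, so this AI flunks the test.
```

The point of the sketch is that the verdict is statistical, over many anonymized trials with independent judges, which is exactly what "I chatted with it and it seemed human" is not.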
Imagine someone was trying to measure the air quality in their house. They picked up their dryer sheet, ran it through the air once or twice, and saw that it didn't look any dirtier. From this they conclude that air quality meters are outdated and we need to find a different method to test air quality. What??
Here's how I teach consciousness. I impress upon the person that attention, intention, and imagination are little-taught tools, which seems odd given the power behind them if you learn and practice how to wield them. Then I invite them to take three steps back, from the prefrontal cortex to the lower visual cortex, and imagine that there's a door there; open it and walk through into consciousness. Let the consciousness permeate your fascia, the interstitial spaces down to your feet. Can you feel that? Everyone I've taught it to has felt it. It's a start.