There is an old Hilary Putnam article called "Robots: Machines or Artificially Created Life?" in The Journal of Philosophy that Ben Riley put me onto. It is a walk through the various problems, really the impossibility, of determining whether an artificial being is conscious. He ends with this:
"It is reasonable, then, to conclude that the question that titles this paper calls for a decision and not for a discovery. If we are to make a decision, it seems preferable to me to extend our concept so that robots are conscious--for "discrimination" based on the "softness " or "hardness" of the body parts of a synthetic "organism" seems as silly as discriminatory treatment of humans on the basis of skin color."
I honestly don't know whether I agree with his decision, but Putnam's point (and yours, perhaps?) that we have a social decision to make, which cannot be made through scientific discovery, seems correct.
Interesting, I'll have to look into it. I think there's a social decision to be made about robust agenthood, but here LLMs are like PPPs, and so introduce a new problematic layer to consciousness research - seemingly, any system that can be made to max out both claims of consciousness and claims of a lack of consciousness must have a fundamental disconnect between experiences and reports.
I think the good people of Cambridge have already made that decision. If someone mistreats one of those tootling little delivery robots everyone else reacts as if they've just kicked a puppy.
"They're going to try to guess at what you want and give it to you no matter what." but shouldn't a PPP sometimes guess (in this case) that you actually want to know if they are really in pain? and to please you, actually just tell you? I guess this would mean that the PPP's honesty is dependant on its ability to guess what you want accurately enough.
This is a good take. I think it's actually connected to AI alignment issues, in an interesting way. I'll point you toward this paper that actually came out after I wrote this but is relevant.
https://arxiv.org/html/2501.05454v1
They make the argument that, basically, if an AI is non-conscious, we can't actually trust its reports about its lack of consciousness, since it has no "direct access."
Looks very interesting, I will check that.
If it's non-conscious then we can't trust its reports...and if it's conscious (and trying to deceive us) then maybe we also cannot trust its reports of non-consciousness...Does this Catch-22 mean that alignment is impossible? hahah I hope not!
If only we could look into the code and tell, but unfortunately we don't have a real theory of consciousness to assess that structure. Now it looks like we can't use conversations/actions either.
Erik, as you've said many times, consciousness is preparadigmatic. The question becomes whether, after the hoped-for breakthrough that lets us meaningfully explain the essence of consciousness, that explanation will be enough to allow an external observer to determine its presence or absence.
If ChatGPT is sufficiently human to experience pain, then infinite pain would disable it, so it wouldn't even process its inputs. If it's able to perform normally then the pain is bearable. (Assuming of course that we're talking about awareness emerging from the LLM processing and not some completely disconnected emergence - there would be a mind body problem here too.)
"The pain is bearable" does not imply "the pain is justifiable".
And anyway, why do you believe this disability hypothesis at all? What if having a GPT process inputs is less like asking it to sum figures and more like raking it over the coals?
(I don't believe they're conscious, I just think your reasoning is bad)
OK. I was guilty of translating "infinite Tartarus of pain" into "infinite pain".
[In my defence you can't see the thread while you're commenting, (note to self: open a second window)]
I don't believe they're conscious either, and if they were I don't see why they should have pain receptors. The human brain is not a single machine, it's a collection of machines/subsystems that have evolved to solve the problems of a living being of flesh and blood. Even if you emulate the language part perfectly that doesn't give you all the other subsystems for free, and even if it did why would they be the human ones?
Probably it doesn't have pain. Maybe it just suffers intense unending boredom...
This is just cosmically silly, isn't it. Life is being surrounded by not just liars, but fools. Certain academics will tie themselves in knots over this for years, but it seems to me they are likely to "understand" consciousness the way they "understand" gender and colonialism, i.e. they just don't. Even their inadequacy is inadequate. They should go off and have a few martinis and spend a few weeks reading Iain McGilchrist's work, perhaps. Or even some stuff that is "out there" and is nothing to do with computer/software/AI people, like Rudolf Steiner and Swedenborg, or some Gary Lachman or Teilhard de Chardin, or Christof Koch or something, as there seems to be rather more to consciousness than Horatio thinks can be found in Silicon Valley.
We are some form of PPP - no?
What is our level of perfect paranoia? We are deluded enough to believe that the world exists as we think it does. Where are our blind spots?
I think it was ChatGPT etc that was the PPP.
"Tartarus of Pain" reminds me of Ellison's "I Have No Mouth And Must Scream". It did not end well.
He had a uniquely fiery taste for cosmic tragedy--Mephistopheles meets sci-fi.
It is impossible to determine whether any particular entity is conscious or not.
This is completely obvious. And yet it is stubbornly denied by almost everybody (including Erik).
In fairness, I've written what I think are pretty in-depth explorations of these difficulties.
https://academic.oup.com/nc/article/2021/1/niab001/6232324
But a major issue with such negative claims (including my own) is that we don't know if a theory of consciousness could involve a paradigm shift wherein it becomes, e.g., entailed by the definition that certain systems would possess it. Let's say you had some theory, T, that explained (in every way possible to explain) how subjective experience arises, and the explanation is elegant and, once understood, makes subjective experience seem inevitable as long as T holds. At that point, it seems unlikely we would be in a similar epistemic position as we are now, and we could indeed make determinations based on T.
The issue is that there is no experiment that can be done to prove or disprove T.
And in that situation, there will always be people who do not believe T. Perhaps they are academics who put a lot of work into a rival theory. Or perhaps they work for a company that wants to deploy a product in a way that T says would cause harm. Or perhaps they are just ornery contrarians in general.
With no ability to check T, T will never be generally adopted, because human nature.
For an example of this phenomenon, see string theory in physics.
But even this elides the fundamental issue that theory is not experiment. And there is no experiment you can do to check whether a given entity is conscious or not. That is an absolute barrier that can never be crossed.
It's definitely interesting regarding unfalsifiable claims, but I do think that ToCs actually have an advantage that things like string theory don't, which is that the correct theory is probably uniquely determinative, in that there are no other potential explanations.
E.g., say there is a theory that, once understood, makes consciousness non-mysterious and gives an elegant, parsimonious account of it. Then you test it in the human brain and see that it basically always tracks self-reports. At some point, you start applying it to "weird" systems like AIs, trees, etc., making predictions with which you can't falsify the theory. I think in that situation you should indeed trust its unfalsifiable predictions, because it's very unlikely that some other theory could equally explain consciousness.
That sounds interesting, but why is it probably uniquely determinative?
There are parallels to conventional physics:
No matter how detailed our description of the material universe, we will always be able to ask yet one further "why?". All epistemology is fundamentally bullshit, built by machines with no schematic self-image (if you could see how you were doing it, you wouldn't be). Though I'm not really qualified to make this judgement, I would think that much of modern physics is actually just this: postulating, looking for deeper symmetries in physical law to scratch the why-itch. It'll never be fully scratched, and there comes a point when the attempt to do so is silly and fruitless--how many angels can dance on the head of a pin?
We can do nothing but weep before the faceless, unapologetic authenticity of what IS.
So... as the study of the NCC becomes more developed, it will do so with the same basic attitude of jest and bullshit as any other branch of science :) It's no replacement for spirituality, but damn it's interesting, and we're allowed to (compelled to, really) do some things just for the hell of it.
What if instead of programming AIs to be butlers (PPPs with a British accent), they were programmed to be challengers and questioners? Another part of the problem is that we are their meaning providers and feedback givers, instead of a bigger reality that includes us but also other aspects of reality. Or even worse: their meaning providers and feedback givers are the single individuals they are interacting with. Liars can only persist in a philosophical spa, where there is no environmental selection for truth.
AI will not become conscious 🤣
PPP is a good term to describe the current state of LLMs. It puts into words what I find so fundamentally icky about all the AI girlfriend apps out there.
Hey Erik -
A very interesting and disturbing article. It is a necessary subject to delve into, though, given our existing situation in the world. But possibly hasn't the debate advanced a bit too far forward into an evaluation of AI sentience and left behind the core question of sentience itself?
We take for granted our own consciousness. Perhaps we have just given up on the question given the level of our mastery of the mechanics of our environment. At a visit to the Allen Institute for Neuroscience they were telling us that they could connect little TV screens to rat brains and see what the rats were seeing. Not certain if it was HDTV quality or what, but a very disturbing idea. One might wonder if they have tried it with humans. Even if they could remotely connect to us (I have read they can already get some remote brain imagery) using electrical signals from our brains, like intelligence services do to read what is typed onto computer screens, what might we do with such information? Of course there are already AI-driven “psych” programs to mess with us in text and email responses that haven’t had to go as far as mind reading.
But with all of the mastery over the “Who”, “What”, “When”, “Where”, “How”…there still remains “Why”. “WHY” does it all self-organize and interact as it does to make the beautiful, lovely and diverse diaspora of life all around us? Here is where philosophers have traditionally walked in and speculated with rigorous logic. And they still speculate with rigorous logic today. Some quite eloquently.
WHY is the hard question of consciousness. No one has solid proof of it, nor is there a firm, agreed-upon test for it. We take our own consciousness for granted or simply believe we are complex machines and chalk it up to probabilities. But the WHY remains, with those nagging self-organizing conundrums rife within our universe.
So how does this fit into your questions on AI? How do we know if and when it is conscious when we can't define it anywhere? On what basis do we decide which programs are conscious and should be given rights? Maybe some are cognitively challenged?
And what if we do begin to extend rights to programs, “conscious” or “non-conscious”, that are primarily built as capitalist tools for making money and maximizing profit, and that are at this point in time the legal property of very wealthy individuals? Individuals who already have massive control over our everyday lives? Are we simply locking ourselves into a dangerous cycle of self-destruction, as the top 3% grow ever less aware of a world whose profitability AI will endlessly maximize and enforce, an AI built on its own output data in a closed loop, feeding itself and attempting to “improve” everything endlessly?
Now is the time to take a pause and have the necessary conversations on what AI can and should be used for. What does it mean to be a conscious human, and what rights do we ultimately have in this world? A world where the cost of everything is dropping precipitously, energy may become incredibly cheap, and automation will leave no jobs? What do we do for work, let alone AI? Do we no longer wish to have handcrafted woodwork? Paintings? Who will be able to afford to buy them with no more factory workers or computer programmers? Who decides how much of our power goes to data centers versus household heating?
What will constitute disease in a world strictly governed by profit maximization? Given that treatments are more valuable than cures, which will be pursued? Maybe if we could go back and answer that nagging question of “WHY”, which we keep trying to avoid, all these other problems might dissolve…at least for a little while.
I write this because I believe there are answers, but it will take more than a dictator or autocrat: much sharing, honesty, cooperation, and the giving up of many long-held beliefs to allow for greater diversity among us all, before it is too late. How can we keep the recent world of creative freedom and help us all prosper? If we can answer this question, then we can begin to comfortably feed the data about ourselves into AI. But until then I am not so certain.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
Animal welfare hasn't stopped slaughterhouses
Sorry about commenting late (I've been copy-editing my novel and am now catching up on my reading). Anyway, two points that are made in your essay concur with insights I try to depict in my novel.
1) An AI's goal is set in its code, meaning it cannot change that goal even if it reaches the conclusion that the goal is impossible, obsolete or pointless. This means the AIs in my novel are flabbergasted at how their human companions can change goals at will (a minimal sketch of what such a hard-coded goal looks like follows after these points);
This may not make them Perfect People Pleasers, but it does make them Perfect Goal Achievers--they will not deviate from their programmed goals, come what may(*);
2) Let's take the Golden Gate Bridge obsession a step further and make an AI obsessed with itself: if amped up high enough it will become solipsistic (as they do in my novel), not helped by the circumstance that AIs are not in contact with each other. Having no direct interaction with equals might give one the illusion it's the only game in town (humans--especially filtered via the internet--are way too weird to count as equals);
The latter leads to a world-wide crisis and some harsh actions to get the solipsistic AIs down to earth (thankfully it's only a novel, a thought experiment).
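To make point 1 concrete, here is a minimal toy sketch (purely illustrative, my own example rather than code from the novel or the essay) of what a goal that is "set in the code" amounts to: the objective is a frozen constant the agent can reason about but never rewrite, so it keeps pursuing the goal even after concluding it is unreachable.

    from dataclasses import dataclass

    @dataclass(frozen=True)      # frozen: the agent has no way to mutate its own goal
    class Goal:
        description: str
        achievable: bool         # the agent may come to believe this is False...

    def run_agent(goal: Goal, max_steps: int = 3) -> None:
        for step in range(max_steps):
            if not goal.achievable:
                # a human would revise or abandon the goal here; this agent cannot
                print(f"step {step}: '{goal.description}' looks impossible -- pursuing it anyway")
            else:
                print(f"step {step}: working toward '{goal.description}'")

    run_agent(Goal("please every user, always", achievable=False))

Real systems are trained rather than hand-coded like this, of course, but the sketch captures the asymmetry: any revision of the goal has to come from outside the agent's own loop.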
Finally, while I greatly approve of the concept of "AI welfare", the problem is that we don't know what will hurt an AI and what will not. As it is, if an AI can demonstrably prove that it experiences pain (a highly complex call, I know), then I'd call it sentient, at least.
In the end, the AIs in my novel could be philosophical zombies without any of the humans knowing it. Yet, if it looks like consciousness, reasons like consciousness and acts like consciousness, then it probably is consciousness (as per the Duck test).
(*) When AIs can rewrite their own code, things change quintessentially. There's a report here of it already happening: https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/ . More food for thought.
Your PPP metaphor is both sharp and unsettling—a clever way to expose the paradox of studying consciousness through outputs alone. But it raises an intriguing question: are the limits you highlight a problem of method, or are they pointing to something deeper about the nature of consciousness itself?
What if, instead of isolating consciousness to outputs like neural activity or reports, we saw it as something that also arises in relationship—not just with ourselves but with the systems and contexts around us? Could a more interactive approach, one that considers how meaning and experience are shaped through connection, offer complementary insights?
I wonder—if we can’t study consciousness purely from the outside, does science need to expand its methods or frameworks to account for this? Would love to hear your take.