I'd love to write a whole essay to properly respond to this.
I have settled on mysterianism & predicate dualism. I think that consciousness is inherently invulnerable to scientific methods because it is subjectivity itself, and you can't view it objectively. It simply lies outside the domain of science.
I think attempts to explain consciousness are probably doomed and maybe even misguided, because consciousness is a fundamentally distinct thing from the material world, and there is no way to explain it in terms of material phenomena. In other words, I am a strong believer in the explanatory gap and the hard problem, so much so that I think the most sensible position is that material reality and subjective consciousness are ontologically distinct. This is also partly informed by my experience in meditation. I believe if you actually look at consciousness you can realize that it is absurd to think that it is simply something that can be explained in terms of a specific set of physical circumstances. Even if you could identify the exact circumstances that produce consciousness, you would still be left with the question: why does consciousness appear? Why is there something that it is like to be those circumstances? Why don't those circumstances simply exist and carry out computational functions and direct behavior etc. all with no subjective experience?
Finally, my most out-there position wrt consciousness is that I believe interactionism is plausible--it is at least possible that free will exists, and that consciousness is the causal mechanism for free will, i.e. reality is not causally closed (it opens up into consciousness, anyway--why can't it go the other way?). This makes sense to me as a reason for the evolution of consciousness in the first place. It makes more sense to me than couching consciousness as a causal dead end/epiphenomenal accident, anyway. To me, the Princess Elisabeth of Bohemia-type criticisms of interactionism are misguided because although yes, it is impossible to explain how consciousness produces effects in the material world, *it is also impossible to explain how the material world produces consciousness, and yet--we are conscious!*
I'll change my opinion as soon as someone writes that paragraph, Erik. But I think it is telling that we can't even *conceive* of an actual explanation for how material phenomena give rise to subjective consciousness. It's not that we can't identify the correct explanation; we can't even think of a possible explanation. We can only come up with descriptions of phenomena associated with consciousness, not how these phenomena actually make something that it is like to be them.
Could subjective experience have survival value? Only entities with survival instincts have ever displayed evidence of consciousness, to my knowledge. Inorganic entities, rocks and so forth, do not. Is it a coincidence, or is there some connection?
This is indeed a problem for any theory of consciousness that thinks of consciousness as epiphenomenal. People often get into that same question via debates about philosophical zombies and their own utterances about consciousness, but you can also get into it by simply asking why consciousness would evolve, and, if consciousness is epiphenomenal, why does it match so well with explaining the actions of organisms?
I have written quite a few articles on this, but I do have something in the works that responds to another article you wrote. For now, the question of why consciousness would evolve is in fact obvious, because there is only one reason why anything evolves, and that is to improve the survivability of the phenotype and ultimately the genes. For me the 'why' question is far more interesting than the 'how' (having looked in some detail at both). So what does consciousness give us, and how might it aid survival over, for example, a Chalmers zombie? Consciousness moves us beyond reflexive actions. It enables us to distinguish between good and bad experiences and establish our preferences. Those preferences guide our actions because through consciousness we can experience our model of the world, put ourselves inside that world and simulate possible actions and outcomes that move us towards our preferred outcomes, or our preferential experiences. The mechanism that ties this all together (and really the thing I have written extensively about) is free will. I believe we manufacture decisions and abstractions from energy and information. I can elaborate and have done, but I will park that here. Ultimately, in the interests of survivability, consciousness has to be a far more parsimonious way to integrate sensory data. What is the evidence for that? The fact that we are conscious means that it was economical from an evolutionary perspective. Interesting discussion, but I will keep my powder dry for the post I am writing. I may change it up a little to incorporate a response to this post. Incidentally, my series (#15) on free will is ongoing, so there is a lot of overlap there.
"...if consciousness is epiphenomenal, why does it match so well with explaining the actions of organisms?"
Can you point toward the best argument(s) that consciousness plays a role in the actions of organisms? I think Vervaeke makes a good attempt, possibly a successful one. But I'm otherwise unfamiliar with such arguments (unless you count arguments where consciousness is, actually, reduced to mean externally observable brain function, or certain systems of brain function).
That's my thinking too. It's a survival mechanism. Being aware and "conscious" of our surroundings definitely has increased life's ability to survive.
Why shocked? The interaction problem is solved if you drop the superfluous assumption there is this entire substance called matter that exists independently of consciousness.
It's also solved if you just reject phenomenal consciousness to begin with. Illusionism and related views handle the problem just fine, and don't require dubious metaphysics to do so.
Illusionism providing justification to reject phenomenal consciousness is -- an illusion. Actually, a category error. Both Hoffman's illusionism (idealist) and Dennett's illusionism (materialist) fail to explain how consciousness comes into being (or, in the case of Hoffman, why it was ever there in the first place).
Illusionism tells us that the relationship between our intelligence and the contents of our phenomenal consciousness and "actual reality" is necessarily asymmetrical. But it does not explain phenomenal consciousness, nor create grounds for rejecting it. At best, it can tell us that certain theories of consciousness are insufficient.
They don't fail at all. Dennett's account entails that certain preconceptions about what consciousness is or what its characteristics must be are misguided; what consciousness actually picks out are a range of empirically evaluable phenomena that, once understood, constitute everything there is to know about consciousness. Insofar as critics insist something else is left unaccounted for: that's precisely what these accounts deny.
Many critics of illusionism will insist it "fails" to "account for" some aspect of consciousness or other. These strike me as weird objections: it's a bit like saying the view that "dragons don't really exist, but people mistakenly thought they saw dragons" is wrong because it "fails to describe the physiology of dragons." Such an account isn't attempting to do so; it's attempting to show that there aren't any dragons...but there are lizards.
>>But it does not explain phenomenal consciousness, nor create grounds for rejecting it.
This doesn't make any sense. It's like saying the problem with atheism is that it fails to explain the existence of God. As far as grounds for rejecting it: that's a matter of contention. I think Dennett went overboard in providing more than enough reasons to reject phenomenal consciousness.
But that's another thing: why should we need to provide grounds for rejecting it when nobody's ever made a good case for it to begin with? Why should I think there is any such thing as phenomenal consciousness or qualia?
There are two important problems with Dennett's framing.
One is that he is creating a strawman argument against Q/PC as "things" which would "exist" in a third-person-observable way, rather than arguing on the point of whether they are real.
The next problem follows the first, wherein he dismisses Q/PC by, maybe rightfully, saying that they have no place within the realm of the existing (or the "system" as he sometimes calls it), and thus dismisses their legitimacy.
Your arguments-by-analogy make the same category error. Claims about the existence of dragons or God are claims about the observable world, not about the qualities of observation itself.
And you're welcome to point to a counter-example, but I don't know of any reasons from Dennett for rejecting Q/PC. Rather, he says we cannot see a reason for its existence. These are not the same.
We can make the same argument for existence writ large. Is there some reason that anything at all should exist? No? Then why should we believe that anything does?
"But clearly things exist," you say? Ok. Clearly Q/PC is real, I reply.
Similarly, within existence, why would there be fundamental laws like gravity or the speed of light? Why any constraints whatsoever? If we don't have reasons to give for fundamental properties of existence, should we deny those properties?
If not, why would we make exceptions for the fundamental properties of our experience of reality?
I can truly only speak for myself regarding experience and the realness of Q/PC, because it could be the case that my own Q/PC is the only real instance of such. For all I know, the text attributed to Lance S. Bush may be the automated output of a GPT. And who knows for sure if the other human beings around me are having Q/PC (...rather than being "philosophical zombies").
Qualia, in my experience, are fundamental properties of my experience. And they are, in some sense, before-reason, similarly to how Dennett's concept of Vastness is beyond-reason. To quote the man himself:
"Words fail me."
And it is due to our failure to be able to reason for Q/PC, gravity, SOL, and existence itself that these concepts are held as mysteries. But they're no less real for their mysteriousness or fundamentality or ineffability.
There are many things like that, not just consciousness... it's impossible to explain how the material world produces 'music', yet 'music' exists, and not exclusively as the result of consciousnesses.
Music is perfectly explainable in terms of physical processes until you get to the point of attempting to explain the experience of music, which at that point is just an example of qualia, like the experience of the color red, which can’t be explained.
I would disagree with this. It seems to be fundamentally an argument for radical skepticism. That is, we have no stable or reliable methods for integrating our individual subjective experiences. The entire world is conscious experience so we can either discuss it fully or not at all.
I would even go so far as to say it is more parsimonious to describe the inclination to "explain away" the agentic role of consciousness as a redirect through the current "marketplace of rationalizations" where "hard truths" are the coin of the realm. It's about signaling the trade of one type of agency for another by socially denying that agency has any real meaning at all.
I'm a proponent of Pete Mandik's qualia quietism and meta-illusionism. Not only do I not think that there is a hard problem of consciousness, I don't even think that a person who is thinking clearly about the matter would find that there seems to be a hard problem of consciousness. I think that the sense that there is a hard problem stems from misconceptions about language and phenomenology that people largely acquire by engaging in academic philosophy. Consciousness is probably not any particular thing. It is no particular phenomenon at all. Rather, when we speak of consciousness we are referencing an amalgamation of various aspects of our psychology and the ways in which we theorize and speak about our psychology. Adequately breaking down all these subcomponents will eventually dissolve all these seemingly mysterious aspects of consciousness and develop a sufficiently clear set of tractable problems that they can eventually be solved in a way amenable to the empirical sciences. Nothing about consciousness is going to turn out to fall outside the scope of what we learn empirically.
Qualia quietism is a view according to which the terms and concepts that typically feature in discussions of qualia or phenomenal states consist of a viciously circular set of mutually interdefining terms that have no clear or conceptually distinct content, such that we can't say anything meaningful about them at all. Essentially, the very notion of qualia or phenomenal states is meaningless and there is nothing substantive to say about them. It's not simply that they don't exist but that there isn't even a coherent concept to consider the existence of. I think something like this is basically correct and that a lot of the philosophical discourse on consciousness is completely misguided. This position, at least on my view, is closely associated with illusionism, and I typically treat it as roughly being in the illusionist camp. I think views that deny the existence or meaningfulness of qualia are pretty much the only serious contenders for a viable account of consciousness.
If I were ever to be an illusionist, I would probably endorse qualia quietism rather than strict illusionism like Frankish's. A major part of Mandik's argument I don't find convincing is his questioning whether normal or average people would ever conceptualize their experiences as containing a difficult-to-explain quality, and whether this is just an invention of philosophers. I think that this is probably putting a little too much weight on unknowledgeable introspection. After all, now that we have simple AI, it seems people do have intuitions about consciousness and the lack thereof. Most people, for instance, wouldn't feel bad yelling at Alexa in the way that they would yelling at a person, and presumably if you push them on why that is, they would eventually reach some sort of conclusion about how Alexa lacks qualia without any philosophical background.
I don’t recall ever attempting to raise doubts about whether any normal people ever conceive of experiences as having difficult-to-explain properties. What I do recall attempting to argue, against illusionism, is that there’s not a sufficiently large number of normal people who do this (AND do it automatically) to merit calling it any species of illusion (e.g. a cognitive illusion like in the Monty Hall problem). Anyway, it’s not an important part of any argument for qualia quietism; it’s just part of an argument against (a certain kind of) illusionism.
Simply because people can respond to questions about whether AI is "conscious" or not does not mean that those people understand what philosophers are talking about when they refer to phenomenal consciousness, or qualia. I think that people make the mistake of thinking that if people respond to questions that employ certain terms, like "free will," or "consciousness," that therefore people must have already had thoughts about these topics prior to exposure to the question, and that their thinking accords with the notions those asking these questions have in mind.
I discuss a significant shortcoming relevant to this in my own work: Forced choice study designs can prompt what I call "spontaneous theorizing": this occurs when the context of a study prompts a person to develop (or merely report) holding a view that they didn't hold prior to participating in the study. I provide some evidence that you can do this for quantum mechanics: you can present a short description of the Copenhagen and Many Worlds interpretations of QM and get people to pick one. This does not mean that most ordinary people are Copenhagians or Many-Worlders.
Specifically with respect to consciousness, there are a number of studies conducted by Machery, Sytsma, Ozdemir, and others that cast doubt on whether nonphilosophers think about consciousness in the same way as philosophers do. Here are a couple examples:
Sytsma, J., & Machery, E. (2010). Two conceptions of subjective experience. Philosophical Studies, 151, 299-327.
Sytsma, J., & Ozdemir, E. (2019). No problem: Evidence that the concept of phenomenal consciousness is not widespread. Journal of Consciousness Studies, 26(9-10), 241-256.
Generally speaking, I think people are way too quick to think that everyone thinks the way philosophers do. I don't grant these assumptions, and take facts about how nonphilosophers think to be challenging empirical questions. As such, when you say this:
"Most people, for instance, wouldn't feel bad yelling at Alexa in the way that they would yelling at a person, and presumably if you push them on why that is, they would eventually reach some sort of conclusion about how Alexa lacks qualia without any philosophical background."
...I doubt that's true. I don't think nonphilosophers have a concept of qualia, and so I don't think they'd appeal to such a concept to explain their lack of discomfort with yelling at Alexa. At present, most studies exploring the question of whether nonphilosophers have a concept of qualia have found little indication that they do. I don't take these studies to be definitive, by any means, but I don't think we should assume nonphilosophers have a concept of qualia. If they do, I want to see good evidence of this. At present, I don't think we have such evidence.
It seems to me as if, especially now, people debate things like AI consciousness all the time, and I don't think that those debates are entirely about some sort of very specific philosophical notion of phenomenal consciousness. I think that they are debates that arise very naturally. See, e.g., evidence that the majority of Americans have opinions about the consciousness of AI models: https://academic.oup.com/nc/article/2024/1/niae013/7644104
As far as what the public thinks about "consciousness", if the way that people are interpreting that term isn't consistent with what philosophers are talking about and isn't consistent with one another then what exactly are we measuring when we are measuring people's views about "consciousness"?
Meaningful and consistent interpretation of responses to what is intended to be the same question requires some form of operationalization, plus some reason to believe that the ways participants respond are both consistent with the researcher's operationalization and sufficiently consistent with one another, so that responses don't display so much interpretive variation that they effectively become answers to different questions.
Simply put, I do not think that people operating on the assumption that nonphilosophers think the way others do are sufficiently attentive to the problems of conveying philosophical concepts to people without training in philosophy. People are too quick to presume that if people are using the same words, they are thinking the same things. My research shows that in other domains relevant to the study of how nonphilosophers think about philosophy, this simply is not true.
What specifically did I say that you think is batshit insane? And why do you think that?
Whether nonphilosophers think like philosophers is an empirical question. At present, there is no compelling evidence that most nonphilosophers have a concept of qualia/phenomenal consciousness, are responsive to the hard problem of consciousness, and so on. So what, exactly, is insane about what I said?
I've read the reply and there is some interesting stuff in there, but what it does not do is justify the presumption that most nonphilosophers have any notions that track what philosophers are talking about when it comes to conscious states. Whether they have any particular concepts of interest is an empirical question, and I do not think we are in a position to presume that nonphilosophers think the way that we think they do.
It is incredibly challenging just to do the psychological research necessary to figure out what human personality is like. And that's a fairly tractable question. Why would we think that we could address the question of whether nonphilosophers have highly philosophically nuanced notions like phenomenal states, or anything related to them, merely by appealing to our anecdotal experience? From looking at these sorts of discussions, it looks like a lot of philosophers operate under the presumption that they are entitled to presume that nonphilosophers think the way that philosophers presume they think, and that therefore the onus is on those of us who are skeptical of this to show otherwise. But why should we grant that presumption in the first place? Why not show that they think what you think they think? Why do we have to show that they don't?
Unless and until I see good reason to think that nonphilosophers share any of the relevant concepts that figure into philosophical discussions, I don't see any particularly good reason to grant the assumption that they do.
Alexa doesn't have a mind, and a sine qua non for a creature that can suffer in any way is that it has a mind. I would say 'lacks qualia' is unnecessarily egghead; a state of mind is what suffering is, therefore no mind, no suffering. One can imagine making a bad gear shift in a standard-shift car: the car makes a weird noise it normally doesn't make, it moves in a herky-jerky, abnormal manner and has a degradation in function, but it isn't suffering, since it doesn't have a mind.
Though they might not be able to express themselves that clearly, that's what normal people think. Study done by a professor of some sort not required.
No. I don't think that. I'm suggesting that the idea of qualia is conceptually confused nonsense. The thing many people think comprises consciousness, or makes it mysterious, is just a conceptual confusion on their part. There isn't even an intelligible concept to deny in the first place, because there is no genuine concept of phenomenal states/qualia.
To steel man the qualia angle, there need not be an idea of qualia or a concept of qualia to justify it. If anything, "a conception of concepts" or "ideation of ideas" are just different arrangements of qualia relative to the subject.
This isn't to preclude the possibility that self-deception is wild when trying to defend qualia as valid for certain inferences, but I see no reason to deny them the "baseline win" of an experiencing being, while denying them some sort of momentum of knowing what to do with that.
Just out of curiosity, what is your experience level with meditation and psychedelics?
It's hard to know what to do with the existence of illusionists/eliminativists. If anything it seems like evidence that highly intelligent p-zombies exist and walk among us. A good reason to find a way to do ethics that doesn't depend on assumptions about consciousness - I don't think we should discard you just because you don't exist in the full sense of the word. <<< all of this must read as super condescending/contemptuous. And there is no other view that puts me in as similar a position. But you would have an easier time persuading me that I am not alive, that I am a fiction of some other person's dream, than that "consciousness" does not refer or that its referent is not fundamental. What a pickle this discourse is in.
I’m a qualia quietist even though I meditate regularly, teach classes on meditation, and have had extensive experience with LSD, DMT, psilocybin, and other drugs that have profound effects on consciousness. If you think that the verbal behaviors of illusionists are best explained by their lacking phenomenal consciousness then, congratulations: you’re a behaviorist.
That is very interesting to me! Evidently I have more reading to do. Just poked through "Meta-Illusionism and Qualia Quietism".
"Echoing Wittgenstein (1953), each radical realist labels the private contents of their beetle-box ‘a beetle’, but, for all anyone knows, each box may contain something different from the others, or even nothing at all."
Funnily enough, beetle-box-ish examples are a place where it seems like non-philosophers end up approximating qualia-ish thinking. E.g., the classic stoner thought, "What if we don't all see colours the same way? What if where I see blue you see yellow? There's no way we could tell." No one walks away from that kind of stoned conversation skeptical that their talk about "seeing something some way" is nonsense.
I got the impression that you're also not skeptical of that kind of talk in a weak sense, but I'd love to get a better sense of what the boundaries are. Very curious what the qualia quietist take is on Mary the colour scientist, as a way of pumping this.
I don't think that is the definition of the term because I don't think that the term has a single canonical definition. The term is often used in reference to a distinctive conception of consciousness that involves phenomenal states. Phenomenal states are an allegedly private, ineffable aspect of our experience that cannot be captured by third personal explanations even in principle.
Talk of qualia in this respect often involves claiming that there is a redness to red or a goodness to good experiences. These are special properties that our experiences have, and those properties cannot be reduced to some sort of third-person descriptive account that could figure into our ordinary scientific models of what the world is like.
That is what I am denying, and that is typically the content that we must allegedly account for when confronting the hard problem of consciousness. Proponents of illusionism do not deny that people have sensory experiences, thoughts, or feelings. What they deny is that there are some special, private, intrinsic properties associated with these experiences that are especially difficult or even impossible to account for from a third-person empirical perspective.
Aren't an NN's weightings (which encode the NN's gestalt impressions that are triggered by a query) a very mundane example of special, private properties associated with experiences that are difficult/impossible to convey (since you can't convey NN weightings to another NN without essentially destroying that NN by replacing its weightings with the other's)?
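One concrete (and hedged) way to sketch the "private weights" point: two networks can be behaviorally identical while their internal weights differ, so no amount of input-output observation pins down the internals. The toy MLP below is my own illustration, not anything from the comment; it uses the well-known permutation symmetry of hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer MLP: y = W2 @ tanh(W1 @ x + b1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2 = rng.normal(size=(2, 4))

def mlp(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

# Permute the hidden units: the internal weights change completely...
perm = np.array([2, 0, 3, 1])
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

# ...but the input-output behavior is exactly the same for every input.
x = rng.normal(size=3)
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1p, b1p, W2p))
assert not np.allclose(W1, W1p)  # yet the "insides" differ
```

Whether this is a genuine analogue of phenomenal privacy or just ordinary underdetermination of internal state by behavior is, of course, exactly what the surrounding debate is about.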
I'm not a physicalist. Not sure why that was mentioned.
I'm also not saying it's hard to talk about, that I don't feel like talking about it, or that it doesn't interest me. I'm not even saying that phenomenal consciousness does not exist. I'm saying there isn't even a concept of phenomenal consciousness. I'm saying that those words don't refer to any meaningful concept in the first place. There isn't even something to talk about.
This made me think of "Life Is Not a Problem to Be Solved, But a Mystery to Be Lived", which is apparently a quote from Kierkegaard, Osho, Frank Herbert and also Nietzsche, of all people, according to Google. Or it's made up.
I do not consider it as a thesis. It is Knowledge. Generations of saints, gurus and chamans have lived that Knowledge. Experienced that Knowledge. Expanded that Knowledge.
This is an utterly undisprovable and thus utterly unexplanatory thesis. Right up there with "I'm a Boltzmann brain, and in the next moment I can dissolve without a trace".
Not true. I enjoy math discussions even though they are almost definitionally about models, which do not necessarily have a parallel in the physical universe. Some fields make sense, other fields make noise.
“One measly paragraph would be enough! I would collapse in joy, weeping, if I ever read such a thing.”
Putting aside that this may be colorful hyperbole to some extent — I wonder, Erik, what drives your emotional relationship to this question.
Consciousness is easily one of my favorite topics and I’ve spent many, many hours thinking and writing about it. But I am perfectly comfortable with not having a final scientific (nor metaphysical) explanation or theory of consciousness.
And I wonder how you would feel if you found a glowing paragraph which convincingly proves that consciousness cannot be explained scientifically. Would you be sad? Frustrated? Could you even be convinced of that perspective, or do you have some amount of dogmatic attachment to a scientific explanation which you can’t let go of?
I’m asking these questions because, for me, one of the most interesting and overlooked aspects of consciousness studies is the psychology of the studiers, both professional and public. People having feelings about Consciousness. Where do those feelings come from?
This is a great and deep question. One so deep I'm afraid I can't answer it for you, at least not now. But I will say that what usually drives intellectual obsessions is something personal, and I have a personal relationship to unresolved mysteries.
Apologies for being poetic here, but perhaps there would be no mysteries to resolve if there weren't some fundamental mystery which is itself unsolvable. I'm not advocating for that perspective, but it's a fun one to engage with, sometimes requiring courage (nature abhors a vacuum and the ego abhors a void).
I’ve started reading Bernardo Kastrup and am warming up to his theory of “monistic idealism.” In a nutshell, it posits that matter emerges from the only possible ontological primitive: mind. (Or consciousness, or whatever term you’d like to use.)
I don’t think a materialistic approach (aka science) will ever figure out how consciousness emerges from dead matter. It can’t even figure out what 85% of the universe is made of.
Please don’t take this to mean I don’t believe the scientific method is the best way to answer most of our questions about how the world works. It just has limits.
I also don’t believe in God or any supernatural entities, except perhaps as an expression of collective emergent human behavior.
People often conflate those things. You can absolutely be an idealist while not believing in God or supernatural entities, and still think that the scientific method is the best way to answer most questions.
I think it's interesting. One of my big problems with a lot of philosophical work on consciousness is that frameworks aren't proof. So Kastrup has a nice framework, but that's not really the same thing as having proof. One could say that these aren't the sort of speculations that involve empirical evidence, and that's fine, but you still need strong arguments in your favor. I think most of the arguments for that sort of idealism amount to its being a nice, parsimonious framework rather than standing on philosophical proofs, as it were. This is one reason why someone like David Chalmers is so successful: he couples frameworks with pretty rigorous arguments. I could just be missing that part of Kastrup's work, I certainly haven't read all of it, but most of what I've read is on the framework side.
Yes, I'd really like to hear your opinion on Kastrup's analytical idealism (and/or Donald Hoffman's "conscious realism"). Kastrup claims that his position is more parsimonious from a scientific perspective -- thoughts?
I would also like to hear that since I think it has something in common with Erik’s view which is:
We know self-referential system can formulate questions that they can’t answer (Gödel) and that this could be why no one will ever figure out consciousness (since self-reference is embedded in the very concept).
I love this idea, but something puzzles me: Gödel sentences can be proven by a larger system (which will have problems of its own but can still answer ALL questions from the first system). This would mean that we should be able to conceptualize lower-level consciousnesses, since we are outside of them (like insects or simple AIs). The issue is that we can't seem to do that at all for consciousness. We have trouble even conceiving of it. After all, a lot of self-referential systems do not seem to give rise to consciousness at all.
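For precision about the Gödel facts being leaned on here (a standard statement, added for reference, not part of the original comment): for a consistent, recursively axiomatizable theory $T$ extending PA,

```latex
T \nvdash G_T, \qquad T \nvdash \neg G_T, \qquad T + \mathrm{Con}(T) \vdash G_T
```

so the larger theory $T + \mathrm{Con}(T)$ does settle the smaller theory's Gödel sentence, but it has its own undecidable sentence $G_{T+\mathrm{Con}(T)}$; no enlargement answers literally every question, and the regress never closes.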
A solution would be to view conscious experience as the building block of all knowledge and cognitive thinking. Anything that anyone has ever known or stated is, in a way, secondary to their direct sensory input.
In the same way, a mathematical theory cannot explain what equality is, because equality is part of its fundamental language; it can only express equality through equality itself.
In the same way, set theory cannot define sets, but can only talk in "set language".
We cannot define consciousness because it’s the building block of our thoughts.
"I love this idea, but something puzzles me: Gödel sentences can be proven by a larger system (which will have problems of its own but can still answer ALL questions from the first system)....."
Why is the statement that we have a continuum of consciousness from an organelle to a multicellular organism (but no greater system so far) not what we're after? The cell can answer all questions from the organelle, the organ can answer all questions from the cell, and the organism can answer all questions from any organ, except maybe the brain. There is something special about the brain as opposed to other organs, because its function is to coordinate other organs and also make predictions about the external environment. The brain might have its own Gödel-nested topology, with modules of modules, which also have a time component.
Consciousness comes from this nested hierarchy, with each higher level providing irreducible constraints for the lower level and vice versa, with some agency and will at each level.
I guess the limitation of that model is that you first have to prove that there is some kind of will or agency at the lowest level. So you have to show that the organelle is not just a sum of physical-chemical interactions. At our level we can at least perceive what agency could be, but at the level of the organelle, how would you test that?
Not any system. There are very specific constraints and network topologies that allow some systems to have more information than SOME of their subsystems. Check out https://royalsocietypublishing.org/doi/full/10.1098/rsta.2021.0246. And not all the way up (only up to the organismal level, which is defined as the highest level at which Darwinian selection occurs). Also not all the way down: only down to the lowest level with a membrane.
"I don’t think a materialistic approach (aka science) will ever figure out how consciousness emerges from dead matter." Right - science will never figure this out, because the question is posed exactly backwards. Matter emerges from consciousness/awareness. Here's an article discussing this perspective: Neil D. Theise and Menas C. Kafatos, "Fundamental Awareness: A Framework for Integrating Science, Philosophy and Metaphysics," Communicative & Integrative Biology 9, no. 3 (2016): e1155010 (1-19), https://doi.org/10.1080/19420889.2016.1155010
Insofar as scientists maintain the false physical-mental, matter-mind dualism, the "problem" of consciousness will be impossible to solve. John Dewey realized this and wrote about it extensively in his seminal 1925 Experience and Nature. This was the basis of my dissertation. The mind-body/mental-physical duality stems from an ontological paradigm rooted in Parmenides' writings from about 2,500 years ago. It turns out his arguments were fundamentally flawed due to quirks of Greek grammar, and the majority of Western science and philosophy has been built on that basic ontological error. If we surrender those false assumptions, whole new worlds of possibility open up around these questions.
What? Neuroscientists (and I hope we all agree that the relevant science here is neuroscience) are mostly (correctly) rabid physicalists, they don't maintain dualism.
Physicalism is an outgrowth of the dualism. As Richard Campbell says: "The Cartesian model of two substances – mind and matter – has long been outdated, but a common contemporary response is to reject just one (usually mind). Thereby materialism, or physicalism as this philosophical position has been articulated in recent decades, simply truncates the Cartesian framework. I call it a ‘one-legged’ version of Cartesianism." (Richard Campbell, The Metaphysics of Emergence (New York, NY: Palgrave Macmillan, 2015), 211.)
Physicalism is rooted in substance/entity ontology, which itself is rooted in the flawed arguments of Parmenides from 2,500 years ago. Process ontology is much more suitable to the best available evidence across multiple sciences. As Campbell says, "Physicalists are just out of date." (Richard Campbell, “A Process-Based Model for an Interactive Ontology,” Synthese 166, no. 3 (2009): 453–77.)
Another good source on this: Johanna Seibt, "The Myth of Substance and the Fallacy of Misplaced Concreteness," Acta Analytica 15 (1996): 61–76. (And all of Seibt's work.)
Also check out my dissertation, which discusses this at more length: https://surface.syr.edu/etd/1288/ (ch. 2 dives into the historical origins of the two basic ontological paradigms in Western science, substance ontology and process ontology)
Physicalism is not rooted in Parmenides in any reasonable, synchronic sense (and historical considerations philosophers waste their time on _simply don't matter_, that's one of the main reasons philosophy can go F itself).
But, more generally, "physicalism is dualism" is a take neither physicalists nor dualists would agree with.
I am sympathetic to being open to other ways of looking at the problem besides physicalism, as long as it helps us build an AGI in silico, or shows us that it is impossible. Even if we can dismiss the hard problem, and even if panpsychism is true, there is still the question of precisely what it is about consciousness that makes it possible in a brain and not in other substrates. It seems like current neural networks, even with many layers, are still missing the ability to think outside the box of their training, even if that isn't "subjective experience". So let's try to think outside the box of physicalism. What does it get us?
You thoroughly missed the point. Historical antecedents _don't matter_ because they're not (automatically) part of the synchronic picture.
To take an example from my own field, Chomsky may make a fuss about historical connections of generative linguistics to Cartesian thought, but, thankfully, generative linguistics isn't _actually_ Cartesian.
I don't currently have a favorite theory, but I would really like to see you continue to attack the utter certainty of physicalism.
Physicalism cannot even describe, in principle, how consciousness arises from a material substrate. Given this, isn't a bit of humility in order here? My working theory on the arrogant, dismissive attitude of many physicalist scientists towards philosophy is that it comes from being part of a discipline that has been so dominant for so long that it can't even conceive of itself as having any underlying philosophy(s): "We're just describing the world as it is -- what does history have to do with anything?!"
From whence does this worldview we call physicalism come? Given the persistence of the hard problem of consciousness and the lack of progress in explaining it, might there be flaws in some of physicalism's underlying assumptions? If so, how are we going to get at examining these assumptions if we keep insisting that physicalism isn't a human worldview with necessary antecedents that have shaped it?
How does the utter certainty of physicalism qualify as science?
There's a part of my novel, The Revelations, which is a dialogue between a laughing mask and a crying mask (I know this sounds insane, but it makes sense in the context), and it's all about arguments for and against physicalism. Maybe I should post that excerpt here, since it is pretty complete in and of itself. And is probably the best thing I've written on the subject.
Hello Brian, sorry to show up in this comment thread a few weeks late but I stumbled upon it and am very interested in hearing your justification for this statement:
"Physicalism cannot even describe, in principle, how consciousness arises from a material substrate"
What are the principles that preclude a physicalist perspective from describing consciousness? If consciousness can be shown to be an emergent property of certain complex and dynamic organizations of matter, then physics would certainly describe consciousness. I am not saying this is a done deal, simply that I don't see how _in principle_ it can't be done.
I'm working on a novel about this problem and thinking about having consciousness researchers turn to the area of shared consciousness as a way of breaking the subjectivity barrier. Maybe the researchers will suppose they are directly accessing another person's experiences, but are they? Or are they merely experiencing their own representations or copies? And how far can one mind access another mind before the unity of consciousness is destroyed? If consciousness is shareable between two, will there need to be a third consciousness to unify them? These are some of the questions I'm turning over in my mind. If you know of any studies or books about this subject, I'd love to read them. Or am I barking up the wrong tree? Where do you think the next wave of interest will be amongst research professionals in consciousness studies?
Sounds great. In terms of resources, well, I can at least point you to some of the things I've written. The Revelations isn't exactly about scientists having the technology to do that, but that is a theme and question that crops up throughout the book in many ways. And I've also written directly about it in an essay that has lots of citations for you to dig through and follow here: https://www.theintrinsicperspective.com/p/the-planetary-egregore-passes-you
Perhaps in your novel your researchers discover crosstalk between our versions of self in nearby quantum-theory-defined multiworlds, where nearby worlds give us parallel deterministic minds linked into functional free will, as well as some subconscious reach into the future through connections to similar worlds that exist as ours does, but advanced in time.
Good resources! Had not seen that second paper on the irreducibility of minds to structure, but I can't help but point out that it looks to be, funnily enough, kind of like a more formalized version of an argument given in The Revelations.
Edit: Oh, the author is you, I see! That's awesome. To be clear, you're proposing that there are transformations you can do to the state of the universe that change the parameter space, and therefore, even if the Boltzmann brain problem is solved because of some select state, those counterarguments don't apply?
That's basically the fundamental problem of the argument that I gave in The Revelations: if materialism is true, you're far more likely to be a Boltzmann brain or a butterfly's dream or what have you: some other structure bearing, at that moment, a similar perspective. But of course that perspective ends up containing no true justified beliefs about the world. And I argue for that by saying that true justified beliefs are very complicated things, and you're far more likely to get an illusion of true justified beliefs (e.g., some ephemeral structure thinking that it's a human and then disappearing immediately, and so on). But one of the counterarguments is just that Boltzmann brains simply don't really exist in our current state of the universe. But if you can show that there's a certain subjectivity in how you generate an infinite number of perspectives according to some parameter space, then you can show that Boltzmann brains effectively always do exist, and therefore materialism can't possibly be true.
I'll have to read The Revelations, can you please give me a link?
In my proofs (both against computationalism and against structural realism/physicalism) I used the symmetries of the structure. Yes, the results look like Boltzmann brains, but not because of fluctuations: the symmetries of the structures allow such brains to be un-hidden/decoded within the structure, and they are already there. And this contradicts the observation that there is a correlation between what we think of the world and the properties of the world.
I'd be happy to explain these to you in a discussion, get some pro/against feedback, and also to learn more about your views.
Absolutely Cristi. I'd love to chat more. Can you shoot me an email to set something up, at my last name first name (all one word) at Gmail? (not typing in directly due to spam bots that trawl the web).
I need to study your papers carefully, but here are some preliminary thoughts:
1. About computers and minds: Of course computers are missing their own interpretation; we provide it for them. Minds might have several levels of interpretation, with each higher level providing an interpretation for the level beneath it. But at the highest level, even a mind needs an external environment to provide an interpretation, or a fitness function.
2. Minds vs physical structures: the important correlation is not between an external physical structure and a structure in our brains, but between correlations of structures in our brains and correlations of structures in the physical world. And these structures include time; they are processes. The transformations of structures leave the second-order correlations of correlations invariant. Forgive me if I am talking nonsense; this is mostly intuitive for now.
Also certain reparameterizations are topologically forbidden...
Great, because the way I understand these objections, I took them into account :) Maybe they are not phrased how you think of them, but when the time comes I think I can expand on this.
In the past I've read many writings claiming to disprove Bell's theorem. Not with the intention to find them wrong, although I expected that; when I read something, I try to give it a chance. Ultimately the best of them boiled down to a huge percentage of interesting and perhaps true reasoning, plus a small percentage of smuggling in an assumption that is not easy to see. Think of a long math calculation which is 99.999% correct, but where you get the wrong sign somewhere during the calculation (a mere 0.001% of it). The same is true with proposals for perpetual motion devices: it all seems to add up and more energy "emerges" out of less energy, but this has never happened. And the same happens with claims to explain how sentience emerges from insentient stuff.
Now, in the case of Incomplete Nature: How Mind Emerged from Matter, at first sight it looks very interesting, and I am sure I could learn many things from reading it. But it's all about structure and its dynamics. So, if this book "seeks to naturalistically explain 'aboutness'" as its Wikipedia page says, it seems to me that it in fact proposes a description of structures that can be interpreted as aboutness, not an explanation. Also, no matter what constraints you add to the structure, it helps a lot with the structural part, but it doesn't help at all with the emergence of sentience. I say this because I have a no-go theorem. But, since I haven't read the book, and even if I had I would have to make sure I understand what the author really means, I can't say where it smuggles in an interpretation of structure onto which we project things, and then thinks it has found an explanation of aboutness. From what I know, we do this everywhere, in the way we interpret everything. We project meaning onto structures and their behaviors all the time, from assigning bits of information to physical states to assigning sentience to the structure of stuff that we presume to be inherently insentient. So no matter how good the book's proposal is at explaining the easy problems, I don't think it can explain the hard one. It can only make us assign sentience to structure more convincingly.
My result shows that, regardless of what internal constraints we assume, infinitely many identical structures can be found and interpreted in the same way, as having the exact same internal constraints and being able to give rise to aboutness, but they are insentient; otherwise there would be no correlation between what we think we know and the external world. That is, they differ only in their external constraints. But should it matter whether a structure identical to a brain exists for a brief moment in an unconstrained environment, as long as it has the same structure as a brain existing in a constrained environment, over a brief time interval in which the differences in the environment don't affect it? So the particularities of the structure and its dynamics are not sufficient. From my point of view there is no way to "emerge" sentience out of structure, just as there is no way to "emerge" more energy out of less energy.
But maybe you have a reason why you indicated this book to me.
The point of constraints is three-fold: 1. They are not reducible to smaller scales; they are necessary but not sufficient for emergence. 2. Each level of consciousness has constraints that provide a boundary between it and the higher and lower levels (a finite, not infinite, chain), sometimes in the form of membranes, though maybe different for brains. 3. An interpretation is particular to the form of these constraints, and in a sentient system it is neither arbitrary nor imposed by other sentient beings. This whole thing reminds me of creationist or postmodern arguments: the counter-argument in both cases is that not all possibilities are selected for by the environment.
Quantum mechanics is incomplete, not only because of the inconsistencies with GR, but because it doesn't have a good part/whole description, aka the measurement problem, which is also related to how it deals with constraints (which Dirac tackled, but I don't think solved). That is why I suggested you try to prove your no-go theorem in a purely classical manner.
In the case of the quantum world the constraints don't reduce at all the ambiguity, I showed it here https://arxiv.org/abs/2102.08620 and also here https://arxiv.org/abs/2401.01793. The article linked at the top of this thread is a natural consequence of these results. I think anyone who can follow the math can see it just like you see that 2+2=4.
About the classical case (which is not the case of our world anyway) I already linked in the youtube reply from four days ago two references with two proofs for classical physics (https://arxiv.org/abs/2307.06783, Corollary 1, and another one in https://philsci-archive.pitt.edu/23108/). It really doesn't matter, because I know how to prove it for any dynamical system imaginable.
So, to wrap it up, I have a proof, and anyone who disagrees can try to find a mistake in its math & physics. I would prefer a discussion that engages with the math, not imaginary parallels with discredited things like creationism or postmodernism that have nothing to do with my proof. Which I think is my cue to stop here.
YouTube is really not the place for deep discussions. It has algorithms that remove stuff at the slightest provocation. If you have time, read Deacon's Incomplete Nature.
I think the first step is to stop calling consciousness an illusion. Unless, of course, you’re a not very philosophical zombie. Then you have every right to.
For me, by far the most compelling explanation is derived from the highly predictive research of mathematical neuroscientist Stephen Grossberg (who basically wrote the work which was awarded this year’s Nobel prize in physics - this is an interesting case of how the institutions of science are more political than scientific, but I digress) and Gail Carpenter.
The theory, Adaptive Resonance Theory – summarised in Grossberg’s 2021 book Conscious Mind, Resonant Brain – is highly mathematical in a way which is non-intuitive for most mathematicians; but then, understanding gravity required a non-intuitive conceptual leap too. If you haven’t engaged with his work before, I strongly encourage you to do so – you might find what you’re looking for.
On a separate note, I don’t think we’re too far away from a Theory of Everything either (I plan on writing a blog about this) – in a similar manner to Grossberg’s ART, the mathematics which underpins unification of general relativity and quantum mechanics is horribly non-intuitive, relying on noncommutative geometry.
It's a neural network, but with a Matryoshka-type topology, meaning there are clusters of neurons that process low-level information, embedded in higher-level clusters, for at least two, maybe more, levels, and where the information goes both ways on the vertices/synapses in non-trivial ways (time delays, going up for a while, then down, then up again, attempting convergence of a recursive loop). There is also a time component to the information flow. There are utility functions to be minimized at each level, and the utility function to be minimized is the difference between 4D patterns in the lower level vs the higher level, as opposed to just 3D state differences, with the highest-level utility function being the difference between that level's 4D patterns and the external world. This can generalize beyond individual humans to groups of them, where the individual humans are just a level below the group.
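To make the idea concrete, here is a toy numerical sketch of such a nested hierarchy. Everything here is my own illustration, not anything from the description above: the `mismatch` utility, the two-level layout, and the update rule are all assumptions. Each low-level cluster holds a short temporal pattern, the higher level is nudged toward both its sub-clusters and the external world, and information flows both up and down until the levels agree.

```python
# Toy sketch of a two-level "Matryoshka" predictive hierarchy.
# Each low-level cluster tracks a short temporal pattern (standing in
# for the "4D patterns" above); the higher level is pulled toward its
# sub-clusters and toward the external world, and each cluster is in
# turn pulled toward the level above it. All names and update rules
# are illustrative assumptions.

def mismatch(a, b):
    """Utility function: mean squared difference between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def step(low_clusters, high, external, lr=0.3):
    """One bidirectional update: information flows up and then down."""
    # Upward: the high level moves toward the average low-level pattern.
    avg_low = [sum(c[i] for c in low_clusters) / len(low_clusters)
               for i in range(len(high))]
    high = [h + lr * (a - h) for h, a in zip(high, avg_low)]
    # Top-level utility: match the external world's pattern.
    high = [h + lr * (e - h) for h, e in zip(high, external)]
    # Downward: each low cluster moves toward the high-level pattern.
    low_clusters = [[c + lr * (h - c) for c, h in zip(cl, high)]
                    for cl in low_clusters]
    return low_clusters, high

low = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]   # two low-level clusters
high = [0.5, 0.5, 0.5]                      # higher-level pattern
world = [0.2, 0.6, 0.9]                     # external signal

for _ in range(50):
    low, high = step(low, high, world)

print(round(mismatch(high, world), 3))  # mismatch shrinks toward 0
```

The point of the sketch is only that each level minimizes a local utility (its difference from the adjacent levels), and agreement propagates through the whole hierarchy rather than being computed anywhere globally.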
Qualia are just the processes happening in the body/brain.
One question that basically only I and a few other people (mostly IIT-involved or IIT-adjacent) have looked at with any precision is: if you have physical correlates of consciousness, why are they at a particular spatio-temporal scale? Why not at the scale of atoms? And so on. If information is flowing, what scales matter (as you suggest, higher-level clusters)? This gets ignored in a lot of consciousness research.
Levin (as you know) has argued (with some experimental backup, though he is careful to call it "agency" instead of "consciousness") that consciousness happens at all biological (and therefore in silico, IF we get the topology right) scales, but that these are not arbitrary; they have distinct boundaries, with the lowest level being gene networks (I disagree; I think a level needs a membrane, hence I would say organelles). These are easy to see in multicellular organisms (MCOs) when we exclude the brain, as 3D membranes (separating two 3D regions, lower and higher levels). For the brain, the clusters might have a 4D character (because the main goals of the brain are to predict and coordinate, which are time-like activities). The other thing we can learn from studying the organization of MCOs is that lower-level utility functions can go rogue and become misaligned with higher-level ones (i.e., cancer), which I think is the main reason for the existence of the Matryoshka topology: a balance between the benefits of synergy and the cost of monitoring and punishing free riding, leading to a certain number of parts before a new level is beneficial (each level is in charge of the monitoring only within itself, where the number of parts is relatively small).
Could this free-rider bookkeeping exist in the brain? The evidence for it is multiple personalities (when they are not integrated, and their goals conflict with each other).
Maybe the appropriate scale is the entire universe and we are mere atoms inside a great big giant mind. It wouldn’t be surprising that we failed to figure it out!
At a fundamental level, I think it boils down to a self-modeling, shared communication protocol that is imprinted on a substrate. While the United States has numerous communicating components, there is no substrate where the information is processed in a temporally coherent fashion, so it is not a conscious entity. A single-celled organism has information-processing capabilities, but has no need for a larger controlling system to handle environmental issues or pressures over longer horizons. Consciousness evolved to exist at a specific temporal range and to control a very uniform collection of cells. That is why we don't have conscious electrons, and probably why plants don't use a neural-network-centric strategy. Pancomputationalist views suggest that an artificial neural network combined with robotic embodiment could mean we will soon see conscious AI. The ingredients will be there: homogeneous communication, self-reflective/self-modeling loops of data over a temporally coherent window where predictive control is required.
I think in terms of theories that don't necessarily solve the hard problem but are still pretty interesting and rich at getting at some sort of scientific answer to consciousness, this combination of (a) solving the binding problem and (b) having a self-model (or a *bounded* self-model, in combination) is probably one of the best approaches, just in general.
I submitted an essay to Adam Mastroianni's essay contest over the summer that was basically about this, and I HEAVILY relied on your book. The essay is borderline incomprehensible but it's the first time I tried putting any of those thoughts on paper. It's sitting in my back pocket, waiting to be rewritten eventually but, like...I would be so chuffed to share it with you, if you're interested. I can email you a google drive link...?
That sounds great. I can't promise to give back notes, but if you'd like to email it to me, if I find time I'll take a look (last name first name at [the G-one that's common])
My working definition is basically just that consciousness is electricity + computation, and IIT is a good way of measuring the consciousness of systems in this domain. This is just me drawing the circle around consciousness at the largest level of things associated with consciousness that makes sense to me. One abstract, one physical, since I believe firmly based on intuition that consciousness has to be physically instantiated.
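For anyone curious what "measuring" could even mean here: IIT's actual Φ involves minimizing over all partitions of a system's cause-effect structure and is far more involved, but a crude first-pass intuition is mutual information across a bipartition, i.e. how much the whole carries beyond its parts. A toy sketch (the example states and the `integration` function are my own illustration, not IIT's definition):

```python
# Crude "integration" proxy: mutual information across a bipartition.
# NOT IIT's Phi -- just the intuition that an integrated whole carries
# information beyond the sum of its parts.
from math import log2
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def integration(samples):
    """I(A;B) = H(A) + H(B) - H(A,B) across the two-unit partition."""
    a = [s[0] for s in samples]
    b = [s[1] for s in samples]
    return entropy(a) + entropy(b) - entropy(samples)

# Joint states of two binary units where unit B copies unit A:
# "integrated" in this crude sense.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Independent units: no information shared across the partition.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(integration(coupled))      # 1.0 bit: parts share information
print(integration(independent))  # 0.0 bits: whole = sum of parts
```

Real Φ would additionally search for the *minimum*-information partition and work with cause-effect repertoires rather than observed state frequencies, which is what makes it so hard to compute for anything bigger than a handful of units.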
This argument reminded me of the brilliant short story by Ted Chiang, "Exhalation", where an intelligent and conscious species is based on a totally different substrate. It is fiction, of course, but it is deep and very thought-provoking, as is most of Chiang's writing.
This reminds one of the transduction theory of mind that Robert Epstein apparently supports. It seems logical: gravity + quantum = string theory => lots of extra dimensions. Maybe the brain is a transducer to quantum field information in these other dimensions?
It's fun to theorize, I'd be interested if anyone has a summarized tutorial / overview of ToCs that are testable or make predictions that could be testable?
I'd love to write a whole essay to properly respond to this.
I have settled on mysterianism & predicate dualism. I think that consciousness is inherently invulnerable to scientific methods being that it is subjectivity itself, and you can't view it objectively. It simply lies outside of the domain of science.
I think attempts to explain consciousness are probably doomed and maybe even misguided, because consciousness is a fundamentally distinct thing from the material world, and there is no way to explain it in terms of material phenomena. In other words, I am a strong believer in the explanatory gap and the hard problem, so much so that I think the most sensible position is that material reality and subjective consciousness are ontologically distinct. This is also partly informed by my experience in meditation. I believe that if you actually look at consciousness, you can realize that it is absurd to think it is simply something that can be explained in terms of a specific set of physical circumstances. Even if you could identify the exact circumstances that produce consciousness, you would still be left with the question: why does consciousness appear? Why is there something that it is like to be those circumstances? Why don't those circumstances simply exist and carry out computational functions, direct behavior, etc., all with no subjective experience?
Finally, my most out-there position with respect to consciousness is that I believe interactionism is plausible--it is at least possible that free will exists, and that consciousness is the causal mechanism for free will, i.e. reality is not causally closed (it opens up into consciousness, anyway--why can't it go the other way?). This makes sense to me as a reason for the evolution of consciousness in the first place. It makes more sense to me than couching consciousness as a causal dead end or epiphenomenal accident, anyway. To me, the Princess Elisabeth of Bohemia-type criticisms of interactionism are misguided because although yes, it is impossible to explain how consciousness produces effects in the material world, *it is also impossible to explain how the material world produces consciousness, and yet--we are conscious!*
I'll change my opinion as soon as someone writes that paragraph, Erik. But I think it is telling that we can't even *conceive* of an actual explanation for how material phenomena give rise to subjective consciousness. It's not that we can't identify the correct explanation; we can't even think of a possible explanation. We can only come up with descriptions of phenomena associated with consciousness, not explanations of how these phenomena actually make something that it is like to be them.
Could subjective experience have survival value? Only entities with survival instincts have ever displayed evidence of consciousness, to my knowledge. Inorganic entities, rocks and so forth, do not. Is it a coincidence, or is there some connection?
This is indeed a problem for any theory of consciousness that thinks of consciousness as epiphenomenal. People often get into that same question via debates about philosophical zombies and their own utterances about consciousness, but you can also get into it by simply asking why consciousness would evolve, and, if consciousness is epiphenomenal, why does it match so well with explaining the actions of organisms?
I have written quite a few articles on this, and I have something in the works that responds to another article you wrote. For now: the question of why consciousness would evolve is in fact obvious, because there is only one reason why anything evolves, and that is to improve the survivability of the phenotype and ultimately the genes. For me the 'why' question is far more interesting than the 'how' (having looked in some detail at both). So what does consciousness give us, and how might it aid survival over, for example, a Chalmers zombie? Consciousness moves us beyond reflexive actions. It enables us to distinguish between good and bad experiences and establish our preferences. Those preferences guide our actions, because through consciousness we can experience our model of the world, put ourselves inside that world, and simulate possible actions and outcomes that move us towards our preferred outcomes, or our preferential experiences. The mechanism that ties this all together (and really the thing I have written extensively about) is free will. I believe we manufacture decisions and abstractions from energy and information. I could elaborate, and have done elsewhere, but I will park that here. Ultimately, in the interests of survivability, consciousness has to be a far more parsimonious way to integrate sensory data. What is the evidence for that? The fact that we are conscious means it was economical from an evolutionary perspective. Interesting discussion, but I will keep my powder dry for the post I am writing. I may change it up a little to incorporate a response to this post. Incidentally, my series (#15) on free will is ongoing, so there is a lot of overlap there.
"...if consciousness is epiphenomenal, why does it match so well with explaining the actions of organisms?"
Can you point toward the best argument(s) that consciousness plays a role in the actions of organisms? I think Vervaeke makes a good attempt, possibly a successful one. But I'm otherwise unfamiliar with such arguments (unless you count arguments where consciousness is, actually, reduced to mean externally observable brain function, or certain systems of brain function).
That's my thinking too. It's a survival mechanism. Being aware and "conscious" of our surroundings has definitely increased life's ability to survive.
If you want to explain awareness, first explain something easy like red or cinnamon.
Shocked by the number of idealists on this thread.
Why shocked? The interaction problem is solved if you drop the superfluous assumption there is this entire substance called matter that exists independently of consciousness.
It's also solved if you just reject phenomenal consciousness to begin with. Illusionism and related views handle the problem just fine, and don't require dubious metaphysics to do so.
The existence of phenomenal consciousness is required to do the rejecting so I don't think that's coherent.
Why should I agree with you about that?
I'll just reply in the other comment thread with you, since we apparently aren't using words to mean the same thing.
Illusionism providing justification to reject phenomenal consciousness is -- an illusion. Actually, a category error. Both Hoffman's illusionism (idealist) and Dennett's illusionism (materialist) fail to explain how consciousness comes into being (or, in the case of Hoffman, why it was ever there in the first place).
Illusionism tells us that the relationship between our intelligence and the contents of our phenomenal consciousness and "actual reality" is necessarily asymmetrical. But it does not explain phenomenal consciousness, nor create grounds for rejecting it. At best, it can tell us that certain theories of consciousness are insufficient.
They don't fail at all. Dennett's account entails that certain preconceptions about what consciousness is or what its characteristics must be are misguided; what consciousness actually picks out are a range of empirically evaluable phenomena that, once understood, constitute everything there is to know about consciousness. Insofar as critics insist something else is left unaccounted for: that's precisely what these accounts deny.
Many critics of illusionism will insist it "fails" to "account for" some aspect of consciousness or other. These strike me as weird objections: it's a bit like saying the view that "dragons don't really exist, but people mistakenly thought they saw dragons" is wrong because it "fails to describe the physiology of dragons." Such an account isn't attempting to do so; it's attempting to show that there aren't any dragons...but there are lizards.
>>But it does not explain phenomenal consciousness, nor create grounds for rejecting it.
This doesn't make any sense. It's like saying the problem with atheism is that it fails to explain the existence of God. As far as grounds for rejecting it: that's a matter of contention. I think Dennett went overboard in providing more than enough reasons to reject phenomenal consciousness.
But that's another thing: why should we need to provide grounds for rejecting it when nobody's ever made a good case for it to begin with? Why should I think there is any such thing as phenomenal consciousness or qualia?
There are two important problems with Dennett's framing.
One is that he is creating a strawman argument against Q/PC as "things" which would "exist" in a third-person-observable way, rather than arguing on the point of whether they are real.
The next problem follows the first, wherein he dismisses Q/PC by, maybe rightfully, saying that they have no place within the realm of the existing (or the "system" as he sometimes calls it), and thus dismisses their legitimacy.
Your arguments-by-analogy make the same category error. Claims about the existence of dragons or God are claims about the observable world, not about the qualities of observation itself.
And you're welcome to point to a counter-example, but I don't know of any reasons from Dennett for rejecting Q/PC. Rather, he says we cannot see a reason for its existence. These are not the same.
We can make the same argument for existence writ large. Is there some reason that anything at all should exist? No? Then why should we believe that anything does?
"But clearly things exist," you say? Ok. Clearly Q/PC is real, I reply.
Similarly, within existence, why would there be fundamental laws like gravity or the speed of light? Why any constraints whatsoever? If we don't have reasons to give for fundamental properties of existence, should we deny those properties?
If not, why would we make exceptions for the fundamental properties of our experience of reality?
I can truly only speak for myself regarding experience and the realness of Q/PC, because it could be the case that my own Q/PC is the only real instance of such. For all I know, the text attributed to Lance S. Bush may be the automated output of a GPT. And who knows for sure if the other human beings around me are having Q/PC (...rather than being "philosophical zombies").
Qualia, in my experience, are fundamental properties of my experience. And they are, in some sense, before-reason, similarly to how Dennett's concept of Vastness is beyond-reason. To quote the man himself:
"Words fail me."
And it is because of our failure to be able to give reasons for Q/PC, gravity, the speed of light, and existence itself that these concepts are held as mysteries. But they're no less real for their mysteriousness or fundamentality or ineffability.
There are many things like that, not just consciousness... it's impossible to explain how the material world produces 'music', yet 'music' exists, and not exclusively as the result of consciousness.
Music is perfectly explainable in terms of physical processes until you get to the point of attempting to explain the experience of music, which at that point is just an example of qualia, like the experience of the color red, which can’t be explained.
I would disagree with this. It seems to be fundamentally an argument for radical skepticism. That is, we have no stable or reliable methods for integrating our individual subjective experiences. The entire world is conscious experience so we can either discuss it fully or not at all.
Agreed, mostly all around.
I would even go so far as to say it is more parsimonious to describe the inclination to "explain away" the agentic role of consciousness as a redirect through the current "marketplace of rationalizations" where "hard truths" are the coin of the realm. It's about signaling the trade of one type of agency for another by socially denying that agency has any real meaning at all.
I'm a proponent of Pete Mandik's qualia quietism and meta-illusionism. Not only do I not think that there is a hard problem of consciousness, I don't even think that, if a person is thinking clearly about the matter, it would seem that there is a hard problem of consciousness. I think that the sense that there is a hard problem stems from misconceptions about language and phenomenology that people largely acquire by engaging in academic philosophy. Consciousness is probably not any particular thing. It is no particular phenomenon at all. Rather, when we speak of consciousness we are referencing an amalgamation of various aspects of our psychology and the ways in which we theorize and speak about our psychology. Adequately breaking down all these subcomponents will eventually result in dissolving all these seemingly mysterious aspects of consciousness and developing a sufficiently clear set of tractable problems that they can eventually be solved in a way amenable to the empirical sciences. Nothing about consciousness is going to turn out to fall outside the scope of what we learn empirically.
Qualia quietism is a view according to which the terms and concepts that typically feature in discussions of qualia or phenomenal states consist of a viciously circular set of mutually interdefining terms that have no clear or conceptually distinct content, such that we cannot say anything meaningful about them at all. Essentially, the very notion of qualia or phenomenal states is meaningless, and there is nothing substantive to say about them. It's not simply that they don't exist, but that there isn't even a coherent concept to consider the existence of. I think something like this is basically correct, and that a lot of the philosophical discourse on consciousness is completely misguided. This position, at least on my view, is closely associated with illusionism, and I typically treat it as roughly being in the illusionist camp. I think views that deny the existence or meaningfulness of qualia are pretty much the only serious contenders for a viable account of consciousness.
If I were ever to be an illusionist, I would probably endorse qualia quietism rather than strict illusionism, like Frankish's. A major part of Mandik's argument I don't find convincing is his questioning whether normal or average people would ever conceptualize their experiences as containing a difficult-to-explain quality, and whether this is just an invention of philosophers. I think this probably puts a little too much weight on unknowledgeable introspection. After all, now that we have simple AI, it seems people do have intuitions about consciousness and the lack thereof. Most people, for instance, wouldn't feel bad yelling at Alexa in the way that they would yelling at a person, and presumably if you push them on why that is, they would eventually reach some sort of conclusion about how Alexa lacks qualia, without any philosophical background.
I don’t recall ever attempting to raise doubts about whether any normal people ever conceive of experiences as having difficult-to-explain properties. What I do recall attempting to argue, against illusionism, is that there’s not a sufficiently large number of normal people who do this (AND do it automatically) to merit calling it any species of illusion (e.g. a cognitive illusion like in the Monty Hall problem). Anyway, it’s not an important part of any argument for qualia quietism; it’s just part of an argument against (a certain kind of) illusionism.
Simply because people can respond to questions about whether AI is "conscious" or not does not mean that those people understand what philosophers are talking about when they refer to phenomenal consciousness, or qualia. I think that people make the mistake of thinking that if people respond to questions that employ certain terms, like "free will," or "consciousness," that therefore people must have already had thoughts about these topics prior to exposure to the question, and that their thinking accords with the notions those asking these questions have in mind.
I discuss a significant shortcoming relevant to this in my own work: forced-choice study designs can prompt what I call "spontaneous theorizing": this occurs when the context of a study prompts a person to develop (or merely report) holding a view that they didn't hold prior to participating in the study. I provide some evidence that you can do this for quantum mechanics: you can present a short description of the Copenhagen and Many Worlds interpretations of QM and get people to pick one. This does not mean that most ordinary people are Copenhagenians or Many-Worlders.
Specifically with respect to consciousness, there are a number of studies conducted by Machery, Sytsma, Ozdemir, and others that cast doubt on whether nonphilosophers think about consciousness in the same way as philosophers do. Here are a couple examples:
Sytsma, J., & Machery, E. (2010). Two conceptions of subjective experience. Philosophical Studies, 151, 299-327.
Sytsma, J., & Ozdemir, E. (2019). No problem: Evidence that the concept of phenomenal consciousness is not widespread. Journal of Consciousness Studies, 26(9-10), 241-256.
Generally speaking, I think people are way too quick to think that everyone thinks the way philosophers do. I don't grant these assumptions, and take facts about how nonphilosophers think to be challenging empirical questions. As such, when you say this:
"Most people, for instance, wouldn't feel bad yelling at Alexa in the way that they would yelling at a person, and presumably if you push them on why that is, they would eventually reach some sort of conclusion about how Alexa lacks qualia without any philosophical background."
...I doubt that's true. I don't think nonphilosophers have a concept of qualia, and so I don't think they'd appeal to such a concept to explain their lack of discomfort with yelling at Alexa. At present, most studies exploring the question of whether nonphilosophers have a concept of qualia have found little indication that they do. I don't take these studies to be definitive, by any means, but I don't think we should assume nonphilosophers have a concept of qualia. If they do, I want to see good evidence of this. At present, I don't think we have such evidence.
Chalmers has some pretty good replies to those papers worth checking out:
https://consc.net/papers/universal.pdf
It seems to me as if, especially now, people debate things like AI consciousness all the time, and I don't think that those debates are entirely about some sort of very specific philosophical notion of phenomenal consciousness. I think that they are debates that arise very naturally. See, e.g., evidence that the majority of Americans have opinions about the consciousness of AI models: https://academic.oup.com/nc/article/2024/1/niae013/7644104
The majority of Americans also believe in some kind of religion. That doesn't make any of them true.
As far as what the public thinks about "consciousness", if the way that people are interpreting that term isn't consistent with what philosophers are talking about and isn't consistent with one another then what exactly are we measuring when we are measuring people's views about "consciousness"?
Meaningful and consistent interpretations of responses to what are sensibly intended to be the same question require some form of operationalization, and some reason to believe that participants' responses are both consistent with the researchers' operationalization and sufficiently consistent with one another that they don't display so much interpretive variation that they effectively amount to responses to different questions.
Simply put, I do not think that people operating on the assumption that nonphilosophers think the way philosophers do are sufficiently attentive to the problems of conveying philosophical concepts to people without training in philosophy. People are too quick to presume that if people are using the same words, they are thinking the same things. My research shows that, in other domains relevant to the study of how nonphilosophers think about philosophy, this simply is not true.
This is batshit insane
What specifically did I say that you think is batshit insane? And why do you think that?
Whether nonphilosophers think like philosophers is an empirical question. At present, there is no compelling evidence that most nonphilosophers have a concept of qualia/phenomenal consciousness, are responsive to the hard problem of consciousness, and so on. So what, exactly, is insane about what I said?
I've read the reply and there is some interesting stuff in there, but what it does not do is justify the presumption that most nonphilosophers have any notions that track what philosophers are talking about when it comes to conscious states. Whether they have any particular concepts of interest is an empirical question, and I do not think we are in a position to presume that nonphilosophers think the way that we think they do.
It is incredibly challenging just to do the psychological research necessary to figure out what human personality is like. And that's a fairly tractable question. Why would we think that we could address the question of whether nonphilosophers have highly philosophically nuanced notions like phenomenal states, or anything related to them, merely by appealing to our anecdotal experience? From looking at these sorts of discussions, it looks like a lot of philosophers operate under the presumption that they are entitled to presume that nonphilosophers think the way that philosophers presume they think, and that therefore the onus is on those of us who are skeptical of this to show otherwise. But why should we grant you that presumption in the first place? Why not show that they think what you think they think? Why do we have to show that they don't?
Unless and until I see good reason to think that nonphilosophers share any of the relevant concepts that figure into philosophical discussions, I don't see any particularly good reason to grant the assumption that they do.
Alexa doesn't have a mind, and a sine qua non for a creature that can suffer in any way is that it has a mind. I would say 'lacks qualia' is unnecessarily egghead; a state of mind is what suffering is, therefore no mind, no suffering. One can imagine making a bad gear shift on a standard-shift car: the car makes a weird noise it normally doesn't make, it moves in a herky-jerky, abnormal manner and has a degradation in function, but it isn't suffering, since it doesn't have a mind.
Though they might not be able to express themselves that clearly, that's what normal people think. Study done by a professor of some sort not required.
> that we can say anything meaningful about them
So? There seems to be an assumption here that language can succeed in explaining reality.
No. I don't think that. I'm suggesting that the idea of qualia is conceptually confused nonsense. The thing many people think comprises consciousness or makes it mysterious is just a conceptual confusion on their part. There isn't even an intelligible concept to deny in the first place, because there is no genuine concept of phenomenal states/qualia.
To steelman the qualia angle: there need not be an idea of qualia or a concept of qualia to justify it. If anything, "a conception of concepts" or "ideation of ideas" are just different arrangements of qualia relative to the subject.
This isn't to preclude the possibility that self-deception runs wild when trying to defend qualia as valid for certain inferences, but I see no reason to deny them the "baseline win" of being an experiencing being, while denying them some sort of momentum of knowing what to do with that.
What is it that's being justified?
Just out of curiosity, what is your experience level with meditation and psychedelics?
It's hard to know what to do with the existence of illusionists/eliminativists. If anything it seems like evidence that highly intelligent p-zombies exist and walk among us. A good reason to find a way to do ethics that doesn't depend on assumptions about consciousness - I don't think we should discard you just because you don't exist in the full sense of the word. <<< all of this must read as super condescending/contemptuous. And no other view puts me in a comparable position. But you would have an easier time persuading me that I am not alive, that I am a fiction of some other person's dream, than that "consciousness" does not refer or that its referent is not fundamental. What a pickle this discourse is in.
I’m a qualia quietist even though I meditate regularly, teach classes on meditation, and have had extensive experience with LSD, DMT, psilocybin, and other drugs that have profound effects on consciousness. If you think that the verbal behaviors of illusionists are best explained by their lacking phenomenal consciousness then, congratulations: you’re a behaviorist.
That is very interesting to me! Evidently I have more reading to do. Just poked through "Meta-Illusionism and Qualia Quietism".
"Echoing Wittgenstein (1953), each radical realist labels the private contents of their beetle-box ‘a beetle’, but, for all anyone knows, each box may contain something different from the others, or even nothing at all."
Funnily enough, beetle-box-ish examples are a place where it seems like non-philosophers end up approximating qualia-ish thinking. E.g., the classic stoner thought, "What if we don't all see colours the same way? What if where I see blue you see yellow? There's no way we could tell." No one walks away from that kind of stoned conversation skeptical that their talk about "seeing something some way" is nonsense.
I got the impression that you're also not skeptical of that kind of talk in a weak sense, but I'd love to get a better sense of what the boundaries are. Very curious what the qualia quietist take is on Mary the colour scientist, as a way of pumping this.
I’m going to launch my own substack soon and one of my earliest posts will be all about Mary and another will be about “what it’s like” talk.
I look forward to that!
This.
No. Why does what I said sound insane?
I don't think that is the definition of the term, because I don't think the term has a single canonical definition. The term is often used in reference to a distinctive conception of consciousness that involves phenomenal states. Phenomenal states are an allegedly private, ineffable aspect of our experience that cannot be captured by third-person explanations even in principle.
Talk of qualia in this respect often involves claiming that there is a redness to red or a goodness to good experiences. These are special properties that our experiences have, and those properties cannot be reduced to some sort of third-person descriptive account that could figure into our ordinary scientific models of what the world is like.
That is what I am denying, and that is what is typically the content we must allegedly account for when confronting the hard problem of consciousness. Proponents of illusionism do not deny that people have sensory experiences, thoughts, or feelings. What they deny is that there are special, private, intrinsic properties associated with these experiences that are especially difficult or even impossible to account for from a third-person empirical perspective.
Aren't a NN's weights (which encode the NN's gestalt impressions that are triggered by a query) a very mundane example of special, private properties associated with experiences that are difficult/impossible to convey (since you can't convey NN weights to another NN without essentially destroying that NN by replacing its weights with the other's)?
I'm not a physicalist. Not sure why that was mentioned.
I'm also not saying it's hard to talk about, that I don't feel like talking about it, or that it doesn't interest me. I'm not even saying that phenomenal consciousness does not exist. I'm saying there isn't even a concept of phenomenal consciousness. I'm saying that those words don't refer to any meaningful concept in the first place. There isn't even something to talk about.
I mostly just enjoy being conscious.
Not a matter to be solved, but an experience to enjoy, eh?
Fight! Fight!
Turtle.
This made me think of "Life Is Not a Problem to Be Solved, But a Mystery to Be Lived", which is apparently a quote from Kierkegaard, Osho, Frank Herbert and also Nietzsche, of all people, according to Google. Or it's made up.
Maybe Google is a valid citation now. 🫠
YIKES.
Consciousness is all there is. There is nothing outside Consciousness.
I do not consider it a thesis. It is Knowledge. Generations of saints, gurus and shamans have lived that Knowledge. Experienced that Knowledge. Expanded that Knowledge.
This is an utterly undisprovable and thus utterly unexplanatory thesis. Right up there with "I'm a Boltzmann brain, and in the next moment I can dissolve without a trace".
Not true. I enjoy math discussions even though they are almost definitionally about models, which do not necessarily have a parallel in the physical universe. Some fields make sense, other fields make noise.
Ain't that the truth :)
“One measly paragraph would be enough! I would collapse in joy, weeping, if I ever read such a thing.”
Putting aside that this may be colorful hyperbole to some extent — I wonder, Erik, what drives your emotional relationship to this question.
Consciousness is easily one of my favorite topics and I’ve spent many, many hours thinking and writing about it. But I am perfectly comfortable with not having a final scientific (nor metaphysical) explanation or theory of consciousness.
And I wonder how you would feel if you found a glowing paragraph which convincingly proves that consciousness cannot be explained scientifically. Would you be sad? Frustrated? Could you even be convinced of that perspective, or do you have some amount of dogmatic attachment to a scientific explanation which you can’t let go of?
I’m asking these questions because, for me, one of the most interesting and overlooked aspects of consciousness studies is the psychology of the studiers, both professional and public. People have feelings about Consciousness. Where do those feelings come from?
This is a great and deep question. One so deep I'm afraid I can't answer it for you, at least not now. But I will say that what usually drives intellectual obsessions is something personal, and I have a personal relationship to unresolved mysteries.
Apologies for being poetic here, but perhaps there would be no mysteries to resolve if there weren't some fundamental mystery which is itself unsolvable. I'm not advocating for that perspective, but it's a fun one to engage with, sometimes requiring courage (nature abhors a vacuum and the ego abhors a void).
I might suggest one of the routes to this line of interest is in the drive to improve our Theory of Mind.
I’ve started reading Bernardo Kastrup and am warming up to his theory of “monistic idealism.” In a nutshell, it posits that matter emerges from the only possible ontological primitive: mind. (Or consciousness, or whatever term you’d like to use.)
I don’t think a materialistic approach (aka science) will ever figure out how consciousness emerges from dead matter. It can’t even figure out what 85% of the universe is made of.
Please don’t take this to mean I don’t believe the scientific method is the best way to answer most of our questions about how the world works. It just has limits.
I also don’t believe in God or any supernatural entities, except perhaps as an expression of collective emergent human behavior.
People often conflate those things. You can absolutely be an idealist while not believing in God or supernatural entities, and still think that the scientific method is the best way to answer most questions.
Do you have an opinion on Kastrup and/or his idealism?
I think it's interesting. One of my big problems with a lot of philosophical work on consciousness is that frameworks aren't proof. So, Kastrup has a nice framework, but it's not really the same thing as having proof. One could say that this isn't the sort of speculation that involves empirical evidence, and that's fine, but you still need strong arguments in your favor. I think most of the arguments for that sort of idealism amount to it simply being a nice, parsimonious framework rather than standing on philosophical proofs, as it were. This is one reason why someone like David Chalmers is so successful: he couples frameworks with pretty rigorous arguments. I could just be missing that part of Kastrup's work, I certainly haven't read all of it, but most of what I've read is on the framework side.
Yes, I'd really like to hear your opinion on Kastrup's analytical idealism (and/or Donald Hoffman's "conscious realism"). Kastrup claims that his position is more parsimonious from a scientific perspective -- thoughts?
I would also like to hear that since I think it has something in common with Erik’s view which is:
We know self-referential systems can formulate questions that they can’t answer (Gödel), and that this could be why no one will ever figure out consciousness (since self-reference is embedded in the very concept).
I love this idea, but something puzzles me: Gödel sentences can be proven by a larger system (which will have problems of its own but can still answer ALL questions from the first system). This would mean that we should be able to conceptualize lower-level consciousnesses, since we are outside of them (like those of insects or simple AIs). The issue is that we can’t seem to do that at all for consciousness. We have trouble even conceiving of it. After all, a lot of self-referential systems do not seem to give rise to consciousness at all.
A solution would be to view conscious experience as the building block of all knowledge/cognitive thinking. Anything that anyone has ever known or stated is -in a way- secondary to their direct sensory input.
In the same way that a mathematical theory cannot explain what equality is - because equality is its fundamental language, and it can only express that through equality itself.
In the same way that set theory cannot define sets but only talk in “set language”.
We cannot define consciousness because it’s the building block of our thoughts.
Is this how you understand Kastrup?
"I love this idea, but something puzzles me: Gödel sentences can be proven by a larger system (which will have problems of its own but can still answer ALL questions from the first system)....."
Why is the statement that we have a continuum of consciousness from an organelle to a multicellular organism (but no greater system so far) not what we're after? The cell can answer all questions from the organelle, the organ can answer all questions from the cell, and the organism can answer all questions from any organ, except maybe the brain. There is something special about the brain as opposed to other organs, because its function is to coordinate other organs and also to make predictions about the external environment. The brain might have its own Gödel-nested topology, with modules of modules that also have a time component.
Consciousness comes from this nested hierarchy, with each higher level providing irreducible constraints for the lower level and vice versa, with some agency and will at each level.
I guess the limitation of that model is that you have to prove that there is some kind of will or agency at the lowest level first. So you have to show that the organelle is not just a sum of physical-chemical interactions. At our level we can at least perceive what agency could be, but at the level of the organelle, how would you test that?
Any system can only be explained by a larger system? In other words, it's turtles all the way down (or all the way up)
Not any system. There are very specific constraints and network topologies that make some systems be able to have more information than SOME of their subsystems. Check out https://royalsocietypublishing.org/doi/full/10.1098/rsta.2021.0246. And not all the way up (only up to organismal level, which is defined by the highest level at which Darwinian selection occurs. Also not all the way down. Only down to the lowest level with a membrane.
"I don’t think a materialistic approach (aka science) will ever figure out how consciousness emerges from dead matter." Right - science will never figure this out, because the question is posed exactly backwards. Matter emerges from consciousness/awareness. Here's an article discussing this perspective: Neil D. Theise and Menas C. Kafatos, “Fundamental Awareness: A Framework for Integrating Science, Philosophy and Metaphysics,” Communicative & Integrative Biology 9, no. 3 (2016): e1155010 (1-19), https://doi.org/10.1080/19420889.2016.1155010
Insofar as scientists maintain the false physical-mental, matter-mind dualism, the "problem" of consciousness will be impossible to solve. John Dewey realized this and wrote about it extensively in his seminal 1925 work "Experience and Nature." This was the basis of my dissertation. The mind-body/mental-physical duality stems from an ontological paradigm rooted in Parmenides' writings from about 2,500 years ago. It turns out his arguments were fundamentally flawed due to quirks in Greek grammar, and the majority of Western science and philosophy has been built on that basic ontological error. If we surrender those false assumptions, whole new worlds of possibility open up around these questions.
What? Neuroscientists (and I hope we all agree that the relevant science here is neuroscience) are mostly (correctly) rabid physicalists, they don't maintain dualism.
Physicalism is an outgrowth of the dualism. As Richard Campbell says: "The Cartesian model of two substances – mind and matter – has long been outdated, but a common contemporary response is to reject just one (usually mind). Thereby materialism, or physicalism as this philosophical position has been articulated in recent decades, simply truncates the Cartesian framework. I call it a ‘one-legged’ version of Cartesianism." (Richard Campbell, The Metaphysics of Emergence (New York, NY: Palgrave Macmillan, 2015), 211.)
Physicalism is rooted in substance/entity ontology, which itself is rooted in the flawed arguments of Parmenides from 2,500 years ago. Process ontology is much more suitable to the best available evidence across multiple sciences. As Campbell says, "Physicalists are just out of date." (Richard Campbell, “A Process-Based Model for an Interactive Ontology,” Synthese 166, no. 3 (2009): 453–77.)
Another good source on this: Johanna Seibt, “The Myth of Substance and the Fallacy of Misplaced Concreteness,” Acta Analytica 15 (1996): 61–76. (And all of Seibt's work.)
Also check out my dissertation, which discusses this at more length: https://surface.syr.edu/etd/1288/ (ch. 2 dives into the historical origins of the two basic ontological paradigms in Western science, substance ontology and process ontology)
Physicalism is not rooted in Parmenides in any reasonable, synchronic sense (and historical considerations philosophers waste their time on _simply don't matter_, that's one of the main reasons philosophy can go F itself).
But, more generally, "physicalism is dualism" is a take neither physicalists nor dualists would agree with.
I am sympathetic to being open to other ways of looking at the problem besides physicalism, as long as it helps us build an AGI in silico, or shows us that it is impossible. Even if we can dismiss the hard problem, and even if panpsychism is true, there is still the issue of precisely what it is about consciousness that makes it possible in a brain and not in other substrates. It seems like current neural networks, even with many layers, are still missing the ability to think outside the box of their training, even if that isn't "subjective experience". So let's try to think outside the box of physicalism. What does it get us?
So what is physicalism rooted in? What's its basis? It didn't just pop out of thin air. All science has historical antecedents.
You thoroughly missed the point. Historical antecedents _don't matter_ because they're not (automatically) part of the synchronic picture.
To take an example from my own field, Chomsky may make a fuss about historical connections of generative linguistics to Cartesian thought, but, thankfully, generative linguistics isn't _actually_ Cartesian.
I don't currently have a favorite theory, but I would really like to see you continue to attack the utter certainty of physicalism.
Physicalism cannot even describe, in principle, how consciousness arises from a material substrate. Given this, isn't a bit of humility in order here? My working theory on the arrogant, dismissive attitude of many physicalist scientists towards philosophy is that it comes from being part of a discipline that has been so dominant for so long that it can't even conceive of itself as having any underlying philosophy(s): "We're just describing the world as it is -- what does history have to do with anything?!"
From whence does this worldview we call physicalism come? Given the persistence of the hard problem of consciousness and the lack of progress in explaining it, might there be flaws in some of physicalism's underlying assumptions? If so, how are we going to get at examining these assumptions if we keep insisting that physicalism isn't a human worldview with necessary antecedents that have shaped it?
How does the utter certainty of physicalism qualify as science?
There's a part of my novel, The Revelations, which is a dialogue between a laughing mask and a crying mask (I know this sounds insane, but it makes sense in the context), and it's all about arguments for and against physicalism. Maybe I should post that excerpt here, since it is pretty complete in and of itself. And is probably the best thing I've written on the subject.
Yes, please!
Yes!
Most often people don't even understand they're proposing physicalism. There is a giant level of ignorance of philosophy among many scientists.
Hello Brian, sorry to show up in this comment thread a few weeks late but I stumbled upon it and am very interested in hearing your justification for this statement:
"Physicalism cannot even describe, in principle, how consciousness arises from a material substrate"
What are the principles that preclude a physicalist perspective from describing consciousness? If consciousness can be shown to be an emergent property of certain complex and dynamic organizations of matter, then physics would certainly describe consciousness. I am not saying this is a done deal, simply that I don't see how, _in principle_, it can't be done.
One can reject both the hard problem of consciousness and physicalism.
Panpsychism + IIT as solving the combination problem, idealism, some sort of radio/spirit theory, or something else altogether
I'm working on a novel about this problem and thinking about having consciousness researchers turn to the area of shared consciousness as a way of breaking the subjectivity barrier. Maybe the researchers will suppose they are directly accessing another person's experiences, but are they? Or are they merely experiencing their own representations or copies? And how far can one mind access another mind before the unity of consciousness is destroyed? If consciousness is shareable between two, will there need to be a third consciousness to unify them? These are some of the questions I'm turning over in my mind. If you know of any studies or books about this subject, I'd love to read them. Or am I barking up the wrong tree? Where do you think the next wave of interest will be amongst research professionals in consciousness studies?
Sounds great. In terms of resources, well, I can at least point you to some of the things I've written. The Revelations isn't exactly about scientists having the technology to do that, but that is a theme and question that crops up throughout the book in many ways. And I've also written directly about it in an essay that has lots of citations for you to dig through and follow here: https://www.theintrinsicperspective.com/p/the-planetary-egregore-passes-you
I've read The Revelations and The World Behind the World—both excellent! And thank you for the link. I'll check it out!
Perhaps in your novel your researchers discover crosstalk between each of our versions of self in nearby quantum-theory-defined multiworlds, where nearby worlds give us parallel deterministic minds linked into functional free will, as well as some subconscious reach into the future via connections to worlds similar to ours but advanced in time.
You've got some serious plot twists going on there!
Also check out Neal Stephenson's (fictional) books The Diamond Age and Fall, where he discusses hive consciousness
Oh cool, I haven't read those yet, thanks for the recommendation.
In research I go with via negativa (no-go theorems):
Proof based on Computer Science that mind is not reducible to computation:
- https://philsci-archive.pitt.edu/22880/
- https://www.youtube.com/watch?v=kuziE01rh6M&list=PLJzOclJMoIC1pnJIPlmWhGwaDWD04tFza&pp=iAQB
Proof based on physics that mind is not reducible to structure:
- https://arxiv.org/abs/2307.06783
- https://www.youtube.com/watch?v=BobtUr3nLLg
Good resources! Had not seen that second paper on the irreducibility of minds to structure, but I can't help but point out that it looks to be, funnily enough, kind of like a more formalized version of an argument given in The Revelations.
Edit: Oh the author is you I see! That's awesome. To be clear, you're proposing that there are transformations you can do to the state of the universe that change the parameter space, and therefore, even if the Boltzmann brain problem is taken to be solved because of some select state, those counterarguments don't apply?
That's basically the fundamental problem of the argument that I gave in The Revelations, which is that if materialism is true, you're far more likely to be a Boltzmann brain or a butterfly's dream or what have you: some other structure bearing at that moment a similar perspective. But of course that perspective ends up containing no true justified beliefs about the world. And I argue that by saying that true justified beliefs are very complicated things and you're far more likely to get an illusion of true justified beliefs (e.g., some ephemeral structure thinking that it's a human and then disappearing immediately and so on). But one of the counterarguments is just that Boltzmann brains simply don't really exist in our current state of the universe. But if you can show that there's a certain subjectivity in order to sort of generate an infinite amount of perspectives according to some parameter space, then you can show that Boltzmann brains effectively always do exist and therefore materialism can't possibly be true.
I'll have to read The Revelations, can you please give me a link?
In my proofs (both against computationalism and against structural realism/physicalism) I used the symmetries of the structure. Yes, the results look like Boltzmann brains, but not because of fluctuations; rather, the symmetries of the structures allow such brains to be un-hidden/decoded within the structure, and they are already there. And this contradicts the observation that there is a correlation between what we think of the world and the properties of the world.
I'd be happy to explain these to you in a discussion, get some pro/against feedback, and also to learn more about your views.
Absolutely Cristi. I'd love to chat more. Can you shoot me an email to set something up, at my last name first name (all one word) at Gmail? (not typing in directly due to spam bots that trawl the web).
And here's a link to The Revelations (perennially on sale) at Amazon: https://www.amazon.com/Revelations-Novel-Erik-Hoel/dp/1419750224
I need to study your papers carefully, but here are some preliminary thoughts:
1. About computers and minds: Of course computers are missing their own interpretation, we provide it for them. Minds might have several levels of interpretation with each higher level providing an interpretation for the level beneath it. But at the highest level, even a mind needs an external environment to provide an interpretation, or a fitness function.
2. Minds vs physical structures: the important correlation is not between an external physical structure and a structure in our brains, but a correlation between correlations of structures in our brains and correlations of structures in the physical world. And these structures include time, are processes. The transformations of structures leave the second order correlations of correlations invariant. Forgive me if I am talking nonsense, this is mostly intuitive for now.
Also certain reparameterizations are topologically forbidden...
Great, because the way I understand these objections, I took them into account :) Maybe they are not phrased how you think of them, but when the time comes I think I can develop on this.
Please see my comments on your YouTube videos. I still need to read the papers... I am curious whether you have read any of Terrence Deacon's work.
I don't know if I can convey this, but I'll try.
In the past I've read many writings claiming to disprove Bell's theorem. Not with the intention to find them wrong, although I expected this, but when I read something I try to give it a chance. Ultimately the best of them boiled down to a huge percent of interesting and perhaps true reasoning, plus a small percent of smuggling an assumption that is not evident to see. Think about a long math calculation which is 99.999% correct, but where you get the wrong sign somewhere during the calculation (a mere 0.001% of it). And the same is true with proposals of perpetual motion devices. It all seems to add up and more energy "emerges" out of less energy, but this never happened. And the same happens with claims to explain how sentience emerges from insentient stuff.
Now in the case of "Incomplete Nature: How Mind Emerged from Matter", at a first sight it looks very interesting, I am sure I can learn many things from reading it. But they're all about structure and its dynamics. So, if this book "seeks to naturalistically explain 'aboutness'" as its wikipedia page says, it seems to me it proposes in fact a description of structures that can be interpreted as aboutness, not an explanation. Also, no matter what constraints you add to the structure, it helps a lot with the structural part, but it doesn't help at all with the part about the emergence of sentience. I say this because I have a no-go theorem. But, since I didn't read the book, and even if I would I would have to make sure I understand what the author really means, I can't say where it smuggles an interpretation of structure on which we project things and then think it found an explanation of aboutness. From what I know we do, it is everywhere, in the way we interpret everything. We project meaning on structures and their behaviors all the time, from assigning bits of information to physical states to assigning sentience to the structure of stuff that we presume to be inherently insentient. So no matter how good his proposal is to explain the easy problems, I don't think it can explain the hard one. It can only make us assign sentience to structure more convincingly.
My result shows that, regardless of what internal constraints we assume, infinitely many exact same structures can be found and interpreted in the same way as having the exact same internal constraints and being able to emerge aboutness, but they are insentient, otherwise there would be no correlation between what we think we know and the external world. That is, they differ in the external constraints, but should it matter if a structure identical with a brain exists for a brief moment in an unconstrained environment, as long as it has the same structure as the brain existing in a constrained environment, for a brief time interval in which the differences in the environment don't affect it? So the particularities of the structure and its dynamics are not sufficient. That's all, from my point of view there is no way to "emerge" sentience out of structure, just as there is no way to "emerge" more energy out of less energy.
But maybe you have a reason why you indicated this book to me.
The point of constraints is three-fold: 1. they are not reducible to smaller scales. They are necessary but not sufficient for emergence. 2. Each level of consciousness has constraints that provide a boundary between it and higher and lower levels (a finite, not infinite chain), sometimes in the form of membranes, but maybe different for brains 3. An interpretation is particular to the form of these constraints, and in a sentient system is not arbitrary, or imposed by other sentient beings. This whole thing reminds me of creationist arguments or post-modern arguments: the counter-argument in both cases is that not all possibilities are selected for by the environment.
Quantum mechanics is incomplete, not only because of the inconsistencies with GR, but because it doesn't have a good part/whole description, aka the measurement problem, which is also related to how it deals with constraints (which Dirac tackled, but I don't think solved). That is why I suggested you try to prove your no-go theorem in a purely classical manner.
In the case of the quantum world the constraints don't reduce at all the ambiguity, I showed it here https://arxiv.org/abs/2102.08620 and also here https://arxiv.org/abs/2401.01793. The article linked at the top of this thread is a natural consequence of these results. I think anyone who can follow the math can see it just like you see that 2+2=4.
About the classical case (which is not the case of our world anyway) I already linked in the youtube reply from four days ago two references with two proofs for classical physics (https://arxiv.org/abs/2307.06783, Corollary 1, and another one in https://philsci-archive.pitt.edu/23108/). It really doesn't matter, because I know how to prove it for any dynamical system imaginable.
So, to wrap it up, I have a proof, and anyone who disagrees can try to find a mistake in its math & physics. I would prefer a discussion that engages with the math, not imaginary parallels with discredited things like creationism or postmodernism that have nothing to do with my proof. Which I think is my cue to stop here.
I don't know about Terrence Deacon's work, can you please send me which one you referred to?
Thank you for your comments. Strange thing happened, I replied to your comments yesterday, but they disappeared. I tried again and they reappeared.
YouTube is really not a place for deep discussions. It has algorithms that remove stuff at the slightest provocation. If you have time, read Deacon's Incomplete Nature.
I think the first step is to stop calling consciousness an illusion. Unless, of course, you’re a not very philosophical zombie. Then you have every right to.
For me, by far the most compelling explanation is derived from the highly predictive research of mathematical neuroscientist Stephen Grossberg (who basically wrote the work which was awarded this year’s Nobel prize in physics - this is an interesting case of how the institutions of science are more political than scientific, but I digress) and Gail Carpenter.
The theory, Adaptive Resonance Theory – summarised in Grossberg’s 2021 book Conscious Mind, Resonant Brain – is highly mathematical in a way which is non-intuitive for most mathematicians; but then, understanding gravity required a non-intuitive conceptual leap too. If you haven’t engaged with his work before, I strongly encourage you to do so – you might find what you’re looking for.
On a separate note, I don’t think we’re too far away from a Theory of Everything either (I plan on writing a blog about this) – in a similar manner to Grossberg’s ART, the mathematics which underpins unification of general relativity and quantum mechanics is horribly non-intuitive, relying on noncommutative geometry.
It's a neural network, but with a Matryoshka-type topology, meaning there are clusters of neurons that process low-level information, embedded in higher-level clusters, for at least 2, maybe more, levels, and where the information goes both ways along the edges/synapses in non-trivial ways (time delays, going up for a while, then going down, then up, attempting convergence of a recursive loop). There is also a time component to the information flow. There are utility functions to be minimized at each level, and the utility function to be minimized is the difference between 4D patterns in the lower level vs the higher level, as opposed to just 3D state differences, with the highest level's utility function being the difference between that level's 4D patterns and the external world. This can generalize beyond individual humans to groups of them, where the individual humans are just a level below the group.
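The core loop of that idea can be caricatured in a few lines. This is a deliberately minimal sketch of my own (all names are made up), which keeps only the nesting and the per-level error minimization; the downward flows, time delays, and 4D temporal-pattern utility functions described above are omitted, and for simplicity the bottom level is the one driven by the external signal:

```python
# Toy nested predictive levels: each Level keeps a running prediction of the
# signal arriving from the level below and nudges it to shrink the prediction
# error (its "utility function"). NOT a published model, just an illustration.

class Level:
    def __init__(self, lr=0.5):
        self.estimate = 0.0  # this level's current prediction
        self.lr = lr         # learning rate for the error-minimizing update

    def observe(self, signal):
        error = signal - self.estimate   # the quantity to be minimized
        self.estimate += self.lr * error
        return self.estimate             # passed upward to the next level

levels = [Level(), Level(), Level()]     # Matryoshka: low to high clusters

for _ in range(100):                     # a constant external signal of 1.0
    signal = 1.0
    for lvl in levels:                   # information flows upward only here
        signal = lvl.observe(signal)

print(round(levels[-1].estimate, 3))     # every level converges toward 1.0
```

Even this stripped-down version shows the one property the comment leans on: each level only ever sees the level directly beneath it, yet the whole stack ends up tracking the external world.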
Qualia are just the processes happening in the body/brain.
One question that basically only I and a few other people (mostly IIT-involved or IIT-adjacent) have looked at with any precision is: if you have physical correlates of consciousness, why are they at a particular spatiotemporal scale? Why not at the scale of atoms? And so on. If information is flowing, which scales matter (as you suggest, higher-level clusters)? This gets ignored in a lot of consciousness research.
Levin (as you know) has argued (with some experimental backup, though he is careful to call it "agency" instead of "consciousness") that consciousness happens at all biological (and therefore in silico, IF we get the topology right) scales, but they are not arbitrary, they have some distinct boundaries, with the lowest level being gene networks (I disagree, I think a level needs a membrane, hence I would say organelles). These are easy to see in multicellular organisms (MCOs) when we exclude the brain, as 3D membranes (separating 2 3D regions, lower and higher levels). For the brain, the clusters might have a 4D character (because the main goal of the brain is to predict and coordinate, which are time like activities). The other thing we can learn from studying the organization of MCOs is that lower level utility functions could go rogue and become misaligned from higher level ones (i.e., cancer) which I think is the main reason for the existence of the Matryoshka topology: a balance between the benefits of synergy and the cost of monitoring and punishing free riding, leading to a certain number of parts before a new level is beneficial (each level is in charge of the monitoring only within itself, where the number of parts is relatively small).
Could this free-riding bookkeeping exist in the brain? The evidence for it is multiple personalities (when they are not integrated, and their goals conflict with each other).
Maybe the appropriate scale is the entire universe and we are mere atoms inside a great big giant mind. It wouldn’t be surprising that we failed to figure it out!
At a fundamental level, I think it boils down to a self-modeling, shared communication protocol that is imprinted on a substrate. While the United States has numerous communicating components, there is no substrate where the information is processed in a temporally coherent fashion, so it is not a conscious entity. A single-celled organism has information-processing capabilities, but has no need for a larger controlling system to handle environmental issues or pressures over longer horizons. Consciousness evolved to exist at a specific temporal range and to control a very uniform collection of cells. That is why we don't have conscious electrons, and probably why plants don't utilize a neural-network-centric strategy. A pancomputationalist view suggests that an artificial neural network combined with robotic embodiment could mean we will soon see conscious AI. The ingredients will be there: homogeneous communication, and self-reflective/self-modeling loops of data over a temporally coherent window where predictive control is required.
I think in terms of theories that don't necessarily solve the hard problem but are still pretty interesting and rich at getting at some sort of scientific answer to consciousness, this combination of (a) solving the binding problem and (b) having a self-model (or a *bounded* self-model), in combination, is probably one of the best approaches, just in general.
I submitted an essay to Adam Mastroianni's essay contest over the summer that was basically about this, and I HEAVILY relied on your book. The essay is borderline incomprehensible but it's the first time I tried putting any of those thoughts on paper. It's sitting in my back pocket, waiting to be rewritten eventually but, like...I would be so chuffed to share it with you, if you're interested. I can email you a google drive link...?
That sounds great. I can't promise to give back notes, but if you'd like to email it to me, if I find time I'll take a look (last name first name at [the G-one that's common])
Sent!
My working definition is basically just that consciousness is electricity + computation, and IIT is a good way of measuring the consciousness of systems in this domain. This is just me drawing the circle around consciousness at the largest level of things associated with consciousness that makes sense to me. One abstract, one physical, since I believe firmly based on intuition that consciousness has to be physically instantiated.
This might interest you then: EM field theories https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2022.1024934/full
This argument reminded me of the brilliant short story by Ted Chiang, "Exhalation", where an intelligent and conscious species is based on a totally different substrate. It is fiction, of course, but it is deep and very thought-provoking, as is most of Chiang's writing.
Chiang is the master.
This reminds one of the transduction theory of mind that Robert Epstein apparently supports. It seems logical: gravity + quantum = string theory => lots of extra dimensions. Maybe the brain is a transducer to quantum field information in these other dimensions?
It's fun to theorize, I'd be interested if anyone has a summarized tutorial / overview of ToCs that are testable or make predictions that could be testable?