141 Comments
Aug 30, 2023 · edited Aug 30, 2023 · Liked by Erik Hoel

I have an idea after reading this, but how would you react to Yuval Noah Harari's definition?

"Consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn't propel the aeroplane forward. Humans don't need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn't do anything. It is just there."

I've always enjoyed Bill Hicks's quote: "We are one consciousness experiencing itself subjectively." I've experienced it many times on psychedelics. My sense of consciousness doesn't feel like it belongs to me any more than to the people around me. Or even sober, thoughts arise that don't feel like my own. They just flow through me, and my experiences shape how I act on them.

author
Aug 30, 2023 · edited Aug 30, 2023 · Author

That's an interesting quote from Harari. It's a very specific metaphysical position about consciousness, and one that has its own problems (as, frankly, they all do). If consciousness doesn't do anything, and is some useless byproduct, how do we know about it? How come we can report so well on it? If it didn't do anything, it'd be merely a massive coincidence that the actual qualia that you experience relates in any way to your cognition, to your behavior and functioning. Why not experience red, instead of pain, when you're injured? The clocks of the material and the mental would have to be fine-tuned to be perfectly correlated, and it's unclear why that would be. This is one major problem with the kind of epiphenomenalism that Harari is advocating for.


"If consciousness doesn't do anything and is some useless byproduct, how do we know about it? How come we can report so well on it?" These questions prove to me that consciousness is real and significant, but that's the problem. The only proof we have is that we report what we consciously experience, but a computer can do that too. How can we humans show what consciousness is in a way AI can't? For me, it brings me back to the Bill Hicks quote. It also makes me think that human languages aren't capable of defining the truth of consciousness. It starts to make me think of Gregory David Robert's quote (that isn't about consciousness but about Truth). "There’s truth that’s deeper than experience. It’s beyond what we see, or even what we feel. It’s an order of truth that separates the profound from the merely clever, and the reality from the perception. We’re helpless, usually, in the face of it; and the cost of knowing it, like the cost of knowing love, is sometimes greater than any heart would willingly pay. It doesn’t always help us to love the world, but it does prevent us from hating the world."


You can find the rebuttal to epiphenomenalism here:

"From our perspective the function of our bodies is to maintain our Experience. Our Experience is the end point of what our bodies do. This might sound esoteric but just look and walk around: what is happening is your Experience. The continuance of this is what your body does. Our primary choice is to nurture the form of our Experience, to choose between chaos and calm, love and hate."

See https://drsimonrobin.substack.com/p/our-reality for a full account.

Whose perspective is Nolan taking?


re: the Harari quote, I think he was framing it excessively negatively - it's not useless or a by-product, but he was right about "it is just there."

"If consciousness doesn't do anything, and is some useless byproduct, how do we know about it?"

Because we are it! It isn't useless - the perception that you are you (and the limits of that) is important for categorizing the sensory inputs we receive into "I need to process this because it's important" or "that's just a rock over there, who cares." We then mentally backed into our definition of "me" as "the conceptual category of things that are affected by the important things," where the actual "importance" was whether or not it perpetuated the genes that selected for brains that could be, and perceive, consciousness.

"If it didn't do anything, it'd be merely a massive coincidence that the actual qualia that you experience relates in any way to your cognition, to your behavior and functioning."

But I humbly suggest you're doing exactly what your title says isn't up for debate: you're expanding the definition of consciousness to cover "accurate processing of input and creating action based on it."


If consciousness is "accurate processing of input and creating action based on it," then I have to ask whether the kidneys are conscious, or muscle cells, or strands of DNA. Or, going in the other direction, is a tribe conscious? Or a nation?


Epiphenomenalism is basically woo. The roar of jet planes disturbs and heats up particles in the air, etc. There's no such thing as a phenomenon that has no effect on other phenomena.


I'd call epiphenomena vs. phenomena a false dichotomy, like illusion vs. reality.


Vaguely similar to the idea set out by Donald Hoffman in his book The Case Against Reality.


This doesn't seem right to me. Maybe if you mean the higher-order "thinking," which seems to be a byproduct of language that social animals have. But experiencing pain seems to actually be necessary in order to get creatures to avoid things that will kill or injure them. There's a condition, congenital insensitivity to pain (CIP), which is extremely rare and genetic; people with it can't feel pain and therefore are constantly in danger of dying from small things that normal people would notice and avoid... like a burn, a cut that causes bleeding, or a stomach ache indicating your appendix has burst. And on the other side of the coin, absent pleasure and other conscious experiences we would perhaps never have any reason to do anything at all.


We perform the “lipstick” test on some mammals and they pass it. Seems like consciousness did something.


I don't think the lipstick test has anything to do with consciousness. After all, you could build a robot that passes the test.


I think the lipstick test has something to do with consciousness because one has to have some subjective experience to have a subjective experience of one’s self. The robot example has me stumped though. If we think the robot passes the lipstick test with no consciousness, then what evidence do we have that the smart mammals aren’t doing it the same way?


In fact, defining consciousness isn’t what is standing in the way of developing a scientific theory of it. Maybe it’s our solipsistic inability to confirm that anything is conscious at all.


Maybe it's not about passing one test or another but about how a system has arrived at the point of passing the test.

For example, I see people discussing consciousness as proof that they're conscious. ChatGPT can also discuss it, but only after reading millions of human texts about it. It did not arrive at this concept by itself, unlike humans (or at least the first humans to think about consciousness).


That doesn’t sound wrong to me. A noble stab at a very hard problem!🤔


Why not start with Experience rather than consciousness? After all, everything that is currently experienced must be in Experience. If we have consciousness, it must be there, in Experience.

See https://drsimonrobin.substack.com/p/our-reality


This discussion is interesting to me as a plant biologist who has enjoyed following my colleagues' arguments about whether or not plants have consciousness (and intelligence... and a nervous system). (See https://www.nytimes.com/2018/02/02/science/plants-consciousness-anesthesia.html for one of the issues they are fighting about.) The "what it's like to be" definition doesn't really seem to help here. How would you apply this definition to non-humans?


Not sure whether you got a response from Erik to this? Maybe I'm a bit weird, but I would think it just as easy to imagine that there is something it is like to be, say, a sunflower as a bat... leaning toward the warmth and light of the sun, feeling weak and collapsing in the absence of water.


As long as the definition of consciousness leaves out reference to brains, nervous systems, and neurons, I think we can talk about plants having consciousness. But that seems to dilute the definition to the point of having little meaning?

Aug 31, 2023 · Liked by Erik Hoel

Yeah, I think that's the problem with hard definitions - they are abstractions, and abstractions are always problematic at the edges. I did like imagining for a second what it could be like to be a sunflower, following the sun, feeling its brothers nearby, the pollinators coming around... without neurons!


Agree--plants are cool and fun to empathize with, whether or not they resemble humans in these particular ways!


I'm not a plant biologist, or even a biologist... but at the risk of going down a rabbit hole, and another definitional challenge, is a nervous system in some sense a possibility with plants? I remember an article in the New Scientist talking about how a particular plant was susceptible to anaesthesia. Or maybe I am still traumatised by watching Day of the Triffids at a young age.


You remember the article correctly, but a response to anesthesia doesn’t require a nervous system if the effect is general, such as on cell membrane composition. There are a lot of shared molecular components--neurotransmitters and channels--but none of the structural or cellular components like neurons, synapses, or ganglia.


I seem to remember reading that the mechanism that fly traps use to catch flies involves action-potentials. (Maybe neurons?)


Action potentials, yes! Neurons, no, unless it’s a very relaxed definition. But that’s the whole question--how relaxed can definitions be until you are saying that a bacterium has consciousness and intelligence? Not that such a statement would be wrong, but it would be a different way of using these terms than we do for humans or other animals.


Our neurons are causal to our consciousness. The best evidence may be brain-machine interfaces. For example, cortical stimulation of neurons can produce conscious experiences, and these experiences can be manipulated to be either conscious or not based just on stimulation parameters tightly linked to neuronal activity. But that doesn't say anything about whether plants can have conscious experiences, I think.


It seems to me that it's obviously necessary for animals to be conscious because they have to do things in the world to stay alive. They must find food and eat it, and they must avoid being eaten. Without the experiences of hunger and fear and other things that it is "like," why would they do anything at all? Whereas a plant can't do anything but stay exactly where it is and grow or die. Plants have evolved defenses like being poisonous or having thorns so that they don't get eaten, but they can't run away. And they don't have to find food... the sun is there every day, and water either is or is not. So by Occam's razor it doesn't seem necessary for a plant to be conscious when all of its functions would be easier if automatic. Granted, the same could be said about animals, but they have SO many more decisions to make, having to move around the world, which makes things much more complex. And social animals must not only take into account their own interests but those of the group as well, to survive, which is orders of magnitude MORE complicated. I do think social complexity requires more consciousness because of the increased complexity, trade-offs, and tug of war between what the creature would like to do versus what others in its group would like it to do. It certainly seems to be the case that all the most intelligent animals are also the most social.


I think you are seriously underestimating the life of a plant. True, they can’t move to where there is more food (sunlight) or to get away from predators. But that means that they have to be MORE responsive, not less. They drop leaves if it’s too sunny or grow more if it’s not; they grow roots towards water and to support themselves. And they have to make a lot of decisions about growth versus defense, about losing water versus taking in carbon dioxide, and so on. The molecular life of a plant is just as complex as, if not more complex than, an animal’s, and it is dynamically changing at the same rate!


You had me at “I think”😉


No, no, you misunderstand, I absolutely appreciate and understand the complexity of plants. I am an obsessive-level gardener and spend hours every week working with my plants trying to figure out what it is they need to thrive. The irony here is that I literally wrote my comment after spending 10 hours today working in my yard, and had just collapsed in exhaustion on my couch and decided to scroll a bit. 😊

But it is not complexity that seems relevant to me, but whether or not an organism can DO something that would make consciousness useful. I got extremely hot working in the sun today, so I was sweating. Eventually, my body directed oxygenated blood towards my internal organs and away from my extremities, causing a blood pressure drop that made me see stars and get dizzy every time I stood. Those were not conscious functions, nor functions I had any control over at all; they were automatic processes my body took to protect itself from heat, and they required no consciousness.

However, there was a much better and faster way for my body to deal with the heat, which was for my mind to decide to walk inside, cool off, and have a drink. But my mind had other aims. A plant can't do that; it can't seek shade or move, so I don't see what benefit consciousness would provide. It's stuck. Automatic processes such as closing or dropping its leaves are all it's got, and consciousness doesn't seem to provide a benefit any more than my being able to decide whether or not to sweat would. If I could have decided, I would have made the dumb decision not to sweat because it makes me feel gross.

I suppose there are situations where it would benefit a plant to be able to understand the environment around it and make conscious decisions. I prune almost daily, because it gets me more blooms. That's in my interest (I like to see the blooms and make the color last as long as possible) but not in the plant's. I just dump what I prune in my compost pile. The plant would rather that an animal eat its seeds and poop them out somewhere else, or get them stuck in its fur, or that the wind carry its seeds, so that they can disperse and reproduce. So if the plant could figure out what I was doing, it would be in its interest to say: screw you lady, I'm not giving you more of what you want; keep pruning me and no more blooms for you!

Also, there's clearly some vague line between an animal behavior that is automatic, versus one that is instinctual (i.e., not learned) but not exactly automatic, versus one that is learned. My dogs pant when hot, automatically, and they can't not pant. One dog seemed to instinctually know to dig in the soil to find the cooler soil underground to lie on, since he did it from the time he was a small puppy. But neither dog understood that water would cool them off, and it took years of us forcing them into the water before they figured out for themselves that nothing bad happened in the water; now when they're hot they will seek out water to get into. To me it seems to make perfect sense that an animal is conscious so that it can feel the discomfort of heat and do something about it, like find shade or water to get into, which it has learned will make it feel much cooler. And that a plant would not be conscious because it has very constrained choices and can't do much at all other than curl up and try to make itself small.

At least, I really *hope* plants aren't conscious! It seems horribly and unnecessarily cruel for them to be, when they have so little they can do to avoid or get away or fight back. Though I have always had a beef with mother nature for making pain TOO painful. It seems excessive for the purpose it serves.

Anyway, I don't at all put the possibility totally off the table. It just seems like it would mostly be unnecessary and something plants don't need to evolve for how they work. Maybe they experience some type of consciousness that does not include pain or suffering... that would ease my conscience indeed, since I am extremely brutal and sadistic to weeds and undesired plants, if they can suffer. I hope they can't. But I should note that JUST IN CASE, I do in fact talk to my garden, giving my plants words of encouragement, telling them how beautiful they are, telling them I know they can make it, apologizing when I have to prune them or move them or when the weather is going to be harsh, and scolding/warning them when they're not performing the way I hope. 😉

So I leave the possibility open, though I very much hope they either aren't conscious or if they are, it's in some nice happy mystical sense like one reads in fairy tales and not to experience pain or suffering.

Aug 30, 2023 · edited Aug 30, 2023 · Liked by Erik Hoel

The notion that "there is something it is like" is mysterious and inscrutable and probably doesn't even mean anything. This is no "definition." It is vacuous nonsense, and if this is what supposed "experts" agree on, then I question whether any of them are "experts" in the relevant respect.

As Mandik (2016) puts it, regarding invocations of this phrase to describe phenomenal consciousness:

“One phrase that might seem to break us out of the circle of technical terms is the phrase ‘something it’s like’, for there are non-technical uses of ‘what it’s like’ [...], and phenomenal properties are supposed to be those properties in virtue of which there is something it’s like to have experiences. However, to my knowledge, the syntactic transformation from ‘what it’s like’ to ‘there is something it’s like’ occurs only in technical philosophy-of-mind contexts. This makes me doubt that non-technical uses of ‘what it’s like’, which sometimes (but not always) are employed to pick out mental states, are employed to pick out a peculiar kind of property of mental states. When, for example, pop stars sing about knowing what it’s like to fall in love, they give little evidence of attributing so-called ‘phenomenal’ properties, as opposed to whatever other properties a meta-illusionist can readily grant are seemingly instantiated by love states. The hyphenated ‘what-it’s-like’ in Frankish’s ‘“what-it’s-like” properties’ [...] is yet another technical term shedding no light on the term ‘phenomenal’.” (p. 142)

As Mandik concludes, “We have then, in place of an explicit definition of ‘phenomenal properties’, a circular chain of interchangeable technical terms — a chain with very few links, and little to relate those links to nontechnical terminology. The circle, then, is vicious.” (p. 142)

Terms like "what it's like" form part of a mutually interdefining set of terms that never bottom out in expressing anything substantive or meaningful. Nagel has not offered a definition, but just one empty turn of phrase as a supposed characterization of another equally empty phrase.

If this is a "definition" the supposed experts agree on, then I simply reject that the people in question are experts on the matter, and would suggest, instead, that they are experts at having very low standards for an adequate explanation of a phenomenon.

References

Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23(11-12), 140-148.

author

Well, illusionism being true is a particular metaphysical position, and would certainly be a shock of scientific definition. Personally, I don’t think it’s coherent. But let’s put the truth or falsity of illusionism aside completely, and notice that it doesn’t actually contradict the definition of consciousness as “what it is like to be.” In fact, it relies precisely on the definition! For it’s saying that definition is an illusion. So clearly an illusionist and a non-illusionist (like, say, a type B materialist) would agree on a naive definition, the phenomenon to explain, but would disagree on the further scientific definition (with the illusionist saying the scientific definition is basically simply that it doesn’t exist, but naively appears to, and the Type B materialist giving some sort of complex neural-qualia relation laws).

Sep 1, 2023 · edited Sep 1, 2023 · Liked by Erik Hoel

Be very careful about calling Mandik's view incoherent, no matter how politely, tactfully, and non-confrontationally you do it. He might tell you to "f**k off" and then block you. At least, that's what happened to me on Twitter when I argued that our conscious experience is the only thing we can be 100% certain of, and that all of our knowledge about the outside world comes to us through the contents of our experience. His response was "F**k off solipsist" (except without the asterisks), and then he blocked me.

As you may have surmised, I'm still kind of butthurt about it. 😖


Whether illusionism is or isn’t consistent with the definition in question is irrelevant to my post. I’m not an illusionist, and the article I cited isn’t endorsing illusionism either.

I’m endorsing qualia quietism and meta-illusionism. I don’t even think “what it’s like” is a bad definition. I don’t think it’s a definition at all, because it doesn’t mean anything.

//But let’s put the truth or falsity of illusionism aside completely, and notice that it doesn’t actually contradict the definition of consciousness as “what it is like to be.” In fact, it relies precisely on the definition! For it’s saying that definition is an illusion. //

Yes, that's why I'm a qualia quietist, and not an illusionist.

//So clearly an illusionist and a non-illusionist (like, say, a type B materialist) would agree on a naive definition, the phenomenon to explain, but would disagree on the further scientific definition //

I agree. And I think the illusionist would be making a mistake to do so, *which is why I'm not an illusionist.*

I'm puzzled at your response. I cited an article that rejects illusionism on the grounds you've outlined, yet you're responding as though I am an illusionist and you're explaining to me how illusionism is consistent with the definition. Did you think I was an illusionist from my response?

author

I don't really see the difference between "qualia quietism" and illusionism. I think the author wants there to be a strong difference, but they seem very similar to me.


You may not consider the difference a strong one, but it is a difference, and I consider it to be a very important one. It also has implications that are directly relevant to this discussion: namely, that I'm claiming that Nagel's "definition" is a pseudodefinition and that, to be blunt, it is completely worthless.

If an illusionist doesn't agree with that, that strikes me as a significant difference at least with respect to whether there is a legitimate and "agreed-upon" definition of consciousness. Given that that's the central thesis of your post, the fact that illusionists may agree with the thesis, but that I don't, seems like a very important distinction! After all, if illusionists agree with you, then I think they're wrong, too!


But the entire point of a phenomenological definition, as explained in the very article you are commenting on, is that we *don't* have an adequate explanation yet, and we are searching for one. To search for one, we use a collectively agreed-upon definition of the phenomena we are interested in. In this case, Nagel's definitions (and their expansions as related, again, in this very article) make up that phenomenological definition.

For instance, all of Mandik's criticisms here would equally apply in 1670 to Newton's lack of an "adequate explanation" for what water is -- and yet here we are, with an operational and scientific explanation for what water is, and there he was, contending with the existence of water.

Do you propose that in a pre-atomic world, discussion of the existence of water was "a mutually interdefining set of terms that never bottom out in expressing anything substantive or meaningful"? If not, then you'll need to propose some reason why consciousness as a concept differs from all of the other pre-scientific explorations and understandings of things we once had which are now concretely scientifically understood, and then propose why we should therefore cease our investigations of it.

It's bizarre to me that you have simply re-stated one of the criticisms that is addressed directly in this very article, without even mentioning or responding to the immediately posed argument against your criticism.


The notion that there's "something it's like" to be conscious isn't sufficient as a starting point for inquiry because it doesn't even mean anything. Those studying consciousness might as well agree that they're studying "flkjlqkjel." I'm denying it even is a definition.

The concerns behind qualia quietism have to do with communicating the meaning of various technical terms. One does not need to invoke technical terms to provide an explanation of what water is. One can describe water in observable, functional, and colloquial terms, many of which are non-technical.

In contrast, I don't think anyone can explain what phenomenal consciousness or qualia are, because I don't grant that there's anything to explain. You certainly can't splash me in the face with qualia or give me a glass of "consciousness" (however those endorsing the hard problem conceive of the notion), as you can with water. Whatever people claiming to believe in phenomenal consciousness claim to believe in, its existence doesn't appear to be evaluable in third personal terms or to be capable of corroborating via observations and predictions...unlike water, for which we can clearly do this.

//Do you propose that in a pre-atomic world, discussion of the existence of water was "a mutually interdefining set of terms that never bottom out in expressing anything substantive or meaningful"?//

No.

//If not, then you'll need to propose some reason why consciousness as a concept differs from all of the other pre-scientific explorations and understandings of things we once had which are now concretely scientifically understood, and then propose why we should therefore cease our investigations of it.//

I think this is predicated on a rather profound misunderstanding of what I've said. No, I don't think I need to do this. This is simply about explaining terms, not about offering some kind of scientific account of anything. I didn't say we need some kind of scientifically robust account of consciousness, so I'm not sure why you're saying any of this.

//It's bizarre to me that you have simply re-stated one of the criticisms that is addressed directly in this very article, //

What do you take my criticism to be, and where is it addressed in the article?


> The notion that there's "something it's like" to be conscious isn't sufficient as a starting point for inquiry because it doesn't even mean anything.

And yet I understand it perfectly. Perhaps you're misusing the term "doesn't even mean anything"? Or perhaps you do not have subjective experience, and so this term doesn't mean anything to you? I'm really not sure what you're arguing.


Can you explain what it means?


The flippant yet literally true reply would be to say that if you don't know what it means for existence to feel like something, you will never know. But that would also be an obnoxious reply.

I'm going to get very woo-woo mystical here for a second: please bear with me, I am trying to convey to you the handful of things I know for certain to be true. They are:

1. An experience is being had. (This is just Descartes' cogito, 400 years on still the only 100% ground truth as far as I've ever been able to reason.)

2. This experience seems to be from the perspective of a particular primate body, navigating an environment outside of itself. (Note "seems". I'm confident that it seems this way though!)

So I guess I'd say that the question, "is it like something to be a bat", boils down to something like: is there a version of 1 for which 2a is true, where 2a is:

2a. This experience seems to be from the perspective of a particular bat body, navigating an environment outside of itself.


I don't think there's much of a misunderstanding here, we just disagree. To simplify, I'll just address two of your points: 1. What I take your criticism to be and 2. The carrying-on of the analogy to water.

I take your criticisms primarily to be that the definition of consciousness given in this article is "vacuous nonsense" and "part of a mutually interdefining set of terms that never bottom out in expressing anything substantive or meaningful." Am I correct in this assessment of your criticisms? If so, I'll proceed. If not, let me know.

First, your claim that the definition is "vacuous nonsense" is a bare assertion without evidence, referent, or explanation, so it can simply be dismissed outright.

Next, your attempt to place "what it's like"-ness in the context of some circular, nebulous hand-waving at nothing is I think ill-founded. The reason I think this is because you can indeed do the same with pre-scientific definitions of water.

In exactly the same way as you refuse to admit that you experience things consciously, someone in 1670, upon being splashed in the face with water, could simply say, "No, I have no experience of this substance you call 'water', and I reject outright your expertise on the matter."

Well, the fact of the matter then would be that we can just all see that the person has been splashed in the face with water, no matter how he protests. But that's all we can say! The issue remains that we cannot truly PROVE that he sees and feels the water the same as we do, because that is an object of his experience, and we're all relying on a collective agreement that our sensible understanding is roughly shared when we point to water and say "yep, that's water."

And now, just as then, most of us can all see that you have an experience of "what it's like" to be a human. When you are angry in traffic, pleased by a dessert, frustrated by a misunderstanding -- we all see this, just as we see water on a person's face.

This is why, again in this article, it's stated that the actual sophistry is occurring on the side of the person claiming consciousness doesn't exist. It's the same sort of sophistry as you would run into from a Berkeleyan Idealist who claims the water does not exist.

The reason notions of science and a scientific account of the concept come in is simply because once we illuminate a concept or a phenomenon to the degree that we now have for, say, water, your particular brand of sophistry transitions from generically loony but perhaps clever to functionally extinct. The onus is on you to explain why a similar evolution won't occur for the phenomenon of consciousness that we are all referring to.


I don't see how Bayesian statistics can help you here.

There are either things that cannot have a 3rd person definition or there aren't.

If there are not such things, then there is no hard problem of consciousness, and we who talk about it are confused.

If there are such things, then there is nothing weird about a concept that does not have a 3rd person definition.

For you to have a problem with the second option you need to assume that this option is impossible or unlikely. Based on what can you assume that?


"its existence doesn't appear to be evaluated in third personal terms or to be capable of corroborating via observations and predictions.." - you have just restated the hard problem of consciousness.

It seems to me that you are using the hardness of the problem as evidence that the problem cannot be real, and therefore that qualia, for example, is a nonsensical term? Is that correct?


I read Lance's criticism as the typical position taken by people who have internalized the idea that post-Galilean, quantitative, empirical observation of physical quantities according to the scientific method is the one and only means of arriving at knowledge. Because Galileo specifically designed the scientific regime to exclude the qualities of subjective consciousness, this means that subjective consciousness, by definition, cannot be a subject of scientific observation. The logical fallacy that Lance and many others practice is that, ergo, subjective consciousness is "not real" in some way. See, e.g., Dennett's position, which I could almost believe is supposed to be a joke: "you're not experiencing having thoughts, you're just experiencing THINKING that you're experiencing having thoughts!". If Lance wants to talk about statements that "don't even mean anything", I propose that one as a candidate.

Devout empiricists are like the proverbial drunk looking for his keys under the lamppost, not because he dropped them there, but because the light is better. Or, to borrow another adage, if all you have is the hammer of empirical observation of physical quantities, then every problem looks like the nail of physical quantities.


It may help if you rephrase the question. As it is posed, my answer is a definitive "no," since I don't think there is a hard problem. If, instead, you are asking whether I infer that, when a term purports to describe a phenomenon that cannot be described in third-personal terms, this is some evidence that the term is nonsense, then yes, for simple Bayesian reasons: being describable in third-personal terms is one way a term could be shown to be meaningful, so a term that fails to meet this condition forecloses at least one way by which it could be shown to be meaningful. That doesn't mean there aren't other ways, but it does limit one's options.
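To put rough numbers on that Bayesian point (the prior and likelihoods below are invented, purely to illustrate the direction of the update, not anything I'd defend):

```python
# Toy Bayesian update with made-up numbers: M = "the term is meaningful",
# D = "the term can be described in third-personal terms".
prior_m = 0.5            # assumed prior credence that the term is meaningful
p_d_given_m = 0.7        # assumed chance a meaningful term is third-personally describable
p_d_given_not_m = 0.1    # assumed chance a meaningless term is

# We observe not-D: the term resists third-personal description.
p_notd_given_m = 1 - p_d_given_m          # 0.3
p_notd_given_not_m = 1 - p_d_given_not_m  # 0.9

posterior_m = (p_notd_given_m * prior_m) / (
    p_notd_given_m * prior_m + p_notd_given_not_m * (1 - prior_m)
)
print(posterior_m)  # 0.25 < 0.5: some evidence against meaningfulness, not a proof
```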


My finger is something like a centimetre across. "What it is like" is about correlations. The width of my finger is correlated with the marks on a ruler. We can go further than this: take estimates of the width of my finger from a hundred people, and the average estimate will be very nearly the 1 centimetre given by a direct measurement.

In the absence of being able to stick a ruler into the brain, we are left with estimates of what Experience is like. It is like a sphere of events, it is like it is arranged around a centre point, etc.
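As a toy version of that averaging claim (the normal noise model and the 2 mm spread are just assumptions to make it concrete):

```python
# Simulate a hundred noisy eyeball estimates of a ~1 cm finger width and
# compare their average to the "direct measurement" of 1 cm.
import random

random.seed(0)
true_width_cm = 1.0
estimates = [random.gauss(true_width_cm, 0.2) for _ in range(100)]
average = sum(estimates) / len(estimates)
print(round(average, 2))  # typically lands very close to 1.0
```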

Aug 30, 2023 · edited Aug 30, 2023 · Liked by Erik Hoel

> First, there would need to be strong evidence that our definition of “consciousness” is indeed weaker than naive pre-scientific definitions of other phenomena. I don’t think there’s any good evidence of this.

The deal-breaker, of course, being that there *is* a significant distinction between consciousness and other phenomena, namely, consciousness is a condition on the existence of *any* phenomena qua phenomena.

For things to show up, there must be a subject for them to show up *for*.

What would, or could, count as good evidence in this case? There's a significant amount of metaphysical and methodological baggage smuggled in here which goes unacknowledged.

This isn't a bad thing in itself, if you want to attempt the empirical study, but it does need to be made explicit.

> Both Chalmers and Koch would know a neural correlate if they saw it, and are happy to agree they both don’t see one.

Would they?

If you want to get snotty about it, every time the fMRI lights up in a brain scan with a conscious patient, you have a neurological correlate of consciousness. That's why it is a correlate and not a cause or condition.

I'm not at all convinced that there can or will be any mechanistic explanation in the way that, say, you could explain in principle the behavior of H2O molecules by reference to the laws of atomic motion.

It's far too easy to slide from the irreducibly subjective qualities of experience (much less the characteristics of *thought*) to the sorts of material and mechanical explanations that fly in natural sciences. I think it's a logical gap that *cannot* be crossed.

The trouble arises in a subtle, hard to see change of subject in the move from identifying the phenomenon, prior to any rational or operational definitions, to implicitly assuming that there can be satisfactory evidence from empirical discoveries and the narrower sorts of mechanical explanations that go into so much of them these days.

The real trick is separating out the implied assumptions *there* -- computation and information, for example, are observer-relative properties, yet these cog-sci theories all help themselves to them as if they were appealing to fundamental physical laws -- before using them to create a circular argument. We explain consciousness by pretending we aren't appealing to concepts that depend on consciousness. A neat magic trick, but not terribly convincing to the outsider.

Conditions for identification are notoriously difficult to hammer down in a way that precise explananda are not. Wittgenstein had interesting remarks about the "family resemblance" of many concepts, which we can know, as it were, "intuitively", without being able to nail down an explicit meaning. Some interesting phenomena, perhaps many, simply may not have the kind of identity conditions that are open to this kind of explanation.

> We just need to move that familiarity from the first-person to the third-person. But that problem is much harder than one of mere definitions.

To put it mildly.

There's a major problem hidden in the "just", which is somewhere on the order of difficulty between:

- "We just need to build a flux capacitor and we can travel through time," and

- "We just need to shift this three-sided object to a four-sided object and we'll have a four-sided triangle."

The presumption here is that natural science can and will study and explain every phenomenon in broadly the same way, within broadly the same physicalist world-picture. Should consciousness turn out to not fit that schema, and there's plenty of reason to think that it (not to mention other attributes of mind, such as rationality) will not, then that "just" will turn out to be more than a trivial detail.

I stress again that there's nothing wrong with the empirical study of mind and its properties, but I simply do not believe that natural sciences have it in them for a *sufficient* explanation of mental life. Even if they cook up an explanation of mind's material conditions and physical-biological genesis, that doesn't at all indicate we've gotten an understanding of the thing in the way we can say Newton's laws gave us a sufficient explanation of motion at roughly human scales.

Aug 30, 2023 · Liked by Erik Hoel

Nice essay. A couple of stray thoughts. While I expect you're right that a dog's consciousness isn't as "rich" as ours in the self-awareness department, I'd hesitate to rule out altogether, for all animals apart from ourselves, an inner "narration" of "mental gestures" until we have a better idea of how these things arise. Also, nowadays I've learnt to hesitate over the suggestion that "it is consciousness that gives an entity moral value." My understanding is that there may be other routes to being given moral value? Nevertheless, I'm definitely still of the opinion that it requires consciousness for an entity to be in that awkward position of becoming a moral valuer.


Can you give an example of a route towards moral value that does not involve consciousness?


Well, it's an active concern in environmental ethics: non-sentient entities may still have "intrinsic" and/or relational value, some non-Western cultures may offer greater respect for nature, and pluralistic moral bases may be more inclusive.


Well, but do these routes make sense? Would we care about environmental changes on a distant planet if there's no one to experience them?

Many ancient cultures that "respected nature" were animist, that is, they believed that plants and even inanimate objects had some sort of aliveness or soul, which I take to be an idea similar to consciousness.


I'm unclear how "aliveness or soul" could equate to consciousness. Isn't consciousness connected to awareness and ability to examine said awareness? Perhaps other entities - plants, animals - could have this awareness but we aren't sufficiently astute to decipher them.

I still wonder if starting with what separates humans from other creatures and plants doesn't simplify the potential definitional process.


Honestly, I feel like this same rebuttal applies to many (most?) complaints about things that "can't be defined." This is a rhetorical move that is almost always meant to suggest that one's opponents are so conceptually confused that they can't even properly define what it is they are trying to talk about. It is an easy move to pull because we can't actually properly define *anything.* There is the sophisticated W.V.O. Quine take on why this is so, and then there's the intuitive take that if you try to define something simple and concrete like a chair, you can easily get wrapped around the axle. "A chair is a thing you sit on that has four legs and a back." OK, but some camping chairs have one leg. Some have no legs and rest on the ground. How small can the back be before the chair is a stool? At what angle away from level does the seat become so difficult to use that the thing stops being a chair? If it was once a chair but is now chopped into pieces, is it still a chair? A former chair? Etc.

We don't argue about the definition of chair because everyone knows what a chair is. Likewise, most people who spend time thinking about consciousness know what consciousness is: it is the quality of having subjective experience. But unlike with chairs, it is incredibly easy to nitpick the definition of abstract concepts and pretend that this is some sort of profound debunking.

Aug 30, 2023 · Liked by Erik Hoel

I couldn't help thinking about those nonsensory experiences of consciousness, such as the surge out the top of the head during deep meditation, out-of-body experiences, floating by the ceiling, dream paralysis. To some extent these are all located in (and out of) the physical senses.

As an example, I am profoundly Deaf, but when the rubbish guy enters our street and starts yelling "basura!" I sometimes text my partner and ask her "is the rubbish here yet?" and she is like "yes! How do you know?" I like to think that perhaps my sense of consciousness extends far from my body and I become aware when the rubbish guy enters that sphere of awareness.

I studied the literature on the phenomenon of consciousness during my BA Phil, so I've always had one eye on this field (I preordered your book for this reason), and always wanted someone to explore this sense of awareness that appears to not depend on anything physically sensory. I know that this approaches Locke's inverted spectrum argument, but doesn't quite get there IMHO.


The consciousness researcher who best uses language to refine what we mean by consciousness is actually a philosopher, Thomas Metzinger. Given your interest in nonsensory experience, you might be interested in his attempts to turn the idea of pure consciousness into something that we might study: Metzinger, T. (2020). Minimal phenomenal experience: Meditation, tonic alertness, and the phenomenology of “pure” consciousness. Philosophy and the Mind Sciences, 1(I), 7. https://doi.org/10.33735/phimisci.2020.I.46


Thanks for the reference. I'll check it out!


This post is excellent. As an engineering undergrad, I got enough exposure to neuroscience to know to ask these questions but was left to myself to try to find answers, with results I would characterize as lazy, haphazard, and unsatisfying. Posts like this help to bridge that awkward territory between true beginner and graduate-level knowledge, so you at least have a frame of reference for the field. The naive definition strikes me as a parallel between math and programming, where it can be straightforward to define a mathematical relationship without that definition telling you anything about how to compute it, yet both are important and valid.
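A minimal sketch of that parallel (the square-root example and the helper names are my own illustration, not anything from the post):

```python
# The defining relationship: x is a square root of n if x*x equals n.
# Knowing this definition tells you nothing about how to find such an x.
def satisfies_sqrt_definition(x: float, n: float, tol: float = 1e-9) -> bool:
    return abs(x * x - n) < tol

# A separate, computational fact: Newton's iteration actually finds the root.
def compute_sqrt(n: float, tol: float = 1e-12) -> float:
    x = n if n > 1 else 1.0  # starting guess
    while abs(x * x - n) > tol:
        x = 0.5 * (x + n / x)
    return x

root = compute_sqrt(2.0)
print(root, satisfies_sqrt_definition(root, 2.0, tol=1e-6))  # ~1.41421, True
```

The first function is the "definition"; the second is the algorithm, and nothing in the first forces the second on you.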


In physics this is a perfectly cogent distinction, made all the time: it's the distinction between phenomenology on the one hand (the back-propagation of existing measurements onto extensions of existing theories to explore what is otherwise terra incognita, and vice-versa, the extension of existing theories to make quantitative predictions of measurements that haven't been made yet, in an attempt to understand currently unexplained phenomena) and the binary of theory/experiment on the other.

E.g., before first LIGO and then direct imaging gave us the black hole to work with in the traditional theory/experiment sense, we had a number of theories, a roughly agreed-upon definition, and a whole bunch of indirect measurements and data concerning these "mysterious" objects. And indeed, they could be very mysterious-seeming, shrouded as they were in darkness. Yet few people (and even fewer laypeople) really posited the idea that this whole notion of "black holes" was actually a sophistic mess of definitional vaguery (admittedly though the number of people claiming this is non-zero, even now, I just don't think the argument has nearly as much traction as it does with respect to consciousness).

Today, plenty of ongoing fields of study in physics, from cosmology to particle physics to astronomy, operate on a phenomenological basis and don't seem subject to the same scruples of consciousness-skeptics. As far as I know, in philosophy of science the word is sometimes used similarly, though it primarily has a different meaning--but nonetheless I don't see why it shouldn't/couldn't also apply to consciousness here.

author

This is an excellent point. It's also fun to imagine other fields being put to the same scrutiny: "Can you explain black holes without referencing their blackness or is it all just tautological huh?"


Surely you can, because black holes first arose as a mathematical description. I'd be tempted to push it further the other way and ask if you (or anyone) can explain quantum physics without referencing the mathematics.


Loved this one!


Thanks for this. Glad to see you settled on the right definition 😁

So many confuse consciousness with awareness or self-consciousness, or all the other cognitive abilities that we have, in principle, no problem understanding functionally.

I had one conversation, though, that made me wary of the word "experience". A philosopher at the U of Toronto was once discussing his position with me, and finally I had to say, "It sounds like you're using 'experience' in the way we might use 'job experience.'" And he said, "Yes, exactly!"


Yes, I suppose "experience" could be heard by some people as meaning something like "memory", i.e., "past experience".


Yes, like "having been there". There is also this new idea of "lived experience" which sounds phenomenological at first blush, but is often used to mean "is subjected to".


I want to explore this topic more - it seems I made some assumptions or presumptions concerning it.

How would awareness not be a part of consciousness? My ability to notice my consciousness and the sense of myself and of what my physical and other senses bring to me wouldn't be part of it? Could consciousness exist without awareness of it?


The two concepts operate differently, if "consciousness" is reserved for phenomenal experience. A p-zombie could be aware of X, but not be having a conscious experience of X. It's possible that consciousness entails awareness, but not vice versa.


Fascinating essay, Erik. Thank you for posting.

Aug 30, 2023 · Liked by Erik Hoel

I think when people say that consciousness is ill-defined, they might mean a more sophisticated claim than "Some people think of psychological instead of phenomenal aspects of the mind when asked about consciousness." The claim might be that the term is defined very vaguely (what you call the "final resort of the definitional critique") and that it's very difficult to conceive of a path toward better definitions. Take Newton and water. While he doesn't have a perfectly precise and general definition, he can very reliably tell you whether something falls under the category of water if you show it to him. So he at least has a reliable, though very inefficient, definition of water: anything that is a member of the set of all things that he has pointed to and said "water."

In the case of consciousness, I can't really conceive of anything like this at all. What can I point to and say consciousness? My first instinct is to point at my head, or to point vaguely at my surroundings, but from the third-person POV, these are really silly/inadequate and so very difficult to work on scientifically.

It might be difficult to point at other things too, e.g., numbers, electromagnetism, the economy, etc. But I can (very) reliably point to a set of objects with some cardinality, I can reliably point to some reading on a physical sensor, and I can (less) reliably point to certain behavioral observations that are connected to some economic variable through a long causal chain. (Notice how the less reliably I can do this pointing, the more controversial the definition of the thing).

Basically, this is just to say that if I don't have a reliable way to gather samples of things that have some property that I want to investigate, I have a really poor shot at refining the definition of that property through scientific effort. It feels to some, including me, that consciousness is uniquely ill-defined in that it seems like a uniquely difficult thing to point to.

author

Interesting! I'm actually not sure Newton can tell you very reliably whether or not something falls under the category of water. Does he conceptualize, e.g., humidity as "water in the air" or would he not think of it that way? A comet's tail, at least the coma, is made of water, and I doubt Newton would have, if asked to point to instances of "water," pointed there. As in, I don't think he could, just with his naive definition, pick out all relevant instances of water (and where, on Earth? In the universe? Etc). I also think we correctly refer to abstractions all the time, not just physical quantities.


Of course! The naive definition doesn't need to be 100% accurate, just accurate enough that it can lead to generally reliable sampling for further experimentation.

I also agree that we can correctly refer to abstractions. Otherwise, we couldn't do mathematics very effectively. But I think we initially need to have some physical anchor or some concrete examples before we can generalize to those abstractions.


Definitions are a problem in any philosophical discussion. Until we can nail down with some concreteness what we mean, we're typically just talking past each other with our own particular versions of the concepts. I'm always surprised how much adamant resistance there is to this simple observation. I think it's what makes many philosophical discussions endless and unproductive.

The problem with the water example is that Newton always had the option of defining water functionally, in terms of the roles it plays for him. Now, I'm a functionalist. I have no problem talking about consciousness in that way. But it typically results in a discussion of reportability, sensory discrimination and categorization, attention, memory, evaluative reactions, executive control, or some other capability.

The thing is, many insist that functionality is not what they're talking about. (See Chalmers' distinction between "easy" problems and the hard one.) The problem then is getting them to clarify what they do mean. The result is typically ambiguity while denying anything specific enough to be criticized. As Daniel Dennett observed in his 1988 paper "Quining Qualia": "My quarry is frustratingly elusive; no sooner does it retreat in the face of one argument than "it" reappears, apparently innocent of all charges, in a new guise."

Aug 30, 2023 · edited Aug 30, 2023 · Liked by Erik Hoel

Animals like dogs or cats are sentient - they perceive the world around them, have preferences and experience physical and emotional suffering. I think sentience, if that definition is true, would be closer to Edelman's primary rather than secondary consciousness.

It is not a simple subject. Dawkins said, "Consciousness was the most profound mystery facing modern biology." Yes, being either conscious or unconscious is straightforward enough, but that says nothing about philosophical "consciousness" as discussed by academics. Then there's the thorny issue of panpsychism, a spiritual dilemma, right? Id est, "Consciousness is the Ground of All Being" (Paul Tillich).

Very informative piece. Thank you!


> Animals like dogs or cats are sentient - they perceive the world around them, have preferences and experience physical and emotional suffering. I think sentience, if that definition is true, would be closer to Edelman's primary rather than secondary consciousness.

But (depending on definitions, again): neither "perceiv[ing] the world" nor "hav[ing] preferences" is related to consciousness, whereas "experienc[ing] physical and emotional suffering" is. I can make a clockwork machine that has internal states that correspond to perception and preference. But the internal state of having a preference (e.g., a mechanical bias toward one thing or away from another) is, to an outside observer, indistinguishable from experiencing suffering. The machine signals when its preferences are not satisfied by making a loud noise. Just like a dog.


And those bloody microtubules?

author

haha - I don't think looking for a theory of consciousness in physics is a priori a terrible idea, but I think a lot of the specifics of some of the current proposals end up being pretty bad.
