In fact, pretty much all the experts agree
I have an idea after reading this, but how would you react to Yuval Noah Harari's definition?
"Consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn't propel the aeroplane forward. Humans don't need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn't do anything. It is just there."
I've always enjoyed Bill Hicks's quote: "We are one consciousness experiencing itself subjectively." I've experienced it many times on psychedelics. My sense of consciousness doesn't feel like it belongs to me any more than to the people around me. Even sober, thoughts arise that don't feel like my own. They just flow through me, and my experiences shape how I act on them.
This discussion is interesting to me as a plant biologist who has enjoyed following my colleagues argue about whether or not plants have consciousness (and intelligence . . . and a nervous system). (See https://www.nytimes.com/2018/02/02/science/plants-consciousness-anesthesia.html for one of the issues they are fighting about.) The “what it’s like to be” definition doesn’t really seem to help here. How would you apply this definition to non-humans?
The notion that "there is something it is like" is mysterious and inscrutable and probably doesn't even mean anything. This is no "definition." It is vacuous nonsense, and if this is what supposed "experts" agree on, then I question whether any of them are "experts" in the relevant respect.
As Mandik (2016) puts it, regarding invocations of this phrase to describe phenomenal consciousness:
“One phrase that might seem to break us out of the circle of technical terms is the phrase ‘something it’s like’, for there are non-technical uses of ‘what it’s like’ [...], and phenomenal properties are supposed to be those properties in virtue of which there is something it’s like to have experiences. However, to my knowledge, the syntactic transformation from ‘what it’s like’ to ‘there is something it’s like’ occurs only in technical philosophy-of-mind contexts. This makes me doubt that non-technical uses of ‘what it’s like’, which sometimes (but not always) are employed to pick out mental states, are employed to pick out a peculiar kind of property of mental states. When, for example, pop stars sing about knowing what it’s like to fall in love, they give little evidence of attributing so-called ‘phenomenal’ properties, as opposed to whatever other properties a meta-illusionist can readily grant are seemingly instantiated by love states. The hyphenated ‘what-it’s-like’ in Frankish’s ‘“what-it’s-like” properties’ [...] is yet another technical term shedding no light on the term ‘phenomenal’.” (p. 142)
As Mandik concludes, “We have then, in place of an explicit definition of ‘phenomenal properties’, a circular chain of interchangeable technical terms — a chain with very few links, and little to relate those links to nontechnical terminology. The circle, then, is vicious.” (p. 142)
Terms like "what it's like" form part of a mutually interdefining set of terms that never bottom out in expressing anything substantive or meaningful. Nagel has not offered a definition, but just one empty turn of phrase as a supposed characterization of another equally empty phrase.
If this is a "definition" the supposed experts agree on, then I simply reject that the people in question are experts on the matter, and would suggest, instead, that they are experts at having very low standards for an adequate explanation of a phenomenon.
Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23(11-12), 140-148.
> First, there would need to be strong evidence that our definition of “consciousness” is indeed weaker than naive pre-scientific definitions of other phenomena. I don’t think there’s any good evidence of this.
The deal-breaker, of course, being that there *is* a significant distinction between consciousness and other phenomena, namely, consciousness is a condition on the existence of *any* phenomena qua phenomena.
For things to show up, there must be a subject for them to show up *for*.
What would, or could, count as good evidence in this case? There's a significant amount of metaphysical and methodological baggage smuggled in here which goes unacknowledged.
This isn't a bad thing in itself, if you want to attempt the empirical study, but it does need to be made explicit.
> Both Chalmers and Koch would know a neural correlate if they saw it, and are happy to agree they both don’t see one.
If you want to get snotty about it, every time an fMRI lights up during a scan of a conscious patient, you have a neurological correlate of consciousness. That's why it is a correlate and not a cause or condition.
I'm not at all convinced that there can or will be any mechanistic explanation in the way that, say, you could explain in principle the behavior of H2O molecules by reference to the laws of atomic motion.
It's far too easy to slide from the irreducibly subjective qualities of experience (much less the characteristics of *thought*) to the sorts of material and mechanical explanations that fly in natural sciences. I think it's a logical gap that *cannot* be crossed.
The trouble arises in a subtle, hard-to-see change of subject in the move from identifying the phenomenon, prior to any rational or operational definitions, to implicitly assuming that there can be satisfactory evidence from empirical discoveries and the narrower sorts of mechanical explanations that go into so much of them these days.
The real trick is separating out the implied assumptions *there* -- computation and information, for example, are observer-relative properties, yet these cog-sci theories help themselves to them as if they were fundamental physical laws -- before using them to create a circular argument. We explain consciousness by pretending we aren't appealing to concepts that depend on consciousness. A neat magic trick, but not terribly convincing to the outsider.
Conditions for identification are notoriously difficult to hammer down in a way that precise explananda are not. Wittgenstein had interesting remarks about the "family resemblance" of many concepts, which we can know, as it were, "intuitively," without being able to nail down an explicit meaning. Some interesting phenomena, perhaps many, simply may not have the kind of identity conditions that are open to this kind of explanation.
> We just need to move that familiarity from the first-person to the third-person. But that problem is much harder than one of mere definitions.
To put it mildly.
There's a major problem hidden in the "just", which is somewhere on the order of difficulty between:
- "We just need to build a flux capacitor and we can travel through time," and
- "We just need to shift this three-sided object to a four-sided object and we'll have a four-sided triangle."
The presumption here is that natural science can and will study and explain every phenomenon in broadly the same way, within broadly the same physicalist world-picture. Should consciousness turn out not to fit that schema -- and there's plenty of reason to think that it (not to mention other attributes of mind, such as rationality) will not -- then that "just" will turn out to be more than a trivial detail.
I stress again that there's nothing wrong with the empirical study of mind and its properties, but I simply do not believe that natural sciences have it in them for a *sufficient* explanation of mental life. Even if they cook up an explanation of mind's material conditions and physical-biological genesis, that doesn't at all indicate we've gotten an understanding of the thing in the way we can say Newton's laws gave us a sufficient explanation of motion at roughly human scales.
Nice essay. A couple of stray thoughts. While I expect you're right that a dog's consciousness isn't as "rich" as ours in the self-awareness department, I'd hesitate to rule out altogether, for all animals apart from ourselves, an inner "narration" of "mental gestures" until we have a better idea of how these things arise. Also, nowadays I've learnt to hesitate over the suggestion that "it is consciousness that gives an entity moral value". My understanding is that there may be other routes to being given moral value. Nevertheless, I'm definitely still of the opinion that it requires consciousness for an entity to be in that awkward position of becoming a moral valuer.
Honestly, I feel like this same rebuttal applies to many (most?) complaints about things that "can't be defined." This is a rhetorical move that is almost always meant to suggest that one's opponents are so conceptually confused that they can't even properly define what it is they are trying to talk about. It is an easy move to pull because we can't actually properly define *anything.* There is the sophisticated W. V. O. Quine take on why this is so, and then there's the intuitive take that if you try to define something simple and concrete like a chair, you can easily get wrapped around the axle. "A chair is a thing you sit on that has four legs and a back." OK, but some camping chairs have one leg. Some have no legs and rest on the ground. How small can the back be before the chair is a stool? At what angle away from level does the seat become so difficult to use that the thing stops being a chair? If it was once a chair but is now chopped into pieces, is it still a chair? A former chair? Etc.
We don't argue about the definition of chair because everyone knows what a chair is. Likewise, most people who spend time thinking about consciousness know what consciousness is: it is the quality of having subjective experience. But unlike with chairs, it is incredibly easy to nitpick the definition of abstract concepts and pretend that this is some sort of profound debunking.
I couldn't help thinking about these nonsensory experiences of consciousness, such as the surge out the top of the head during deep meditation. Out-of-body experiences, floating by the ceiling, dream paralysis. To some extent these are all located in (and out of) the physical senses.
As an example, I am profoundly Deaf, but when the rubbish guy enters our street and starts yelling "basura!" I sometimes text my partner and ask her "is the rubbish here yet?" and she is like "yes! How do you know?" I like to think that perhaps my sense of consciousness extends far from my body and I become aware when the rubbish guy enters that sphere of awareness.
I studied the literature on the phenomenon of consciousness during my BA Phil, so I've always had one eye on this field (I preordered your book for this reason), and always wanted someone to explore this sense of awareness that appears to not depend on anything physically sensory. I know that this approaches Locke's inverted spectrum argument, but doesn't quite get there IMHO.
This post is excellent. As an engineering undergrad, I got enough exposure to neuroscience to know to ask these questions but was left to myself to try to find answers, with results I would characterize as lazy, haphazard, and unsatisfying. Posts like this help to bridge that awkward territory between true beginner and graduate level knowledge, to at least have a frame of reference for the field. The naive definition strikes me as a parallel between math and programming, where it can be straightforward to define a mathematical relationship that doesn’t actually tell you anything about how to compute it yet both are important and valid.
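That math/programming parallel can be made concrete with a small sketch (the function name `sqrt_newton` is my own illustration, not from the post): the square root of x is *defined* as the nonnegative y with y·y = x, yet that definition supplies no procedure at all; an algorithm such as Newton's method has to be brought in separately to actually compute it.

```python
# Definition: sqrt(x) is the nonnegative y satisfying y * y == x.
# The definition pins down the value exactly, but says nothing
# about how to find it. Newton's method is one way to compute it.

def sqrt_newton(x, tol=1e-12):
    """Approximate sqrt(x) by Newton's iteration y <- (y + x/y) / 2."""
    if x < 0:
        raise ValueError("x must be nonnegative")
    if x == 0:
        return 0.0
    y = x if x >= 1 else 1.0  # crude initial guess
    while abs(y * y - x) > tol * max(x, 1.0):
        y = 0.5 * (y + x / y)  # Newton update step
    return y

print(abs(sqrt_newton(2.0) - 2.0 ** 0.5) < 1e-9)  # True
```

Both the declarative definition and the iterative procedure are "valid," but only the latter tells you how to get an answer, which mirrors the definition-versus-mechanism gap the comment describes.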
In physics this is a perfectly cogent distinction, made all the time: it's the distinction between phenomenology on the one hand (the back-propagation of existing measurements onto extensions of existing theories to explore what is otherwise terra incognita, and vice-versa, the extension of existing theories to make quantitative predictions of measurements that haven't been made yet, in an attempt to understand currently unexplained phenomena) and the binary of theory/experiment on the other.
E.g., before first LIGO and then direct imaging gave us black holes to work with in the traditional theory/experiment sense, we had a number of theories, a roughly agreed-upon definition, and a whole bunch of indirect measurements and data concerning these "mysterious" objects. And indeed, they could be very mysterious-seeming, shrouded as they were in darkness. Yet few people (and even fewer laypeople) really posited the idea that this whole notion of "black holes" was actually a sophistic mess of definitional vagueness (admittedly, the number of people claiming this is non-zero even now; I just don't think the argument has nearly as much traction as it does with respect to consciousness).
Today, plenty of ongoing fields of study in physics, from cosmology to particle physics to astronomy, operate on a phenomenological basis and don't seem subject to the same scruples as consciousness-skeptics. As far as I know, the word is sometimes used similarly in philosophy of science, though it primarily has a different meaning there -- but nonetheless I don't see why it shouldn't/couldn't also apply to consciousness here.
Loved this one!
Thanks for this. Glad to see you settled on the right definition 😁
So many confuse consciousness with awareness or self-consciousness, or all the other cognitive abilities that we have, in principle, no problem understanding functionally.
I had one conversation, though, that made me wary of the word "experience". A philosopher at the U of Toronto was once discussing his position with me, and finally I had to say, "It sounds like you're using 'experience' in the way we might use 'job experience'." And he said, "Yes, exactly!"
Fascinating essay, Erik. Thank you for posting.
I think when people say that consciousness is ill-defined, they might mean a more sophisticated claim than "Some people think of psychological instead of phenomenal aspects of the mind when asked about consciousness." The claim might be that the term is defined very vaguely (what you call the "final resort of the definitional critique") and that it's very difficult to conceive of a path toward better definitions. Take Newton and water. While he doesn't have a perfectly precise and general definition, he can very reliably tell you whether something falls under the category of water if you show it to him. So he at least has a reliable, though very inefficient, definition of water: anything that is a member of the set of all things that he has pointed to and said "water."
In the case of consciousness, I can't really conceive of anything like this at all. What can I point to and say consciousness? My first instinct is to point at my head, or to point vaguely at my surroundings, but from the third-person POV, these are really silly/inadequate and so very difficult to work on scientifically.
It might be difficult to point at other things too, e.g., numbers, electromagnetism, the economy, etc. But I can (very) reliably point to a set of objects with some cardinality, I can reliably point to some reading on a physical sensor, and I can (less) reliably point to certain behavioral observations that are connected to some economic variable through a long causal chain. (Notice how the less reliably I can do this pointing, the more controversial the definition of the thing).
Basically, this is just to say that if I don't have a reliable way to gather samples of things that have some property that I want to investigate, I have a really poor shot at refining the definition of that property through scientific effort. It feels to some, including me, that consciousness is uniquely ill-defined in that it seems like a uniquely difficult thing to point to.
Definitions are a problem in any philosophical discussion. Until we can nail down with some concreteness what we mean, we're typically just talking past each other with our own particular versions of the concepts. I'm always surprised how much adamant resistance there is to this simple observation. I think it's what makes many philosophical discussions endless and unproductive.
The problem with the water example is that Newton always had the option of defining water functionally, in terms of the roles it plays for him. Now, I'm a functionalist. I have no problem talking about consciousness in that way. But it typically results in a discussion of reportability, sensory discrimination and categorization, attention, memory, evaluative reactions, executive control, or some other capability.
The thing is, many insist that functionality is not what they're talking about. (See Chalmers' distinction between the "easy" problems and the hard one.) The problem then is getting them to clarify what they do mean. The result is typically ambiguity while denying anything specific enough to be criticized. As Daniel Dennett observed in his 1988 "Quining Qualia" paper: "My quarry is frustratingly elusive; no sooner does it retreat in the face of one argument than 'it' reappears, apparently innocent of all charges, in a new guise."
Animals like dogs or cats are sentient - they perceive the world around them, have preferences, and experience physical and emotional suffering. I think sentience, so understood, would be closer to Edelman's primary rather than secondary consciousness.
It is not a simple subject. Dawkins called consciousness "the most profound mystery facing modern biology." Yes, being either conscious or unconscious is straightforward enough, but that says nothing about philosophical "consciousness" as discussed by academics. Then there's the thorny issue of panpsychism, a spiritual dilemma, right? Id est, "Consciousness is the Ground of All Being" (Paul Tillich).
Very informative piece. Thank you!
And those bloody microtubules?