50 Comments
Nov 28, 2023 · Liked by Erik Hoel

A major aspect of postmodernism as I understand it (and please jump in, any art historians or the like who know better than I do) was a response to the mechanized nature of modern life, and especially to the way mechanized societies marched into two catastrophic world wars. It sought to analyze and deconstruct the place of the human within the machine, which points the focus at impersonal forces. I don’t know Skinner’s motivation, but who knows, this may have been on his mind as well. That would make the scientific turn away from consciousness itself a downstream consequence of technological changes and the way those technologies were integrated into society. By way of comparison, I agree with you that AI is likely to have a similarly major impact on how we view ourselves, our interiority, and our relation to the world. We’ll see exactly how, but I expect it to be weirder than “merely” a consciousness winter.

author

Indeed, you could draw a host of related arrows here. Perhaps increasing mechanization led to *both* behaviorism and postmodernism. I do think it's odd in these sorts of explanations about mechanization that postmodernism springs so late onto the scene. I mean, it's impossible to pin down precisely, but its most famous examples appear not quite by the time everyone has a television, though it's pretty close. That seems far along in the mechanization process!


There is a point to that, in that the earliest roots of "postmodernism" go back to Nietzsche and Kierkegaard in the 19th century, and they certainly were reacting against excessive rationalism and mechanism.

There's a certain irony in this, though, as the development of these ideas through the early 20th century ended up *borrowing from scientific innovations*. The "pomo" bogeymen, the French deconstructionists, were heavily inspired by the ideas of early cybernetics, which were close ancestors of today's cognitive sciences and AI. The "decentering of the subject" was first envisioned by scientists who saw self-referential automation as a means of explaining mental activity.

Nov 28, 2023 · edited Nov 28, 2023 · Liked by Erik Hoel

I've been fairly vocal about the unlikely prospects for a scientific theory of consciousness, absent a major conceptual upheaval on par with Copernicus or Darwin.

The present status quo is committed to physicalism/materialism, reductionism, and a form of naturalism knitting them together (which adds certain anti-metaphysical claims and claims about the authority of science). That combination makes it logically impossible to speak of subjectivity, consciousness, or their properties without invoking literal magic.

No one will squeeze the blood of mind from a material stone. And that is the entire problem with a scientific approach as things stand. I don't believe we ever exited the consciousness winter. The legacies of behaviorism and logical positivism have haunted cognitive science and AI since McCulloch and Pitts. Jerome Bruner lamented how cog-sci turned into computationalism and information theory after the post-behaviorist cognitivist turn. These theories are physicalist from their founding philosophical and conceptual assumptions.


Yeah, I agree that we seem never to have exited the consciousness winter insofar as materialism is just fundamentally lacking in any tools for saying anything interesting about subjective experience. When I hear materialist neuroscientists talking about consciousness, they sound like Heraclitus: everything is made of fire. What are dreams? They’re a fire that enters you when you sleep. What is the soul? It’s made of fire. What is happening when I have an idea? Well, ideas are a type of fire. It’s just not helpful, and it’s clear that we will get nowhere in consciousness research within a materialist framework.

I got a neuroscience degree in the 90s because guys like Dan Dennett, whose books I read as a teenager, kept promising me that study of the brain would soon unlock the secrets of consciousness. Thirty years and several much more useful engineering degrees later, the neuroscientists haven’t even gotten off the ground, and I’ve more or less given up on ever understanding the first thing about consciousness. Dennett has too, but he’s taken the much more extreme step of denying that consciousness exists. Good for a laugh, I guess. A good example of how trying to reconcile materialism with actual first-hand subjective experience can actually drive someone insane.

Dec 6, 2023 · Liked by Erik Hoel

Dennett's work is mostly useless logorrhea that hides the fact that he doesn't have an actual mechanistic model of consciousness. But Joscha Bach does have a model.

Materialism, being another ontology-first paradigm, was just as unscientific as various dualistic and idealistic paradigms. A properly scientific approach would mean a rough blueprint—a recipe—based on which a demonstration of synthetic consciousness can be achieved.

Anyway, here's the model. The brain is like the hardware of a computer: it is not, and can never be, conscious. But when it's on, it executes a particular program that creates a universe model (just as a computer game runs a world model), populates it with physical features it has previously learned, and on top of that, the model is furnished with human characters, including one particular human whose first-person perspective is simulated.

Imagine programming a game/simulation in which the first-person perspective of a character is also simulated. You simulate the contents of this character's current memory (attention):

This can include representations such as its emotions (appraisals of real-world events in terms of their impacts on prospective/past acquisitions), its identity (a type of multimedia, mostly linguistic, interface through which it presents itself to society and to itself), a bunch of other things, or, and this part is crucial, *the act of running a model of its current memory*.

So when the simulation of this character's first-person perspective is in the act of representing its current task recursively (i.e. representing to itself the act of running a model of its current attention), a simulation of self-consciousness occurs, which is equivalent to actual self-consciousness.

Now, this particular character's body (inside the simulated world-model) happens to be the one that also houses the hardware of the computer running the world-model. And the first-person perspective is the actual locomotor controller (control model) of that body. And the simulated character eventually comes to identify simulation of that first-person perspective with its *identity* construct: it thinks this process is actually him!

So there you go: an implementable model of consciousness. It's the execution of a particular simulation within a richer world simulation, in which a model of the contents of the current attention of a special character within that world is simulated, and in which this model is the actual controller of that special character's body. In humans, that body represents the actual physical body that houses the hardware that runs the simulation; but this last condition is not necessary to run consciousness: it could just be a character inside the simulation, without access to the outer world.

Now, you may ask: how would anyone identify when a computer actually runs consciousness? It's similar to identifying when a computer runs a web browser. If I can surf the web interacting with that computer, it's running some sort of web browser: it achieved web-browserness! Similarly, if it can describe internal representations of current memory that control its behavior, and if these representations are rich enough to recursively represent themselves, then we're observing consciousness. Of course, without media through which we can poke the computer during its course of demonstration, it's not possible to decide if it's running consciousness or not. So there could be an alien server rack running a highly rich world, in which many consciousnesses are run, but it would be really, really hard (in practice, impossible) to ascertain the presence of consciousness. Also, like "web-browserness", the concept of "consciousness" is probably not a fundamental category. "Atom" is a fundamental category: it's hard to find the limits of the context out of which it breaks. But "web-browser", although useful, will be bent out of shape outside the context of contemporary operating systems, the internet, and the web. Similarly, the bundle of criteria identifying "consciousness" may quickly break outside the attention paradigm of social mammals, and their emulations.
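To make the recursive step concrete, here is a toy sketch in Python. It is purely my own illustrative gloss on the description above (a world model containing one special character whose attention contents come to include a representation of the act of modeling that very attention); it is not Bach's actual formalism, and every class and method name is made up:

```python
# Toy sketch of the "recursive self-model" idea described above.
# Illustrative only; not Joscha Bach's formalism. All names are hypothetical.

class SelfModel:
    """Model of one character's current attention contents."""

    def __init__(self):
        self.attention = []  # representations currently attended to

    def attend(self, representation):
        self.attention.append(representation)

    def reflect(self):
        # The crucial step: the model represents the act of modeling
        # its own current attention (one level of recursion is enough
        # to make the point here).
        self.attend(("modeling-own-attention", list(self.attention)))


class WorldModel:
    """Simulated universe containing characters; one of them gets a self-model."""

    def __init__(self):
        self.characters = {}

    def add_character(self, name, self_model=None):
        self.characters[name] = self_model

    def step(self, percepts):
        # Feed percepts to any character that has a simulated interior,
        # then have that character model its own modeling activity.
        for model in self.characters.values():
            if model is not None:
                for p in percepts:
                    model.attend(p)
                model.reflect()


world = WorldModel()
me = SelfModel()
world.add_character("me", self_model=me)   # the "special character"
world.add_character("npc")                 # other characters have no interior
world.step(percepts=["red apple", "hunger"])
print(me.attention[-1][0])                 # -> "modeling-own-attention"
```

The only point of the sketch is the last line: after a step, the character's attention contains a representation of its own modeling act, which is the recursion the model above treats as self-consciousness.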


Bach is carrying the torch, but the real pioneer of self-models is Thomas Metzinger. Among many others, I have a short review of this: https://open.substack.com/pub/tedwade/p/consciousness-is-a-self-model . I have also explored how self-modeling could be used to create a conscious machine: https://medium.com/@ted-wade/list/098e0dc8685b . I don’t understand why self-modeling is not considered to be one of the leading theories of consciousness.


Does this leave the hard problem as one of explaining the details of how the VR system works in humans? As in, the mechanisms/computations/processing within the brain and how they are projected/seen by the brain itself, with an emphasis on the latter?


The general concept is that a system can best regulate itself in its environment if it has a model of that environment. But the organism is so enmeshed in its environment that better regulation comes from including a detailed model of the self as part of the overall model. I think the answer to your question is yes: the crux of the matter is how modeling is physically accomplished in a brain inside a body. Metzinger has a lot of brilliant deductions about what the model accomplishes and how a set of abstract processes would work together on this. Probably the most plausible brain-level process would be a more complicated version of the popular predictive processing theory of brain function. Even if we grant a self-model, we can’t assume that we contain a mini-me that’s looking at it. The apparent presence of an observer is kind of like an illusion: there is no observer, just a model of one. There are various explanations of this aspect out there, but they are hard to evaluate since language kind of breaks down when you try to talk about it. Makes your head spin.
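To give a feel for the "regulate yourself via a model that includes the self" idea, here is a minimal numerical sketch in the spirit of predictive processing. It is only an illustration under my own simplifying assumptions; the update rule and variable names are mine, not Metzinger's or any published model:

```python
# Minimal sketch: an agent that reduces prediction error for both an
# external variable and its own bodily state. Illustrative only; the
# update rule and names are hypothetical, not a published model.

import random

class PredictiveAgent:
    def __init__(self, learning_rate=0.1):
        self.world_estimate = 0.0   # belief about an environmental variable
        self.self_estimate = 0.0    # belief about the agent's own state
        self.lr = learning_rate

    def step(self, world_signal, self_signal):
        # Prediction errors for the environment and for the self
        world_error = world_signal - self.world_estimate
        self_error = self_signal - self.self_estimate
        # Update both parts of the overall model to reduce error
        self.world_estimate += self.lr * world_error
        self.self_estimate += self.lr * self_error
        return abs(world_error) + abs(self_error)

agent = PredictiveAgent()
for t in range(200):
    world = 1.0 + 0.01 * t                       # slowly drifting environment
    body = 0.5 * world + random.gauss(0, 0.05)   # body state depends on it
    agent.step(world, body)

print(round(agent.world_estimate, 2), round(agent.self_estimate, 2))
```

The self-estimate here is just one number, of course; the claim in the theory is that a rich enough self-model of this kind, embedded in the overall world model, is what gets experienced as a self.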

Dec 11, 2023 · edited Dec 11, 2023

I wonder what a solution would even look like under this scenario. It’s not like we’re going to find miniature cameras and speakers or lines of code. Do you think maybe we over-romanticize consciousness because it so dominates our lives and that the elements of it (colours, sounds etc) are just the way evolution stumbled across to represent the brain’s signal inputs in order to build the model? I mean humans figured out how to build simulations on computers that project from screens and speakers in a matter of decades. Evolution has had billions of years to work on our simulator. Maybe we don’t even have the conceptual tools yet (ever?) to measure or test the key mechanisms.

Just a layperson’s musings.


That's interesting, and I should look into Bach's model -- I'm sure a quick comment on Substack doesn't do it justice.

That being said, what you're describing sounds pretty familiar, and I have to say that I'm not overly enamored of computational metaphors for brain activity. You say that we can ascribe consciousness to a computational process "if it can describe internal representations of current memory that control its behavior, and if these representations are rich enough to recursively represent themselves, then we're observing consciousness". I don't really know how one would "describe internal representations of current memory" that are "rich enough to recursively represent themselves". But I can certainly write a program that can dump its entire memory contents into a log file. Does that mean it's conscious? The only being I know is conscious is yours truly; I'm just guessing about the rest of you poor slobs, based on a sort of reasoning-by-analogy. I'm skeptical that this "software simulation of consciousness" is anything different from the sort of dualistic homunculus / theatre-of-the-mind stuff that has plagued consciousness theory from the beginning. It sounds like this theory is just kicking the can down the road, adding one more layer of recursion to the hard problem: "consciousness is created by a brain making a simulation of a person who is conscious" sounds like question-begging to me.
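Just to make that last point vivid, here is the kind of trivial "memory dump" program I have in mind; a made-up toy, and obviously nobody would call it conscious:

```python
# Toy program that "describes its internal representations" by dumping
# its own state to a log file. Hypothetical example for the argument above.

import json
import logging

logging.basicConfig(filename="memory_dump.log", level=logging.INFO)

state = {
    "current_goal": "post a comment",
    "working_memory": ["behaviorism", "qualia", "recursion"],
}

def dump_state():
    # The program can report every representation it holds...
    logging.info(json.dumps(state))
    # ...and can even report that it is reporting them, but this
    # one-line "recursion" is nothing like rich self-modeling.
    logging.info("I am currently dumping my own state.")

dump_state()
```

If the criterion is just "can describe internal representations that control its behavior", something this trivial seems to pass, which is why I suspect the real work is being done by the unexplained word "rich".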

But like I said, you've graciously provided a summary, and I should read the full explanation before I criticize. Thanks for the pointer, I will look up Bach!


Dennett's an interesting case because I believe at one point he was sincerely trying to reconcile minds with the world as described by science. But his intentional stance never quite struck the balance he wanted. It was too mentalistic for the naturalists and too naturalistic for the defenders of mentalism (which is everyone who isn't discussing philosophy, or a loud-mouthed physicist chasing media bylines).

His latest work seems to have given up on reconciliation and instead thrown in firmly with the Science Uber Alles camp. Which, in a way, he always was. He's just given up on treating mind as anything but a useful illusion (which somehow has real causal consequences anyway...)

Ed Feser wrote an interesting post this week about this alleged conflict of physical descriptions in the sciences and everyday experiences. It's far too quick to give physical science the last word on What Is, which is the assumption that makes ordinary experience into a problem to begin with:

https://edwardfeser.blogspot.com/2023/11/ryle-on-microphysics-and-everyday-world.html


Yes, Feser’s essay points to the most common strain of Scientism that I see in the wild: eliminative materialists who think that the framework of physics is the only frame worth thinking or talking about, and so become uninterested in the myriad other equally useful frames for navigating the world. They are like the man in the joke who looks for his keys under the lamp post, not because that’s where he dropped them, but because the light is better there; if the keys aren’t under the lamp post (and in the case of phenomenal consciousness, they ain’t), then they’re not interested in ever finding them. Some even go so far as to conclude that the keys must never have been real in the first place.

To extend Feser’s borrowed example of the two tables, the Scientism cultists I encounter most often seem like they would look at a book and scoff at the primitive superstitions of their ancestors, who thought that books contained some immaterial soul called a “story”, haunted by disembodied spirits called “characters”, whereas now we know that books are actually simply lumps of wood pulp with squiggly shapes made out of chemical inks. Science has thoroughly dissected books, and we now know that they contain zero “stories” or “characters”, and that such notions are simply prescientific speculation about what might actually be inside books.

What’s strange is that most of these Scientism cultists, hearing these analogies, would readily admit that books do in fact “contain” “things” like “stories”, and that these things are properly addressed by a linguistic frame that is not the frame of physics, yet when you try to talk to them about qualia in the same way, they hunker back down into their reductionism. It’s bizarre, and frustrating.


Always engaging, Erik. I recently read a paper from DeepMind. Its purpose was to frame the path to AGI. I would imagine that AI and consciousness research has needed many adjustments in theory through the years. It seems there has been an "unconscious" desire for them to have a confluence. The paper does a very good job explaining why thought, as constructed in machines, can certainly get to where human insight has managed to bring us and go far beyond its capabilities. It may very well be that we get to advanced knowledge, insight, and understanding without requiring consciousness at all! It was simply required for us due to the limitations of our biological capacities. https://arxiv.org/pdf/2311.02462.pdf

Nov 28, 2023 · edited Nov 28, 2023 · Liked by Erik Hoel

Humans as biological language models 🤣 I’m laughing, not laughing, because I’m sure that AI metaphors will soon abound in texts written by biological language models (BLMs?), just like for a while everything was quantum mechanics, all the way down to, well, astrology and tarot. But if we forget our intrinsic perspective we’ll all (LLMs, BLMs) just cannibalize the same corpora of data until we only regurgitate gibberish. What a wonderful society that will be!


"regurgitate gibberish" made me laugh! While we are not surprisingly enamored with LLMs like GPT and Bard I think that is simply our ego. We focus on predictive language models b/c that makes us feel important and speech and language we consider our secret sauce. The LLMs seem to have in common that they CONVERGE to gibberish and nonsense. While they get much less attention (probably because there isn't a phone app) are genuine steps forward like AlphaFold and AlphaGo Zero (and now the predictive weather model) all from DeepMind. What makes them qualitatively different than the LLMs is they converge to smarter and more insightful conclusions. While playing Go or folding proteins is not interesting to average folk, this appears to be genuinely new insights for the human race rather than aping nonsense people have written before

Nov 28, 2023 · Liked by Erik Hoel

Beautiful Erik.

Could there have been an influence from physics? Early 20th-century physics showed conclusively that what seems a certain truth from our lived experience is often demonstrably wrong in the real world. Letting go of intuitions about things as simple and certain as energy, position, momentum, even time, was a big part of what scientists had to do at that time.

So if all one had to support belief in the existence of consciousness, something far more abstract than time or position, was this intuitive certainty that we exist as conscious beings, it's quite logical that many would not trust intuition and would instead argue that science starts where measurements do. This input/output black-box model of the organism that you describe for behaviourism is basically the matrix formulation of quantum mechanics, applied to living systems.

Pushing the analogy a bit: relativity, where even energy is in the eye of the beholder, brought phenomenology, where the Subject reigns supreme. Matrix quantum mechanics, which worked best when one didn't ask too many questions as to whether the cat inside is dead or alive, brought behaviourism, where the Subject is denied even the right to be thought about.

Indulging in more speculation, I would say the evolution of physics away from matrix mechanics and into quantum field theory, where fields move stuff and stuff moves fields, is a framework that better accommodates consciousness as an object of study. Perhaps scientists in the first half of the 20th century did not stop believing that lived experience, including consciousness, must have some material, physical basis (that stuff moves fields, if you will, if one pictures consciousness as a field). What was perhaps harder to posit until QFT was that consciousness moves stuff that moves consciousness. (BTW, you don't mention Penrose as a Nobel laureate from a (really) unrelated field who worked on consciousness.)

That "thought moves stuff and stuff moves thought" is easy to accept if most of physics just accepts that fields move stuff and stuff move fields.


Brilliant as ever, Erik. With an expected dosage of push-back from self-professed erudites. No hate intended here.

One quibble with your treatment of postmodernism: it's too textually framed. Culturally driven PM works à la Derrida seek to 'deconstruct' social reality through discourse. We draw distinctions between the material and the socially constructed.

I'm betting you find, as I do, Foucault's work on discourse within institutions (medicine, the law) a bit more sophisticated. Examining the often self-serving interests of Science, to the detriment of enhancing new knowledge or refining prior understandings, might be a project worth undertaking. The knowledge "base" of consciousness seems fluid, which wouldn't necessarily be a bad thing...

Nov 28, 2023 · Liked by Erik Hoel

I'd think Carl Jung would be a part of your introduction.

author

Carl Jung is an interesting case. Since he was a psychoanalyst, I wouldn't group him with those looking for scientific theories of consciousness in the sense that I mean, which is basically asking "what is occurring in the brain that generates experiences?" He talks about consciousness, obviously, as does Freud, but Freud gave up what was essentially the ongoing, very early search for the neural correlates of consciousness to do work in psychoanalysis.


Jung would say 'What is occurring--or not occurring--in the consciousness that generates experiences?' All that dreamtime... I'm converting what you say to my own limited experience, which includes a fear of science getting off on the wrong foot.


I intuitively disgorge all isms, although my thoughts and values have been formed by many. That's probably why I can get behind metamodernism without wanting to be labelled a metamodernist.

On one side, I see how postmodernism has contributed to putting out the inner fire. I also worry that humans will intuitively think of humans as AIs. On the other hand, I sense a shift in that people are craving more meaning, mystical experiences, consciousness, and other aspects of life that require constant exploration rather than categorical explanations.

Maybe a cultural shift towards metamodernism is what we need.

"[metamodernism] oscillates between a modern enthusiasm and a postmodern irony, between hope and melancholy, between naïveté and knowingness, empathy and apathy, unity and plurality, totality and fragmentation, purity and ambiguity. ...Each time the metamodern enthusiasm swings toward fanaticism, gravity pulls it back toward irony; the moment its irony sways toward apathy, gravity pulls it back toward enthusiasm."

-- Vermeulen and van den Akker

author

I like the idea of metamodernism, but it also felt sort of "we've reached the end of cycles here" as a description. I definitely wouldn't peg us as currently being in a postmodernist era, at least not in the way the 50s-80s were, as judged by, say, gallery art or literature. We are way more serious in our art, and at the same time we take any postmodernism that's left in the art less seriously!


I attended Harvey Mudd College in the early 2000s, where I did my B.S. in Mathematical Biology. I was always interested in the brain, the mind, and approaching consciousness from some kind of mathematical framework with the tools I learned. I remember feeling extremely out of place with these interests, or really, with trying to do anything at the time that was a fusion or crossover in any way, as everyone’s research projects seemed to be focused on 11-dimensional manifolds or using systems of partial differential equations to model the expression of a single gene. When, after nearly 2 years of mandatory courses, I finally had free electives and got to take a neuroscience class and a philosophy of mind one, a professor asked me about my research interests and plans.

“Well, I want to study consciousness and how to understand the mind from a mathematical framework.”

He said, “there’s not a lot of jobs in that sort of thing, but there’s plenty if you want to model joints biomechanically!”

Thus ended my career in science.

(I also tried to read some of Koch’s books back then. I don’t think I understood much, or else it didn’t make much of an impression on me, even though in THEORY it was exactly what I was looking for. But it’s not clear to me what the impact of that kind of research has been on understanding our actual minds versus just thorough and insightful psychological work, for instance Bessel Van Der Kolk’s masterful “The Body Keeps the Score”. It also seems to me most neuroscience research has not progressed since then much beyond very linear thinking/cell biology type work, which I personally think is exactly NOT how to understand the most complex, nonlinear, subjective system in the universe, or something like it.)


You might be interested in the work of the Qualia Research Institute (https://qri.org/) who study consciousness mathematically.


Super weird website. I like how a significant amount of their content/research and self proclaimed bios seem to have to do with psychotropic experiences, mixed in with accomplishments in math olympiads and random other stuff. LOL. I can't tell if they're onto something special or it's all an elaborate snake oil circle jerk.

Nov 30, 2023 · edited Nov 30, 2023

Yep, definitely hard to tell if they are cranks or not. Though they have some legit academic advisors. Still worth reading I think.


A joke my favorite literature professor in college used to tell us: Two behaviorists have sex. Afterwards, one says to the other, "That was wonderful for you - how was it for me?"


I find the article insightful. I personally think that we should be concerned with all aspects of what I call #TheHumanAdvantage. It encompasses the aspects of being human which differentiate us from machines, like #intuition, #consciousness, #imagination, moral #agency, #creativity, and artistic expression, among others. Looking at the conclusion of the article, I would add that, in my view, humans already think of themselves as AIs or machines without realizing it. We do it every time we place more importance and higher value on pure knowledge, quantitative data, and analytical thinking than on intuition, for example. Also when we attach our identity to what we know and what we are expert in (as far as knowing), as opposed to who we are as a whole human being.


I find myself leaning these days toward the idea that the brain is more like a radio amplifying a signal than an originator of the signal itself. That would locate consciousness outside the brain, using the brain, which goes much more easily toward explaining some phenomena (shamanic abilities, for one) than saying it’s all happening between the ears. But I do agree that consciousness is quite worthy of a scientific look, whatever the starting point.


Indeed, if supernaturalism is real (which it is), one cannot easily tell if one remembers something from the brain or gets knowledge from supernatural beings.


Late to the party because I am just a college student catching up on fun reading during break :)

I think your assertion about high intelligence and low consciousness not being true in humans is interesting. I would have initially disagreed, but I am wrestling with various thoughts now. What would you make of a high IQ individual with an overactive default mode network? Do you consider the default mode network in your understanding of consciousness?


That’s an interesting point you made, that GPT can be intelligent but not conscious, in contrast to humans. It brings to mind Moravec’s paradox and, though it doesn’t explain anything, highlights that ineffable something that being human seems to necessarily contain. I understand the urge to dualism.


Do you know David Bentley Hart already? He has a book on consciousness coming in spring 2024.

Here's a taste of it, I suppose... https://open.substack.com/pub/davidbentleyhart/p/reflections-on-life-and-mind?r=1jz4y2&utm_campaign=post&utm_medium=web


I find a lot of the details in your essay fascinating, so thanks for an interesting read. But I'm not sure where you are going with this warning about wintry weather.

First, you talk about a lack of scientific work on consciousness during the heyday of behaviorism. During that time there was a great deal of philosophical debate about physicalism, informed by various psychological experiments, linguistics, neurological science and computational ideas. My question is, why would one expect much more than that, i.e., work specifically on the science of consciousness, at this point? You refer to James as having set off on a course of investigating consciousness which was then lost with the advent of behaviorism; but James, who got his concept of thought from Peirce, was exploring the phenomenology - he had no idea of the neural correlates of consciousness, except at a very high level. Ditto for those who followed him, the New Realists, Russell's neutral monism, etc. There really wasn't much of a basis for scientific study of consciousness until a lot more work was done at every level - brain biochemistry, studies of brain injuries and the conditions they lead to, psychological lab experiments, computer science and networking. Crick and Koch's 1990 hypothesis that Gamma waves might be responsible for cognitive binding (which is still being debated) was a kind of starting point, I think, not a revival. So I'm not sure what this "winter" amounts to; maybe just an acknowledgment that the tools were not there to go much beyond what James (and Husserl, and other phenomenologists) had already said. It's not clear to me that logical positivism stood in the way of scientific study of consciousness; what they opposed was what Frege called "psychologism" in the philosophy of language and logic, not the scientific study of the mind per se.

Then you refer to the revival of consciousness studies in the 1980's, but if I'm not mistaken, it wasn't just Chomsky and innateness and the decline of behaviorism but Tom Nagel's "What Is It Like To Be a Bat?" (1974) that got things going. That is, I think the philosophical interest in consciousness, which resulted in hundreds of essays and dozens of books, sparked a more general interest, including the scientific project, because it was a direct challenge to physicalism: for all the scientists had learned about the brain so far, they couldn't say two words about how conscious experience is produced. So that set things off, and even the distinction between philosophers and cognitive scientists got very fuzzy with people like Dennett, the Churchlands, etc. Also it became much better recognized among neuroscientists that without a phenomenology you could not say what you were looking for correlates of, so they had to start taking the philosophical work more seriously.

Now, in this glorious summer, there are journals and conferences and degree programs devoted to consciousness studies, but if you look at virtually every book published by a neuroscientist (Gazzaniga, Dehaene, Humphrey, Edelman, Thagard... I think I have at least a dozen of these) they all enthusiastically spin out their theory of consciousness until they get to qualia, and then they wave their hands around and tell you why they don't need to answer that ("we are only interested in conscious *access* here", or whatever) or they hand you a promissory note. Worse, they don't even agree on the function of consciousness, or even whether it has one, which makes it very difficult to see how they are supposed to discover its neural basis. (The neural basis of a something-I-know-not-what?) The progress amounts to everyone having a competing theory that doesn't answer essential questions. It's not just IIT vs. "global workspace" (or "neuronal workspace") vs. "semantic pointers", it's all over the place. While brain science and cognitive science have made advances leading to useful discoveries (e.g., how to move cursors or prosthetic limbs with the mind) it is not clear to me that the "consciousness summer" we are in has been very useful for understanding consciousness. Wittgenstein had more interesting things to say about consciousness than anything I have found in a neuroscientific book on consciousness, even though he hardly ever used the term (Bewusstsein).

Finally you express concern about a new "consciousness winter". Since I don't see much gain from the warmer climate you report, I am not sure why we need to worry about this. The letter on Integrated Information Theory correctly points out that it is not science, though it's not "pseudoscience" either, in the sense of phrenology - it's more like a prescriptive statement about how we should use the word "conscious". Ned Block pointed out long ago that having the entire population of China connected by telephone would not create what we generally call a conscious entity, so I don't know that we really needed that letter - but in any case, there is no real science of consciousness happening at the moment, so I think the weather is going to stay about the same whether more neuroscientists jump into the fray or not.
