As an outsider to neuroscience who recently found this newsletter (my research area is AI), I think everything you're saying makes sense to me. I was actually a little surprised at how much it seems, from my cursory view, that neuroscience hasn't been taking more lessons from LLM interpretability research like the linked work from Anthropic.
But, assuming everything you said is true, it seems like the consequences of that would be extremely damning for neuroscience, no? Because isn't the biggest problem with consciousness by far that it's untestable? Not just difficult to test, but fundamentally immeasurable in a way that possibly nothing else is.
I just don't see how we could quantify it. If we found a switch that we believed turned consciousness off, how would we know if pressing it worked? We could flip it and ask a test subject, but what answer would we expect in that context? We can't exactly expect them to say "Nope, looks like I'm definitely not conscious anymore." Nor could we measure it directly; we'd have to look for proxies, but to design proxies we'd need to circularly assume our theories were true.
Of course, we could try it on ourselves, turning our consciousness off and then back on, but how would that be differentiable from just putting ourselves in a fugue state or giving ourselves amnesia?
Alternatively, we could develop a theory that maps neurons to qualia; we might find, for instance, a particular set of consistent neural activity to induce the sensation of anger, or happiness. But take out "the sensation of" and you have the exact same results. In other words, if we found neurons that induced certain sensations, we could still describe those sensations without reference to consciousness, just by saying that "the neural activity induced anger" directly. You could do the exact same scholarship - and people presumably already are - without mentioning consciousness at all.
I tend to believe that this immeasurability, more than anything, is why consciousness is left out of the sciences. Because the fundamental conceit of science - that all theories must be falsifiable - does not apply to consciousness. And if what you say is true, that neuroscience is incomplete without consciousness, then that means a fundamental understanding of the human brain must always be outside of the scientific grasp.
Love this, especially as you've essentially reconstructed the thesis of my latest book!
I'm not 100% confident, however, that there is a clean line between subjectivity and objectivity, which would make such issues around falsifiability dissolve. And that's where, if there is gold to be found, the answer probably lies.
"I'm not 100% confident, however, that there is a clean line between subjectivity and objectivity"
I claim, in alliance with O.M.N., that there absolutely is.
Suppose I claim that everyone on earth, except you and me, is a P-zombie with no consciousness.
How would you prove me wrong, even in principle? What possible results of any neuroscience experiment (or any other kind of experiment) would be at all relevant?
You wrote "We can tell, for instance, if a patient in a supposed vegetative state, one who cannot even move the muscles of their eyes, is actually conscious, simply by asking them to imagine something like playing tennis and then seeing if the activation looks the same as in normal brains."
But this is false! All you know is that a certain pattern of sound waves impinging on the ears of this patient resulted in a certain pattern of neural firings. This tells us NOTHING about whether or not this patient is actually conscious, because that is fundamentally unknowable to anyone except the patient.
In short: if neuroscience needs a theory of consciousness, then it is not PRE-paradigmatic, it is NON-paradigmatic, because no such theory is possible. And no theory is possible because no theory is testable.
> Suppose I claim that everyone on earth, except you and me, is a P-zombie with no consciousness. How would you prove me wrong, even in principle?
My usual opinion on this is that whether or not you find p-zombies conceivable or inconceivable likely presupposes an understanding of consciousness we lack. Wrt your specific scenario, as for experimental evidence, I might ask in this situation if we two are the only ones with a conception of qualia? Presumably we aren't, since you're talking about our world, and therefore others have the conception of qualia, like all the p-zombies in the world who happen to be philosophers of mind. I would then ask how it is that they came to their beliefs about qualia, if they are p-zombies who lack it completely - it seems to presuppose some sort of preordained coincidence.
I will answer yours: the p-zombies don't have any "beliefs" about qualia or anything else. They are just automatons running a program that tells them what to do and say, with no inner life or consciousness or beliefs at all.
I am asking for experimental evidence against this theory.
It's a rhetorical question, because not only is there no such evidence at present, no such evidence is possible at all, even in principle.
Well, it does answer the question (in my phrasing I was pointing out it's one way to answer it, and implying there are others), because it's an outline of an objection to the premise being possible: specifically, that p-zombie statements about qualia are strikingly unmotivated and imply that the physical laws just happen to arrive at odd beliefs about some sort of spooky spectral quality, a belief they, as p-zombies, couldn't have arrived at without the equivalent of "preordained coincidences."
That's a charge of philosophical incoherence (which I might want to argue), but it's not experimental evidence.
Let me try again: suppose some day we have two competing theories of consciousness, both of which seem serious and not obviously incoherent or otherwise flawed. One theory predicts that ChatGPT-4 is conscious, the other predicts that it's not. You are given full access to ChatGPT-4. How would you decide which theory has made the correct prediction?
Been reading all your comments, and just want to let you know that I completely understand where you're coming from, and agree with you 100%!
It's actually baffling to me why so many brilliant people (seemingly) completely fail to grasp the "Hard Problem". It's this failure to grasp the H.P. that leads many of them to mistakenly think that any of the various "theories of consciousness" on offer are actual theories of consciousness--which none of them are. They're all just theories *about* consciousness (including IIT). None of these "theories of consciousness" are really even *trying* to *explain* consciousness. None of them even touches the Hard Problem.
I feel like my point of view on the HPoC is a pretty minimal one that follows from known facts, and yet I can't clearly locate it anywhere on the spectrum of standard points of view: https://iep.utm.edu/hard-problem-of-conciousness/
This is super good. I'm no longer in academia so whether I'm perceived as a philosophical lunatic no longer has any effect on my life. Let me sidestep the dialectic and get right to the speculation.
I don't know, in any one conversation about "consciousness", that the interlocutors are even talking about the same thing. But it seems important when asking, what does panpsychism actually require? I like your subtractive approach, like imagining if we took away memory and linguistic areas. Personally, I favour a deconstruction of Husserl's ideas about the structure of time consciousness. We have "retention", of course, which is memory, and we have "protention" which is the unspecific but not completely shapeless horizon of the future. We then have the hyletic data, which is basically that which we retain and protend. Husserl's idea was that if we iterate through time steps, what was once merely protended slips into retention, as the actuality of the hyletic data and protention itself get remembered. We can remember a surprise, for example, as a mismatch between protention and hyletic data. This interweaving of protention, retention and hyletic data forms a coherent stream of consciousness and accounts for what William James called the "specious present" - that little pocket of time we call "now", much too long to be a nanosecond but much too short to be an hour.
If we look at the neural architecture as that which structures time consciousness, and then remove the parts one by one, what is left over, phenomenologically speaking? If we remove protention, we'd constantly be in the land of memory. We couldn't move around the world for lack of predictive ability. If we then removed retention, what is the phenomenological experience now? We should hesitate to say it's nothing, but could it be what most people call "consciousness"? Whatever it is, panpsychism needs to account for even less. We might regard our subject, who has since had his retentional and protentional hardware removed, as experiencing a blooming buzz of confusion and chaos, unstructured in time or place, of all sensory modalities. Aha! We must remove the specifics of the sensory modalities too, if we want to see what panpsychism must account for!
Having accomplished that, what is it like to be our subject? Again, we may not want to say it's nothing, but it's not structured spatiotemporally, and the structure of the information in the subject's environment no longer maps to a sensory modality. What even is that, phenomenologically speaking?
I have a good friend who is a Thai Buddhist Monk. He claims to know very well what that phenomenological world is like because he's been there. I myself don't know. However, it is described to me as simple, ontologically speaking. Panpsychism doesn't seem so crazy when it's not doing so much work.
These debates on philosophy are all well and good. My point is that none of these theories are experimentally testable. There is no experiment we can do, no observation we can make, that will show that one theory is right and another theory is wrong.
This has been roughly my take on subjective experience (which I would distinguish from “consciousness” as a special class of subjective experience). I’ve been interested in exploring this space. Are you aware of any published works taking panpsychism seriously? At the very least I want to get the lay of the academic/philosophical land.
Interesting. I’ll go check it out. It’d be cool if, in humans at least, there was some hope for understanding consciousness.
Sadly, even if neuroscience could make progress, I doubt this would transfer to an understanding of sentience in AI, which is the main question that has always fascinated me.
> we could develop a theory that maps neurons to qualia; we might find, for instance, a particular set of consistent neural activity to induce the sensation of anger, or happiness. But take out "the sensation of" and you have the exact same results. In other words, if we found neurons that induced certain sensations, we could still describe those sensations without reference to consciousness, just by saying that "the neural activity induced anger" directly.
This sounds nice and I think a lot of mind/brain-science people take this as a sort-of default assumption.
I like to point them to Donald Davidson's paper "Mental Events", which gives a tidy explanation of why, even if we grant that all particular mental events are identical with a physical event in the brain, it isn't possible to create the kind of systematic law-like relationship between the physical and mental realms that "neurological correlates" aim for.
If your theory is only describing correlations between physical events and observable behaviors (I wouldn't include a "sensation" as a thing independent of consciousness, when you're really describing a behavioral manifestation), no consciousness required, you haven't so much solved the problem as maneuvered around it by changing the subject. That's the real issue with searching after correlates of the mental.
I’m not sure I follow, I think in part because I can’t tell whether or not you’re agreeing with me.
Are you reiterating the point that if we focus on the behavioral effects alone, we don’t learn anything about consciousness? If so, I agree with that, it’s sort of what I was trying to communicate.
ETA: I realize in my paragraph one of the usages of “sensations” is probably the wrong word. What I should have said is that we could still describe, say, anger as the effect of neurological activity, but that our theory could do so without reference to consciousness. My apologies if that was unclear.
I'm mostly agreeing with and expanding on your thoughts.
Davidson's paper, though now obscure outside of philosophy departments, is worth a read given what you wrote. In lieu of that, here's a few handy summaries:
Don't you turn off consciousness when under anesthesia? What happens to the energy consumption of the brain when consciousness is lost? Does it change?
From what I understand, one of the drugs they give you doesn't make you unconscious, it just inhibits the formation of memory. That's quite a thought, isn't it? You undergo an excruciating surgery with no pain relief but don't remember a thing.
Not sure what drug you’re referring to, but this isn’t quite right--general anesthesia does indeed make you unconscious eventually, essentially at the level of slow wave sleep, and that’s the level that is aimed for when doing the surgery. Also, patients receive pain medications during anesthesia even when going fully under.
Knowledge that we've reached a fundamental *and comprehensive* understanding is always beyond our grasp, but don't worry: humans have delusion and rhetoric/propaganda to cover up such problems...even scientists cannot stop themselves from engaging in it!
Well said. In a paper (where I also mentioned the paper on MOS 6502 retro-engineering) I once questioned, with statistical arguments, the weak statistical significance of the results of a well-known group of neuroscientists for their work on engram cells. But the referee rejected every argument on the basis that the group is led by too famous an authority to be doubted, and that others had also confirmed their results. The point, however, is that the other groups used the same defective procedures and didn't care about their statistical weaknesses. In other words, if the boss does it wrong, but lots of people uncritically mimic him, then you must accept that it is all right. In the end, to get my paper accepted, I had to remove that critical part. I suspect this isn't an isolated case. Such malpractice is widespread and makes people believe in things that, most likely, don't exist.
Oof! I once got into an argument with my professor about engrams where I kept asking if it made sense to even say that memories were *physically* in one place. I think I may have used the analogy from the scene in Zoolander "THE FILES ARE IN THE COMPUTER" (and he breaks the computer to get them). The professor didn't love that, if my engram serves.
What's interesting about this is that it wouldn't count as evidence against the quality and trustworthiness of science, because science wouldn't sign off on it (thus, it "is not" evidence!!).
I think a lot could be learned about consciousness via the application of strict epistemology, but I doubt many scientists would stand for genuine truth and transparency, way too much money, power, and ego on the line.
...not to mention: it is a fantastic way to divide up the public so they have something to argue about, keeping their attention occupied from actually important matters.
Erik, you raise more deep questions in this content-heavy post than could possibly be addressed in a comment box. Still, I will pose two observations to you, as a one time student of cognitive science. The problem with the 'innovation winter' in neuroscience at this time, your contention for which I have much sympathy, has multiple inputs but in my view it is primarily an outcome of a scale level misunderstanding in research objective. Cognitive processes such as speech, taste identification, aware color differentiation, love, attachment, or, yes consciousness, are very, very, very, very complex neuronal functions, the most basic operations of which have no working theory or models in neuroscience as a discipline. Trying to study them by the 'experimental' approach favored in the field is like trying to understand the formation of hydrocarbon molecular chains in the ground by observing how a Porsche 918 Spyder takes a curve at 200 klicks an hour in a non-track environment. The discipline is looking at its subject of study through the wrong end of the telescope in other words. The fact that the results of studies aren't even statistically significant much of the time when you get down to it should be no surprise. The focus is at _the wrong organizational level_ of the subject. Mental activity is something that brains do, but brains are something that neurons and their interactions do.
That brings me to your second contention, that consciousness is the point of what brains do. I do not find myself agreeing with that contention. Consciousness is, as I see it, fundamentally an epiphenomenon, though it is more than that word implies in complexity and generation. Far simpler organisms in no way in possession of structures which could be termed brains have neurons and interact with their environment. Theirs is the appropriate scale to focus the study of neural process. We have no functioning theory of how stimuli hitting organic material is rendered into coherent organismic response; into something which equates to 'memory;' into anything at all which correlates to an imagistic representation of that stimuli whether entirely created or largely accurate. The function of neural interactions is to accomplish those and many related outcomes, to 'register the world.' I would argue that _neural process_, far before we get to even the most basic cognitive representations and reactions, evolved to generate, reiterate, simulate, and meaningfully retain such registrations of organism-environment interface experience.
What one has in more complex neural operations from that perspective is a building organizational trajectory from _the same basic registration operations_. More complex neural structures scale up this process, if with a great deal of added and emergent complexity of organization; that is, consciousness is intrinsically epiphenomenal to more basic neuronal operations and their fine-grained organization. From this perspective again, the existing argument that consciousness is an emergent property of more basic neural interactions begins to have real legs, so to speak. I don't believe that brains 'evolved to generate consciousness.' Rather brains evolved to more complexly organize extra- and intra-organismic neural registration and its cascades, with consciousness as an emergent property of that complexity. Consciousness is not necessarily even a superordinate function of other scales of neural registration since, as you note, many neurally effected outcomes operate independent of consciousness. As an off the cuff observation which hadn't occurred to me previously, consciousness may have emerged first as a 'process checking' function; in effect quality assurance of a sort which became so basic that it never switches off voluntarily.
Whether that emergent outcome was incidental or something more core to an evolutionary trajectory is an interesting argument which only actual research could develop further. If consciousness fits a primary evolutionary function trajectory, then your contention that 'consciousness is the point of brains' would have added logical oomph and semantic validity at least even if the point remained arguable. If consciousness is more nearly a pure epiphenomenon of increasingly complex 'operation representations + retention capability,' more nearly the argument to which I'm inclined, then your contention would be more distant from a good summation of things.
If 'neurostudy' wants to get serious about being an actual science, to me it has to take neuronal behavior and its complex interactions as its primary focus of research. Studying behavior many scales of complexity up the operational chain simply cannot be 'reverse engineered' back into an understanding of the underlying registration functions _because those functions occur and look NOTHING like complex organism behaviors_. Neurons aren't there to 'generate behavior,' to me, they are there to mediate environmental stimuli at far lower scales of action. And we are not going to get much good data on how organic neural systems register their environment by asking undergraduates to scratch their anatomy in an fMRI canister, no.
Erik's work speaks directly against higher-scale patterns being epiphenomenal. If you allow, though, that large-scale patterns can have causal efficacy of their own (both horizontally and vertically I think), I don't think your core idea changes. Your Porsche analogy made me chuckle. It's a very well-put point.
Thank you sincerely for this, Erik - delighted me on so many levels!
As a survivor of a demented commune/cult full of nutty psychological ideas, and also the sort of eighties teenager who built and fought with many 6502 based computers, I have forever been fascinated by what computers reveal about thinking (how they change ours was the main subject, for the first few decades - way too much on the Skinner side).
BUT - what I really love about what you have shared with us is your simple courage, in calling your very own specialist field into question. Cult survivors have special respect for the one who WILL stand against the very weird and foul conformity of any group which adores their own arbitrary feeling of rightness.
I'm also crazy for history (20th century in particular) and a bit more political than is good for me - so I keep struggling with a big idea around useful versus performative rebellion. Specifically, wondering whether decoupling rebellion from common struggle (the separation of the intelligentsia and the working class, which has only got worse and worse since the first sixties schism) makes it egotistical and stupid (I mean technically, not pejoratively). Suppose I'm really just saying that the point of rebellion is to have a point! (reason beyond self?)
Which you most certainly do - and your care and good humour in making it, show it to be the product of genuine enthusiasm for the study and for truth, rather than 'coolness seeking' or axe-grinding. We need more like you in every field on earth.
Thanks for standing - and doing it with such grace and eloquence! (and laughs)
Paul Snyders
PS - I just finished an essay about what is wrong with so many modern essays. Not attention-seeking (you have cooler things to work on) ;o) - but only to say I also appreciate your piece from the angle of dissecting your own thing. FUN!
(and damned well done, dude - each beat flowing from the last with music).
I'm a neuroscientist, I've had no beers at all (it's 10am here), and I still agree with 80% of it.
I part ways with you at the definitive "consciousness is the primary function of the brain" claim - I'm not at all sure what the primary function of the brain is (except in the most trivial sense of "improving behavioural performance so as to maximise evolutionary fitness"), but if pressed, I'm definitely more sympathetic to the Bayesian brain claim that the brain's job is to constantly try to improve its models/predictions of the world.
But I fully agree that most neuroscience findings of the past decades are cool rather than fundamental, that we have no idea how to make progress on many fundamental issues (what is consciousness? what causes Alzheimer's disease? what even are psychiatric disorders, let alone how can we cure them?).
I agree neuroscience is pre-paradigmatic (if it will ever be the kind of discipline that can bear and harbor a theoretical paradigm) but I have some questions about the idea that consciousness stands to unify it. 1) What of all the computational tasks we know don't impinge on consciousness in any way? For one example, whatever interplay among the peripheral nervous system and the cerebellum and basal ganglia and motor cortex that results in putting your foot down just-so while walking, such that your stride continues and you don't fall. You can bring the fact that you're trying to perform this task into your conscious awareness, but consciousness can't "take the wheel". 2) What of non-human animals? Leaving aside the ones that are probably also conscious, how does classic work on squids and worms fit into a consciousness-focused neuroscience? 3) Evolutionarily speaking, if there was a pre-conscious era of nervous system development, should we expect an anatomical signature heralding when consciousness started to be the thing that brains were about? Or a behavioral signature?
1) we know that computational tasks become automated and withdraw from consciousness, but we don't really know of many that can be done *without reference* to an ongoing stream of consciousness. That's what the blindsight research, for instance, tried to show, and I think it's highly questionable. There's some cases of automatism, like sleep-walking, that can provide instructive examples, but it's complicated by dreaming, etc.
2) I think consciousness extends pretty far "down" the animal kingdom. We call a lot of things "brains" that are more like big nuclei - I think anything with a substantial brain (I'll leave open what exactly that means) is likely conscious, and so it makes sense to talk about it as the primary function of the organism.
3) This is an especially great question - I've worked with, e.g., hydra, which only have a simple neural network that's quite literally a "net" over their body (no real clustering to speak of). In those cases, it still governs simple reactions and so on. I think it's a really interesting thought that there could be a kind of punctuated equilibrium in the record where consciousness kicks on and early "brains" start looking a lot more brain-like. But I don't know of any, or even if anyone has looked.
Re: 1) Brains do all kinds of things while sleeping. Regulation of core body functions like respiration and digestion continues apace when unconscious, as does some monitoring of the environment (hence the ability to be woken up). These aren't cognitively rich tasks, but the kinds of things that were at least the initial point of nervous systems, and some are wholly immune to conscious interference. Rather than consciousness, ongoing sensory monitoring is required to perform these tasks. There's a story where certain functions evolved prior to consciousness, but those that have evolved after require it. But in any case where you do have an ongoing stream of consciousness monitoring elements of some task, you necessarily also have ongoing sensory monitoring supplying that information to consciousness, so it's tough to conclude that consciousness is necessary.
(While I was sitting here thinking about primordial shrimp recoiling from bright lights it occurred to me that consciousness might be a kind of shortcut to getting all your inputs on the same page to make a decision. Considering 'let's have a decision-making process about this' is one way of thinking about tasks falling under the purview of the nervous system. An attractor state owing to the fact that you can only do one thing at one time, plus whatever special sauce unifies the conscious experience. And if *that* were the case you might expect to see behavioral ambivalence/ambitendency as a common failure mode when consciousness is disabled somehow, or in organisms that lack it.)
It's funny that you mention Anthropic given their CEO Dario Amodei originally studied neuro before moving to AI (his thesis, "Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits" https://www.proquest.com/openview/1c37e3a50c16b9187eec9792140f3a17/1?pq-origsite=gscholar&cbl=18750). Perhaps he realized interpretation in vivo would be much more difficult than in silico, and decided to take that path instead.
Awesome piece, thanks! I'd be very curious to hear your thoughts on the (somewhat still speculative?) interest in electromagnetic fields (Susan Pockett's "Consciousness is a thing, not a process"), and harmonic resonance (Selen Atasoy's work, like https://www.nature.com/articles/s42003-023-04474-1), or even symmetry theory of valence stuff (an offshoot of harmonics, I think?).
For example, when you say: "...ask a neuroscientist to explain something about human cognition. Listen to their answer. Then ask them to explain again, but to say nothing in their explanation about location,” couldn't one use measures of complexity to explain the "richness" of the associated conscious experience, or (allegedly) decompose the brain harmonic readings into a consonance-dissonance-noise signature to tell you about the valence of the experience?
As an outsider, shifting from flavors of neuron-doctrine to full-brain electromagnetic properties seems pretty interesting — just can't gauge how speculative it is.
I like the idea of trying new things like looking at the brain from a harmonic perspective - I'm definitely pro that, just in case something very strong pops out that gives us more insight wrt consciousness. But when it comes to electromagnetic properties, I'm usually more skeptical. One good reason: putting aside consciousness, certainly it appears that AIs can be quite intelligent without an associated electromagnetic field. I'm always willing to hear evidence that physical properties are relevant for consciousness, I've just never seen any super solid ones that convinced me to that side.
As for your specific question, I've done similar work, not on harmonics, but using information theory, like the compressibility of brain dynamics. But in the end, it's mostly correlational, and we need lawful relationships for a real theory of consciousness.
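For readers unfamiliar with what "compressibility of brain dynamics" might mean concretely: one common family of measures (used, e.g., in Lempel-Ziv-style complexity work) binarizes a recorded signal and asks how well the resulting bit-string compresses. The sketch below is a toy illustration of that idea, not anyone's actual analysis pipeline; the signals, the median threshold, and the use of zlib as a compressor are all assumptions for demonstration.

```python
import zlib
import math
import random

def compression_complexity(samples):
    """Crude complexity proxy: binarize a signal around its median,
    then measure how well the bit-string compresses with zlib.
    Ratios near 1 mean nearly incompressible (noise-like);
    ratios near 0 mean highly regular (predictable dynamics)."""
    med = sorted(samples)[len(samples) // 2]
    bits = bytes(1 if x > med else 0 for x in samples)
    return len(zlib.compress(bits, level=9)) / len(bits)

random.seed(0)
n = 10_000
regular = [math.sin(0.01 * i) for i in range(n)]  # slow, predictable oscillation
noisy = [random.random() for _ in range(n)]       # structureless noise

# The regular signal should compress far better than the noise.
print(compression_complexity(regular) < compression_complexity(noisy))
```

The point of the illustration is the one made above: such a measure only quantifies statistical structure in the signal, so at best it correlates with conscious level; it says nothing lawful about experience itself.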
Erik: "certainly it appears that AIs can be quite intelligent without an associated electromagnetic field" - I am perplexed at this statement, given that you studied with Tononi. While I haven't delved deeply into his work (and it's been a while) my take-away from it was that IIT rules out any sort of true (i.e. conscious) "intelligence" on the part of anything whose information operations are reducible to 1s and 0s. Intelligent computations require a much larger set of integrated nodes, which digital computers are incapable of achieving, as they boil down to nothing but sequences - including parallel processing arrangements - of 'go' and 'no-go'.
Was my take-away from Tononi all wrong? By 'integration' in Integrated Information Theory, I understand this to mean: holding many parts together simultaneously. The Luppi et al paper that Oshan cites looks promising as a contribution to this understanding, and I do appreciate Richard's comment below (while struggling to grasp it!)
Oshan, you raise an interesting point. I do think that the focus on nonlinear harmonic signal composition as considered in the Nature paper you link to is important indeed, if not necessarily in application at the level of whole-brain or substructure connectomes. To me, the way that signals process coherently at near neuronal scale is where I would focus study were I working in this field. I think that structural grouping of neuronal clusters (and by extension of nervous system structures) is much less relevant than are the signal modulation dynamics which neural components produce. I strongly suspect that the 'shape/pattern of signal' in fact drives neuron structuration rather than the other way around, so one has to model the signal properties rather than follow the structural assemblage to understand what is in fact happening. In fact, structural assemblages appear quite plastic, especially developmentally. The manner in which signal modulates in a coherent fashion in small neuronal groups likely scales up to more complex structures on the basis of efficiency by path of least differentiation, i.e. what works already is built out rather than putative purpose-specific structures being generated, in the same way that specific neurotransmitter molecules are employed and repurposed for diverse structural function in neuronal complexes.
This line of argument also provides a potentially theoretically rich path for why signal processing ever became distributed over large neuronal masses at all as opposed to remaining localized to discrete structures. Furthermore, while I think coding analogies are generally very poor for conceptualizing neuronal functional behavior, one might nonetheless hypothesize overall signal configuration and dynamics as having signatures sufficiently specific that they might serve to 'tab identify' discrete complex responses so that they could be sustained---and potentially reproduced. That is, some header portion or spectral reduction of an overall signal configuration is used to identify and re-initiate the whole, so that the entirety of a configuration need not be 're-cognized' to re-initiate a comparable whole signal. The generation of discrete complex signals might sequence in the way that code sequences (or chromosome sequences) chain together in discrete functional packets. As signal drives structure differentiation, differentiated structures remain extant and can be 're-animated' when and as the same initial components are generated into the complex modulated signal that precipitated their formation in the first place. Any such 'signal purposed structures' would initially be very, very simple activity clusters far from representing any complex environmental or organismic representation. Many micro-clusters which register some aspect of initial stimuli can be re-utilized to express or at least simulate more complex stimuli aggregations, again driving specialized structural elaboration. Hmm; I rather like that line of speculation, to give it no more credit than it deserves.
Fantastic overview. I fully agree and have been saying a less educated version of this to people for years—every time I hear or read someone saying that they like to exercise because “it gets my dopamine going,” I roll my eyes. Definitely made the right decision to not continue with neuroscience.
Haha. It has to start somewhere, right? And what better place than on the mantle of Credentials and Respectability? XD
Are you a fan of Umberto Eco and generally into semiotics by the way? I feel like his blending of a certain analytical academic background (which he liberally makes fun of and points out the obtuseness of) with historical fiction is a natural fit. I've read most of his works.
I've been similarly amused by the use of dopamine in everyday language. So much so that I put on my writer hat to write a short story satirising it. It got published recently and I thought of this comment haha. (It's here if you're interested: https://www.freedomfiction.com/2024/11/dopamine-by-h-talichi/)
I legitimately flunked the gorilla suit video test (not sure if I saw the original video or a remake). But I was locked in on the task, following the bouncing ball. The ability to narrowly focus our consciousness for a period of time is valuable, but I wouldn't consider it an argument against the primacy of consciousness itself.
We see the same things in my field of economics. Fancier and fancier statistical techniques to squeeze results out of your data but not all that much known about what the results mean.
I once had a post-talk conversation over drinks with a psychology grad student who was convinced that there was an easy story to tell about how the brain assembles knowledge from sense data.
He arranged a set of empty beer glasses into a set of five and said, "see, that's how you get the number five".
I moved the beer glasses out of the arrangement and asked him if the number five didn't exist anymore. I'm not sure he got the point but he did leave in a huff a few minutes later.
We're at a similar stage with cognitive science and AI, including their "neuro-" branches.
As far as fundamental progress, I'm not sure there's been any since McCulloch and Pitts built their artificial neuron. More cynically I'm not sure there's been *fundamental* progress since Hobbes and Descartes. That never stopped scientists from marching on, but my intuition tells me the march isn't going where most of them think.
I really enjoyed your post, thanks! And I definitely agree with you that consciousness has to be at the center of neuroscience. Though I'm wondering if neuroscience is in a pre-paradigmatic phase or rather a just-before-the-revolution phase (or crisis stage - I forget Kuhn's exact terminology). These two stages have significant similarities, but also important differences, I think.
For example, Kuhn notes that in the pre-paradigm stage, intelligent amateurs can make significant contributions to a field (think Ben Franklin and electricity). I'm not sure intelligent amateurs could make significant contributions to neuroscience at this point, could they? That suggests the field has made more progress than the pre-paradigm stage could account for.
On the other hand, the crisis stage generally comes after there has been a universally accepted paradigm in the field, and you make a strong case that there has never been such a paradigm in neuroscience.
So perhaps this is a case where Kuhn's model of paradigms isn't a perfect fit? Possibly because of the subjective/objective issue that's central to the discipline? With electricity, even in the pre-paradigm stage, everyone agreed about the phenomena to be explained. They just disagreed about how to explain it. This seems generally true across the sciences. But in neuroscience (as you suggest) there's not even agreement about the phenomena to be explained! Is it the physical brain and its observable aspects and actions? Or is it consciousness? Or maybe something else?
I think an intelligent amateur couldn't make a significant contribution to contemporary neuroscience, just due to the methodological and technical depths of most of its subfields. But I'm much more willing to say that an intelligent amateur could make a significant contribution to our understanding of consciousness, since even experts are usually just as stumped as when they started out!
This matches my experience. As an amateur I don't have access to a neuroscience lab or the skills to work there, but with sustained effort I can get pretty far into the theoretical side of things and bring a cross-disciplinary perspective that stands a chance of moving the theory forward. While there is no accepted theory of consciousness, there are several candidates, and their proponents are more than happy to talk about them with interested amateurs. We can also come up with proposals for further research in neuroscience even if we as amateurs don't have the skills to carry them out. I'm doing that in a paper right now.
I enjoyed your article and I agree with it a fair amount, especially wrt neuroscience’s pre-paradigmatic aspects. I’m curious if you’ve ever considered that movement, or sensor-guided behavior, might instead be “the main function of brains”...there’s the now oft-repeated argument put forward by people like Rodolfo Llinás and Daniel Wolpert based on primitive chordate species like tunicates, which have multi-stage life cycles defined by the presence or absence of a central nervous system--an early mobile stage *with* brains gives way to a later sessile stage after the tunicate has digested its own brain, presumably because it is no longer useful for guiding behavior. If brains are for consciousness, I’m unsure what to think of this phylogenetic factoid...would love to hear your thoughts!
Thanks :) - we use the term "brain" to include a lot of things, including just like, bigger nuclei and lumps of neurons. Regarding the earliest possible "brains" and what they were for - I think probably essentially unconscious processing, and then evolution figures out conscious processing instead and it's far superior (see, e.g., how much less data humans need to learn compared to AIs). Regarding movement, in one sense I agree - but eventually you are moving toward something, or away from something, due to conscious reasons. It's pretty interesting to think of the first conscious experiences as being essentially bootstrapped from predator-prey relationships (hunger vs. pain) governing movement.
Thanks for your reply! I wonder what you make of attention schema theory (which I’m currently compelled by), which takes the view that consciousness is a control model of attention, attention being more physiologically basic and akin to a “signal boosting” mechanism (eg an octopus feeling contact flinches because the “touch signal” has been boosted to brain regions that process aversion and control movement, no other awareness necessary).
We can also maybe distinguish between conscious and unconscious “learning” instead of just “processing”; evolution discovers that conscious learners are more effective (and data efficient) than unconscious ones because the former can track subjective states (eg concentration) and adjust their signal boosting accordingly, to learn more from the same data…the simplicity of this is that animals have an ancient system (signal boosting) that turns on itself to track its own effectiveness. This raises the possibility that many creatures might move and even show simple behaviors but without the benefit of modeling their own attention process, unaware they are aware.
I’m agnostic as to whether all this is true, but I’m curious about what your view on this is!
As an outsider to neuroscience who recently found this newsletter (my research area is AI), I think everything you're saying makes sense to me. I was actually a little surprised at how much it seems, from my cursory view, that neuroscience hasn't been taking more lessons from LLM interpretability research like the linked work from Anthropic.
But, assuming everything you said is true, it seems like the consequences of that would be extremely damning for neuroscience, no? Because isn't the biggest problem with consciousness by far that it's untestable? Not just difficult to test, but fundamentally immeasurable in a way that possibly nothing else is.
I just don't see how we could quantify it. If we found a switch that we believed turned consciousness off, how would we know if pressing it worked? We could pull it and ask a test subject, but what answer would we expect in that context? We can't exactly expect them to say "Nope, looks like I'm definitely not conscious anymore." Nor could we measure it directly, we'd have to look for proxies, but to design proxies we'd need to circularly assume our theories were true.
Of course, we could try it on ourselves, turning our consciousness off and then back on, but how would that be differentiable from just putting ourselves in a fugue state or giving ourselves amnesia?
Alternatively, we could develop a theory that maps neurons to qualia; we might find, for instance, a particular set of consistent neural activity to induce the sensation of anger, or happiness. But take out "the sensation of" and you have the exact same results. In other words, if we found neurons that induced certain sensations, we could still describe those sensations without reference to consciousness, just by saying that "the neural activity induced anger" directly. You could do the exact same scholarship - and people presumably already are - without mentioning consciousness at all.
I tend to believe that this immeasurability, more than anything, is why consciousness is left out of the sciences. Because the fundamental conceit of science - that all theories must be falsifiable - does not apply to consciousness. And if what you say is true, that neuroscience is incomplete without consciousness, then that means a fundamental understanding of the human brain must always be outside of the scientific grasp.
Love this, especially as you've essentially reconstructed the thesis of my latest book!
I'm not 100% confident, however, that there is a clean line between subjectivity and objectivity, which would make such issues around falsifiability dissolve. And that's where, if there is gold to be found, the answer probably lies.
"I'm not 100% confident, however, that there is a clean line between subjectivity and objectivity"
I claim, in alliance with O.M.N., that there absolutely is.
Suppose I claim that everyone on earth, except you and me, is a P-zombie with no consciousness.
How would you prove me wrong, even in principle? What possible results of any neuroscience experiment (or any other kind of experiment) would be at all relevant?
You wrote "We can tell, for instance, if a patient in a supposed vegetative state, one who cannot even move the muscles of their eyes, is actually conscious, simply by asking them to imagine something like playing tennis and then seeing if the activation looks the same as in normal brains."
But this is false! All you know is that a certain pattern of sound waves impinging on the ears of this patient resulted in a certain pattern of neural firings. This tells us NOTHING about whether or not this patient is actually conscious, because that is fundamentally unknowable to anyone except the patient.
In short: if neuroscience needs a theory of consciousness, then it is not PRE-paradigmatic, it is NON-paradigmatic, because no such theory is possible. And no theory is possible because no theory is testable.
Period, end of story.
> Suppose I claim that everyone on earth, except you and me, is a P-zombie with no consciousness. How would you prove me wrong, even in principle?
My usual opinion on this is that whether or not you find p-zombies conceivable or inconceivable likely presupposes an understanding of consciousness we lack. Wrt your specific scenario, as for experimental evidence, I might ask in this situation if we two are the only ones with a conception of qualia? Presumably we aren't, since you're talking about our world, and therefore others have the conception of qualia, like all the p-zombies in the world who happen to be philosophers of mind. I would then ask how it is that they came to their beliefs about qualia, if they are p-zombies who lack it completely - it seems to presuppose some sort of preordained coincidence.
I note that you didn't answer my question.
I will answer yours: the p-zombies don't have any "beliefs" about qualia or anything else. They are just automatons running a program that tells them what to do and say, with no inner life or consciousness or beliefs at all.
I am asking for experimental evidence against this theory.
It's a rhetorical question, because not only is there no such evidence at present, no such evidence is possible at all, even in principle.
Well, it does answer the question (in my phrasing I was pointing out it's one way to answer it, and implying there are others), because it's an outline of an objection to the premise being possible: specifically, that p-zombie statements about qualia are strikingly unmotivated and imply something like the physical laws just happen to arrive at odd beliefs about some sort of spooky spectral quality, a belief they, as p-zombies, couldn't have arrived at without the equivalent of "preordained coincidences."
That's a charge of philosophical incoherence (which I might want to argue), but it's not experimental evidence.
Let me try again: suppose some day we have two competing theories of consciousness, both of which seem serious and not obviously incoherent or otherwise flawed. One theory predicts that ChatGPT-4 is conscious, the other predicts that it's not. You are given full access to ChatGPT-4. How would you decide which theory has made the correct prediction?
I don't see how consciousness is necessary for beliefs. Most of my beliefs are not in my consciousness at the moment. Some never are.
What's your attitude towards the problem of induction?
We have to adopt the Uniformity Principle as a matter of practicality. But it can't be proven.
Similarly, we should adopt the notion that all human beings are conscious as a matter of practicality. But we can't prove it.
And we will never know if AIs are truly conscious, or just mimicking it. Ever. Because it's impossible to know.
Been reading all your comments, and just want to let you know that I completely understand where you're coming from, and agree with you 100%!
It's actually baffling to me why so many brilliant people (seemingly) completely fail to grasp the "Hard Problem". It's this failure to grasp the H.P. that leads many of them to mistakenly think that any of the various "theories of consciousness" on offer are actual theories of consciousness--which none of them are. They're all just theories *about* consciousness (including IIT). None of these "theories of consciousness" are really even *trying* to *explain* consciousness. None of them even touches the Hard Problem.
IMHO.
Thanks!
I feel like my point of view on the HPoC is a pretty minimal one that follows from known facts, and yet I can't clearly locate it anywhere on the spectrum of standard points of view: https://iep.utm.edu/hard-problem-of-conciousness/
Highly recommend looking into what David Deutsch has said/written about induction/inductivism. He’s not a fan to say the least!
This is super good. I'm no longer in academia so whether I'm perceived as a philosophical lunatic no longer has any effect on my life. Let me sidestep the dialectic and get right to the speculation.
I don't know, in any one conversation about "consciousness", that the interlocutors are even talking about the same thing. But it seems important when asking, what does panpsychism actually require? I like your subtractive approach, like imagining if we took away memory and linguistic areas. Personally, I favour a deconstruction of Husserl's ideas about the structure of time consciousness. We have "retention", of course, which is memory, and we have "protention" which is the unspecific but not completely shapeless horizon of the future. We then have the hyletic data, which is basically that which we retain and protend. Husserl's idea was that if we iterate through time steps, what was once merely protended slips into retention, as the actuality of the hyletic data and protention itself get remembered. We can remember a surprise, for example, as a mismatch between protention and hyletic data. This interweaving of protention, retention and hyletic data form a coherent stream of consciousness and account for what William James called the "specious present" - that little pocket of time we call "now", much too long to be a nanosecond but much to short to be an hour.
If we look at the neural architecture as that which structures time consciousness, and then remove the parts one by one, what is left over, phenomenologically speaking? If we remove protention, we'd constantly be in the land of memory. We couldn't move around the world for lack of predictive ability. If we then removed retention, what is the phenomenological experience now? We should hesitate to say it's nothing, but could it be what most people call "consciousness"? Whatever it is, panpsychism needs to account for even less. We might regard our subject, who has since had his retentional and protentional hardware removed, as experiencing a blooming, buzzing confusion of chaos, unstructured in time or place, of all sensory modalities. Aha! We must remove the specifics of the sensory modalities too, if we want to see what panpsychism must account for!
Having accomplished that, what is it like to be our subject? Again, we may not want to say it's nothing, but it's not structured spatiotemporally, and the structure of the information in the subject's environment no longer maps to a sensory modality. What even is that, phenomenologically speaking?
I have a good friend who is a Thai Buddhist Monk. He claims to know very well what that phenomenological world is like because he's been there. I myself don't know. However, it is described to me as simple, ontologically speaking. Panpsychism doesn't seem so crazy when it's not doing so much work.
These debates on philosophy are all well and good. My point is that none of these theories are experimentally testable. There is no experiment we can do, no observation we can make, that will show that one theory is right and another theory is wrong.
This has been roughly my take on subjective experience (which I would distinguish from “consciousness” as a special class of subjective experience). I’ve been interested in exploring this space. Are you aware of any published works taking panpsychism seriously? At the very least I want to get the lay of the academic/philosophical land.
Interesting. I’ll go check it out. It’d be cool if, in humans at least, there was some hope for understanding consciousness.
Sadly, even if neuroscience could make progress, I doubt this would transfer to an understanding of sentience in AI, which is the main question that has always fascinated me.
Well, let me disagree. FAIR Digital Objects = FDOs will try to do just that ;-)
> we could develop a theory that maps neurons to qualia; we might find, for instance, a particular set of consistent neural activity to induce the sensation of anger, or happiness. But take out "the sensation of" and you have the exact same results. In other words, if we found neurons that induced certain sensations, we could still describe those sensations without reference to consciousness, just by saying that "the neural activity induced anger" directly.
This sounds nice and I think a lot of mind/brain-science people take this as a sort-of default assumption.
I like to point them to Donald Davidson's paper "Mental Events", which gives a tidy explanation of why, even if we grant that all particular mental events are identical with a physical event in the brain, it isn't possible to create the kind of systematic law-like relationship between the physical and mental realms that "neurological correlates" aim for.
If your theory is only describing correlations between physical events and observable behaviors (I wouldn't include a "sensation" as a thing independent of consciousness, when you're really describing a behavioral manifestation), no consciousness required, you haven't so much solved the problem as maneuvered around it by changing the subject. That's the real issue with searching after correlates of the mental.
I’m not sure I follow, I think in part because I can’t tell whether or not you’re agreeing with me.
Are you reiterating the point that if we focus on the behavioral effects alone, we don’t learn anything about consciousness? If so, I agree with that, it’s sort of what I was trying to communicate.
ETA: I realize in my paragraph one of the usages of “sensations” is probably the wrong word. What I should have said is that we could still describe, say, anger as the effect of neurological activity, but that our theory could do so without reference to consciousness. My apologies if that was unclear.
I'm mostly agreeing with and expanding on your thoughts.
Davidson's paper, though now obscure outside of philosophy departments, is worth a read given what you wrote. In lieu of that, here's a few handy summaries:
- https://iep.utm.edu/donald-davidson-anomalous-monism/
- https://plato.stanford.edu/ENTRIES/anomalous-monism/
Don't you turn off consciousness when under anesthesia? What happens to the energy consumption of the brain when consciousness is lost? Does it change?
From what I understand, one of the drugs they give you doesn't make you unconscious, it just inhibits the formation of memory. That's quite a thought, isn't it? You undergo an excruciating surgery with no pain relief but don't remember a thing.
Not sure what drug you’re referring to, but this isn’t quite right--general anesthesia does indeed make you unconscious eventually, essentially at the level of slow wave sleep, and that’s the level that is aimed for when doing the surgery. Also, patients receive pain medications during anesthesia even when going fully under.
I had to look it up. Apparently the amnesic effect is a side effect of a number of drugs including benzos, propofol, sevoflurane, etc
Knowledge that we've reached a fundamental *and comprehensive* understanding is always beyond our grasp, but don't worry: humans have delusion and rhetoric/propaganda to cover up such problems...even scientists cannot stop themselves from engaging in it!
Well said. In a paper (where I also mentioned the paper on MOS 6502 retro-engineering) I once doubted, with statistical arguments, the weak statistical significance of the results of a well known group of neuroscientists for their work on engram cells. But the referee rejected every argument on the basis that the group is led by too famous an authority to be doubted, and that others had also confirmed their results. The point, however, is that the other groups used the same defective procedures and didn't care about their statistical weaknesses. In other words, if the boss does it wrong, but lots of people uncritically mimic him, then you must accept that it is all right. In the end, to get my paper accepted, I had to remove that critical part. I suspect this isn't an isolated case. Such malpractice is widespread and makes people believe in things that, most likely, don't exist.
Oof! I once got into an argument with my professor about engrams where I kept asking if it made sense to even say that memories were *physically* in one place. I think I may have used the analogy from the scene in Zoolander "THE FILES ARE IN THE COMPUTER" (and he breaks the computer to get them). The professor didn't love that, if my engram serves.
What's interesting about this is that it wouldn't count as evidence against the quality and trustworthiness of science, because science wouldn't sign off on it (thus, it "is not" evidence!!).
I think a lot could be learned about consciousness via the application of strict epistemology, but I doubt many scientists would stand for genuine truth and transparency; way too much money, power, and ego on the line.
...not to mention: it is a fantastic way to divide up the public so they have something to argue about, keeping their attention occupied from actually important matters.
Erik, you raise more deep questions in this content-heavy post than could possibly be addressed in a comment box. Still, I will pose two observations to you, as a one time student of cognitive science. The problem with the 'innovation winter' in neuroscience at this time, your contention for which I have much sympathy, has multiple inputs but in my view it is primarily an outcome of a scale level misunderstanding in research objective. Cognitive processes such as speech, taste identification, aware color differentiation, love, attachment, or, yes consciousness, are very, very, very, very complex neuronal functions, the most basic operations of which have no working theory or models in neuroscience as a discipline. Trying to study them by the 'experimental' approach favored in the field is like trying to understand the formation of hydrocarbon molecular chains in the ground by observing how a Porsche 918 Spyder takes a curve at 200 klicks an hour in a non-track environment. The discipline is looking at its subject of study through the wrong end of the telescope in other words. The fact that the results of studies aren't even statistically significant much of the time when you get down to it should be no surprise. The focus is at _the wrong organizational level_ of the subject. Mental activity is something that brains do, but brains are something that neurons and their interactions do.
That brings me to your second contention, that consciousness is the point of what brains do. I do not find myself agreeing with that contention. Consciousness is, as I see it, fundamentally an epiphenomenon, though it is more than that word implies in complexity and generation. Far simpler organisms in no way in possession of structures which could be termed brains have neurons and interact with their environment. Theirs is the appropriate scale to focus the study of neural process. We have no functioning theory of how stimuli hitting organic material is rendered into coherent organismic response; into something which equates to 'memory;' into anything at all which correlates to an imagistic representation of that stimuli, whether entirely created or largely accurate. The function of neural interactions is to accomplish those and many related outcomes, to 'register the world.' I would argue that _neural process_, far before we get to even the most basic cognitive representations and reactions, evolved to generate, reiterate, simulate, and meaningfully retain such registrations of organism-environment interface experience.
What one has in more complex neural operations from that perspective is a building organizational trajectory from _the same basic registration operations_. More complex neural structures scale up this process, if with a great deal of added and emergent complexity of organization; that is, consciousness is intrinsically epiphenomenal to more basic neuronal operations and their fine-grained organization. From this perspective again, the existing argument that consciousness is an emergent property of more basic neural interactions begins to have real legs, so to speak. I don't believe that brains 'evolved to generate consciousness.' Rather brains evolved to more complexly organize extra- and intra-organismic neural registration and its cascades, with consciousness as an emergent property of that complexity. Consciousness is not necessarily even a superordinate function of other scales of neural registration since, as you note, many neurally effected outcomes operate independent of consciousness. As an off the cuff observation which hadn't occurred to me previously, consciousness may have emerged first as a 'process checking' function; in effect quality assurance of a sort which became so basic that it never switches off voluntarily.
Whether that emergent outcome was incidental or something more core to an evolutionary trajectory is an interesting argument which only actual research could develop further. If consciousness fits a primary evolutionary function trajectory, then your contention that 'consciousness is the point of brains' would have added logical oomph and semantic validity at least, even if the point remained arguable. If consciousness is more nearly a pure epiphenomenon of increasingly complex 'operation representations + retention capability,' more nearly the argument to which I'm inclined, then your contention would be more distant from a good summation of things.
If 'neurostudy' wants to get serious about being an actual science, to me it has to take neuronal behavior and its complex interactions as its primary focus of research. Studying behavior many scales of complexity up the operational chain simply cannot be 'reverse engineered' back into an understanding of the underlying registration functions _because those functions occur and look NOTHING like complex organism behaviors_. Neurons aren't there to 'generate behavior,' to me; they are there to mediate environmental stimuli at far lower scales of action. And we are not going to get much good data on how organic neural systems register their environment by asking undergraduates to scratch their anatomy in an fMRI canister, no.
Erik's work speaks directly against higher-scale patterns being epiphenomenal. If you allow, though, that large-scale patterns can have causal efficacy of their own (both horizontally and vertically I think), I don't think your core idea changes. Your Porsche analogy made me chuckle. It's a very well-put point.
Thank you sincerely for this, Erik - delighted me on so many levels!
As a survivor of a demented commune/cult full of nutty psychological ideas, and also the sort of eighties teenager who built and fought with many 6502 based computers, I have forever been fascinated by what computers reveal about thinking (how they change ours was the main subject, for the first few decades - way too much on the Skinner side).
BUT - what I really love about what you have shared with us is your simple courage, in calling your very own specialist field into question. Cult survivors have special respect for the one who WILL stand against the very weird and foul conformity of any group which adores their own arbitrary feeling of rightness.
I'm also crazy for history (20th century in particular) and a bit more political than is good for me - so I keep struggling with a big idea around useful versus performative rebellion. Specifically, wondering whether decoupling rebellion from common struggle (the separation of the intelligentsia and the working class, which has only got worse and worse since the first sixties schism) makes it egotistical and stupid (I mean technically, not pejoratively). Suppose I'm really just saying that the point of rebellion is to have a point! (reason beyond self?)
Which you most certainly do - and your care and good humour in making it, show it to be the product of genuine enthusiasm for the study and for truth, rather than 'coolness seeking' or axe-grinding. We need more like you in every field on earth.
Thanks for standing - and doing it with such grace and eloquence! (and laughs)
Paul Snyders
PS - I just finished an essay about what is wrong with so many modern essays. Not attention-seeking (you have cooler things to work on) ;o) - but only to say I also appreciate your piece from the angle of dissecting your own thing. FUN!
(and damned well done, dude - each beat flowing from the last with music).
¯\_(ツ)_/¯
Thanks so much Paul, I really appreciate that - and good luck with the essay!
Cheers Eric - just in case you have a slow moment (have a feeling you'll enjoy the tone and vector, even if I cannot claim anything like your commendable rigour!) https://www.largeesssmallpress.com/2024/01/07/the-problem-with-blogic/
I'm a neuroscientist, I've had no beers at all (it's 10am here), and I still agree with 80% of it.
I part ways with you at the definitive "consciousness is the primary function of the brain" claim - I'm not at all sure what the primary function of the brain is (except in the most trivial sense of "improving behavioural performance so as to maximise evolutionary fitness"), but if pressed, I'm definitely more sympathetic to the Bayesian brain claim that the brain's job is to constantly try to improve its models/predictions of the world.
But I fully agree that most neuroscience findings of the past decades are cool rather than fundamental, that we have no idea how to make progress on many fundamental issues (what is consciousness? what causes Alzheimer's disease? what even are psychiatric disorders, let alone how can we cure them?).
I agree neuroscience is pre-paradigmatic (if it will ever be the kind of discipline that can bear and harbor a theoretical paradigm) but I have some questions about the idea that consciousness stands to unify it. 1) What of all the computational tasks we know don't impinge on consciousness in any way? For one example, whatever interplay among the peripheral nervous system and the cerebellum and basal ganglia and motor cortex that results in putting your foot down just-so while walking, such that your stride continues and you don't fall. You can bring the fact that you're trying to perform this task into your conscious awareness, but consciousness can't "take the wheel". 2) What of non-human animals? Leaving aside the ones that are probably also conscious, how does classic work on squids and worms fit into a consciousness-focused neuroscience? 3) Evolutionarily speaking, if there was a pre-conscious era of nervous system development, should we expect an anatomical signature heralding when consciousness started to be the thing that brains were about? Or a behavioral signature?
Great questions. Some first thoughts.
1) we know that computational tasks become automated and withdraw from consciousness, but we don't really know of many that can be done *without reference* to an ongoing stream of consciousness. That's what the blindsight research, for instance, tried to show, and I think it's highly questionable. There are some cases of automatism, like sleep-walking, that can provide instructive examples, but it's complicated by dreaming, etc.
2) I think consciousness extends pretty far "down" the animal kingdom. We call a lot of things "brains" that are more like big nuclei - I think anything with a substantial brain (I'll leave open what exactly that means) is likely conscious, and so it makes sense to talk about it as the primary function of the organism.
3) This is an especially great question - I've worked with, e.g., hydra, which only have a simple neural network that's quite literally a "net" over their body (no real clustering to speak of). In those cases, it still governs simple reactions and so on. I think it's a really interesting thought that there could be a kind of punctuated equilibrium in the record where consciousness kicks on and early "brains" start looking a lot more brain-like. But I don't know of any, or even if anyone has looked.
Re: 1) Brains do all kinds of things while sleeping. Regulation of core body functions like respiration and digestion continues apace when unconscious, as does some monitoring of the environment (hence the ability to be woken up). These aren't cognitively rich tasks, but they're the kinds of things that were at least the initial point of nervous systems, and some are wholly immune to conscious interference. Rather than consciousness, ongoing sensory monitoring is required to perform these tasks. There's a story where certain functions evolved prior to consciousness, but those that have evolved after require it. But in any case where you do have an ongoing stream of consciousness monitoring elements of some task, you necessarily also have ongoing sensory monitoring supplying that information to consciousness, so it's tough to conclude that consciousness is necessary.
(While I was sitting here thinking about primordial shrimp recoiling from bright lights it occurred to me that consciousness might be a kind of shortcut to getting all your inputs on the same page to make a decision. Considering 'let's have a decision-making process about this' is one way of thinking about tasks falling under the purview of the nervous system. An attractor state owing to the fact that you can only do one thing at one time, plus whatever special sauce unifies the conscious experience. And if *that* were the case you might expect to see behavioral ambivalence/ambitendency as a common failure mode when consciousness is disabled somehow, or in organisms that lack it.)
It's funny that you mention Anthropic given their CEO Dario Amodei originally studied neuro before moving to AI (his thesis, "Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits" https://www.proquest.com/openview/1c37e3a50c16b9187eec9792140f3a17/1?pq-origsite=gscholar&cbl=18750). Perhaps he realized interpretation in vivo would be much more difficult than in silico, and decided to take that path instead.
I didn't know that - I should check to see if he's mentioned it in interviews
Awesome piece, thanks! I'd be very curious to hear your thoughts on the (somewhat still speculative?) interest in electromagnetic fields (Susan Pockett's "Consciousness is a thing, not a process") and harmonic resonance (Selen Atasoy's work, like https://www.nature.com/articles/s42003-023-04474-1), or even symmetry theory of valence stuff (an offshoot of harmonics, I think?).
For example, when you say: "...ask a neuroscientist to explain something about human cognition. Listen to their answer. Then ask them to explain again, but to say nothing in their explanation about location,” couldn't one use measures of complexity to explain the "richness" of the associated conscious experience, or (allegedly) decompose the brain harmonic readings into a consonance-dissonance-noise signature to tell you about the valence of the experience?
As an outsider, shifting from flavors of neuron-doctrine to full-brain electromagnetic properties seems pretty interesting — just can't gauge how speculative it is.
I like the idea of trying new things like looking at the brain from a harmonic perspective - I'm definitely pro that, just in case something very strong pops out that gives us more insight wrt consciousness. But when it comes to electromagnetic properties, I'm usually more skeptical. One good reason: putting aside consciousness, certainly it appears that AIs can be quite intelligent without an associated electromagnetic field. I'm always willing to hear evidence that physical properties are relevant for consciousness, I've just never seen any super solid ones that convinced me to that side.
As for your specific question, I've done similar work, not on harmonics, but using information theory, like the compressibility of brain dynamics. But in the end, it's mostly correlational, and we need something lawful for a real theory of consciousness.
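(For readers curious what "compressibility of brain dynamics" can mean in practice: one common toy approach, not necessarily the one Erik used, is to binarize a signal and see how well the resulting bit string compresses; a more regular signal compresses better. The function name, median thresholding, and use of zlib here are all illustrative choices, a minimal sketch rather than any published pipeline.)

```python
import zlib
import numpy as np

def compression_complexity(signal, threshold=None):
    """Binarize a 1-D signal around its median and return the
    compression ratio of the bit string (lower = more regular)."""
    signal = np.asarray(signal, dtype=float)
    if threshold is None:
        threshold = np.median(signal)
    bits = (signal > threshold).astype(np.uint8).tobytes()
    return len(zlib.compress(bits, level=9)) / len(bits)

# A periodic signal compresses far better than noise,
# so its "complexity" score comes out much lower.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 10_000))
noisy = rng.standard_normal(10_000)
assert compression_complexity(regular) < compression_complexity(noisy)
```

Measures like this (and relatives such as Lempel-Ziv complexity applied to EEG) are exactly the "mostly correlational" tools Erik mentions: they track differences between conscious states but don't by themselves explain them.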
Erik: "certainly it appears that AIs can be quite intelligent without an associated electromagnetic field" - I am perplexed at this statement, given that you studied with Tononi. While I haven't delved deeply into his work (and it's been a while), my take-away from it was that IIT rules out any sort of true (i.e. conscious) "intelligence" on the part of anything whose information operations are reducible to 1s and 0s. Intelligent computations require much larger sets of integrated nodes, which digital computers are incapable of achieving, as they boil down to nothing but sequences (including parallel processing arrangements) of 'go' and 'no-go'.
Was my take-away from Tononi all wrong? By 'integration' in Integrated Information Theory, I understand this to mean: holding many parts together simultaneously. The Luppi et al paper that Oshan cites looks promising as a contribution to this understanding, and I do appreciate Richard's comment below (while struggling to grasp it!)
Oshan, you raise an interesting point. I do think that the focus on nonlinear harmonic signal composition as considered in the Nature paper you link to is important indeed, if not necessarily in application at the level of whole-brain or substructure connectomes. To me, the way that signals process coherently at near neuronal scale is where I would focus study were I working in this field. I think that structural grouping of neuronal clusters (and by extension of nervous system structures) is much less relevant than are the signal modulation dynamics which neural components produce. I strongly suspect that the 'shape/pattern of signal' in fact drives neuron structuration rather than the other way around, so one has to model the signal properties rather than follow the structural assemblage to understand what is in fact happening. In fact, structural assemblages appear quite plastic, especially developmentally. The manner in which signal modulates in a coherent fashion in small neuronal groups likely scales up to more complex structures on the basis of efficiency by path of least differentiation, i.e. what works already is built out rather than putative purpose-specific structures being generated, in the same way that specific neurotransmitter molecules are employed and repurposed for diverse structural function in neuronal complexes.
This line of argument also provides a potentially theoretically rich path for why signal processing ever became distributed over large neuronal masses at all as opposed to remaining localized to discrete structures. Furthermore, while I think coding analogies are generally very poor for conceptualizing neuronal function behavior, one might nonetheless hypothesize overall signal configuration and dynamics as having signatures sufficiently specific that they might serve to 'tab identify' discrete complex responses so that they could be sustained---and potentially reproduced. That is, some header portion or spectral reduction of an overall signal configuration is used to identify and re-initiate the whole so that the entirety of a configuration need not be 're-cognized' to initiate comparable whole signal reinitiating. The generation of discrete complex signals might sequence in the way that code sequences (or chromosome sequences) chain together in discrete functional packets. As signal drives structure differentiation, differentiated structures remain extant and can be 're-animated' when and as the same initial components are generated to the complex modulated signal that precipitated their formation in the first place. Any such 'signal purposed structures' would initially be very, very simple activity clusters far from representing any complex environmental or organismic representation. Many micro-clusters which register some aspect of initial stimuli can be re-utilized to express or at least simulate more complex stimuli aggregations, again driving specialized structural elaboration. Hmm; I rather like that line of speculation, to give it no more credit than it deserves.
Fantastic overview. I fully agree and have been saying a less educated version of this to people for years—every time I hear or read someone saying that they like to exercise because “it gets my dopamine going,” I roll my eyes. Definitely made the right decision to not continue with neuroscience.
What's worse is when people who are actual neuroscientists say stuff like this casually! I, ah, won't name names though...
Haha. It has to start somewhere, right? And what better place than on the mantle of Credentials and Respectability? XD
Are you a fan of Umberto Eco and generally into semiotics by the way? I feel like his blending of a certain analytical académica background (which he liberally makes fun of and points out the obtuseness of) with historical fiction is a natural fit, I’ve read most of his works.
One of my favorite authors - The Name of the Rose was a huge inspiration for my first novel.
I've been similarly amused by the use of dopamine in everyday language. So much so that I put on my writer hat to write a short story satirising it. It got published recently and I thought of this comment haha. (It's here if you're interested: https://www.freedomfiction.com/2024/11/dopamine-by-h-talichi/)
I legitimately flunked the gorilla suit video test (not sure if I saw the original video or a remake). But I was locked in on the task, following the bouncing ball. The ability to narrowly focus our consciousness for a period of time is valuable, but I wouldn't consider it an argument against the primacy of consciousness itself.
We see the same things in my field of economics. Fancier and fancier statistical techniques to squeeze results out of your data but not all that much known about what the results mean.
I once had a post-talk conversation over drinks with a psychology grad student who was convinced that there was an easy story to tell about how the brain assembles knowledge from sense data.
He arranged a set of empty beer glasses into a set of five and said, "see, that's how you get the number five".
I moved the beer glasses out of the arrangement and asked him if the number five didn't exist anymore. I'm not sure he got the point but he did leave in a huff a few minutes later.
We're at a similar stage with cognitive science and AI, including their "neuro-" branches.
As far as fundamental progress, I'm not sure there's been any since McCulloch and Pitts built their artificial neuron. More cynically I'm not sure there's been *fundamental* progress since Hobbes and Descartes. That never stopped scientists from marching on, but my intuition tells me the march isn't going where most of them think.
I really enjoyed your post, thanks! And I definitely agree with you that consciousness has to be at the center of neuroscience. Though I'm wondering if neuroscience is in a pre-paradigmatic phase or rather a just-before-the-revolution phase (or crisis stage - I forget Kuhn's exact terminology). These two stages have significant similarities, but also important differences, I think.
For example, Kuhn notes that in the pre-paradigm stage, intelligent amateurs can make significant contributions to a field (think Ben Franklin and electricity). I'm not sure intelligent amateurs could make significant contributions to neuroscience at this point, could they? That suggests the field has made more progress than the pre-paradigm stage could account for.
On the other hand, the crisis stage generally comes after there has been a universally accepted paradigm in the field, and you make a strong case that there has never been such a paradigm in neuroscience.
So perhaps this is a case where Kuhn's model of paradigms isn't a perfect fit? Possibly because of the subjective/objective issue that's central to the discipline? With electricity, even in the pre-paradigm stage, everyone agreed about the phenomena to be explained. They just disagreed about how to explain it. This seems generally true across the sciences. But in neuroscience (as you suggest) there's not even agreement about the phenomena to be explained! Is it the physical brain and its observable aspects and actions? Or is it consciousness? Or maybe something else?
I think an intelligent amateur couldn't make a significant contribution to contemporary neuroscience, just due to the methodological and technical depths of most of its subfields. But I'm much more willing to say that an intelligent amateur could make a significant contribution to our understanding of consciousness, since even experts are usually just as stumped as when they started out!
This matches my experience. As an amateur I don't have access to a neuroscience lab or the skills to work there, but with sustained effort I can get pretty far into the theoretical side of things and bring a cross-disciplinary perspective that stands a chance of moving the theory forward. While there is no accepted theory of consciousness, there are several candidates, and their proponents are more than happy to talk about them with interested amateurs. We can also come up with proposals for further research in neuroscience even if we as amateurs don't have the skills to carry them out. I'm doing that in a paper right now.
I enjoyed your article and I agree with it a fair amount, especially wrt neuroscience’s pre-paradigmatic aspects. I’m curious if you’ve ever considered that movement, or sensor-guided behavior, might instead be “the main function of brains”...there’s the now oft-repeated argument put forward by people like Rodolfo Llinás and Daniel Wolpert based on primitive vertebrate species like tunicates, which have multi-stage life cycles defined by the presence or absence of a central nervous system--an early mobile stage *with* a brain gives way to a later sessile stage after the tunicate has digested its own brain, presumably because it is no longer useful for guiding behavior. If brains are for consciousness, I’m unsure what to think of this phylogenetic factoid...would love to hear your thoughts!
Thanks :) - we use the term "brain" to include a lot of things, including just, like, bigger nuclei and lumps of neurons. Regarding the earliest possible "brains" and what they were for - I think probably essentially unconscious processing, and then evolution figures out conscious processing instead and it's far superior (see, e.g., how much less data humans need to learn compared to AIs). Regarding movement, in one sense I agree - but eventually you are moving toward something, or away from something, due to conscious reasons. It's pretty interesting to think of the first conscious experiences as being essentially bootstrapped from predator-prey relationships (hunger vs. pain) governing movement.
Thanks for your reply! I wonder what you make of attention schema theory (which I’m currently compelled by), which takes the view that consciousness is a control model of attention, attention being more physiologically basic and akin to a “signal boosting” mechanism (eg an octopus feeling contact flinches because the “touch signal” has been boosted to brain regions that process aversion and control movement, no other awareness necessary).
We can also maybe distinguish between conscious and unconscious “learning” instead of just “processing”; evolution discovers that conscious learners are more effective (and data efficient) than unconscious ones because the former can track subjective states (eg concentration) and adjust their signal boosting accordingly, to learn more from the same data…the simplicity of this is that animals have an ancient system (signal boosting) that turns on itself to track its own effectiveness. This raises the possibility that many creatures might move and even show simple behaviors but without the benefit of modeling their own attention process, unaware they are aware.
I’m agnostic as to whether all this is true, but I’m curious about what your view on this is!
Great article. Loved the depth of analysis. I also agree with you.
Thoroughly enjoyed this piece. Informative and thought-provoking. Looking forward to reading more.