Discussion about this post

Onid:

As an outsider to neuroscience who recently found this newsletter (my research area is AI), everything you're saying makes sense to me. I was actually a little surprised at how little, from my cursory view, neuroscience seems to have drawn on LLM interpretability research like the linked work from Anthropic.

But, assuming everything you said is true, it seems like the consequences of that would be extremely damning for neuroscience, no? Because isn't the biggest problem with consciousness by far that it's untestable? Not just difficult to test, but fundamentally immeasurable in a way that possibly nothing else is.

I just don't see how we could quantify it. If we found a switch that we believed turned consciousness off, how would we know if pressing it worked? We could press it and ask a test subject, but what answer would we expect in that context? We can't exactly expect them to say "Nope, looks like I'm definitely not conscious anymore." Nor could we measure it directly; we'd have to look for proxies, but to design proxies we'd need to circularly assume our theories were true.

Of course, we could try it on ourselves, turning our consciousness off and then back on, but how would that be distinguishable from just putting ourselves in a fugue state or giving ourselves amnesia?

Alternatively, we could develop a theory that maps neurons to qualia; we might find, for instance, a particular set of consistent neural activity that induces the sensation of anger, or happiness. But take out "the sensation of" and you have the exact same results. In other words, if we found neurons that induced certain sensations, we could still describe those sensations without reference to consciousness, just by saying that "the neural activity induced anger" directly. You could do the exact same scholarship - and people presumably already do - without mentioning consciousness at all.

I tend to believe that this immeasurability, more than anything, is why consciousness is left out of the sciences. The fundamental tenet of science - that all theories must be falsifiable - does not apply to consciousness. And if what you say is true, that neuroscience is incomplete without consciousness, then a fundamental understanding of the human brain must always lie beyond the grasp of science.

Marco Masi:

Well said. In a paper (where I also mentioned the paper on reverse-engineering the MOS 6502), I once used statistical arguments to question the weak statistical significance of results that a well-known group of neuroscientists had published on engram cells. But the referee rejected every argument on the grounds that the group is led by an authority too famous to be doubted, and that others had confirmed their results. The point, however, is that the other groups used the same defective procedures and didn't address their statistical weaknesses. In other words, if the boss does it wrong, but lots of people uncritically mimic him, then you must accept that it is all right. In the end, to get my paper accepted, I had to remove that critical part. I suspect this isn't an isolated case; such malpractice is widespread and makes people believe in things that, most likely, don't exist.
