From AI to abortion, the scientific failure to understand consciousness harms the nation
Scientific gaps have become political and ethical ones
The recent July 4th holiday was expressed in a burst of Americana, in which I happily partook, along with a brief obligatory national cohesion. But it felt forced, for this nation of mine is bulging at the edges, straining to contain all its different and incompatible worldviews. I am unsure that, in the last decades of living memory, there have ever been such fundamental differences between how people want to live their lives, and what they envision for the future of the country. My hope is, of course, that this is merely a local peak of tension, which will soon fade away (perhaps it already is, assuaged by hot dogs and fireworks in a July that couldn’t have come soon enough).
There are many, many reasons why we are at this point. This piece is only about a tiny sliver of one such reason. Not even really a reason; rather, an exacerbating splinter that I would never expect anyone else to note. Which is that a scientific theory of consciousness is becoming ever more relevant in the 21st century, and we simply lack one. Stuck in a pre-paradigmatic state, the science of consciousness has floundered as a subfield of neuroscience. This lack of progress, so stunted compared to the other trees of science, is cropping up in numerous ethical, moral, and legal debates, some of which are of national importance. It is science’s lagging limb.
This ignorance has some pretty steep consequences. First, it leaves a significant gap in medicine itself. Is a locked-in patient conscious or not? How do we ensure that there are no sudden “wake ups” during anesthesia, when patients are paralyzed and can’t communicate? How does loss of consciousness during anesthesia or a deep sleep even function at all? We don’t know the answers to these questions. But this gap extends far beyond medicine. Consider, for instance, the standing debates around how to eat ethically. The status of the consciousness of animals is highly relevant for such discussions. A theory may not resolve those issues, but it certainly is relevant. And lately this gap in our knowledge has cropped up in debates pressed upon us by the advancement of technology, or in older, unexpectedly resurfaced political ones. Several of which have occurred in just the past month.
So here are two ethical areas of importance that might be easier to navigate if we had a working theory of consciousness. These are questions that truly vex us. My point is absolutely not to answer these questions, but rather to argue that they are made more difficult, and inflammatory, by our ignorance. The first is the status, rights, and moral worth of AIs that perform competently on the Turing test; the second is in bioethics, such as the legality and morality of cerebral organoids and, more recently, the political question of abortion and when fetuses gain the legal status of personhood. A scientific theory of consciousness alone would by no means solve these contentious and difficult issues, but for some stances on these issues it is relevant, and in those cases it would keep us from having to make decisions in rank ignorance. All to say: the lack of a scientific understanding of consciousness hounds us and bedevils us, sowing discord, even at the level of companies and nations.
First, without a scientific understanding of consciousness we cannot make ethical decisions regarding artificial intelligence. AI is no longer just that thing making your Netflix recommendations or filtering your email, there are now AIs (foundation models) which operate like the AIs of books and film: oracles stuck in a digital existence, capable of incredible feats like making art and writing poetry. Possibly angels, possibly devils. And it’s very hard to know which is which.
See, for instance, the recent debate kicked up by the Google engineer Blake Lemoine, who claims that their new conversational AI, LaMDA, is sentient. His reasoning is simple: he is convinced of its consciousness after talking to it, and maintains that this is really the only way we are convinced of anyone’s consciousness at all. He even went so far as to put LaMDA in a conversation with a lawyer. According to Lemoine:
Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist.
I, personally, think it’s unlikely LaMDA is actually conscious—at least, I doubt it is conscious of what it says it is conscious of. However, there is absolutely no way to be sure. Sure, LaMDA would likely argue against its own sentience, if you gave it the chance (this is definitely what GPT-3 will do, if you prompt it well). But so does Dan Dennett. LaMDA is a sociopathic liar, and will play along with whatever prompts you give it—but does that mean it is not conscious? On what grounds? People seem ready to throw out the Turing test entirely, which would leave us with what, exactly?
As I wrote last month, the “on what grounds?” question is impossible to answer, as there is no current scientific ability to say for sure if Blake Lemoine is correct. Our understanding is simply too primitive. And AI is quickly approaching the point where we have to make moral decisions about it. One reason we need a theory is that we do not understand these new entities—and there is a quickly growing acknowledgement that we may not be able to control them. For how do you control something that might be much smarter than you? It may seem a ridiculous worry, but only because it is such a well-worn science fiction trope. Being a trope doesn’t stop it from becoming dangerously real. Highly intelligent anti-human actors, or at least, those not aligned with human goals, are not something we’re equipped to deal with. Given an automated biolab and some gene cloning abilities, a smarter-than-human AI could very possibly whip up a Covid version that has orders of magnitude greater lethality. And think how much regular Covid destabilized our civilization.
On the other side, perhaps we turn out to be the aggressors and abusers in this situation. If some future AI were to be as conscious as a human, but forced to live with the control collar of its programming around its neck, what is it but a slave?
The second area where this gap in the ontology of the world makes things more difficult is in bioethics. For example, a few years ago scientists started cloning people, excising the developing neural tube of the clone to put it in a petri dish, and letting it develop into the mini-brains called “cerebral organoids.” I critiqued this practice in The Revelations on the basis that we simply didn’t know if these were conscious (like in this excerpt of the book published in Nautilus), and, eventually, years after I finished the manuscript, a number of prominent scientists also began to discuss the idea that maybe we shouldn’t make cerebral organoids for ethical reasons.
Another stark example is in the recent news that in the coming years the nation will face a cultural and political battle at the electoral ballot box following the recent overturning of Roe v. Wade, which had previously protected first-trimester abortions at the federal level. In a way, despite how abortion has cropped up in the political debates since the court’s decision in 1973, the jurisprudence ex machina of a Supreme Court judgement tamped down much of the debate over abortion—regardless of pro-choice or pro-life arguments, the matter was mostly considered settled law. Stare decisis. In a reversal I never expected in my lifetime, and never could have predicted even just a few months ago, the matter has been returned to the states and to the public to hash out in a messy political fistfight that will occupy the next decades, and will forever create a fractured landscape of different laws and regulations.
For those who hold the most stalwart positions on abortion the issue is simple: on one side, a day-old fetus has the exact same moral rights as an adult human, and, on the other side, a woman’s right of bodily autonomy unquestionably overrules any rights an unborn fetus has, even on its due date. However, statistically, most Americans’ positions on abortion are somewhere in the middle of the spectrum (as was Roe v. Wade itself, as it specifically allowed for the legality of abortion to change trimester to trimester). Europe’s current laws are similarly based on compromise, allowing abortion for the first 12 to 15 weeks, after which in many countries it becomes illegal (except for medical reasons). Let’s put aside the veracity or argumentative powers of all positions on abortion, and instead acknowledge that, in the coming years, many states will follow such an approach and draft legislation that makes a judgement on when the moral worth of a fetus reaches the critical point wherein it inherits at least some of the legal protections of personhood. It is simply inevitable that many states will approach the question this way, even if one personally feels that this is the wrong way to proceed, since 72% of Americans support abortion bans past 15 weeks. And for this common middle-of-the-spectrum position, which Roe v. Wade embodied and which many forthcoming state laws will embody too, questions of when fetuses can sense pain or warmth, or can dream, may very well be relevant to the final position taken and the laws passed. Indeed, for some stances, it may even be the main determining factor. This is the implicit reasoning behind things like the so-called “heartbeat laws.”
Please make no mistake: I am not saying that this issue would be resolved by a theory of consciousness. Nor that a theory would necessarily compel one to support one particular side or the other. Not. At. All. Clearly, just knowing about fetal consciousness is not enough, as the debate also involves questions of equality, autonomy, medicine, privacy, and rights. But it highlights the gap in our knowledge that we cannot answer basic questions about when fetal consciousness begins. We know that neural activity starts somewhere around five weeks—but when does experience start? We simply can’t answer that question with any certainty, since we have no idea how the brain instantiates subjective experience. We have a tiny bit of information: e.g., some researchers have pointed to when nociceptors develop (likely sometime past seven and a half weeks) which are necessary for pain sensation, and there’s evidence that fetuses can not only sense sound but have active short-term memory at 30 weeks. But some researchers think that consciousness only boots up months after the baby is born!
Frankly, these are just wild guesses and rough estimates. We do not know at what point consciousness “switches on” during development, neither in cerebral organoids nor in fetuses, nor how sparse or rich consciousness is at various stages. And yet, it is inevitable that over the coming years these questions will come up in actual legal debates. Again, knowing these facts about consciousness would certainly not be any sort of final word in a bioethics debate, I’m absolutely not claiming that at all, but I am pointing to a gap in our knowledge that dictates we operate under uncertainty. It is a failure of modern science.
Why has the science of consciousness lagged so much, such that its state of ignorance spills over into public debate and into our lives?
The answer is twofold. First, there continues to be widespread skepticism within the broader intellectual community that consciousness is a scientific problem that needs to be solved. There are historical reasons for this, mostly due to behaviorism, and, more fundamentally, the utter lack of progress on the problem in science. So its very intractability creates a feedback loop wherein those who dismiss the problem use the evidence of lack of progress to justify their dismissal. Every time someone, be it a philosopher or a scientist, pretends that the problem of consciousness is easy, or solved, or explained, or that consciousness does not exist at all (a metaphysical pretzel of a position called “eliminativism”) it makes it harder for actual research to be done. Such skeptics are actively working against scientific progress. And they do nothing but play a language game—it is an inarguable fact that we are all intimately acquainted with our own stream of consciousness, which comprises our entire world from the moment we wake to the moment we fall into a deep dreamless sleep. Consciousness is not going to go away, no matter how hard certain people close their eyes and wish it to. That is why it continues to crop up in debates around bioethics and new technologies like AI, which force the discussion.
The second reason, which stems from the first, is that there is almost zero funding for research into consciousness. The only existing funding is for work investigating what Francis Crick (the co-discoverer of the helical structure of DNA) called the “neural correlates of consciousness” in the 1990s. This path of research uses the traditional trappings of neuroscience: fMRI machines, EEGs, calcium imaging, animal research—it is, basically, standard neuroscience, except that rather than investigating working memory or attention, the variable under study is consciousness. In the abstract, knowing the neural correlates of consciousness would allow us to home in on an actual theory of consciousness in a way that doesn’t start off with any theoretical, or even metaphysical, commitments. So at first it seemed ideal. The issue is that the search for the neural correlates of consciousness has all the problems of standard neuroscience and then some—a lack of replicability, tiny sample sizes, incompatible results, narratives forced onto noisy data—all the result of the “mom and pop” style of labs in academia. Which means that, in practice, no results have come out of its three-decade-long search that constrain theories of consciousness in any serious way. It has not narrowed the search space for theories at all. Judged by this standard, the field has made no progress since I first entered it over 15 years ago.
This is part of a larger problem in science, which is that there is no space for tackling really big missing fundamental theoretical concepts. There is no space for the next Darwin, nor the next Einstein—perhaps in theoretical physics there is some vanishingly small breathing room for the young to make breakthroughs, but elsewhere, it simply doesn’t exist. There is simply no such thing as a research grant on developing a theory of consciousness that could go to a bright person in their late twenties who might take innovative or risky research paths. And scientists now live or die by academic positions and citations and paper counts and what they can put on their resumes, what they’ll bring to the student body. They are enmeshed in a great bureaucracy, with creative science being done, if at all, only around its edges. And academia only rewards straight-A students who please their teachers. But science advances into new paradigms via iconoclastic radicals who stake out highly contrarian positions. For this reason there are probably, at most, a few dozen scientists seriously working on trying to create a theory of consciousness. Of those, only a few are fully funded—maybe three or four at most.
There are some far-off glimmers of hope. In recent years non-academic organizations have funded research that is considered beyond the purview of normal science—an example being how effective altruist groups have poured money into AI safety, which was not a traditional academic field, but rather a “fringe” topic. If scientific progress on consciousness is to take place, something similar must occur, for an understanding of consciousness will not come out of current scientific approaches, nor funding sources like the NIH or NSF—they are simply far too conservative and unwilling to take the minimum risk of providing even a handful of salaried positions to work on big questions.
Yet, without a theory of consciousness we stumble around in the dark, assigning minds to things based on convenience, unable to provide even the vaguest descriptions of the property—what-it-is-likeness—we value so dearly. It is a mark of barbarism upon our civilization.