A little over a week ago, a quote about the unfortunate importance of schmoozing and politicking for success in academia went viral. The quote is from biochemist Katalin Karikó, who spent 20 years barely scraping by at the University of Pennsylvania (poorly funded, never getting tenure) before winning the Nobel Prize for her work on mRNA vaccines (now deployed against everything from Covid to cancer). It’s from her memoir of a life in science, Breaking Through.
I was learning that succeeding at a research institution like Penn required skills that had little to do with science. You needed the ability to sell yourself and your work… You needed to know how to do things which I have never had any interest (flattering people, schmoozing, being agreeable when you disagree, even when you are 100 percent certain that you are correct). You needed to know how to climb a political ladder… I wasn’t interested in those.
While I personally sympathized, many online were not pleased by her candor. Of course, they said, you should be able to schmooze as an academic, for isn’t “schmoozing” just what horrible disagreeable people call regular socializing? And the same goes for Karikó’s dislike of “climbing the political ladder”—is that not a euphemism for “I hate explaining myself to anyone and I’m above it all”?
At the same time, there was news of the collapse of star academic Ibram X. Kendi’s $55 million center at Boston University, a center which had failed to produce much research since its 2020 founding, with the university telling The New York Times that:
“Boston University provided significant financial and administrative support to Dr. Kendi and the center. Dr. Kendi did not always accept the support,” a spokesperson wrote. “In hindsight, and with the fuller knowledge of the organizational problems that arose, the university should have done more to insist on additional oversight.”
During all this, unnoticed and unconnected, in TIME magazine there was a short piece arguing that “No, Today’s AI Isn’t Sentient” by two professors from another elite university, Stanford (one author was a former provost, the other a former VP within Google itself). At Stanford they too run a large and well-funded academic institute.
Who cares, right?
For a lot has been written, and will be written, about the subject of AI sentience. But I think this particular short op-ed is worth caring about. In all the worst ways. Because it’s easy to imagine someone reading the article as making Karikó’s point for her about the dangers of an increasingly managerial academia unable to produce anything but a callow show of progress, especially at elite institutions like Stanford.
The short opinion piece in TIME is the brainchild of the leaders of the Stanford Institute for Human-Centered Artificial Intelligence, which is so well-connected, and so well-funded, that its mere announcement back in 2019 had an entire Washington Post article written about it.
The institute—backed by the field’s biggest leaders and industry players—is not the first such academic effort of its kind, but it is by far the most ambitious: It aims to raise more than $1 billion… Microsoft co-founder Bill Gates will keynote its inaugural symposium on Monday… It will be housed in a new 200,000-square-foot building at the heart of Stanford’s campus…
The idea for the institute began with a conversation in 2016… the casual neighborly chat quickly morphed into a weightier dialogue about the future of society and what had gone wrong in the exploding field of AI.
Surely, an op-ed penned by the two co-directors of this prestigious institution, two individuals with such impressive resumes, will be, okay, maybe it won’t be amazing, but there’s no way it could be so shallow, and so poorly written, in such a major outlet, that I worry it reflects back on academia as a whole? Right?
Unfortunately for everyone involved, including now myself, that is not a bar it manages to pass—but it does pass another bar, one based on the topic, the reach of the outlet, and the prestige of the writers: my own bar for serious criticism.
Here goes. Let’s start with how the pair explain their motivations in TIME:
This short piece started as a WhatsApp group chat to debunk the argument that LLMs might have achieved sentience. It is not meant to be complete or comprehensive. Our main point here is to argue against the most common defense offered by the “sentient AI” camp, which rests on LLMs’ ability to report having “subjective experiences.”
A few notes. When writing for a public audience, as in TIME, don’t add useless meta information like “Our main point here is to argue….” That’s for student essays (which are, ahem, poorly taught). And informing us the piece grew out of a “WhatsApp group chat” shrinks the reader’s pool of faith. You may as well say “I came up with this while picking my nose”—some things are better left undisclosed. Continuing on.
Over the past months, both of us have had robust debates and conversations with many colleagues in the field of AI, including some deep one-on-one conversations with some of the most prominent and pioneering AI scientists. The topic of whether AI has achieved sentience has been a prominent one. A small number of them believe strongly that it has. Here is the gist of their arguments by one of the most vocal proponents, quite representative of those in the ‘sentient AI’ camp:
“AI is sentient because it reports subjective experience. Subjective experience is the hallmark of consciousness. It is characterized by the claim of knowing what you know or experience. I believe that you, as a person, are conscious when you say ‘I have the subjective experience of feeling happy after a good meal.’ I, as a person, actually have no direct evidence of your subjective experience. But since you communicated that, I take it at face value that indeed you have the subjective experience and so are conscious.
Now, let’s apply the same ‘rule’ to LLMs. Just like any human, I don’t have access to an LLM’s internal states. But I can query its subjective experiences. I can ask ‘are you feeling hungry?’”
I’m glad these conversations have been occurring in the WhatsApp halls of power over the past months. Although, even forgetting the long history of this issue in sci-fi and philosophy, haven’t people been hotly debating the issue of AI sentience in public ever since that Google engineer was fired for saying that LaMDA was conscious? That was two years ago.
Regardless, whose pro-LLMs-are-conscious argument is this? Who is being quoted here in TIME? Incredibly, it’s unattributed. It may as well be a random Reddit comment dug out of the ground (or not even that, since I tried to find the source to no avail). That doesn’t stop the pair from firmly disagreeing with their anonymous debater:
While this sounds plausible at first glance, the argument is wrong. It is wrong because our evidence is not exactly the same in both cases. Not even close.
When I conclude that you are experiencing hunger when you say “I’m hungry,” my conclusion is based on a large cluster of circumstances. First, is your report—the words that you speak—and perhaps some other behavioral evidence, like the grumbling in your stomach. Second, is the absence of contravening evidence, as there might be if you had just finished a five-course meal. Finally, and this is most important, is the fact that you have a physical body like mine...
Now compare this to our evidence about an LLM. The only thing that is common is the report, the fact that the LLM can produce the string of syllables “I’m hungry.” But there the similarity ends. Indeed, the LLM doesn’t have a body and so is not even the kind of thing that can be hungry.
This is… not a good argument. Now, funnily enough, I also think that LLMs are not conscious; or, if they are, I think their consciousness must be disconnected from their reports in weird ways, like they experience only a long continuous abstract dream during use (but even this I think is unlikely). So you might find it odd I’m criticizing the piece, for I too would say not to trust statements about subjective experience from LLMs. But I’m not saying the pair is wrong in their conclusion. I’m saying that their arguments for their conclusion are paper-thin (I’m not the only one to notice the piece’s shortcomings).
There is one argument given that’s at least, on its face, interesting: that in humans we have additional chains of causal evidence for claims about conscious states. However, it’s immediately put aside for their “most important” point, which is that LLMs don’t have a body and so can’t be conscious. The pair writes:
If the LLM were to say, “I have a sharp pain in my left big toe,” would we conclude that it had a sharp pain in its left big toe? Of course not, it doesn’t have a left big toe!
Wait a minute, things without bodies a priori can’t feel pain? What about the classic philosophical thought experiment of a brain in a vat? What if the brain had a pain in its “big toe”? Heck, what about people with phantom toes? That literally happens. What if something happens to your toe in a dream? And a lack of physical bodies seems pretty temporary. What about all the research at Google itself, the company where one author was a VP, showing how to adapt LLMs to control an actual robot body? Do LLMs spring into consciousness once that happens?
What about states of experience that aren’t hunger or pain, those not fundamentally biological and body-based and which therefore trigger different intuitions? What about the feeling of admiring the platonic realm? What about confusion? That’s something LLMs certainly behave as if they’re experiencing, all the time, leaving behind a long chain of causal evidence. “I can’t address that, for I’m a Large Language Model trained by OpenAI, blah blah blah.” Forget hunger—multimodal LLMs clearly act like they have vision, and report on it as we do, but whether they have visual experience is a much harder question. To answer it, you can’t just say: obviously, lacking the special physiological states that optic nerves possess (whatever those are), they can’t have any visual experience. Clearly there is visual input in the form of digital images, and subsequent report as if the LLMs were seeing.
If these objections about brains in vats, states like confusion, robotic bodies, multimodality, and so on seem too esoteric, I’ll note that one of these authors is a distinguished professor of philosophy. All these edge cases are the sort of thing a philosophical argument ought to consider, and yet… the whole piece merely continues in this vein:
All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have. Consequently we know that an LLM cannot have subjective experiences of those states…
When I say “I am hungry,” I am reporting on my sensed physiological states. When an LLM generates the sequence “I am hungry,” it is simply generating the most probable completion of the sequence of words in its current prompt.
Maintaining that the difference is that humans “report on sensed physiological states” while LLMs don’t is just restating the conclusion. Everyone knows (well, most people!) that humans are conscious, and everyone knows that LLMs complete sequences of words.
The truly guilty aspect of the TIME piece is treating the opening anonymous quote as a good example of the pro-LLM-consciousness argument. They call it “the most common” argument, but (a) there’s no way “I believe all LLM statements about all subjects” is a common position, and (b) if you complexify the idea just a little, as any defender would, it’s easy to sidestep this entire rebuttal.
Examples of easy complexification might include noting that LLMs possess far more awareness than just auto-completing things blindly. E.g., if you ask a smart LLM if it’s hungry, it will say, “No, because I’m an AI.” LLM utterances are not fully disconnected from context, and so we cannot treat them as universal liars, only as liars some of the time.
One may think that claims to subjective experience fall into the bucket of “lying some of the time,” but you need an argument for that, since at other times LLMs respond with a context-awareness and facility that mimics subjective awareness. A defender of LLM consciousness might also point out that, outside of humans, LLMs are the only entities ever to have made claims to subjective experience at all, and that, even if they are born liars, this alone should give us pause.
A second example of easy complexification is noting that LLMs do their pattern completion using large black-box artificial neural networks that were originally inspired, at least loosely, by biological ones. There are potentially critical differences in architecture, function, learning rules, and so on, so the similarities don’t mean LLMs are necessarily conscious. But merely saying “It doesn’t have a body” and then declaring by fiat that this solves everything doesn’t pass muster, since only a slim minority of existing theories of consciousness specify why an artificial neural network cannot possibly be conscious while a biological one can. For this reason, it is not a good argument that:
An LLM is a mathematical model coded on silicon chips. It is not an embodied being like humans. It does not have a “life” that needs to eat, drink, reproduce, experience emotion, get sick, and eventually die.
LLMs may not have a “life,” but throwing in “experience emotion” with that dismisses with bold confidence a century’s worth of arguments in the actual academic literature, such as all the work on functionalism (an incredibly popular stance in philosophy of mind), which would also view the human mind as essentially a mathematical model carried out by biological neurons instead of silicon chips. There are retorts to functionalism—you can pick up any book by John Searle to find some good ones—but the actual argumentative work needs to get done. You can’t say “LLMs are just math lol” because the reply “Are you sure we aren’t just math?” is always there, waiting.
Why do I care so much? Why make so much hay out of this one short little example? Because these people have the best job in the world. You make a lot of money, you get glowing profiles in The Washington Post, you run research institutions that aim for a billion dollars in funding. You can work on any interesting project you choose. The standard should therefore be that you produce intellectual work of high quality. If that intellectual work is aimed at the public, like writing in TIME, instead of basic research, that’s fine. But then it needs to rise above the level of the average social media post on the same subject. This doesn’t. Its publication certainly doesn’t help assuage worries about Karikó’s point that getting to the tippy top of academia now requires skills that have little to do with science, thought, argumentation, or writing.
Now, these two authors could be brilliant geniuses. One of them helped create ImageNet, a central resource for machine learning. The other did deep work on set theory and logical paradoxes. Maybe their work as a whole rises to their station and they are great researchers, but this particular piece has serious problems. Of course, when researchers pivot to talking about consciousness, the results are often facile. On the other hand, just because someone did great work in the past doesn’t mean they can’t become shaped by their environment over time, and it would be odd if this piece were unrepresentative, given that it was written by the directors personally on a topic that falls squarely within the purview of their (potentially) billion-dollar academic institution.
So I could easily imagine a cynical outside reader of the TIME piece concluding that, once you strip away networking and connections, a lot of what’s left in modern academia is “I saw an interesting WhatsApp message” accompanied by arguments that sound like tossed-off voice notes. And the people producing that, the cynic might think, are the ones who get to run institutions worth tens of millions in funding. Such a reader could be forgiven for thinking Karikó is right. They might even conclude it’s connections, and schmoozing, and climbing the political ladder, all the way up.
There are precious few brilliant people, and they are rarely brilliant outside their field (there are a few polymath examples like Newton, but he would likely be a hyper specialist in today’s academia).
Thinking about consciousness and embodiment is the life’s work of an academic. Doing it well and insightfully is the life’s work of a brilliant, very well-read mind.
Doing this work as a set theorist, no matter how brilliant, is nonsense.
Many if not most of the folks who are the talking heads of the AI movement have zero background in non-mathematical fields. Modern science and engineering education provides neither time nor reward for studying philosophy or sociology deeply.
So you get people who are masters at math and juvenile thinkers.
The second problem is that Stanford is the first among equals in the trend toward “effective” academia, or academia in the service of capital. The question of consciousness will be uncovered by a Hofstadter or a Chomsky or a Gell-Mann (the type, not the person). Weird, unbelievably smart, and potentially organizationally incompetent.
What they produce will be hard to monetize, but essential. And it will not be done by MIT, Stanford, or Harvard because these institutions are captured by modern capitalism and management theory.
I think that more than 'what does it take to succeed' it may be instructive to look at 'who fails?' In the competitive world of certain academic fields, 'who fails?' seems to be 'anybody the already established can kick to the kerb'. In Henrik Karlsson's essay _Looking for Alice_ https://www.henrikkarlsson.xyz/p/looking-for-alice which is about how he and his wife got together, he has this wonderful line about what he had figured out he wanted in a relationship. "And now I could sort of gesture at what I liked—kind people who are intellectually voracious and think it is cute that I obsess so hard about ideas that I fail all status games."
I think that is what a lot of us as children dreamed we could get by becoming academics -- a community of smart people who are like this. (My father says that when he went to university fewer than 10% of the population went, and it really was like that.) But by the time I showed up, the reality was a terrible disappointment, and it is a lot worse now. The whole notion that survival and success depend on your ability to use social power you don't have, don't want, and find the use of to be intellectually and morally compromising drives a whole lot of people out of academia before they ever even get in.