73 Comments
Jun 6 · edited Jun 7 · Liked by Erik Hoel

There are precious few brilliant people, and they are rarely brilliant outside their field (there are a few polymath examples like Newton, but he would likely be a hyper-specialist in today’s academia).

Thinking about consciousness and embodiment is the life’s work of an academic. Doing it well and insightfully is the life’s work of a brilliant, very well-read mind.

Doing this work as a set theorist, no matter how brilliant, is nonsense.

Many if not most of the folks who are the talking heads of the AI movement have zero background in non-mathematical fields. Modern science and engineering education provides neither time nor reward for studying philosophy or sociology deeply.

So you get people who are masters at math and juvenile thinkers.

The second problem is that Stanford is the first among equals in the trend toward “effective” academia, or academia in the service of capital. The question of consciousness will be answered by a Hofstadter or a Chomsky or a Gell-Mann (the type, not the person). Weird, unbelievably smart, and potentially organizationally incompetent.

What they produce will be hard to monetize, but essential. And it will not be done by MIT, Stanford, or Harvard because these institutions are captured by modern capitalism and management theory.

Interesting points. Perhaps math demands reduction-based thinking, at least the kind of math used for accounting. For example, my experience in finance was to be encouraged to synthesize corporate financial report data down to 2 to 4 variables. That was the most information that could be digested by uninterested business partners, perhaps even interested ones. What never got into those reports was the psychology of the place I worked, which was that no one cared about finance at all! On the other hand, there was always spin, and encouragement and motivation to spin (lie) about the data/facts in a way that would help the people I worked for look better and keep us open despite no hope of profitability or efficiency.

My point is, you could look at my financial reports showing 3 reasons why we didn't meet this week's financial goals and it all makes sense. You are going to have to come in and really smell around if you want the truth. Maybe my point is, if you ask a math person who is motivated to continue developing AI a philosophical/psychological question about the meaning of creating a new lifeform for profit, you get a self-motivated answer: everything is fine here, no need to worry about it. Just like finance reports that all point to an answer we can explain away every week.

I think this is a valuable distinction. So many effective people do what you are talking about, which is creating models that dramatically simplify a system in order to make it comprehensible and manipulable.

A great model captures signal and ignores noise, providing a tool for guiding decisions. Unfortunately, the types of systems that yield to such simplification don't include human metabolism, the economy, consciousness, or ecology, and when Excel folks dabble in these, they ignore that the noise is the signal, meaning everything is signal, and the only accurate model is the system itself.

I ran a high-level statistical modeling company for marketing years ago, and we referred to the use of math to justify an outcome as "Statistical Convenience" (as opposed to statistical confidence).

> we referred to the use of math to justify an outcome as "Statistical Convenience" (as opposed to statistical confidence).

I like Stark's quantifauxcation myself.

I agree with your premise but it's coloured in a regressive light by "Thinking about consciousness and embodiment is the life work of an academic."

Not so. An academic has just as much distraction from the unrelated requirements of being an academic as someone in another (non-academic) field presumably does from their occupation. In other words, while an academic gets better access to resources, that does not imply that others can't do just as well by putting just as much effort in, just as it doesn't imply that the resources academics have access to are actually helpful towards that end, especially in areas that are not technically specific, like consciousness. We have thousands of books on it from academics with no particular conclusions or progress.

I very much appreciate your response and totally agree. Replace “academic” with serious thinker. I have always conflated the two, but I am beginning to understand.

Jun 6 · edited Jun 6 · Liked by Erik Hoel

I think that more than 'what does it take to succeed' it may be instructive to look at 'who fails?' In the competitive world of certain academic fields, 'who fails?' seems to be 'anybody the already established can kick to the kerb'. In Henrik Karlsson's essay _Looking for Alice_ https://www.henrikkarlsson.xyz/p/looking-for-alice which is about how he and his wife got together, he has this wonderful line about what he had figured out he wanted in a relationship. "And now I could sort of gesture at what I liked—kind people who are intellectually voracious and think it is cute that I obsess so hard about ideas that I fail all status games."

I think that is what a lot of us as children dreamed we could get by becoming academics -- a community of smart people who are like this. (My father says that when he went to university fewer than 10% of the population went, and it really was like that.) But by the time I showed up, the reality was a terrible disappointment, and it is a lot worse now. The whole notion that survival and success depends on your ability to use social power you don't have, don't want, and find the use of to be intellectually and morally compromising drives a whole lot of people out of academia before they ever even get in.

Wow, I was surprised by how bad that editorial (https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/) was. It makes so many bombastic claims with nothing to back them up.

I was most annoyed by

>One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red. Sentience is a crucial step on the road to general intelligence.

I can hear Peter Watts screaming faintly in the distance.

They were clearly once good researchers, so what I suspect is happening here is simply the ill effects of power.

There's a great Emmett Shear tweet thread on power (https://x.com/eshear/status/1660356496604139520). The relevant bit:

> Almost no one has the control and integrity which would allow them to prevent the power from coloring what they say to the Emperor. The result of over time is delusion and paranoia, in proportion to quantity of power. Hence, “power corrupts” is actually shorthand for “power corrupts your world model”

These people have so much power that none of their associates have been willing to bluntly correct their idle philosophical arguments in a long time.

author

"Power corrupts your world model" is a really good way of putting it. I'd add too that humans change pretty quickly in terms of their skill sets, with old ones replacing new. If you become someone who mostly just manages day-to-day, it's very easy to lose really basic technical skills. In some sense, being a PI (the leader of a scientific lab) is all about balancing this, and most PI failures are some failure of balancing where either they stay too technical and micro-managerial or they become too aloof and disconnected.

Jun 6 · Liked by Erik Hoel

Now now, I think the editorial makes a perfectly valid argument. Look, I'm going to apply their test for "sentience" to my wife:

1. My wife just told me she's hungry.

2. But I just heard her having a snack in the kitchen, like, 20 minutes ago.

3. Therefore, she can't possibly be hungry.

4. Ergo, my wife lacks phenomenal consciousness. Or, uh, "sentience", I guess they're calling it now.

Science!

Jun 6 · Liked by Erik Hoel

For the record, my wife does that to me all the time. “But you just ate?” “Yeah, and I’m hungry. The two are not mutually exclusive”

The science is clear: Aristotle was right, women don’t have souls. Stanford University and Time magazine proved it!

I think that this really demonstrates just how remarkably bad otherwise smart and rational people can be when it comes to reasoning about consciousness.

author

There's some old joke about how the ultimate metamorphosis of becoming a tenured professor is just to wildly speculate about consciousness.

Dan Dennett sure made a good run of it.

LOL. Not to speak ill of the recently departed — he was a great guy who had a lot of good ideas — but “speculating wildly” wasn’t his problem when it came to phenomenal consciousness. I’d say that “adamantly denying that there’s anything to speculate about” comes closer to the mark.

I consider Dennett to be the flipside of someone like Philip Goff. Both basically made an extreme speculative case for or against consciousness. "What if everything is consciousness" and "What if nothing is conscious" both just stake out the edge cases of the debate.

Uh, except the only thing I know with 100 percent certainty is that Dennett’s position is wrong. I assume it’s the same for you.

EDIT: I should add that, having recently tried to reread and really understand Dennett's insane "user illusion" proposition (shortly after his death, I felt it was my duty to steel-man his argument), I don't think that he necessarily denies that phenomenal consciousness exists. Instead, he was just so committed to physicalism as a totalizing ideological (not scientific or philosophical) position that he seemed to be incapable of comprehending any argument outside of that framework. So his comments about "consciousness" as an "illusion" were made from a purely physicalist standpoint: when he said "consciousness", he was talking about something like "the ability of a biological system to access a unified representation of its internal states". In other words, completely orthogonal to the hard problem of consciousness. To him, especially in his later years, asking him about phenomenal consciousness per se was like asking any lifelong religious or political ideologue a question about something outside of their ideological framework: it just does not compute, and they have no idea what you're talking about. ("Why don't you just make ten louder and make ten be the top number and make that a little louder?" ".... these go to eleven.")

When I read that the idea came from a WhatsApp chat, it feels like hedging and preemptive defensiveness. Like, “Hey, we’re just noodling here. Don’t expect much.”

author

I think you're right that's one reason it bothers me. It's also sort of a weird move, because it's saying "My WhatsApp messages contain people like the most vocal proponent of LLM consciousness" but then... doesn't identify that person? It seems unlikely to me personally that the best arguments for or against particular positions (the kinds of things worth writing about in TIME) are occurring in private DMs, when there are plenty of much more complex and interesting arguments (both for and against) that can be found. And I think broadly, when writing in an outlet with a big audience, academics have a "duty to thoroughness" that is no more than the low bar of presenting not-terrible arguments (even if wrong) against somewhat-representative statements (even if not perfectly representative).

Great point. It reminds me of Thomas Friedman, who for years would plant an Egyptian cab driver into the intro of his column to buttress his position.

Friedman is the intellectual godfather of everyone who tweets semi-profundities ventriloquized into the mouths of precocious 5-year-olds.

Hey, Galileo invented Simplicio to make all the arguments for geocentrism; why can't Friedman make someone up to share his equally spectacular insights?

I raise you Thrasymachus.

It’s the most disgusting gambit in any discussion, and you see it online all the time. If what I put out there is garbage, don’t take it so seriously! But if you agree with it and think it’s right, well, look how little effort it took me, aren’t I brilliant? I spit on it.

Unfortunately, large private sector companies may not be all that much better. I quit middle management at a Big Tech company a few years back in part because it was clear that in order to advance, I would have to spend more time schmoozing and less time doing the mentoring and project oversight work that I actually liked and felt I added value by doing. In corporate speak I needed to do more "managing up and across".

Legibility in large organizations is a hard problem! What do you think are the best historical models of deliberately and effectively avoiding creating these kinds of schmoozing centered norms in response to that problem?

No idea. People promote people they know, trust, and like.

Jun 6 · edited Jun 6 · Liked by Erik Hoel

This must be a new issue in hard sciences? The issue of networking trumping talent was going on in the 1990s in cultural anthropology...I wrote about my own failure to schmooze in my latest book - it was the real source of my failure in grad school...of course, I have Asperger's, so schmoozing was going to require professional training like an MMA fighter...I had the opposite effect on most people...

Oh my gosh. Totally get it. I love chatting, but when I am working I want to work. When everyone seems to have such limited time, how am I supposed to act like I care what ice cream flavor my team likes? We have problems to solve! I become so impatient. :(

One of the annoying parts about debates surrounding the consciousness of LLMs is that they spend hours and hours talking about the neural networks and all that jazz, but the neural network is a reasoning engine built by way of a language model.

For it to have a kind of genuine subjective experience, it would need to be able to learn, and in no way are LLMs learning as they go. That's partially a computational limitation at the moment, but if LLMs are able to adaptively re-train/re-tune their weights as a conversation develops, and to have a kind of memory of their experiences in the world, they will be much closer to a real kind of subjectivity and genuine consciousness. Until then, paradigmatically we are not that different now from the early 2000s talking to SmarterChild on AOL. The engine behind the chat bot is leagues ahead, but there has not been this kind of advancement yet.

How much better and smarter would an LLM be if it could use its memory from past conversations with different people to answer questions in new ones? At the very least, we'd approach a much more social, human-like intellect that is learning as it goes.
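The cross-conversation memory imagined here can be sketched as simple retrieval: store past exchanges externally and prepend relevant ones to a new conversation's input. The names below (`model`, `remember`, `answer`) are all hypothetical stand-ins, not how any actual product works:

```python
# Toy sketch of cross-conversation "memory" via retrieval: past exchanges
# are stored outside the model and prepended to a new conversation's
# context. `model` is a hypothetical stand-in for a frozen LLM; note that
# no weights ever change, only the text fed in.

memory = []  # list of (topic, exchange) pairs from earlier conversations

def model(prompt: str) -> str:
    # Stand-in for a pre-trained LLM with fixed weights.
    return f"[reply using {len(prompt)} chars of context]"

def remember(topic: str, exchange: str) -> None:
    memory.append((topic, exchange))

def answer(topic: str, question: str) -> str:
    # Naive relevance: exact topic match. A real system would embed the
    # question and rank stored exchanges by similarity.
    relevant = [ex for t, ex in memory if t == topic]
    context = "\n".join(relevant) + f"\nUser: {question}\n"
    return model(context)

remember("eggs", "User: How do I poach an egg? Assistant: Simmer, don't boil.")
print(answer("eggs", "What temperature was that again?"))
```

Even this toy version shows the gap the comment points at: the "memory" lives in an external store and a retrieval heuristic, not in the model itself.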

Jun 6 · Liked by Erik Hoel

> for it to have a kind of genuine subjective experience, it would need to be able to learn and in no way are the LLMs learning as they go

Can you elaborate on this? What theory of consciousness are you relying on here? And also, what do you mean by LLMs not "learning"? I'm not familiar with any theories of consciousness that posit learning as fundamental to phenomenal consciousness (what about people with long term memory deficits?), and I also don't understand how an LLM continuing to adjust its weights based on supervised or unsupervised learning during interaction sessions doesn't count as "learning".

As far as I understand, LLMs do not adjust their weights during interactions; the weights are fixed after training. All that happens in an interaction is that everything you said before in that conversation, and the LLM's replies, get fed back in as input (it's called context). That's not continual learning. But I'm not an expert, just repeating what I heard.
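A minimal sketch of that point, with a hypothetical `model` function standing in for a real frozen LLM (not any actual API): nothing inside the model changes between calls, only the accumulated context grows.

```python
# Toy chat loop illustrating inference with frozen weights. `model` is a
# hypothetical stand-in for a pre-trained LLM; it behaves identically on
# every call, and only `context` (the conversation history) changes.

def model(prompt: str) -> str:
    # Fixed behavior, standing in for fixed weights.
    return f"[reply to {len(prompt)} chars of context]"

def chat(user_turns):
    context = ""                      # accumulated conversation history
    replies = []
    for turn in user_turns:
        context += f"User: {turn}\n"  # prior turns are re-fed as input
        reply = model(context)        # same frozen model every call
        context += f"Assistant: {reply}\n"
        replies.append(reply)
    return replies

print(chat(["Hello", "Tell me more"]))
```

The appearance of the model "remembering" the conversation comes entirely from re-feeding the transcript, not from any update to the model itself.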

I’m fairly certain that the text of interaction sessions gets used as training data at some point, in some capacity, for at least some models. There’s a problem with using LLM outputs to train the same LLM, of course, but I believe some LLM-based products gather user feedback during interaction sessions.

That may be the case, but I'd be surprised if that update happens during the interaction session. It may all influence the next version of the LLM, so it could be considered learning on a longer timescale, depending on definitions and on how often the weights really get updated. (Does it only happen, e.g., when updating GPT-3 to GPT-4? In that case things other than the weights change too (the architecture of the network, other things?), which typically is not what people mean by learning in this context.) But as I said, I'm not an expert on this.

I’m also not an expert, and could be getting it all wrong. My initial comment was just expressing puzzlement about what model of consciousness requires “learning” as a precondition, and also why that would exclude machine _learning_ models from consideration for the sorts of things that have qualia.

Hi Erik, I think you are making three distinct claims and trying to package them into one.

1) The article about consciousness and sentience is bad - I have no objection here, although I have a mild critique of your own take on the subject. Yes, we would expect people to publish more polished articles in TIME, but I don't particularly care if an idea came from a WhatsApp group. If the idea is good on its own merits, then argue the idea.

2) The article goes to show that people can reach high positions of power without having the full scope of skills - at an object level I agree with this claim, although the poor communication of the article leads me to consider the reverse of your thesis. Perhaps these people are really good at their technical work, which is why they got promoted to such high positions, and now they are being made to do a job they are not necessarily qualified for in trying to communicate with the general public. Again, I'm happy to retract this because I am not in academia, but you could easily make this case.

3) Points 1) and 2) are indicators that the incentive structures of academia are broadly broken, and that charisma matters more than actual substance. Here I don't think you've necessarily drawn a clear line to this conclusion. Rather, my personal bias is that the incentive structures are actually a little bit less broken than in previous decades, and we are just seeing the gap between expected and actual skill sets more clearly due to the rise of new communication channels.

Re: 1) I think Erik clarifies his objection to this above, by agreeing with Sean Sakamoto. It's weird and unprofessional to try to disclaim the quality of your reasoning ahead of time in a published magazine article ("Hey dudes, I'm actually super smart, but this is just something that popped out of my head while smoking a joint, don't take it too seriously"). Even if it's true that it came out of a WhatsApp chat, why would you attach that qualifier unless you were trying to distance yourself from the quality of the reasoning, which, as you point out, should stand or fall on its merits, not its origin?

Re: 2) Yes, I agree that this is likely part of the story. Frankly, a lot of very smart scientists seem to think they're qualified to weigh in on very hard philosophical problems despite having no exposure to the last 500 years of thought on these questions. This seems like an article written in that tradition. TIME magazine asked some comp sci nerds to talk about the hard problem of consciousness, the same way they might ask U2 front man Bono to write an article about development economics. Don't expect the output to be worth reading.

[EDIT: Just re-read and saw the bit about one of these guys being a philosophy prof. Wow, that really is embarrassing...]

Re: 3) You're probably right that this is the internet showing us how many academic careerists are pretty dim outside of their very, very narrow specialty but, thanks to Dunning-Kruger and narcissism, still think they're brilliant on every subject.

I had a similar horrified reaction reading The Revolt Against Humanity (2022), which I read as a favor to an academic; I don't regularly read books from academic presses. The journalism in it is surprisingly decent, but the level of analytical rigor is just shockingly low. Like, the book's entire thesis falls apart if you remember that Christianity exists. How do you just forget an entire Christianity? The author doesn't seem to think he needs to support most of the claims he makes, let alone anticipate and acknowledge objections. What's going wrong in his, and his editors' and publishers', social environment?

Can you elaborate on how Christianity disproves the author's thesis?

Sure! Central to Kirsch's thesis is that radical transhumanism is a new, 21st-century phenomenon. But most strains of Christian thought are transhumanist in the sense that he means--the credos assert that we'll be radically transformed after we die, and that that's a good thing. Often it's something we want the whole world to experience as soon as we can, so we should work to hasten the eschaton. 1 Corinthians 15:35-58 is super transhumanist, for example. The other Abrahamic religions have different flavors of this--my shul ended most services with a song based on Judy Chicago's "And then, and then" poem (https://ritualwell.org/ritual/merger-poem/).

I'm sure there's a response Kirsch could make to that objection, but the book doesn't give one. It's weirdly low on references to prior art in general--no Tennyson "red in tooth and claw," no 20th century science fiction (!), no discussion of the 20th-century impact of the threat of nuclear war on public consciousness. Judging from the name of your substack I'm guessing the Christianity bit is what caught your interest, though, so I'll stop there. :)

Come to think of it, The Last Battle in the Narnia series is a decent example too.

Yeah, a number of Christian authors have tried to imagine what that transformation could look like. C.S. Lewis's version in The Last Battle is pretty compelling, but my personal favorite is actually Tolkien's short story, "Leaf By Niggle".

This is a natural consequence of the corporatisation of (and removal of public funding from) higher education institutions, which themselves are the only remaining bastions of academia.

When an academic organisation is forced to be profitable in order to maintain its existence, the most valuable members of the organisation are not those who are genuinely pushing the boundaries of science, but those whose ideas can best attract private sector funding – and private sector funding is almost always conditional on seeing a return on investment.

This creates selective pressures on academia comparable to those acting on start-ups – and, as in that field, an idea is much less important than how well you are able to sell it.

In any hierarchical organisation, rising to the top selects very strongly for good social skills and organisational skills, and adding a profit motive amplifies this. The old adage “it’s not what you know, it’s who you know” is evergreen.

Ironically, the skill set required to rise through organisational hierarchies selects strongly against neurodivergent people, so if Einstein (who is believed to have had ADHD) were working in academia today, he might never have made it onto the first rung of the corporatised academic ladder.

Nice piece! All these terrible features of academia are definitely real, but I wonder if they are inevitable. I recently tried to defend academia from its many critics (https://fictionalaether.substack.com/p/is-academia-broken?), and one thought that was always in the background was that all these horrible phenomena are a natural consequence of the huge uncertainties involved. It's hard to define important science problems, hard to identify the best ways to solve them, and hard to pick the best people to do it. In that chaos there will always be room to game the system, the wrong people being crowned superstars, and so on. It will be worst in the most fashionable and over-hyped fields, and right now there is no topic more insanely fashionable and overhyped than LLMs. We should fight against this as much as possible, of course, so it's great that you gave them some shit!

Typo 'scrapping'

author

great catch, ty

The TIME op-ed is basically a shorter reformulation of one aspect of Hubert Dreyfus' points made decades ago. If you don't care for phenomenological arguments then you're probably going to have a bad time with it, I'll admit, but the point, I guess, is that the premise has precedent.

As far as mentions about Whatsapp, I think maybe that's just a reflection of how sustained conversations take place these days across diffuse groups of people, especially in this sector. Discord, Whatsapp, gChat, Slack, whatever. These are the tools people use now and it isn't really anything special to mention them.

author

I've read What Computers Still Can't Do and what they said seems, at least to me, pretty far away from the kind of Heidegger-based or Merleau-Ponty-based objections that Dreyfus famously made.

WCSCD is 90% a teardown of the assumptions that AI researchers were making pre-ANN architectures. The last 10% at the very end includes an argument for embodied experience as a necessary component for human consciousness, in addition to deeply contextual being and the fundamental role of intuition in human intelligence. He expands on those 3 things in "Mind Over Machine" later.

Nice article, but I don't think the Time piece (as poorly written as it is) provides much evidence for your assertion that academia is mostly about connections and schmoozing. It's always been a thing for subject matter experts to opine confidently about things outside of their expertise and look foolish in the process.

But I think a more interesting note is that academia selects for people who can demonstrate narrow mastery of some topic area at the level required to publish continuously in journals, but academia doesn't at all select for the type of epistemic habits that allow someone to navigate new and uncertain areas without making egregious errors.

The referenced article is badly written, but so are most academic papers! Yeesh.

The sloppiness of their arguments in this venue raises one possibility: perhaps peer review (or the anticipation of peer review) is pretty valuable.
