63 Comments

Becoming Human:

There are precious few brilliant people, and they are rarely brilliant outside their field (there are a few polymath examples like Newton, but he would likely be a hyper specialist in today’s academia).

Thinking about consciousness and embodiment is the life's work of an academic. Doing it well and insightfully is the life's work of a brilliant, very well-read mind.

Doing this work as a set theorist, no matter how brilliant, is nonsense.

Many if not most of the folks who are the talking heads of the AI movement have zero background in non-mathematical fields. Modern science and engineering education provides neither time nor reward for studying philosophy or sociology deeply.

So you get people who are masters at math and juvenile thinkers.

The second problem is that Stanford is first among equals in the trend toward “effective” academia, or academia in the service of capital. The question of consciousness will be cracked by a Hofstadter or a Chomsky or a Gell-Mann (the type, not the person): weird, unbelievably smart, and potentially organizationally incompetent.

What they produce will be hard to monetize, but essential. And it will not be done by MIT, Stanford, or Harvard because these institutions are captured by modern capitalism and management theory.

Rachel Gatlin:

Interesting points. Perhaps math demands reduction-based thinking, at least the kind of math used for accounting. For example, in finance I was encouraged to synthesize corporate financial report data down to 2 to 4 variables. That was the most information that could be digested by uninterested business partners, perhaps even by interested ones. What never got into those reports was the psychology of the place I worked, which was that no one cared about finance at all! On the other hand, there was always encouragement and motivation to spin (lie about) the data/facts in a way that would make the people I worked for look better and keep us open despite no hope of profitability or efficiency.

My point is, you could look at my financial reports showing three reasons why we didn't meet this week's financial goals, and it all makes sense. You are going to have to come in and really smell around if you want the truth. Maybe my point is: if you ask a math person who is motivated to continue developing AI a philosophical/psychological question about the meaning of creating a new lifeform for profit, you get a self-motivated answer: everything is fine here, no need to worry about it. Just like finance reports that all point to an answer we can explain away every week.

Becoming Human:

I think this is a valuable distinction. So many effective people do what you are talking about, which is creating models that dramatically simplify a system in order to make it comprehensible and manipulable.

A great model captures signal and ignores noise, providing a tool for guiding decisions. Unfortunately, the types of systems that yield to such simplification don't include human metabolism, the economy, consciousness, or ecology, and when Excel folks dabble in these, they ignore that the noise is the signal, meaning everything is signal, and the only accurate model is the system itself.

I ran a high-level statistical modeling company for marketing years ago, and we referred to the use of math to justify an outcome as "Statistical Convenience" (as opposed to statistical confidence).

Manjari Narayan:

> we referred to the use of math to justify an outcome as "Statistical Convenience" (as opposed to statistical confidence).

I like Stark's quantifauxcation myself.

Kaiser Basileus:

I agree with your premise, but it's coloured in a regressive light by "Thinking about consciousness and embodiment is the life's work of an academic."

Not so. An academic has just as much distraction from the unrelated requirements of being an academic as someone in another (non-academic) field presumably does from their occupation. In other words, while an academic gets better access to resources, that does not imply that others can't do just as well by putting just as much effort in, just as it doesn't imply that the resources academics have access to are actually helpful towards that end, especially in areas that are not technically specific, like consciousness. We have thousands of books on it from academics, with no particular conclusions or progress.

Becoming Human:

I very much appreciate your response and totally agree. Replace “academic” with “serious thinker.” I have always conflated the two, but I am beginning to understand.

Laura Creighton:

I think that, more than 'what does it take to succeed?', it may be instructive to look at 'who fails?' In the competitive world of certain academic fields, 'who fails?' seems to be 'anybody the already established can kick to the kerb'. In Henrik Karlsson's essay _Looking for Alice_ (https://www.henrikkarlsson.xyz/p/looking-for-alice), which is about how he and his wife got together, he has this wonderful line about what he had figured out he wanted in a relationship: "And now I could sort of gesture at what I liked—kind people who are intellectually voracious and think it is cute that I obsess so hard about ideas that I fail all status games."

I think that is what a lot of us as children dreamed we could get by becoming academics -- a community of smart people who are like this. (My father says that when he went to university, fewer than 10% of the population went, and it really was like that.) But by the time I showed up, the reality was a terrible disappointment, and it is a lot worse now. The whole notion that survival and success depend on your ability to use social power you don't have, don't want, and find intellectually and morally compromising to use drives a whole lot of people out of academia before they ever even get in.

Julian D'Costa:

Wow, I was surprised by how bad that editorial (https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/) was. It makes so many bombastic claims with nothing to back them up.

I was most annoyed by

>One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red. Sentience is a crucial step on the road to general intelligence.

I can hear Peter Watts screaming faintly in the distance.

They were clearly once good researchers, so what I suspect is happening here is simply the ill effects of power.

There's a great Emmett Shear tweet thread on power (https://x.com/eshear/status/1660356496604139520). The relevant bit:

> Almost no one has the control and integrity which would allow them to prevent the power from coloring what they say to the Emperor. The result over time is delusion and paranoia, in proportion to quantity of power. Hence, “power corrupts” is actually shorthand for “power corrupts your world model”

These people have so much power that none of their associates have been willing to bluntly correct their idle philosophical arguments in a long time.

Erik Hoel:

"Power corrupts your world model" is a really good way of putting it. I'd add too that humans change pretty quickly in terms of their skill sets, with old ones replacing new. If you become someone who mostly just manages day-to-day, it's very easy to lose really basic technical skills. In some sense, being a PI (the leader of a scientific lab) is all about balancing this, and most PI failures are some failure of balancing where either they stay too technical and micro-managerial or they become too aloof and disconnected.

[Comment deleted]

Becoming Human:

For the record, my wife does that to me all the time. “But you just ate?” “Yeah, and I'm hungry. The two are not mutually exclusive.”

Dawson Eliasen:

I think this really demonstrates just how remarkably bad otherwise smart and rational people can be when it comes to reasoning about consciousness.

Erik Hoel:

There's some old joke about how the ultimate metamorphosis of becoming a tenured professor is just to wildly speculate about consciousness.

John B:

Dan Dennett sure made a good run of it.

[Comment deleted]

John B:

I consider Dennett to be the flip side of someone like Philip Goff. Both basically made an extreme speculative case for or against consciousness. "What if everything is conscious?" and "What if nothing is conscious?" both just stake out the edge cases of the debate.

Sean Sakamoto:

When I read that the idea came from a WhatsApp chat, it felt like hedging and preemptive defensiveness. Like, “Hey, we're just noodling here. Don't expect much.”

Erik Hoel:

I think you're right that's one reason it bothers me. It's also sort of a weird move, because it's saying "My WhatsApp messages contain people like the most vocal proponent of LLM consciousness" but then... doesn't identify that person? It seems unlikely to me personally that the best arguments for or against particular positions (the kinds of things worth writing about in TIME) are occurring in private DMs, when there are plenty of much more complex and interesting arguments (both for and against) that can be found. And I think broadly, when writing in an outlet with a big audience, academics have a "duty to thoroughness" that amounts to no more than the low bar of presenting not-terrible arguments (even if wrong) against somewhat-representative statements (even if not perfectly representative).

Sean Sakamoto:

Great point. It reminds me of Thomas Friedman, who for years would plant an Egyptian cab driver into the intro of his column to buttress his position.

Bistromathtician:

Hey, Galileo invented Simplicio to make all the arguments for geocentrism; why can't Friedman make someone up to share his equally spectacular insights?

Squire:

I raise you Thrasymachus.

Lamb Yakhni:

It’s the most disgusting gambit in any discussion, and you see it online all the time. If what I put out there is garbage, don’t take it so seriously! But if you agree with it and think it’s right, well, look how little effort it took me, aren’t I brilliant? I spit on it.

Nicholas Weininger:

Unfortunately, large private sector companies may not be all that much better. I quit middle management at a Big Tech company a few years back in part because it was clear that in order to advance, I would have to spend more time schmoozing and less time doing the mentoring and project oversight work that I actually liked and felt I added value by doing. In corporate speak, I needed to do more "managing up and across".

Legibility in large organizations is a hard problem! What do you think are the best historical models of deliberately and effectively avoiding the creation of these kinds of schmoozing-centered norms in response to that problem?

Rachel Gatlin:

No idea. People promote people they know, trust, and like.

James F. Richardson:

This must be a new issue in the hard sciences? The issue of networking trumping talent was already going on in the 1990s in cultural anthropology... I wrote about my own failure to schmooze in my latest book; it was the real source of my failure in grad school. Of course, I have Asperger's, so schmoozing was going to require professional training, like an MMA fighter's... I had the opposite effect on most people...

Rachel Gatlin:

Oh my gosh. Totally get it. I love chatting, but when I am working I want to work. When everyone seems to have such limited time, how am I supposed to act like I care what ice cream flavor my team likes? We have problems to solve! I become so impatient. :(

giacomo catanzaro:

One of the annoying parts of debates surrounding the consciousness of LLMs is that they spend hours and hours talking about the neural networks and all that jazz, but the neural network is a reasoning engine built by way of a language model.

For it to have a kind of genuine subjective experience, it would need to be able to learn, and in no way are LLMs learning as they go; that's partially a computational limitation at the moment. But if LLMs were able to adaptively re-train/re-tune their weights as a conversation develops, and to have a kind of memory of their experiences in the world, they would be much closer to a real kind of subjectivity and genuine consciousness. Until then, paradigmatically we are not that different from the early 2000s, talking to SmarterChild on AOL. The engine behind the chatbot is leagues ahead, but there has not been this kind of advancement yet.

How much better and smarter would an LLM be if it could use its memory from past conversations with different people to answer questions in new ones? At the very least, we'd approach a much more social, human-like intellect that is learning as it goes.
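
A minimal sketch of what that kind of cross-conversation memory could look like, assuming a hypothetical store/retrieve setup (the names `memory`, `retrieve`, and `answer` and the stand-in model call are invented for illustration; real systems would use embeddings and a vector store, and the model's weights would still never change):

```python
# Hypothetical illustration: a memory store that persists across
# conversations and gets prepended to new prompts.

memory: list[str] = []  # survives across separate conversations

def retrieve(query: str, k: int = 2) -> list[str]:
    # Naive relevance score: number of words shared with the query.
    def score(item: str) -> int:
        return len(set(query.lower().split()) & set(item.lower().split()))
    return sorted(memory, key=score, reverse=True)[:k]

def answer(question: str) -> str:
    prompt = "\n".join(retrieve(question) + [question])  # memory + new question
    reply = f"[model reply given prompt: {prompt}]"      # stand-in for the LLM call
    memory.append(f"Q: {question} -> A: {reply}")        # remember this exchange
    return reply

answer("Which cities did we discuss for the conference?")  # one conversation
print(answer("Remind me about the conference cities?"))    # a later one, recalls
```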

[Comment deleted]

Linus:

As far as I understand, LLMs do not adjust their weights during interactions; the weights are fixed once training ends. All that happens in an interaction is that everything you said before in that conversation, along with the LLM's replies, gets fed back in as input (it's called the context). There is no continual learning. But I'm not an expert, just repeating what I've heard.
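
Roughly, the loop looks like this. A toy sketch, not any real vendor's API; the `generate` function is a stand-in for the fixed-weight model:

```python
# The "model" never changes between turns; the only thing that grows
# is the context string that gets fed back in on every call.

def generate(context: str) -> str:
    # Stand-in for a frozen LLM: same weights (here, same code) every call.
    return f"[reply conditioned on {len(context)} chars of context]"

history: list[str] = []  # the conversation so far = the context

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = generate("\n".join(history))  # the entire history is re-fed as input
    history.append(f"Assistant: {reply}")
    return reply

chat("Hello")
print(chat("What did I just say?"))  # answerable from context, not from learning
```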

[Comment deleted]

Linus:

That may be the case, but I'd be surprised if that update happens during the interaction session. It may all influence the next version of the LLM, so it could be considered learning on a longer timescale, depending on definitions and on how often the weights really get updated. (Does it only happen, e.g., when updating GPT-3 to GPT-4? In that case things other than the weights change too (the architecture of the network, other things?), which typically is not what people mean by learning in this context.) But as I said, I'm not an expert on this.

Ron Ghosh:

Hi Erik, I think you are making three distinct claims and trying to package them as one.

1) The article about consciousness and sentience is bad. I have no objection here, although I have a mild critique of your take: yes, we would expect people to publish more polished articles in TIME, but I don't particularly care whether an idea came from a WhatsApp group. If the idea is good on its own merits, then argue the idea.

2) The article goes to show that people can reach high positions of power without the full scope of skills. At an object level I agree with this claim, although the poor communication of the article leads me to consider the reverse of your thesis: perhaps these people are really good at their technical work, which is why they got promoted to such high positions, and now they are being made to do a job they are not necessarily qualified for in trying to communicate with the general public. I'm happy to retract this because I am not in academia, but you could easily make this case.

3) Points 1) and 2) are indicators that the incentive structures of academia are broadly broken, and that charisma matters more than actual substance. Here I don't think you've drawn a clear line to this conclusion. Rather, my personal bias is that the incentive structures are actually a little less broken than in previous decades; we are just seeing the gap between expected and actual skill sets more clearly due to the rise of new communication channels.

Aaron Zinger:

I had a similar horrified reaction reading The Revolt Against Humanity (2022), which I read as a favor to an academic; I don't ordinarily read books from academic presses. The journalism in it is surprisingly decent, but the level of analytical rigor is shockingly low. Like, the book's entire thesis falls apart if you remember that Christianity exists. How do you just forget an entire Christianity? The author doesn't seem to think he needs to support most of the claims he makes, let alone anticipate and acknowledge objections. What's going wrong in his, and his editors' and publishers', social environment?

Timothy Johnson:

Can you elaborate on how Christianity disproves the author's thesis?

Aaron Zinger:

Sure! Central to Kirsch's thesis is that radical transhumanism is a new, 21st-century phenomenon. But most strains of Christian thought are transhumanist in the sense that he means--the credos assert that we'll be radically transformed after we die, and that that's a good thing. Often it's something we want the whole world to experience as soon as we can, so we should work to hasten the eschaton. 1 Corinthians 15:35-58 is super transhumanist, for example. The other Abrahamic religions have different flavors of this--my shul ended most services with a song based on Judy Chicago's "And then, and then" poem (https://ritualwell.org/ritual/merger-poem/).

I'm sure there's a response Kirsch could make to that objection, but the book doesn't give one. It's weirdly low on references to prior art in general--no Tennyson "red in tooth and claw," no 20th century science fiction (!), no discussion of the 20th-century impact of the threat of nuclear war on public consciousness. Judging from the name of your substack I'm guessing the Christianity bit is what caught your interest, though, so I'll stop there. :)

Aaron Zinger:

Come to think of it, The Last Battle in the Narnia series is a decent example too.

Timothy Johnson:

Yeah, a number of Christian authors have tried to imagine what that transformation could look like. C.S. Lewis's version in The Last Battle is pretty compelling, but my personal favorite is actually Tolkien's short story "Leaf by Niggle".

Nicholas Moore:

This is a natural consequence of the corporatisation of (and removal of public funding from) higher education institutions, which themselves are the only remaining bastions of academia.

When an academic organisation is forced to be profitable in order to maintain its existence, the most valuable members of the organisation are not those who are genuinely pushing the boundaries of science, but those whose ideas can best attract private sector funding – and private sector funding is almost always conditional on seeing a return on investment.

This creates selective pressures on academia comparable to those acting on start-ups – and, as in that field, an idea is much less important than how well you are able to sell it.

In any hierarchical organisation, rising to the top selects very strongly for good social skills and organisational skills, and adding a profit motive amplifies this. The old adage “it’s not what you know, it’s who you know” is evergreen.

Ironically, the skill set required to rise through organisational hierarchies selects strongly against neurodivergent people, so if Einstein (who is believed to have had ADHD) were working in academia today, he might never have made it past the bottom rung of the corporatised academic ladder.

Mark Hannam:

Nice piece! All these terrible features of academia are definitely real, but I wonder if they are inevitable. I recently tried to defend academia from its many critics (https://fictionalaether.substack.com/p/is-academia-broken?), and one thought that was always in the background was that all these horrible phenomena are a natural consequence of the huge uncertainties involved. It's hard to define important science problems, hard to identify the best ways to solve them, and hard to pick the best people to do it. In that chaos there will always be room to game the system, the wrong people will be crowned superstars, and so on. It will be worst in the most fashionable and over-hyped fields, and right now no topic is more insanely fashionable and overhyped than LLMs. We should fight against this as much as possible, of course, so it's great that you gave them some shit!

William of Zeno:

I'm sympathetic to complaints about how important schmoozing and politicking are to advancing in not just academia but any large organization. But you do need to be good at interpersonal stuff to be good at management and leadership. Someone who's a brilliant subject-matter expert but terrible at communicating, building relationships, etc. is going to be a terrible pick for management and leadership. So, yeah, I think organizations do better if they tamp down on picking leaders based on schmoozing and politics, but being a great individual contributor doesn't mean you should be promoted instead. (I don't know if this is still the case, but Microsoft created two tracks of advancement to address this distinction: one for management and another for technical experts.)

MAC:

Typo: 'scrapping'

Erik Hoel:

great catch, ty

John B:

The TIME op-ed is basically a shorter reformulation of one aspect of Hubert Dreyfus's points made decades ago. If you don't care for phenomenological arguments then you're probably going to have a bad time with it, I'll admit, but the point, I guess, is that the premise has precedent.

As for the mentions of WhatsApp, I think that's just a reflection of how sustained conversations take place these days, across diffuse groups of people, especially in this sector. Discord, WhatsApp, gChat, Slack, whatever. These are the tools people use now, and it isn't really anything special to mention them.

Erik Hoel:

I've read What Computers Still Can't Do and what they said seems, at least to me, pretty far away from the kind of Heidegger-based or Merleau-Ponty-based objections that Dreyfus famously made.

John B:

WCSCD is 90% a teardown of the assumptions that AI researchers were making pre-ANN architectures. The last 10%, at the very end, includes an argument for embodied experience as a necessary component of human consciousness, alongside deeply contextual being and the fundamental role of intuition in human intelligence. He expands on those three things in "Mind Over Machine" later.

Mark:

The referenced article is badly written, but so are most academic papers! Yeesh.

The sloppiness of their arguments in this venue raises one possibility: perhaps peer review (or the anticipation of peer review) is pretty valuable.
