56 Comments
Graham L:

I thought the atrophy of human cognition was already well under way in recent decades, before AI even got here!

Manny:

Overconsumption through short-form video addiction was the first handicap.

Now AI is a sort of calculator for the mind, to handicap you in a brand new way :)

Rebecca:

Our prefrontal cortices are going to shrink. Fast. So let's hope the back of the brain can whip up something fun. I think about this a lot.

Brian Sherwin:

I appreciate your comments about AI being less of a concern with grad students. I suggest that AI is a "problem" in K-12 settings because of the ineffective, authoritarian educational system to which we subject our young people. Our school system involves putting dictator teachers in charge of 25-30+ students who must ask to go pee or get a drink and can be punished for "wandering" in the halls. We run schools like prisons, and students are never given meaningful decision-making power in their education. Until they reach grad school. "Cheating" using AI is probably one of the most creative acts that many 7th-grade students engage in, and I don't blame them -- school is so damn boring!

I say this as a former teacher, who hated being an authoritarian, enforcing stupid rules ("Hats aren't allowed in the classroom," "No talking in the hallway!" etc.). I honestly hope AI breaks our current educational system and forces us to build truly democratic schools in which kids have meaningful control of what and how they learn.

Stregoni:

AI usage is more likely to break students' capacities for learning and critical thinking (or prevent them from ever forming) than to break the current educational system. In fact, such patterns of AI usage in schools are more likely to entrench the system further than to bring us anything more democratic. Leaders made similar judgment errors about smartphones, and I desperately wish some "authoritarian" had forced me to put mine away when I was in high school. Or maybe some adults could have taken just a little bit of time to question why someone under 18 should have that tech in the first place. That would have been really nice, too.

Kiko:

Looks like "AI literacy" comes at the price of actual literacy. Hopefully it's not zero-sum.

Jeremy Parra:

Modeling this game seems like a prudent investigation. It seems to be the case that we are all too readily slaying the goose to get at the golden eggs... we are the geese.

Isha Yiras Hashem:

I keep on suggesting spirituality, but no one wants to hear it.

John B:

Re-enchantment is a growing topic. I think people are actually more primed than ever for a resurgence of spirituality. Signed, a former militant atheist turned agnostic.

Isha Yiras Hashem:

You might really like my post from yesterday, "AI Dangers: Theological and Other Thoughts."

Ted Grasela:

Ross Douthat would agree heartily!

Gilad Seckler:

Are there any generalizable claims that can be made about when it's possible to reach higher levels of Bloom's Taxonomy without having to work through the lower levels?

When I'm more optimistic, I think about technology like Google Maps. GPS has *definitely* made my navigation skills atrophy, but in a way it doesn't matter that lower-level skills are abstracted away by a computer, because the tech is so ubiquitous and reliable. Maybe AI can be like that, and humans are allowed to exercise high-level judgment about "destinations" without having to learn every single "street."

When I'm pessimistic, I think about specialized crafts and art forms in which taste and intuition really is a product of lots and lots of reps working with the "raw material." In this scenario, our judgment gets worse because the AI robs us of the opportunity to build knowledge from the bottom up.

Or maybe it's just highly context dependent?

Alaina Drake:

I've found myself repeating "Butlerian Jihad" over and over in my head since all the terrible Super Bowl commercials. IMO, the brain drain is actually the most likely existential risk from AI.

Aron Blue:

Thank you for this. I'm incredibly worried about the use of AI for language students. Having a tool like AI lying around is a strong net negative for language acquisition. The fact that some university systems are surrendering to it makes my ears smoke.

Arturo Macias:

The university has to. High school should be Luddite.

Helikitty:

Luddism is always the answer.

Connor Harmelink:

It's ironic that the areas an AI does well in are exactly the areas of most concern for academics.

I've personally found a lot of value in using AI as a nonjudgmental language-exchange partner, practicing my speaking and listening skills.

A speaking test would be an easy answer to this. Unless you've got Claude in an earbud or something, you're on your own.

Aron Blue:

I'm talking about how easy it is for students to use it to turn in writing projects. I've tried to use AI as a chat partner in my own language acquisition and I've found it quite dull. YMMV, I guess.

dan mantena:

Loved the Butlerian Jihad.

"For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—independent of if that skepticism is warranted or not!—makes for healthier AI usage."

I agree with this call to action; however, these latest reasoning models make it very hard to find failure modes that I can be proud of detecting and fixing. These are the dumbest the models will ever be, so failure modes we spot now will be resolved in the next version of the models, which makes this a very Sisyphean task.

Isaac King:

I think concerns like these always need to be compared to concerns over the digital calculator. Does letting people use calculators make them worse at arithmetic? Certainly. But is this bad? Not necessarily. After all, if people have calculators, there's no reason why they would *need* to be good at arithmetic. The only reason we should be concerned is if arithmetic knowledge is still important for some other reason, like maybe it helps with learning general mathematical pattern recognition that can then be applied to higher math.

In general, if there's a new technology that can take over for a human skill, I don't think "people become worse at that skill" is a relevant concern. The whole point of introducing the tech is that they won't need the skill anymore. We only need be concerned about technologies that make people worse at a skill the technology *cannot* do itself.

dan mantena:

Surely you are not implying that critical thinking is not needed since AI can take over for humans...

https://en.wikipedia.org/wiki/The_Machine_Stops

SeeC:

I agree. People have always conjured this kind of doom scenario because we chose to delegate some human capabilities to machines. That makes no sense; if we didn't do that, we would basically still be barely one step above monkeys.

And pretending AI can actually do any kind of real reasoning is hilarious and shows a lack of understanding of the technology in question.

Liz Haswell:

Makes the Cal State decision to partner with OpenAI even more upsetting! https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx

Jack:

Technology has always been the great amplifier of one's goals.

So the question stays the same - what do you want to amplify with those even more powerful tools?

Are you using AI to think broader and answer harder questions?

Or do you use it to avoid thinking at all?

Dr. Jan Schlösser:

I'm reminded of that Thomas Sowell quote: "There are no solutions, only trade-offs." AI can be very useful when used right, as you point out. But we need to be mindful of what we're missing out on when we use it. Maybe a good strategy would be to use AI only for tasks that are peripheral to the work? For example, if you're a writer, don't use AI to write (because that core skill will atrophy), but only for brainstorming, proofreading, and so on.

willem van der berg:

I keep thinking: what if we lose power... at some point in the distant future?

Swen Werner:

I subscribed recently because, after looking at a few of the recent blog titles, I was intrigued by how the author draws on cultural anchor points—The Wizard of Oz, polymaths, etc.—to explore the intersection of science and the humanities, a subject I also care about.

Somebody even promoted this article as “one of the most important articles about AI ever written.” It’s hard to argue what is or isn’t important, but after reading this post, I can confidently say one thing:

This is not a serious intellectual exploration of AI’s impact. It’s a collection of loosely related viewpoints, thrown together to create the illusion of depth where there is none. The author conflates technology skepticism with wisdom—without presenting any substantive argument to support that claim.

There is nothing wrong with skepticism, but it is not inherently superior to enthusiasm. Saying skepticism is better than optimism is as meaningless as claiming blue is a better color than red.

Throughout the article, irrelevant anecdotes replace rigorous argumentation:

The Butlerian Jihad reference from Dune is not evidence—it’s fiction.

Socrates' concerns about writing damaging memory are historically interesting but irrelevant to the discussion of AI. After more than two millennia of empirical data, I think we can safely conclude that humanity adapted just fine.

The article meanders through unrelated examples and speculation, leaving its core message unclear.

This lack of intellectual discipline is not unique to this post—it is a widespread problem in AI discourse. Not every article needs to be perfect, and nobody (including myself) is free from mistakes. But if we care about critical thinking, we need to call out shallow arguments when we see them. Given the author’s stated ambitions (from what I can glean from this text), that is particularly disappointing.
