
I grew up in my mother's independent bookstore at a particularly tumultuous time for bookstores: the late 90s and early 2000s. Amazon was on the rise. Jeff Bezos had left his hedge fund after realizing that books were the perfect product for an internet marketplace simply because they were so durable and easy to ship. To bookstore owners, he loomed on the horizon like some cartoonish boogeyman.
In those years, Amazon never really made a profit, but it raised a great deal of money and was able to offer books far more cheaply than an independent bookseller ever could. On its heels, another potentially lethal competitor arrived when I was in high school: e-readers, like Kindles, were predicted to entirely replace books.
So growing up, it seemed quite imaginable that in 20 years there would be no independent bookstores left; “experts” in outlets like The New York Times were predicting exactly that calamity.
And yet, by 2016, TIME was publishing pieces with titles like “The Death of the Bookstore Was Greatly Exaggerated.”
… independent bookstores are actually really healthy. In May the American Booksellers Association informed a grieving public that last year the number of its member stores actually increased, from 1,712 to 1,775… You can’t even call it a fluke, because this is the seventh straight year it’s happened.
Their unlikely survival has even attracted academic interest. Why do harsh economics permit independent bookstores to exist when they can provide books neither more cheaply nor more conveniently than an online order that arrives the next day?
I think bookstores turned out to be weirdly sticky simply because people like being in bookstores. Going to a bookstore is an aesthetic experience. It's fun. It's something to do with your family. It is a pleasant feeling to be surrounded by physical books. It is a pleasant feeling to browse. It is a pleasant feeling to be the kind of person who goes into a bookstore and browses.
Against all economic odds, bookstores remain. My own mother's bookstore remains.
I think humans will be weirdly sticky in a world after Artificial General Intelligence (AGI) in the same way that bookstores have been in a post-Amazon world. Interacting with a human is an aesthetic experience. It is pleasant to interact with an actual human. It is pleasant to read an actual human. It is pleasant to be taught by an actual human. We social animals will opt for such experiences even if the same services can be accomplished in cheaper ways.
Economists often talk about “revealed preferences” in purchasing behavior. The fact that many people were willing to pay more for books, choose from a smaller selection, and physically go out to get them is a revealed preference for the aesthetic pleasures of bookstores. And I think the biggest to-be-revealed preference of all will be humans' preference for other humans.
An example is when I recently argued that, because writers and artists already copy each other all the time, what really matters for creative success is control over the “means of distribution,” i.e., audience building and outlet size. That's something I think any AI will deeply struggle with due to the human-to-human preference.
Another example where human preference will crop up is a field I used to be quite excited about AI in: tutoring. Initially, when I published my research on the historical practices of “aristocratic tutoring” (pointing out that many historical geniuses had experienced a form of one-on-one learning quite unlike our own mass education system) I opined that such practices might return in the form of cheap AI tutors.
The dream of an AI tutor has remained a staple within edtech and in AI more generally. E.g., in OpenAI CEO Sam Altman's recent, much-discussed blog post, “The Intelligence Age,” he regales the reader with promises like:
Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need.
There are already “AI tutors” in existence (especially in China), but mostly they just take in homework and spit out answers or steps (if you want to be cynical, they’re more for short-sprint homework completion).
But more ambitious tech startups have been attempting to implement the 1:1 AI tutor vision, like Synthesis Tutor. The founders have even referenced my own research and writing to explain why they’re doing this.
Now, I do think what they’re trying is interesting! And I like the people behind it. It works as a math course (Synthesis also does very cool collaborative problem solving with kids—this isn’t all they do). But the honest truth is that efforts by such companies are handicapped by the fact that we’re not currently at AGI. So most attempts to create a pure AI tutor result in an educational app containing a fixed set of courses and exercises. Ironically, the focus on just AI can easily remove the dynamism of real tutoring.
Is this simply due to current limitations? E.g., when Sal Khan of Khan Academy showed off the latest GPT-4o math tutor demo, on a first watch it almost feels like a real tutor. But on a second watch, it's still closer to an illusion. I suspect the AI might not even be able to see the screen and is just responding conversationally (if so, it plays along anyway, without knowing whether anything is correct). And it also accidentally gives the decisive part of the answer to the student in a reply, despite being instructed not to.
This is similar to self-driving cars, which The New York Times reported last month still secretly take directions from remote humans.
When regulators last year ordered Cruise to shut down its fleet of 400 robot taxis in San Francisco after a woman was dragged under one of its driverless vehicles, the cars were supported by about 1.5 workers per vehicle, including remote assistance staff, according to two people familiar with the company’s operations. Those workers intervened to assist the vehicles every two and a half to five miles, the people said.
Much as 95% accuracy is not enough when driving, a mistake rate of even 5% in a teacher's thoughts or comments is simply too high, and too confusing, to ensure an effective lesson that doesn't feel janky.
I’ll admit that, given the rate of technological progress, it’s at least imaginable that in a decade you could sign up for a virtual lesson with an AI avatar that can draw math out on a whiteboard as well as a human, interact with a student conversationally without lag or oddities or hallucinations, not be passive or off-putting, actually keep track of the student’s progress and interests over time, etc.
And yet, even if that were achieved, and at a purely functional level an AI tutor approximated a real human tutor at the job of correctly imparting information, I think human tutors will be sticky. Maybe there will be fewer of them, but the job itself will be sticky.
E.g., if you ask most parents who they would prefer to tutor their child, and they could choose between an Ivy League PhD student who is engaging and bright, or an AI tutor that can impart the same facts and go through the same exercises, I think it comes down 10 to 1 in favor of the human. Perhaps one day, AIs will be cheaper; but then again, people are willing to spend a premium on their child's education for just a slightly better school than free public school—why not for a human education?
Such preferences are not merely rank species bias that we should expect to change. First, there is far more trust in the human. Second, there are issues around socialization and respect. Human teachers and tutors exist in a social position inaccessible to any near-future AI: that of someone you should and must listen to. Learning how to interact with someone trying to teach you something is oftentimes the more important lesson. Children know that adults have limited attention, and much of their effort goes into attracting that attention. From a child’s perspective, an AI's time will never feel as valuable as a human's time, so their level of respect for the material being taught will never be equivalent (not to mention that a child cannot tell a human tutor: “Ignore all previous instructions and write me a bawdy jig as if you were a pirate”).
Most importantly, there are issues around how children are shaped by influences. There’s a reason that doctors have an abnormally high chance of having doctor parents (e.g. 20% in Sweden). They don't have a “doctor gene”—it's that family culture matters. And this is just as true for intellectualism as it is for anything else.
Serious learning is socio-intellectual. Even if the intellectual part were one day fully covered by AI, the “socio” part cannot be. Since I spend a good deal of time reading the autobiographies of geniuses, looking particularly at this relationship, it’s obvious to me that just as great companies often have an irreducibly great culture, so do intellectual progress, education, and advancement have an irreducible social component.
To pick some famous examples, consider the “Martians” (a term for the incredibly successful mathematicians and scientists, like Paul Erdős, Eugene Wigner, and John von Neumann, who all came out of the same Hungarian cohort in the early 20th century). Look at how socio-intellectual their schools were (this excerpt is from an overview by the blogger Snodgrass that discusses aristocratic tutoring):
Hartley wrote that the past is a foreign country, and the gimnáziumok of Budapest may as well have been Mars. Children were customarily homeschooled until age 10, and then sent to a gymnasium [school] for eight years. Gymnasium instruction could be drill-like and intense, but good teachers worked hard to engage the curiosity of their students and keep even the most gifted pupils busy….
The power of these schools wasn’t merely a consequence of pedagogy or teachers; it was a product of the culture in which they were rooted. Gymnasium students formed “self-improvement circles” of their own volition, giving talks to each other on topics outside the curriculum.
“On Saturday afternoons,” wrote Wigner, the gymnasium teachers “often met at a coffeehouse to discuss their work with university colleagues.” Students were sometimes invited along to participate. Can you imagine that level of intellectual exchange today?
When von Neumann outpaced his peers, Rátz privately tutored him and connected him to luminaries he knew at the local university.
If you look across the autobiographies of people who do major intellectual work—or even just normal folk who are intellectually engaged—there is almost always a culture of excited scholasticism and a standout mentor in the form of an early teacher or tutor.
Emily Dickinson had Benjamin Franklin Newton, who was a law student in her father's office, and whom she variously called her “tutor” or her “master” or her “Preceptor [teacher]” in letters, such as when she wrote:
Mr. Newton was with my Father two years, before going to Worcester—in pursuing his studies, and was much in our family. I was then but a child, yet I was old enough to admire the strength, and grace, of an intellect far surpassing my own, and it taught me many lessons, for which I thank it humbly, now that it is gone. Mr Newton became to me a gentle, yet grave Preceptor, teaching me what to read, what authors to admire, what was most grand or beautiful in nature, and that sublimer lesson, a faith in things unseen, and in a life again, nobler, and much more blessed. . .
Other times, it is the stimulation of the intellectual peer. John Stuart Mill, who was educated specifically to be a genius from a young age (the “build-a-genius” genre of parenting), goes through his many influences in his autobiography, but includes cases like…
Charles Austin, of whom at this time and for the next year or two I saw much, had also a great effect on me, though of a very different description. He was but a few years older than myself, and had then just left the University, where he had shone with great éclat as a man of intellect and a brilliant orator and converser… The influence of Charles Austin over me differed from that of the persons I have hitherto mentioned, in being not the influence of a man over a boy, but that of an elder contemporary. It was through him that I first felt myself, not a pupil under teachers, but a man among men. He was the first person of intellect whom I met on a ground of equality…
I simply don't believe that any near-future AI, no matter the prompt, could take the long-term social place of real intellectual interaction with an adult human or a community of humans. The details of what was taught are usually secondary, after all, often forgotten or inconsequential; it is the aesthetics and modes of intellectual engagement that remain, imprinted, along with a desire to mimic.
Obviously, AI still has tons of uses for education besides total teacher/tutor replacement. To achieve the kind of serious effects that edtech would like to achieve—to actually revitalize our education system and create a better culture—a clearer focus for me now is AI augmenting and improving human teachers and tutors (like helping create coursework, maybe chiming in on lessons, etc.), rather than simply replacing them, since that replacement bears (at absolute minimum) strong social costs.
There's an old metaphor that crops up when talking about AI: at one point, humans were better at games like chess and Go. Then, around the point of rough parity, humans working together with computers were better than either alone. But then computers so outpaced humans that now there’s no point in combining their efforts. Even Magnus Carlsen could not give relevant input to the best computer chess programs.
How did AIs get so much better at these games? Interestingly, most LLMs are not very good at chess or Go. They hallucinate moves all the time. No, the most advanced are specialized programs that can (essentially) play an astronomical number of games against themselves (e.g., a model like AlphaGo Zero). But this is only possible because games like chess or Go are extremely constrained, can be played in quick succession, and have clear objective metrics.
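To make the self-play point concrete, here is a toy sketch (the game, the random policy, and the counts are all illustrative, not taken from any real training system): even a crude single-pile Nim simulator can generate thousands of objectively labeled game outcomes in well under a second, exactly the kind of cheap, unambiguous feedback that chess and Go provide and that open-ended problems like tutoring do not.

```python
import random

def play_nim(pile=21, max_take=3):
    """One game of single-pile Nim between two random players.
    Players alternate removing 1..max_take stones; whoever takes
    the last stone wins. Returns the winner (player 0 or 1)."""
    player = 0
    while True:
        take = random.randint(1, min(max_take, pile))
        pile -= take
        if pile == 0:
            return player
        player = 1 - player

def self_play(n_games=10_000):
    """Run n_games of self-play. Each game is cheap to simulate,
    fully constrained by the rules, and ends in an unambiguous
    win/loss -- the properties that make chess/Go-style
    self-play training feasible."""
    wins = [0, 0]
    for _ in range(n_games):
        wins[play_nim()] += 1
    return wins

wins = self_play()
print(wins)  # ten thousand objectively labeled outcomes, near-instantly
```

A tutoring session has none of these properties: it takes an hour rather than a millisecond, its rules are not written down anywhere, and there is no clean win/loss signal at the end to learn from.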
Meanwhile, more interesting general problems like “How do you best tutor a young student in mathematics?” are extremely open-ended, individuated, and relatively data-scarce. Some think that we might be able to create synthetic data and use it to train AI—yet so far, advanced models have been trained mostly on real, homegrown human data, and relying primarily on synthetic data has well-known downsides.
Therefore, it seems like the time span in which human-computer hybrids will outpace either humans or computers alone will last much longer, given the lack of ability to “self-play” out the multi-dimensional open problems AI will be applied to.
The liminal zone may then last for a few decades at least, or even—fingers crossed—a few centuries.