The joy of blackouts; AI ruins college; The Consciousness Wars continue; Peter Singer’s chatbot betrays him, & more
Desiderata #35
The Desiderata series is a regular roundup of links and commentary, and an open thread for the community (paid-only).
Contents
Overstatement of the Year?
“Everyone is cheating their way through college.”
The Consciousness Wars continue.
“The most fascinating graph.”
If the US were an upper-class family, DOGE has saved $367.
Does the Great Filter hypothesis mean finding alien life is bad?
How close were the Ancient Greeks to calculus?
Peter Singer’s chatbot betrays him and endorses deontology.
Newest reasoning models are lying liars who lie. A lot.
Blackout jubilation as an indictment of the modern world.
From the archives.
Comment, share anything, ask anything.
1. Overstatement of the Year?
Occasionally, I like to check in on predictions people have made about AI. Here’s one of my favorites. Did you know it’s been over three months since Deep Research supposedly allowed automating 1-10% of all economically valuable tasks in the world (according to the CEO of OpenAI)?
Meanwhile, our labor productivity was down by 0.8% in the first quarter of this year. In fact, according to the U.S. Bureau of Labor Statistics, labor output was down 0.3%, while hours worked was up 0.6%. As I’ve noted before: text-generation just isn’t that valuable! Otherwise, there wouldn’t be so much of it to train the models on.
2. “Everyone is cheating their way through college.”
What Sam Altman should have said is that they’ve automated the “job” of being a student. Which is true. As a recent deep-dive in New York Magazine put it:
It’s a harrowing read. Its interviews and anecdotes make clear that the pessimistic baseline expectation should now be that most students use AI on most assignments. Plenty of teachers are quitting because they want more from life than grading an AI’s essays.
After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said.
So what do we do? Is academia over? What does a GPA, or even an entire degree, reflect anymore, if homework and essays can be one-shotted by ChatGPT?
People are arguing for a return to tests, but relying solely on tests limits what academia can impart. It turns us into the AIs, focused solely on regurgitating facts. No “blue book” essay written in pencil in a cramped room can take the place of hours of real research, the deep digestion of a book, and so on. That is the main relevant skill academia teaches: how to think in depth about a subject. The situation reveals deep tensions in academia. Ultimately, we have to ask:
Why, in 2025, are we grading outputs, instead of workflows?
We have the technology. Google Docs is free, and many other text editors track version histories as well. Specific programs could even be provided by the university itself. Tell students you track their workflows and have them do the assignments with that in mind. In fact, for projects where ethical AI is encouraged as a research assistant, editor, and smart-wall to bounce ideas off of, have that be directly integrated too. Get the entire conversation between the AI and the student that results in the paper or homework. Give less weight to the final product—because forevermore, those will be at minimum A- material—and more to the amount of effort and originality the student put into arriving at it.
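To make the idea concrete: version-tracking editors already expose the raw material for this. Below is a minimal, hypothetical sketch (the function name, the 30-minute session cutoff, and the snapshot format are all my own assumptions, not any university's actual system) of how a grader's tooling might summarize a document's revision history, distinguishing a paper written across several sittings from one pasted in wholesale.

```python
from datetime import datetime, timedelta

def revision_profile(snapshots):
    """Summarize a document's version history.

    snapshots: list of (timestamp, word_count) tuples, oldest first,
    as one might export from a version-tracking editor.
    Returns the number of distinct editing sessions (a gap of more
    than 30 minutes starts a new session) and the largest
    single-revision jump in word count.
    """
    sessions = 1
    largest_jump = 0
    for (t_prev, w_prev), (t_cur, w_cur) in zip(snapshots, snapshots[1:]):
        if t_cur - t_prev > timedelta(minutes=30):
            sessions += 1
        largest_jump = max(largest_jump, w_cur - w_prev)
    return {"sessions": sessions, "largest_jump": largest_jump}

# A paper built up over days vs. one pasted in at the deadline
# produce very different profiles.
steady = [
    (datetime(2025, 5, 1, 9, 0), 0),
    (datetime(2025, 5, 1, 9, 20), 300),
    (datetime(2025, 5, 2, 14, 0), 700),
    (datetime(2025, 5, 3, 10, 0), 1200),
]
pasted = [
    (datetime(2025, 5, 3, 23, 50), 0),
    (datetime(2025, 5, 3, 23, 52), 1200),
]
print(revision_profile(steady))  # {'sessions': 3, 'largest_jump': 500}
print(revision_profile(pasted))  # {'sessions': 1, 'largest_jump': 1200}
```

None of this replaces a grader's judgment; the point is only that the signal (effort over time versus a single 1,200-word paste) is already sitting in the version history, waiting to be graded.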
In other words, grading needs to transition to “showing your work,” and that includes essay writing. Serious pedagogy must become entirely about the process. Measuring education by grading outputs is over, for good. It was a good 3,000-year run. We had fun. It’s over. Stop grading essays, and start grading the creation of the essay. The same goes for everything else.