I recently suffered a personal rout. It could be called, charitably, a shameful private defeat. A massacre, a decimation, an unconditional surrender. Yes, today I have a piece out in The New York Times. I have written for them, that conglomerate of taste, and bowed my head to their editorial yoke.
This is despite all my bleating and justifications to the contrary, insisting that writers should mostly spend their time building their own audiences and that writing for outlets isn’t worth it anymore. In fact, you might remember that just a few months ago I wrote a post literally titled “Why I keep turning down The New York Times,” in which I said I had already refused the Times on multiple occasions.
… while outlets like The New York Times are considered traditionally prestigious places, there are misconceptions about the benefits of publishing with them. Personally, I’ve been reached out to and asked to write for The New York Times three or four times now. And I’ve always turned them down.
Well, fifth time’s the charm, I suppose. An editor reached out with exactly that ask, and then strong-armed me, beating me handily in hand-to-hand combat and forcing a kind of groveling submission that marks me with shame to this day.
I’m kidding. She was quite nice. After she reached out I met with her, if only to see what she’d say, since she specifically brought up my previous anti-Times protestation piece. And she ended up convincing me as to why, at least for this specific project, it was worth publishing with them. She pointed out that the brand of the Times has sway on matters of public policy in ways I simply don’t, and at this point I honestly believe that’s what’s necessary when it comes to putting some clearer guardrails on generative AI. Otherwise it’s going to continue to be used in ways that are intrinsically harmful to human culture, ways most people online are already beginning to feel as our feeds and searches fill up with synthetic garbage.
So today I’m in the Times calling for mandates that major AI companies engage in actual advanced watermarking efforts so that we can tell, with accuracy, what was AI-generated. You can read the full piece here.
Given the enshittification of the internet we are experiencing, I think this is a pretty reasonable demand. For despite all my takes on the technology, which are admittedly often negative, I do see the potential good use cases (to name a few: AI tutoring for children, AI translation, AI research assistants, etc.). However, I’m also a realist. I don’t think it will be healthy in the long run if AI outputs remain totally undetectable, as they are today, and take over culture.
Instead, subtle patterns should be baked into the outputs that, at least when it comes to substantive chunks of text or video or audio, allow them to be identified as AI-originating, all via detectors maintained by the companies themselves. The companies originally said they would do this, but then didn’t. They have added very minimal watermarking to things like the metadata of images, but that metadata is stripped automatically by most websites. This is despite there being well-known, far more advanced methods of watermarking outputs in subtle statistical ways. Yet the AI industry seems unwilling to slow its race down even a little to try to implement them, possibly because such measures, just like making AIs safer and more aligned, probably mean a hit to AI capabilities. And no one wants their model to underperform on benchmarks, as benchmark performance, rather than real-world usage, is what feeds investment and hype.
Personally, I don’t think that’s a good excuse. It’s well worth some extra effort, and a possible hit on performance, to make sure that long chunks of things like AI text are identifiable via some cipher. The reason this is hard, in my opinion, is mostly that companies refuse to fiddle with the output side to make detection methods easier, since being undetectable actually helps the bottom line.
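To make “subtle statistical ways” a bit more concrete, here is a minimal toy sketch, in Python, of the kind of green-list watermarking scheme researchers have proposed (for example, Kirchenbauer et al., 2023): at each step the generator is nudged toward a pseudorandom subset of the vocabulary keyed off the previous token, and a detector later checks whether that subset shows up far more often than chance. The tiny vocabulary, the bias strength, and the uniform stand-in for a language model below are illustrative assumptions, not anything an actual company ships.

```python
# Toy sketch of statistical text watermarking, loosely in the spirit of the
# "green list" scheme of Kirchenbauer et al. (2023). Every parameter here
# (vocabulary, bias strength, thresholds) is an illustrative assumption.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
GREEN_FRACTION = 0.5                      # share of vocab marked "green" each step
BIAS = 4.0                                # how strongly generation favors green tokens


def green_set(prev_token: str) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))


def generate(length: int, watermark: bool) -> list[str]:
    """Sample tokens uniformly (a stand-in for a language model),
    optionally biased toward each step's green set."""
    rng = random.Random(0)
    out = ["tok0"]
    for _ in range(length):
        greens = green_set(out[-1])
        if watermark and rng.random() < BIAS / (BIAS + 1):
            out.append(rng.choice(sorted(greens)))
        else:
            out.append(rng.choice(VOCAB))
    return out


def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the green-token count sits above chance."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_set(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std


if __name__ == "__main__":
    print("watermarked z =", round(detect(generate(300, watermark=True)), 1))
    print("plain text  z =", round(detect(generate(300, watermark=False)), 1))
```

The appealing property of schemes like this is that detection needs only the hashing key, not the original prompt or model, which is why the companies themselves are the natural parties to maintain the detectors.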
If you feel the same way about this issue, or know someone who might, please share the piece. I wouldn’t have accepted the offer if I didn’t think this, in particular, is immediate and important.
For me personally, how was the experience of working with the Gray Lady? It was a good reminder that I much prefer Substack. As the editor herself said, designing a Times piece is very much not like Substack; the expectation is that people will only rarely read to the end. And don’t complain to me about the title: they are very sensitive about the title, and the very mention of it is like activating a swarm of angry bees (although they did eventually take my suggestions into account).
But with all that said, overall the experience was generally a positive one, mostly due to the gentle editorial hand on this piece, and it was interesting to get a look inside the system. I did like the outside fact-checking. Even though it was ultimately 95% unnecessary, catching only a single very minor mistake that would have been fine to correct post-publication, it’s the one part of the process I wish I could add to my usual writing.
With all that said, my humbling defeat at their hands is quickly beginning to fade. Already it’s been replaced by machinations triggered by the euphoric god-like power of that byline. The transformation has begun.