All the existential risk, none of the economic impact. That's a shitty trade
I offer "no economic boost," you receive "P(doom)"

Last week The Financial Times ran a well-researched piece by Tej Parikh titled “The bear case for AI.” It included a discussion of previous writing I’ve done on The Intrinsic Perspective that has been bearish about the AI space. Perhaps I was a little early in my skepticism, but a similar bearishness is now growing in mainstream outlets and organizations, such as the recent Goldman Sachs report “Gen AI: Too Much Spend, Too Little Benefit?”
Much of this (attempted, at least) bubble-busting has occurred within this month alone: at the beginning of July, The Economist kicked things off with a piece examining the macroeconomic effect of the much-hyped advances in AI. Their conclusion? AI’s impact as a productivity-boosting technology basically can’t be seen in the macroeconomic data at all. It has had pretty much zero impact on productivity, or on almost anything else. It may as well not exist.
…macroeconomic data also show little evidence of a surge in productivity. The latest estimates, using official figures, suggest that real output per employee in the median rich country is not growing at all. In America, the global centre of AI, output per hour remains below its pre-2020 trend.
Despite individuals often reporting personal AI usage, and despite firms being enthusiastic, there are no stark signs in the final numbers, where it actually matters. Sure, productivity is no longer literally decreasing like it was in 2022, but it looks firmly in line with the same long-term fluctuations that have been going on for decades.
This lack of impact appears true even when you specifically focus on corporations that stand to benefit the most from deployed AI.
Goldman Sachs has constructed a stockmarket index tracking firms that, in the bank’s view, have “the largest estimated potential change to baseline earnings from AI adoption via increased productivity”. The index includes firms such as Walmart, a grocer, and H&R Block, a tax-preparation outfit. Since the end of 2022 these companies’ share prices have failed to outperform the broader stockmarket. In other words, investors see no prospect of extra profits.
The much-anticipated (by AI critics and boosters alike) great job replacement has also not occurred, at least when looking broadly. So far, in fact, white-collar jobs have increased along, again, the same trend-line as before.
Why might this be true? Last week’s Financial Times piece gives a nice tip of the hat to a concern I originally pointed out: that an existing overwhelming data supply usually means whatever produces this data is not very economically valuable (like, say, internet commentary or creative writing). Here’s from The Financial Times:
Erik Hoel, an American neuroscientist, posits that the industries AI is disrupting are not all that lucrative. He coined the phrase “supply paradox of AI” — the notion that the easier it is to train AI to do something, the less economically valuable that thing is.
“This is because AI performance scales based on its supply of data, that is, the quality and size of the training set itself,” said Hoel. “So when you are biased towards data sets that have an overwhelming supply, that, in turn, biases the AI to produce things that have little economic value.”
Hoel raises an interesting point. Generative AI’s current applications include writing, image and video creation, automated marketing, and processing information, according to the US Census Bureau’s Business Trends and Outlook Survey. Those are not particularly high value. Using specialist data, sophisticated models could do deeper scientific work, but that data can be in short supply or even restricted.
At minimum, the fact that the macroeconomic data look pretty much unaffected by the progress in AI fits with the supply paradox being “true enough” so far.
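To make the scaling intuition concrete, here’s a minimal sketch (my illustration, not anything from the FT piece or the Goldman report) of the kind of power-law relationship usually meant when people say model performance scales with data: loss falls as the dataset grows, but with sharply diminishing returns, and nothing in the curve tells you whether the abundant data was economically valuable to begin with. The constants below are invented for illustration, not fitted values.

```python
# Illustrative only: a Chinchilla-style power law in which model loss falls
# with dataset size D. The constants E, B, and beta are made up for this
# example, not taken from any real model.

def loss(D: float, E: float = 1.7, B: float = 2500.0, beta: float = 0.35) -> float:
    """Hypothetical loss as a function of training tokens D (power-law decay)."""
    return E + B / (D ** beta)

for D in (1e9, 1e10, 1e11, 1e12):
    print(f"{D:.0e} tokens -> loss ~ {loss(D):.3f}")
```

Each tenfold increase in data buys a smaller improvement than the last, which is the scaling behavior the quoted passage is gesturing at; the “quality” half of the paradox, what kind of data is abundant, isn’t captured by a curve like this at all.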
What I wanted to point out here, though, is something deeper: the longer AI takes to show up in any positive economic indicators, the more it becomes the case that AI has brought increased existential risk in exchange for minimal upside.
I, unlike many, am agnostic about the level of existential risk (a level sometimes jokingly referred to as P(doom)). That will depend very strongly on the limits of the technology and how fast it develops. So far, none of the lack of controllability that people like Eliezer Yudkowsky originally sounded the alarm about has materialized. AIs do not appear to pursue goals zealously, or to be especially power-seeking. The methods used by leading companies like OpenAI and Anthropic, such as reinforcement learning from human feedback (RLHF), seem to work quite well: they allow AIs to grasp context, to distinguish what is helpful from what is not, and to understand the ambiguity of human requests, wherein “Go optimize a paperclip factory” does not get transformed, as if via a wish to an ill-tempered genie, into “Make the world all paperclips.” RLHF has been quite successful in turning early versions of AIs from schizophrenic dream gods (like the released-too-soon original Bing/Sydney) into what are essentially bland intellectual butlers (modern AIs like Claude).
Admittedly, right now we don’t have high-functioning AI agents, the next goal of companies like OpenAI. Those will be the most fundamental test of whether intellectual ability is linked to power-seeking, and will set the bar for how easy it is for AIs to “go rogue.” But with that earliness admitted, it’s worth remembering that the Yudkowsky perspective is the absolute worst-case scenario. As I’ve pointed out, the concern over an intelligence feedback loop looks a lot like the old worry that nuclear explosions would trigger a feedback loop and burn up the atmosphere. Given that the release gap between GPT-5 and GPT-4 is now predicted to be at least as large as, if not larger than, that between GPT-4 and GPT-3, there are no signs of an imminent “intelligence explosion” wherein AI speeds up advances in AI.
Yet, even if no such intelligence feedback loop materializes, and even if there’s no phase shift wherein AIs become power-seeking in pursuit of long-term goals, that doesn’t mean there aren’t deeper, older, primeval risks to creating what is effectively a new intelligent species to share this planet with. My favorite argument remains unchanged on this, because it is so simple. It goes: intelligence makes entities dangerous, which is why humans are the dominant species on Earth. It is therefore dangerous—in the long run—to create a new “species” of entities that are as intelligent as, or more intelligent than, yourself. This just strikes me, and many others, as a pretty obvious risk, without needing to be specific about how it plays out. I’ve heard it phrased quite aptly as:
Chimpanzees should be careful about inventing humans.
As I once wrote in the context of worrying about introducing a new, equally intelligent “species”:
Economics, politics, capitalism—these are all games we get to play because we are the dominant species on our planet, and we are the dominant species because we’re the smartest. Annual GDP growth, the latest widgets—none of these are the real world. They’re all stage props and toys we get to spend time with because we have no competitors. We’ve cleared the stage and now we confuse the stage for the world.
One can be agnostic as to when, or how, this risk materializes. You don’t need some scenario about gray goo and AIs training ever-smarter versions of themselves by 2030. It could be in 100 years, when AIs are far more advanced just due to our own efforts, can communicate amongst themselves, have much more distinct and specific personalities and agenthood, and our overconfidence in our methods of control gets us into trouble. It could be after AIs are granted rights by humans unable to help but anthropomorphize qualia-less insectoid minds wearing the face of a helpful human assistant. It could be after AIs actually gain consciousness, if they ever do (perhaps they even secretly already have, to some small degree). Alternatively, it could be some group of humans using AI for power-seeking purposes, and then in turn losing control. History is long, lots can go wrong, and the introduction of AI adds an entire new axis along which things can go wrong.
My point here is merely that, while existential risks don’t seem immediate to me, they are still present. They have still been introduced, in some form or another, forevermore. And since, at least according to all the macroeconomic data so far, this technology is not ushering in some grand new utopia of increasing productivity and life-quality, it all seems like, well, kind of a shitty trade.
Awesome that the FT recognized you.
Your observations remind me of the early 1990s IT productivity paradox debate. As Solow noted and Brynjolfsson researched, the massive IT spending of the 1980s and 90s did not seem to register in productivity. One of the proposed reasons was that industry merely "repaved the cow paths," when business process redesign is what's required to realize gains from new technologies.
I think we are in the first chapter of AI - the introduction of the new technology. The next chapter will be AI-oriented re-engineering. If you are right, there will still be limited gains to be realized.
My only prediction is that there will be millions made in AI-re-engineering consulting (but such services will be called something more pretentious).
It seems to me like if AI got to the stage where we had a real P(doom), it would also necessarily be able to do economically productive things.
Like for example, I cannot imagine how you could have a high P(doom) without AI being very good at, e.g., coding.