85 Comments
author

Please note: scammers are impersonating me (and other authors) on Substack to send suspicious links. I will never ask you to join a telegram or send me any information. If you got a reply comment saying as such, know that it's not real and is a common kind of scam.


What I keep coming back to is that language is just a bridge. It's a bridge between your experience and my experience. I can understand what you are saying only because elements of my experience correspond to elements of your experience and language forms a bridge between them. This leads to two thoughts:

1. AI is trained on language, not experience. It is trained on bridges, not on solid ground. No wonder it hallucinates.

2. I don't want content, I want communication. Human existence is fundamentally lonely. We have access only to our own mind. Everyone else is a stranger to us. Except when we communicate. Knowing that there is a human being with human experience on the other side of the bridge is the only reason I am interested in the bridge at all. A bridge to nowhere, no matter how bright and shiny it may be, and no matter how swiftly it may be built, is still a bridge to nowhere, and I'm not interested.

Which leads to the further thought that the notion that AI can disrupt the content industry by building shinier bridges faster misses the point that we never wanted bridges in the first place. What we wanted was a way to cross to the other side. A bridge is of value only if there is something I want on the other shore. And the thing I want is a human being.

author

"A bridge to nowhere" is a great description


This to me is what's most puzzling about the current "tech" scene, and something I've noticed as well, but that they remain blind to.

"Technologists" don't understand what technology is in the larger scheme of things. They fetishize it. They build technology because it's cool and fun. Sure, I get it.

But the so-called "visionaries" of Silicon Valley have *no vision whatsoever*. Ask them what's on the other side of that bridge and prepare to be underwhelmed. Oh, we're going to "drive down the cost of energy and intelligence", make them "too cheap to meter"? To do... what exactly? Usually you get some vague gesturing at "all the possibilities" that will come from it, with no actualities. "The singularity is so unpredictable we couldn't possibly know what's going to happen!!" How... convenient?

Building technology as such is useless except as an academic exercise in aesthetics. Its existence is justified by the ends it enables. But what world are we actually enabling with it? What ends for the means we're developing? Do we just want to be eternal omnipotent gods with... nothing concrete to do? Doesn't make for the greatest story.

There is no shortage of people wanting to build bridges, but it seems like they barely know in what direction to build it, let alone what would exist on the other side.


You articulated so well what was swirling around in my head, but that I couldn't quite nail down. From this perspective, the current enthusiasm about AI reminds me of what was happening back in the early '90s, when the general public had more access than before to personal use of computing power. "PCs," as we referred to the machines back then, were talked about with a similar level of fanfare and both Utopian and Dystopian predictions about the profound impact the technology would have on humanity. While the proliferation of artificial computing power has undeniably ushered in significant changes in how we, as humans, interact and produce, none of those changes significantly alters our hard-wired need to interact with each other on a level that verifies our humanity to each other.


Thanks for this, you've clearly articulated why I'm so turned off by AI-generated art.


How beautifully put! So in the end, AI may need to experience mortality and the pain that goes with it, to have anything to share that we're interested in.


For all the tweets and articles I have read about AI, I have yet to see one explicit claim of what AI can do for us, other than widen and deepen the commerce of information. Drown the world in more of the written word. I really don't see what else it can do for us, and as far as I can tell neither can those building it.

And as far as creative stuff goes, one might say that AI is going to make it harder than ever to be a creator - because it's going to drown the world in media.

So then where's the edge? Well, in a world where anyone can be a creator, the defining characteristic is going to be taste. Taste, and the ability to be honest in ways that others aren't. And if you think about it, that's all creativity ever was to begin with.

I'm still optimistic on human creativity, because it is a thing that fundamentally cannot be replaced. AI can't get goosebumps. AI doesn't have taste.

Dec 8, 2023 · Liked by Erik Hoel

I’m seeing some really interesting results in medical situations, where AI is outperforming humans in detecting early signs of cancer and so forth. It doesn’t eliminate the need for radiologists or proctologists but it does seem to significantly improve test reading and therefore contribute to early treatment and thus better outcomes.


I'm a programmer and digital creator and AI has had a huge impact on my work. It writes code for me, answers questions, generates concept art and ideas, and acts as a sounding board to bounce stuff off of. I'm easily 5 if not 10 times more productive with it. And yes, it's well-paid work that I'm able to do quicker and better.


I have a job that requires digesting lots of very long, dense, complex technical documents, and bringing myself up to speed on new areas of technological and scientific development, often on a short timeline. LLMs have also made me much, much more productive. The three areas where they provide the greatest value to me:

1. Teaching me about new technologies. They act as an infinitely patient, completely flexible tutor who has digested all public human knowledge and can summarize it for me and answer questions about it, meeting me exactly where I am and levelling me up to wherever I need to be. These models will revolutionize education. This is the #1 killer app for LLMs in my opinion.

2. Digesting, summarizing, and answering questions about long, dense technical documents.

3. Digesting, summarizing, and answering questions about foreign language documents. In a typical afternoon I may ask the model to explain a half dozen documents in Chinese, Korean, and Japanese, and the outputs are MUCH more useful and actionable than a straight machine translation (eg from Google Translate).

Note that none of these applications involves “generating content” in the way Erik means it — the content being generated is for an audience of one, and it’s ephemeral. It’s like having a team of enthusiastic assistants with an impossibly diverse set of skills, who tirelessly work on my behalf for $0/hour. What is the economic value of that? Sounds high to me, and my increased productivity over the past year would support that intuition.


I think AI tools are valuable because they are, well, tools. Programmers are some of the best use cases because programming is a language job and these LLMs are language tools. Like any tools, they are leverage for us to do more work. If they make good personal assistants, it's not that they'll put all PAs out of work; rather that millions more people will be able to use PAs than ever could afford the human version.


Nice. Your thinking on AI is always a good sanity check, I wish more people would read this stuff. The hype is so powerful that people don't even want to hear measured bearishness like this. Consider popping into the weekly meeting at my work?

I think you are on to something with the supply paradox of AI. In programming this translates to the fact that the only thing the AI can learn is what we call the "boilerplate," essentially the stuff that is the same across files and projects. This is necessarily true because what these models do is pick up on patterns. Well, guess what: I can write boilerplate with my eyes closed, at my max typing speed. The work that programmers get paid to do is precisely the non-boilerplate, the stuff that is specific to the project: integrating different technologies with the business context, knowing the intricacies of the existing codebase, and so on. Very similar to lawyers, I imagine, where there is a lot of boilerplate to deal with but that's not where their expertise is. The expertise is in the novel case, identifying what is different about it.
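To make the distinction concrete, here's a rough sketch (the domain, names, and the pricing rule are all made up for illustration): the first half is the scaffolding a pattern-matcher can reproduce from a thousand similar files; the last function encodes a fact about this particular business that no amount of pattern-matching on other codebases can supply.

```python
from dataclasses import dataclass

# --- Boilerplate: near-identical across countless projects; easy to pattern-match ---
@dataclass
class Order:
    customer_id: str
    subtotal: float
    country: str

def load_order(raw: dict) -> Order:
    """Validate a raw payload and construct an Order."""
    return Order(
        customer_id=str(raw["customer_id"]),
        subtotal=float(raw["subtotal"]),
        country=str(raw["country"]),
    )

# --- Project-specific: the part you actually get paid for ---
def final_price(order: Order) -> float:
    # Hypothetical business rule: EU partner orders over 100 get a negotiated 7% discount.
    # This number lives in someone's head and in this codebase, nowhere else.
    if order.country in {"DE", "FR", "NL"} and order.subtotal > 100:
        return round(order.subtotal * 0.93, 2)
    return order.subtotal
```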

Like you said, maybe there is room for the tech to encroach on the "real job." But I'm skeptical for the same reasons you mentioned. We are operating at the absolute limits of data hoarding. If any pressure at all is placed on these companies' ability to hoard data, progress will not be made, barring some sort of revolution. I definitely expect this pressure to come. People don't like their data being used to train AIs. Platforms are updating their terms, artists are rebelling, policies will be made. It's basically a natural extension of existing privacy laws.

author

I think the copyright thing will end up playing a major role, as you said. It’s sort of strange that those most worried about AI safety focus all their efforts on alignment. The best way to slow down AI would be to fund joint lawsuits about copyright and try to get the laws changed! But that’s not very sexy.


Oh my god, yes, you said it.


It's a pretty open-and-shut case. Copyright requires human authorship. Therefore, anything made by a robot shouldn't be copyrightable.


Ha! Brilliant analysis. 👍 Leaves me wondering what the use cases are. A year on, I find the use cases for ChatGPT are few and far between: like when I need to revise text to fit an arbitrary text limit, and I'm too blind with tiredness to do it myself; or to unpick an opaque piece of academic writing and turn it into comprehensible language from a discipline I don't have a deep acquaintance with. It can't write essays for honors courses where you need to deeply engage with the primary literature either. I am less wowed by the week, tbh. The AI winter is coming.

author

Since I have a pretty strong skepticism about whether AI is worth pursuing at all, I've tried to find a few ways to use ChatGPT. Specifically, I've tried to use it for copy-editing ("Are there any grammar mistakes in this text?") and for research (e.g., "Has anyone proposed X?"), along with web-connected variants, etc. It was not really helpful at either of those tasks, and I long ago stopped paying the $20 to OpenAI for testing purposes.


I can see how it's not super useful for authors. And maybe that use case is over-hyped. But as a programmer I can tell you that it's absolutely earth-shattering the things it does for me. For one thing, I now take on jobs in languages I've barely touched before, since I can get ChatGPT to write almost all the code anyway. It's an enormous boost to my productivity and breadth.


Likewise, I find myself volunteering for things I wouldn't be putting my hand up for if I didn't have ChatGPT helping me, and I can deliver them faster. Getting past a roadblock where I need to do something I've not done before is so much faster now. I can't say I've had as much luck with getting it to write longer code; I reach a point where the project is large enough that it starts breaking parts of its earlier work with later additions. But for quickly spinning up functions or debugging code and spreadsheets it's a real boon. It's made learning new stuff so much less frustrating too; it's refreshing to be able to ask "what's this bit of this line of code actually doing" and get a direct answer, instead of poring over pages of similar-ish queries people have asked/answered online.


Would you mind quantifying? What impact do you expect on the costs and growth rate of the software industry? Do you see a feedback loop where the cost and quality drop like the price of genetic testing?


Unfortunately I don't have the knowledge of economics to dare try to answer those questions. And to be clear, I'm not disputing Erik's point that AI is trained on low-demand work and as such will turn out to be less economically valuable and disruptive than investors think. But several commenters here expressed opinions that AI is not useful at all just because it's not useful to them personally. I wanted to make a counterpoint to that.


Yep, that's a good point indeed, except it's focusing too much on LLMs (these criticisms don't apply to other AI approaches, like AlphaZero).


Right, how much would you pay for this AI-generated text, “Meaning of Old Man by Neil Young”?

https://www.songtell.com/neil-young/old-man

Before you start bidding, note that this text (1) Is Captain Obvious stuff. (2) Reads like it’s by someone for whom English is not their first language. (3) Is cold and emotionless. (4) Has absolutely nothing personal in it. (5) Is repetitious. (6) Proceeds in a linear fashion from beginning to end. (7) Tells you everything and nothing.

This is just the textual analysis of the lyrics. (Is it even factually correct?) Where’s the analysis of the song (lyrics + music)? We don’t read lyrics, we listen to songs. Any human would say something about how the sound of the music and Young’s voice is part of the song’s “meaning,” how the song changes in intensity as his voice suddenly shifts up in pitch. As humans, we may not always be able to articulate the how and why, but we sure know the when and where of our feelings.

Removed (Banned) · Dec 24, 2023

Erik, is that an email address or something else? Not sure how to use it.

author

I'm sorry, this is the real Erik Hoel. Someone impersonated me by creating a Substack account with my photo (but it wasn't the same actual account). The message didn't come from me, and the invitation to my Telegram is not real.


What's the antonym of poetic justice? Poetic injustice? Uninspired and spiritless justice? Thanks for reminding me that all the stuff that gave my life its limited poetry is now to be co-opted by the handsome and very rich princes (and let us not forget the princesses!) of Stanford and other incubators for the devaluation of non-technological human expression. Thank you also, though you didn't need to bother really because I already knew it, for reminding me that human illustrators don't make any money. That's why I drove a cab for years before cabs were replaced by (how coincidental!) another recent for-profit technological cudgel quaintly known as uber, which is a word that once, ninety years ago or so, was itself fondly co-opted by the expounders of 20th century fascism. (Yes, I know this was an article about the economics of AI, not the rightness of it, but economic failure or not, AI has added to the powerful anti-poetic mindset that I sense everywhere these days, though I have no graphs nor charts to prove it.)

Dec 8, 2023 · Liked by Erik Hoel

Thanks for your very insightful and interesting article.

The AI's disruption of "small industries" that are not lucrative could be just a part of a whole disruption dynamic going on. For example, from a historical perspective, the industrial revolution disrupted the jobs of thousands of family industries, moving from manual to automated labor in big factories. So maybe the Big Five's intent is not to replace industries or jobs as such, but to create (or disrupt) a whole new breed of industries and markets. ("Yes, there is not enough demand for AI now, or compelling use cases, so why not create the demand for them?") That would possibly explain why a huge amount of money is going into AI. Hence it is not about how to optimize or get a better cost/benefit out of what we are doing or using now, but to create a new realm where AI would be indispensable, not just for working, but for a new lifestyle.

I'm 61, and I grew up without cell phones or the internet. I did well without them. But now it is impossible to live without them. The industry at that time created the demand for the new products and, with them, a new way of living. The investors' expectations are that sooner or later we will consume more and more AI products as time passes. And then afterwards, as always, we will not be able to live without them. AI is like a new artificial Darwinian species, and collectively as a society we will adapt and live with it, whether we want it or not.

You wrote "A big portion of users pay the $20 to OpenAI for GPT-4 access more for the novelty than anything else". I'm sorry, but most of us are socially conditioned to consume, and investors expect we will consume their AI products. Nevertheless, I am optimistic that AI will bring a good amount of benefits to our lifestyle and workplace.

Dec 8, 2023 · Liked by Erik Hoel

Thanks for this cool-headed take Erik, needed in the current hype storm.

Makes me think AI may go the way nuclear did: not much of a business case in peaceful applications, despite huge investments, but a complete game-changer in war. Small autonomous killer drones, moving mines with basic image recognition capabilities, would turn the battlefield upside down in Ukraine.


Such an entertaining read, Erik! Just like AI, I’m also not expecting to make any money with my writing. 🤣


Thought provoking, although I take issue with this section:

"Even assuming all that comes true sometime over the next decades: what is the market for personal assistants? What’s the market for butlers? Most people have neither of those things."

This is pretty clearly the wrong way to look at this, since today personal assistants and butlers are incredibly supply constrained but in the future they (theoretically at least) won't be. The current market for personal assistants and butlers is small, but the market for a highly competent digital personal assistant that is affordable to the masses could be huge. Changing the cost structure will change the market.


Great point. We already outsource chat and support teams. Artists are paid a pittance already. Wake me up when we can use AI to replace the overpaid executive class who make millions to sit in on a few meetings with their thumbs up their asses.

Dec 11, 2023 · Liked by Erik Hoel

> AI might end up incredibly smart, but mostly at things that aren’t economically valuable.

> Like, you know, writing.

In the name of this I sent you $7 by subscribing! xD And I'm now going to unsubscribe, since this is the only way I know of to give a one-time donation on Substack. :)

Dec 9, 2023 · Liked by Erik Hoel

Your best piece in a year. You're back! Go get 'em!


To your point about AI robot butlers -- instead of a luxury product, I'm hoping for AI elder care nurses, to help with the extremely disheartening problem of managing diseases like dementia, etc.


Dementia patients need human contact very much; it seems cruel to think of them being cared for by robots. Would we also put robots in nurseries and daycares to handle the 'management' of infants and children - another job that can be repetitive and messy? What would robot care look like (maybe handling non-body cleanup - but that's not nursing, it's custodial work)?


I think it would look like all of the things that we unfortunately simply don't have enough nurses to do + all of the things that would feel more dignified for a robot to do (I would very much prefer a robot help me wipe my ass if I become incapable of doing so myself). I'm picturing more dementia patients getting to live at home with their families, because there's an intelligent robot to watch them 24/7 so they don't wander off, to help them bathe and use the bathroom, to gently manage angry outbursts that could be dangerous for an equally elderly spouse trying to care for her husband, etc. I certainly am not suggesting that robots simply replace human contact!

Dec 9, 2023 · edited Dec 9, 2023

I'm surprised that there are so few people talking Economics about what is, essentially, an article about the Economics of AI. No one has mentioned the Jevons Paradox so far, which I shall illustrate with this made-up quote-

"Watt, you're wasting your time. No one buys steam engines except coal mines, and coal mines don't even buy that many steam engines. You're cornering the market on a product no one buys."

-which is to say that, when steam engines were crappy & expensive, no one bought them (except the people who *really* needed them, like coal mines that needed to pump out the groundwater constantly seeping into their mineshafts). Then when steam engines got cheaper... mine owners didn't spend less money buying the same amount of steam engines. They bought so many more that their total spending on steam engines actually went up. In fact, everyone started buying steam engines, realizing they had tons of uses for them they had never pursued cause they couldn't afford 'em -- up until now. Factories, the new steam ships, the new railroads, construction companies experimenting with the new steamrollers and steam shovels...

(The classic example of the Jevons Paradox is actually about coal and more efficient coal burning leading to more coal being burned, rather than steam engines, but eh, same difference)
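To put rough numbers on the mechanism (a toy constant-elasticity demand curve with made-up figures, not real steam-engine data): when demand is elastic enough, cutting the price raises total spending instead of lowering it.

```python
# Toy Jevons-style arithmetic: Q = A * P**(-elasticity), total spending = P * Q.
# All numbers are illustrative, not historical.
A = 1000.0          # demand scale (arbitrary units)
elasticity = 1.5    # > 1 means elastic demand: cheaper engines -> disproportionately more engines bought

def quantity_demanded(price: float) -> float:
    return A * price ** (-elasticity)

for price in (100.0, 50.0, 25.0):   # the "steam engine" getting cheaper over time
    q = quantity_demanded(price)
    print(f"price={price:6.1f}  units bought={q:6.2f}  total spending={price * q:8.1f}")

# price= 100.0  units bought=  1.00  total spending=   100.0
# price=  50.0  units bought=  2.83  total spending=   141.4
# price=  25.0  units bought=  8.00  total spending=   200.0
```

With an inelastic curve (elasticity below 1) the opposite happens, which is really the question the article is implicitly arguing about: how elastic is the demand for the things AI is good at?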

The more tragic example would be Eli Whitney's cotton gin. Up until its invention, slavery in the American South was actually something of a dying industry, as the classic tobacco fields had used up all their fertility (to oversimplify) on their past harvests. No tobacco, means no demand for slave labor, means no slaves. Up until some bright spark invents a way to process cotton with less labor, which is another way of saying that each unit of labor processes more cotton, which is another way of saying it's more *profitable* to employ each unit of labor, including slave labor...

(The same story applies to military history, if you start looking:

Gunpowder gave you more bang for your buck than bows & arrows... so paradoxically European states start spending *more* money on their militaries, since there's now more return on investment, as the biggest spenders start conquering all the rest.

Gatling thought his gun would make armies smaller, since 1 man could do the work of 10; it did so rather more literally than he intended, but also in response people just bought *more armies*, contributing amongst other things to WW1.

Eisenhower thought nukes would lead to reductions in military spending via his "New Look"/"Massive Retaliation" policy of replacing conventional firepower with nuclear; instead, both the US and the Soviets get caught in a nuclear arms race, because falling behind in spending is now *insanely* more dangerous than before.)

I personally would have found the article more persuasive if it had talked about such things, drawing supply & demand curves to buttress its claims about hypothetical futures instead of looking solely at the present, which seems to be a bit of a category error... I mean seriously, the article argues that people won't buy the robot butlers of tomorrow, because practically no one hires the human butlers of today. You don't even have to look to Economics to know that *that's* not the full story, just look at History where ordinary middle class people were more than happy to hire lots of butlers & servants back when they were cheap. If butlers were cheap again, wouldn't things be more like the past than the present?

Overall, I'm not particularly convinced. The article raises an interesting point, but can't back it up enough to convince the people it most needs to convince: people like me. And there's so much I haven't even talked about yet (e.g. how even WW2-level manufacturing technology allows for a factory to self-replicate in about 6 months, so the invention of an automated factory could lead to growth rates of 300% a year; economically insignificant that is not, and LLMs seem to be leading to exciting/concerning progress in general purpose Robotics: https://robotics-transformer2.github.io/#:~:text=Below%2C%20we%20show%20a%20few%20videos & https://eureka-research.github.io/) -- I look forward to seeing what more you have to say about this though! I'm sure your next article on the subject will keep building on this one.
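(To show my work on that 300% figure, taking the 6-month replication time as given: a factory stock that doubles every 6 months grows by 2^(12/6) = 4x over a year, i.e. +300%.)

```python
# Compounding check for the self-replicating-factory claim; the 6-month doubling time is assumed, not derived.
doubling_time_months = 6
growth_factor_per_year = 2 ** (12 / doubling_time_months)    # 4.0x
annual_growth_rate_pct = (growth_factor_per_year - 1) * 100  # 300.0%
print(growth_factor_per_year, annual_growth_rate_pct)
```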
