Gemini and the supply paradox of AI
Please note: scammers are impersonating me (and other authors) on Substack to send suspicious links. I will never ask you to join a telegram or send me any information. If you got a reply comment saying as such, know that it's not real and is a common kind of scam.
What I keep coming back to is that language is just a bridge. It's a bridge between your experience and my experience. I can understand what you are saying only because elements of my experience correspond to elements of your experience and language forms a bridge between them. This leads to two thoughts:
1. AI is trained on language, not experience. It is trained on bridges, not on solid ground. No wonder it hallucinates.
2. I don't want content, I want communication. Human existence is fundamentally lonely. We have access only to our own minds. Everyone else is a stranger to us, except when we communicate. Knowing that there is a human being with human experience on the other side of the bridge is the only reason I am interested in the bridge at all. A bridge to nowhere, no matter how bright and shiny it may be, and no matter how swiftly it may be built, is still a bridge to nowhere, and I'm not interested.
Which leads to the further thought that the notion that AI can disrupt the content industry by building shinier bridges faster misses the point that we never wanted bridges in the first place. What we wanted was a way to cross to the other side. A bridge is of value only if there is something I want on the other shore. And the thing I want is a human being.
For all the tweets and articles I have read about AI, I have yet to see one explicit claim of what AI can do for us, other than widen and deepen the commerce of information. Drown the world in more of the written word. I really don't see what else it can do for us, and as far as I can tell neither can those building it.
And as far as creative stuff goes, one might say that AI is going to make it harder than ever to be a creator - because it's going to drown the world in media.
So then where's the edge? Well, in a world where anyone can be a creator, the defining characteristic is going to be taste. Taste, and the ability to be honest in ways that others aren't. And if you think about it, that's all creativity ever was to begin with.
I'm still optimistic on human creativity, because it is a thing that fundamentally cannot be replaced. AI can't get goosebumps. AI doesn't have taste.
Nice. Your thinking on AI is always a good sanity check, I wish more people would read this stuff. The hype is so powerful that people don't even want to hear measured bearishness like this. Consider popping into the weekly meeting at my work?
I think you are on to something with the supply paradox of AI. In programming this translates to the fact that the only thing the AI can learn is what we call "boilerplate": essentially the stuff that is the same across files and projects. This is necessarily true, because what these models do is pick up on patterns. Well, guess what: I can write boilerplate with my eyes closed, at my max typing speed. The work that programmers get paid to do is precisely the non-boilerplate, the stuff that is specific to the project: integrating different technologies with the business context, knowing the intricacies of the existing codebase, and so on. Very similar to lawyers, I imagine, where there is a lot of boilerplate to deal with but that's not where their expertise is. The expertise is in the novel case, identifying what is different about it.
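To make the boilerplate/domain split concrete, here's a minimal sketch. Everything in it is invented for illustration (the `Invoice` class, the "EU-OLD" surcharge rule, the 2019 contract): the validation function is the kind of generic pattern a model has seen thousands of times, while the pricing rule is the project-specific knowledge it hasn't.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int
    region: str


def validate(invoice: Invoice) -> None:
    # Boilerplate: generic validation that looks the same in countless
    # repositories -- exactly the pattern a model trained on public code learns.
    if invoice.amount_cents <= 0:
        raise ValueError("amount must be positive")


def total_due(invoice: Invoice) -> int:
    validate(invoice)
    # Domain logic: a made-up business rule ("EU-OLD" customers pay a 2%
    # surcharge from a hypothetical 2019 contract). This is the
    # project-specific code that appears in no training set.
    if invoice.region == "EU-OLD":
        return invoice.amount_cents * 102 // 100
    return invoice.amount_cents
```

A model can autocomplete `validate` from its first line; `total_due` it can only guess at, because the rule lives in the business, not in the pattern.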
Like you said, maybe there is room for the tech to encroach on the "real job." But I'm skeptical for the same reasons you mentioned. We are operating at the absolute limits of data hoarding. If any pressure at all is placed on these companies' ability to hoard data, progress will not be made, barring some sort of revolution. I definitely expect this pressure to come. People don't like their data being used to train AIs. Platforms are updating their terms, artists are rebelling, policies will be made. It's basically a natural extension of existing privacy laws.
Ha! Brilliant analysis. 👍 Leaves me wondering what the use cases are. A year on, I find the use cases for ChatGPT are few and far between: like when I need to revise text to fit an arbitrary character limit, and I'm too blind with tiredness to do it myself; or to unpick an opaque piece of academic writing and turn it into comprehensible language from a discipline I don't have a deep acquaintance with. It can't write essays for honors courses where you need to deeply engage with the primary literature either. I am less wowed by the week, tbh. The AI winter is coming.
Right, how much would you pay for this AI-generated text, “Meaning of Old Man by Neil Young”?
Before you start bidding, note that this text (1) Is Captain Obvious stuff. (2) Reads like it’s by someone for whom English is not their first language. (3) Is cold and emotionless. (4) Has absolutely nothing personal in it. (5) Is repetitious. (6) Proceeds in a linear fashion from beginning to end. (7) Tells you everything and nothing.
This is just the textual analysis of the lyrics. (Is it even factually correct?) Where’s the analysis of the song (lyrics + music)? We don’t read lyrics, we listen to songs. Any human would say something about how the sound of the music and Young’s voice is part of the song’s “meaning,” how the song changes in intensity as his voice suddenly shifts up in pitch. As humans, we may not always be able to articulate the how and why, but we sure know the when and where of our feelings.
What's the antonym of poetic justice? Poetic injustice? Uninspired and spiritless justice? Thanks for reminding me that all the stuff that gave my life its limited poetry is now to be co-opted by the handsome and very rich princes (and let us not forget the princesses!) of Stanford and other incubators for the devaluation of non-technological human expression. Thank you also, though you didn't need to bother really because I already knew it, for reminding me that human illustrators don't make any money. That's why I drove a cab for years before cabs were replaced by (how coincidental!) another recent for-profit technological cudgel quaintly known as uber, which is a word that once, ninety years ago or so, was itself fondly co-opted by the expounders of 20th century fascism. (Yes, I know this was an article about the economics of AI, not the rightness of it, but economic failure or not, AI has added to the powerful anti-poetic mindset that I sense everywhere these days, though I have no graphs nor charts to prove it.)
Thanks for your very insightful and interesting article.
AI's disruption of "small industries" that are not lucrative could be just one part of a larger disruption dynamic. For example, from a historical perspective, the industrial revolution disrupted the jobs of thousands of family industries, moving from manual to automated labor in big factories. So maybe the Big Five's intent is not to replace industries or jobs as such, but to create (or disrupt) a whole new breed of industries and markets. ("Yes, there is not enough demand for AI now, or compelling use cases, so why not create the demand for them?") That would possibly explain why such a huge amount of money is going into AI. Hence it is not about optimizing or improving the cost/benefit of what we are doing or using now, but about creating a new realm where AI would be indispensable, not just for working, but for a new lifestyle.
I'm 61, and I grew up without cell phones or the internet. I did well without them. But now it is impossible to live without them. The industry at that time created the demand for the new products, and with them a new way of living. The investors' expectation is that sooner or later we will consume more and more AI products as time passes. And then afterwards, as always, we will not be able to live without them. AI is like a new artificial Darwinian species, and collectively as a society we will adapt and live with it, whether we want to or not.
You wrote “A big portion of users pay the $20 to OpenAI for GPT-4 access more for the novelty than anything else”. I'm sorry, but most of us are socially conditioned to consume, and investors expect we will consume their AI products. Nevertheless, I am optimistic that AI will bring a good amount of benefits to our lifestyle and workplace.
Thanks for this cool-headed take Erik, needed in the current hype storm.
Makes me think AI may go the way nuclear did: not much of a business case in peaceful applications, despite huge investments, but a complete game-changer in war. Small autonomous killer drones and moving mines with basic image-recognition capabilities would turn the battlefield upside down in Ukraine.
Such an entertaining read, Erik! Just like AI, I’m also not expecting to make any money with my writing. 🤣
Great point. We already outsource chat and support teams. Artists are paid a pittance already. Wake me up when we can use AI to replace the overpaid executive class who make millions to sit on a few meetings with their thumbs up their asses.
Thought provoking, although I take issue with this section:
"Even assuming all that comes true sometime over the next decades: what is the market for personal assistants? What’s the market for butlers? Most people have neither of those things."
This is pretty clearly the wrong way to look at this, since today personal assistants and butlers are incredibly supply constrained but in the future they (theoretically at least) won't be. The current market for personal assistants and butlers is small, but the market for a highly competent digital personal assistant that is affordable to the masses could be huge. Changing the cost structure will change the market.
> AI might end up incredibly smart, but mostly at things that aren’t economically valuable.
> Like, you know, writing.
In the name of this I sent you $7! xD By subscribing and I'm now going to unsubscribe since this is the only way I know of to give a one-time donation on Substack. :)
Your best piece in a year. You're back! Go get 'em!
To your point about AI robot butlers -- instead of a luxury product, I'm hoping for AI elder care nurses, to help with the extremely disheartening problem of managing diseases like dementia, etc.
I'm surprised that there are so few people talking Economics about what is, essentially, an article about the Economics of AI. No one has mentioned the Jevons Paradox so far, which I shall illustrate with this made-up quote-
"Watt, you're wasting your time. No one buys steam engines except coal mines, and coal mines don't even buy that many steam engines. You're cornering the market on a product no one buys."
-which is to say that, when steam engines were crappy & expensive, no one bought them (except the people who *really* needed them, like coal mines that needed to pump out the groundwater constantly seeping into their mineshafts). Then when steam engines got cheaper... mine owners didn't spend less money buying the same amount of steam engines. They bought so many more that their total spending on steam engines actually went up. In fact, everyone started buying steam engines, realizing they had tons of uses for them they had never pursued cause they couldn't afford 'em -- up until now. Factories, the new steam ships, the new railroads, construction companies experimenting with the new steamrollers and steam shovels...
(The classic example of the Jevons Paradox is actually about coal and more efficient coal burning leading to more coal being burned, rather than steam engines, but eh, same difference)
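The Jevons logic in this comment can be put in numbers. This is a toy sketch with invented figures (the demand constant `k` and elasticity `e` are made up): with a constant-elasticity demand curve Q = k·P^(−e) and e > 1, cutting the price raises total spending, because buyers more than make up for the cut.

```python
def quantity_demanded(price: float, k: float = 1_000_000.0, e: float = 1.5) -> float:
    """Engines bought per year at a given price (hypothetical demand curve)."""
    return k * price ** (-e)


def total_spending(price: float) -> float:
    """Total money spent on engines per year at a given price."""
    return price * quantity_demanded(price)


# Halving the price multiplies quantity by 2^1.5 (about 2.83), so total
# spending rises by a factor of 2^0.5 (about 1.41) -- the Jevons effect.
assert total_spending(50.0) > total_spending(100.0)
```

With elasticity below 1 the assertion would fail: cheaper engines would then mean less total spending, which is exactly the case the paradox rules out for steam power.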
The more tragic example would be Eli Whitney's cotton gin. Up until its invention, slavery in the American South was actually something of a dying industry, as the classic tobacco fields had used up all their fertility (to oversimplify) on their past harvests. No tobacco, means no demand for slave labor, means no slaves. Up until some bright spark invents a way to process cotton with less labor, which is another way of saying that each unit of labor processes more cotton, which is another way of saying it's more *profitable* to employ each unit of labor, including slave labor...
(The same story applies to military history, if you start looking:
Gunpowder gave you more bang for your buck than bows & arrows... so paradoxically European states start spending *more* money on their militaries, since there's now more return on investment, as the biggest spenders start conquering all the rest.
Gatling thought his gun would make armies smaller, since 1 man could do the work of 10; it did so rather more literally than he intended, but also in response people just bought *more armies*, contributing, amongst other things, to WW1.
Eisenhower thought nukes would lead to reductions in military spending via his "New Look"/"Massive Retaliation" policy of replacing conventional firepower with nuclear; instead, both the US and the Soviets get caught in a nuclear arms race, because falling behind in spending is now *insanely* more dangerous than before.)
I personally would have found the article more persuasive if it had talked about such things, drawing supply & demand curves to buttress its claims about hypothetical futures instead of looking solely at the present, which seems to be a bit of a category error... I mean seriously, the article argues that people won't buy the robot butlers of tomorrow, because practically no one hires the human butlers of today. You don't even have to look to Economics to know that *that's* not the full story, just look at History where ordinary middle class people were more than happy to hire lots of butlers & servants back when they were cheap. If butlers were cheap again, wouldn't things be more like the past than the present?
Overall, I'm not particularly convinced. The article raises an interesting point, but can't back it up enough to convince the people it most needs to convince: people like me. And there's so much I haven't even talked about yet (e.g. how even WW2-level manufacturing technology allows a factory to self-replicate in about 6 months, so the invention of an automated factory could lead to growth rates of 300% a year; economically insignificant that is not, and LLMs seem to be leading to exciting/concerning progress in general-purpose robotics: https://robotics-transformer2.github.io/#:~:text=Below%2C%20we%20show%20a%20few%20videos & https://eureka-research.github.io/) -- I look forward to seeing what more you have to say about this though! I'm sure your next article on the subject will keep building on this one.
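The 300%-a-year figure in the parenthetical checks out arithmetically: a 6-month self-replication time means two doublings per year, so the stock of factories quadruples, which is 300% growth. A one-liner's worth of code, just to verify the claim:

```python
# A factory that builds a copy of itself every 6 months doubles the
# factory stock twice a year: 1 -> 2 -> 4.
factories = 1
for _ in range(2):  # two 6-month doubling periods in one year
    factories *= 2
annual_growth_pct = (factories - 1) * 100
assert annual_growth_pct == 300
```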