European epicures had a similar reaction to American fast food when it started showing up after the war. McDonald's et al. had distilled food down to its most basic taste components (salty, umami, sweet, sour, etc.) and rebuilt it into assembly-line products: burgers, pizza, fries, and sandwiches.
The epicures found it ghastly and alien, a sort of culinary uncanny valley of things that were made of the same ingredients as real food but were somehow not food anymore.
The French responded with Nouvelle Cuisine, real cooking with a lighter touch emphasizing freshness and delicate flavors. Other Euros went in similar directions with their national cuisines.
But it was all for naught: the Nouvelle movements fell by the wayside, and Americanized fast food spread relentlessly and conquered all. Despair loomed.
But then a strange thing happened...
Once fast food became inescapable, the yearning grew for ... more. Less processed, more connected to the maker and the eater. A generation that had known little else began to reclaim its culinary heritage. Little by little the old ways were rediscovered: rustic farmhouses were looted of their ancient cookbooks, and grannies were courted in hopes they'd share the old ways with the new generations. And the movement grew and grew.
So now we have both. If you want the cheapest and simplest food you can get, it's still everywhere. When you want the real thing you can go to real restaurants, and there's an unlimited variety of videos and books to show you how to make your own.
Ubiquitous fast food and ubiquitous fine food.
That's probably the best we can hope for with AI art.
I think this is already happening (insofar as cultural evolution accelerates with technological evolution). The truth is, though, that the people who crave non-commodified food and art are a small minority. So: what once was part of mass culture “evolves” to become niche culture. Repackaged as a sort of luxury good for people rich enough to care.
It may accelerate when people realize over-reliance on technology and AI will cost them their livelihoods, social circles, and even humanity. Gen Z is one of the most anxious, depressed, and least socially connected generations ever. So many of us recognize it’s a problem; I could text any of my friends right now about this issue and they’d all stress the importance of hanging out in real life, finding non-digital venues of entertainment, exercising, reading, drawing, etc. I have an idealistic, maybe misplaced faith that we’ll raise our kids differently, restricting their access to the internet and social media in a way that our parents didn’t. I know I will. Either we’ll rediscover our humanity, or we’ll lose it entirely.
I do hope you are right in your optimism. I had a very dark view of the future, as I focused on all the ways the technology can be misused. And it will be, make no mistake.
Now, however, I choose to figure out ways in which I can make good use of the technology. I look for ways of wielding technology as a powerful tool to be applied in a well-directed manner, instead of a hammer to hit people over the head. That seems to be the best outlook on life I've been able to come up with, as the evolution of technology will only be stopped if civilization collapses entirely - which I am very much against and hope not to see in my lifetime.
I think that’s a good way of looking at it. I try to find opportunities to use code for good, either through education, working for nonprofits/humanitarian organizations, or just not taking/applying to positions my morality objects to. There are so many amazing applications for technology, whether that be combatting environmental destruction, improving health outcomes for people, making education more accessible, etc. But technology without ethics or a moral framework is soulless and destructive. I think if individuals with certain technological expertise (or any expertise) leverage their skills for good, they can make a profound impact in their community—if only they seek out those opportunities. I’m not interested in changing the world, but fixing a lot of tiny problems affecting the people in my inner circle.
If you were reading the internet a decade or so ago, you would have seen lots of people complaining about “AirSpace” style and the way locality was being erased, as the same artisanal Japanese-Swedish coffeeshops, Korean taco places, and handcrafted beers were popping up everywhere. There’s some sense in which that represented the non-commodified natural foods winning, and some sense in which it’s just commodification again at a higher level.
I have already started. I was never completely against AI, but I have been aware of the pitfalls for as long as I have been using it.
In the beginning it was fun.
And I felt that by only picking the "best" pieces from myriads of prompts, and improving on them by hand, I was actually making some kind of "progress".
Then I realized that the stuff I was "making" - whilst esthetically pleasing to some extent - lacked something that was hard to put a finger on.
Eventually, I started finding it. And more.
Purpose, dynamics, intention, complexity of composition...
Now I can no longer look at AI-generated "art" - neither my own nor the slop with which the Internet has become saturated - without seeing the uncanniness. The "art, but not really" of it all.
As a result, I think I have become a better artist. At the very least I have become a better art appreciator.
And my appreciation for the real artists I already revered has increased exponentially.
I constantly think about the Edison tone tests: https://phonographia.com/Factola/Edison%20Tone%20Tests.htm
From 1915 to 1925, Edison sent out a team with a singer and a phonograph machine, and when the curtain went down, the audience couldn’t tell which one the song was coming from. No one today would be fooled by a phonograph, because we have learned all the important things that are missing. But they didn’t know to pay attention to those then. (Though the singer had to learn to minimize those while singing, to better fool the audience.)
You could make an argument that with the massive volume of commercial art produced in the last 100 years, and the fact that most of us see commercial art more often than actual art (even lowbrow), we have already experienced quite a bit of the phenomenon that Erik describes. I mean, what percentage of the total art produced by humanity is at the quality of Miyazaki? Probably less than one percent of one percent of one percent.
And a lot of AI art is going to just replace commercial art. It's a challenge for the fine artists, but possibly the end of the line for art JOBS. Don't get me wrong, that is a big problem - after all, a lot of real artists support themselves and gain experience with commercial art, and the rest of us will have to look at more boring and repetitive stuff. But it's a more manageable problem than us all turning into Westworld robots and saying that Starry Night "doesn't look like anything at all to me."
The proliferation of AI slop could be the beginning of the end of poptimism. Maybe even pop culture. Elite taste returns, maybe fueled by wealth inequality. Patronage makes a comeback.
Can you imagine being 13 now, growing up on phones and AI slop? I’m 24 and I feel a yearning for the old world of the Obama era, when phones were a thing we had but not everyone’s default activity, when my parents criticised my screen time. Today they are themselves screen zombies. If I didn’t have those memories and could only glimpse the old world through old media, I’d surely feel even more robbed.
Some people will be happy with the "Live Laugh Love" of fast food. It's cheap, comfy, and familiar, even if it's cliche-ridden.
And some people will prefer something truly novel. Of course, novelty fades over time. Truffle fries, anyone?
And yet others will prefer something traditional, but done exceptionally well, such as the hamburger you get at a fine steakhouse.
AI can do the first one all day long. It cannot do the second without being at least a little derivative. As I like to say, AI would never put that long chord from "A Day In The Life" at the end of a Beatlesesque song if John and Paul hadn't done it first.
And the third--well, that's all dependent on the skill of the craftsperson at his last with his tools. You're not going to get that kind of hamburger off the industrial-sized George Foreman grills they use at McD's.
Mr Miyazaki told FarOut Magazine, "I can't watch this stuff and find [it] interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all."
Miyazaki, in that famous video, was being shown 3D animation generated using an evolutionary algorithm that made creatures that walked by dragging their heads, and other such contorted visuals. He objected because it felt like a parody of people with physical deformities or handicaps. It's not known what he thinks of current genAI, so the quote is somewhat taken out of context.
Most AI-enthusiasts I've run into out in the world just can't seem to grasp that just because they can use AI effectively and with intent doesn't mean the general public will. That's what I fear.
I fear the semantic apocalypse for the everyday person, one who can't spot AI-generated content and who doesn't care to spot it. You spotted this apocalypse 6 years ago; most never will in their lifetimes. What does this mean for humanity downstream? Maybe we'll adapt, but what will we be in fifty years?
It's not enough for some of us to detach from AI, because most won't. Or care to know you even can. Just as some teens today don't know a world without social media and cannot fathom disconnecting. It's all they know. What happens when a generation comes around who has all their culture devalued and integrated with technology?
One AI enthusiast I spoke to told me that his ideal world was immortal life with a hyperreal VR headset. He was looking forward to such a future. Essentially a hedonistic pleasure cube. He plays video games 24/7 and has no friends.
They know that AI will destroy art, culture, beauty and humanity. But they don't care and even want it to happen.
No wonder there is a deep nihilism to the value of human life among certain Silicon Valley circles. The only ideal they can conceive of for themselves is a life so hollow it isn’t human at all.
No, these are precisely the people who need our compassion the most. Often they come to be this way for a variety of reasons, most out of their control.
Many of my tech- or video-game-addicted teenage clients were raised by well-intentioned but emotionally neglectful parents, especially in today's world, where it's harder to get ahead and properly attend to family. Where they used to find solace out in the world as latchkey kids, they were instead raised by the internet. Often, this was the only place they felt connected and seen. Makes sense they'd want to continue living in that world.
Wanting to destroy them, and viewing them as discardable, makes you no different than the people you're describing. Which is true anyway, as we're all human.
Oh, are you referring to the AI enthusiasts or the AI/Tech creators? My reply was concerning those enthusiasts, but I would agree that the technocrats in power at the big companies should absolutely lose their power and ability to lead. They've done enormous damage.
I’ve met many students in my degree who have a vested interest in simulating human personality and emotion with AI. It feels like a common sentiment among CS folks that humans are merely another type of machine that can be modeled with a complex enough network. There are legitimate benefits to certain applications of AI and NLP, e.g., using LLMs to analyze public opinion about certain policies. However, their widespread, uncritical use, particularly insofar as people try to indiscriminately apply them to contexts so deeply tied to our humanity—like art, social work, etc.—is troubling and spells disaster for our society, in my opinion.
Well said. Much of my clinical work involves tech-addicted teenagers and young adults. One universality I've noticed about their tech addictions is that they change their perceptions of what it means to be human, through things such as learned social anxiety, stunted initiation and maintenance of human contact/communication, existential dread for the future, viewing humans as inherently bad/wrong/evil, etc. Being terminally online disconnects you from being able to critically evaluate and use things such as AI. It sends you down the path of utilizing technology indiscriminately and puts blinders on how these tools integrate with our humanity. This leads to another universality I've noticed, which is tech-addicted people's lack of understanding of the history of humanity. That gap amplifies the notion in this population that humanity is doomed, without the recognition that humanity has been MUCH worse off than it is now.
When I describe what common living was like even just one-hundred years ago, which is nothing in historical terms, they usually become slack-jawed and in awe of the progress we've made. Sure we have plenty of problems to address, but we have it pretty good overall.
So it makes sense that if you're adjacent to or in the field of AI, and you view humans and their actions as inherently wrong or misguided, then what's the worry about AI going haywire? In some of their minds, it would be a correction, not a bug in the software. Almost like a technological nihilism.
Or maybe they are "the next stage in human evolution".
Go far enough back and your ancestors were fishlike. If they could see us now, would they be happy that we are what they were destined to evolve into? Despite our many areas of superiority, I think not.
(Doesn't make me happy though. I'm a cultural conservative, just the same as almost everyone else)
I think it's a version of evolution on steroids. Most evolutionary changes take 500-1000 years to be noticeable, but humanity has had a speedrun of this. Roughly 500 years ago we had the printing press! And now we have orders of magnitude more impressive and potent tech. What does that do to a species that is only now seeing the evolutionary effects of reading, after being not that far removed from damn near everyone being illiterate?
I suspect some of modern society's problems of skyrocketing anxiety and depression are due to the overwhelming evolutionary changes that are happening far too quickly for our minds to comprehend.
I agree with this 100%. We don’t know the full impact of long-term excessive screentime, social media use, iPhone addiction, etc., simply because it’s happening in real time. I’m Gen Z; I started using the Internet and iPhones daily when I was under 10. My friends with siblings in Gen Alpha (or who regularly interact with kids through their jobs) tell me that most kids nowadays are “iPad babies”. We know it’s an issue—some of us try to combat it, deleting social media, downloading screen-time apps, opting out of Gen AI, etc. But it’s hard to effectively make those changes when your devices and apps have been manufactured to be as addictive as possible, and society penalizes you for not using the technologies of the moment. Change starts at an institutional level—we can’t expect millions of teenagers to break their smartphone addictions without policies and regulations targeting social media and tech companies.
A cultural evolution maybe. And possibly soon a branching into two cultural currents. I am thinking of Ursula K. Le Guin's The Dispossessed here. I am not sure if I'd board that ship, but I can see the transhuman appeal.
I wasn't serious. Evolution is much too slow for that, but we now change our environment so fast that we can't even guess which are the best genes from the available pool long term. Evolution is just differential reproductive survival - it doesn't have to agree with our philosophical, aesthetic, moral or emotional preferences.
This take is IT. Every time I formulate arguments about AI, I try not to default to things like “art has soul” or “craft matters”, because for a huge chunk of the population it doesn’t matter. They don’t care—they want the fastest route to wherever they’re trying to go. What happens, though, when the snowball of insidious effects hits us years down the line?
I don't see why the oversupply of Ghibli memes would desecrate or denigrate the movies. I love the fact that we can Ghiblify anything, and I continue to love Spirited Away or Castle in the Sky. This has also been true forever, for every piece of great art ever created. We have hundreds of worse versions of LOTR, but it stands the test of time. How many detective novels have tried hard, while Sherlock Holmes and Poirot continue to be brilliant? Wordsworth and Pope and Whitman have been copied ad nauseam, but remain great. And how many Harry Potter copies have we seen?
There is no reason why increased supply or even pastiches by itself should cause meaning to lessen, or for it to become an apocalypse. Semantic satiation exists, but many objects do stand the test of time even after repeated exposure. I will not get bored of Ghibli movies, nor Pope, nor Poirot, nor LOTR. Our culture is more resilient than that.
I hope you're right! I still hold out hope that AI progress is slowing (I'll write more about this) just as I called last fall. In which case, AI models really do get relegated to tools and it seems culturally survivable to me (if still detrimental in many cases, but there are positives too).
But, in the nightmare scenario where it really is possible to generate a Studio Ghibli production at the click of a button, I can't see how human art retains its aura of meaning and doesn't get semantically satiated away. To me, that latter case brings to mind other *really* big technologically-driven changes in history that the old culture didn't survive. E.g., the printing press gave ordinary people the ability to read the Bible for themselves. Huge change. This in turn led to the disenchantment with Catholicism, and that schism drove the wars and politics of Europe and, in many ways, the entire world for hundreds of years! So we know, historically, that collective disenchantments due to familiarity can indeed be extremely powerful and sweeping. I don't think human art still gets made except in niche circumstances, and with minimal cultural sway, in a case where AI really is just as good and a million times cheaper and faster.
Yes, the possible future where we really can, at the click of a button, make the wonders in our minds real can and will have sweeping consequences. I do think the human “demandscape” is more variegated than that, too variegated to be so easily satisfied. But post-singularity discussions, of which this is an offshoot, are notoriously prone to such ‘divide by zero’ errors.
The printing press analogy is a really good one, especially in terms of the societal turmoil unleashed - and it's worth extending it to point out that Catholicism is still a thing, and cathedrals are still being built (e.g. Gaudi's Sagrada Familia in Barcelona) over 500 years later. The Industrial Revolution also automated textile manufacturing on a massive scale, but people can still sew and hand-make clothes, and multitudes of people enjoy doing so today.
Human art will still be valuable to the artist (as a means of self-discovery and self-expression) and to other humans (as a means of understanding other humans’ experience and perspectives).
The primary threat is to the human ability to monetise or gatekeep art, which will of course be painful to those reliant on doing so, but possibly for the best in the long run.
"Gatekeep art" meaning someone who has spent a life developing various artistic skills...vs someone who uses tech, which has simply stolen and condensed these tools/skills from other artists, to "create" new images/music/"art"...
Exactly. Anyone who goes through the effort of creating something with their own hard-earned skill deserves to be able to "gatekeep" their creations. Like the songwriters who won't allow their songs to be played at political events they oppose, for instance.
I don't get this. If generative AI was able to do high-quality work while giving the user precise control we would be having a completely different discussion. We could discuss its benefits and issues, talk about where it is applicable and so on.
From my perspective, the core problem with this technology is precisely that it doesn't actually produce good images or good writing or good code, and there seems to be no pathway to truly "fix" it without doing something completely different. Because of the sunk costs, we are unlikely to do anything different any time soon. Therefore, what we get is slop. Lots of it. Our society wastes hundreds of billions of dollars to make this slop more palatable, but in the end it will be cheaper to adapt people to slop, rather than the other way around. That's the truly scary part.
We are spending hundreds of billions of dollars on building more "powerful" slop generators, while this money could simply be spent on making whatever we actually need. For example, instead of "ghiblifying" a photo, you could send it to an art student to use as an exercise and get a unique and likely far better rendering of it in return. However, our communication systems suck too much to make this easy. Instead of improving our communication systems, we invest in a slop generator. It's a technological death spiral.
If AI progress continues, hopefully at least it will become prohibitively expensive to generate a feature film. Then maybe there will be a few AI production companies which release a movie every few years which has cultural significance. That might be tolerable.
Look, anytime something special and uncommon becomes a ubiquitous commodity, the meaning that people derived from its specialness per se is drained out of it to some degree. People feel that as a loss.
Probably has something to do with the concept of sacredness and why it’s so compelling to humans, despite it being a social invention.
AI is proving to be very good at eviscerating many social fictions that people derive meaning from. And who’s to say that’s a good thing?
This. We have to differentiate derivatives and the base artifact.
Social media and meme cycles incentivize rapid “creation,” but the only kind of creation this allows for is remixing/riffing/commenting on root artistic and cultural events (faster, coarser, and with built-in distribution). But transformative art and culture have only ever been created painstakingly by a small fraction. And their work embodies more beauty and truth, and therefore has more staying power. Derivatives can be spun up on a whim but have a short half-life. Great original works are timeless.
These are different games, mind spaces, and weights.
A series of questions I’d ask:
- Isn’t this the natural cycle of art and culture? Has this historically disincentivized or devalued creation? And if the argument is that this is accelerating, is there *really* going to be a rate of change or volume of derivatives that drives down the value of art?
It's a biological effect based on exposure and desensitization. The semantics attached to Ghibli's style are likely to fade away when everyone's exposed to tens of thousands of Ghibli memes many of which are pastiche or even inversions of the fundamental aesthetic.
You are not going to be exposed to thousands of versions of LOTR in a handful of days. If you were, the meaning would be drained from LOTR too.
The evidence is against this assertion so far. I guess, sure, there could be some Clockwork Orange equivalent where it's too much, but this is far from that.
thank you for writing this. Studio Ghibli is magical partly because we can see how dynamic and thoughtful every frame was, and how labor intensive it was for Miyazaki to make. There is something, a pulse. Call it aura, call it magic: things that are made from a reservoir of attention also hold our attention. The first few images I saw on X of the ghiblification of photos were delightful, but they fell flat really quick. They lack the fundamental thing that makes Miyazaki's work always worth returning to: the love of a craft, the attention and soul infused into it.
Agreed. There is an indefinable but still tangible… I want to call it presence, I think, in Studio Ghibli art. Something soft, powerful, Real. It’s a complex concept, authenticity, much-debated in my field and used, of course, to market a whole load of stuff - which is in itself the antithesis of authenticity. As is “real”. But this is not Real. And personally that is what I want, and it’s an intrinsic part of the Studio’s appeal. I’ll take watching the utter perfection of Princess Mononoke, Totoro and a few others once a year or so over a daily churn of Ghiblification every time, and I’m happy to die on this hill.
I truly believe everyone will eventually feel this way. Even the most ardent enjoyers of the Ghiblification. I don't think our emotions are made to endure this level of desecration.
The description of the crowd scene in this piece is misleading: it took 15 months to make a complex animated crowd scene with moving parts fulfilling a complex artistic purpose, one of the major points being *distinct* characters. A simple drawing of a crowd of same-face characters looking the same way does not compare; it's beyond apples and oranges.
That passage keeps this piece in the "too easily impressed" category of AI writing that fails to comprehend the level of human work being done that the AI is barely mimicking on a surface level.
I myself debated exactly how much detail to go into for the comparison. I had a few extra lines describing why I think it's bad, but they got cut for flow. To me, the main thing with AI art is that: this is better than I expected, the time differential is absolutely insane, and today is, as they say, the worst the tech will ever be. So this piece is about my deep worry for the future, one that I see beginning signs of, so I'm writing from that perspective. But maybe it won't come true.
Broadly though, I agree with you. I've written before on how people underestimate the degree to which artistic excellence is just the last little few percentage points. The differentiation between Miyazaki and a bottom-tier animator is subtle but profound. I think this matters for AI art: because only the best of the best really matters, AI still hasn't "closed the gap" and the gap might be way larger than people think, simply because so much of successful art is in those final few percentage points of "is this better than that?"
One problem is that when only the best of the best are good enough, the best of the best become worse than they once were. I mean, how do you get "the best of the best"? Lots of people start out, some subset of them get good, or even very good, and a tiny fraction (it's never possible to tell in advance who those people will be) become the best of the best. If "very good" is worthless, and only "the best of the best" is valuable, then far fewer people even try, meaning that those who rise to the top won't be as good as the masters of old. At which point, you might as well just let AI do the job.
Which reminds me of an article that I read years ago, back when the late English queen was still alive and well. The author was lamenting that there were so few good portraits of the queen, despite the fact that she sat for a portrait at least once a year. He proposed all sorts of explanations for this, but the one that stuck with me was that, as a result of photography, fewer people ever sat for a portrait, and so there were far fewer portraitists, and they got less training than the masters of old, and so even the best of the best were just not all that good. You gain photography, you lose top-notch portraitists. Likewise, you gain AI, and you lose top-notch animators. And there's little reason to assume this only goes for animators.
With this particular 15-month example, most people watching didn't really care about or notice this painstaking scene; it's almost perfectly unflashy, almost just there to catch the eye of the technical class that would appreciate it, and then it happened to get Ghibli a little extra PR with an informative article down the line that we could all appreciate.
So in a sense it was already unnecessary, and already replaceable by humans doing a lower-effort version of this sequence that would have a 0.1% effect on audiences' experience of the movie. Which is to say, Ghibli would still do this even if AI could replace it, and we'd still be reading this article about it 10 years later, and think about it in a similar way.
I certainly worry about future art, but I'm also not very bowled over by today's art. If we are making the last human art ever, it seems like we really hit a slump in the 90s and wasted much of the past 30 years. But I think this particular example undermines your point because it is in a sense an example of Ghibli's excess, energy put towards something for the sake of it, meanwhile the last time Miyazaki really directed well was Spirited Away. The 15 months basically did not have to be 15 months and I think it was a bad decision to make it that long. But even if they had done something more reasonable, Ghibli is already a kind of decadent studio that exists as a cultural artifact pointing to their better days.
Basically I don't see any humans making another Spirited Away or Only Yesterday, and the studio exists as more or less a kind of modern cathedral building tributes to the murmurings of a once-great ideas guy, so they spend way too long on stuff that is only becoming less and less relevant. Aka they have 99 problems and AI is not one of them. We should be looking at where actually good art is coming from these days to find something to actually worry about.
"There it was: that recognizable forced cadence, that constant reaching for filler, that stilted eagerness." YES YES sorry to shout, but this is a sublimely accurate description of the AI "tells" and I thank you. Unfortunately, how many folks who don't write or edit for a living will catch them?
The apocalypse will resemble bankruptcy in that it will happen gradually, and then all at once. It is both morbid and sad; I think we are all laughing in response to Ghiblification so that we don't cry.
That's an understandable take, but another way to look at it, which could be very good, is that it will cheapen all things digital and almost force a return to the material world, where we can see, interact, build, and enjoy real things. Maybe live storytelling will even make a comeback. Maybe it will become so impossible to tell what is real vs. fake online that people will go back to talking to one another in person. Not getting worked up over something they just saw flash by online, but instead waiting to form an opinion on matters they've actually experienced first hand.
I'm a playwright and have heard for over a decade that the increasing digitalization of life will mean people craving live experiences again. Not only has this not happened but the theatre is losing audiences rapidly...
I hear you but technology 10 years ago is not even in the same ballpark as what we have today especially when it comes to AI. Hell even the AI that was around just 2 years ago doesn't come close to the capabilities today. I think we're firmly on the exponential curve of tech right now. Things are changing and changing rapidly.
And maybe it doesn't change people's behavior at all. Who knows. I know I personally don't believe anything I read or see online. I'm a software dev and I'll take being in nature, getting my hands dirty actually building something in the garage, or reading a real book any day over what's online.
I think a lot of people probably feel this way, especially in tech. Staring at buggy code all day made me realize how much I enjoy going out in nature. Constantly tuning into social media made me see how valuable in-person hangouts with my friends were. Regardless of what happens or doesn’t happen with emerging technologies, we still have our humanity, and that’s worth protecting.
When I'm on a screen I feel like I'm working, when I don't have to be I am not. 15 years ago I didn't feel this way. Socializing online feels like work, while socializing offline feels harder and harder as I get older.
Would be cool if, when using the term Semantic Apocalypse, you gave a tip of the hat to Scott Bakker who coined the term. That guy really got beat up over the past few years, his writing is excellent and he deserves some acknowledgement for the ideas he developed.
Is there any good explainer for Bakker that focuses on how LLMs lack conscious intentionality and therefore their products don't have semantic content compared to humans (my concern)? I've only seen stuff about neuromodulation and capitalism, which seems different. I can't say I fully understand the thesis (but I might disagree with it, since neuroscience has had almost no impact yet on meaning, imo).
I'm reminded of the scene in 1984 where Winston and Julia hear a woman singing a machine-made song while hanging laundry. The song itself is soulless, but the woman is singing it without self-consciousness and imbues it with humanity anyway.
It's not that I think the machine-song isn't a demeaning of the art it fed on, but I think when you feed anything through a human it can become art again. The machines are going to make a lot more goo and slop, but I don't worry about the human capacity -- compulsion even -- for turning whatever they encounter into something meaningful again.
What a great post. Semantic satiation is a fascinating concept, including in this context. I've done this and it's true -- you can temporarily make a word lose its connection with its meaning via repetition in succession.
This Ghibli desensitization reminds me of the (probably obvious) currency inflation metaphor. By counterfeiting masses of facsimiles, the value of all the currency goes down, the counterfeit bills and the real bills both.
European epicures had a similar reaction to American fast food when it started showing up after the war. McDonald's, et al, had distilled food down to it's most basic taste components (salty, umami, sweet, sour etc.) and rebuilt into assembly line products -burgers, pizza, fries and sandwiches.
The epicures found it ghastly and alien, a sort of culinary uncanny valley of things that were made of the same ingredient as real food but were somehow not food any more.
The French responded with Nouvelle Cuisine, real cooking with a lighter touch emphasizing freshness and delicate flavors. Other Euros went in similar directions with their national cuisine.
But it was all for naught, all the Nouvelle movements went by the wayside, Americanized fast food spread relentlessly and conquered all. Despair loomed.
But then a strange thing happened. ..
Once fast food became inescapable, the yearning grew for ... more. Less processed, more connected to the maker and the eater. A generation who had known little else began to reclaim their culinary heritage. Little by little the old ways were rediscovered, rustic farmhouses were looted of their ancient cookbooks, grannies were courted, hoping they'd share the old ways with the new generations. And the movement grew and grew.
So now we have both. If you want the cheapest and simplest food you can get, it's still everywhere. When you want the real thing you can go to real restaurants, and there's an unlimited variety of videos and books to show you how to make your own.
Ubiquitous fast food and ubiquitous fine food.
That's probably the best we can hope for with AI art.
I think this is already happening (insofar as cultural evolution accelerates with technological evolution). The truth is, though, that the people who crave non-commodified food and art are in the small minority. So: what once was part of mass culture “evolves” to become niche culture. Repackaged as a sort of luxury good for people rich enough to care.
It may accelerate when people realize over-reliance on technology and AI will cost them their livelihoods, social circles, and even humanity. Gen Z is one of the most anxious, depressed, and least socially connected generations ever. So many of us recognize it’s a problem; I could text any of my friends right now about this issue and they’d all stress the importance of hanging out in real life, finding non-digital venues of entertainment, exercising, reading, drawing, etc. I have an idealistic, maybe misplaced faith that we’ll raise our kids differently, restricting their access to the internet and social media in a way that our parents didn’t. I know I will. Either we’ll rediscover our humanity, or we’ll lose it entirely.
I do hope you are right in your optimism. I had a very dark view of the future, as I focused on all the ways the technology can be misused. And it will be, make no mistake.
Now, however, I choose to figure out ways in which I can make good use of the technology. I look for ways of wielding technology as a powerful tool that should be applied in a well - directed manner, instead of a hammer to hit people over the head. It appears that that is the best outlook on life that I've been able to come up with, as the evolution of technology will only be stopped if civilization collapses entirely - which I am very much against and hope not to see in my lifetime.
I think that’s a good way of looking at it. I try to find opportunities to use code for good, either through education, working for nonprofits/humanitarian organizations, or just not taking/applying to positions my morality objects to. There are so many amazing applications for technology, whether that be combatting environmental destruction, improving health outcomes for people, making education more accessible, etc. But technology without ethics or a moral framework is soulless and destructive. I think if individuals with certain technological expertise (or any expertise) leverage their skills for good, they can make a profound impact in their community—if only they seek out those opportunities. I’m not interested in changing the world, but fixing a lot of tiny problems affecting the people in my inner circle.
If you were reading the internet a decade or so ago, you would have seen lots of people complaining about “Airspace style”, and the way locality was being erased, as the same artisanal Japanese-Swedish coffeeshops and Korean taco places and handcrafted beer was being made everywhere. There’s some sense in which that represented the non-commodified natural foods winning, and some sense in which it’s just commodification again at a higher level.
I have already started. I was never completely against AI, but I have been aware of the pitfalls for as long as I have been using it.
In the beginning it was fun.
And I felt that by only picking the "best" pieces from myriads of prompts, and improving on them by hand, I was actually making some kind of "progress".
Then I realized that the stuff I was "making" - whilst esthetically pleasing to some extent - lacked something that was hard to put a finger on.
Eventually, I started finding it. And more.
Purpose, dynamics, intention, complexity of composition...
Now I can no longer look at AI generated "art" - neither my own or the slop which the Internet has become saturated with - without seeing the uncanniness. The "art, but not really" of it all.
As a result, I think I have become a better artist. At the very least I have become a better art appriciator.
And my appreciation for the real artists I already revered, have increased exponentially.
I constantly think about the Edison tone tests: https://phonographia.com/Factola/Edison%20Tone%20Tests.htm
From 1915-1925, Edison sent out a team with a singer and a phonograph machine, and when the curtain went down, the audience couldn’t tell which one the song was coming from. No one today would be fooled by a phonograph, because we have learned all the important things that are missing. But they didn’t know to pay attention to those then. (Though the singer had to learn to minimize those while singing, to better fool the audience.)
You could make an argument that with the massive volume of commercial art produced in the last 100 years, and the fact that most of us see commercial art more often than actual art (even lowbrow), we have already experienced quite a bit of the phenomenon that Erik describes. I mean, what percentage of the total art produced by humanity is at the quality of Miyazaki? Probably less than one percent of one percent of one percent.
And a lot of AI art is going to just replace commercial art. It's a challenge for the fine artists, but possibly the end of the line for art JOBS. Don't get me wrong, that is a big problem - after all, a lot of real artists support themselves and gain experience with commercial art, and the rest of us will have to look at more boring and repetitive stuff. But it's a more manageable problem than us all turning into Westworld robots and saying that Starry Night "doesn't look like anything at all to me."
Thank you for this, it gave me so much hope.
The proliferation of AI slop could be the beginning of the end of poptimism. Maybe even pop culture. Elite taste returns, maybe fueled by wealth inequality. Patronage makes a comeback.
Can you imagine being 13 now, growing up on phones and AI slop? I’m 24 and I feel a yearning for the old world of the Obama era, when phones were a thing we had but not everyone’s default activity, when my parents criticised my screen time. Today they are themselves screen zombies. If I didn’t have those memories and could only glimpse the old world through old media, I’d surely feel even more robbed.
Love this 🙏
Some people will be happy with the "Live Laugh Love" of fast food. It's cheap, comfy, and familiar, even if it's cliche-ridden.
And some people will prefer something truly novel. Of course, novelty fades over time. Truffle fries, anyone?
And yet others will prefer something traditional, but done exceptionally well, such as the hamburger you get at a fine steakhouse.
AI can do the first one all day long. It cannot do the second without being at least a little derivative. As I like to say, AI would never put that long chord from "A Day In The Life" at the end of a Beatlesesque song if John and Paul hadn't done it first.
And the third--well, that's all dependent on the skill of the craftsperson at his last with his tools. You're not going to get that kind of hamburger off the industrial-sized George Foreman grills they use at McD's.
Thats a great response
One of the most encouraging and hopeful analogies for AI art I've heard!
Do you happen to have the names of any of these epicures ? Preferably french, I would love to read what they were saying.
Mr Miyazaki told FarOut Magazine, "I can't watch this stuff and find [it] interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all."
Miyazaki, in that famous video, was being shown 3d animation generated using an evolutionary algorithm that made creatures that walked by dragging their heads, and such tortuous visuals. He objected because it felt like a parody of people with physical deformities or handicaps. It's not known what he thinks of the current genAI but it's somewhat taken out of context.
Most AI-enthusiasts I've run into out in the world just can't seem to grasp that just because they can use AI effectively and with intent doesn't mean the general public will. That's what I fear.
I fear the semantic apocalypse for the every day person, one who can't spot AI generated content and who doesn't care to spot it. You spotted this apocalypse 6 years ago, most never will in their lifetimes. What does this mean for humanity downstream? Maybe we'll adapt, but what will we be in fifty years?
It's not enough for some of us to detach from AI, because most won't. Or care to know you even can. Just as some teens today don't know a world without social media and cannot fathom disconnecting. It's all they know. What happens when a generation comes around who has all their culture devalued and integrated with technology?
One AI enthusiasts i spoke to told me that his ideal world was immortal life with a hyperreal VR headset. He was looking forward to such a future. Essentially a hedonistic pleasure cube. He's playing video games 24/7 and has no friends.
They know that AI will destroy art, culture, beauty and humanity. But they don't care and even want it to happen.
They are anti-human.
No wonder there is a deep nihilism to the value of human life among certain Silicon Valley circles. The only ideal they can conceive of for themselves is a life so hollow it isn’t human at all.
We have to destroy these “people”
No, these are precisely the people who need our compassion the most. Often they come to be this way for a variety of reasons, most out of their control.
Many of my tech or video game addicted teenage clients were raised with well-intentioned but emotionally neglectful parents. Especially in today's world, where its harder to get ahead and properly attend to family. Where they used to find solace out in the world as latchkey kids they were instead raised by the internet. Often, this was the only place they felt connected and seen. Makes sense they'd want to continue living in that world.
Wanting to destroy them, and viewing them as discardable, makes you no different than the people you're describing. Which is true anyway, as we're all human.
I would have compassion for them were they out of power
Oh, are you referring to the AI enthusiasts or the AI/Tech creators? My reply was concerning those enthusiasts, but I would agree that the technocrats in power at the big companies should absolutely lose their power and ability to lead. They've done enormous damage.
Both
I’ve met many students in my degree who have a vested interest in simulating human personality and emotion with AI. It feels like a common sentiment among CS folks that humans are merely another type of machine that can be modeled with a complex enough network. There are legitimate benefits to certain applications of AI and NLP, e.g., using LLMs to analyze public opinion about certain policies. However, their widespread, uncritical use, particularly insofar as people try to indiscriminately apply them to contexts so deeply tied to our humanity—like art, social work, etc.—is troubling and spells disaster for our society, in my opinion.
Well said. Much of my clinical work involves working with tech addicted teenagers and young adults. One universality I've noticed about their tech addictions is it changes their perceptions of what it means to be human through things such as learned social anxiety, stunted initiation and maintenance of human contact/communication, existential dread for the future, viewing humans as inherently bad/wrong/evil, etc. Being terminally online disconnects you from being able to critically evaluate and use things such as AI. It sends you on the path of utilizing technology indiscriminately and putts on the blinders on how these integrate with our humanity. This leads to another universality I've noticed, which is tech addicted people and their lack of understanding in the history of humanity. That gap in understanding amplifies the notion in this population that humanity is doomed without understanding humanity has been MUCH worse off than it is now.
When I describe what common living was like even just one-hundred years ago, which is nothing in historical terms, they usually become slack-jawed and in awe of the progress we've made. Sure we have plenty of problems to address, but we have it pretty good overall.
So it makes sense that if you're adjacent to or in the field of AI, and you view humans and their actions as inherently wrong or misguided, then what's the worry about AI going haywire? In some of their minds, it would be a correction, not a bug in the software. Almost like a technological nihilism.
Or maybe they are "the next stage in human evolution".
Go far enough back and your ancestors were fishlike. If they could see us now, would they be happy that we are what they were destined to evolve into? Despite our many areas of superiority, I think not.
(Doesn't make me happy though. I'm a cultural conservative, just the same as almost everyone else)
I think it's a version of evolution on steroids. Most evolutionary changes take 500-1000 years to be noticeable, but humanity has had a speedrun of this. Roughly 500 years ago we had the printing press! And now we have orders of magnitude more impressive and potent tech. What does that do to a species who is just now seeing the evolutionary effects from reading, after being not that far removed from damn near everyone being illiterate?
I suspect some of our problems in modern society of skyrocketing anxiety and depression are due to the overwhelming evolutionary changes that are happening far too quickly for our minds to comprehend.
I agree with this 100%. We don’t know the full impact of long-term excessive screentime, social media use, iPhone addiction, etc., simply because it’s happening real-time. I’m a Gen Z; I started using the Internet and iPhones daily when I was under 10. My friends with siblings in Gen Alpha (or who regularly interact with kids through their jobs) tell me that most kids nowadays are “iPad babies”. We know it’s an issue—some of us try to combat it, deleting social media, downloading screen-time apps, opting out of Gen AI, etc. But it’s hard to effectively make those changes when your devices and apps have been manufactured to be as addictive as possible, and society penalizes you for not using the technologies of the moment. Change starts at an institutional level—we can’t expect millions of teenagers to break their smartphone addictions without policies and regulations targeting social media and tech companies.
A cultural evolution maybe. And possibly soon a branching of two cultural currents. I am thinking of U K Le Guin's The dispossessed here. I am not sure if I'd board that ship but I can see the transhuman appeal.
I wasn't serious. Evolution is much too slow for that but, we now change our environment so fast that we can't even guess which are the best genes from the available pool long term. Evolution is just differential reproductive survival - it doesn't have to agree with our philosophical, aesthetic, moral or emotional preferences.
This take is IT. every time I formulate arguments about AI, I try not to default into things like “art has soul” or “craft matters” because for a huge chunk of the population it doesn’t matter. They don’t care—they want the fastest route to wherever they’re trying to go. What happens, though, when the snowball of insidious effects hits us years down the line?
I don't see why the oversupply of Ghibli memes would desecrate or denigrate the movies. I love the fact we can Ghiblify anything, and I continue to love Spirited Away or Castle in the Sky. This also has been true forever, for every piece of great art ever created. We have hundreds of worse versions of LOTR, but it stands the test of time. How many detective novels trying hard, while Sherlock Holmes and Poirot continue to be brilliant. Wordsworth and Pope and Whitman have been copied ad nauseum, but remain great. And how many Harry Potter copies have we seen?
There is no reason why increased supply or even pastiches by itself should cause meaning to lessen, or for it to become an apocalypse. Semantic satiation exists, but many objects do stand the test of time even after repeated exposure. I will not get bored of Ghibli movies, nor Pope, nor Poirot, nor LOTR. Our culture is more resilient than that.
I hope you're right! I still hold out hope that AI progress is slowing (I'll write more about this) just as I called last fall. In which case, AI models really do get relegated to tools and it seems culturally survivable to me (if still detrimental in many cases, but there are positives too).
But, in the nightmare scenario where it really is possible to generate a Studio Ghibli production at the click of a button, I can't see how human art retains its aura of meaning and doesn't get semantically satiated away. To me, that latter case brings to mind other *really* big technologically-driven changes in history that the old culture didn't survive through. E.g., the printing press led to the ability to read the Bible. Huge change. This in turn led to the disenchantment with Catholicism, and that schism drove the wars and politics of Europe and in many ways, the entire world, for hundreds of years! So we know, historically, that collective disenchantments due to familiarity can indeed be extremely powerful and sweeping. I don't think human art still gets made except in niche circumstances, and with minimum cultural sway, in a case where AI really is just as good and a million times cheaper and faster.
Yes the possible future where we really can, at the click of a button, make the wonders in our minds real can and will have sweeping consequences. I do think the human “demandscape” is more variegated than that, to be so easily satisfied. But post singluarity discussions, of which this is an offshoot, are notoriously prone to such ‘divide by zero’ errors.
The printing press analogy is a really good one, especially in terms of the societal turmoil unleashed - and it's worth extending it to point out that Catholicism is still a thing, and cathedrals are still being built (e.g. Gaudi's Sagrada Familia in Barcelona) over 500 years later. The Industrial Revolution also automated textile manufacturing on a massive scale, but people can still sew and hand-make clothes, and multitudes of people enjoy doing so today.
Human art will still be valuable to the artist (as a means of self-discovery and self-expression) and to other humans (as a means of understanding other humans’ experience and perspectives).
The primary threat is to human ability to monetise or gatekeep art, which will of course be painful to those reliant on doing so, but possibly for the best in the long run.
"Gatekeep art" meaning someone who has spent a life developing various artistic skills...vs someone who uses tech, which has simply stolen and condensed these tools/skills from other artists, to "create" new images/music/"art"...
Exactly. Anyone who goes through the effort of creating something with their own hard-earned skill, deserves to be able to "gatekeep" their creations. Like the songwriters who won't allow their songs to be played at political events they oppose, for instance.
I don't get this. If generative AI was able to do high-quality work while giving the user precise control we would be having a completely different discussion. We could discuss its benefits and issues, talk about where it is applicable and so on.
From my perspective, the core problem with this technology is precisely that it doesn't actually produce good images or good writing or good code, and there seems to be no pathway to truly "fix" it without doing something completely different. Because of the sunk costs, we are unlikely to do anything different any time soon. Therefore, what we get is slop. Lots of it. Our society wastes hundreds of billions of dollars to make this slop more palpable, but in the end it will be cheaper to adapt people to slop, rather than the other way around. That's the truly scary part.
We are spending hundreds of billions of dollars on building more "powerful" slop generators, while this money could simply be spent on making whatever we actually need. For example, instead of "ghiblifying" a photo, you could send it to an art student to use as an exercise and get a unique and likely far better rendering of in return. However, our communication systems suck to much to make this easy. Instead of improving our communication systems, we invest in a slop generator. It's a technological death spiral.
If AI progress continues, hopefully at least it will become prohibitively expensive to generate a feature film. Then maybe there will be a few AI production companies which release a movie every few years which has cultural significance. That might be tolerable.
Would love to read about your assessment of recent progress Erik
Look, anytime something special and uncommon becomes a ubiquitous commodity, the meaning that people derived from its specialness per se is drained out of it to some degree. People feel that as a loss.
Probably has something to do with the concept of sacredness and why it’s so compelling to humans, despite it being a social invention.
AI is proving to be very good at eviscerating many social fictions that people derive meaning from. And who’s to say that’s a good thing?
This. We have to differentiate derivatives and the base artifact.
Social media and meme cycles incentivize rapid “creation,” but the only kind of creation this allows for is remixing/riffing/commenting on root artistic and cultural events (faster, coarser, and with built-in distribution). But transformative art and culture have only ever been created painstakingly by a small fraction. And their work embodies more beauty and truth, and therefore has more staying power. Derivatives can be spun up on a whim but have a short half-life. Great original works are timeless.
These are different games, mind spaces, and weights.
A series of questions I’d ask:
- Isn’t this the natural cycle of art and culture? Has this historically disincentivized or devalued creation? And if the argument is that this is accelerating, is there *really* going to be a rate of change or volume of derivatives that drives down the value of art?
You're conflating being "inspired by" with copying
Not in this instance, no
You don't understand AI until you watch this: https://www.youtube.com/watch?v=1aM1KYvl4Dw
It's a biological effect based on exposure and desensitization. The semantics attached to Ghibli's style are likely to fade away when everyone's exposed to tens of thousands of Ghibli memes, many of which are pastiches or even inversions of the fundamental aesthetic.
You are not going to be exposed to thousands of versions of LOTR in a handful of days. If you were, the meaning would be drained from LOTR too.
The evidence is against this assertion so far. I guess, sure, there could be some Clockwork Orange equivalent where it's too much, but this is far from that.
Agreed! I think it boils down to creativity. There are artists who carve the lane and those who duplicate it (e.g. AI). That will never change.
Thank you for writing this. Studio Ghibli is magical partly because we can see how dynamic and thoughtful every frame was, and how labor-intensive it was for Miyazaki to make. There is something, a pulse. Call it aura, call it magic: things that are made from a reservoir of attention also hold our attention. The first few images I saw on X of the ghiblification of photos were delightful, but they fell flat really quick. They lack the fundamental thing that makes Miyazaki's work always worth returning to: the love of a craft, the attention and soul infused into it.
Art without work is just a meaningless image.
Agreed. There is an indefinable but still tangible… I want to call it presence, I think, in Studio Ghibli art. Something soft, powerful, Real. It’s a complex concept, authenticity, much-debated in my field and used, of course, to market a whole load of stuff - which is in itself the antithesis of authenticity. As is “real”. But this is not Real. And personally that is what I want, and it’s an intrinsic part of Studio Ghibli’s appeal. I’ll take watching the utter perfection of Princess Mononoke, Totoro and a few others once a year or so over a daily churn of Ghiblification every time, and I’m happy to die on this hill.
I truly believe everyone will eventually feel this way. Even the most ardent enjoyers of the Ghiblification. I don't think our emotions are made to endure this level of desecration.
The description of the crowd scene in this piece is misleading: it took 15 months for a complex animated crowd scene with moving parts fulfilling a complex artistic purpose, one of the major points being *distinct* characters. A simple drawing of a crowd of same-face characters looking the same way does not compare; it's beyond apples and oranges.
That passage keeps this piece in the "too easily impressed" category of AI writing that fails to comprehend the level of human work being done that the AI is barely mimicking on a surface level.
I myself debated exactly how much detail to go into for the comparison. I had a few extra lines describing why I think it's bad, but they got cut for flow. To me, the main thing with AI art is that: this is better than I expected, the time differential is absolutely insane, and today is, as they say, the worst the tech will ever be. So this piece is about my deep worry for the future, one that I see beginning signs of, so I'm writing from that perspective. But maybe it won't come true.
Broadly though, I agree with you. I've written before on how people underestimate the degree to which artistic excellence is just the last few percentage points. The differentiation between Miyazaki and a bottom-tier animator is subtle but profound. I think this matters for AI art: because only the best of the best really matters, AI still hasn't "closed the gap" and the gap might be way larger than people think, simply because so much of successful art is in those final few percentage points of "is this better than that?"
https://www.theintrinsicperspective.com/p/sorry-ted-chiang-humans-arent-very
One problem is that when only the best of the best are good enough, the best of the best become worse than they once were. I mean, how do you get "the best of the best"? Lots of people start out, some subset of them get good, or even very good, and a tiny fraction (it's never possible to tell in advance who those people will be) become the best of the best. If "very good" is worthless, and only "the best of the best" is valuable, then far fewer people even try, meaning that those who rise to the top won't be as good as the masters of old. At which point, you might as well just let AI do the job.
Which reminds me of an article that I read years ago, back when the late English queen was still alive and well. The author was lamenting that there were so few good portraits of the queen, despite the fact that she sat for a portrait at least once a year. He proposed all sorts of explanations for this, but the one that stuck with me was that, as a result of photography, fewer people ever sat for a portrait, and so there were far fewer portraitists, and they got less training than the masters of old, and so even the best of the best were just not all that good. You gain photography, you lose top-notch portraitists. Likewise, you gain AI, and you lose top-notch animators. And there's little reason to assume this only goes for animators.
With this particular 15-month example, most people watching didn't really care or notice much about this painstaking scene; it's almost perfectly unflashy, almost just there to catch the eye of the technical class that would appreciate it, and then it happened to get Ghibli a little extra PR with an informative article down the line that we could all appreciate.
So in a sense it was already unnecessary, and already replaceable by humans doing a lower-effort version of this sequence that would have a 0.1% effect on audiences' experience of the movie. Which is to say, Ghibli would still do this even if AI could replace it, and we'd still be reading this article about it 10 years later, and think about it in a similar way.
I certainly worry about future art, but I'm also not very bowled over by today's art. If we are making the last human art ever, it seems like we really hit a slump in the 90s and wasted much of the past 30 years. But I think this particular example undermines your point because it is in a sense an example of Ghibli's excess, energy put towards something for the sake of it; meanwhile, the last time Miyazaki really directed well was Spirited Away. The 15 months basically did not have to be 15 months, and I think it was a bad decision to make it that long. But even if they had done something more reasonable, Ghibli is already a kind of decadent studio that exists as a cultural artifact pointing to their better days.
Basically I don't see any humans making another Spirited Away or Only Yesterday, and the studio exists as more or less a kind of modern cathedral building tributes to the murmurings of a once-great ideas guy, so they spend way too long on stuff that is only becoming less and less relevant. Aka they have 99 problems and AI is not one of them. We should be looking at where actually good art is coming from these days to find something to actually worry about.
"There it was: that recognizable forced cadence, that constant reaching for filler, that stilted eagerness." YES YES sorry to shout, but this is a sublimely accurate description of the AI "tells" and I thank you. Unfortunately, how many folks who don't write or edit for a living will catch them?
The apocalypse will resemble bankruptcy in that it will happen gradually, and then all at once. It is both morbid and sad; I think we are all laughing in response to Ghiblification so that we don't cry.
That's an understandable take, but another way to look at it, which could be very good, is that it will cheapen all things digital and almost force a return to the material world, where we can see, interact, build, and enjoy real things. Maybe live storytelling will even make a comeback. Maybe it will become so impossible to tell what is real vs. fake online that people will go back to talking to one another in person. Not getting worked up over something they just saw flash by online, but instead waiting to form an opinion on matters they've actually experienced firsthand.
I'm a playwright and have heard for over a decade that the increasing digitalization of life will mean people craving live experiences again. Not only has this not happened but the theatre is losing audiences rapidly...
I hear you, but technology 10 years ago is not even in the same ballpark as what we have today, especially when it comes to AI. Hell, even the AI that was around just 2 years ago doesn't come close to today's capabilities. I think we're firmly on the exponential curve of tech right now. Things are changing, and changing rapidly.
And maybe it doesn't change people's behavior at all. Who knows. I know I personally don't believe anything I read or see online. I'm a software dev and I'll take being in nature, getting my hands dirty actually building something in the garage, or reading a real book any day over what's online.
I think a lot of people probably feel this way, especially in tech. Staring at buggy code all day made me realize how much I enjoy going out in nature. Constantly tuning into social media made me see how valuable in-person hangouts with my friends were. Regardless of what happens or doesn’t happen with emerging technologies, we still have our humanity, and that’s worth protecting.
And yet, here you are...
Yep, reading articles while I wait for a deployment.
You can rather be in nature than online and still have a job that pays your bills that requires you being plugged in. Life is nuanced.
When I'm on a screen I feel like I'm working, when I don't have to be I am not. 15 years ago I didn't feel this way. Socializing online feels like work, while socializing offline feels harder and harder as I get older.
Here we are.
Would be cool if, when using the term Semantic Apocalypse, you gave a tip of the hat to Scott Bakker, who coined the term. That guy really got beat up over the past few years; his writing is excellent and he deserves some acknowledgement for the ideas he developed.
Is there any good explainer for Bakker that focuses on how LLMs lack conscious intentionality and therefore their products don't have semantic content compared to humans (my concern)? I've only seen stuff about neuromodulation and capitalism, which seems different. I can't say I fully understand the thesis (but I might disagree with it, since neuroscience has had almost no impact yet on meaning, imo).
As far as I know he’s fallen off the map and all the writing I’m familiar with predated LLMs. You should write that piece. FWIW, I share your concern.
Bakker stopped writing around 2020, before LLMs were publicly available, but he touched on AI in these two articles:
https://rsbakker.wordpress.com/2015/01/29/artificial-intelligence-as-socio-cognitive-pollution/
https://rsbakker.wordpress.com/2017/08/30/on-artificial-belonging-how-human-meaning-is-falling-between-the-cracks-of-the-ai-debate/
Malkovich malkovich maaaaaaaalkovich
I'm reminded of the scene in 1984 where Winston and Julia hear a woman singing a machine-made song while hanging laundry. The song itself is soulless, but the woman is singing it without self-consciousness and imbues it with humanity anyway.
It's not that I think the machine-song isn't a demeaning of the art it fed on, but I think when you feed anything through a human it can become art again. The machines are going to make a lot more goo and slop, but I don't worry about the human capacity -- compulsion, even -- for turning whatever they encounter into something meaningful again.
Funny. My son always relentlessly picks Wall-E. No less dystopian, no less a silent masterpiece.
This was the first time I'd heard of Studio Ghibli so I guess I'm immune!
What a great post. Semantic satiation is a fascinating concept, including in this context. I've done this and it's true -- you can temporarily make a word lose its connection with its meaning via repetition in succession.
This Ghibli desensitization reminds me of the (probably obvious) currency inflation metaphor. By counterfeiting masses of facsimiles, the value of all the currency goes down, the counterfeit bills and the real bills both.