That's not exactly accurate. AI can also be used to do good things, help people be more productive, and produce good things. I can use AI to economically produce graphics and videos for my business; I can use AI to help me make products for my business; I can use AI to help me be faster, more efficient, and more productive. AI can be used as a positive for all manner of things. It's not “the greatest invention of the last five years: a machine that prints crap.” It's that for every good and productive thing that AI can be used for, it can also be used for an equal amount of crap.
However, that is how it is with anything man has produced. For all the good things the Internet is and can be used for, it can also be used to pull people into porn addictions, to steal people's money and intellectual property, and for any other nefarious thing you can come up with. As with everything else, it's not the tool; it's the human. I can use my steak knife to either cut my steak or stab someone to death. It's not the knife that is faulty in the latter; it is the human, and so it is with AI.
Did ChatGPT write this? And if not, do you not understand how uselessly, tendentiously reductionist "it's people, not technology" is? Nuclear fission can be used to generate electricity or construct atom bombs; therefore it's a good idea to give everyone their own nuclear missile, am I right?
"Nuclear fission can be used to generate electricity or construct atom bombs; therefore it's a good idea to give everyone their own nuclear missile"
That isn't remotely close to the same thing I wrote. I stand by what I said. The human is always the problem. Objects, or in this case, technology, are just tools. The human wielding the tool is either good or evil, and uses the tool accordingly.
Nuclear power, for obvious reasons, should be restricted from the hands of almost everyone. A myriad of other tools, however, such as guns, knives, cars, AI, the internet, etc., should be accessible to all but a very few.
If those who have access to those tools choose to misuse them or use them for evil purposes, then that is the fault of the individual, not the tool.
If you don't like that simple answer, or if you find it reductive, that's your problem, but it doesn't change the truth of the statement.
What are the "obvious reasons" that we should restrict nuclear power from the hands of almost everyone, Tony? Might it be because the consequences of misusing the tool are catastrophic to society, even if one might consider the tool itself morally neutral divorced from societal context? Do you know that we require people to demonstrate their competency before we allow them to drive a car, since a driver can cause injury or death by misusing a car? Or that most countries place stringent restrictions on gun ownership because a gun in the hands of the wrong person can result in mass murder? This whole article is replete with examples of how generative AI is a force multiplier for all sorts of horrible things in the hands of those who refuse to use it responsibly, so maybe, just maybe the prior art we apply to cars, guns, and nuclear weapons might apply here as well.
"What are the "obvious reasons" that we should restrict nuclear power from the hands of almost everyone"
Nuclear power is a different conversation. One needs special training and education to handle nuclear power technology; training and education that aren't available to the majority of people.
I've got just a little bit of experience with this. When I was an active-duty member of the United States Navy, serving aboard a nuclear-powered aircraft carrier (USS John C. Stennis, CVN-74), I was not allowed in the engineering spaces, and definitely not in the reactor spaces because I was not qualified and educated to be there, which was the correct answer.
"Do you know that we require people to demonstrate their competency before we allow them to drive a car, since a driver can cause injury or death by misusing a car?"
As an American driver, yes, I am aware of that. With rare exception, that training and education are available and easily obtained by the vast majority of American citizens.
"Or that most countries place stringent restrictions on gun ownership because a gun in the hands of the wrong person can result in mass murder?"
Same answer: with rare exception, that training and education are available and easily obtained by the vast majority of American citizens.
Having said that, I couldn't care less what other countries do in terms of gun ownership by their citizens. A disarmed citizenry is the reason many of them are in the state they're in.
I only care that in The United States we have the Second Amendment. We have the right to gun ownership enshrined in the Constitution.
The right to keep and bear arms is a slightly different thing because it is a check on government overreach and abuse.
There are currently 27 Constitutional carry states. In my opinion, it needs to be 50. The only gun control I'm in favor of is for convicted felons.
For everyone else, as we clearly see from our current government's behavior, we need more guns.
"This whole article is replete with examples of how generative AI is a force multiplier for all sorts of horrible things in the hands of those who refuse to use it responsibly"
That can be said about anything.
So you think only certain licensed people should have access to AI?
Who should those people be?
Who determines the criteria someone must meet to have access to AI?
Who determines those criteria? Who are the gatekeepers?
"For everyone else, as we clearly see from our current government's behavior, we need more guns."
Ah, so you're one of those crazy right-wing nutjobs. No wonder your thinking is so bizarre. But let me humor you one last time.
"I was not allowed in the engineering spaces, and definitely not in the reactor spaces because I was not qualified and educated to be there, which was the correct answer."
What a hypocrite you are. You readily acknowledge nuclear power is dangerous and that only qualified people should be allowed to wield it. You readily acknowledge that the restrictions the United States Navy places on its personnel, barring those who cannot be trusted or relied upon to use this technology correctly from working with it, are legitimate and necessary. You are well aware that bureaucrats came up with many of those regulations, technical as they are, seeing as how the Constitution says nothing about nuclear power, nor are politicians in the business of making policy at this level.
And yet you ask me these stupid questions:
"Who should those people be?
Who determines the criteria someone must meet to have access to AI?
Who determines those criteria? Who are the gatekeepers?"
As an American, you don't live in a dictatorship. As you certainly know, you live in a country governed by a government whose elected officials at least nominally represent the will of their constituents. So yes, you all should absolutely come to some sort of consensus as to what sort of regulation should be enacted in order to minimize the harms of AI technology to society.
The old adage "guns don't kill people..." is so dangerously naive and American, because people are not all good; on a bell curve of goodness, at any saturation level, roughly half of them fall on the bad side, and there are some wrong'uns out there. Give everybody on earth a loaded gun right now and watch the ticker, mate.
From an economic point of view, I would say that all of the "benefits" you spoke of are actually going to simply result in "removing" labour from the economy, and as a free-market capitalist you will be thrilled with all the delicious money you are making, but then in 20 years' time realise how much you just shrank your customer base by giving your money to the tech gods.
The big corporations are all racing after the shiny brass ring of first to market with generalised intelligence, because they will be able to stay at the top. But they will essentially replace all intellectual production with robots (this is probably more than 50% of the economy, if you really think about what can be replaced by AI).
How do we deal with removing 50% of the economy? (Does all this money flow to the GPT owners?) How do the overlords then deal with the rising population of unemployed? This is going to be full-whack anarchy; we can't all work at McDonald's, because nobody will be buying.
That is gun-toting, USA-style thinking right there. The issue is either black or it's white, no shades of nuance: either everyone has a gun or only the govt has a gun.
Why, yes, yes it is, and judging by the circumvention, end-arounds, and outright violations of The Second Amendment the United States government is currently engaging in, that is exactly the government's position.
My hope is that people's addiction to prestige will win out and chase them away from the AI internet, because an AI-infested internet is just not as good as "real" internet at conferring status on those who win at being online. We've seen this with people falling away from Twitter, for example. People will never fully be offline, but when everyone just assumes that all your followers/likes/subscribers are not real (and also that you yourself aren't real), then the same vanity that drew people to the internet will hopefully drive them away. Then we go back to having more local cultures and scenes.
Yeah. In 2021, before generative AI became big and we were talking more about the metaverse and crypto, I wrote (https://etiennefd.substack.com/p/reality-will-become-a-status-symbol) that reality was on its way to becoming a status symbol; people who could afford to travel to real places as opposed to seeing them in VR or whatever would be seen as higher status. I hadn't anticipated that virtuality would become even more full of crap than it already was, but now that this is apparent that prediction seems truer than ever.
This works for most intelligent and critically thinking people, but unfortunately we have a gaggle of people who are just looking for someone to worship, and these fundamentalists are in the majority.
Any kind of "away from the internet" will simply not be possible. It is already in your water supply, your neighbourhood, your work/living... virtually everything that touches your life. Eric's prognosis is so chilling that my comment below - about the eclipse of that old-hat thing called political/philosophical 'reality' - seems almost a side-issue. But here goes anyway:
AI will complete a process that has been underway for many decades now: the encirclement of ever-fragile Western Democracy by the goggle-eyed warriors of the Social Justice Religion. The difference being that AI will entrench the religion as a kind of cosmos - further argument or proselytising no longer necessary. And electoral pluralism? Well that delicate flower has been withering anyway... becoming just a kind of plaything....part of the media entertainment industry while a (now AI enhanced) techno-bureaucracy grinds on, constantly topped up by 'experts' emerging from its 'one-party' universities.
It seems to me that the best-case long-term scenario for escape from this lies in some kind of global electronic meltdown. And Après le Déluge? As I wrote here a while back: "Certainly some pessimism about 'progress' is mere grumpiness..... But at its best it is a wry observation – based on close observation of friends and enemies, family and colleagues, literature and ‘current affairs’ - that there are, and always will be, honesty and self delusion, real and faux expressions of generosity of spirit, bullies dressed up as champions of liberty...wise men and fools in other words." Après le Déluge, this fundamental human nature will hopefully reassert itself. https://grahamcunningham.substack.com/p/are-we-making-progress
And worst-case scenario? I just avert my mind from that one.
1. People will eventually get sick of AI-mediated interaction. They're still meeting in person after all.
2. Political trends seem unstoppable until they don't. Trump is on track to win the next election, and democracy may well have more to worry about than SJWs (who, to be clear, I do not like). There was a huge explosion of LGBT culture in Weimar. You know what came next.
Here is a project that leverages prestige signaling in order to improve content: https://medianudge.weebly.com/how-it-works.html. And besides that there are subscription services like substack and medium that can keep quality higher, despite AIs.
As far as the tragedy of the commons goes: a big-state, Hobbesian Leviathan solution like the Clean Air Act is hard to implement on the internet. It is easy to implement at the state level in the real world because factories that pollute are expensive, so most actors cannot build them (which keeps their number down), and, being expensive, they are easy to spot and monitor. Since AI-generated garbage is cheap to produce and expensive to monitor at large scales, we need a different strategy, and nature and Ostrom show the way: nested, subsidiary, self-monitoring "groups", or membrane-bound levels of organismality. Each group for our purposes has only a limited number of parts (which, at the lowest level, are people, some of whom use AIs), so it can bear the monitoring and enforcement costs of its own parts. This way, monitoring and enforcement can be more effective than anarchic monitoring of the whole internet (with its huge number of players/parts). It is similar to how cancer is monitored, and enforcement measures taken, first at the sub-cellular level (DNA repair), then the cellular level (apoptosis or killing by the immune system, maybe electrical signals à la Levin, though I'm not sure the latter happens in humans), then the organism level, by us taking measures to keep healthy or kill the cancer with medical interventions. There might be some differences with biological systems, though: there is not necessarily a synergy between different parts above the first level, which means there is no selective pressure to form higher levels, so they would have to be imposed from above (by a state, like Leviathan).
People have not fallen away from Twitter. There is data showing increasing engagement and users, and my personal experience corroborates this. I work in AI and do see obviously generated content in threads, but there are also good conversations in many areas.
An anecdote from military history might offer an instructive parallel. Early firearms weren't superior to the longbows they ousted from the battlefields of Europe. Quite the contrary, that first generation of guns were all around inferior weapons. They were less accurate, had less effective range, and a far slower rate of fire than the old fashioned longbow. A skilled archer was deadlier than any early gunner.
How then did firearms ever manage to supplant the bow in the first place? Because archers required intensive specialized training. They had to practice extensively almost from the time they learned to stand to be any good. When they were killed, they could be difficult to replace. Meanwhile, any idiot could become proficient with a gun within a few hours. Gunners replaced archers not because they were better, but because they were cheaper.
The analogy with internet content creators seems exact. An educated flesh-and-blood human might still draw a better picture or write a superior Sports Illustrated profile than a bot. But the human needs a salary and benefits. AI costs next to nothing. It doesn't even require appreciation. Humans, like archers, will be priced out of the market.
Guns, of course, eventually evolved into more lethal weapons than bows ever were. Bots, I fear, will become better artists and writers than humans could hope to be. With the accelerated pace of technological change, it might happen sooner than we think. I'm 53 and in poor health, but I'm still afraid I'll live to see it.
(Suggested reading: War Before Civilization: The Myth of the Peaceful Savage, by Lawrence H. Keeley)
Yes! But hey the great equalizers that were guns made pro soldiers and horsemen vulnerable to angry peasants. Our societies were freed not by the ideas of Montesquieu et al. but mass manufactured cheap guns. What would be the parallel here for art creation?
The toppling of media empires like Disney or Fox, where content creation happens at a much lower level: small teams of fewer than a dozen producing high-quality output, equal to or better than anything produced by hundreds today, allowing niche interests, local populations, etc. to have great, deep, and interesting media made and targeted specifically at them.
That would be nice! Though if it's to surrender power to a youtube- or amazon-like monopoly aggregator, that freedom will be short-lived. Contingent upon the creators teaming up to bring the aggregator under their control. Same fight as right now, after all.
It seems parents will soon catch on to this sad trend, and there will be reviewers who find quality human-made content parents can trust. That will probably happen broadly, as "human-made" will soon be a value-added category.
What we’re witnessing is the same thing that happened when tractors replaced many farm laborers and robots replaced many factory workers. These were generally positive developments for society as a whole.
Just as happened earlier, those directly affected by such changes will label the new tools as the work of Satan, while the vast majority of people who aren’t directly affected simply won’t care.
Substackers, do you care that your car was built by robots on an assembly line instead of being handcrafted piece by piece by individual humans? No, you don’t. You just want the best car at the lowest price.
Writing for a living was probably always a bad plan anyway, as doing so successfully requires one to mirror the group consensus and tell the audience what they want to hear, which isn’t really what writers should be doing.
Anyway, good idea or not, like it or not, AI is coming to the white collar world. Those who learn how to drive the new tractors will prosper, and those who don’t will be left behind. So it has always been.
"What we’re witnessing is the same thing that happened when tractors replaced many farm laborers and robots replaced many factory workers. These were generally positive developments for society as a whole."
Do you think all the people who killed themselves after being put out of work by automation in the early 20th century, or their families, would agree these were positive developments for society as a whole? The futurist and technocrat's "positive developments" always come at the expense of all life.
It is quite difficult not to conclude what seems obvious: that people like you must simply hate life, both the human beings whose suffering you disregard and discard as detritus in the way of "progress" and automation that gives the few the right to be free, and all the other animals that are killed in your endless extraction of resources, building of infrastructure, highways, and so forth.
Still, I'm not quite cynical enough to believe that people who have been deceived into such wild disregard for all life are truly malevolent. Have you not ever considered that with some thoughtfulness and effort, it might be possible to have this progress ethically with guardianship of life rather than its destruction? It's not as if there is some natural law in physics that states everyone must suffer and all animals including our own species must be trod upon and die for humanity to advance.
"Just as happened earlier, those directly affected by such changes will label the new tools as the work of Satan, while the vast majority of people who aren’t directly affected simply won’t care."
The level of technological sophistication in any society is not linear and guaranteed to progress, and it can and has regressed many times in the past. This is obvious to anyone with a basic understanding of history.
Regardless, what kind of morality is it you have that you think it's meaningful to say people who aren't directly affected by something won't care about the consequences, and therefore the harm to the affected doesn't matter? That attitude is beyond callous, what utter moral nihilism and sophist rubbish!
On top of that, in this case where a very significant percentage of the entire human population is affected—all those who use the Internet—how does it even make sense? What, it doesn't matter how the Internet turning into a garbage dump affects us, because there are uncontacted tribes like the Sentinelese out there who don't care?
"Substackers, do you care that your car was built by robots on an assembly line instead of being handcrafted piece by piece by individual humans? No, you don’t. You just want the best car at the lowest price."
You must have some sort of problem, maybe with empathy, if you think everyone thinks the exact same way you do, and I don't mean that as an insult; it's genuinely baffling. Yes, there are indeed many of us who value and care about human craftsmanship, who go out of our way to pay for and support it, and you can find us everywhere. If there were cars built entirely by humans, there would be people willing to pay a premium for them even if they aren't the lowest in price.
"Writing for a living was probably always a bad plan anyway, as doing so successfully requires one to mirror the group consensus and tell the audience what they want to hear, which isn’t really what writers should be doing."
Speaking absolutely, the number of writers who make a living without preaching to the choir is actually quite high, and is only small relatively, yet you act as if it's a requirement to write for a living when it clearly is not and shouldn't be.
"Anyway, good idea or not, like it or not, AI is coming to the white collar world. Those who learn how to drive the new tractors will prosper, and those who don’t will be left behind. So it has always been."
Nah, not to a significant degree anytime soon anyway, which makes this about as meaningful a statement as saying we're going to travel through interstellar space and explore Alpha Centauri any day now. It's nothing but hype, marketing, and public relations (recall the latter is synonymous with propaganda, of course, since Edward Bernays changed the term after WWI).
LLMs produce garbage and are incapable of even simple tasks, and their usefulness is vastly overstated and hyped up by the market to attract investors, who will probably realise they've been scammed soon enough and retaliate, because it is illegal to lie to investors. They have problems which are fundamental and cannot be fixed, because hallucinations are unsolvable and neural nets themselves are incapable of understanding that A = B ∴ B = A, as research both more recent and going back many years shows.
Soon people will realise there's almost no real market for LLMs other than for producing garbage and propaganda. They're useful for businesses and for states, but not 99% of human beings. There will be pushback. The market for them will collapse, and AI as a whole will suffer for the garbage of LLMs and what companies like OpenAI have done. There will probably be another AI winter that follows.
That is an obvious conclusion, or at least a high-probability possibility, to anyone paying attention who reads what critical AI researchers like Gary Marcus write, as opposed to those who prefer to read the technopriests like Yudkowsky and Bostrom, who don't even work with the software itself, and just have religious beliefs about AI replacing humans and think AI will let them live forever after they die, based on theory that's not of the scientific support and has nothing to back it or its predictions up in reality. You sound like one of their disciples; they are everywhere.
What, there are no replies to this post?! Thank you Emma, this was brilliantly argued and beautifully written - and above all, passionate, meaningful and human. All things that AI, as we understand it, can never be.
What?! No, that's... kind of an *awful* post: lots of errors, lots of needless invective, and some odd turns of phrase; I'd argue that it is, therefore, *neither* brilliantly argued *nor* beautifully written.
To wit:
→ "Do you think all the people who killed themselves after being put out of work by automation in the early 20th century, or their families, would agree these were positive developments for society as a whole?"
Mechanization of food production has saved many, *many* more lives than it cost — there was hardly an epidemic of suicides, but even if there were, take a look at how many used to regularly die from famine — and I think you'd be real hard-pressed to find anyone who wants to go back to horse-drawn plows.
(Also, if you've ever done that sort of thing personally, you'd know it's absolutely miserable. History is my main love, and so I often go and try to find ways to experience life as it was at some period or another; often it's fun, but non-mechanized farm labor is absolutely back-breaking.)
(Also also, see Rob's reply below: there was a shift in jobs, not a net loss of jobs. This likely won't be true in the case of AI, though, unfortunately.)
→ "It is quite difficult not to conclude what seems obvious: that people like you must simply hate life"
Come on. This is just hysterical ad-hominem posturing. When you find yourself having to assert that your interlocutor is an inhuman monster that loves evil and hates good, you've probably forgotten to take your meds again.
→ "The level of technological sophistication in any society is not linear and guaranteed to progress, and it can and has regressed many times in the past. This is obvious to anyone with a basic understanding of history."
Non-sequitur.
→ "If there were cars built entirely by humans, there would be people willing to pay a premium for them even if they aren't the lowest in price."
1.) Irrelevant to the point, which is that the overwhelming majority of people do not care.
2.) But I also doubt it's even really true: where, then, are the hand-built cars being marketed? Premium hand-crafted goods exist when there's a market for them. Doesn't appear to be one for cars.
→ "Speaking absolutely, the number of writers who make a living without preaching to the choir is actually quite high, and only small relatively"
So... just as Emma's opponent initially said, then?
→ "Nah, not to a significant degree anytime soon anyway [...] LLMs produce garbage and are incapable of even simple tasks, and their usefulness is vastly overstated and hyped up by the market to attract investors, who will probably realise they've been scammed soon enough and retaliate"
Let us make a pact to return to this comments section at whatever time you deem that this will have become obvious, and see if it is the case. If we can find a judge we both trust, and a more concrete formulation, I'd probably be willing to bet money on it not being so.
(The part about "incapable of even simple tasks" is already demonstrably untrue: you can create e.g. a Reddit clone using nothing but asking the AI to code up each step for you.)
(Or: I asked ChatGPT for some broad summaries of linguistic topics and data that isn't easily Google-able, just to test it, and it correctly explained things such as the perceived valence of different phonemes and their frequency by two different metrics in various languages, the main measures of linguistic conservatism and their manifestations in various languages, etc., etc.)
→ "They have problems which are fundamental and cannot be fixed, because hallucinations are unsolvable and neural nets themselves are incapable of understanding that A = B ∴ B = A, as research both more recent and going back many years shows."
Again, demonstrably untrue. Hallucinations are a difficult case, but there is no consensus whatsoever that they're an unsolvable issue — and one can test the latter claim right now with the free version of ChatGPT.
For any value of "understand" which renders the claim meaningful (i.e., which applies to how LLMs are used and/or which doesn't result in being forced to claim they don't understand anything at all), it is simply wrong, and has been for several years.
→ "They're useful for businesses and for states, but not 99% of human beings."
Pretty skeptical of this. I do not know a single student or programmer who doesn't use an LLM to some extent, and know many regular people who use them for stuff like quickly and easily developing and breaking down to-do lists, or as a Google alternative. This will only increase over time.
Again, willing to bet money on this.
→ "and think AI will let them live forever after they die, based on theory that's not of the scientific support"
What theory do you think Emma meant here, and how is it "not of the scientific support"?
So — no, friend Alistair; if you believe that *that* is brilliantly-argued, I can only assume you skimmed it quickly... or have real low standards for "brilliant". 😛
Thank you Himaldr, this was brilliantly argued and beautifully written - and above all, passionate, meaningful and human. All things that AI, as we understand it, can never be.
Man, all you had to do was read the article to notice the author's point wasn't that AI would drive people out of business with superior output, the way robot-assembled cars made human assembly workers obsolete; it was that AI produces useless nonsense that will be pushed onto people via the Internet, because AI's only advantage is varnishing the shit it generates with the barest sheen of plausibility.
So the tractor analogy is not a correct comparison.
Tractors themselves require a really deep economy to be manufactured, maintained, and run: fuel, mechanics, spares, tyres, brakes, hydraulics, trailers, drivers, etc. This change shifted the economy to a fossil-fuel-based economy, which is itself labour-intensive, and so on ad infinitum. There was no "replacement" of labour, rather a shift, which actually ended up creating MORE jobs.
This move to generators is actually a net loss to the economy. There is no shift in labour, just a removal.
You could argue that there will need to be chip manufacturers, data-centre engineers, ML engineers, and of course energy-sector workers to power all this. I say this is a false assertion, because a generator is not exactly labour-intensive: one piece of software written by a single person can be reused millions of times. This is not a tractor. With near-zero extra production cost, it can be scaled very efficiently. A tractor has a DEEP logistical supply chain; this new kind of technology's chain is very, very short, just one step, and that step is going to be owned by two or three global players. The cost of the energy is a fraction of the requirements for mechanised labour, an entire energy economy.
I agree with your general framing of the notion of semantic pollution, Erik, and also that it is important, and that it is a tragedy of the commons. However, this was already happening before the AI advances of the past few years. Bizarre mass-produced kids' videos have been on YouTube for at least 7 years, probably longer, so from before they could have been AI-generated. The business logic and consequences of the older content farms are similar. Now AI is making it cheaper to generate the content. But was "content generation cost" ever the bottleneck to profit for these operations? I'm not sure. Overall, it seems like more of an acceleration of an existing problem right now. But I share your concern that the problem of semantic pollution will be qualitatively worse within a few years.
If you look at the how-to videos they all involve ChatGPT at some point to write songs or scripts. I think they were using AI in the past, but it was more algorithmic recombination of simple plots and scenes with different colors, etc. Still AI though, just of that era.
Had an interesting conversation with an approx. 20-year-old guy who told me he thinks his generation will abandon the internet altogether; such a practice seems highly rational and sensible if the alternative is to be exposed to the kinds of garbage discussed here.
It was a thought I also had. If we reach the point where we cannot believe anything we see or read on the internet, then our relationship with the internet will substantially change. Abandoning it altogether is not practical, given that it is a powerful and useful tool for commercial purposes, but as a source of information it may die a quicker death than we realize. You may even see a revival of print publications as people shift to a model of paying for accuracy and truthfulness (or so one hopes).
You mean the technology they want to use to monetize every interaction on the internet?
Blockchain being described as “realistic” is a pretty funny thing to say in 2024, especially when the alternative is just not using social media. Yeah social media is addictive, but it’s an easy habit to break if it suddenly becomes paywalled or if there’s too much garbage to maintain attention (see: Twitter).
I think you underestimate how addicted to social media many people are. Also if you think blockchain technology is trying to monetize every interaction on the internet, you clearly have very little grasp of what the purposes of blockchain are. Look up the terms permissionless, immutable, encryption, and decentralization. That should get you on the right track of just some of the potential.
It’s psychologically addictive, not chemically addictive, and it becomes a lot less easy to waste time on when it’s no longer free and the sites are full of garbage content. Again, look at Twitter.
I fully understand the potential for blockchain and all the terms you mention, but you are falling for the marketing of it. I think you don’t understand what tech companies actually want to do with it or how it would actually be financially viable. Look up Web 3.0 and token based economies.
This is something I see us trending towards, at least partially abandoning the internet. There has been a rise in interest in the TradWife movement and homesteading. In just one example, this morning I watched somebody harvest clay from their land, haul it by donkey, and hand-make their own kitchen tile. People are craving humanity in its most natural form, and are struggling to find this type of satisfaction online since it has been commodified and saturated with impersonal, inhuman content. Perhaps it will remain a subculture but I think it is symptomatic of the pollution described.
It’s damning that YouTube Kids doesn’t allow comments, for obvious reasons. It’s a loophole exploited by the inhuman content farms, since it removes any direct way to signal parents about the subpar content quality.
We’re on a rapid decline. The upcoming generation, well beyond their toddler years, faced the lockdown education system head-on and unprepared. Now they are dissociated from their peers and unable to read or write adequately, as indicated by the slew who’ve taken their pleas online to voice the growing epidemic.
If utopia is ahead, who maintains it other than the machines the predecessors will leave behind?
Most YouTube comments with any substance to them are now filtered by bots anyway. YouTube has not been a place for meaningful conversation for quite a while now, thanks partially to the (artificially stupid) bots.
Shit, it's not just me then? I can't tell you how many times I have tried to leave a polite and substantive comment on YT, only have it immediately disappear.
Can't figure out what the fuck is triggering the filtering — meanwhile, a thousand variations of "lmao, who watchin dis in 2024?? leave a like 🔥🔥 💯" get through just fine.
As a nerd, I spent several hours one day trying to defeat the algorithm, to post someone else's comment who was convinced (erroneously, I talked to the owner and he confirmed it wasn't him) that the channel owner was censoring them. I succeeded eventually, but I didn't figure out why. It seems to be a combination of things like how many other comments there already are, how long the comment is, and certain hot-button words or phrases.
One thing I've resorted to is breaking a comment into several chunks and trying them one-at-a-time until I find the bit that's triggering the filtering.
As you say, though, I've often been left mystified as to *why* — weird that it does seem to depend on other comments (I once finally managed to get one to post, went on to write another... and the first one then disappeared!).
I'm not sure if it's just because smarter folks notice that it's happening more, or if there's actually some correlation; but I am obviously more articulate than the average commenter (he said, humbly), and I notice the others I see who mention "YT keeps eating my comments" are also likely to write more properly.
Maybe it's part of a massive endumbening conspiracy!
The positive spin, here, is that the cultural pollution has been happening for at least ten or fifteen years, but now, with the tidal wave of bilge AI content washing over everything, people are finally starting to see the problem. And the solution, as before, will be a combination of human curation and trust markets. The good news is that the more people become aware of the problem, the more profitable the solution will be.
The concept of power-with instead of power-over is what I want to live in: societies that hold space for nature, music, creativity, for empathic learning and connection between citizens. The Etruscans were the ones who were more strongly connected to art and nature. They were more sophisticated than the Romans in many ways.
I think AI generated content is just the logical consequence of the attention economy. If you make money by getting eyeballs on your content, then finding a way how to produce it cheaply, fast, and at massive scale will become the main objective. In that sense, we haven't landed anywhere surprising - we're just cruising on the same predictable trajectory, like a comet through space.
Couldn’t agree more. And in that sense all of this pollution may be a net positive insofar as it reveals the absurdity that is the competition for “clicks”. At some point people are going to have to reject the internet as it’s currently constructed, and Generative AI may be accelerating that (but that is also the most optimistic possible take).
Strong, strong agree. I see more and more children with dead eyes and slack jaws, sometimes with a screen propped up on the buggy (!)
I feel enormously *more* strongly about this since having children myself.
I know each generation thinks the world around them is getting worse, but parents *really are* getting worse. Let alone the ubiquity of screens and childhood obesity, in the UK a quarter of children aren't potty trained by the time they start school. A quarter!
Well, the alternative is a child screaming and running around breaking stuff, so I can't blame the parents.
Then again, maybe this is why I recognized early on that I'm not parent material, heh. Can't imagine why anyone goes through it. I have friends who just keep popping out babies and I don't get it!
(...and these are, uh, mostly the dumber friends, which brings to mind ANOTHER fear for the future...)
(To be clear, I loved this article, and I know that parents will likely keep allowing their toddlers to use screens given how effectively it keeps them pacified.)
Or as I decided in the seventies. Simply, do not have children. I am and do not regret being a dead end gene pool, but in troubling moments I ask myself, “Did I reject my biological destiny, so that AI could replace me?” Evoking fears of replacement theory. Yikes, another heartless dead end. 🤔
Word. I have had instinctive distaste for children since I *was* a child, heh.
(The old "Stop crying, god damn it, or I'll give you something to cry about!" is something I could picture myself saying in a moment of frustration — so... yeah, no kids for me, probably best...)
While I fully agree with this critique of AI-generated content and share your sense of impending doom for the Internet, your use of “the tragedy of the commons” to build your argument took me aback. Economist Elinor Ostrom debunked this idea (which was never based on data) after Hardin first published his theory in 1968. Her work on the topic finally won the Nobel Prize in Economics in 2009, making her the first woman to receive the prize. This “tragedy” has no basis in observed fact or data modeling; it’s the theory of one biologist that got traction because it aligned with neo-liberal anti-communism and a “dog-eat-dog” interpretation of human nature that tacitly endorses unbridled capitalism. We MUST stop using it to build arguments, even good ones, and let it wither away. Read a fairly good takedown of this non-tragedy: https://aeon.co/essays/the-tragedy-of-the-commons-is-a-false-and-dangerous-myth
I don’t think it’s quite accurate to say that she debunked the idea that managing Common Pool Resources is a challenge that humans do face. The 2nd chapter of her book starts with a section entitled “The CPR Situation”. What she showed is that humans have evolved adaptive collaborative behaviors to solve this problem on the social complexity landscape. The main political implication of this is that over centralization of regulatory apparatuses “may destroy one set of effective CPR institutions without necessarily developing effective alternatives.”
I did not say she debunked the idea that managing common pool resources is a challenge that humans face— in fact, most of her work analyzes the ways people respond to this challenge and argues in favor of embracing the complexity inherent in the problem. I said she debunked Hardin’s facile explanation that all humans in all cases choose the most destructive approach.
I don't think that's Hardin's view, necessarily. He articulates that it *is* a challenge — not that it's an unsolvable one.
"No basis in observed fact" is an overreach, as well. We have certainly seen this exact result play out, more than once; we're seeing it now with the AI bilge!
At times people have indeed chosen the most destructive and most short-sighted approach to resource management. Neither I nor Ostrom claim otherwise. We both oppose Hardin’s argument that humans always, unavoidably choose this path, however.
“Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs?”
Well said, Erik. Some of the children's video problem appeared several years ago with the Elsagate phenomenon. I'm not sure we've ever fully determined how much of that was AI generated, and how much was just garbage skits/animated shows from places with cheap labor. Regardless, Elsagate demonstrated that inert, unsophisticated child viewers could be a highly lucrative audience for torrents of low-quality but shocking visual content. Beloved characters like Spiderman and Elsa giving each other injections, playing with feces, weird pregnancy stuff --- the fusion of schizophrenia with Saturday morning cartoons.
And it turns out that children are not the only audience for this swill. I can't wait until it gets piped straight into our VR headsets. Or, what the hell, right into our Neuralinks.
I really got it for the first time a few days ago when I went on YouTube to get into a music hole. I do that every few months: I grab a beverage, play a song I like and see where the algorithm takes me — YouTube has served for many years as a source of new music and fascinating reviews by experts that have led me to expert producers’ explainers on why pop music pops, singing techniques of various a cappella groups, a Siberian metal band, and a bevy of new independent artists I’d never have found directly. This last session, I gave up after about ten minutes. More than half of the videos YouTube fed me were AI-produced schlock, mostly consisting of duets between various dead people. I’m so sad one of my favorite pastimes is ruined, but even more, sad for all those artists whose work is being buried in an avalanche of dreck.
Somehow, in some way, we need to grow a community of people in every free country where the ethos is to pay for, or only engage with, creators who do not use generative AI. That’s what I try to do, in large part because, as a creator myself, I believe using generative AI lacks integrity. I pay for your newsletter, Erik, not only because it’s brilliant, but because it seems like you’re not cheating me with your creations, that you’re not using AI. Now, as you said in your piece, there exists no decent AI detection software, so what needs to happen is the development of faith and trust in those you believe are not using AI, plus an adjacent community of investigators who seek to expose creators who do use AI. Anyway, great article; once more you freaked me out lol.
I was skeptical: as a lazy parent, I know there are millions of lousy toddler videos on YouTube, even without AI. (Even Ryan's World is bad, and most are worse.) But I stand corrected: this AI stuff manages to creep me out. Scott Alexander was right about the future of movies by AI: "The dumbest possible way to do this is to ask GPT-4 to write a summary (“write the summary of a plot for a detective mystery story”), then ask it to convert the summary into a 100-point outline, then convert that into 100 minutes of a 100-minute movie, then ask Sora to generate each one-minute block." But that 'future' is now, for the youngest. I hope the market will soon offer easy ways to block that BS out. If not: tough luck for the kids of less-than-perfect parents.
The last thing the world needed was more crap content, and yet somehow that’s the great invention of the last five years: a machine that prints crap.
They're desperate to obfuscate and bury the truth under mountains of bullshit.
It won't work in the end, but it might delay the end.
That's not exactly accurate. AI can also be used to do good things, help people be more productive, and produce good things. I can use AI to economically produce graphics and videos for my business; I can use AI to help me make products for my business; I can use AI to help me be faster, more efficient, and more productive. AI can be used as a positive for all manner of things. It's not “the greatest invention of the last five years: a machine that prints crap.” It's that for every good and productive thing that AI can be used for, it can also be used for an equal amount of crap.
However, that is how it is with anything man has produced. For all the good things the Internet is and can be used for, it can also be used to pull people into porn addictions, to steal people's money and intellectual property, and for any other nefarious thing you can come up with. As with everything else, it's not the tool; it's the human. I can use my steak knife to either cut my steak or stab someone to death. It's not the knife that is faulty in the latter; it is the human, and so it is with AI.
Did ChatGPT write this? And if not, do you not understand how uselessly, tendentiously reductionist "it's people, not technology" is? Nuclear fission can be used to generate electricity or construct atom bombs; therefore it's a good idea to give everyone their own nuclear missile, am I right?
"Did ChatGPT write this? "
No, ChatGPT didn't write that. I wrote it.
"Nuclear fission can be used to generate electricity or construct atom bombs; therefore it's a good idea to give everyone their own nuclear missile"
That isn't remotely close to the same thing I wrote. I stand by what I said. The human is always the problem. Objects, or in this case, technology, are just tools. The human wielding the tool is either good or evil, and uses the tool accordingly.
Nuclear power, for obvious reasons, should be restricted from the hands of almost everyone. A myriad of other tools, however, such as guns, knives, cars, AI, the internet, etc., should be accessible to all but a very few.
If those who have access to those tools choose to misuse them or use them for evil purposes, then that is the fault of the individual, not the tool.
If you don't like that simple answer, or if you find it reductive, that's your problem, but it doesn't change the truth of the statement.
What are the "obvious reasons" that we should restrict nuclear power from the hands of almost everyone, Tony? Might it be because the consequences of misusing the tool are catastrophic to society, even if one might consider the tool itself morally neutral divorced from societal context? Do you know that we require people to demonstrate their competency before we allow them to drive a car, since a driver can cause injury or death by misusing a car? Or that most countries place stringent restrictions on gun ownership because a gun in the hands of the wrong person can result in mass murder? This whole article is replete with examples of how generative AI is a force multiplier for all sorts of horrible things in the hands of those who refuse to use it responsibly, so maybe, just maybe the prior art we apply to cars, guns, and nuclear weapons might apply here as well.
"What are the "obvious reasons" that we should restrict nuclear power from the hands of almost everyone"
Nuclear power is a different conversation. One needs special training and education to handle nuclear power technology; training and education that isn't available to the majority of people.
I've got just a little bit of experience with this. When I was an active-duty member of the United States Navy, serving aboard a nuclear-powered aircraft carrier (USS John C. Stennis, CVN-74), I was not allowed in the engineering spaces, and definitely not in the reactor spaces, because I was not qualified and educated to be there, which was the correct answer.
"Do you know that we require people to demonstrate their competency before we allow them to drive a car, since a driver can cause injury or death by misusing a car?"
As an American driver, yes, I am aware of that. With rare exception, that training and education are available and easily obtained by the vast majority of American citizens.
"Or that most countries place stringent restrictions on gun ownership because a gun in the hands of the wrong person can result in mass murder?"
Same answer: With rare exception, that training and education are available and easily obtained by the vast majority of American citizens.
Having said that, I couldn't care less what other countries do in terms of gun ownership by their citizens. A disarmed citizenry is the reason many of them are in the state they're in.
I only care that in The United States we have the Second Amendment. We have the right to gun ownership enshrined in the Constitution.
The right to keep and bear arms is a slightly different thing because it is a check on government overreach and abuse.
There are currently 27 Constitutional carry states. In my opinion, it needs to be 50. The only gun control I'm in favor of is for convicted felons.
For everyone else, as we clearly see from our current government's behavior, we need more guns.
"This whole article is replete with examples of how generative AI is a force multiplier for all sorts of horrible things in the hands of those who refuse to use it responsibly"
That can be said about anything.
So you think only certain licensed people should have access to AI?
Who should those people be?
Who determines the criteria someone must meet to have access to AI?
Who determines those criteria? Who are the gatekeepers?
How do you police such a thing?
"For everyone else, as we clearly see from our current government's behavior, we need more guns."
Ah, so you're one of those crazy right-wing nutjobs. No wonder your thinking is so bizarre. But let me humor you one last time.
"I was not allowed in the engineering spaces, and definitely not in the reactor spaces because I was not qualified and educated to be there, which was the correct answer."
What a hypocrite you are. You readily acknowledge nuclear power is dangerous and that only qualified people should be allowed to wield it. You readily acknowledge that the restrictions the United States Navy places on its personnel, keeping those who cannot be trusted or relied upon to use the technology correctly away from it, are legitimate and necessary. You are well aware that bureaucrats came up with many of those regulations, technical as they are, seeing as how the Constitution says nothing about nuclear power, nor are politicians in the business of making policy at this level.
And yet you ask me these stupid questions:
"Who should those people be?
Who determines the criteria someone must meet to have access to AI?
Who determines those criteria? Who are the gatekeepers?"
As an American, you don't live in a dictatorship. As you certainly know, you live in a country governed by a government whose elected officials at least nominally represent the will of their constituents. So yes, you all should absolutely come to some sort of consensus as to what sort of regulation should be enacted in order to minimize the harms of AI technology to society.
The old adage "guns don't kill people..." is so dangerously naive, and so American, because people are not all good; they are actually on average 50% bad (using a bell curve of goodness, at any saturation level this is true). There are some wrong'uns out there. Give everybody on earth a loaded gun right now and watch the ticker, mate.
From an economic point of view, I would say that all of the "benefits" you spoke of are actually going to simply result in "removing" labour from the economy. As a free-market capitalist you will be thrilled with all the delicious money you are making, but then in 20 years' time you will realise how much you just shrank your customer base by giving your money to the tech gods.
The big corporations are all racing after the shiny brass ring of being first to market with generalised intelligence, because it will let them stay at the top. But they will essentially replace all intellectual production with robots (probably more than 50% of the economy, if you really think about what can be replaced by AI).
How do we deal with removing 50% of the economy? Does all this money flow to the GPT owners? How do the overlords then deal with the rising population of unemployed? This is going to be full-whack anarchy. We can't all work at McDonald's, because nobody will be buying.
"The old adage "guns don't kill people.." is so dangerously naive and American because people are not all good"
I suppose you would rather that only the government have guns.
That is gun-toting, USA-style thinking right there. The issue is either black or it's white. No shades of nuance: either everyone has a gun or only the govt has a gun.
Why, yes, yes it is, and judging by the circumvention, end-arounds, and outright violations of The Second Amendment the United States government is currently engaging in, that is exactly the government's position.
Take for instance the films of Stanley Kubrick - all made with AI
? The only interesting films he made predate the Internet.
My hope is that people's addiction to prestige will win out and chase them away from the AI internet, because an AI-infested internet is just not as good as "real" internet at conferring status on those who win at being online. We've seen this with people falling away from Twitter, for example. People will never fully be offline, but when everyone just assumes that all your followers/likes/subscribers are not real (and also that you yourself aren't real), then the same vanity that drew people to the internet will hopefully drive them away. Then we go back to having more local cultures and scenes.
Yeah. In 2021, before generative AI became big and we were talking more about the metaverse and crypto, I wrote (https://etiennefd.substack.com/p/reality-will-become-a-status-symbol) that reality was on its way to becoming a status symbol; people who could afford to travel to real places as opposed to seeing them in VR or whatever would be seen as higher status. I hadn't anticipated that virtuality would become even more full of crap than it already was, but now that this is apparent that prediction seems truer than ever.
This works for most intelligent and critically thinking people, but unfortunately we have a gaggle of people who are just looking for someone to worship, and these fundamentalists are in the majority.
Any kind of "away from the internet" will simply not be possible. It is already in your water supply, your neighbourhood, your work and living arrangements, virtually everything that touches your life. Erik's prognosis is so chilling that my comment below, about the eclipse of that old-hat thing called political/philosophical 'reality', seems almost a side-issue. But here goes anyway:
AI will complete a process that has been underway for many decades now: the encirclement of ever-fragile Western Democracy by the goggle-eyed warriors of the Social Justice Religion. The difference being that AI will entrench the religion as a kind of cosmos - further argument or proselytising no longer necessary. And electoral pluralism? Well that delicate flower has been withering anyway... becoming just a kind of plaything....part of the media entertainment industry while a (now AI enhanced) techno-bureaucracy grinds on, constantly topped up by 'experts' emerging from its 'one-party' universities.
It seems to me that the best-case long-term scenario for escape from this lies in some kind of global electronic meltdown. And après le déluge? As I wrote here a while back: "Certainly some pessimism about 'progress' is mere grumpiness... But at its best it is a wry observation, based on close observation of friends and enemies, family and colleagues, literature and 'current affairs', that there are, and always will be, honesty and self-delusion, real and faux expressions of generosity of spirit, bullies dressed up as champions of liberty... wise men and fools, in other words." Après le déluge, this fundamental human nature will hopefully reassert itself. https://grahamcunningham.substack.com/p/are-we-making-progress
And worst-case scenario? I just avert my mind from that one.
Not saying you're entirely off-base, but:
1. People will eventually get sick of AI-mediated interaction. They're still meeting in person after all.
2. Political trends seem unstoppable until they don't. Trump is on track to win the next election, and democracy may well have more to worry about than SJWs (who, to be clear, I do not like). There was a huge explosion of LGBT culture in Weimar. You know what came next.
Here is a project that leverages prestige signaling in order to improve content: https://medianudge.weebly.com/how-it-works.html. And besides that there are subscription services like substack and medium that can keep quality higher, despite AIs.
As far as the TOC: A big state, Hobbesian Leviathan solution like the Clean Air Act is hard to implement on the internet. It is easy to implement on a state level in the real world because factories that pollute are expensive, so not possible to build them for most actors (keeps their number down), and also because they are expensive they are easy to spot and monitor. Since AI-generated garbage is cheap to produce and expensive to monitor at large scales, we need a different strategy, and nature and Ostrom show the way: nested, subsidiary, self-monitoring "groups", or membrane-bound levels of organismality. Each group for our purposes has only a limited number of parts (which are people, some of whom use AIs, at the lowest level) so it can deal with the monitoring and enforcement costs of its own parts. This way the monitoring and enforcement can be more effective than just anarchic monitoring of the whole internet (with a huge number of players/parts). Similar to how cancer is first monitored and enforcement measures taken at the sub-cellular level (DNA repair), then cellular (apoptosis or killing by immune system, maybe electrical signals ala Levin, not sure if this latter happens in humans), then organism level by us taking measures to keep healthy or kill the cancer with medical interventions. There might be some differences with biological systems though, like there is not necessarily a synergy happening between different parts above the first level, which means there is no selective pressure to form higher levels, so it would have to be imposed from above (a state like Leviathan).
People have not fallen away from twitter. There is data on increasing engagement and users and my personal experience corroborates this. I work in AI and do see obviously generated content in threads but there are also good conversations in many areas.
An anecdote from military history might offer an instructive parallel. Early firearms weren't superior to the longbows they ousted from the battlefields of Europe. Quite the contrary, that first generation of guns were all around inferior weapons. They were less accurate, had less effective range, and a far slower rate of fire than the old fashioned longbow. A skilled archer was deadlier than any early gunner.
How then did firearms ever manage to supplant the bow in the first place? Because archers required intensive specialized training. They had to practice extensively almost from the time they learned to stand to be any good. When they were killed, they could be difficult to replace. Meanwhile, any idiot could become proficient with a gun within a few hours. Gunners replaced archers not because they were better, but because they were cheaper.
The analogy with internet content creators seems exact. An educated flesh-and-blood human might still draw a better picture or write a superior Sports Illustrated profile than a bot. But the human needs a salary and benefits. AI costs next to nothing. It doesn't even require appreciation. Humans, like archers, will be priced out of the market.
Guns, of course, eventually evolved into more lethal weapons than bows ever were. Bots, I fear, will become better artists and writers than humans could hope to be. With the accelerated pace of technological change, it might happen sooner than we think. I'm 53 and in poor health, but I'm still afraid I'll live to see it.
(Suggested reading: War Before Civilization: The Myth of the Peaceful Savage, by Lawrence H. Keeley)
Great (and scary) analogy
Yes! But hey, the great equalizers that were guns made pro soldiers and horsemen vulnerable to angry peasants. Our societies were freed not by the ideas of Montesquieu et al. but by mass-manufactured cheap guns. What would be the parallel here for art creation?
The toppling of media empires like Disney or Fox, with content creation happening at a much lower level: small teams of fewer than a dozen producing high-quality output, equal to or better than anything produced by hundreds today, allowing niche interests, local populations, etc. to have great, deep, and interesting media made and targeted specifically at them.
That would be nice! Though if it means surrendering power to a YouTube- or Amazon-like monopoly aggregator, that freedom will be short-lived; it's contingent upon the creators teaming up to bring the aggregator under their control. Same fight as right now, after all.
Moving control to OpenAI and Google...
It seems soon parents will catch on to this sad trend and there will be reviewers who find quality human made content parents can trust. That will probably happen broadly as “human made” will soon be a value added category.
Like "Made in America"?
What we’re witnessing is the same thing that happened when tractors replaced many farm laborers and robots replaced many factory workers. These were generally positive developments for society as a whole.
Just as happened earlier, those directly affected by such changes will label the new tools as the work of Satan, while the vast majority of people who aren’t directly affected simply won’t care.
Substackers, do you care that your car was built by robots on an assembly line instead of being handcrafted piece by piece by individual humans? No, you don’t. You just want the best car at the lowest price.
Writing for a living was probably always a bad plan anyway, as doing so successfully requires one to mirror the group consensus and tell the audience what they want to hear, which isn’t really what writers should be doing.
Anyway, good idea or not, like it or not, AI is coming to the white collar world. Those who learn how to drive the new tractors will prosper, and those who don’t will be left behind. So it has always been.
"What we’re witnessing is the same thing that happened when tractors replaced many farm laborers and robots replaced many factory workers. These were generally positive developments for society as a whole."
Do you think all the people who killed themselves after being put out of work by automation in the early 20th century, or their families, would agree these were positive developments for society as a whole? The futurist and technocrat's "positive developments" always come at the expense of all life.
It is quite difficult not to conclude what seems obvious: that people like you must simply hate life, both the human beings whose suffering you disregard and discard as detritus in the way of "progress" and automation that gives the few the right to be free, and all the other animals that are killed in your endless extraction of resources, building of infrastructure, highways, and so forth.
Still, I'm not quite cynical enough to believe that people who have been deceived into such wild disregard for all life are truly malevolent. Have you never considered that, with some thoughtfulness and effort, it might be possible to pursue this progress ethically, with guardianship of life rather than its destruction? It's not as if there is some natural law of physics that states everyone must suffer, and all animals including our own species must be trod upon and die, for humanity to advance.
"Just as happened earlier, those directly affected by such changes will label the new tools as the work of Satan, while the vast majority of people who aren’t directly affected simply won’t care."
The level of technological sophistication in any society is not linear and guaranteed to progress, and it can and has regressed many times in the past. This is obvious to anyone with a basic understanding of history.
Regardless, what kind of morality is it you have that you think it's meaningful to say people who aren't directly affected by something won't care about the consequences, and therefore the harm to the affected doesn't matter? That attitude is beyond callous, what utter moral nihilism and sophist rubbish!
On top of that, in this case where a very significant percentage of the entire human population is affected—all those who use the Internet—how does it even make sense? What, it doesn't matter how the Internet turning into a garbage dump affects us, because there are uncontacted tribes like the Sentinelese out there who don't care?
"Substackers, do you care that your car was built by robots on an assembly line instead of being handcrafted piece by piece by individual humans? No, you don’t. You just want the best car at the lowest price."
You must have some sort of problem, maybe with empathy, if you think everyone thinks the exact same way you do, and I don't mean that as an insult, since it's really baffling. Yes, there are indeed many of us who value and care about human craftsmanship and who go out of our way to pay for and support it, and you can find us everywhere. If there were cars built entirely by humans, there would be people willing to pay a premium for them even if they aren't the lowest in price.
"Writing for a living was probably always a bad plan anyway, as doing so successfully requires one to mirror the group consensus and tell the audience what they want to hear, which isn’t really what writers should be doing."
Speaking absolutely, the number of writers who make a living without preaching to the choir is actually quite high, and is only small relatively, yet you act as if it's a requirement to write for a living when it clearly is not and shouldn't be.
"Anyway, good idea or not, like it or not, AI is coming to the white collar world. Those who learn how to drive the new tractors will prosper, and those who don’t will be left behind. So it has always been."
Nah, not to a significant degree anytime soon anyway, to where this is about as meaningful a statement as saying we're going to travel through interstellar space and explore Alpha Centauri any day now. It's nothing but hype, marketing, and public relations (recall that the latter is synonymous with propaganda, of course, since Edward Bernays rebranded the term after WWI).
LLMs produce garbage and are incapable of even simple tasks, and their usefulness is vastly overstated and hyped up by the market to attract investors, who will probably realise they've been scammed soon enough and retaliate, because it is illegal to lie to investors. They have problems which are fundamental and cannot be fixed, because hallucinations are unsolvable and neural nets themselves are incapable of understanding that A = B ∴ B = A, as research both more recent and going back many years shows.
Soon people will realise there's almost no real market for LLMs other than for producing garbage and propaganda. They're useful for businesses and for states, but not 99% of human beings. There will be pushback. The market for them will collapse, and AI as a whole will suffer for the garbage of LLMs and what companies like OpenAI have done. There will probably be another AI winter that follows.
That is an obvious conclusion, or at least a high-probability possibility, to anyone paying attention who reads what critical AI researchers like Gary Marcus write, as opposed to those who prefer to read the technopriests like Yudkowsky and Bostrom, who don't even work with the software itself, just have religious beliefs about AI replacing humans, and think AI will let them live forever after they die, based on theory that's not of the scientific support and has nothing to back it or its predictions up in reality. You sound like one of their disciples; they are everywhere.
What, there are no replies to this post?! Thank you Emma, this was brilliantly argued and beautifully written - and above all, passionate, meaningful and human. All things that AI, as we understand it, can never be.
What?! No, that's... kind of an *awful* post — lots of errors, lots of needless invective, and some odd turns of phrase; I'd argue that it is, therefore, *neither* brilliantly-argued *nor* beautifully-written.
To wit:
→ "Do you think all the people who killed themselves after being put out of work by automation in the early 20th century, or their families, would agree these were positive developments for society as a whole?"
Mechanization of food production has saved many, *many* more lives than it cost — there was hardly an epidemic of suicides, but even if there were, take a look at how many used to regularly die from famine — and I think you'd be real hard-pressed to find anyone who wants to go back to horse-drawn plows.
(Also, if you've ever done that sort of thing personally, you'd know it's absolutely miserable. History is my main love, and so I often go and try to find ways to experience life as it was at some period or another; often it's fun, but non-mechanized farm labor is absolutely back-breaking.)
(Also also, see Rob's reply below: there was a shift in jobs, not a net loss of jobs. This likely won't be true in the case of AI, though, unfortunately.)
→ "It is quite difficult not to conclude what seems obvious: that people like you must simply hate life"
Come on. This is just hysterical ad-hominem posturing. When you find yourself having to assert that your interlocutor is an inhuman monster that loves evil and hates good, you've probably forgotten to take your meds again.
→ "The level of technological sophistication in any society is not linear and guaranteed to progress, and it can and has regressed many times in the past. This is obvious to anyone with a basic understanding of history."
Non-sequitur.
→ "If there were cars built entirely by humans, there would be people willing to pay a premium for them even if they aren't the lowest in price."
1.) Irrelevant to the point, which is that the overwhelming majority of people do not care.
2.) But I also doubt it's even really true: where, then, are the hand-built cars being marketed? Premium hand-crafted goods exist when there's a market for them. Doesn't appear to be one for cars.
→ "Speaking absolutely, the number of writers who make a living without preaching to the choir is actually quite high, and only small relatively"
So... just as Emma's opponent initially said, then?
→ "Nah, not to a significant degree anytime soon anyway [...] LLMs produce garbage and are incapable of even simple tasks, and their usefulness is vastly overstated and hyped up by the market to attract investors, who will probably realise they've been scammed soon enough and retaliate"
Let us make a pact to return to this comments section at whatever time you deem that this will have become obvious, and see if it is the case. If we can find a judge we both trust, and a more concrete formulation, I'd probably be willing to bet money on it not being so.
(The part about "incapable of even simple tasks" is already demonstrably untrue: you can create e.g. a Reddit clone using nothing but asking the AI to code up each step for you.)
(Or: I asked ChatGPT for some broad summaries of linguistic topics and data that isn't easily Google-able, just to test it, and it correctly explained things such as the perceived valence of different phonemes and their frequency by two different metrics in various languages, the main measures of linguistic conservatism and their manifestations in various languages, etc.)
→ "They have problems which are fundamental and cannot be fixed, because hallucinations are unsolvable and neural nets themselves are incapable of understanding that A = B ∴ B = A, as research both more recent and going back many years shows."
Again, demonstrably untrue. Hallucinations are a difficult case, but there is no consensus whatsoever that they're an unsolvable issue — and one can test the latter claim right now with the free version of ChatGPT.
For any value of "understand" which renders the claim meaningful (i.e., which applies to how LLMs are used and/or which doesn't result in being forced to claim they don't understand anything at all), it is simply wrong, and has been for several years.
→ "They're useful for businesses and for states, but not 99% of human beings."
Pretty skeptical of this. I do not know a single student or programmer who doesn't use an LLM to some extent, and know many regular people who use them for stuff like quickly and easily developing and breaking down to-do lists, or as a Google alternative. This will only increase over time.
Again, willing to bet money on this.
→ "and think AI will let them live forever after they die, based on theory that's not of the scientific support"
What theory do you think Emma meant here, and how is it "not of the scientific support"?
So — no, friend Alistair; if you believe that *that* is brilliantly-argued, I can only assume you skimmed it quickly... or have real low standards for "brilliant". 😛
Thank you Himaldr, this was brilliantly argued and beautifully written - and above all, passionate, meaningful and human. All things that AI, as we understand it, can never be.
...okay, you got me, I cracked up
edit: maybe I'll edit the post slightly, in light of the recent discovery that you're evidently a gentleman of high culture and deep erudition
Man, all you had to do was read the article to notice the author's point wasn't that AI would drive people out of business with superior output, the way robot-assembled cars made human assembly workers obsolete; it was that AI produces useless nonsense that will be pushed onto people via the Internet, because AI's only advantage is varnishing the shit it generates with the barest sheen of plausibility.
So the tractor analogy is not a correct comparison.
Tractors themselves require a really deep economy to be manufactured, maintained, and run: fuel, mechanics, spares, tyres, brakes, hydraulics, trailers, drivers, etc. This change shifted us to a fossil-fuel-based economy, which is itself labour-intensive, and so on ad infinitum. There was no "replacement" of labour, rather a shift - one which actually ended up creating MORE jobs.
This move to generators is actually a net loss to the economy. There is no shift in labour, just a removal.
You could argue that there will need to be chip manufacturers, data centre engineers, ML engineers, and of course energy-sector workers to power this - and I say this is a false assertion, because a generator is not exactly labour-intensive: one piece of software written by a single person can be reused millions of times. This is not a tractor. With near-zero extra production cost, it can be scaled very efficiently. A tractor has a DEEP logistical supply chain; this new kind of technology's chain is very, very short - one step, and that step is going to be owned by two or three global players. The cost of the energy is a fraction of the requirements for mechanised labour - an energy economy.
Surely the analogy here is less tractors and more disposable plastic shopping bags? Junk which is cheaper to create than to properly dispose of.
I agree with your general framing of the notion of semantic pollution, Erik, and also that it is important, and that it is a tragedy of the commons. However, this was already happening before the AI-advances of the past few years. Bizarre mass-produced kids videos have been on Youtube for at least 7 years, probably longer, and so before they could have been AI-generated. The business logic and consequences of the older content farms are similar. Now AI is making it cheaper to generate the content. But was "content generation cost" ever the bottleneck to profit for these operations? I'm not sure. Overall, it seems like more of an acceleration of an existing problem right now. But I share your concern that the problem of semantic pollution will be qualitatively worse within a few years.
If you look at the how-to videos they all involve ChatGPT at some point to write songs or scripts. I think they were using AI in the past, but it was more algorithmic recombination of simple plots and scenes with different colors, etc. Still AI though, just of that era.
Had an interesting conversation with an approx. 20-year-old guy who told me he thinks his generation will abandon the internet altogether; such a practice seems highly rational and sensible if the alternative is to be exposed to the kinds of garbage discussed here.
It was a thought I also had. If we reach the point where we cannot believe anything we see or read on the internet, then our relationship with the internet will change substantially. Abandoning it altogether is not practical, given it is a powerful and useful tool for commercial purposes, but as a source of information it may die a quicker death than we realize. You may even see a revival of print publications as people shift to a model of paying for accuracy and truthfulness (or so one hopes).
That's how I see it as well.
I think this is too radical an idea for a critical mass of people to adopt.
Blockchain technology offers a more realistic solution without such dramatic behavior change being required.
You mean the technology they want to use to monetize every interaction on the internet?
Blockchain being described as “realistic” is a pretty funny thing to say in 2024, especially when the alternative is just not using social media. Yeah social media is addictive, but it’s an easy habit to break if it suddenly becomes paywalled or if there’s too much garbage to maintain attention (see: Twitter).
I think you underestimate how addicted to social media many people are. Also if you think blockchain technology is trying to monetize every interaction on the internet, you clearly have very little grasp of what the purposes of blockchain are. Look up the terms permissionless, immutable, encryption, and decentralization. That should get you on the right track of just some of the potential.
It’s psychologically addictive, not chemically addictive, and it becomes a lot less easy to waste time on when it’s no longer free and the sites are full of garbage content. Again, look at Twitter.
I fully understand the potential for blockchain and all the terms you mention, but you are falling for the marketing of it. I think you don’t understand what tech companies actually want to do with it or how it would actually be financially viable. Look up Web 3.0 and token based economies.
This is something I see us trending towards, at least partially abandoning the internet. There has been a rise in interest in the TradWife movement and homesteading. In just one example, this morning I watched somebody harvest clay from their land, haul it by donkey, and hand-make their own kitchen tile. People are craving humanity in its most natural form, and are struggling to find this type of satisfaction online since it has been commodified and saturated with impersonal, inhuman content. Perhaps it will remain a subculture but I think it is symptomatic of the pollution described.
We are currently here.
That's why we don't put much credence in the things 20-year-olds say. 😛
(...Unless you'd like to seriously predict a net decrease in Internet usage in 5/10/15 years; I don't see it, personally.)
It’s damning that YouTube Kids doesn’t allow comments, for obvious reasons—a loophole the inhuman content farms have identified—leaving no direct way to alert parents to the subpar content quality.
We’re on a rapid decline. The upcoming generation, well beyond their toddler years, faced the lockdown education system head-on, unprepared. Now they are dissociated from their peers and unable to read or write adequately, as indicated by the slew who’ve taken their pleas online to voice the growing epidemic.
If utopia is ahead, who maintains it other than the machines the predecessors will leave behind?
That's a really good point about the bans on comments
Most YouTube comments with any substance to them are now filtered by bots anyway. YouTube has not been a place for meaningful conversation for quite a while now, thanks partially to the (artificially stupid) bots.
Shit, it's not just me then? I can't tell you how many times I have tried to leave a polite and substantive comment on YT, only have it immediately disappear.
Can't figure out what the fuck is triggering the filtering — meanwhile, a thousand variations of "lmao, who watchin dis in 2024?? leave a like 🔥🔥 💯" get through just fine.
I'm beginning to hate YouTube.
As a nerd, I spent several hours one day trying to defeat the algorithm, to post a comment on behalf of someone else who was convinced (erroneously; I talked to the channel owner and he confirmed it wasn't him) that the channel owner was censoring them. I succeeded eventually, but I never figured out why. It seems to be a combination of things, like how many other comments there already are, how long the comment is, and certain hot-button words or phrases.
One thing I've resorted to is breaking a comment into several chunks and trying them one-at-a-time until I find the bit that's triggering the filtering.
As you say, though, I've often been left mystified as to *why* — weird that it does seem to depend on other comments (I once finally managed to get one to post, went on to write another... and the first one then disappeared!).
I'm not sure if it's just because smarter folks notice that it's happening more, or if there's actually some correlation; but I am obviously more articulate than the average commenter (he said, humbly), and I notice the others I see who mention "YT keeps eating my comments" are also likely to write more properly.
Maybe it's part of a massive endumbening conspiracy!
Nah, it's just unintended consequences of an algorithm...
The positive spin, here, is that the cultural pollution has been happening for at least ten or fifteen years, but now, with the tidal wave of bilge AI content washing over everything, people are finally starting to see the problem. And the solution, as before, will be a combination of human curation and trust markets. The good news is that the more people become aware of the problem, the more profitable the solution will be.
The concept of power-with instead of power-over is the one I want to live in: societies that hold space for nature, music, and creativity, for empathic learning and connecting between citizens. The Etruscans were the ones who were more strongly connected to art and nature. They were more sophisticated than the Romans in many ways.
I think AI generated content is just the logical consequence of the attention economy. If you make money by getting eyeballs on your content, then finding a way how to produce it cheaply, fast, and at massive scale will become the main objective. In that sense, we haven't landed anywhere surprising - we're just cruising on the same predictable trajectory, like a comet through space.
Couldn’t agree more. And in that sense all of this pollution may be a net positive insofar as it reveals the absurdity that is the competition for “clicks”. At some point people are going to have to reject the internet as it’s currently constructed, and Generative AI may be accelerating that (but that is also the most optimistic possible take).
I'm surprised nobody has commented on the obvious yet, which is maybe don't let your toddlers use screens.
The bar is so low, it's in hell.
Strong, strong agree. I see more and more children with dead eyes and slack jaws, sometimes with a screen propped up on the buggy (!)
I feel enormously *more* strongly about this since having children myself.
I know each generation thinks the world around them is getting worse, but parents *really are* getting worse. Let alone the ubiquity of screens and childhood obesity, in the UK a quarter of children aren't potty trained by the time they start school. A quarter!
I watched my older brother on his death bed in hospital, unable to move his hand from his phone, his life line. He is almost 80.
Well, the alternative is a child screaming and running around breaking stuff, so I can't blame the parents.
Then again, maybe this is why I recognized early on that I'm not parent material, heh. Can't imagine why anyone goes through it. I have friends who just keep popping out babies and I don't get it!
(...and these are, uh, mostly the dumber friends, which brings to mind ANOTHER fear for the future...)
(To be clear, I loved this article, and I know that parents will likely keep allowing their toddlers to use screens given how effectively it keeps them pacified.)
Or, as I decided in the seventies: simply do not have children. I am, and do not regret being, a dead-end gene pool, but in troubling moments I ask myself, “Did I reject my biological destiny so that AI could replace me?” Evoking fears of replacement theory. Yikes, another heartless dead end. 🤔
Word. I have had instinctive distaste for children since I *was* a child, heh.
(The old "Stop crying, god damn it, or I'll give you something to cry about!" is something I could picture myself saying in a moment of frustration — so... yeah, no kids for me, probably best...)
While I fully agree with this critique of AI-generated content and share your sense of impending doom for the Internet, your use of “the tragedy of the commons” to build your argument took me aback. Economist Elinor Ostrom debunked this idea, which was never based on data even when Hardin first published his theory in 1968. Her work on the topic won the Nobel Prize in Economics in 2009, making her the first woman to receive the prize. This “tragedy” has no basis in observed fact or data modeling; it’s the theory of one biologist that got traction because it aligned with neo-liberal anti-communism and a “dog-eat-dog” interpretation of human nature that tacitly endorses unbridled capitalism. We MUST stop using it to build arguments, even good ones, and let it wither away. Read a fairly good takedown of this non-tragedy: https://aeon.co/essays/the-tragedy-of-the-commons-is-a-false-and-dangerous-myth
If I may concur with your introductory words and add an excellent piece by Cory Doctorow: “‘The Tragedy of the Commons’: How Ecofascism Was Smuggled into Mainstream Thought – Cory Doctorow’s MEMEX,” October 1, 2019. https://memex.craphound.com/2019/10/01/the-tragedy-of-the-commons-how-ecofascism-was-smuggled-into-mainstream-thought/. It complements (and precedes) the Aeon article.
Enshitification of the internet. Dear Cory.
I don’t think it’s quite accurate to say that she debunked the idea that managing Common Pool Resources is a challenge that humans do face. The second chapter of her book starts with a section entitled “The CPR Situation”. What she showed is that humans have evolved adaptive collaborative behaviors to solve this problem on the social-complexity landscape. The main political implication of this is that over-centralization of regulatory apparatuses “may destroy one set of effective CPR institutions without necessarily developing effective alternatives.”
I did not say she debunked the idea that managing common pool resources is a challenge that humans face— in fact, most of her work analyzes the ways people respond to this challenge and argues in favor of embracing the complexity inherent in the problem. I said she debunked Hardin’s facile explanation that all humans in all cases choose the most destructive approach.
I don't think that's Hardin's view, necessarily. He articulates that it *is* a challenge — not that it's an unsolvable one.
"No basis in observed fact" is an overreach, as well. We have certainly seen this exact result play out, more than once; we're seeing it now with the AI bilge!
I came here to say this
Thank you.
No basis in fact? Check out the collapse in fish stocks from over-fishing in areas like the Grand Banks.
At times people have indeed chosen the most destructive and most short-sighted approach to resource management. Neither I nor Ostrom claim otherwise. We both oppose Hardin’s argument that humans always, unavoidably choose this path, however.
Not what you said originally… but sure, you’re now making sense.
This, “cognitive micronutrients” yes. Exactly.
“Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs?”
Nightmare.
Well said, Erik. Some of the children's video problem appeared several years ago with the Elsagate phenomenon. I'm not sure we've ever fully determined how much of that was AI generated, and how much was just garbage skits/animated shows from places with cheap labor. Regardless, Elsagate demonstrated that inert, unsophisticated child viewers could be a highly lucrative audience for torrents of low-quality but shocking visual content. Beloved characters like Spiderman and Elsa giving each other injections, playing with feces, weird pregnancy stuff --- the fusion of schizophrenia with Saturday morning cartoons.
And it turns out that children are not the only audience for this swill. I can't wait until it gets piped straight into our VR headsets. Or, what the hell, right into our Neuralinks.
I really got it for the first time a few days ago when I went on YouTube to get into a music hole. I do that every few months: I grab a beverage, play a song I like and see where the algorithm takes me — YouTube has served for many years as a source of new music and fascinating reviews by experts that have led me to expert producers’ explainers on why pop music pops, singing techniques of various a cappella groups, a Siberian metal band, and a bevy of new independent artists I’d never have found directly. This last session, I gave up after about ten minutes. More than half of the videos YouTube fed me were AI-produced schlock, mostly consisting of duets between various dead people. I’m so sad one of my favorite pastimes is ruined, but even more, sad for all those artists whose work is being buried in an avalanche of dreck.
Somehow, in some way, we need to grow a community of people in every free country where the ethos is to pay for, or only engage with, creators who do not use generative AI. That’s what I try to do, in large part because, as a creator myself, I believe using generative AI lacks integrity. I pay for your newsletter, Erik, not only because it’s brilliant, but because it seems like you’re not cheating me with your creations, that you’re not using AI. Now, as you said in your piece, there exists no decent AI detection software, so what needs to happen is the development of faith and trust in those you believe are not using AI - and an adjacent community of investigators who seek to expose creators who use AI. Anyway, great article; once more you freaked me out lol.
Chur,
The Delinquent Academic
I was skeptical: as a lazy parent, I know there are millions of lousy toddler videos on YouTube, even without AI. (Even Ryan's World is bad, and most are worse.) But I stand corrected: this AI stuff manages to creep me out. Scott Alexander was right about the future of movies by AI: "The dumbest possible way to do this is to ask GPT-4 to write a summary (“write the summary of a plot for a detective mystery story”), then ask it to convert the summary into a 100-point outline, then convert that into 100 minutes of a 100-minute movie, then ask Sora to generate each one-minute block." But that 'future' is now - for the youngest. I hope the market will soon offer easy ways to block that BS out. If not: tough luck for the kids of less-than-perfect parents.