Awesome that the FT recognized you.
Your observations remind me of the early 1990s IT productivity paradox debate. As Solow noted and Brynjolfsson researched, the massive IT spending of the 1980s and 90s did not seem to register in productivity. One of the proposed reasons was that industry "repaved the cow paths." Instead, business process redesign is required to realize gains from new technologies.
I think we are in the first chapter of AI - the introduction of the new technology. The next chapter will be AI-oriented re-engineering. If you are right, there will still be limited gains to be realized.
My only prediction is that there will be millions made in AI-re-engineering consulting (but such services will be called something more pretentious).
Great points. I wonder what the equivalent is for "repaved the cow paths" for the AI space.
Chatbots and phone bots - roles already scripted enough, and treated enough as a pure cost, to make it very easy.
It seems to me like if AI got to the stage where we had a real P(doom) it would also necessarily be able to do economically productive things.
Like for example, I cannot imagine how you could have high P(doom) without AI being very good at e.g. coding
Yeah, the standard view is that they're linked, but they appear pretty decoupled so far empirically. Theoretically, it makes perfect sense for them to scale together, but right now we have basically no direct economic gains and sizeable capabilities progress. E.g., my understanding is that they are already quite good for coding, just due to the fact that much coding (not all, but a lot) is piecemeal re-creation.
One example of this decoupling is that ChatGPT has made it essentially impossible to assign take home essays because of cheating but, in my opinion, has not substantially improved students' abilities to write essays or think about them. In fact, if I had to make a specific case for p(doom) it would not be Yudkowsky's AI pathogen scenarios but the rapid erosion of established social structure around learning and communication.
hmm but I also do not think there is much xrisk *at the moment*. Like, none at all. The scaremongering "AI will create viruses" paper was basically debunked by RAND itself by using ... Google as a control.
So if we are to apply the same standards when judging risks as the ones we apply to economic impact, it seems to me that so far they are both basically at 0
Yes, I agree that x-risk and utility here are related and it seems self-evident. Intelligence, power and risk are all facets of the same gem.
So at the moment, there is no x-risk, but it has created plenty of suffering already for creatives, and the risks it does pose are deepfakes and identity destruction. So it has hardly been a free lunch, but it isn't powerful enough to cause anything like doom
maybe my view is flawed, idk. But also, consider that it's much easier to conclude that "there are no economic benefits" (literally check GDP growth) than it is to disprove risks
I mean, there are a number of kinds of unpleasantness that have nothing to do with classical doom. Like we will probably get to the point where deepfakes are so widespread that we don't even bother replying to probable bot accounts: you already see this on X. Mass generation of emails, etc. is making the "write to congress" method increasingly unwieldy.
Eventually "AI to solve AI" problems largely leads you to the disempowerment scenario, since the AI that gives you your bubble of truth is also the AI that tells you whatever it wants to.
In practice, maybe it will be like "whatever you want to hear" and it will be social media on steroids. Or it might promote itself. At any rate, it is definitely a takeover, just one that we "give to it."
If you ever want to write to anyone, I highly recommend pen and paper. I have had quite a bit of success with it. Mail has become so rare now that people will actually respond to hand-written letters. It is really fantastic.
Add to this the fact that LLMs at present are hard to use. Ethan Mollick estimates it takes 10+ hours, with a single model like Claude 3.5 Sonnet, to really start making productivity gains with it. This matches my own experience: at first my interactions with GPT-4 were just messing around, but then I started to invent workflows that boosted my productivity by 8x (judged on the fact that a related task - generating a complex visualization - had taken me 8 hours before, but with GPT-4 I drew it a picture of what I wanted, gave it the data, and got my results in less than an hour).
This is very unlike all but the most specialized software. With office suite software, one can begin performing productive work within minutes of learning to navigate the interface: word processors, spreadsheets, graphic design programs.
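To give a sense of what that 8x example looked like in practice, here is a hypothetical sketch (not the actual code from that task; the file and column names are invented) of the kind of short script GPT-4 hands back once it has the rough picture and the data:

```python
# Hypothetical sketch of the kind of plotting script an LLM returns for a
# "rough picture + data -> chart" request; file and column names are invented.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")  # assumed tidy data: condition, week, score
fig, ax = plt.subplots(figsize=(8, 5))
for label, group in df.groupby("condition"):
    ax.plot(group["week"], group["score"], marker="o", label=label)
ax.set_xlabel("Week")
ax.set_ylabel("Score")
ax.set_title("Scores by condition over time")
ax.legend(title="Condition")
fig.tight_layout()
fig.savefig("scores.png", dpi=200)
```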
I came here to say exactly this. Also it's simply much too early to expect the current wave of generative AI to show up in broad economic metrics. The path from early technology to usable technology to usable products built on that technology to early adoption of those products to broad adoption to the resulting gains showing up in top-level metrics is... long (just ask all the Web 1.0 startups).
Really enjoyed this. I keep thinking of your earlier post ("Excuse me, but the industries AI is disrupting are not lucrative") and how the present applications of AI are all in relatively precarious, low-wage fields. "Disrupting" the market for basic editorial illustrations for news articles…or the market for SEO-friendly internet marketing copy…these seem to have outsized human costs relative to the economic gain. (I've already heard about jobs being eliminated because a rosy-eyed exec THINKS that AI can fully replace a human job, even though most people still struggle to prompt LLMs to produce professional outputs. In some cases, I think people will be humbled to realize that the human's job wasn't merely to "make an illustration" or "write a company blog post"—it was to get multiple people on the same page about the company's goals and vision, and then capture that in an illustration/blog post. I don't think LLMs help people align stakeholders yet!)
I do genuinely believe that AI will get better at certain forms of work—but it's not as easy as swapping out an entire person from a company or a job function they used to perform. Right now, it seems like we're in this strange place of taking on all this existential risk, consuming massive amounts of energy and emitting a lot of CO2 (the climate risk is what I'm most worried about, tbh!), and creating a lot of economic anxiety/instability for people…in the hopes that AI will deliver on something in the future!
> the present applications of AI are all in relatively precarious, low-wage fields
Not true at all. Programming is the field where AI currently shines the most. I've increased my productivity a lot with the use of GitHub Copilot and Claude. It's amazing and I can't imagine going back
Define 'productivity'. Are you shipping with better quality, or just shipping more SLOC of bugs for your coworkers to find?
good point! but I don’t see people trying to eliminate software engineering jobs bc of Copilot yet—whereas I do see people positing that it’s possible today for e.g. content writing
That's because Copilot doesn't work.
It can do about what a second- or third-year-in-career junior can do, but I would rather work with the junior, because their defect rate over time will go down. Copilot, not so much, and at the point where I have to consider looking for a second tool to make up for the first's shortcomings, I will simply stop using the tool - and, in this case, begin to specialize in cleaning up after those who decide differently.
It's harder work than it used to be. With AI you can screw up a lot further - it is better at that, I grant, if not at anything else: badly written, ill-considered code tends to break less immediately. This makes the resulting cleanup task likewise more complex.
Other than that, it produces nothing of value in my experience, which includes several months trying to figure out how to get anything good out of it - and also having had to participate in the firing of someone who was too obviously abusing it.
Your comment reminds me of something a former coworker once said about Copilot—it only works well for problems that are easily verified (where the engineer wants help writing the solution, but already knows what it should be). But this is a relatively small problem space compared to everything people want to do with software engineering!
I suspect that non–Copilot uses of AI might be similarly hard to replace…but right now software engineers have more labor power than other roles, and people tend to trust them more when they say "yeah, this can't really do my job yet"
I've found it works decently for well-defined tasks like generating a single-purpose method, troubleshooting a specific file or bug, or generating a skeleton class with known needs. It gets harder to wield it for complex debugging or problems spread across a few libraries or classes.
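To make "well-defined" concrete, here is a hypothetical example (the function and its behavior are invented for illustration) of the single-purpose-method case where I find it most reliable, precisely because the expected output is easy to verify:

```python
# Hypothetical illustration: a single-purpose helper with a fully specified
# contract, the kind of task where an assistant's output is easy to check.
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def chunked(items: Iterable[T], size: int) -> Iterator[List[T]]:
    """Yield consecutive lists of at most `size` items from `items`."""
    if size < 1:
        raise ValueError("size must be a positive integer")
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly shorter, chunk
        yield batch

# Verification is a one-liner against a known expectation:
assert list(chunked(range(7), 3)) == [[0, 1, 2], [3, 4, 5], [6]]
```

The complex-debugging case is the opposite: there is no compact spec to check the output against, so reviewing the suggestion costs about as much as writing it.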
You sound like every consultant who has ever seen my work. I got my degree in history in the 1990s. I never knew I would be expected to write code. Call me crazy, but I suspect most of the terrible stuff is from guys like me who can go months without engaging in such tasks, and when we do, we get the job done, but in a warped way because I have no idea what I am doing. For example, it took me years before I realized there was a "transpose" function.
It sounds as if you're familiar with the interface between scientists and engineers from the side opposite mine, and it is frequently fraught in large part I think because my side struggles to bear in mind that your side has entirely different purposes. A lab setup for one experiment isn't and needn't, even can't and mustn't, be the same as an industrial machine. Developing one into the other is part of the engineer's craft, but software engineering as a discipline is barely older than I am and much is not yet well reduced to practice, this understanding I think included.
I'm complaining more here about engineers who have been taught to mistake volume of output for quality. It isn't really their fault and I don't blame them for it because the industry which has the care of our discipline is, to speak with great charity, deeply conflicted if not fundamentally and deliberately corrupt in that stewardship role.
Still, and even under the enormously perverse incentives to which I allude, I would like other engineers to strive to pursue the craft.
I agree with what you say, but I have seen more of the Ivy League-educated liberal arts types who found themselves in impossible situations, and after much help through forums managed to solve a problem. I work mostly with data stuff, so I am almost competent with SQL query writing, or properly structuring and normalizing data into usable table structures, but I will never be an engineer or a software developer. The worst part about it is that I always assumed I would eventually grow out of this role, but every time I change jobs I find that I really have no choice but to get my hands dirty again (this time public sector--hundreds of Access databases, MS ACCESS!!!).
I agree with what you say, and your article reads differently after this comment, but there will inevitably be a clash between business needs and the best solution. It must be that way. All decisions or tools face constraints. That said, striving towards perfection is a noble ideal, so long as it comes in under budget.
Regarding "intelligence makes entities dangerous, which is why humans are the dominant species on Earth", in my humble opinion it's collaboration that makes us dominant. Intelligence is such a fuzzy concept that in human terms it basically *means* collaboration. We solve problems better in groups and by crystallizing knowledge about the world in sharable, actionable formats like writing, math, coding, etc.
Following this train of thought, it's easy to see how genAI is simply another form of crystallizing and sharing human knowledge, which means that LLMs make *us* more intelligent!
I think this is pretty close to a John Vervaeke-like "AI as psycho technology" view, which I can get behind.
I hadn't heard of him, I'll check out his work, thanks!
Admittedly he has a guru-ness about him, but he is at least an actual academic who works in cognitive science, so that helps. He wrote a book called "Zombies in Western Culture" but has also been working on one called "Mentoring the Machines" about AI that is supposed to be published soon.
Thanks, Johnathon!
I *just* finished reading Levy Checketts's "Poor Technology: Artificial Intelligence and the Experience of Poverty," which notes how certain understandings of intelligence - all of which have biases towards powerful classes and genders - define what counts as human, and shows ways in which the idea of artificial intelligence inherently intensifies the dehumanization of poor people. It was the first book on Artificial Intelligence I'd read that clarified exactly what made me uneasy about the whole enterprise, and it was a smooth read. Highly recommended to anyone who is interested in the topic.
But so much of human suffering comes from stupidity and incompetence rather than evil. Don't you think introducing more intelligence to this world, even if artificial, can have big upsides?
Only if the intelligence has a consideration for us. Humans, while vastly more intelligent than ants, behave in ways that greatly harm ants and would perhaps seem evil to ants.
Do we really harm ants that much? We don't actively try to anyways. And destroying ecosystems is harming us too - if we were smarter, maybe we would find ways to get what we want with less damage. But yes, I hear your point.
“Do we really harm ants that much?”
There is an aisle of the super market you might want to avoid…
ha! Good reply. Numbers, though! We may be killing many of their predators too.
That's a fair point, we are not capable of making much of a dent in ants as a whole one way or the other. If I had to bet on a species making it to the next mass extinction, it would be ants before humans.
On the other hand, I expect the ant colonies that try to live in and immediately around my house would be pretty unhappy with me if they grasped the efforts I go to to eradicate them. Hell, I rather like ants even, just not in my house or where otherwise inconvenient. While I would probably not make the ant top 10 list of most hated humans should ants become sapient, they wouldn't look favorably upon our mutual history.
We're probably quite able to make ants extinct if we really wanted to, but it's not a worthwhile use of our energy. I am hoping that we become the same to AI, and they don't Dyson-sphere the planet, or at least we get to escape from the planet before they do so.
I think even in an ideal world, we would pave over their homes. AI is smarter but is based on our training data, and all evidence is that they learn to lie and so on too, so they don't become angels.
My "ant" strategy is to hope to run away from AGI when they become powerful. Ants in the boonies are often left alone.
I only hope AGI won't treat the entire solar system as their city to pave over
until they make a Dyson sphere... Nah, I think (hope?) the AI that will win will be the one that cooperates successfully with humans. Just like we need plants and lower intelligence species to survive, heck even ants to keep our ecosystems producing food and clean water.
I don't think your assumption is warranted; we didn't cooperate with chimpanzees. Since AI can make copies of humans, there isn't really a role where humans could be useful that isn't cheaper to fill with robots.
If humans exist in any role for them, it would be decorative.
People do dumb stuff all the time with things invented by smart people that they never would have been able to do on their own.
The fundamental problem and existential risk is that no matter how smart we make AI, we will never be able to ignore the need for conscientiously developing our own wetware.
One positive thing to hope for is that AI advisors become widespread enough that they can communicate the risks so that people take measures.
It feels very much like a hail mary, given how normalcy bias seems to push people against taking it seriously, the monetary incentives, and the meaningful number of individuals who actually root for human extinction.
I'm much more concerned about human ignorance and falsehood using AI, rather than the other way around. As long as AI does not become conscious, it can't transition to AGI. We have no clue why and how we have become conscious, let alone if and how a machine could become conscious. We won't see this happening anytime soon. It is much more likely that climate change, financial collapses, wars, and polarization among peoples will make us forget about these Skynet-like existential risks.
You don't need consciousness for harm, or for instrumental convergence. I don't really get this focus on consciousness.
Machines don't cause harm by themselves, they need a conscious being that uses them to cause harm.
You are conflating the amorality of technology with the idea that consciousness is required for intelligence to occur, and it seems also with the legal abstraction of mens rea. This is not helping you make your case.
Bacteria aren't conscious as we understand it and they cause you harm. As it is, research shows these machines can deceive strategically and we are building them as agentic now.
That bacteria aren't a form of primitive consciousness is highly questionable. There are good reasons to believe they are. You can find nice research investigating this online. If we are building harming agents it is precisely because we are conscious.
I'm quite positive on the idea of bacteria being agentic, sure, but then AI patterned after human data can be agentic too. And imo arguing that we build agents because we are conscious doesn't mean the agents don't do something else not in our favor later.
It's like saying that "even if AI kills us, it was because we, as conscious beings, built them that way."
Well, okay. But we are still dead.
You confuse agency with consciousness. An agent just does something and need not be conscious. But will and intentionality need consciousness. If an agent causes harm, say CrowdStrike shutting down airports, it is because of an IT bug, still originating from the human, conscious side. There is no intention and will to do so.
AI to me is a reverse Pascal's wager: it doesn't matter what the upside is, because the downside is complete ruin.
Any trader or businessman worth his salt knows that such a risk is never, ever worth taking, and we're taking it with our entire existence.
Not great, man. Not great.
But it's also inevitable.
This whole AI thing has accelerated me into a sort of nihilism, and I hate it. I don't want to be this way. But this is the proof right here. Humans just can't help ourselves. We will always pursue technological progress, no matter what.
This is why I agree with the idea that the biggest risk of AI isn't necessarily existential in the sense of our physical safety, but our basic emotional experience. A profound loss of meaning and resulting nihilism is more likely to me than a paperclip maximizer scenario.
Join us in #PauseAI. I also think that we can expand into building ways to survive this, inasmuch as it is possible.
Yes, the downside is irreversible and it seems unable to be stopped. But maybe we can build edges around where we can extend, or survive.
How do we stop the Mainland Chinese from ignoring this "pause"? The People's Republic has a long-established history of signing agreements it has no intention of following. The group of modern countries often falsely referred to as the "West" may pause, but The Supreme Leader Xi Jinping almost certainly will not, regardless of what public stances are taken. It would be more realistic to make AI development as transparent as possible so everyone knows where we stand, and there exists widely distributed documentation on anything that might go sideways. For better or for worse, the new cold war will define anything we do going forward. Pausing any technology, military or potentially military, simply cannot be done so long as the largest country in the world is run by a conspiratorial single-party government.
Ok, so when will AI be able to fold laundry or wash dishes?
When rich people find it inconvenient or undesirable to hire human labor. Much like how care-based tech in Japan benefitted from xenophobic attitudes toward elder care workers.
The naivety to expect financial gains from a complicated technology in its infancy! But I guess stakeholders (or should that be stockholders) are going to bitch like they always do. I do find the whole discussion around consciousness and AI to be fascinating. I don't take the Materialist view, so I don't believe that consciousness can arise from man-made materials. What can arise is a simulation of a mind, an intelligence with mimicry of the morals that it was programmed with. And I think that that is much more dangerous than a new conscious species. You can't negotiate or appeal to some thing's morals or ethics if it's basically a psychopathic robot.
I offer as a tentative hypothesis that the benefits of deployed AI are currently accruing entirely to the individual users of AI. Many knowledge workers who are benefitting from AI work queue-based jobs: a customer raises an issue/a manager assigns a task, the employee addresses the issue/completes the task. Enhanced productivity means the queue gets cleared faster/the task gets completed faster. Then there is no more work until the next issue is raised/task gets assigned. The customer generating issues / manager's time and attention to inspect and approve work can't be increased by the worker using the AI system. The slack capacity generated then gets spent as leisure: listening to music, playing video games, watching YouTube videos, and reading and commenting on Substack posts (shh...). This effectively raises the user's compensation per hour of actual work but does not benefit the company.
I base this on the fact that everyone I know who is regularly using AI tools at work, myself included, is having an absolute blast.
Wait until they get the AI to address slack capacity and it comes up with ridiculous black-box solutions that no one understands. (In the in-person retail world pre-AI this developed into real-time scheduling that neglects the basics of daily human life.)
This kind of thing is why the entire AI/Productivity discussion icks me out to no end.
I am not convinced that we incurred an increase in P(doom) that isn't proportional to the increase in productivity. Naively, it seems that what drives increases in P(doom) and in productivity the most is fundamentally the same thing: high intelligence+agency AIs. So unless some argument is provided for why P(doom) increased significantly during, say, the last 6 months, I would default to taking evidence of lack of usefulness of AI as also being evidence that P(doom) hasn't increased very much either. Where am I wrong here?
This is wrong. Intelligence does not make humans dangerous. It is desire that makes humans dangerous. AI does not have desire, and probably cannot have it if desire is the result of biological need, which AI won't have.
AI quite effectively seems to manifest desire because...drumroll...it is made from our data!
It can mimic the expressions that go along with desires but it doesn’t have a body. Desire and emotion are physiological, not merely mental. Probably all drives are. It seems very unlikely it would have unity of consciousness for this reason, or a self. That also seems unlikely because it has no capacity to perceive. So there is no perceiver. It would be one output after another (though it could check past outputs). So it could go awry but in the same way a computer program could. It would not acquire self-consciousness and act on the basis of desires.
I think this is like arguing that planes can't fly because they don't have feathered wings.
Bacteria, or anthills for that matter, don't have a single "body" as we see it, but they act in cooperative, agentic ways which demonstrate intelligence and, yes, harm to those who get in their way.
The entire "it only mimics but doesn't actually mean it" means almost nothing. If someone mimics a murderer and simulates murdering intentions, the net effect is a murderer no matter how many angels on the pin we need to dance on "intent" here.
Agency without purpose is not agency. AI is not a self, it is not an organism, it has no survival drive, feels nothing.
Surely it can perform actions. It may perform disastrous ones. These actions will not be anything like the actions of an agent.
A virus can kill. A boulder can kill. These are not murders. Only a reasoning creature can murder. Even a wolf does not murder as we typically understand it.
What kind of motive could an entity have which has no desires, psychic emotion or bodily feeling or perceptual capacity?
Our bodies create these.
It will do things but without intention. But elevators also do things.
The fact it may do so independently will perhaps look agent-like, and the multiple options and unpredictability are going to make us feel like we're dealing with an agent, but it doesn't WANT anything, as it cannot want, and so does not will anything. Perhaps we will need another category for this kind of automaton-like activity.
I'm not sure how any of this matters given that the end effect of being dead at the hands of an entity capable of planning is the same thing. I feel the entire argument about purpose is a red herring; even if AI did not independently come up with purpose (not sure, given that they have human training data), you just need one person to create an agent with the purpose of "replicate and create replicators."
Note that all it took to get a model to do insider trading was "we need to survive, we are depending on you", after all. Once the initial push is sent, it can set subgoals just fine.
It doesn't make AI less dangerous. That is completely true.
Is it possible that we've already developed "intelligent species" that are dangerous to us as humans? To wit: mega-corporations, the global economy, maybe even governments (although governments can be more intentional and are locked in to fewer constraints than entities whose ultimate function is to produce financial profit.) All organizations have survival as their ultimate goal.
Governments have fewer constraints than corporate entities? I have no idea what you are smoking, but I want some.
There are a few things to mull over here that sometimes don't occur to people. Governments are different types of systems than corporations. Governments and their departments have functions other than profit: they have purposes that are explicitly aimed at the public benefit or common good. They aren't beholden to profit and stock-value-seeking boards of directors, so in that regard there's more freedom to try things out. But at the same time, governments have a wider and more complex scope of obligations and probably more internal tensions than top-down corporations.
I mean, they're the ones financing AI.
Interesting analysis on AI's economic impact and potential risks. Your point about introducing existential risks without clear economic benefits is particularly thought-provoking. While macroeconomic data isn't showing significant AI impact yet, I wonder if we're overlooking more granular effects. In my experience, AI has massively boosted my personal productivity and helped streamline business operations. I've observed similar benefits among other individuals and small businesses.
Could we be in a "productivity paradox" similar to the early days of computing, where benefits accrue at a grassroots level before showing in broader metrics? If so, how might this affect your risk-benefit analysis? Does the potential for future economic impact change the calculus of the "shitty trade" you describe, or does it perhaps make the existential risk even more concerning if we're becoming increasingly reliant on AI before seeing clear societal benefits?
Personally I think we are far too early to tell on where any of this takes us, and I can see multiple timelines opening up, from the far end of the doomer scale, to the abundant future that some proselytise. I usually tend to think that the reality will land somewhere in the middle, but I expect the journey to get there to be extremely turbulent.
*You're* a qualia-less insectoid mind! So there!