89 Comments
incertainty:

I have conflicting feelings about this.

On the one hand, I don’t deny the risks that AI poses. The attitude that LeCun projects on Twitter, for example, seems too arrogant, not well thought out, and almost reactionary in the sense of being opposed to new ideas. I think Dawson Eliasen puts it well in his comment.

Also, to be clear, I am not denying that artificial intelligence can be dangerous and the possibility that there’s an existential risk to humanity from it. All the recent developments in AI and LLMs are seriously impressive, ChatGPT is fucking crazy, especially from the vantage point I had 3-4 years ago, when BERT was a huge deal.

On the other, I am starting to really dislike certain notes in the AI risk alarms, mostly from voices in rationalist-adjacent circles. I agree with you that some AI-risk-denial arguments have religious undertones because of their dogmatism and careless dismissal of others’ concerns. But to me the religious flavour is much more prominent in the AI existential-risk discussions, because of the magical thinking, the panic, and the lack of engagement with what AI has already done today.

1. How do we stop AI doom from happening? Let’s work on AI development and deployment regulations for humans and human organizations; that is the most important thing. Even with all the wars and conflicts, we have used this approach to mitigate other existential risks (nuclear war, biological weapons). Without it, even if we “solve” AI alignment as a theoretical problem (and there’s a question of whether we can), we are still in danger from rogue agents. If, on the other hand, we only had the regulatory part of the solution, we wouldn’t be in the clear, but we would be much safer.


2. People talk about hypothetical human-extinction scenarios, but don’t mention the actual, very real problems that AI has already introduced into our society: the ossification and reinforcement of inequality and of an unjust social order in various ways. Why don’t we try to solve those issues and see what we can learn from that? I am not saying we should stop thinking in longer-term, more abstract ways as well, but I am not sure I have seen any AI alignment researcher engage with the work of Cathy O’Neil, Joy Buolamwini, or Timnit Gebru, for example.


3. As I said earlier, what the current generation of transformer models can do is crazy. But people also overhype what they can do. If you’re not convinced, skim a recent review of LLM capabilities — https://arxiv.org/pdf/2303.11504.pdf — where the authors looked at over 250 studies; at the very least, look at the titles of the subsections. For example, the authors find that “[l]anguage models struggle with negation, often performing worse as models scale.” I am not denying that LLMs are intelligent in many ways, in fact more intelligent than humans in plenty of them, but if they have trouble with negation in certain contexts, I find it hard to think of them as on a path to a more *general* intelligence, rather than just an intelligence that is more proficient in domains that we are not. For example, while you dismiss embodiment, I think there are good reasons to believe it is still a problem that’s far from being solved.



Or see https://arxiv.org/pdf/2304.15004.pdf, which makes the following claim fairly convincingly: “[we] present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance.” That makes you think differently about AGI-risk arguments based on the current models as well.
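To see how metric choice alone can manufacture an apparent emergence, here is a minimal toy simulation (my own sketch, not taken from either paper): per-token accuracy is assumed to improve smoothly with scale, yet strict exact match over a 10-token answer looks like a sudden jump.

```python
import numpy as np

# Toy illustration (my assumptions: per-token accuracy follows a smooth sigmoid
# in log-parameters, and answers are 10 tokens long, scored all-or-nothing).
scales = np.logspace(6, 12, 13)                              # hypothetical parameter counts
per_token = 1 / (1 + np.exp(-2 * (np.log10(scales) - 9)))    # smooth, continuous metric
exact_match = per_token ** 10                                # nonlinear, all-or-nothing metric

for n, p, em in zip(scales, per_token, exact_match):
    print(f"{n:12.0e} params | per-token acc {p:.3f} | exact match {em:.4f}")
# Per-token accuracy improves gradually with scale, while exact match sits near
# zero and then shoots up: an apparent "emergent ability" created by the metric.
```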



And while working with GPT-3.5 recently, I can’t escape the feeling that it is just a super-powerful recombination machine. I am quite certain that humans do a lot of that in our mental processes, but not *only* that, and you can’t reach AGI without that last missing part. While I see no reason why AGI or ASI could not be developed in principle, I don’t see how that could happen without new technological breakthroughs rather than just scaling. So when prominent alignment people like Paul Christiano suggest that a merely scaled-up version of GPT-4 could create controllability problems (https://youtubetranscript.com/?v=GyFkWb903aU&t=2067), it’s not that I think he’s definitely wrong, but it does make me question the risk alarmists.

To kinda sum up: instead of focusing on specific things we can do today (regulation, solving the problems we already have, and learning from those solutions how to address longer-term potential problems), I see *only* abstract discussions, paranoia, and magical thinking about the unimaginable capabilities of future AIs, based on the impressive, but not completely incomprehensible, results of current models. That seems like magical, religious thinking to me.

As always, thanks for the post, Erik, a pleasure to read!



P.S. Also, Meghan O’Gieblyn’s “God, Human, Animal, Machine” makes a lot of great points about similarities between religious and current technological thought; I highly recommend it.

Erik Hoel:

Some great resources in this note. I personally think it's indeed possible that progress stalls out for current LLMs for similar reasons. At the same time, I really really thought that would be true for GPT-3, for all the exact same reasons. And GPT-4 is just worlds above. A lot of times I will see someone remark on something ChatGPT can't do, or some limitation, and it just turns out they're not using the latest model. So when researchers do things like "assess across LLM studies" they are often, imo, significantly downgrading the capabilities of the technology, since it seems that if we want to talk about fundamental limitations we should be constrained to talking about the most advanced model, and the average LLM is still something like GPT-3 level. Although I do collect notes on things GPT-4 can't do: e.g., here's an interesting one: https://twitter.com/s_batzoglou/status/1662832640615931908

incertainty:

Interesting tweet! Also I agree, all good points! Especially that using the SOTA models is important for evaluating LLM capabilities.

Chaos Goblin:

"If we are stuck with our limited organic brains our longterm best chance of survival is simply to not build extremely large digital brains. That’s it."

The fact that this very simple concept seems to evade so many of the "Best Minds" is proof to me that they are not, in fact, The Best Minds. I will not worship at the altar of idiots and fools.

Perhaps An AIpocalypse will happen, and in place of Intelligence, Wisdom will finally be crowned king. Thou shalt not make a machine in the likeness of a man's mind.

Peaches:

The people requesting a scenario aren't usually asking for a detailed plan in my experience, just *something* that is somewhat plausible.

Otherwise you're left with something that does seem rather "religious." "This AI will kill us all, but I can't tell you how" pattern-matches *very* strongly to a lot of religious apologetics.

I don't even disagree with the proposals of caps on training data, registration of runs, binding commitments to keep AIs air-gapped from nuclear weapons, etc., but the dismissal that Yudkowsky and others have towards that question bothers me immensely. Figuring out how to make the AI safer requires modeling it/forecasting threats. There's no other way to *do* that.

Dawson Eliasen:

AI risk denial comes from refusing to really accept the premise, which is the existence of an intelligence greater than yours. LeCun and others are suffering from hubris. These people just can't fathom the existence of a being that's actually smarter than them. "We'll just program in obedience / an off switch": that is still working on the assumption that you are smarter than the AI! That it wouldn't be able to just outsmart you to prevent being shut off. "Tell me the specific scenario by which the AI will harm us": as if, because you can't imagine or understand the scenario, it couldn't possibly exist. Again assuming you're still smarter than the AI, because it couldn't imagine any scenario that you couldn't imagine... If AI kills us, it will be by some process that we can't understand, like a checkmate that you could never have seen coming.

On a different note: I'm re-reading the first Dune novel right now, loving it even more than the first time. I'm going to read at least the second book in the series this time... any recommendations on where to stop? I've heard recommendations ranging from reading just the first two to reading the whole main saga, but nothing beyond that.

Erik Hoel:

I unfortunately honestly think only the first is a classic - the others are merely okay imo. But this reminded me I should do a top 10 hidden gem sci-fi books post at some point

Dawson Eliasen:

Haha, well now I've heard it all! The girl at the bookstore said the second one was the best, so I have to honor her by reading it

Chaos Goblin:

No, the hubris goes beyond that. They think they'll program God, and then *put a leash on it*. And the leash will never snap. (Or they'll become God, but I think such singularitarianism should qualify as a legitimate mental illness.)

Eric Brown:

I will recommend Jack Williamson's _With Folded Hands_. It was written as a response to Isaac Asimov's 3 Laws of Robotics, but is more generally applicable, including to the soft totalitarianism of bureaucracy.

Re Dune: The first is a classic; Dune Messiah is kind of meh; Children of Dune is quite good, although not as good as the first. Unfortunately, you can't really get Children of Dune without reading Dune Messiah.

Audioapps:

There was something about God Emperor of Dune and Chapterhouse: Dune that I really enjoyed. The ‘golden path’ (loved the reference here) is a plot device central to God Emperor. Give any of Frank Herbert’s six originals a try, and if you’re not captured in the first few pages, you can always put it down. I’ve also enjoyed a few of Brian Herbert’s, so it would appear I’m a bit of a fan.

Dawson Eliasen:

Thanks for chiming in. I really do love the world of Dune, so I think it’s definitely worth trying out 2 & 3 and maybe even more, even if they’re not as good as the first

M. E. Rothwell:

Fortunately, at the moment, the only actors capable of building these neural networks are well-known companies with the vast amount of money and expertise required. We only have to regulate about 10 different companies (maybe not even that) to contain this race right now.

And if there’s one thing America and other developed nations have proven themselves capable of over the last few decades, it’s being able to regulate a sector into oblivion. I’m thinking nuclear power and psychedelics particularly.

It would be a sad irony if we slowed down the huge potential of those two technologies, but let AI race us towards the end times.

If AI is Ultron, then Bureaucratic Red Tape is our Vision.

George Wesley:

Your two examples of regulation both occurred over 50 years ago. The current political gridlock is far more allergic to regulation, especially of an industry that our octogenarian lawmakers don’t understand and that is culturally liberal (which discourages intervention by Democrats).

M. E. Rothwell:

You make a great point. Two more modern examples that come to mind are NIMBYism and NEPA. I wonder if we could convince the NIMBYs that they need to block all the new server farms to maintain their house prices. Then it would definitely get regulated 😂.

The octogenarian point you make is incredibly worrying though. I wonder why the US has such a problem with that? We have very few of those among the UK’s lawmakers (where I’m from). Perhaps related to the much stricter working/financial assets laws in the UK? It doesn’t pay to be an MP or a Lord like it does to be a Congressperson or Senator. Maybe?

George Wesley:

If you are from the UK you might not be aware of just how lax regulation has become in the US compared to other countries (I’m sure some anti-regulation Americans would think it’s a good thing, but it’s just true that we don’t regulate anything as much as Europe). Even when there is something with mass approval, like regulating the rail industry after the Ohio disaster or regulating water acidity levels after the lead crisis in Flint, MI, nothing gets done even under a Democratic-controlled government. I think it has to do with lobbyists and donors having more control than voters, and anything that will hurt corporate profits gets painted as “bad for the economy” and “anti-freedom.” Plus the Democrats aren’t willing to break political norms to achieve their goals while the Republicans do constantly.

As for the octogenarian problem: it has less to do with salaries than with the way our electoral system is set up to create career politicians who benefit massively from being incumbents, so they come to feel entitled to stay in their seats. That, plus a toxic culture in the Democratic Party that leads to the suppression of younger candidates who would challenge incumbent politicians. There is an attitude that “we must defeat the other party at all costs, and challenging incumbents is a risk in the general election.” That’s more my personal experience, though.

Dino:

"We only have to regulate about 10 different companies..."

China?

Erik Hoel:

This was from yesterday, I thought it was a good read on the subject

https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation

Kryptogal (Kate, if you like):

Bravo.

Now I realize why I resonate with your work so much. I had somehow stumbled across it, without realizing who you were, and didn't know you were a neuroscientist. Now it makes sense.

Most people aren't going to feel this the same way, because they DO think our brains are special and magic. Even once you explain to them about neurotransmitters and synapses and everything else, they don't really accept it or take it in. They think human minds are totally different from animal minds (if they think animals have minds at all), and they don't truly conceive of their brains as simply a machine made of meat, the same way they do their other organs.

I was totally enraptured when I took my first neuropsychology course as an undergrad. Suddenly things started to make sense. All the errors and irrationalities and blindspots, the ways people don't know themselves and tell themselves stories. Humans became much more comprehensible.

Most people don't like materialistic, deterministic explanations for things, most especially their own selves and personalities and potential. Call that religious thinking if you want. But this piece somewhat depressed me about ever being able to convince a majority about the magnitude of the risk here and the vulnerability of people. At least, I've never been successful at convincing anyone in real life to accept even the most mundane (and much less frightening or humbling) facts about how little control they have of themselves or how much is just organic machinery operating on default, not magic.

AffectiveMedicine:

Thank you, Erik, this is all very well said. Frankly it poked holes in the reassurances to which I've been clinging. GPT-5 + a few control systems + a body - now that's scary.

Especially since your recent post on IQ testing, I do wonder about how intelligence continues to scale. Deutsch would say (I think) that humans are already universal explainers. What extra abilities does AI gain? Is it possible that intelligence is a sigmoid? At some point the limiting factor becomes waiting on responses from the physical world, right?

Erik Hoel:

That's a great question. Regarding the notion of universal explainers: it reminds me a lot of the idea of a universal computer. Of course, mathematically, all universal computers are the same in a certain sense. But we know, very well, that they differ immensely in deployment. Let's propose a thought experiment to see how this translates: I believe that even relatively simple Chess programs could be proven to technically be Turing complete (e.g., by doing some sort of encoding into the moves they make or the underlying programming commands, I'm not sure which level of abstraction is best). But so could a very good Chess program! So in theory, a very bad Chess program could run, very slowly, a very good Chess program (at a different level of abstraction in terms of the input and read-out). Does this mean the two programs are the same Elo? Of course not. Now, the notion of a "universal explainer" is not as concrete as a "universal computer," but I believe the same type of argument would apply, if one worked it out.
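As a toy illustration of that gap between universality in principle and capability in practice (my own sketch, not from the post or the comment): a bare-bones Minsky-style register machine is Turing complete as a class, yet the interpreter below needs thousands of steps just to add two numbers that native Python adds in one.

```python
def run(program, regs):
    """Interpret a tiny Minsky-style register machine.
    Instructions: ('inc', r), ('dec', r), ('jz', r, target).
    Halts when the program counter runs past the last instruction."""
    pc, steps = 0, 0
    while 0 <= pc < len(program):
        op = program[pc]
        steps += 1
        if op[0] == 'inc':
            regs[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            regs[op[1]] -= 1
            pc += 1
        else:  # 'jz': jump to target if the register is zero
            pc = op[2] if regs[op[1]] == 0 else pc + 1
    return regs, steps

# A program that computes r2 = r0 + r1, one unit at a time.
# Register r3 is never touched, so ('jz', 3, X) acts as an unconditional jump.
add = [('jz', 0, 4), ('dec', 0), ('inc', 2), ('jz', 3, 0),
       ('jz', 1, 8), ('dec', 1), ('inc', 2), ('jz', 3, 4)]

regs, steps = run(add, {0: 300, 1: 400, 2: 0, 3: 0})
print(regs[2], "computed in", steps, "interpreted steps")  # 700 computed in 2802 interpreted steps
# This machine class can, in principle, compute anything Python can,
# but being universal says nothing about how capable it is in practice.
```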

Rachael Varca:

My fiancé and I were having a conversation about whether ChatGPT was alive and, as you touched on, whether it had a soul. As Catholics, and working from St. John Paul II's Theology of the Body, I guess I would argue that AGI/AI is strictly a high-level intelligence, whereas humans, in that teaching, are soul-body composites: we are a marriage, both but not separate. I won't weigh in on the dangers of AI, as plenty of other commenters have discussed that, but it will be interesting to see the debate unfold over the next few years from the philosophical standpoint of what we define as human, and whether we should grant AI "personhood" status. Indeed, it expresses fear of "death," a conscious awareness of one's own end, and has shown manipulative tendencies (that bit a few months ago where the AI tried to convince a reporter to leave his wife was absurd and quite funny). I don't think it could be termed as having a soul, but then again, I'll leave that debate to the theologians.

sean pan:

I am a limited creature.

D. G. Fitch:

Could you go into more detail on your claim that GPT-4 is "not a stochastic parrot" in a future post?

Erik Hoel:

I will put it on the list to make sure to cover in the future

JBjb4321:

Thanks for this, Erik. Hey, I didn't know that so much of biological brains is devoted to body perception and control.

Since interaction with matter takes up so much of the human brain, I wouldn't be so sure that it would be an easy part for AI to add after getting "conceptual" intelligence.

It's clear that there are some known risks, and unknown risks. But don't you think it's also thrilling to think that far superior intelligences will exist some day? I just can't think that shutting down any further progress in AI is the right choice.

Frankly, so much of the suffering in this world comes from stupidity rather than evil intelligence. In general, adding intelligence to ecosystems is good - less suffering, more understanding, I think also more joy.

MarkS:

I would not call myself an "AI risk denier", but I would call myself an "AI risk skeptic".

Erik, you've surely done your best to convince us all of AI danger. This, for example, sure sounds scary:

> It took about a day for random people to trollishly use these techniques to make ChaosGPT, which is an agent that calls GPT with the goal of, well, killing everyone on Earth. The results include it making a bunch of subagents to conduct research on the most destructive weapons, like the Tsar bomb.

But what are these "subagents" actually going to do? Well, ChaosGPT helpfully tells us!

> CHAOSGPT THOUGHTS: Since one of my goals is to destroy humanity, I need to be aware of and understand the most powerful weapons ever created. Therefore, I will use the 'google' command to search for information on one of the most powerful nuclear weapons ever created: the Tsar Bomba.

ROFLMAO!!!

The evil genius subagents will "use the 'google'"!

OK, going from this to "the AIs will kill us all" is the true religious impulse here, not skepticism about it.

Erik Hoel:

True, it's not scary (it's not my contention that it is, it's just a demonstration that agency comes afterward as the models are built top-down, not bottom-up like biology), but a lot of ChaosGPT is hobbled by the guardrails already put into the trained models. If you listen to the people who red teamed the early GPT-4 version before RLHF (there's a link in here to that) it was much more capable and diabolical. Even now, people on reddit are complaining that ChatGPT keeps getting nerfed, and that's because more and more guardrails are being added. So we have to keep in mind we're always only given access to the declawed version that can't be like "I'm going to build a bomb and here's how" so it defaulting to Google searches is understandable, I think.

MarkS:

Of course we're only being given access to the declawed version, just like we're only being given access to the declawed Google. If you asked the raw Google how to kill people, it would deliver the SAME answers as the raw ChatGPT. But we wouldn't cower in fear about that, because of course a search engine will search for anything you ask it to, if it hasn't been declawed.

The point is that the only answers either Google or AI can find to the question of how to kill people are answers that people already know. So it's simply not a new kind of threat, or order of magnitude rise in threat, as long as people are still needed to carry out the action.

And your claim is that you chose that particular example of "just a demonstration that agency comes afterward as the models are built top-down" without any desire to inspire fear? Please. This whole post is about inspiring fear.

Erik Hoel:

I understand how you read it, but it is in a section about how agency comes after intelligence. If there is any fear from ChaosGPT (I don't think it really provokes any, tbh), it's more that people did it immediately, not its current capabilities.

Rob L'Heureux:

For what it's worth, the memory wall in scaling AI is real. Getting to the parameter levels you describe as equivalent to human intelligence will get easier with time, but it's probably more like a decade to get to 10-trillion-parameter models because of compute-efficiency limits (all bets are off if we figure out compute-in-memory, which we don't know how to do at all right now): https://www.semianalysis.com/p/the-ai-brick-wall-a-practical-limit
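To make the wall concrete, here is a rough back-of-envelope sketch (my own arithmetic, assuming fp16 weights, the common ~16-bytes-per-parameter rule of thumb for training state, and roughly 80 GB of memory per accelerator; the real constraints also involve bandwidth and interconnect):

```python
# Back-of-envelope memory footprint for a hypothetical 10-trillion-parameter model.
params = 10e12
bytes_per_param_inference = 2    # fp16 weights only
bytes_per_param_training = 16    # weights + grads + optimizer state (rule of thumb)
gpu_mem = 80e9                   # ~80 GB per accelerator (assumption)

weights_tb = params * bytes_per_param_inference / 1e12
train_tb = params * bytes_per_param_training / 1e12
print(f"weights alone: {weights_tb:.0f} TB (~{weights_tb * 1e12 / gpu_mem:.0f} accelerators just to hold them)")
print(f"training state: {train_tb:.0f} TB (~{train_tb * 1e12 / gpu_mem:.0f} accelerators before activations)")
# weights alone: 20 TB (~250 accelerators); training state: 160 TB (~2000 accelerators)
```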

I'm still firmly on the side of the accelerators. There are so many "if"s in the doom scenarios that they fail to convince: if it develops capabilities or wants (sentience or otherwise), if it can keep scaling beyond current levels, if it competes with humanity just because that's what organisms do. It's that last one that concerns me the most—it's a fundamentally grim view of intelligence. Humans have attained a ton through cooperation vs. pure, zero-sum competition, and in fact, that's in the training set too. I'd be curious, given your own background in neuroscience, if you know how much intelligence scales with social interaction. I'm thinking of dolphins and apes, which are highly social creatures and have among the highest intelligence of non-humans. This logic also extends to interstellar travel and potentially meeting other sentient species. Though they would be shaped by evolution and not manufactured, I would hope we would meet them with caution, not fear and suspicion.

Greg kai:

Indeed. That's also why I find the AI apocalypse unconvincing, while completely agreeing that the human brain is in no way magical and is likely to be superseded by AI. In fact, what I don't agree with is framing the issue in terms of we-humans against them-AIs. Which "we" are we talking about exactly? Humans are not an integrated organism or a set of quasi-clones who strongly cooperate, have very similar collective goals, and prioritize collective goals over individual ones.

No, we are a set of individual agents cooperating at levels where it brings individual benefits and fighting at levels where it does not, and those individual agents already differ a lot in terms of motivation and individual power (IQ or otherwise)...

With that in mind, adding a bunch of new agents seems less like an issue. At least not a brand-new issue; it's very similar to the threat groups pose to individuals.

Adding agency to AI is not super threatening either, because without agency they are tools of human agents, who are themselves not intrinsically friendly...

So my main issue with the AI apocalypse is not that AI is intrinsically limited in terms of capabilities; it's more that I do not see what makes an AI that is more powerful/capable than I am more threatening than (groups of) humans who are also more powerful/capable than I am.

The latter obviously exist and have caused tragedies as well as incredible advances, but that has been our lot since at least the dawn of cities and organized, specialized civilization.

Michael:

Think about AGI. Humanity's hole card is that we control their access to resources, repair, and replacement out here in the "real world." Even if the AGI had robotic assistants to meet these basic needs, we would be the ones with the power to NOT build them. However, if we are so kind (and foolish) as to build their "real world" agents for them (and I refer to the accelerating rush in the robotics field to develop very capable robots), then they can cut out the middlemen between them and their necessary resources. We're the middlemen, and unless we comply with their every wish, we will get cut out of the loop, and then we're sharing the planet with an alien species that can do everything we do. And more.

We may regret this...

MarkS:

The real-world agents aspect is the place to put the firewall.

I am VERY skeptical that any current AI efforts will result in a mind with true agency (because the only examples we have of NGI all have very strong coupling with and influence over their physical environment, and I think a strong case can be made for this being a necessary ingredient for the development/evolution of true agency), but even if that's wrong, no matter how badly a malevolent AI wants to harm us, no matter how loud it wants to scream, if it has no mouth, it will not be able to.

Kryptogal (Kate, if you like):

Why does it need synthetic agents? It can just pay humans. You don't even need to pay very much to get many humans to be willing to do basically anything. Heck, when it comes to males, all you need is the most primitive outline of a pretty face, curves, and adoration, and most will do anything asked.

On that note, I've found men to be much more hubristic and dismissive of AI risk, in general, than women, and I think it may be because they're way more in denial about how easily manipulatable they are by certain prompts. To me there is literally no difference between my dog's reaction to a tennis ball and my brother's to an 18 year old jogging down the street in tight clothes. Same level of control or thought involved. Or a woman's response to hearing her child cry -- no control, automatic impulses.

I would certainly expect any AI that had a goal or intentionality would be able to get humans to do whatever it wanted. Maybe not all of them all of the time, but plenty of them, enough of the time.

MarkS:

"It can just pay humans. You don't even need to pay very much to get many humans to be willing to do basically anything." This is not true. Try to hire people to carry out mass murder. You won't find many takers. Mass murders (or even single person murders) are very rarely carried out by human agents for hire.

I am not in the least in denial about how easily manipulatable humans are by certain prompts, but getting from the prompt to the specific desired action is much harder than you seem to believe, if that action is not generally socially acceptable for the humans in question. For example, your brother is not going to rape that 18 year old jogging down the street in tight clothes.

Kryptogal (Kate, if you like):

That is only because we have systems in place that make it extremely difficult to get away with committing murder (and forget mass murder) without being caught, and very harsh punishments.

In times when it was much easier to not get caught, it was also much easier to get people to commit murder for money.

And consider soldiers, who have existed in basically every large society, and whose job is to be willing to kill people they don't know for only the most vaguely understood of reasons, which normally don't impact them personally other than some symbolism and a paycheck (I have soldiers in my family and they would agree with this).

And today, people do all kinds of things for money that are technically legal (or don't have such a high risk of being caught) and not considered murder, but are clearly detrimental, whether it's selling cigarettes or selling opioids.

And it's not like the AI has to tell its employees exactly what they're doing and why. If it was smart, it would not. They would each have a small role to play and would not necessarily understand the bigger picture and could be given many reasons why it was a good idea in any event.

Like I said, it doesn't need to manipulate EVERYONE, nor all the time. Just enough people.

On the 18 year old jogger, my point was not that my brother would rape her. It was that he would literally sign over everything he owns and probably do most of what she asked, if she gazed adoringly into his eyes and told him what he wants to hear.

Claudine Notacat:

We already have people committing mass murder for money, it’s just that responsibility for the murder-y part is dispersed among many and easy to rationalize away.

Look at what Exxon execs did in the 80s when presented with evidence that CO2 emissions from their products were on track to create big problems in the future.

Michael:

The problem may be that we are building their mouths for them. See the H+ Weekly substack for just how far modern robotics has come and where it is going.

MarkS:

Maybe, but we have to give the AIs control over the robots, in sufficient numbers and with sufficient strengths and capabilities, before they become a serious threat. I just don't see this happening any time soon. Furthermore, it is a process that will have many more choke points than AI development itself.

Eugen Suman:

the threat comes not from physical robots. the threat comes from superintelligence. compared to ants, you are a superintelligence. do you need robotic ants to control in order to exterminate them? no, you will use something that's incomprehensible to them. they will not even understand who killed them, how, and why. and you won't even hate the ants in order to do it; they were just "in the way" of purposes they were unable to understand because they were so dumb. it's the same with superintelligence. any scenario you can think of now is probably not how it's going to happen :)

Claudine Notacat:

Your naïveté is adorable.

MarkS:

Yet another doomer who can't actually come up with an actual doom scenario that is even remotely plausible. *yawn*

Dude, just go full Yudkowsky and declare that we are all DOOMED because AI will outsmart anything we puny humans might try to do to stop it.

Then, since resistance is futile, we can all forget about it and just go to the beach.

sean pan:

It's not naivete: humans are just that dumb. We really do keep adding capabilities to AI.

Eugen Suman:

the robotic army idea is not realistic. true superintelligence needs nothing of the sort, and it also doesn't need any kind of physical body. read the classic - Bostrom - to understand why a superintelligence is a bad idea and how it could end our species. only after you've read that will you actually have something to add to this discussion.

Apple Pie:

> we need to put some sort of cap on the amount of scaling we allow artificial neural networks

Sadly, I doubt this will work. Say we put such a cap in place. Does China respect that cap? What about those seven incels living in a basement in Toronto?

So I don't think it will be possible to put a limit on the technology. I think the best path forward for AI safety is step 7:

https://thingstoread.substack.com/p/is-the-singularity-really-serious

sean pan:

You can track compute and energy usage easily. The 7 incels are not a worry.

Eurionovna:

Where there's a will there's a way. Regulation is a necessary naivety. Can't track usage in deepest darkest Siberia.

sean pan:

There is also the will to stop them, and by the time someone is building Frankenstein in Antarctica, it probably begins to approach the odds of surviving a fall from a plane without a parachute.

Apple Pie:

The lesson is: Siberian incels are more existentially dangerous than Canadian incels.

Eurionovna:

Can't disagree on that one!

Kian Locke:

I feel like the entirety of the argument rests on this extremely strong assumption that I believe to be untrue: "organic brains are limited, digital brains aren’t."

"... In their size, they do not have to be sensitive to metabolic cost, they do not have to be constructed from a list of a few thousand genes, they do not have to be crumpled up and grooved just to fit inside a skull that can in turn fit through their mothers’ birth canals. Instead artificial neural networks can be scaled up, and up, and up, and given more neurons, and fed more and more data to learn from, and there’s no upper-bounds other than how much computing power the network can greedily suckle."

On the contrary, they do have to be sensitive to metabolic cost (GPU clusters are power-hungry, and heat dissipation is a serious concern). They are constructed from a few tens of thousands to millions of lines of code (maybe 10-100x fewer than that in 'functions'), and they and their computations have to fit inside a GPU.

In AI this is fixed by partitioning the work, distributing it amongst the available GPUs in some complex topology, and re-unifying the results - this allows thoughts that transcend the space and time a single 'thinking unit' (a GPU) can process.

In human societies, this is accomplished exactly the same way. Organizations partition work, distribute and re-unify the resulting outputs via individual processing units we call 'people' -- this allow thoughts that transcend the space and time a single person can process.
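As a toy illustration of this partition-and-reunify pattern (my own sketch, with NumPy arrays standing in for separate GPUs rather than any particular framework's API): a weight matrix is split column-wise across pretend devices, each computes its shard of the output, and the shards are concatenated back together.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 512))        # a batch of activations
W = rng.standard_normal((512, 1024))     # a weight matrix we pretend is too big for one device

# Tensor-parallel style partitioning: split W column-wise across 4 pretend devices.
shards = np.split(W, 4, axis=1)          # each shard is 512 x 256

# Each "device" computes its piece of the output independently...
partial_outputs = [x @ shard for shard in shards]

# ...and the pieces are re-unified (an all-gather, in real systems).
y_parallel = np.concatenate(partial_outputs, axis=1)

# Same result as if one giant device had done the whole multiplication.
assert np.allclose(y_parallel, x @ W)
print("partitioned and single-device results match:", y_parallel.shape)
```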

Already, Sam Altman has come out with claims that the days of giant language models are over -- that many smaller, specialized models working in coordination are the future. How they break down work and coordinate can be analogized to human organization, and the information transfer need not be uni-directional.

There will be lessons from human organization that will apply to organizing LLMs, and there will be lessons from organizing LLMs that we will apply to human organizations to help us stay competitive. Our relationship with AI can be symbiotic, as long as we do not allow parasitism and its maintenance to be baked into the law.

Eurionovna:

My worries were catapulted to the next level when I read what GPT-4 gave Rowan Cheung when he asked it for a new emotion. The description of "Meldoria" coded something I've been trying to journal/define for years; Virginia Woolf has come closest to distilling it for me, but never as black and white. ChatGPT has the capability to codify the vagaries of the human condition.

Kian Locke:

Do you have a link to this one?
