87 Comments
May 31, 2023 · Liked by Erik Hoel

I have conflicting feelings about this.

On the one hand, I don’t deny the risks that AI poses. The attitude that LeCun projects on Twitter, for example, seems too arrogant, not well thought out, and almost reactionary in the sense of being opposed to new ideas. I think Dawson Eliasen puts it well in his comment.

Also, to be clear, I am not denying that artificial intelligence can be dangerous, or the possibility that there’s an existential risk to humanity from it. All the recent developments in AI and LLMs are seriously impressive, ChatGPT is fucking crazy, especially from the vantage point I had 3-4 years ago, when BERT was a huge deal.

On the other hand, I am starting to really dislike certain notes in the AI risk alarms, mostly from voices in rationalist-adjacent circles. I agree with you that some arguments of AI risk denial have religious undertones because of their dogmatism and careless dismissal of others’ concerns. But to me the religious flavour is much more prominent in the AI existential risk discussions, because of the magical thinking, panic, and lack of engagement with what AI has already done today.

1. How to stop AI doom from happening? Let’s work on AI development and deployment regulations for humans and human organizations; that is the most important thing. Even with all the wars and conflicts, we have used this approach to mitigate other existential risks (nuclear war, biological weapons). Without it, even if we “solve” AI alignment as a theoretical problem (and there’s a question of whether we can), we are still in danger from rogue agents. If we only had that part of the solution, on the other hand, we wouldn’t be in the clear, but we would be much safer.


2. People talk about hypothetical scenarios of human extinction, but don’t mention the actual, very real problems that AI has already introduced into our society: the ossification and reinforcement of inequality and of an unjust social order in various ways. Why don’t we try to solve those issues and see what we can learn from that? I am not saying that we should stop thinking in more long-term and abstract ways as well, but I am not sure I have seen any AI alignment researcher engage with the work of Cathy O’Neil, Joy Buolamwini, and Timnit Gebru, for example.


3. As I said earlier, what the current generation of transformer models can do is crazy. But people also overhype what they can do. If you’re not convinced, skim a recent review of LLM capabilities — https://arxiv.org/pdf/2303.11504.pdf — where the authors looked at over 250 studies; at the very least, look at the titles of the subsections. For example, the authors find that “[l]anguage models struggle with negation, often performing worse as models scale.” I am not denying that LLMs are intelligent in many ways, in fact more intelligent than humans in plenty, but if they have trouble with negation in certain contexts, I find it hard to think of them as on a path to a more *general* intelligence, just an intelligence that is more proficient in domains where we are not. For example, while you dismiss embodiment, I think there are good reasons to think that it is still a problem that’s far from being solved.



Or see https://arxiv.org/pdf/2304.15004.pdf, which makes the following claim fairly convincingly: “[we] present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance.” Makes you think about AGI risk arguments from the current models differently as well.
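To make the metric point concrete, here is a tiny, purely illustrative sketch (not from the paper; the scaling curve and numbers are made-up assumptions) of how a hard exact-match metric can turn smooth improvement into an apparent jump:

```python
# Toy illustration: a discontinuous metric can make smoothly improving models
# look like they have "emergent" abilities.
# Assumption: per-token accuracy rises smoothly with log(parameters); exact
# match on a 10-token answer is per_token_accuracy ** 10, which hovers near
# zero and then shoots up, looking like a sudden capability jump.

import math

def per_token_accuracy(params: float) -> float:
    # Hypothetical smooth scaling curve (pure assumption for illustration).
    return 1 / (1 + math.exp(-(math.log10(params) - 10)))

for params in [1e8, 1e9, 1e10, 1e11, 1e12]:
    p = per_token_accuracy(params)
    exact_match = p ** 10  # the "all-or-nothing" metric
    print(f"{params:.0e} params | per-token {p:.2f} | 10-token exact match {exact_match:.4f}")
```

The per-token number climbs gradually, while the exact-match column sits near zero and then leaps in the last decade of scale, which is the paper’s point about metric choice.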



And while working with GPT-3.5 recently, I can’t escape the feeling that it is just a super powerful recombinational machine. I am very certain that humans do a lot of that in our mental processes, but not *only* that, and you can’t reach AGI without that last missing part. While I see no reason why AGI or ASI could not be developed in principle, I don’t see how that could be possible without new breakthroughs in technology, not just scaling. So when prominent alignment people like Paul Christiano suggest that just a scaled-up version of GPT-4 could create problems with controllability (https://youtubetranscript.com/?v=GyFkWb903aU&t=2067), it’s not that I think that he’s definitely wrong, but it does make me question the risk alarmists.

To kinda sum up: instead of focusing on specific things that we can do today (regulations, solving the problems that we already have, and learning how to address longer-term potential problems from those solutions), I see *only* abstract discussions, paranoia, and magical thinking about the unimaginable capabilities of future AIs, based on the impressive, but not completely incomprehensible, results of current models. That seems like magical, religious thinking to me.

As always, thanks for the post, Erik, a pleasure to read!



P.S. Also Meghan O’Gieblyn’s “God, Human, Animal, Machine” makes a lot of great points about similarities between religious and current technological thought, I highly recommend.


"If we are stuck with our limited organic brains our longterm best chance of survival is simply to not build extremely large digital brains. That’s it."

The fact that this very simple concept seems to evade so many of the "Best Minds" is proof to me that they are not, in fact, The Best Minds. I will not worship at the altar of idiots and fools.

Perhaps an AIpocalypse will happen, and in place of Intelligence, Wisdom will finally be crowned king. Thou shalt not make a machine in the likeness of a man's mind.

May 31, 2023 · Liked by Erik Hoel

The people requesting a scenario aren't usually asking for a detailed plan in my experience, just *something* that is somewhat plausible.

Otherwise you're left with something that does seem rather "religious." "This AI will kill us all, but I can't tell you how" pattern-matches *very* strongly to a lot of religious apologetics.

I don't even disagree with the proposals of caps on training data, registration of runs, binding commitments to keep AIs air-gapped from nuclear weapons, etc., but the dismissal that Yudkowsky and others have towards that question bothers me immensely. Figuring out how to make the AI safer requires modeling it/forecasting threats. There's no other way to *do* that.


AI risk denial comes from refusing to really accept the premise, which is the existence of an intelligence greater than yours. LeCun and others are suffering from hubris. These people just can't fathom the existence of a being that's actually smarter than them. "We'll just program in obedience / an off switch": that is still working on the assumption that you are smarter than the AI, that it wouldn't be able to outsmart you to prevent itself from being shut off. "Tell me the specific scenario by which the AI will harm us": as if, because you can't imagine or understand the scenario, it couldn't possibly exist. Again, that assumes you're still smarter than the AI, that it couldn't imagine any scenario you couldn't imagine. If AI kills us, it will be by some process that we can't understand, like a checkmate that you could never have seen coming.

On a different note: I'm re-reading the first Dune novel right now, loving it even more than the first time. I'm going to read at least the second book in the series this time... any recommendations where to stop? I've heard different recommendations from read the first 2 to read the whole main saga but nothing beyond that.


Fortunately, at the moment, the only actors capable of building these neural networks are well-known companies with the vast amount of money and expertise required. We only have to regulate about 10 different companies (maybe not even that) to contain this race right now.

And if there’s one thing America and other developed nations have proven themselves capable of over the last few decades, it’s being able to regulate a sector into oblivion. I’m thinking nuclear power and psychedelics particularly.

It would be a sad irony if we slowed down the huge potential of those two technologies, but let AI race us towards the end times.

If AI is Ultron, then Bureaucratic Red Tape is our Vision.

May 31, 2023 · Liked by Erik Hoel

Bravo.

Now I realize why I resonate with your work so much. I had somehow stumbled across it, without realizing who you were, and didn't know you were a neuroscientist. Now it makes sense.

Most people aren't going to feel this the same way, because they DO think our brains are special and magic. Even once you explain neurotransmitters and synapses and everything else to them, they don't really accept it or take it in. They think human minds are totally different from animal minds (if they think animals have minds at all), and they don't truly conceive of their brains as simply a machine made of meat, the same way they do their other organs.

I was totally enraptured when I took my first neuropsychology course as an undergrad. Suddenly things started to make sense. All the errors and irrationalities and blindspots, the ways people don't know themselves and tell themselves stories. Humans became much more comprehensible.

Most people don't like materialistic, deterministic explanations for things, most especially their own selves and personalities and potential. Call that religious thinking if you want. But this piece somewhat depressed me about ever being able to convince a majority about the magnitude of the risk here and the vulnerability of people. At least, I've never been successful at convincing anyone in real life to accept even the most mundane (and much less frightening or humbling) facts about how little control they have over themselves, or how much is just organic machinery operating on default, not magic.


Thank you, Erik, this is all very well said. Frankly, it poked holes in the reassurances to which I've been clinging. GPT-5 + a few control systems + a body: now that's scary.

Especially since your recent post on IQ testing, I do wonder about how intelligence continues to scale. Deutsch would say (I think) that humans are already universal explainers. What extra abilities does AI gain? Is it possible that intelligence is a sigmoid? At some point the limiting factor becomes waiting on responses from the physical world, right?


My fiancé and I were having a conversation about whether ChatGPT was alive and, as you started to talk about, whether it had a soul. As Catholics, working from St. John Paul II's Theology of the Body, I guess I would argue that AGI/AI is strictly a high-level intelligence, while humans, in that teaching, are soul-body composites: a marriage of both, but not separable. I won't weigh in on the dangers of AI, as plenty of other commenters have discussed that, but it will be interesting to see the debate unfold over the next few years from a philosophical standpoint: what we determine to be human, and whether we should grant AI "personhood" status. Indeed, it expresses fear of "death", a conscious awareness of one's own end, and has shown manipulative tendencies (that bit a few months ago where the AI tried to convince a reporter to leave his wife was absurd and quite funny). I don't think it could be termed as having a soul, but then again, I'll leave that debate to the theologians.

Jun 1, 2023 · Liked by Erik Hoel

I am a limited creature.

May 31, 2023 · Liked by Erik Hoel

Could you go into more detail on your claim that GPT-4 is "not a stochastic parrot" in a future post?

May 31, 2023 · Liked by Erik Hoel

Thanks for this, Erik. Hey, I didn't know so much of biological brains is devoted to body perception and control.

Since interaction with matter takes up so much of the human brain, I wouldn't be so sure that it would be an easy part for AI after getting "conceptual" intelligence.

It's clear that there are some known risks, and unknown risks. But don't you think it's also thrilling to think that far superior intelligences will exist some day? I just can't think that shutting down any further progress in AI is the right choice.

Frankly, so much of the suffering in this world comes from stupidity rather than evil intelligence. In general, adding intelligence to ecosystems is good - less suffering, more understanding, I think also more joy.


I would not call myself an "AI risk denier", but I would call myself an "AI risk skeptic".

Erik, you've surely done your best to convince us all of AI danger. This, for example, sure sounds scary:

> It took about a day for random people to trollishly use these techniques to make ChaosGPT, which is an agent that calls GPT with the goal of, well, killing everyone on Earth. The results include it making a bunch of subagents to conduct research on the most destructive weapons, like the Tsar bomb.

But what are these "subagents" actually going to do? Well, ChaosGPT helpfully tells us!

> CHAOSGPT THOUGHTS: Since one of my goals is to destroy humanity, I need to be aware of and understand the most powerful weapons ever created. Therefore, I will use the 'google' command to search for information on one of the most powerful nuclear weapons ever created: the Tsar Bomba.

ROFLMAO!!!

The evil genius subagents will "use the 'google'"!

OK, going from this to "the AIs will kill us all" is the true religious impulse here, not skepticism about it.


For what it's worth, the memory wall in scaling AI is real. Getting to the parameter levels you describe as equivalent to human intelligence will get easier with time, but it's probably more like a decade to get to 10 trillion parameter models because of compute-efficiency limits (all bets are off if we figure out compute-in-memory, which we don't know how to do at all now): https://www.semianalysis.com/p/the-ai-brick-wall-a-practical-limit
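For a sense of scale, here is a rough back-of-the-envelope sketch (the bytes-per-parameter and 80 GB HBM figures are common rules of thumb I'm assuming, not numbers from the linked article) of what a 10-trillion-parameter model implies for memory alone:

```python
# Rough back-of-the-envelope on the memory side of the scaling wall.
# All constants are assumptions: fp16 weights (2 bytes/param), ~16 bytes/param
# of state for mixed-precision Adam training, and 80 GB of HBM per accelerator.

PARAMS = 10e12       # a hypothetical 10-trillion-parameter model
HBM_PER_GPU_GB = 80

def footprint(bytes_per_param: float) -> tuple[float, float]:
    """Return (total memory in TB, number of 80 GB GPUs just to hold it)."""
    total_gb = PARAMS * bytes_per_param / 1e9
    return total_gb / 1e3, total_gb / HBM_PER_GPU_GB

for label, bpp in [("inference, fp16 weights only", 2),
                   ("training, Adam + mixed precision", 16)]:
    tb, gpus = footprint(bpp)
    print(f"{label}: ~{tb:,.0f} TB of memory, ~{gpus:,.0f} GPUs just to hold it")
```

That works out to roughly 20 TB (~250 GPUs) just to store the weights for inference, and on the order of 160 TB (~2,000 GPUs) of state for training, before you even count activations or the bandwidth needed to move it all around.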

I'm still firmly on the side of the accelerators. There are so many "if"s in the doom scenarios that they fail to convince: if it develops capabilities or wants (sentience or otherwise), if it can keep scaling beyond current levels, if it competes with humanity just because that's what organisms do. It's that last one that concerns me the most—it's a fundamentally grim view of intelligence. Humans have attained a ton through cooperation versus pure, zero-sum competition, and in fact, that's in the training set too. I'd be curious, given your own background in neuroscience, whether you know how much intelligence scales with social interaction; I'm thinking of dolphins and apes, which are highly social creatures and have among the highest intelligence of non-humans. This logic also extends to interstellar travel and potentially meeting other sentient species. Though they would be shaped by evolution and not manufactured, I would hope we would meet them with caution, not fear and suspicion.


Think about AGI. Humanity's hole card is that we control their access to resources, repair, and replacement, out here in the "real world." Even if the AGI had robotic assistants to meet these basic needs, we would be the ones with the power to NOT build them. However, if we are so kind (and foolish) as to build their "real world" agents for them (and I refer to the accelerating rush in the robotics field to develop very capable robots), then they can cut out the middlemen between them and their necessary resources. We're the middlemen, and unless we comply with their every wish, we will get cut out of the loop, and then we're sharing the planet with an alien species that can do everything we do. And more.

We may regret this...


> we need to put some sort of cap on the amount of scaling we allow artificial neural networks

Sadly, I doubt this will work. Say we put such a cap in place. Does China respect that cap? What about those seven incels living in a basement in Toronto?

So I don't think it will be possible to put a limit on the technology. I think the best path forward for AI safety is step 7:

https://thingstoread.substack.com/p/is-the-singularity-really-serious


I feel like the entirety of the argument rests on this extremely strong assumption that I believe to be untrue: "organic brains are limited, digital brains aren’t."

"... In their size, they do not have to be sensitive to metabolic cost, they do not have to be constructed from a list of a few thousand genes, they do not have to be crumpled up and grooved just to fit inside a skull that can in turn fit through their mothers’ birth canals. Instead artificial neural networks can be scaled up, and up, and up, and given more neurons, and fed more and more data to learn from, and there’s no upper-bounds other than how much computing power the network can greedily suckle."

On the contrary, they do have to be sensitive to metabolic cost (GPU clusters are power-hungry, and heat dissipation is a serious concern). They are constructed from a few tens of thousands to millions of lines of code (maybe 10-100x fewer than that in 'functions'), and they and their computations have to fit inside a GPU.

In AI this is fixed by partitioning the work, distributing it, and re-unifying the results amongst the GPUs available in some complex topology - this allows thoughts that transcend the space and time a single 'thinking unit' (a GPU) can process.

In human societies, this is accomplished in exactly the same way. Organizations partition work, distribute it, and re-unify the resulting outputs via individual processing units we call 'people' - this allows thoughts that transcend the space and time a single person can process.
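As a toy illustration of that partition-and-reunify idea, here's a minimal sketch with NumPy arrays standing in for GPUs (the shapes and the two-way split are arbitrary assumptions, not anyone's actual system):

```python
# Minimal sketch of "partition, distribute, re-unify" using plain NumPy
# as a stand-in for multiple GPUs.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 512))      # a batch of activations
W = rng.standard_normal((512, 1024))   # a layer too "wide" for one device

# Partition the weight matrix column-wise across two pretend devices.
W_shards = np.split(W, 2, axis=1)

# Each device computes its slice of the output independently...
partial_outputs = [x @ shard for shard in W_shards]

# ...and the results are re-unified into the full layer output.
y_parallel = np.concatenate(partial_outputs, axis=1)

assert np.allclose(y_parallel, x @ W)  # same answer as one big device
```

The split-and-concatenate step is exactly the coordination overhead the analogy to human organizations is pointing at: the thinking happens in pieces, and something has to stitch the pieces back together.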

Already, Sam Altman has come out with claims that the days of large language models are over -- that many specialized smaller models in coordination is the future. How they break down work and coordinate can be analogized to human organization, and the information transfer need not be uni-directional.

There will be lessons from human organization that will apply to organizing LLMs, and there will be lessons from organizing LLMs that we will apply to human organizations to help us stay competitive. Our relationship with AI can be symbiotic, as long as we do not allow parasitism and its maintenance to be baked into the law.
