Discussion about this post

incertainty

I have conflicting feelings about this.

On the one hand, I don’t deny the risks that AI poses. The attitude that LeCun projects on Twitter, for example, seems too arrogant, not well thought out, and almost reactionary in the sense of being opposed to new ideas. I think Dawson Eliasen puts it well in his comment.

Also, to be clear, I am not denying that artificial intelligence can be dangerous, or that there is a real possibility of existential risk to humanity from it. All the recent developments in AI and LLMs are seriously impressive. ChatGPT is fucking crazy, especially from the vantage point I had 3-4 years ago, when BERT was a huge deal.

On the other hand, I am starting to really dislike certain notes in the AI risk alarms, mostly from voices in rationalist-adjacent circles. I agree with you that some arguments dismissing AI risk have religious undertones because of their dogmatism and careless dismissal of others’ concerns. But to me the religious flavour is much more prominent in the AI existential risk discussions, because of the magical thinking, the panic, and the lack of engagement with what AI has already done today.

1. How do we stop AI doom from happening? Let’s work on AI development and deployment regulations for humans and human organizations; that is the most important thing. Even with all the wars and conflicts, we have used this approach to mitigate other existential risks (nuclear war, biological weapons). Without it, even if we “solve” AI alignment as a theoretical problem (and there’s a question whether we can), we are still in danger from rogue agents. If, on the other hand, we had only the regulatory part of the solution, we would not be in the clear, but we would be much safer.


2. People talk about hypothetical scenarios of human extinction, but don’t mention the actual, very real problems that AI has already introduced to our society today: the ossification and reinforcement of inequality and of an unjust social order, in a variety of ways. Why don’t we try to solve those issues and see what we can learn from that? I am not saying that we should stop thinking in more long-term and abstract ways as well, but I am not sure I have seen any AI alignment researcher engage with the work of Cathy O’Neil, Joy Buolamwini, or Timnit Gebru, for example.


3. As I said earlier, what the current generation of transformer models can do is crazy. But people also overhype what they can do. If you’re not convinced, skim a recent review of LLM capabilities — https://arxiv.org/pdf/2303.11504.pdf — in which the authors looked at over 250 studies; at the very least, read the titles of the subsections. For example, the authors find that “[l]anguage models struggle with negation, often performing worse as models scale.” I am not denying that LLMs are intelligent in many ways, in fact more intelligent than humans in plenty of them, but if they have trouble with negation in certain contexts, I find it hard to think of them as being on a path to a more *general* intelligence, just an intelligence that is more proficient in domains where we are not. For example, while you dismiss embodiment, I think there are good reasons to believe it is still a problem that is far from solved.



Or see https://arxiv.org/pdf/2304.15004.pdf, which makes the following claim fairly convincingly: “[we] present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance.” It makes you think differently about AGI risk arguments extrapolated from the current models as well.
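To make the metric point concrete, here is a toy sketch of my own (not from the paper, and the numbers are made up): assume a model family whose per-token accuracy improves smoothly with scale. A continuous metric like per-token accuracy then shows a smooth curve, while a discontinuous metric like exact match on a multi-token answer stays near zero and then shoots up, looking “emergent” even though nothing discontinuous happened underneath.

```python
import numpy as np

# Hypothetical model sizes and a smoothly improving per-token accuracy (made-up numbers).
scales = np.logspace(6, 11, 20)                         # 1e6 .. 1e11 parameters
per_token = 1 / (1 + np.exp(-(np.log10(scales) - 8)))   # smooth sigmoid in (0, 1)

# Exact match on a 10-token answer: the same underlying ability, viewed through a
# harsher, effectively discontinuous metric. It hugs zero, then jumps.
exact_match = per_token ** 10

for s, pt, em in zip(scales, per_token, exact_match):
    print(f"{s:12.0e} params   per-token acc = {pt:.2f}   exact match = {em:.4f}")
```

The per-token column changes gradually while the exact-match column looks like a sudden capability jump: same models, different metric.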



And while working with GPT-3.5 recently, I can’t escape the feeling that it is just a super powerful recombination machine. I am quite certain that humans do a lot of that in our mental processes, but not *only* that, and you can’t reach AGI without that missing last part. While I see no reason why AGI or ASI could not be developed in principle, I don’t see how that could happen without new technological breakthroughs rather than just scaling. So when prominent alignment people like Paul Christiano suggest that a merely scaled-up version of GPT-4 could create problems with controllability (https://youtubetranscript.com/?v=GyFkWb903aU&t=2067), it’s not that I think he’s definitely wrong, but it does make me question the risk alarmists.

To kinda sum up: instead of focusing on specific things that we can do today (regulations, solving the problems we already have, and learning from those solutions how to address longer-term potential problems), I see *only* abstract discussions, paranoia, and magical thinking about the unimaginable capabilities of future AIs, based on the impressive but not completely incomprehensible results of current models. That seems like magical, religious thinking to me.

As always, thanks for the post, Erik, a pleasure to read!



P.S. Meghan O’Gieblyn’s “God, Human, Animal, Machine” also makes a lot of great points about the similarities between religious and current technological thought; I highly recommend it.

Chaos Goblin

"If we are stuck with our limited organic brains our longterm best chance of survival is simply to not build extremely large digital brains. That’s it."

The fact that this very simple concept seems to evade so many of the "Best Minds" is proof to me that they are not, in fact, The Best Minds. I will not worship at the altar of idiots and fools.

Perhaps an AIpocalypse will happen, and in place of Intelligence, Wisdom will finally be crowned king. Thou shalt not make a machine in the likeness of a man’s mind.
