Discussion about this post

Dawson Eliasen

I noticed all this pushback on the call for panic, like we would potentially be giving up something great by stopping “AI research.” But AI research as it exists today isn’t scientists or nonprofits interested in any benefit to humanity, it’s OpenAI, Google, and Microsoft, interested in making a profit, and the problem is that they obviously have no checks on their incentive. These corporations themselves are systems optimizing one thing… and that’s why we have regulation. The energy/climate analogy is perfect. It’s good to let industry produce energy, and it’s also good to check the profit incentive of industry, otherwise the planet is destroyed. We can let Google and Microsoft produce 21st-century tech, and check the profit incentive to prevent the destruction of the human race.

What do we give up if we prevent OpenAI et al. from working on AI? Let’s see: a partnership with Bain to sell more partnerships to other companies, an online chatbot that can write banal prose, and extremely mediocre search engine functionality.

Charlie D. Becker

Every time I read something like this I lose like half a day, because I have a one-year-old child and wonder about her life. On the one hand, I think a lot of what people like Yudkowsky propose is unlikely. I have a paid membership to ChatGPT and it is frankly wrong all the time. It fails at a third of the things I ask it to do that I think are pretty simple.

I think there is a lot of weight to the idea (that someone on this comment thread proposed) showing how much more complex the human brain is than even the proposed GPT-4. It also seems like an enormous leap in logic that we create an intelligence based on the corpus of all human knowledge (as represented by text on the internet) and it decides that it wants to kill us all via nanobots, etc. I think that orthogonality and instrumental convergence too closely model the superintelligence after a human mind.

However, the next question is, how certain am I in that belief? And the answer there is not very. What amount of risk do I want to take that Yudkowsky et al. are right? The answer there is very little. So the conclusion I keep coming back to is how to get people to care. Frankly, I think AI is one morning news segment away from suddenly becoming very regulated. If the average American were to actually read the conversations Roose from the NYT had with Sydney, there would be panic in the streets. Once that happens, it’s not hard to see how this might become a bipartisan issue for some high-visibility politicians. People from AOC to Gaetz could find reasons to show antipathy toward opaque billion-dollar tech research firms creating brains in vats that might kill us all.

Ironically, one perspective I want to hear that I don’t get a lot of is the spiritual perspective. As much as people want to talk about technology and security, at the end of the day, “there is something special about humanity and human consciousness that is worth protecting from potential existential technologies” is a moral judgment. It’s one that I think most people hold, but that is the bedrock of AI alignment. The people I’m most scared by are the people represented by the tweet that you posted who essentially agree that AI will probably surpass and maybe kill us all, but either don’t care or are excited at the inevitability. They are a much harder group of people to understand and argue against than reckless tech researchers.
