5 Comments
May 23, 2021 · Liked by Erik Hoel

A great article! Though I would caution that the word "consistently" does a lot of work in the claim that there is no strategy to consistently conquer the world. There have been individuals in a position to conquer the world before, though all of them started with a vast amount of resources: Temujin (Genghis Khan) was the son of a tribal chieftain, and Alexander was a prince.

The main advantage for our hypothetical SAI is that humanity as a whole is not very intelligent. Individual political systems do respond to novel stimuli and threats to some degree, but with a sophistication nowhere near that of an individual human. And while unintelligent chaotic systems cannot be predicted by intelligence, they can certainly be controlled by intelligence. Simple nudges can go a long way when dealing with unintelligent complex systems. You don't need to predict the weather a week in advance if your plan will work no matter what the weather is like.
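To make the "nudges" point concrete, here is a minimal sketch (my own toy example, not something from the comment itself): a chaotic system whose trajectory is hopeless to forecast far ahead can still be steered with small, bounded interventions. The demo below, loosely in the spirit of OGY chaos control, pins the chaotic logistic map to its unstable fixed point using only tiny tweaks to the parameter r; all the specific values are arbitrary choices for illustration.

```python
# Control without prediction: the logistic map at r = 3.9 is chaotic,
# so long-range forecasting of the orbit is hopeless, yet tiny, bounded
# nudges to r suffice to pin the orbit to its (unstable) fixed point.

def nudged_logistic(r=3.9, x0=0.3, steps=600, max_nudge=0.05):
    x_star = 1.0 - 1.0 / r            # unstable fixed point of the map
    x, history = x0, []
    for n in range(steps):
        dr = 0.0
        if n >= steps // 2 and 0.0 < x < 1.0:   # controller switches on halfway
            # the dr that would land the next iterate exactly on x_star ...
            wanted = x_star / (x * (1.0 - x)) - r
            # ... applied only when the required nudge is small
            if abs(wanted) <= max_nudge:
                dr = wanted
        x = (r + dr) * x * (1.0 - x)  # one step of the (nudged) logistic map
        history.append((n, x, dr))
    return x_star, history

x_star, hist = nudged_logistic()
print(f"target fixed point: {x_star:.4f}")
for n, x, dr in hist[-8:]:            # by the end, the orbit sits on x_star
    print(f"step {n:3d}  x = {x:.4f}  nudge = {dr:+.5f}")
```

Note the controller never forecasts the orbit; it simply waits until the state wanders near the target and then applies a nudge far smaller than the dynamics themselves, which is exactly the "you don't need to predict the weather" point.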

However, I would argue that we still don't need to be worried, because the "Stay Alive" problem is much easier to solve than the "Conquer The World" problem. An AI would have no reason to expend the massive amounts of energy and run the immense risks required to conquer the world. Ruling the world brings few personal benefits; carving out a safe niche for oneself brings more benefit with less risk.

Nov 16, 2021 · Liked by Erik Hoel

Your argument about intelligence vs. resources etc. in the strategy context is interesting, but I think you're looking at the superintelligence takeover scenario too narrowly. More likely than a superintelligence escaping an underground box and conquering the world with a robot army is a scenario where humanity simply hands over the keys. See e.g. Harari's 'Homo Deus', where he describes a future in which the temptation to turn over all decisions to a superior AI becomes irresistible - in one scenario he posits, only an AI is allowed to vote, since it avoids irrational human bias. I can easily envision a world where all decision-making (and much if not all of the execution of those decisions) is turned over to one or more AGIs, without which humans would be as helpless as people today would be if you took away electricity. I think in fact that's one of the fundamental impulses behind AI - to create a wise ruler who will lead us to utopia.

At that point, if the AI we've created has "orthogonal" intentions of its own, we'll be the ones in the Kobayashi Maru.

Fantastic blog, by the way; I just discovered it (via Astral Codex Ten) and will be telling everyone I know.
