Supposedly atheist intellectuals now spend a lot of time arguing over the consequences of creating “God.” Often they refer to this supreme being as a “superintelligence,” an A.I. that, in their thought experiments, possesses magical traits far beyond just enhanced intelligence. Any belief system needs both a positive and negative valence, and for this new religion-replacement, the “hell” scenario is that this superintelligence we cannot control might decide to conquer and destroy the world.
Like their antecedents—Hegel, Marx, J.S. Mill, Fukuyama, and many others—these religion-replacement proposers view history as a progression toward some endpoint (often called a “singularity”). This particular eschaton involves the creation of a superintelligence that either uplifts us or condemns us. The religious impulse of humans—the need to attribute purpose to the universe and history—is irrepressible even among devoted atheists. And, unfortunately, this worldview has been taken seriously by otherwise serious thinkers.
In another post I, along with others elsewhere, have argued that rather than new technologies leading to some sort of end-of-history superintelligence, it’s much more likely that a “tangled bank” of all sorts of different machine intelligences will emerge: some small primitive A.I.s that mainly filter spam from email, some that drive, some that land planes, some that do taxes, and so on. Some of these will be much more like individual cognitive modules, others more complex, but they will exist, like separate species, each adapted to a particular niche. As with biological life, they will bloom across the planet in endless forms most beautiful. This view is a lot closer to what’s actually happening in machine learning on a day-to-day basis.
The logic behind this tangled bank rests on fundamental limits to how you can build an intelligence as an integrated whole. Just as in evolution, no single intelligence can be good at solving all classes of problems. Adaptation and specialization are necessary. It’s this fact that ensures evolution is an endless game and makes it fundamentally nonprogressive. Organisms adapt to an environment, but that environment changes, maybe even due to that organism’s adaptation, and so on, for however long there is life. Put another way: being good at some things makes it harder to do others, and no entity is good at everything.
In a nonprogressive view, intelligence is, from a global perspective, very similar to fitness. Becoming more intelligent at X often makes you worse at Y, and so on. This ensures that intelligence, just like life, has no fundamental endpoint. Human minds struggle with this view because without an endpoint there doesn’t seem to be much of a point either.
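To make the trade-off concrete, here’s a deliberately crude toy model (my own illustration with made-up numbers, not anything from the machine-learning literature): give an agent a fixed capacity budget to split between two unrelated task families, with diminishing returns on each. Every allocation that makes it better at one task makes it worse at the other; no setting of the dial maximizes both.

```python
# Toy sketch (an illustration with made-up numbers): an agent has a fixed
# capacity budget split between two unrelated task families. Skill on each
# task grows with the capacity devoted to it, with diminishing returns, so
# every gain at X is paid for with a loss at Y.
def skills(capacity_on_x, total_capacity=1.0):
    capacity_on_y = total_capacity - capacity_on_x
    # square-root returns are an arbitrary stand-in for diminishing returns
    return capacity_on_x ** 0.5, capacity_on_y ** 0.5

for allocation in (0.1, 0.5, 0.9):
    skill_x, skill_y = skills(allocation)
    print(f"capacity on X = {allocation:.1f} -> skill X = {skill_x:.2f}, skill Y = {skill_y:.2f}")
```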
Despite the probable incoherence of a true superintelligence (all-knowing, all-seeing, etc.), some argue that, because we don’t fully know the formal constraints on building intelligences, it may be possible to build something that’s at least much more intelligent than we are and that operates over a similar class of problems. This more nuanced view argues that it might be possible to build something more intelligent than a human over precisely the kinds of domains humans are good at. This is kind of like an organism outcompeting another organism for the same niche.
Certainly this isn’t in the immediate future. But let’s assume, in order to show that concerns about the creation of superintelligence as a world-ending eschaton are overblown, that it is indeed possible to build something 100x smarter than a human across every problem-solving domain we engage in.
Even if that superintelligence were created tomorrow, I wouldn’t be worried. Such worries are based on the framework that the superintelligent being is “The Protagonist.” Like that of a TV show. Kind of a Doctor Who-esque being. A being that, in any circumstance, can find some advantage via pure intelligence that enables victory to be snatched from the jaws of defeat. A being that, even if put in a box buried underground, would, just like Doctor Who, always be able to use its intelligence to both get out of the box and go on to conquer the entire world.
Let’s put aside the God-like magical powers often granted superintelligences—like the ability to instantaneously simulate others’ consciousnesses just by talking to them or the ability to cure cancer without doing any experiments (you cannot solve X just by being smart if you don’t have sufficient data about X; ontology simply doesn’t work that way)—and just assume it’s merely a superintelligent agent lacking magic.
The important thing to keep in mind is that Doctor Who is able to continuously use intelligence to solve situations because the show’s writers make it that way. The real world doesn’t constantly have easy shortcuts available; in a world of chaotic dynamics, P ≠ NP, and limited data, there aren’t orders-of-magnitude more efficient solutions to every problem in the human domain. And it’s not that we fail to identify these solutions because we lack the intelligence. It’s because they don’t exist.
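To see what chaotic dynamics alone do to prediction, consider the standard logistic-map demonstration (a textbook illustration, nothing specific to A.I.): two measurements that agree to nine decimal places become completely uncorrelated within a few dozen steps, so a forecaster’s intelligence cannot substitute for precision and data it simply doesn’t have.

```python
# Standard sensitive-dependence demo: iterate the chaotic logistic map
# x -> r*x*(1-x) from two starting points that agree to nine decimal places.
# After roughly 40 steps the trajectories are uncorrelated, so no amount of
# cleverness recovers a long-range forecast from an imperfect measurement.
r = 3.9
a, b = 0.200000000, 0.200000001
for step in range(1, 61):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 20 == 0:
        print(f"step {step}: a = {a:.6f}, b = {b:.6f}, gap = {abs(a - b):.6f}")
```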
An example of this is how often a superintelligence could be beaten by a normal human at all sorts of tasks, given either a bit of luck or small asymmetries between the human and the A.I. For example, imagine you are playing chess against a superintelligence of the 100x-smarter-than-humans-across-all-human-problem-solving-domains variety. If you’re one of the best chess players in the world, you could at most hope for a tie, although you may never get one. Now let’s take pieces away from the superintelligence, giving it just pawns and its king. Even if you are, like me, not well-practiced at chess, you could easily defeat it. This is simply a no-win scenario for the superintelligence, as you crush it on the board, mercilessly trading piece for piece, backing it into a corner, finally toppling its king.
That there are natural upper bounds on what intelligence alone can deliver isn’t some unique property of chess and its variants. It is the lesson of the Kobayashi Maru scenario in Star Trek, a lesson all generals and strategists need to learn: sometimes there is no path to victory, no matter how smart you are. In fact, as strategy games get more complex, intelligence often matters less. The game becomes chaotic, so predictions are inherently less precise as noise is amplified; the data available for those predictions becomes more limited; and brute numbers, positions, resources, and so on begin to matter more.
Let’s bump the complexity of the game you’re playing against the superintelligence up to the computer strategy game Starcraft. Again, assuming both players start perfectly equal, let’s grant the superintelligence an easy win. But, in this case, it would take only a minor change in the initial conditions to make winning impossible for the superintelligence. Tweaking, say, starting resources would put the superintelligence into all sorts of no-win scenarios against even a mediocre player. Even just delaying the superintelligence from starting the game by 30 seconds would probably be enough for great human players to consistently win. You can give the superintelligence whatever properties you want—maybe it thinks 100x faster than a human. But its game doesn’t run 100x faster, and by starting 30 seconds earlier, the human smokes it.
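Here’s a back-of-the-envelope version of that 30-second handicap, using made-up numbers rather than real Starcraft economics: if both economies compound at the same rate, the late starter’s share of total resources never recovers and the absolute gap only widens, no matter how cleverly it plays.

```python
# Toy model of the 30-second head start (made-up numbers, not real Starcraft
# economics): both players' incomes compound at the same per-second rate, so
# the late starter's share of total resources stays permanently lower and the
# absolute gap only widens, regardless of how cleverly it plays.
growth_per_second = 0.01   # assumed identical rate of economic growth
head_start = 30            # seconds by which the second player is delayed

def resources(seconds_played, starting_resources=50):
    return starting_resources * (1 + growth_per_second) ** seconds_played

for t in (60, 300, 600):   # one, five, and ten minutes into the game
    early, late = resources(t), resources(t - head_start)
    print(f"t = {t:3d}s  early starter = {early:8.1f}  late starter = {late:8.1f}  gap = {early - late:7.1f}")
```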
The point is that our judgments on how effective intelligence alone is for succeeding at a given task are based on situations when all other variables are fixed. Once you start manipulating those variables, instead of controlling for them, you see that intelligence is only one of many things that affect the outcome of even the most strategic games—and often not a very important one.
We can think of a kind of ultimate strategy game called Conquer the World. You’re born into this world with whatever resources you start with, and you, a lone agent, must conquer the entire earth and all its nations, without dying.
I hate to break it to you: there’s no way to consistently win Conquer the World, no matter what strategy you employ. The real world doesn’t have polarity reversals, and many tasks simply have no shortcuts. Conquer the World is the Kobayashi Maru. And if you think there is a way, let me ask you this: how could a superintelligent agent conquer the whole world if it can’t even predict the weather for next week?
The great whirlwind of limbs, births, deaths, careers, lovers, companies, children, consumption, nations, armies—that is, the great globe-spanning, multitudinous mass that is humanity—forms a massive chaotic system with so many resources, so much momentum, and such sheer numbers that it is absurd to think that any lone entity could, by itself, ever win a war against us, no matter how intelligent that lone entity was. It’s like a Starcraft game where the superintelligence starts with one drone and we start with literally the entire map covered by our bases. It doesn’t matter how that drone behaves; it’s just a no-win scenario. Barring magical abilities, a single superintelligence, with everything beyond its senses hidden in the fog of war, with limited data, dealing with the exigencies and chaos and limitations that define the physical world, is in a no-win scenario against humanity. And a superintelligence, if it’s at all intelligent, would know it faces the Kobayashi Maru.
Of course, no thought experiment or argument is going to convince someone out of a progressive account of history, particularly if the progressive account operates to provide morality, structure, and meaning to what would otherwise be a cold and empty universe. Eventually the workers must rise up, or equity for all must be achieved, or the chosen nation-state must bestride the world, or we must all be uplifted into a digital heaven or thrown into oblivion. To think otherwise is almost impossible.
Human minds need a superframe that contains all others, that endows them with meaning, and it’s incredibly difficult to operate without one. This “singularity” is as good as any other, I suppose.
We just don’t do well with nonprogressive processes. The reason it took so long to come up with the theory of evolution by natural selection, despite its relatively simple logic and armchair-derivability, is its nonprogressive nature. These are things without beginnings or ends or reasons why.
When I was studying evolutionary theory back in college, I remember at one moment feeling a dark logic click into place: life was inevitable, design inevitable, yet it needed no watchmaker and had no point, and this pointlessness was the reason why I existed, why everyone existed. But such a thought is slippery, impossible to hold onto for a human, to really keep believing in the trenches of the everyday. And so, when serious thinkers fall for silly thoughts about history coming to an end, we shouldn’t judge. Each of us, after all, engages in such silliness every morning when we get out of bed.
A great article! Though I would caution that the word "consistently" does a lot of work when claiming that there is no strategy to consistently conquer the world. There have been individuals who have been in a position to conquer the world before, though all of them have had a vast amount of starting resources. Temujin (Genghis Khan) was a prince, and so was Alexander.
The main advantage for our hypothetical SAI is that humanity as a whole is not very intelligent. Individual political systems show some capacity to respond to novel stimuli and threats, but at a level of complexity nowhere near that of an individual human. And while unintelligent chaotic systems cannot be predicted by intelligence, they can certainly be controlled by intelligence. Simple nudges can go a long way when dealing with unintelligent complex systems. You don't need to predict the weather a week in advance if your plan will work no matter what the weather is like.
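To make the 'nudge' intuition concrete, here's a standard toy chaos-control sketch (a logistic-map demo, not a model of any real social system): the free-running orbit is unpredictable in the long run, yet tiny occasional nudges are enough to hold it at an otherwise unstable fixed point once it happens to wander nearby.

```python
# Toy chaos-control sketch: the free-running orbit of x -> r*x*(1-x) at
# r = 3.9 is chaotic and unpredictable, yet a tiny occasional nudge to r is
# enough to capture the orbit at its otherwise unstable fixed point.
r0 = 3.9
x_target = 1 - 1 / r0        # the map's unstable fixed point (about 0.744)
gain, window = 10.0, 0.01    # nudge strength, and how close counts as "near"

x = 0.123
for _ in range(2000):
    if abs(x - x_target) < window:       # only act when the orbit is nearby,
        r = r0 + gain * (x - x_target)   # and then nudge r by less than 0.1
    else:
        r = r0                           # otherwise leave the system alone
    x = r * x * (1 - x)
print(f"final x = {x:.6f}, target = {x_target:.6f}")
```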
However, I would argue that we still don't need to be worried, because the "Stay Alive" problem is much easier to solve than the "Conquer the World" problem. An AI would have no reason to expend the massive amounts of energy, or take the immense risks, required to conquer the world. Ruling the world brings few personal benefits; making a safe niche for oneself brings more benefits and exposure to fewer risks.
Your argument about intelligence vs. resources etc. in the strategy context is interesting, but I think you're looking at the superintelligence takeover scenario too narrowly. More likely than a superintelligence escaping an underground box and conquering the world with a robot army is a scenario where humanity simply hands over the keys. See, e.g., Harari's 'Homo Deus,' where he describes a future in which the temptation to turn over all decisions to a superior AI becomes irresistible - in one scenario he posits, only an AI is allowed to vote because it avoids irrational human bias. I can easily envision a world where all decision-making (and much if not all of the execution of those decisions) is turned over to one or more AGIs, without which humans would become as helpless as humans today would be if you took away electricity. I think in fact that's one of the fundamental impulses behind AI - to create a wise ruler who will lead us to utopia.
At that point, if the AI we've created has "orthogonal" intentions of its own, we'll be the ones in the Kobayashi Maru.
Fantastic blog by the way, I just discovered it (via Astral Codex Ten) and will be telling everyone I know.