7 Comments
Aug 3, 2021 · Liked by Erik Hoel

I don't think your no free lunch theorems give much evidence against the possibility of superintelligence. The no free lunch theorems don't stop humans from being better than lizards at a huge variety of problems in practice. It might be impossible to make something that's perfect at everything, but it's definitely possible to make an algorithm that's useless at everything. There may be a boundary of possibility, where any improvement in one category must trade against others, and humans could be a long way from it.
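To make that intuition concrete, here's a toy sketch in Python (entirely my own construction; the domain size, the two probing strategies, and the "non-decreasing functions" subset are made up for illustration, not anything from the post). Averaged over every possible objective function, two fixed search strategies tie exactly, which is the no-free-lunch result; but on a structured subset of functions, one of them clearly wins, which is the sense in which something can be better than something else at nearly everything that actually comes up.

```python
from itertools import product

DOMAIN_SIZE = 8                  # 8 candidate solutions
VALUES = [0, 1, 2, 3]            # possible objective values

def best_found(order, f, budget=3):
    """Best value seen after evaluating the first `budget` points in `order`."""
    return max(f[x] for x in order[:budget])

# Strategy A always probes points 0, 1, 2 first; strategy B probes 5, 6, 7 first.
order_a = [0, 1, 2, 3, 4, 5, 6, 7]
order_b = [5, 6, 7, 4, 3, 2, 1, 0]

# Average performance over EVERY possible objective function f: domain -> VALUES.
all_functions = list(product(VALUES, repeat=DOMAIN_SIZE))
avg_a = sum(best_found(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(best_found(order_b, f) for f in all_functions) / len(all_functions)
print(avg_a, avg_b)              # identical: neither strategy wins over all functions

# Restricted to a structured subset (non-decreasing functions), B dominates,
# because the high values always sit where B looks first.
increasing = [f for f in all_functions
              if all(f[i] <= f[i + 1] for i in range(DOMAIN_SIZE - 1))]
avg_a_inc = sum(best_found(order_a, f) for f in increasing) / len(increasing)
avg_b_inc = sum(best_found(order_b, f) for f in increasing) / len(increasing)
print(avg_a_inc, avg_b_inc)      # B is strictly better on the structured subset
```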

Secondly, software is much more flexible than hardware. If an AI needs to do X, it can temporarily shift its mental architecture towards being good at X.

The AI can choose its own tradeoffs: maybe it trades off the ability to rapidly and accurately throw stones at mammoths for better stock market prediction. The criteria evolution selected humans for aren't the abilities most useful in the modern world.

It is true that the AI must eventually be bounded, that it must run into trade-offs, and that it must take some resources to run. But I haven't seen evidence that this must happen at a human scale. It could reach some vastly superhuman level and then stop.

Compare "No superhumanly fast machine can exist because speed is inherently bounded". Yes speed is inherently bounded, to the speed of light. Machines can be slower than that, and still very fast on human scales.

"The threat of superintelligent AI is being ranked by a significant number of thinkers in the same category as the threat of nuclear war. Here’s Elon Musk, speaking at MIT back in 2014"

This bit made me smile, because I could read it somewhat differently. Nuclear war is a massive existential risk because we (who aren't particularly good at figuring out what we're doing until after the fact) have invested effort in building such WMDs and threatening each other with them.

If we were to see AI as the same or a similar sort of weapon, then that statement makes sense, in that we could undo a lot of human progress, or the general betterment of life and survival (if not fitness), by handing off our judgement to machine intelligence.

Not a serious argument against where you take the piece from here. More of an aside :)

"The fantasist then claims of course you can have something that both acts as a sledgehammer and opens a bottle of wine. You just strap a corkscrew to a sledgehammer!"

The problem with bolting things together is that there is a very large number of potential tools, so bolting them all together gives you a very large weight and cost.

However, a computer is already a highly general artifact. If the AGI needs to play chess, it can write a chess-playing algorithm and run it. Before the AI decided it needed to play chess, or even learned the rules, not a single circuit in the machine was specific to chess. This means the AGI can learn and play any one of exponentially many games, because it is general, in the same way a key-copying machine can copy any one of exponentially many keys without being a separate machine for each possible key.
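Here's a minimal sketch of what I mean (my own illustration, not anything from the post: the negamax routine, the rules-as-three-functions interface, and the tic-tac-toe example are all assumptions). The search routine below contains nothing specific to any particular game; the rules arrive as parameters at runtime, and tic-tac-toe, chess, or any other finite game is just a different set of arguments.

```python
from typing import Callable, Optional, Tuple

def negamax(state, legal_moves: Callable, apply_move: Callable,
            terminal_score: Callable) -> Tuple[int, Optional[int]]:
    """Generic game-tree search. Nothing here is specific to any one game:
    the rules are supplied entirely through the three callables."""
    score = terminal_score(state)           # from the perspective of the player to move
    if score is not None:
        return score, None
    best_value, best_move = float("-inf"), None
    for move in legal_moves(state):
        value, _ = negamax(apply_move(state, move),
                           legal_moves, apply_move, terminal_score)
        value = -value                      # the opponent's gain is our loss
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# The same routine instantiated for tic-tac-toe, purely by handing it the rules:
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def ttt_moves(state):
    board, _player = state
    return [i for i, c in enumerate(board) if c == "."]

def ttt_apply(state, i):
    board, player = state
    return board[:i] + player + board[i + 1:], ("O" if player == "X" else "X")

def ttt_score(state):
    board, _player = state
    for a, b, c in LINES:
        if board[a] == board[b] == board[c] != ".":
            return -1                       # the previous player completed a line; the mover has lost
    return 0 if "." not in board else None  # draw, or game still in progress

value, move = negamax(("." * 9, "X"), ttt_moves, ttt_apply, ttt_score)
print(value, move)                          # 0 with perfect play: tic-tac-toe is a draw
```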

>The types of abilities Bostrom and others give to superintelligence go far far far beyond superhuman - it is effectively an omniscience genie of infinite self-improvement (which is like a perfect organism, just not a possibility).

What specific thing does Bostrom claim an ASI can know that you think it can't?

What is the difference between an "effectively infinite" AI, and one that's 10 orders of magnitude smarter than us?

It's possible, and fairly likely, that both of the following are true.

1) There is no optimal intelligence; instead there is a large class of intelligences that are comparable, each with a different balance of tradeoffs and no clear winner.

2) That whole class of equivalently good intelligences is WAY WAY above us.

I think you are making an efficiency argument, and on that ground it is interesting and probably valid. The big problem is that it is very easy to imagine a supercomputer with 10,000 times (or more) the calculation power of the human brain. Efficiency issues start to become trivial at that point, and for the vast majority of tasks it is only a question of how many orders of magnitude better than a human the supercomputer would be.
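For a sense of scale, a rough order-of-magnitude calculation (the brain figure below is an assumed, commonly cited estimate, not something from the piece or from measurement):

```latex
% Rough order-of-magnitude arithmetic; the brain figure is an assumed,
% commonly cited estimate, not a measured fact.
\underbrace{10^{16}\,\tfrac{\text{op}}{\text{s}}}_{\text{assumed brain estimate}}
\times 10^{4}
= 10^{20}\,\tfrac{\text{op}}{\text{s}}
\approx 100 \times \underbrace{10^{18}\,\tfrac{\text{FLOP}}{\text{s}}}_{\text{current exascale machine}}
```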
