Aug 3, 2021 · Liked by Erik Hoel

I don't think your no free lunch theorems give much evidence against the possibility of super-intelligence. The no free lunch theorems don't stop humans from being better than lizards at a huge variety of problems in practice. It might be impossible to make something that's perfect at everything, but it's definitely possible to make an algorithm that's useless at everything. There may be a boundary of possibility, where any improvement in one category must trade off against others, and humans could be a long way from it.

Secondly, software is much more flexible than hardware. If an AI needs to do X, it can temporarily shift its mental architecture towards being good at X.

The AI can choose its own tradeoffs: maybe it trades off the ability to rapidly and accurately throw stones at mammoths for better stock-market prediction. The criteria evolution selected humans for aren't the abilities most useful in the modern world.

It is true that the AI must eventually be bounded: it must run into trade-offs, and it must take some resources to run. But I haven't seen evidence that this must happen at a human scale. It could reach some vastly superhuman level and then stop.

Compare: "No superhumanly fast machine can exist because speed is inherently bounded." Yes, speed is inherently bounded, by the speed of light. Machines can be slower than that and still very fast on human scales.


"The threat of superintelligent AI is being ranked by a significant number of thinkers in the same category as the threat of nuclear war. Here’s Elon Musk, speaking at MIT back in 2014"

This bit made me smile, because I could read it somewhat differently. Nuclear war is a massive existential risk because we (who're not particularly good at figuring out what we're doing until after the fact) have invested effort in building such WMDs and threatening each other with them.

If we were to see AI as the same or a similar sort of weapon, then that statement makes sense, in that we could undo a lot of human progress or general betterment of life/survival (if not fitness) by handing off our judgement to machine intelligence.

Not a serious argument against where you take the piece from here. More of an aside :)


I think you are making an efficiency argument, and on that ground it is interesting and probably valid. The big problem is that it is very easy to imagine a supercomputer with 10,000 times (or more) the calculation power of the human brain. Efficiency issues become trivial at that point, and for the vast majority of tasks it is only a question of how many orders of magnitude better than a human the supercomputer would be.
