82 Comments
Apr 13, 2022 · Liked by Erik Hoel

I dunno, dude. I think I disagree with your initial premise. I think Humanness is the first thing we need to get rid of, even if we accept as sacrosanct the fundamentally egotistical premise that Human survival should be some kind of prime imperative.

It’s thanks to Humanness that we’re wiping out 70,000-130,000 species every year. It’s thanks to hyperbolic discounting that today’s minor inconvenience is always more real to us than tomorrow’s global catastrophe. Family Values keep us churning out self-centred parasites, most of whom will just sit on the couch snarfing Cheetos and playing Animal Crossing until the ceiling crashes in, who we will defend with our lives for no better reason than that molecules have tricked us into making other molecules like them.

Of course, this isn’t just Human nature. Short-sighted selfishness is an inevitable hallmark of any organism forged by natural selection, because natural selection has no foresight. The difference in our case is the force-multiplier of Human intellect: that thing we could use to control our instincts, but instead use to make up complex rationalizations promoting them. The problem is that brains which evolved to handle short-term problems with local impacts now run technology with global impacts and long-term consequences. That’s Human.

We’re behaving completely naturally, of course. But it’s not working, is it? Being Human is killing us and our own life-support systems. Being Human threatens the survival of complex society itself.

If we want to have a hope in hell of pulling this out of the fire, we gotta start behaving unnaturally ASAP.

There are indications of ways we might do that. Certain brain injuries that strip away Family Values, improving our ability to make ethical choices even if they don’t benefit our own larvae. Diseases that have the promising side-effect of suppressing the religious impulse. Hell, if we could just edit hyperbolic discounting out of the human mindset, get the gut to recognize the reality of future consequences, we’d be halfway home. Ironically, the best hope of saving Humanity might be by making us less Human. IMO this would be a good thing, both for our species and for all the others we’re wiping out—because this thing we are now, Erik, this thing you want to persist endlessly unchanged into the future: dude, it sucks.

Also I’m not entirely convinced that the incomprehension of the little guy in the Chinese Room really proves anything. Of course he doesn’t understand Chinese, any more than a single neuron would be able to tell you what the whole brain is thinking. Surely the question is whether the system as a whole comprehends, not whether any given gear or cog does.

Apr 13, 2022 · Liked by Erik Hoel

I am not sure the Nietzschean future ends the way you predict. An interesting alternate reality, as portrayed in "Beggars in Spain", is that GM people recognize the humanness of the non-GM. Right now, people in Mensa don't say that low-IQ people are sub-human. (I presume, having never been to a Mensa meeting.)


Lots to think about here! Well done. Is there no beautiful combination of the 4 that would be possible?!


Boy, Erik -- that one was a lot of work. Thanks! Here's something to think about re: evolution -- your view is, of course, consistent with most of current scientific thought. But what if one considers evolution as bipartite? The first is the closed system inside individual organisms against their temporary competitors. The faster zebra wins the race. But that's not all there is to evolution -- the second involves inter-agent coordination. Once you consider this, it directly follows that inter-agent coordination is trans-species, as it must be. And that means you might actually understand what the lion is saying.


Another fantastic essay

Sep 6, 2022 · Liked by Erik Hoel

I have always thought that the "Chinese room" thought experiment was a red herring. Being human, we tend to focus our attention on the man in the room, and it is obvious that the *man* does not understand Chinese; he's just blindly following rules. However, the *room itself* obviously does understand Chinese, since it is able to produce Chinese text fluently. The logical rules, filing cabinets, etc. function like individual neurons in a brain; simple in isolation, but with complex emergent behavior when grouped together.
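To make the "blindly following rules" point concrete, here is a toy sketch of such a room (the rule table and replies are made-up placeholders; a real room would need vastly more rules):

```python
# A toy "Chinese room": a hypothetical lookup table of rules that the man
# inside applies blindly. No single rule "understands" anything; whatever
# competence exists lives in the table as a whole.
RULES = {
    "你好": "你好！最近怎么样？",  # a greeting slip gets a greeting reply
    "谢谢": "不客气。",            # a thanks slip gets "you're welcome"
}

def room(incoming_slip: str) -> str:
    """Match the symbols on the slip and copy out the prescribed reply."""
    return RULES.get(incoming_slip, "请再说一遍。")  # default: "please say that again"

print(room("你好"))  # fluent-looking output from pure symbol shuffling
```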

The "Chinese room" thought experiment is also a bit dated in the post-GPT3 period, since we now have actual, working "Chinese rooms" -- software that can produce fluent natural language. Large language models (LLMs) clearly "understand Chinese" in the sense that they can easily parse Chinese inputs, produce appropriate responses, and accurately translate Chinese to and from a variety of other languages, so anybody who claims that they don't "understand Chinese" must be using the word "understand" in a non-obvious way.

One clear difference is that we no longer consider "translating and/or producing fluent natural language" to be synonymous with human-level intelligence. GPT3/PaLM are very fluent, but don't (yet) pass the Turing test for practitioners who are aware of their limitations; it is not hard to steer a conversation with them into the territory of machine-generated gibberish.

The main sticking point is that humans don't just produce natural language, we *use* natural language for particular purposes: e.g. to communicate information, persuade others, achieve higher social status, or pursue personal goals. LLMs have no personal goals, no long-term memory of past interactions, no ability to form social connections with (or even to remember) other people, and limited ability to do logical reasoning (especially with respect to the physical world), or to distinguish fact from fiction. These are all skills related to "acting as an agent in the real world" that are somewhat orthogonal to "understanding natural language."

Thus, the Chinese room is a distraction. The real question is "what sort of agent-like skills do humans possess, that AI does not?" Perhaps self-awareness? Once an AI agent starts to operate in the real world, starts to understand, model and predict the real world, and takes actions which have observable consequences, then at a certain level of "intelligence", it will notice that it, itself, is an entity of interest in the world that it is modeling. It will become "self-aware". Self-awareness, like many things, is a spectrum. If an agent begins to reflect upon, and analyze, its own motivations and reasoning, then I see little reason to doubt that it could become as self-aware as a human.

Are there other skills? Social connections and social status? Long-term friendships and relationships? Curiosity? Doesn't seem impossible...

Note that this clarification doesn't take away from the main point of your article, which is that even if AI becomes self-aware, socially adept, intrinsically motivated, etc. in addition to being able to use natural language, it will still be a non-human alien intelligence. (I've left out "conscious", because there are wildly differing definitions of that word, even when people try to define it at all.)

Nevertheless, there is an important consideration that you've left out. A non-human alien society could have all of the same richness of culture, art, gossip, triumph and tragedy, etc. as human society. Could they appreciate Shakespeare? Perhaps not. Does that matter? Could we appreciate the great Octopus bard, who wrote such heartfelt dramas about the joy of solitude, or the triumphant feeling that Octopus mothers feel as they sacrifice their lives for their 1000 children? Perhaps not...


Turing: It seems plausible (and it is also implied by IIT I think) that whether an AI (either built from scratch or based on a human mind upload) is conscious depends on the physical substrate upon which the AI is implemented. Building a physical substrate able to support consciousness is an engineering problem, one that we’ll solve one day I think.

Teilhard: who’s to say that a group mind can’t include pockets of individual consciousness that are active now and then? This seems ruled out by the current version of IIT, but I disagree. I think consciousness at very different scales can be nested.

In both scenarios, Shakespeare can be remembered and appreciated. And didn’t he say that there are more things in heaven and earth and all that?

That said, I love your last paragraphs.


Thank you! Incredibly thought-provoking. I’m grateful for your reflections on the Chinese Room; I’ve felt for years that largely functionalist views of consciousness, like Dennett’s, have gotten way more popular airplay than they deserve.

Here’s where the power of the larger argument didn’t quite convince me: “Anything else [other than Shakespeare] will be unrecognizable and immoral, except by its own incommensurate and alien standards.” I think the AI aliens might reply (in their unintelligible language), So what? Come and try us. You’re missing out—you just don’t understand why, yet.

Still, an excellent essay with much to ponder.


Erik, what an excellent article! I found your thoughts around the "Chinese Room" scenario compelling, especially because I have found myself thinking that "conscious" AI will never arrive, that consciousness is locked behind some ontological door which resists approach. BUT, the possibility that the real threat of AI exists in the orthogonality of intelligence and consciousness is disturbing. Hyper intelligent machines becoming so refined and efficient so as to dwarf human intelligence, yet lacking a key component of "being human": scary stuff.

I'm curious to know if you have heard of Iain McGilchrist and his work on brain hemispheres? He has a comprehensive and astounding book called "The Master and His Emissary: The Divided Brain and the Making of the Western World". In it he discusses how our asymmetric brain hemispheres perceive the world in different modes, the left tending towards itemization, categorization, and abstraction, the right tending toward universal awareness, uniqueness, and comprehension. Anyway, his work intersects with your article here, because the work of building machines and technology, broadly speaking, is a predominantly left-hemisphere function. He goes to great lengths to describe how both hemispheres are needed in many tasks, but how the right brain's broad-picture awareness is a primary function, and that the left brain's entire world is a re-presentation of the information that is given to it by the right hemisphere. Carrying this line forward, the left brain tends not to realize that it has any dependencies, and imagines that it is self-sufficient. Not only that, but it further hypothesizes that its own mode of perception is the only useful/worthwhile mode, and that, even worse, there is no other mode of perception. He examines countless studies and examples from split-brain patients to illustrate the point.

The future created by the technology saturated left-brain priorities disregards many of the elements that we would properly identify as human. Namely, artistic appreciation, a sense of wholeness, intuition, and, more importantly, the ability to shift perspectives based on new information.

I would argue that your Shakespearean future is the only one that incorporates right-brained modes of engaging with reality. The other three are all mechanistic/power based, viewing the world as primarily a realm of inanimate things to be dominated and controlled. Our ability to tinker, manipulate, refine, control, categorize, tool, and specify is an essential aspect of being human, but, if divorced from our ability to see the big picture, feel awe and compassion, experience new things, be surprised by beauty, and be humbled in the presence of existence... well, that is a dangerous road.

I'm loving all your work! (About to go read your falsification and consciousness article). You're giving me ideas for my own writing as well. I appreciate the time you put towards this work.

Apr 15, 2022 · Liked by Erik Hoel

I'm not sure that Teilhard is germane to your point; however, I struggled with your grammar in part of a sentence, viz. “Teilhard de Chardin described his idea of a noosphere as an early form of the internet”. How could TdC have compared anything to the “internet” when the good Father died in 1955, well before the earliest mention of the connected computer networks that became known as the Internet in 1974?

I always thought the Noosphere sounded like prayer and reflection, theological constructs from a Christian priest who said Mass every day, transubstantiating bread and wine to flesh and blood.

I'm definitely adopting your War Cry. Humans Forever!


If you’re worried about Chinese Rooms, how do you propose to determine whether any non-human entity is conscious?

Does that require a real, functioning theory of consciousness?

In the absence of such a theory, surely it’s “safer” to assume intelligent actors with, say, opinions on Shakespeare are conscious?


This long-ass piece was worth every second! ;) It also made me think of an essay I once wrote in late 2019 (which I have taken offline though, may rewrite it still) where I ended with “Humanity first.”

The view I was expressing in that essay was that we should ‘short-circuit’ (in the programming/coding jargon sense) a ‘Brighter Future.’

We can find that brighter future through our humanity, not by solely focusing on technology. So if either technology or our humanity can bring about a brighter future, our best bet is to skip the tech and focus on humanity first for the best outcome (or our tech will be our downfall).
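A minimal sketch of that short-circuit analogy (the two functions are hypothetical stand-ins, not anything from the essay):

```python
# In Python, `or` short-circuits: the right-hand side is evaluated only
# if the left-hand side is falsy. Both functions below are hypothetical.
def our_humanity() -> bool:
    return True  # if focusing on humanity delivers a brighter future...

def our_technology() -> bool:
    raise RuntimeError("our downfall")  # ...this branch never even runs

brighter_future = our_humanity() or our_technology()
print(brighter_future)  # True; our_technology() was skipped entirely
```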

Apr 14, 2022 · edited Apr 14, 2022 · Liked by Erik Hoel

Give me human. This life we have is interesting because it has purpose. Not a single global purpose but infinite purposes arising out of new events. The source code is open and it goes on creating new patterns which we call the experience of life, one of which is Bach or Shakespeare; in my case it is Rabindranath Tagore, but that we are different matters. The AI or ubermensch makes us all into one mass of uniform nuts and bolts, but for what? What is the purpose? The existence of such a mass of uniform consciousness is the same as a super blob of matter doing nothing. Elon Musk is just having fun at our expense. So take his technology and just laugh at his philosophy. And go on being human. We can have one common purpose: to preserve our humanity. And STOP the bloody inhumans from killing us or fusing us. Long live humanity!!!

Apr 13, 2022 · Liked by Erik Hoel

Great post. Thank you. I just wanted to say that Greg Egan illustrates the Chinese Room problem vividly and intuitively in his 1990 short story "Learning to Be Me". A masterpiece which I recommend everyone read.

Apr 13, 2022 · Liked by Erik Hoel

I guess I don't understand why so many philosophers assume the Chinese room is not conscious in some form. Seems like a straightforward analogy to our own consciousness - replace "room" with "skull", and "man with Chinese books" with "big pile of neurons", and "slips of paper" with "sensory input data".


These futures remind me of the Reasons from "SMT III: Nocturne." In this game, the world of humans has been destroyed, and the player travels a world of demons to decide which Reason to support, the Reason that will decide the fate of the coming world.

The Reason "Shijima" heralds a world where all is one with the world, and the world is god. This is very similar to the "Pierre Teilhard de Chardin" future.

The Reason "Musubi" promises a world where every living being is isolated from others, yet master of their own isolated part of the universe. Musubi is a rather confusing Reason, but I think its main feature is the rejection of human connection. So, I think there's at least some relation with the "Turing" future, in the sense that human connection may very well be lost if everyone is a Chinese Room.

The Reason "Yosuga" posits that "the strong and beautiful" must lead in the new world, and that there is to be no respect for whatever is "weak". It promises a world full of strife, driven by powerful beings seeking to become more powerful still. This is very much related to the Nietzschean future.

The Reasons Shijima, Musubi, and Yosuga are the main choices, but the player may choose to reject all of them. This can have various consequences. One possibility is that the world returns to the way it was: the "freedom" ending, and your Shakespearean future. But if the player displays cowardice at the decisive moment, the world will remain stuck in its broken intermediate state, with no hope for a future: the "demon" ending.

The final option is to not only reject the answers, but also reject the question: to completely give up one's humanity, and join Lucifer in a fight against Fate itself.

I think these are some interesting parallels, but can we learn something from them? That is a good question. Maybe if one does not only read a summary, but experiences the original work in full, one may learn something about humanity and its value.

Or maybe just read Shakespeare, if you're more comfortable with that.

(This comment is written with the aid of this page: https://megamitensei.fandom.com/wiki/Reason )
