I dunno, dude. I think I disagree with your initial premise. I think Humanness is the first thing we need to get rid of, even if we accept as sacrosanct the fundamentally egotistical premise that Human survival should be some kind of prime imperative.
It’s thanks to Humanness that we’re wiping out 70,000-130,000 species every year. It’s thanks to hyperbolic discounting that today’s minor inconvenience is always more real to us than tomorrow’s global catastrophe. Family Values keep us churning out self-centred parasites, most of whom will just sit on the couch snarfing Cheetos and playing Animal Crossing until the ceiling crashes in, who we will defend with our lives for no better reason than that molecules have tricked us into making other molecules like them.
Of course, this isn’t just Human nature. Short-sighted selfishness is an inevitable hallmark of any organism forged by natural selection, because natural selection has no foresight. The difference in our case is the force-multiplier of Human intellect: that thing we could use to control our instincts, but instead use to make up complex rationalizations promoting them. The problem is that brains which evolved to handle short-term problems with local impacts now run technology with global impacts and long-term consequences. That’s Human.
We’re behaving completely naturally, of course. But it’s not working, is it? Being Human is killing us and our own life-support systems. Being Human threatens the survival of complex society itself.
We want to have a hope in hell of pulling this out of the fire, we gotta start behaving unnaturally ASAP.
There are indications of ways we might do that. Certain brain injuries that strip away Family Values, improving our ability to make ethical choices even if they don’t benefit our own larvae. Diseases that have the promising side-effect of suppressing the religious impulse. Hell, if we could just edit hyperbolic discounting out of the human mindset, get the gut to recognize the reality of future consequences, we’d be halfway home. Ironically, the best hope of saving Humanity might be by making us less Human. IMO this would be a good thing, both for our species and for all the others we’re wiping out—because this thing we are now, Erik, this thing you want to persist endlessly unchanged into the future: dude, it sucks.
Also I’m not entirely convinced that the incomprehension of the little guy in the Chinese Room really proves anything. Of course he doesn’t understand Chinese, any more than a single neuron would be able to tell you what the whole brain is thinking. Surely the question is whether the system as a whole comprehends, not whether any given gear or cog does.
But isn't the whole point of Blindsight at the end, with Siri Keeton heading back to Earth, that it's pretty horrible that he might be the last sentient creature in the whole universe? And he's happy to finally be Human, maybe the last Human (indicating that there's some sort of moral worth there). At least, that's how I always read it.
Being the only sentient creature in the universe would be a huge relief. I wouldn't have to feel depressed the way I do now about human suffering and the cruelty that humans inflict on each other.
Well, we should probably not forget that Siri is not exactly the most reliable narrator on the board. Which will be brought home with renewed ferocity if I ever get off my ass and finish Omniscience.
But there's a reason he mourns the possibility of being the last "sentient" being in the universe, and not the last "human" one. Unless you buy into the whole free-energy-minimization thing (which I'm thinking ultimately I might, but I'd never even heard of those guys back when I was writing Blindsight), there's no reason that sentience and short-sighted survival imperatives have to be functionally coupled. Survival drives can be implemented without conscious awareness (in fact, they usually are). And there's no de facto reason why a constructed sentient being (as opposed to an evolved one) should give a rat's ass whether it lives or dies (unless you deliberately give it those drives, and if you do you're asking for whatever you get). Siri is lonely. He misses companionship. He glimpses a vast and wondrous universe, and mourns that he may have no one to share it with.
He certainly doesn't mourn all the blind short-sighted brainstem instincts that make us pollute that universe everywhere we set foot.
I think I agree with both of you? I don't particularly value humanity for its own sake (the existence of the broiler chicken is argument enough against letting us into the galactic confederation), but consciousness is the only thing that gives the universe anything approximating meaning. I agree we should be careful about anything that might extinguish that candle, but think there's clear room for improvement before we spread out across the galaxy.
> And there's no de facto reason why a constructed sentient being (as opposed to an evolved one) should give a rat's ass whether it lives or dies (unless you deliberately give it those drives, and if you do you're asking for whatever you get)
If it has any drives at all, desire to survive will appear as an instrumental goal. https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
Yeah, I only just stumbled across this Instrumental Convergence stuff the other day. Thanks for the paper!
"It’s thanks to Humanness that we’re wiping out 70,000-130,000 species every year. It’s thanks to hyperbolic discounting that today’s minor inconvenience is always more real to us than tomorrow’s global catastrophe. Family Values keep us churning out self-centred parasites, most of whom will just sit on the couch snarfing Cheetos and playing Animal Crossing until the ceiling crashes in, who we will defend with our lives for no better reason than that molecules have tricked us into making other molecules like them."
But it is also humanness that allows you to be disgusted by any of these things. Eradicating humanness does not merely (or even necessarily?) eradicate the actions we find abhorrent, it eradicates the very concept of abhorrence.
It's like the old quote that "he who despises himself still recognizes himself as one who despises." Humanity is often rightly disgusted with itself, but it is also the source of the concept of disgust, and the only thing we have ever encountered capable of recognizing and expressing it. What have we gained when we eradicate not the things we find awful, but the things that found it awful?
The word that comes most strongly to my mind is "chauvinism", in its original sense.
Granted, I'm right there with whoever it was who spoke of turning to your work when "[their] will to live grew too strong" - I don't think there is nothing of worth in the species, which sometimes is the vibe I get from your stuff, but I likewise can't agree that *everything* in the species is of worth, as the essay under discussion seems to posit.
Somewhere in the Laundry series, Stross has a character say something along the lines that, if whatever survives this still cherishes a memory of having once been human, that counts as a win. Seems to me like the right way to look at it.
This reminds me that SF has often returned to the theme of the unmodified versus the modified. Usually they are factions at war with each other. The real lesson there is that humans like to draw a line in the sand more than they like to compromise for co-existence. In the long run I think there would be a third polarity, the anti-natalists who just think we ought to p!ss off and die. A Forever War with three factions that can never become allies would be quite the thing, and oh, so human. If Erik (without any such intention) became a prophet of the Unmodified, perhaps Peter (also against his will) would become a prophet of the Modified? Of the Anti-natalists? Maybe those factions would define themselves by differences in Wattsian interpretations.
But, seriously, it was another marvelous, provocative, and wide-ranging essay. And, for those who don’t know, Peter Watts has applied most of the themes herein to some head-spinning science fiction.
I think it will be less a distinction between enhanced and unenhanced humanity. It will be more of an addition to what does already exist: an enhancement of talent for various professions and arts through teaching and meritocratic selection. This will be further enhanced through genetic engineering.
The problem will be keeping inner peace in such a multitalented society. Remarkably, free societies have already accomplished that during some periods since the onset of professionalism, e.g. in Sumer. I take it that conflict and peace will keep alternating for some time to come as well.
It reminds me of a Roman fable relayed by Livy, in which the consul Menenius (503 BC) told the soldiers a fable about the parts of the human body (e.g. the stomach and the lungs) and how each has its own purpose in the greater function of the body.
I am not sure the Nietzschean future ends the way you predict. An interesting alternate reality, as portrayed in "Beggars In Spain", is that GM people recognize the humanness of the non-GM. Right now, people in Mensa don't say that low-IQ people are sub-human. (I presume, having never been to a Mensa meeting.)
Beggars in Spain was a great book, glad to see it still referenced
I'd think the Nietzschean future is hard to avoid, given that the meritocratic present looks an awful lot like the Nietzschean future.
Especially so, given that Michael Young accurately predicted the meritocratic present back in 1958 in his essay/short novel "The Rise of the Meritocracy 1870-2033: An Essay on Education and Equality".
If we very generously assume the Nietzschean scenario as a premise, we can trivially infer that übermenschen who don't empathize with wild-type humans will quickly outcompete those who do; the empathizers are simply less effective sociopaths than those who make no exceptions about whom they'll unhesitatingly victimize.
Granted, I think the whole article is bunk, or at absolute minimum so wildly caricatured for effect that the intent buckles under the weight of hyperbole. But if we ignore that for the moment, I don't think your objection holds water.
Lots to think about here! Well done. Is there no beautiful combination of the 4 that would be possible?!
Boy, Erik -- that one was a lot of work. Thanks! Here's something to think about re: evolution -- your view is, of course, consistent with most of current scientific thought. But what if one considers evolution as bipartite? The first is the closed system inside individual organisms against their temporary competitors. The faster zebra wins the race. But that's not all there is to evolution -- the second involves inter-agent coordination. Once you consider this, it directly follows that inter-agent coordination is trans-species, as it must be. And that means you might actually understand what the lion is saying.
hmmm, that's an interesting thought - but I'm not sure how much coordination there is between lions and humans! Perhaps there is coordination between lions and their prey though, in unexpected ways (or like, with their parasites, etc).
"Perhaps there is coordination between lions and their prey"
One proposed explanation of "stotting" is that it is a coordination signal to the lion: "I am healthy and fast, so chasing me would waste both of our time and energy"
Think about your pet. At lower informational levels (hardware/gene coding) what you wrote is absolutely correct. But at higher complexity (software/memetics), eh, not so much. But then you run into another problem -- meeting someone/some species that functions at a much higher level of complexity than humans do. It's inconceivable to us, because complexity ceilings work on our understanding as well. Recommend either my work on structural memetics (collective intelligence/knowledge structure formation) or M. L. Commons's Model of Hierarchical Complexity (independent agent scale).
Another fantastic essay
I have always thought that the "Chinese room" thought experiment was a red herring. Being human, we tend to focus our attention on the man in the room, and it is obvious that the *man* does not understand Chinese; he's just blindly following rules. However, the *room itself* obviously does understand Chinese, since it is able to produce Chinese text fluently. The logical rules, filing cabinets, etc. function like individual neurons in a brain; simple in isolation, but with complex emergent behavior when grouped together.
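As a purely illustrative toy (the miniature rulebook below is made up; a real room would need astronomically more, and more structured, rules), the setup maps naturally onto a lookup table plus a blind rule-follower, which makes it easy to see where any "understanding" would have to live: in the whole pipeline, not in the follower.

```python
# Toy "Chinese room": a rulebook plus an operator who matches symbols blindly.
# The two rulebook entries are invented for illustration only.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operator(symbols: str) -> str:
    """The man in the room: matches squiggles to squiggles, no comprehension needed."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

def chinese_room(note: str) -> str:
    """The room as a whole: input slot -> operator consulting the files -> output slot."""
    return operator(note)

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```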
The "Chinese room" thought experiment is also a bit dated in the post-GPT3 period, since we now have actual, working "Chinese rooms" -- software that can produce fluent natural language. Large language models (LLMs) clearly "understand Chinese" in the sense that they can easily parse Chinese inputs, produce appropriate responses, and accurately translate Chinese to and from a variety of other languages, so anybody who claims that they don't "understand Chinese" must be using the word "understand" in a non-obvious way.
One clear difference is that we no longer consider "translating and/or producing fluent natural language" to be synonymous with human-level intelligence. GPT3/PaLM are very fluent, but don't (yet) pass the Turing test for practitioners who are aware of their limitations; it is not hard to steer a conversation with them into the territory of machine-generated gibberish.
The main sticking point is that humans don't just produce natural language, we *use* natural language for particular purposes: e.g. to communicate information, persuade others, achieve higher social status, or pursue personal goals. Large language models (LLMs) have no personal goals, no long-term memory of past interactions, no ability to form social connections with (or even to remember) other people, and limited ability to do logical reasoning (especially with respect to the physical world) or to distinguish fact from fiction. These are all skills related to "acting as an agent in the real world," which are somewhat orthogonal to "understanding natural language."
Thus, the Chinese room is a distraction. The real question is "what sort of agent-like skills do humans possess, that AI does not?" Perhaps self-awareness? Once an AI agent starts to operate in the real world, starts to understand, model and predict the real world, and takes actions which have observable consequences, then at a certain level of "intelligence", it will notice that it, itself, is an entity of interest in the world that it is modeling. It will become "self-aware". Self-awareness, like many things, is a spectrum. If an agent begins to reflect upon, and analyze, its own motivations and reasoning, then I see little reason to doubt that it could become as self-aware as a human.
Are there other skills? Social connections and social status? Long-term friendships and relationships? Curiosity? Doesn't seem impossible...
Note that this clarification doesn't take away from the main point of your article, which is that even if AI becomes self-aware, socially-adept, intrinsically-motivated, etc. in addition to being able to use natural language, it will still be a non-human alien intelligence. (I've left out "conscious", because there are wildly-differing definitions of that word, even when people try to define it at all.)
Nevertheless, there is an important consideration that you've left out. A non-human alien society could have all of the same richness of culture, art, gossip, triumph and tragedy, etc. as human society. Could they appreciate Shakespeare? Perhaps not. Does that matter? Could we appreciate the great Octopus bard, who wrote such heartfelt dramas about the joy of solitude, or the triumphant feeling that Octopus mothers feel as they sacrifice their lives for their 1000 children? Perhaps not...
Turing: It seems plausible (and it is also implied by IIT I think) that whether an AI (either built from scratch or based on a human mind upload) is conscious depends on the physical substrate upon which the AI is implemented. Building a physical substrate able to support consciousness is an engineering problem, one that we’ll solve one day I think.
Teilhard: who’s to say that a group mind can’t include pockets of individual consciousness that are active now and then? This seems ruled out by the current version of IIT, but I disagree. I think consciousness at very different scales can be nested.
In both scenarios, Shakespeare can be remembered and appreciated. And didn’t he say that there are more things in heaven and earth and all that?
Said this, love your last paragraphs.
I think a lot depends on the use of "substrate" here. For example, if substrate just means "physical material" then yes, almost certainly consciousness is multiply-realizable (something even Searle would agree on) in that if you designed a silicon brain that was basically as identical as possible to a real brain, it would be conscious. But this is then interpreted to mean "consciousness doesn't depend on implementation at all, just the behavioral profile" and that's what I'm saying is incorrect.
Yes I mean physical material substrate, without excluding exotic substrates like plasma, superfluids, neutron stars, or perhaps even blobs of bare quantum fields. In fact, I don’t think silicon devices like those we are familiar with could be conscious, but since matter can (you and I are living proof) I’m sure we’ll engineer conscious matter one day.
This is such an odd belief to have. It's like saying: I am sure the Mario Brothers video game can be built on a computer, but not on one that uses PNP transistors. The properties of consciousness are so remote from the substrate that I would expect one to have no bearing on the other. Do you believe that consciousness is computable? Are you a reductionist? I am just surprised by this common conjecture.
Thank you! Incredibly thought-provoking. I’m grateful for your reflections on the Chinese Room; I’ve felt for years that largely functional views of consciousness, like Dennett’s, have gotten way more popular airplay than they deserve.
Here’s where the power of the larger argument didn’t quite convince me: “Anything else [other than Shakespeare] will be unrecognizable and immoral, except by its own incommensurate and alien standards.” I think the AI aliens might reply (in their unintelligible language), So what? Come and try us. You’re missing out—you just don’t understand why, yet.
Still, an excellent essay with much to ponder.
Erik, what an excellent article! I found your thoughts around the "Chinese Room" scenario compelling, especially because I have found myself thinking that "conscious" AI will never arrive, that consciousness is locked behind some ontological door which resists approach. BUT, the possibility that the real threat of AI exists in the orthogonality of intelligence and consciousness is disturbing. Hyper intelligent machines becoming so refined and efficient so as to dwarf human intelligence, yet lacking a key component of "being human": scary stuff.
I'm curious to know if you have heard of Iain McGilchrist and his work on brain hemispheres? He has a comprehensive and astounding book called "The Master and His Emissary: The Divided Brain and the Making of the Western World". In it he discusses how our asymmetric brain hemispheres perceive the world in different modes, the left tending towards itemization, categorization, and abstraction, the right tending toward universal awareness, uniqueness, and comprehension. Anyway, his work intersects with your article here, because the work of building machines and technology, broadly speaking, is a predominantly left-hemisphere function. He goes to great lengths to describe how both hemispheres are needed in many tasks, but how the right brain's broad picture awareness is a primary function, and that the left brain's entire world is a re-presentation of the information that is given to it by the right hemisphere. Carrying this line forward, the left brain tends not to realize that it has any dependencies, and imagines that it is self-sufficient. Not only that, but it further hypothesizes that its own mode of perception is the only useful/worthwhile mode, or, even worse, that there is no other mode of perception. He examines countless studies and examples from split brain patients to illustrate the point.
The future created by technology-saturated left-brain priorities disregards many of the elements that we would properly identify as human: artistic appreciation, a sense of wholeness, intuition, and, more importantly, the ability to shift perspectives based on new information.
I would argue that your Shakespearean future is the only one that incorporates right-brained modes of engaging with reality. The other three are all mechanistic/power based, viewing the world as primarily a realm of inanimate things to be dominated and controlled. Our ability to tinker, manipulate, refine, control, categorize, tool, and specify is an essential aspect of being human, but, if divorced from our ability to see the big picture, feel awe and compassion, experience new things, be surprised by beauty, and be humbled in the presence of existence... well, that is a dangerous road.
I'm loving all your work! (About to go read your falsification and consciousness article). You're giving me ideas for my own writing as well. I appreciate the time you put towards this work.
Thank you for your kind comment Caleb. I do know of Iain's work - I think you're right that the Shakespearian future would be the only one that meshed with his conception.
I'm not sure that Teilhard is germane to your point; however, I struggled with the grammar of part of a sentence, viz. “Teilhard de Chardin described his idea of a noosphere as an early form of the internet”. How could TdC have compared anything to the “internet” when the good Father died in 1955, well before the earliest mention of the connected computer networks that became known as the Internet in 1974?
I always thought the Noosphere sounded like prayer and reflection, theological constructs from a Christian priest who said Mass every day, transubstantiating bread and wine to flesh and blood.
I'm definitely adopting your War Cry. Humans Forever!
Thanks, that's a fair point as to the grammatical failure - I'll push a new less-time-traveling sentence now.
If you’re worried about Chinese Rooms, how do you propose to determine whether any non-human entity is conscious?
Does that require a real, functioning theory of consciousness?
In the absence of such a theory, surely it’s “safer” to assume intelligent actors with, say, opinions on Shakespeare are conscious?
Even if you don't have a "final theory" of consciousness (i.e., something widely regarded as true by most of the field) you can offer predictions concerning consciousness based on the current theories; e.g., theories like global workspace would require some sort of actual global workspace somewhere. Same for IIT, which would require integration. Brain complexity correlates with consciousness, which is something a host of papers show. But does such complexity actually correlate with intelligence? After all, AIXI is very simple in its working, and it is the only schema for universal intelligence we know about. That is, much of what we currently know or think about consciousness, even if we don't understand it fully, implies that the orthogonality thesis is, in my opinion, likely true (at minimum it implies that it could be true, and therefore Chinese Rooms are a real possibility). Or to put it plainly: if there is indeed an orthogonality between consciousness and intelligence, it's not safe to assume anything. The alternative is to maintain that consciousness is simply a function of behavior, but this assumption is not informative (literally, as a tautology, it generates zero information), so I think it shouldn't be treated as any sort of reasonable framework or theory.
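As a point of reference for "very simple in its working": Hutter's standard AIXI definition (sketched here from memory, so treat the notation as approximate) is a single expectimax expression over all programs $q$ on a universal machine $U$ consistent with the interaction history, weighted by program length:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Nothing in that expression looks anything like a global workspace or an integrated causal structure, which is roughly the sense in which intelligence so defined floats free of the machinery the consciousness theories care about.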
This long-ass piece was worth every second! ;) It also made me think of an essay I once wrote in late 2019 (which I have taken offline though, may rewrite it still) where I ended with “Humanity first.”
The view I was expressing in that essay was that we should ‘short-circuit’ (in the programming/coding jargon sense) a ‘Brighter Future.’
We can find that brighter future through our humanity, not by focusing solely on technology. So if either technology or our humanity can bring about a brighter future, our best bet is to skip tech and evaluate humanity first for the best outcome (otherwise our tech will be our downfall).
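A rough sketch of the jargon being borrowed here (the function names are hypothetical, invented purely to mirror the analogy): in most languages, `x or y` stops evaluating as soon as `x` already settles the answer.

```python
# Illustrative only: "short-circuit" evaluation in the programming sense.
# In `x or y`, y is never evaluated when x already yields a truthy result.

def brighter_future_via_humanity() -> bool:
    return True  # assume humanity can deliver the brighter future on its own

def brighter_future_via_technology() -> bool:
    raise RuntimeError("tech becomes our downfall")  # never reached if the first branch succeeds

# Evaluating humanity first short-circuits: the riskier branch is skipped entirely.
outcome = brighter_future_via_humanity() or brighter_future_via_technology()
print(outcome)  # True
```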
Give me human. This life we have is interesting because it has purpose. Not a single global purpose but infinite purposes arising out of new events. The source code is open and it goes on creating new patterns which we call the experience of life, one of which is Bach or Shakespeare; in my case it is Rabindranath Tagore, but the fact that we are different matters. The AI or ubermensch makes us all into one mass of uniform nuts and bolts, but for what? What is the purpose? The existence of such a mass of uniform consciousness is the same as a super blob of matter doing nothing. Elon Musk is just having fun at our expense. So take his technology and just laugh at his philosophy. And go on being human. We can have one common purpose: to preserve our humanity. And STOP the bloody inhumans from killing us or fusing us. Long live humanity!!!
Great post. Thank you. I just wanted to say that Greg Egan illustrates the Chinese Room problem vividly and intuitively in his 1990 short story "Learning To Be Me." A masterpiece which I recommend to anyone.
I guess I don't understand why so many philosophers assume the Chinese room is not conscious in some form. Seems like a straightforward analogy to our own consciousness - replace "room" with "skull", and "man with Chinese books" with "big pile of neurons", and "slips of paper" with "sensory input data".
I think this analogy only holds true if implementation doesn't matter at all. All current theories of consciousness (from Francis Crick's 40 Hz gamma band to IIT to global workspace theories to higher-order theories) depend on the specifics of implementations. We could just say those are wrong, but that leaves the sort of system-level input/output functionalism that we've proven is unfalsifiable and not a scientific sort of hypothesis.
I think the analogy works if you believe that the Chinese Room argument actually contains a sufficient argument that the room is not conscious. Searle's argument seems to be unaffected by the presence of something in the room happening at the 40 Hz gamma band, so even given a Chinese room that was not actually conscious and also lacked those 40 Hz waves, it would seem to me that at least one of Crick's theory or Searle's argument must fail as an explanation of why that room is unconscious. I don't detect in Searle's argument any criteria used to judge the room unconscious that an alien species with a different sort of brain than our own would not be able to use to rule us unconscious, so it just seems to fail as an argument even if it could happen to be true that such a room would be unconscious.
Hmmm, analogies are simply appropriate mappings or not. And here the mapping is inappropriate, since it's eliding that there is a huge difference between "look-up table" and "big pile of neurons" (same with the alien species analogy, which elides the same thing). All modern scientific theories of consciousness would say that look-up tables are not conscious, and for big piles of neurons it would depend on how they are hooked up: i.e., their internal structure. I think some of the confusion is that people want an answer from Searle, but Searle isn't providing a theory, he's providing an example.
The "lookup table" containing rules to converse in Chinese would necessarily have analogous complexity to interconnected neural tissue. The man trapped inside is then analogous to the arrow of time in that quantum-chemical system. This is, I think, the place where intuition fails.
It absolutely would not have analogous complexity.
These futures remind me of the Reasons from "SMT III: Nocturne." In this game, the world of humans has been destroyed, and the player travels a world of demons to decide which Reason to support, the Reason that will decide the fate of the coming world.
The reason "Shijima" heralds a world where all is one with the world, and the world is god. This is very similar to the "Pierre Teilhard de Chardin" future.
The reason "Musubi" promises a world where every living being is isolated from others, yet master of their own isolated part of the universe. Musubi is a rather confusing Reason, but I think it's main feature is the rejection of human connection. So, I think there's at least some relation with the "Turing" future, in the sense that human connection may very well be lost if everyone is a Chinese Room.
The reason "Yosuga" posits that "the strong and beautiful" must lead in the new world, and that there is to be no respect for whatever is "weak". It promises a world full of strife, by powerful beings seeking to become more powerful still. This is very much related to The Nietzschean future.
The reasons Shijima, Musubi, and Yosuga are all the main choices, but the player may choose to reject all of them. This can have various consequences. One possibility is that the world returns to the way it was, the "freedom" ending, and your Shakespearean future. But if the player displays cowardice at the decisive moment, the world will remain stuck in its broken intermediate state, with no hope for a future: the "demon" ending.
The final option is to not only reject the answers, but also reject the question: to completely give up one's humanity, and join Lucifer in a fight against Fate itself.
I think these are some interesting parallels, but can we learn something from them? That is a good question. Maybe if one does not only read a summary, but experiences the original work in full, one may learn something about humanity and its value.
Or maybe just read Shakespeare, if you're more comfortable with that.
(This comment is written with the aid of this page: https://megamitensei.fandom.com/wiki/Reason )
Had no idea about this game, interesting that the main choices link up to the main futures on offer
Well, this segues into my second point, actually. You choose to "anchor" your conception of humanity to Shakespeare's works, probably because that's the piece of art that best captures your idea of human values. That is, that you know of. Yet you also admit that anchoring your values in the past prevents improvement in causes you believe are good.
My personal solution to this dilemma is to keep looking for new art worthy of our human values, wherever it may appear! You have already suggested looking beyond books ( https://erikhoel.substack.com/p/the-future-of-literature-is-video ), which is good, but I think we can do better. The list in your article consists mainly of games that are both 1. "western" and 2. "marketed" as "narrative". This means you, for example, risk missing out on the stories in Japanese RPGs like SMT. There are more examples of recent "hidden stories" (both east and west) that deal with aspects of humanity in a way I haven't seen elsewhere, but this is not the place for an attempt at a list.
That said, I cannot resist recommending the excellent "visual novel" (VN) "Baldr Sky", because it is very relevant to this post. In this tale, the precursors of your futures exist in parallel: genetically "optimized" humans, neuralinked students, a digital Chinese room that said students call "Mother", a cult that claims the virtual world is the real one (*vague mumbling of metaverse in the distance*), and more. The story also plays with the fact that it is a branching story, a playful "deconstruction" of the concept of "routes" that most VNs use.
Of course, I can understand that for some, this VN may indeed be too uncomfortable to read. It was not created for a western audience, merely translated into English. As such, the very minor amount of pornographic content has not been removed. I personally don't think it distracts from the story, and it is easy to fast-forward, but well, some people do not like even trace amounts of pornography.