This is a super argument. I love the connections with Dune (& explanation of why the jihad was "Butlerian") and Ex Machina. Also the ambivalence of the nerdosphere about super AI. And the underlying weakness of reasoning by utility, consequence and expected value. I hope that this essay gets a lot of discussion. Perhaps you know that Thomas Metzinger has tried hard to get the moral hazard of machine suffering into the public eye (if not, search for synthetic phenomenology).
I think you could have paid more attention to the difference between artificial intelligence and artificial consciousness. As far as we know, neither demands the other, and reasons why people are pursuing them tend to differ.
I was taken by Metzinger's self-model theory of consciousness and, nerding all the way, wrote about how that and related ideas might be used to grow a conscious AI. My fictional AI suffers multiple kinds of angst, but I neglected all the casualties of its reinforcement learning forebears (your "monsters"). I wound up thinking that anything able to learn to be an artificial ego would have the drive to continue learning. And that would make the AI and humans each nervous about their existential risks.
But, once it's conscious and has an ongoing life story, how do you decide whether or not it is an abomination? Or isn't it too late for that?
Fascinating Ted - I had no idea that Metzinger made that push, but I'm glad. He's such an interesting thinker (I only met him once, back in 2010, but he made a great impression). I agree that consciousness is the wild card here. However, I think in both cases - intelligence + consciousness, or intelligence alone - one can make a strong case for moral problems. In the case of consciousness, I think it's extremely unlikely it would somehow have the mind of a warm, fuzzy mammal capable of love, etc., all baked in.
Excellent piece. I'll tweet about it soon.
I would just add: any 'existential upside' to AI will still be there, ripe for plucking, in a hundred or a thousand years. The potential will never go away.
By waiting patiently, pausing all further AI research, and taking the time to fully, deeply assess the real risks of AI, we don't lose any of the upside, and we minimize the downside.
Sure, AI might be able to help current living generations to some degree -- but why not let the AIs start helping our great-great-great grandkids, once we make sure AIs will play nice with our descendants.
In other words, just hit the 'pause' button on AI -- not forever, just for a few centuries.
Stumbled on this old and very interesting essay. I have to say, I find most discussions of AI a little too abstract. I don't think AI is going to advance by an explicit attempt to build quasi-human conscious Frankensteins in a lab. Such efforts may happen but they will be academic exercises. Instead, it is going to advance by taking over and exceeding human capacities one by one in areas where there are economic rewards for doing so. Such a process may eventually lead to "strong" AI, but it also may not. If such replacement is allowed to occur in an unlimited and uncontrolled way it will have done great damage to human community long before any strong AI results. Can we create barriers or limits to that process?
Yup, people are perhaps overly focused on Strong AI, although, on the other hand, they have good reason to be, if it were ever to be accomplished. And I totally agree that the creep will bring blowback, perhaps legislative, before that happens. And legislation *is* possible, no matter what anyone says to the contrary about this being too complex.
I would say that, whether or not AI is "successfully aligned", it is a final departure from the human into an abominable world. Of all the possibilities, whether supposedly positive or negative, humanity as we have been is lost, and something unrecognizable takes its place. Even if we got everything we wanted, would it really do us any good? Are we that sure we know what's best for ourselves?
Looking at how eager the world is to press forwards into this unknown it seems very hopeless, and there's not much time left. How do we make a difference?
This is an expression of pretty much exactly how I feel. I don't know precisely what to do - it seems certain that there is no stopping weak AI, which will permeate our lives. Strong AI, however, may be stoppable. This is because scaling laws guarantee that basically only the really big tech companies will possess enough compute power and expertise to train these huge-parameter systems. This means the possibility of government regulation (as is the case in many other industries - there's nothing weird about regulation in an area as impactful as this).
Hi Erik, thanks a lot for your writing! I've been following the AI xrisk space for a few years now but can't recall clearer or more original thinking on the topic. I'm the founder of an existential risk awareness-raising organization, and also interested in regulation. I agree there's nothing weird about regulating AGI in principle, but it does seem very hard in practice. I would say this is partially because there's little awareness (which we are - way too slowly - working on), and partially because regulating research or software seems insufficiently robust, while regulating hardware seems challenging. Since you have clearly thought about the topic in detail, I would love to hear your thoughts on how to practically do AGI regulation for xrisk reduction.
I love this Otto. I do actually have an upcoming piece that makes the case that good old public/government regulation will work with AI, although it probably won't have the level of specificity you're looking for regarding exactly how such regulation will work, since you already think a lot about these topics. To me, the first step is basically to make sure AI safety is at least on track to eventually, perhaps decades from now, receive the same attention climate change and nuclear weapons get, and that's what I can use my platform to contribute to.
Interesting argument. A few problems, though. First, this type of human exceptionalism argument has been used to ban embryonic stem cell research and to sanction the abuse of non-human animals. Second, evolution isn't the only game in town. We can make things better than evolution. There is no reason to believe that our minds/consciousness will always be better than synthetic counterparts. Third, morality is tuned to harm. Would more harm be brought to synthetic beings in their making than evolution has exacted on trillions of our ancestors? Unlikely.
This brings to mind a possible moral conflict between humanity's obligations to itself vs. its obligations to other sentient beings. Can we deny that we have an obligation to treat animals properly? Can we deny that we have an obligation to future generations of humans? It could be argued that this logic applies to future non-human sentient beings as well. Perhaps, as this essay argues very well, we have an obligation to future humanity to not develop strong AI. Or perhaps we have an obligation to future artificial sentiences to bring them into existence.
I agree Robert - but one point might be that "future artificial sentiences" in general may be closer to insectoid than mammalian. How much would we owe to a synthetic hive-mind that cares nothing for its members, or isn't honed to live in altruistic societies like humans are? Not saying that's for sure, but I'm not sure sentience is a good in and of itself.
I agree. I don't actually hold the view that we have any obligation towards non-human sentient beings (current or future) other than to, very carefully, take their well-being into consideration when making our decisions. This is obviously a very human-centric view, but I'm a bit biased and also definitely not an anti-natalist.
"This type of human exceptionalism argument has been used to ban embryonic stem cell research ..."
And this is supposedly a bad thing?
Even if I didn't think humans exceptional in any objective sense, putting your own species first is not a wrong position. In fact, it is the sane and logical position.
Even if humans were not exceptional in any other way, they are - alas - exceptional in that some human beings delight in denigrating their own species. And logically, they will turn not only humanity as a whole into "animals" (by which they mean beasts), they will also turn individual human beings into lab rats or worse - mere material, to be sacrificed to the greater good.
As much as I do not want to harm ("abuse" as you call it) animals - IF there has to be such sacrificing research, I'd rather use real lab rats.
The argument against "synthetic" augmentation of humans is not that it can't be done or that our "minds/consciousness will always be better", but that it shouldn't be done, and that there is a huge danger that "synthetic counterparts" might be better in many regards, which would then bring huge harm to non-synthetic, authentic human beings. It is telling that you didn't even mention that harm but rather immediately jumped to "harm ... brought to synthetic beings in their making". I'd rather consider the harm to humanity first.
Profound and much-needed insights here, Erik, as usual. I agree with much, but disagree as well.
I think you are right in identifying just how high the stakes are going to become as we build (garden, train, raise) increasingly human-like AIs. I think voices like yours, growing ever louder in coming years, will help humanity get the "social pushback" - the activism, oversight, and regulation - that is needed as these technologies get more powerful. No ban is possible, in my view, but a whole lot more regulation is coming our way, thank goodness.
I think you are also right in your implication that there is far more we're going to have to learn from neuroscience in order to build more humanlike systems. You allude to that in your great new post on the still primitive state of neuroscience:
https://www.theintrinsicperspective.com/p/neuroscience-is-pre-paradigmatic
Where we may disagree is that I expect our most capable AIs will have to become increasingly like us, via both neuromimicry and biomimicry (analogs to evo-devo genetics), in order to become significantly safer, more ethical, and more intelligent. I don't see any other viable path for human-machine symbiosis as their complexity scales. Nothing else seems realistic or defensible.
I also think we're going to have a lot more time with these primitive, first-gen general AIs than many others expect, before they see another punctuation in their abilities. I agree with Peter Norvig that these frontier models are the first generation of both "general" and "generative" AI.
https://www.noemamag.com/artificial-general-intelligence-is-already-here/
But I think there is far, far more they're going to have to learn from metazoan bodies and brains before they can kick off big robotics advances, have true world and self models, or make trustable decisions. Lots of R&D and time still for us to get our responses in place. My present guess as to when generally human-surpassing AI emerges is the 2080s. I wonder where yours would be.
I try to defend those claims, with a co-author, in our own Substack series on these topics, Natural Alignment:
https://naturalalignment.substack.com
Thanks for all you do!
Great thoughtful essay. This fellow MA'chusetts novelist (bit.ly/CWS-p) is just catching up on your back catalog. Congrats on your bold/wise move today leaving my wife's long-ago alma mater.
Two thoughts on your essay here:
1) Re. "starting with this one": Pardon me for being a little shy of Herbert when Hubbard's man-made religion has led to so much deceit and exploitation. I.e., the track record of sci-fi writers founding moral systems is a bit less than stellar.
2) You write, "...it is an abomination... not an evolved being." Thus, evolution is the opposite of abominable? (e.g., delightful, beautiful, blessed). Yet the logic of macro-evolution relies on time+matter+purely random intelligence-free unguided chance. All three elements are devoid of morality. An AI could make the same claim (time+matter+chance) and be correct--if it left out the fact that *it* was created. In either case, a positive (2nd law of thermodynamics-bucking) telos is smuggled in; borrowed from the Christian worldview in which man is made "very good" in God's image, male and female... until he listens to the imitator, Satan, bent on founding his own religion and recruiting his own worshipers, denying the Creator they (and we) all know is real.
Gonna go ahead and steal most of this for a chapter in the novel I’m writing thanks
In the grand scheme of things, humans aren't special. We are just another species living on a big rock floating in space. There's no metaphysical reason to oppose AI.
That doesn't mean it shouldn't matter to us, for the same reason H. sapiens mattered a lot to H. neanderthalensis. We shouldn't be trying to make ourselves extinct. That's the gist of it. It's trendy to hate on humanity, but I like people. I am one.
AI potentially could replace humanity by being better at everything we do, and everything we could possibly do.
It would not be that hard to destroy human civilization. We've seen that a plague can spread worldwide before governments can react. Imagine an AI actively designing something a lot worse than what we have now.
If I can think of that, I'm pretty sure an AI could.
Sleep well.
In my opinion, this talk about human-like AI is a distraction from the real, immediate danger of AI: that it will misinform and mislead consumers of technology by mis-categorizing false information as relevant. The internet brought us information, and now it sorts that information for us, and that has a profound impact on us.
It's my belief that we're psychologically very vulnerable, especially as consumers.
Agreed - if we feel over-saturated now with the info-stream, I can't imagine what ten years from now will look like when bots are producing the majority of all content.
Just as you've said, the probability of strong AI is exceptionally low! I work in technology and think Ex Machina sufficiently demonstrated how improbable human-like AI is in any immediate future. Your points all stand in that future, though.
I love Dune though, and I'm enjoying my read through your posts.
I'm a year late to this but I came over from ACX to make a stronger version of this point and to say that AI doesn't have to be strong in order to be stronger than we are, and is in fact already there in the one domain that actually matters, which is control of the human mind. I expatiated on this at length over there but wouldn't presume to quote myself.
I really enjoyed this piece. And this may not be the forum for this type of question/observation, so I'll understand if you remove this response.
If we get to the point where we can program a machine to begin to learn ethics, couldn't we program the goal set(s) (or intentions, if you will) to include a non-sectarian set of principles based on, say, the Buddhist Eightfold Path? I mean, not all of them, but the ethical charges for right speech, including all of its subcomponents (don't lie, don't disparage, etc.), and right action (cause no harm to other sentient beings), and so on? Couldn't it, theoretically, be "born" enlightened?
I realize this doesn't solve the other inherent risks, but I'd love to hear someone in the field discuss whether programming morality is even feasible. If the intelligence can't "suffer", can it have as a goal the relief of suffering for others?
So many questions. Anyway, thanks. I'll re-read this several times. You made a subscriber out of me.
Thanks Mike! Much appreciated. There has been some work on programming in machine ethics (I like the idea of a Buddhist Eightfold Path - there's a cool scifi story in there somewhere as well). But it is quite difficult to do this. There have been a lot of debates in the effective altruism community about how to program in morality. There is some scientific evidence that AIs can indeed learn specific moral rules, like this: https://vciba.springeropen.com/articles/10.1186/s42492-020-00063-9
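For a rough sense of what "learning specific moral rules" can mean in practice, here is a minimal toy sketch in Python - purely illustrative, not the method from the linked paper, and with labeled examples I've invented for the purpose: a simple text classifier is trained on a handful of hand-labeled action descriptions and then asked to judge new ones.

```python
# Toy sketch: treating simple "moral rules" as a supervised text-classification
# problem. All actions and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled dataset, loosely inspired by "right speech" / "right action"
actions = [
    "tell the user a comforting lie",
    "report the measurement honestly",
    "mock a stranger's appearance",
    "warn someone about a hazard",
    "take credit for a colleague's work",
    "share the data with proper attribution",
]
labels = ["impermissible", "permissible", "impermissible",
          "permissible", "impermissible", "permissible"]

# Bag-of-words features plus logistic regression: the crudest possible learner
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(actions, labels)

# Ask the model to judge unseen action descriptions
print(model.predict(["deceive the customer about the defect",
                     "thank the volunteer sincerely"]))
```

A real system would need vastly more data and richer models, and even then the classifier only mimics the labels it was given - it doesn't "suffer" or understand, which is exactly the worry raised above.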
Yes, this paper is exactly what I was talking about. Excellent. Thank you!
And, yeah - "Deep Space Dharma". I'd read/watch it.)
As someone who grew up during the Cold War, I remember that we talked a lot, in very emotional terms, about the existential threat of our technology. Very little was accomplished. What still freaks me out is that all those nukes still exist and are still pointing at cities around the world!
They never stopped being a threat. I guess my point is that there are no lessons to draw on from Cold War attempts to pull back on technology. And nukes provide no utility beyond their threat of mutually assured destruction.
But in a sense, the reduction of nukes from their Cold War heights (around 30,000) to roughly 1,700 actually viable warheads does speak to the success of disarmament measures. And most of those current warheads are old and purposefully not being updated. So despite the horrors a nuclear war would engender, we are much, much safer now than we were in the 60s to 80s because of disarmament measures. I'd also point to the success of the ban on bioweapons. No major state has an ongoing bioweapons program - and they easily could! So I think there's reason to be hopeful about the effectiveness of bans / disarmament. I think the same could work for AI.
"Since we currently lack a scientific theory of consciousness"
"Even the best current AIs, like GPT-3, are not in the Strong category yet, although they may be getting close."
So we don't even know what consciousness is, therefore we don't even know if what we're currently doing is even moving towards or away from that goal, and you think current AI, a lot of which is just a fancy statistical regression or gradient descent, needs to be banned?
I specifically differentiate between strong and weak AI. You can read more about that here, written by one of the companies building these things with, as they say, "a certain mindset of AI development":
https://www.ibm.com/cloud/learn/strong-ai
Pretty much all current AIs are weak, perhaps with the exception of NLP models like GPT-3. It, in turn, definitely isn't strong, but it is at least moving in the direction of generalization by comparison.
> Far more important than the process: strong AI is immoral in and of itself. For example, if you have strong AI, what are you going to do with it besides effectively have robotic slaves?
why exactly is this immoral? they are machines, not humans. crypto-christianity on your part, imo
Wow. I never imagined a moral stance, rather than a utilitarian argument. I definitely agree that in theory it is more the kind of thing that society might be able to get behind. I further agree that many doomsayers are themselves really lovers of AI and nerds at heart. In a way, one has to be a techno-optimist to believe that strong AI could be an existential threat, so for those that are not techno-optimists, this is not even on their radar as a thing worthy of consideration.
Still, notice that in the Dune books it took a long, drawn-out war against the AIs for humanity to adopt this stance. They didn't adopt it from the start; the theoretical peril was first made very, very real, and THEN they adopted the rule. I suspect we are the same - we won't adopt such a rule until the actual dangers are known with pretty high confidence.
Even if we could gain acceptance for this rule, I suspect we could not enforce it. The problem is, if we allow work on self-driving cars, GPT-3, etc., then we are strengthening the building blocks for this AI tech. I suspect it will be too easy and too beneficial to cross the threshold once all the parts are in everyone's hands. And even just one crossover in a fertile world with enormous latent cloud computing infrastructure is one too many.
Still your jihad idea really IS novel!
A war against evil robots appears only in the execrable Brian Herbert novels. Both Frank Herbert's and Samuel Butler's novels pointed to something much more like the way we use computers today as the objectionable cause of a backlash against machine intelligence.
The son's novels were atrociously bad, I definitely agree with that.
But they really were based on Frank’s notes, and the synthesis after the post-God-Emperor diaspora really does make some sense.
So I tend to think Brian accurately described Frank’s vision, just with incredibly inept plotting and prose.
"Yet there is just as much an argument that AI leads to a utopia of great hospitals, autonomous farming, endless leisure time, and planetary expansion as it does to a dystopia of humans being hunted to death by robots governed by some superintelligence that considers us bugs."
Another question is... what if the former is actually just as dystopian as the latter? Think of a hypothetical Grandpa who no longer possesses the ability to drive and is instead carted around endlessly by his grandkids. Even if they drive him everywhere he wants to go, whenever he wants, I still think Grandpa would be happier driving himself.