32 Comments

jstaab:

I really like this, it reminds me of the short film Sunshine Bob, which beautifully portrays human helplessness in the face of an increasingly unlistening world.

https://www.youtube.com/watch?v=645OxM9MePA

Erik Hoel:

Had never seen this, it's really good

jstaab:

Dandy Dwarves and their sister channel SCAD Shorts are all really good.

Sean Trott:

Great post––the connection between black box AI and kamis is an interesting insight.

Have you read "The Restless Clock" by Jessica Riskin? In it, she argues that pre-Reformation, people in medieval Europe tended to imbue mechanical automata with a kind of vital spirit as well. I haven't verified the extent to which this is true but I thought it was an interesting connection nonetheless.

It also reminded me of something I've been thinking about with respect to "explainability" and AI. I work with neural language models a fair amount for my own research (like BERT or GPT-3), and these are in some sense prototypical black boxes. Even though in theory, we (or at least someone, somewhere) know the precise matrix of weights specifying the transformations of input across each layer, this feels somehow unsatisfactory as an explanation––perhaps because it doesn't really allow us to make generalizations about *informational properties* of the input? And so there's an odd sense in which practitioners have gone full circle, and we are now using the same battery of psycholinguistic tests––designed to probe the original black box of the human mind––to probe these models. See, for example:

Futrell, R., Wilcox, E., Morita, T., Qian, P., Ballesteros, M., & Levy, R. (2019). Neural language models as psycholinguistic subjects: Representations of syntactic state. arXiv preprint arXiv:1903.03260. (https://arxiv.org/abs/1903.03260)
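
A minimal sketch of what treating a model as a "psycholinguistic subject" can look like in practice, assuming the Hugging Face transformers package and using GPT-2 as a stand-in; the garden-path comparison is a classic illustrative example, not one taken from the paper itself:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_surprisal(sentence: str) -> float:
    """Sum of -log2 p(token | prefix) over the sentence, in bits."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position's logits predict the *next* token, so shift by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return (-token_lp.sum() / torch.log(torch.tensor(2.0))).item()

# Classic garden-path sentence vs. disambiguated control:
print(total_surprisal("The horse raced past the barn fell."))
print(total_surprisal("The horse that was raced past the barn fell."))
```

If the model behaves like a human reader, the garden-path sentence should come out with higher total surprisal than its disambiguated control, just as it produces more processing difficulty in people.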

Erik Hoel:

Had not read "The Restless Clock" but that's very interesting. Agreed on the weird full circle of how we understand high-parameter ANNs. As a neuroscientist it's particularly concerning because, if we can't understand a system to which we have *complete access*, how can we hope to understand the brain?

William Collen:

>During the industrial age, only other humans had minds, never machines.

Of course, animals (including Minerva) have minds, the workings of which may not correspond to those of our own, human, minds. Perhaps animal minds are the original AI? There is a long tradition of anthropomorphizing the thought processes of animals.

>My credit card now has a kami.

Maybe, in a century or so, we will worship specific AIs.

Brad & Butter:

> In a century or so, we will worship specific AIs

> As AIs gradually eat the world

There is a lot of untapped horny potential in this line of thought. "Oh bite me, AI daddy!"

Jokes aside, people used to think of animals as automata, contrary to Eastern traditions (thanks, Enlightenment).

Mo Nastri:

It would be interesting / disconcerting / worrying (I feel some mix of all three) to see how worship-in-practice evolves over time as AIs gradually eat the world.

Charlotte Dune:

Indeed, this went to my promo tab in Gmail even though I've moved you before into my primary. Gmail doesn't like Substack. Every Substack newsletter I get goes to the wrong tab. It makes me not want to use the platform myself... But great article! Love the kami comparison.

Erik Hoel:

Omg, I knew this was happening to some people! Well, thanks for trying to commune with your kami over it. I will say I usually end up with above 50% open rates, so it can't be *that* bad, but agreed, probably the biggest problem with Substack is the black box over how emails get received. Also, thank you!

Loren Christopher:

I get a lot of substacks and roughly half of them end up in promotions - I've learned to read promotions as a second inbox and unsubscribe from actual promotional emails, lol. (This one is always in promotions)

Alex Bennett:

Erik, I truly loved reading this piece -- it hits an intellectual sweet spot -- an intersection of science, mysticism, culture and maybe philosophy -- where to begin?

First, I've had quite a few such experiences -- AI sending my earnest emails into other people's spam folders -- and also human ones, in which a programmer makes "Cretan" errors and the rest of the organization effectively assumes there is no appeal to the man behind the curtain. I once had a senior help-desk technician helping me troubleshoot a problem with his company's anti-virus program -- he ran out of failure modes and said "a virus got in it." That is some sort of meta-Cretan error.

Second, your piece points to the question of an AI that monitors other AIs for impenetrable, inarticulable errors. Might something like that take months of real-world training to catch and repair such errors? And of course it would add some kind of regressive false results -- in which X% of AI errors are solved, yet another X% of AI-monitor errors are introduced. Maybe if there were a way to document the original AI's decisions, the AI monitor could read that documentation, and so perhaps learn how to repair the error in the decision(s) that cause malfunction(s)?
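To make that trade-off concrete, here is a back-of-the-envelope sketch; every rate in it is a made-up placeholder, not an estimate of any real system:

```python
# Stacking a monitor AI on a base AI trades one error rate for another.
base_error = 0.10           # fraction of base-AI decisions that are wrong
monitor_catch = 0.50        # fraction of those errors the monitor repairs
monitor_false_alarm = 0.05  # fraction of *correct* decisions it breaks

net_error = (base_error * (1 - monitor_catch)
             + (1 - base_error) * monitor_false_alarm)
print(f"net error rate: {net_error:.3f}")  # 0.095 -- barely better than 0.10
```

Unless the monitor's false-alarm rate sits well below the base error rate, the stack barely improves on the original system.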

I completely agree with and emotionally support you saying "AI makes animists of us all" but the logical positivist in me is not ready to concede the point. On the one hand, when you say logical positivism denies "meaning to all statements that can’t be tested" my reply is “absolutely they have no meaning if they can’t be tested!” On the other hand, my reply is “what do you mean by ‘test’?” (this is what “truth units” is about, I’ll have something on Medium about it sooner rather than later).

Again, it was truly exciting to read this piece – so well written and so very intellectually engaging – it maps so well one of the truly great frontiers of human culture we face now.

Erik Hoel:

Thanks Alex! I think the biggest problem is that in many of these cases the AIs are working as intended, or, alternatively, there's no way for the corporation to check if the AI is working as intended. For example, Chase cannot possibly see if my card is working as intended, there's no feedback. As you say, if I were put in the position of "training" the card with feedback on errors it might improve, but I really don't think we'll see that anytime soon. However, if we do, they'll be even more Kami-like as we actually train and treat our household objects (fridges, houses, credit cards) more like minds.

Alex Bennett:

Thank you for these thoughts! I felt a little dumb/naive after my post. I think I was trying to be theoretical. As you point out, actually collecting and analyzing error data is problematic (a bit like in the Imitation Game before the dramatic breakthrough) -- then there's the bigger "analysis" of "is this a bug or a feature?" -- like in discrete parts manufacturing, where zero defects is rarely a goal because of its costs -- a certain failure rate (angry customers) is desirable for profitability. Kami-wise, imagine a toaster oven doing a sub-par job in order to avoid legal liability for its manufacturer?

Hippo:

Oh wow this is awesome! Came across your newsletter via The Sample a few days ago (and subscribed with a different email ID), but this one stuck out because it suddenly turns the whole story into a metaphor for something else!

The style reminds me of what we try to approach at my publication, Snipette, and it's really cool how you can suddenly look at the world in a whole new way. Actually, at the first section I was like "okay, typical credit card complaint/fraud story", but then I remembered the title, and then I read on down, and then I was just "whoa 😮" all the way through!

Erik Hoel:

Thank you! It's my philosophy that the best essays always have twists, some sort of Prestige somewhere. Also, quite glad to hear about The Sample, I like it as well.

Hippo:

Yep. One of the best ways to end an article is with a twist! I'm glad I found your newsletter and looking forward to more :D

Dan Oblinger:

Erik, this is such an odd attack on AI. Basically you are disappointed that the decision surfaces used to make decisions are not simple. Would you want college admissions to be executed on a simple decision surface? Whom you should marry? Which joke is funny?

The thing that is new is that the complex, inarticulate surface is not originating within a biological entity. I agree that is new. But then you go on to rail against the injustice of inarticulate surfaces that affect your life.

But really, would you want the outcome of your next date to be determined by a simple surface?

Yet those outcomes have HUGE impact.

We have always lived in a world with opaque decisions affecting our most prized things. We just called it... life...

Étienne Fortier-Dubois:

The metaphor of the kami is a good one, but here's another. Before algorithms, decisions like those were made by people. Usually we can understand people to a large extent because we are similar to them, but sometimes not: Consider, for instance, moving to a country whose culture and language you understand poorly. You want to buy a property, you go to a bank, and the banker tells you in very laborious English that your application was denied. He's unable to explain why due to the language barrier, and you can't guess because you don't know the culture.

And of course, cultural distances can also occur within a single country, for example due to social class.

Algorithms, first applied to people (decision trees, etc.), and then to machines, have bridged cultural distances and made many processes legible, but perhaps it was indeed just a matter of time before those cultural distances became gulfs again.

Marco Enrico:

I really liked it, and yes: my newsletter keeps going to Promotions...

Jan Sand:

I am not a technician, nor do I have any up-to-date training in information technology; my last encounter with real circuitry occurred back in 1944, when I trained in the US Army Air Force as a radar technician. But it seems obvious to me that almost all the essential controls of our current civilization have been ceded to the digital complex, which is easily invaded by the growing decision-making AI complex, creating that SF monster Frankenstein. It could easily relegate all of humanity to the fictional meta-game-playing jungles of infinite fantasy while the real world is deftly handled by a rapidly evolving intellect, more powerful and impenetrably complex, wherein humanity, if it does not ultimately destroy everything, will become amusing pussycats of no real power or importance.

Jan Sand:

I looked you up on Google and discovered you are a neuroscientist. I am a retired industrial designer, and the brain has vastly interested me since one of my early assignments, with an exhibition designer in New York City who created an exhibit on the brain for Upjohn Pharmaceuticals back in 1960. I worked with a scientist and we took apart several human brains to see the anatomical complexities of their basic components. In those early days very little was understood of that marvelous complex, but through the years my reading in sources such as Scientific American clued me in to that minimum available to an ignoramus such as myself.

But this introduction led me to question whether to accept that rather crude digital circuitry could ever encompass the complex universe of a living brain, with its billions of interconnections of living cells that automatically interact in various metaphorical abstractions, for independent processes of what we simply accept as thinking, beyond the concept of intent, which is driven by the living requirements of surviving in a universe of almost infinite probabilities. A living complex of dynamic neurons is altogether different from the interactions of a complex of computer elements, and it seems to me that the different internal living neuron societies contain inherent complexities far beyond the possibilities of even, perhaps, the ultimate reaches of quantum computation.

When I attempt poetry on my blog at https://jansandhere.wordpress.com/ , I find I am more a spectator being fed the next words than someone consciously constructing an interacting compilation of interesting words, and the final results are very frequently as much a surprise to me as to a reader. I suspect much creative effort by both artists and scientists is the result of the same subconscious process.

Linda George:

Excellent explanation of why my Chase Amazon card declines purchases of $1.22 then accepts one of $14.95.

Thanks.

Joe Canimal:

I suppose I fail to see the difference between the new world you describe and the old world. It used to be that the ancients observed patterns in nature and ascribed them to spirits, or Laws, or The Gods -- is any of this different from thinking there's a "black box" and trying to form a hypostatic abstraction that conciliates various observations? We think we know other people but their real internals are beyond us (even conceptually, given quantum free will and all that, and certainly practically given the complexity of the brain, not to mention reflexivity and the like) -- isn't there a black box there, too? (Skinner and disciples certainly got a lot of leverage from the idea.)

We're always doing model induction! AI is a new, complicated, prevalent thing, but there are plenty of other examples all around. The new Kami is just the Gods of the Copybook Headings.

[unnamed commenter]:

Ethology has lately been displacing behaviorism in studies of animal behavior, as it turns out Skinner and his disciples did a much better job of producing pronouncements suitable for impressing laymen than models for accurately predicting how animals (including, but not exclusive to, humans) will behave or can be trained.

As an engineer, I am not at all comforted by the argument that since old-style gods are unintrospectable and unaccountable, it's okay if engineered gods are too. If we can't do better, I see no really compelling reason why we should do at all.

Joe Canimal:

We are on the same page, I think. My point is that everything begins as a black box which we progressively open and explore by making model inferences, using them to make predictions, testing and refining the predictions, and conciliating. I think my point of difference may just be that I don't see us as at the dawn of some new age of transcendental AI; rather, we've been in a synechistic world where "[a]ll communication from mind to mind is through continuity of being” (CP 7.572) this whole time.

Angus:

It's a very evocative metaphor

Brad & Butter:

I can't help but think that, if kami live inside every IoT device, sooner or later some of us will go the route of séances and ritual possessions; think of it as Mega Man Battle Network technology blended with spiritual forces akin to SMT or Persona 5. (Assuming "There Is No AI Risk" is real, that is.) Memetic fusion with the body would possibly become the new vehicle for personal transformation.

Comment removed (Feb 18, 2022)

Erik Hoel:

I don't know how it developed this grudge, but a grudge it certainly has.

Comment deleted (Feb 16, 2022)

Erik Hoel:

It's certainly possible that neurosymbolic AI takes off, but it reminds me of the hype around GOFAI and other symbolic approaches, which waxed and waned from the 1960s onward. Since college (in the days before deep learning!!) I've been on the side of Pinker and (his student) Gary Marcus, who argued similarly. But the deep learning revolution took everyone by surprise. For me, the breaking point was GPT-3, which is, I think, unreasonably, freakishly, terrifyingly good. So in terms of what I've been exposed to, the really mind-breaking stuff has been extremely high-parameter ANNs. If I had to guess, things will continue that way, and while there will be some high-level stitching together (like making a GPT-X into a Broca's area), everything will just be scaling laws that can be taken advantage of only by really big corporations. But that's just my personal prediction.
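For reference, the scaling laws in question take the form of smooth power laws in model size. A rough sketch of the parameter-count law from Kaplan et al. (2020), with their fitted constants used purely for illustration:

```python
# Test loss as a power law in (non-embedding) parameter count N,
# after Kaplan et al. (2020): L(N) = (N_c / N) ** alpha_N.
# The constants below are their published fits; treat as illustrative.
def predicted_loss(n_params: float,
                   n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1.75e11):  # up to roughly GPT-3 scale
    print(f"{n:.0e} params -> loss ~ {predicted_loss(n):.2f}")
```

The point of the smooth curve is exactly the one above: whoever can afford the next order of magnitude of parameters gets the next predictable drop in loss.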

Brad & Butter:

What is your stance on Graph AI, then? It is definitely a new field, and it is by default social-network friendly.

Comment deleted (Feb 16, 2022, edited)

[unnamed commenter]:

That symbolic AI takes a lot longer to develop, I think, more or less guarantees it is never going to take off in industry, not unless it is qualitatively so much better at achieving institutional goals that it's possible to justify the cost of replacing ANNs that will by then be deeply entrenched.

I agree that, if we're going to have AI, this is the kind we want. But it is going to have to be a *lot* better.
