62 Comments
Sep 21, 2022 · Liked by Erik Hoel

Also: ask one hundred humans to draw a horse riding an astronaut & I bet about fifty of them would draw an astronaut riding a horse.

author

Agreed, this absolutely happens all the time. It's actually shocking how humans will make the same mistakes as AIs given some prompt, mistakes which are then held up as being somehow deeply revealing about AI psychology.


Totally agree.

In fact, there's a relatively popular theory in psycholinguistics called "good-enough processing", which proposes that people often form only relatively shallow representations of the linguistic input they receive. Personally I think a lot of language comprehension is association-based and "fast and frugal". This leads to so-called mistakes like:

Experimenter: "How many animals of each kind did Moses bring onto the ark?"

Many people: 2

Sep 24, 2022 · Liked by Erik Hoel

I literally did this in my mind when I saw the prompt!

My mind interpreted the prompt as "astronaut riding a horse."


Yep. You likely literally saw it that way. Top-down processing: https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

> The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.

> The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.

> The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?

> As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

> First, the sense data and predictions may more-or-less match. In this case, the layer stays quiet, indicating “all is well”, and the higher layers never even hear about it. The higher levels just keep predicting whatever they were predicting before.

> Second, low-precision sense data might contradict high-precision predictions. The Bayesian math will conclude that the predictions are still probably right, but the sense data are wrong. The lower levels will “cook the books” – rewrite the sense data to make it look as predicted – and then continue to be quiet and signal that all is well. The higher levels continue to stick to their predictions. (...)
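
To make the "Bayes' Theorem" step in that excerpt concrete, here's a toy sketch in Python. It assumes both the prediction and the sense data are Gaussian (with "precision" meaning 1/variance), which is my simplification for illustration, not something the review specifies:

```python
# Toy illustration of one layer integrating a top-down prediction with
# bottom-up sense data, assuming both are Gaussian ("precision" = 1/variance).

def integrate(prediction, pred_precision, sense, sense_precision):
    """Precision-weighted average: the Bayesian posterior mean for two Gaussians."""
    total = pred_precision + sense_precision
    posterior_mean = (pred_precision * prediction + sense_precision * sense) / total
    return posterior_mean, total

# High-precision prediction vs. low-precision sense data:
# the result stays close to the prediction ("cook the books").
print(integrate(prediction=1.0, pred_precision=10.0, sense=0.0, sense_precision=0.5))
# -> (~0.95, 10.5)

# Low-precision prediction vs. high-precision sense data:
# the surprising input wins and gets passed up as prediction error.
print(integrate(prediction=1.0, pred_precision=0.5, sense=0.0, sense_precision=10.0))
# -> (~0.05, 10.5)
```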


I, for one, would ask for confirmation if I had read the prompt correctly. There's a good chance I'd "autocorrect it" to the normal version without even noticing.

Sep 21, 2022 · Liked by Erik Hoel

Interesting post. I have to say I had been very sympathetic to Marcus but this post makes me less so.

Something that leaves me somewhat sympathetic to Marcus' rhetoric calling these "parlor tricks" and such is the fact that these models, while superficially impressive, have yet to yield much value in real-world applications. I realize for PaLM and DALL-E 2 it's still super early, but we've had GPT-3 for years and as far as I know there have been no major applications of it beyond "hey, look how novel it is that an AI wrote part of this article". Whenever I talk to people who program/use AI in commercial applications, they are much more cynical about its capacities.

Given that, it really does seem like something is missing, and maybe we’re prone to overestimating these models. While many of the specific critiques made by Marcus were wrong, I think he’s getting at that more general intuition.

As a side point, I'm also more skeptical than Erik is about how much progress we'll get from further scaling, given that we may be running out of the kind of data these models use to train: https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
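
For a rough sense of the arithmetic behind that worry, here's a back-of-the-envelope sketch using the Chinchilla paper's approximate rule of thumb of roughly 20 training tokens per parameter; both that coefficient and the size of the usable text stock are rough assumptions, not exact figures:

```python
# Back-of-the-envelope sketch of why training data could become the bottleneck.
# Assumes the approximate Chinchilla heuristic of ~20 tokens per parameter.

TOKENS_PER_PARAM = 20  # rough compute-optimal ratio from Hoffmann et al. 2022

def tokens_needed(n_params: float) -> float:
    return TOKENS_PER_PARAM * n_params

for n_params in (70e9, 500e9, 10e12):
    print(f"{n_params:.0e} params -> ~{tokens_needed(n_params):.1e} tokens")

# 7e+10 params -> ~1.4e+12 tokens
# 5e+11 params -> ~1.0e+13 tokens
# 1e+13 params -> ~2.0e+14 tokens
# Estimates of high-quality public text are often quoted in the low trillions
# of tokens, which is where the "running out of data" concern comes from.
```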

author

Great link - I had a whole section on scaling and data where I cited that exact post, but I ended up cutting it because it became too long of a digression. I, for one, am praying that data is indeed limiting and that deep learning can therefore only get so good.


Ajeya Cotra goes out on a limb to suggest that "human feedback on diverse tasks" is a path to AGI. https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to

If she's right, then data (it's plural) are not limiting. And, as Yudkowsky and others have already asserted, our goose would be cooked.

I've even said that there might be data we haven't tapped. Like if you wanted to teach an AI persuasion, then use weird stuff like legal arguments, fan debates, FAQs, opinion polls,


and (someone else suggested, I think) have AIs debate each other.


Re: commercial applications —

I’m a professional software engineer and I’ve become addicted to GitHub’s Copilot.

If I’m doing anything at all repetitive it’s a godsend and saves massive amounts of time. All boilerplate is done instantly, which helps keep my brain focused on more important tasks.

But even beyond boilerplate and simple pattern matching, it generates useful suggestions regularly. In a test suite or around error handling it will often bring up cases I would have eventually gotten to, but it generates them instantaneously.

Sometimes I get caught not properly reading a suggestion that looks right, and a bug slips in that I never would have written myself. But I've never had one of those survive more than a few minutes.

And sometimes it just seems absolutely magical, and produces wonderful working code exactly how I would have done it, but again instantaneously without my having to do much more than wish it onto the screen.

I’ve not in 20 years had such moments of spontaneous laugh out loud surprise and joy in writing code.

It’s not perfect. It’s sometimes dumb. But it’s fun and useful and sometimes amazing and it’s available to me at $10/month. 🤯


A real world application would be deepfaking, but we don't want it to be.


Deepfaking has more than just malicious uses; for example, it can be applied in cinema and other entertainment applications.


> given that we may be running out of the kind of data these models use to train:

I really doubt it. The training dataset doesn't seem to be anywhere remotely near "all of the text on the internet". GPT didn't even read things like Reddit comments.

Sep 21, 2022 · Liked by Erik Hoel

What conclusions would an alien Gary Marcus come to about human cognition if all they knew of it was that it produced all these after the prompt "draw a bicycle"?

https://www.washingtonpost.com/news/wonk/wp/2016/04/18/the-horrible-hilarious-pictures-you-get-when-you-ask-random-people-to-draw-a-bicycle/


From the very first paragraph, I thought, “Is this Erik Hoel, or a GPT-3 bot that was prompted by Erik Hoel to write Hoel-esque thoughts?”

I am expecting, one day soon, an AI preacher, and soon after that perhaps, an AI “Jesus” (liberal and conservative versions, of course). In the latter case, the AI will presumably ingest the Bible, particularly the red-letter sections, supplemented by associated religious writings across almost two millennia, to create a convincing semblance. Miracles to follow. The future will be strange and disturbing.

Perhaps, rather than more Turing tests, which investigate whether a person can distinguish AI from another person, we need a test to see whether a person can fall in love with AI (full well knowing it is AI), and mourn when the AI dies (again knowing it is AI), with the same poignancy and pain as they would for a real person. I expect that in many cases, the answer will be yes. And that will tell us something frightening, not about AI, but about us.

Perhaps what we need to be training is not AI, but human minds, to think comprehensively, and not just emotionally.


Good post!

I find myself somewhat conflicted on the issue. On the one hand, I'm sympathetic to the skeptical stance in general. I think the "Clever Hans" problem is a real one––in fact, there were well-documented cases of previous LLMs like BERT scoring >90% on a natural language inference task, only for researchers to find that the task's items basically baked in the answer via superficial linguistic cues.

On the other hand, it's undeniable that these models have made incredible progress, and I'm not as sympathetic to what I see as widespread a priori assumptions that models like GPT-3 simply *can't* understand language (and therefore any demonstration of "understanding" is a kind of reductio ad absurdum of the task itself). In general it feels like there's a lack of rigorous philosophical work aimed at discovery as opposed to position-taking.

With that said, I'll put in a shameless plug for a pre-print I recently released: https://arxiv.org/abs/2209.01515

In it, my co-authors and I compared GPT-3's performance on a False Belief task to humans. The central question is where Theory of Mind comes from *in humans*, and the paper is written and framed that way (i.e., to what extent does language alone explain ToM). But beneath that result you'll find that––even though GPT-3 is not quite at human level––it's performing reasonably above chance. The study and its analyses were entirely pre-registered, and the stimuli were carefully controlled to try to omit potential confounds. (Impossible to know for sure, but there are very few differences across conditions other than the key experimental manipulation). Does this mean that GPT-3 has the beginnings of something like ToM? We don't take a strong stance on it in the paper, but I think more and more that we need to start asking ourselves hard questions like that––and about the link between experimental instruments and the constructs they're designed to test.

Anyway, I thought it was potentially relevant to your post and might be of interest to you!


A clever idea for a study. "Reasonably above chance" is a little iffy, so I don't know whether to think GPT-3 absorbed some level of ToM in its training. I've long thought that (1) we have ToM because we have a self-model, and a self-model because we have ToM (the two develop by bouncing off each other), and (2) an AI could be taught to have ToM, but it would probably need a self-model too. And if it did, we'd have to wonder whether it also has phenomenal experience.


Thanks!

I should've been clearer in my comment––by "reasonably above chance" I mean that GPT-3 responds significantly differently depending on whether a character in a story knows or doesn't know where an object is located. Specifically, GPT-3 predicts that they'll look for the object in a different location. If these probabilities are converted to discrete answers, GPT-3 gets ~62% of the questions right, compared to 80% for humans. (Notably, humans were also not at ceiling, and in fact, we excluded some participants––according to pre-registered exclusion criteria––who ended up having considerably lower performance.)

My point being that at least behaviorally, it's not clear that the difference between GPT-3 and human performance is one of kind or one of degree. And if it's the latter, then it seems conceivable that a bigger, more powerful model might end up achieving human-level performance––at which point we'll need to ask ourselves what this *means* with respect to whether GPT-3 has ToM.

(Also, I agree that conceptually, ToM seems very bound up with our notion of selfhood.)


Cool. I gather that you didn’t ask GPT-3 to explain its reasoning. My experience is that it might not be good at that. But maybe newer LLMs might be. I hope you get to try them. I’m disappointed in the humans. Premack and Woodruff’s chimp in 1978 gave the right answer on a similar but more complex task in 0.875 of trials. A chimp!


Not in this analysis, though it's something we're looking into for a future study. In this first pass we wanted as simple a measure as possible, so we looked at "surprisal", which reflects how unexpected an upcoming word is to GPT-3 (the negative log of the probability it assigns to that word given the preceding context).
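
If it helps, here's a rough sketch of what computing surprisal looks like, using GPT-2 via Hugging Face as a stand-in for GPT-3 and an invented Sally-Anne-style passage rather than our actual stimuli:

```python
# Rough sketch: total surprisal (negative log-probability, in nats) of a
# continuation given a context, using GPT-2 as a stand-in for GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, continuation: str) -> float:
    ctx_ids = tokenizer.encode(context)
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # Logits at position i predict the token at position i + 1.
    return -sum(
        log_probs[0, len(ctx_ids) + i - 1, tok].item()
        for i, tok in enumerate(cont_ids)
    )

story = ("Sally puts her ball in the basket and leaves the room. "
         "Anne moves the ball into the box. "
         "Sally comes back and looks for the ball in the")
print(surprisal(story, " basket"))  # lower surprisal = more expected by the model
print(surprisal(story, " box"))
```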

I was surprised by the low performance by humans too! And importantly that's not because they didn't read the passage––the 80% was among humans who correctly responded to other "probe" questions about the content in the passage.


Whenever I read these debates, I am always left wondering exactly what the claim is, and exactly what the criticism of the claim is.

AIs of this sort are pastiche generators. That is what they are. They respond to prompts. They are, in many ways, remarkable pastiche generators, if often quirky. But what is the claim? Does it go beyond saying, wow, look at what a great pastiche generator we built? Does the critique go beyond saying, nah, your pastiche generator ain't so great?

I ask this because if there is one thing that the world is not short of, it's artistic and literary pastiche. The stuff is literally being given away in vast quantities. The world is not suffering from a pastiche shortage. Why bother to automate its production?

Is it to prove a point about something else? Is this really all about proving that human beings are just automatons by building an automaton that is indistinguishable from a human being? Is the claim that humans are just robots, because soon we will be able to build a robot that is just as human as a human? (Not exactly a novel concept in sci-fi.) And if so, is the critique that the robots are not really very lifelike at all (because, implicitly, human beings are not automatons, but are possessed of a divine spark)?

If that is the critique, then I'm not sure that it is all that different from your own conclusion that AI is not human-like and thus is not a philosophical challenge to human status, and therefore not a denial of the divine spark. Except that you would seem to be saying that that was never the claim in the first place and therefore it was never necessary to refute that claim.

In other words is this really all about saying that the question of whether Pinocchio is a real boy is irrelevant because all anybody claimed to be doing was building a really cool marionette?


I think you’re hitting on some important undercurrents. In a world where belief in objective meaning has largely collapsed (“God is dead”), there is a lingering urge to plug up the nihilistic hole to keep the ship from sinking. When AI finally transitions to “AGI”, i.e., is indistinguishable from humans across a broad range of complex tasks, it will be, for many, a seeming vindication of the belief that we humans are, after all, just sophisticated machines. I’m not saying this is Erik’s view, but it will probably be the popular default interpretation, the big headline, the take-home message. And this view, I think, will merely put an axe to the hole, and the ship will keep sinking, because something essential will have been diminished in our own perception of ourselves.


Fascinating. Appreciate the context on this debate, and on all the conflations and the goalposts!


This is excellent! It also articulates a few of the points I was writing about for next week, on the Wittgensteinian language games within AI discourse, and much better than I have yet.

The idea that we can circumscribe the realities of the world, or indeed experience, to a few fuzzy categories that are neither well understood nor adequately explored, makes most of the conversations regarding this topic a fight about definitions.

Oct 3, 2022 · Liked by Erik Hoel

> Humans think about what they’re going to write or paint, and it takes them a long time,

> and they really try to get it correct on the first go. Throw all that out the window for AIs.

This, I think, is a really important point. When a human creates something, we plan ahead, write, revise, and iterate. The latest diffusion models actually do that, to some extent, with images, although "denoising" is different from revising. However, large language models generate their answers left-to-right in one shot. For a task like programming, this is an impossibly high bar. No human could write correct code of any length like that; a single missed semicolon would lead to instant failure.

The important thing to realize is that the inability to plan and revise is not a fundamental failure of deep learning; it is a flaw in the way that our models are currently trained. Over the next few years, I fully expect that changes to architecture and/or training will fix that flaw. And when these models are able to iterate and fix their mistakes, it will be much easier to see what they are really capable of.

(Note that actor/critic RL models and GANs already have a "critic" that judges the likelihood (quality) of their own outputs, and LLMs return likelihoods as well. AlphaGo has demonstrated how to use Monte Carlo tree search to do planning, and training tasks like infilling and masked language modeling show how to repair broken inputs. Thus, all of the pieces would seem to be in place for sophisticated architectures based on planning and revising.)
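
As a toy illustration of the "repair broken inputs" piece, here's a sketch using a masked language model through the Hugging Face fill-mask pipeline; the planning and critic pieces would of course need far more machinery than this:

```python
# Toy sketch of the "repair broken inputs" idea: a masked language model
# proposes infills for a damaged span via the Hugging Face fill-mask pipeline.
# This only illustrates the infilling piece, not planning or self-critique.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

broken = "The function returns the [MASK] of the two numbers."
for candidate in fill(broken, top_k=3):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")

# A critic (or the model's own likelihood) could then rank these proposals
# before accepting a revision, which is the planning-and-revising loop above.
```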

Sep 26, 2022 · Liked by Erik Hoel

I am of two minds on this. On the one hand, we have made stunning strides in this last decade... downplaying this is very clearly moving goalposts.

Still, anyone who thinks that more of the same will yield AGI is also mistaken, I think. Present DL systems are roughly like human unconscious or "instant" thinking. It is very stunning how (like humans) these algorithms seem to pull many kinds of information into intelligent-looking outputs.

It's not a parlor trick... it is (very roughly) like human subconscious thought.

Still, it lacks the subsequent deliberative thinking that humans can apply to the raw materials they are provided by their subconscious.

But ten years ago we did not have the human-like generativity that we have now. Not even close. Who is to say that in ten more years we won't add a new class of deliberation on top of the subconscious outputs of DL today? It seems more than plausible to me; it seems likely. And THEN these systems will be critiquing their own thinking, and the comparison to humans will be far, far more even.

The idea that we are not making vast strides is just ridiculous, even if we are not yet there.


As a recent entrant to these conversations I really have to sift through a lot of different AI views. I do find that in his essays Gary Marcus often talks about the lack of regulation around AI technology, which stems simply from a lack of knowledge. I think that question probably needs more insight than the focus on replicating human intelligence does.

Sep 21, 2022 · Liked by Erik Hoel

What about radiologists being replaced by AI? We were told that was imminent 10 years ago.

author

I certainly agree - there have been plenty of overclaims about AI, particularly around whether it would take people's jobs. But in general, if you look at what people (such as Marcus) said about the limitations of deep learning, they were wrong. Even just a few years ago Judea Pearl was claiming that it was impossible for AIs to understand causality - absolutely incorrect: the new models can answer all sorts of questions about causation. So it seems pretty consistent to maintain that (a) the goalposts have moved almost entirely in one direction, and (b) we don't know what the effects of this cognitive automation will be.

Sep 22, 2022 · edited Sep 22, 2022 · Liked by Erik Hoel

As a practical matter it seems unlikely that AI would *not* start performing enormous chunks of various forms of (I'll just call it) white collar work (admin, management, analysis, let's just say broadly??) and I would think it's a wildly open question as to if/how education and the job market and society as a whole will adapt.

Cognitive automation raises the immediate analogy to the impact of "regular" automation, the challenges of which are obviously ongoing.

Could we handle the condition in which the only available human jobs necessitate out-competing the latest AI models (via an ever narrowing ability, by comparison, to "think creatively", I guess?)? Or, doing a similar job as the AI but for cheaper than whatever the latest software update costs?


Brilliant Erik. I’m not a partisan in this debate, but your approach is very useful.


This is a nicely done multi-pronged analysis, and the replies cover a lot of ground (my phone just put in the word "ground" on its own), so just a few thoughts...

Re intellectual capability vs human brains, maybe consider an industrial machine... there is no human that can recall or calculate as well as a computer, but big deal, because no human can drill an oil well on their own. It's a big dumb task that we solved by imagining and creating the equipment. I think AI achievements mimic this process.

Maybe a new way to conceive of human-like intellectual behaviour is twigging. It's a mini eureka with utterly minimal input. Hey, I get it!

On self-driving cars...

Humans have the advantage of being trained over millions of years. But AI has the advantage of being trained on millions of TB of data and by maybe millions of scientists. Right now it does not seem that AI will ever drive a car at L5, i.e. all conditions including winter and city and off-road and emergencies, like I could do when I was just a kid. Yes, AI can be astonishing, but compared to the little training that people can leverage into high-level capability, I'm very unsure how AI stacks up.


Well, great. Thanks for dispelling my wishful thinking that if I just ignored the obvious threats from AI, it wouldn’t become a problem.

Also, incredible flourish at the conclusion... corporations always exploring new ways to generate alien minds...
