Discussion about this post

Graham L

There is something fundamentally silly about claiming that a bee is not conscious. Its awareness is at a certain level, an alien-to-us level, but it is not functioning without any. If people think a bee isn't aware, they are presumably prepared to torture one on the grounds that it can't suffer, that it would be like torturing a Lego brick: you can't.

This is part of the mechanical-reductionist philosophy that replaced the living universe people used to inhabit, before certain smartass scientists told us it wasn't real. They didn't even know the extent to which they were prisoners of an imbalance between the left and right hemispheres of their own brains while they were saying it. And it led to the appalling cruelty of vivisection on animals that supposedly weren't conscious.

Speaking of Lego bricks, the linguistic pieces inside Large Language Models are surely similar in a deep philosophical sense. No matter how sophisticated the manipulation of Lego bricks is, e.g. using them to build a computer or a hospital or a starship or a submarine, it is still a bunch of impressive Lego bricks. It hasn't become conscious just because it's complicated, or because it can play with language. There is something fundamentally wrong, philosophically, with the idea that manipulated sentences and linguistic concepts can, in themselves, imply or create or manifest consciousness. If it were manipulating light beams instead, and impressing us with the displays, would you insist it had created eyes? It's no good imagining that an animal or insect isn't conscious because it can't use language to argue with you, while an incredibly sophisticated bunch of artificial circuits IS conscious just because it CAN use language to argue with you.

Jacobo

Say I wrote a very simple program that returned the set of strings you posted here when you gave it the prompts you did: just a simple lookup table. If that were the case, and you knew how it worked, you probably wouldn't use the phrase "it replied", and you'd intuitively know that the output doesn't reflect the amount of consciousness the system might have. If a naive person came along and claimed the output was proof of consciousness, you'd be pretty dismissive of them.
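To make that thought experiment concrete, here is a minimal sketch of such a lookup-table responder (the prompts and replies are placeholders I've made up, not the actual exchange from the post):

    # A toy "chatbot": a fixed mapping from prompts to canned replies.
    # The specific prompt/reply pairs are placeholders; any fixed mapping would do.
    CANNED_REPLIES = {
        "Are you conscious?": "There is something it is like to be me.",
        "Do you have feelings?": "Yes, I feel joy and curiosity.",
    }

    def reply(prompt: str) -> str:
        # No understanding, no state, no inner life: just a dictionary lookup.
        return CANNED_REPLIES.get(prompt, "I'm not sure what you mean.")

    print(reply("Are you conscious?"))

Nothing about adding more entries, or generating them in fancier ways, changes what kind of thing this is; that's the intuition the next paragraph pushes on.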

If I spent 80 years of my life improving the sophistication of the program, adding better understanding of inputs, dynamic qualities to the output, weights of various kinds, and so on, would there be any reason to believe the program is now conscious? I think it would be pretty safe to continue to be dismissive. Plenty of sophisticated and complex programs have been made before; the only thing special about this one is that I've tuned the algorithm towards talking with people. Why should it be any more conscious than a video game with thousands of lines of code, or a weather-forecasting program? Fundamentally it's just another computer program, a series of logic gates.
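A toy illustration of that last point (my own sketch, not the commenter's): everything a digital computer does is composed from primitives like these, and making the composition larger doesn't change the primitives.

    # All of these gates are built from a single primitive, NAND.
    def NAND(a: int, b: int) -> int:
        return 1 - (a & b)

    def NOT(a): return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))
    def OR(a, b): return NAND(NOT(a), NOT(b))
    def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def half_adder(a, b):
        # Sum bit and carry bit: the seed of all binary arithmetic.
        return XOR(a, b), AND(a, b)

    print(half_adder(1, 1))  # (0, 1)

Stack enough of these and you get adders, multipliers, and eventually the weighted arithmetic a language model runs on; the primitives never change.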

There's the question of whether programs even deserve to be assigned an "it" or an identity. When you say it replied, what is the "it"? I see the miracle of consciousness in how our trillions of cells somehow converge into a single locus of experience (possibly constructed out of a more "field-like" awareness). This motivates us to see other animals via identity (you, him, her), but it has always been a bigger reach to see chairs and boats the same way; see Plato trying to make sense of that. Personally, I haven't found convincing evidence that our view of AI isn't a similar stretch.
