Episode Transcript
[00:00:00] Speaker A: We will go ahead and get started. I'm Scott Schiff with the Atlas Society. We're very pleased to have Atlas Society senior fellow Robert Tracinski with us today, discussing the topic "The One Big Thing You're Getting Wrong About Consciousness." After Rob's opening comments, we'll take questions from you, so feel free to click "request to speak" if you have a question, and we'll get to as many of you as possible.
Rob, thanks for doing the topic. What is the big thing we're getting wrong about?
[00:00:33] Speaker B: Hello, everyone. Well, if I were being a little less clickbaity, I would have put it as "the one big thing you're probably getting wrong about consciousness," because maybe you're not getting this wrong — I don't want to cast aspersions. But this is a commonly held wrong or inaccurate view of consciousness, one that influences the way a lot of people think about a lot of things, up to and including how AI works, which I know is the hot-button issue of the day. It's funny: for those of us who have backgrounds in philosophy, there's all this stuff about the nature of consciousness and what they call philosophy of mind.
And this is something we're used to thinking about.
If you were studying this decades ago, this is something that seemed really arcane and very abstract, and nobody really cared about it, and it was this very obscure kind of field. And now, of course, with the growth of AI, this whole question of, well, what is consciousness? What is a conscious being? How do you tell? How does it work? Suddenly, that's becoming this vital and important issue.
All right, so the material I'm doing here, by the way, is from a series of lectures I gave last year. That was sort of an attempt to present Ayn Rand's philosophy using one central idea that runs through all aspects of her philosophy. And so it was called "The Prophet of Causation," because the idea is that Ayn Rand's view of causation, the law of cause and effect, is the central thing that runs through her philosophy. It's a great way of explaining all the different aspects of her philosophy, including the nature of consciousness. So that's going to come up here. This is developed out of that material: I did the course last year and developed the material for it, and this year and into next year I'm going to be working on taking the notes and material from that course and turning it into a book. So this is part of that process of going over this again and developing it into a more formal form.
But this is one that I really wanted to talk about, because the more I got into it, the more I realized this wrong view of consciousness is everywhere. So I'm going to give an example of it. Does anybody remember Laurel versus Yanny, or the blue-and-black dress versus the white-and-gold dress?
Sure.
Six or seven years ago, the first one of these came out. There's another one that came out after that.
And this is the one where somebody posted a photo online of a dress in a shop window. And half the people who saw it — or at least a large number of the people who saw it — said, oh, that's a blue and black dress. And a bunch of other people said, no, it's white and gold. And it was this incredible sort of optical illusion kind of thing, where your eye could interpret it one way or the other, depending on your background, depending on how you felt at the moment. It was this ambiguous image that could be interpreted two different ways, and people could not believe that anybody could see it differently than they were seeing it. And there was another one that came along a couple of years later. This is Laurel versus Yanny. This was one audio clip interpreted two different ways. Right. Some people, when they heard it, heard the word Laurel, as in laurel wreath, and other people heard it saying Yanny. And those are completely different things — how could you possibly hear the clip differently? And there's a whole explanation for it about the higher tones being different from the lower tones. Younger people tended to hear it as Yanny because they were hearing the higher tones, and people like me heard it as Laurel because, as we get older, our ability to hear the high tones diminishes. So we heard the lower tones and we heard Laurel. And so it was, again, one of these popular sort of perceptual illusions. And it raised all these questions about how accurate our consciousness is. How can we rely on our consciousness if two people can see the same image and see two different things, or hear the same sound and hear two different things?
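To make that high-tones/low-tones explanation concrete, here is a minimal sketch in Python of the general idea — the filename, the band choices, and the exact cutoff frequencies are illustrative assumptions, not an analysis of the actual clip. Filtering the same recording toward its lower or higher frequencies approximates the two different "ears" that listeners brought to it:

```python
# Illustrative sketch: split one ambiguous clip into a low band and a high
# band. Hearing mostly the low band biases you toward "Laurel"; mostly the
# high band, toward "Yanny". Filename and cutoffs are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("laurel_yanny.wav")   # hypothetical mono clip
audio = audio.astype(np.float64)

def bandpass(signal, low_hz, high_hz, sample_rate):
    """Apply a 4th-order Butterworth band-pass filter."""
    sos = butter(4, [low_hz, high_hz], btype="band",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, signal)

low_band = bandpass(audio, 100, 1000, rate)    # lower tones -> "Laurel"-ish
high_band = bandpass(audio, 1000, 8000, rate)  # higher tones -> "Yanny"-ish

wavfile.write("laurel_ish.wav", rate, low_band.astype(np.int16))
wavfile.write("yanny_ish.wav", rate, high_band.astype(np.int16))
```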
And so the philosophical hot takes came thick and fast from this. Here is an example of a widely accepted, widely discussed view of these illusions. This is an article in Wired magazine that was written, I think, in response to the Laurel versus Yanny one. And I'll just read the quote: "There is a world that exists — an uncountable number of differently flavored quarks bouncing up against each other." This guy's really into the subatomic stuff. "There is a world that we perceive, a hallucination generated by about a pound and a half of electrified meat encased by our skulls. Connecting the two, or conveying accurately our own personal hallucination to someone else, is the central problem of being human."
Everyone's brain makes a little world out of sensory input, and everyone's world is just a little bit different.
All right, so this is the end quote. So this is the common view that there's a world out there and we perceive it. And then we make a little world out of that sensory input, a little world that's inside our heads. It's a hallucination, or it's a separate little world inside our heads. And that's the thing that we are looking at and we're perceiving and that, we think, is reality.
And the whole problem of being human, as he puts it here — or the whole problem of philosophy — is how do you connect this little world inside your head to the real world out there? And "the real world out there" is the phrase that I love to hate. It's like all the problems in contemporary philosophy are summed up in that concept of the real world out there. It's the "out there" part that bugs me. All right, so this sums up the wrong view of consciousness that I want to discuss, which I think is extremely widespread. And it actually has a name in philosophy. There's an actual name for this viewpoint, and that name is representationalism. Representationalism is the idea that when you perceive the world, you don't perceive it directly. You perceive an image or reconstruction of the world inside your mind. That's why it's called representationalism: the world is literally represented inside your mind. Now, I know David Kelley is listening here today, and he was the first person to turn me on to this issue. He wrote a book called The Evidence of the Senses — what, late 1980s, I think it was — a really illuminating book talking about these theories of the senses, theories of perception, theories of the mind. He's the first one who turned me on to this issue of how important representationalism is. And more recently, as I've been digging into this, I've realized it's even more important than I realized when I first read that book.
He defines representationalism as the idea, quote, "that we directly perceive not physical objects themselves, but certain mental entities — images, ideas, sense data — which represent them."
And he uses an analogy. He says the senses, in this view, are like television cameras bringing news of the outer world, but subject to all the distortions that medium is prey to.
Unquote. Now, you can see that there's a certain kind of — how would I put it — a certain common-sense aspect to this theory. It makes a certain amount of sense to people: look, I can look at things in my mind's eye. If I were to call up a memory of something, I would actually have a certain kind of image appear in my head of something that I'd seen before, or a sound appear in my head of something that I'd heard before. So there's this idea that I'm perceiving something that's in my head, in my brain, and not something that's out there in the world. And there's some interesting physiology behind this: apparently, when you recall a memory, the areas of the brain that have to do with sensation and with perception of the world are stimulated again, as if the original experience of having perceived the thing is being recreated and restimulated in your brain. So you can see the common-sense aspect to this view of consciousness, which I think is why it's so widespread. Now, David referred to it as being like you have a television screen inside your head: the senses are television cameras, and they're projecting something onto a television screen inside your head. I like to call this the "theater of the mind" approach to consciousness. This is the idea that inside your mind or brain, there's a little theater in which the objects in the outside world are recreated. And then you — your conscious mind or self — you are observing that. But you're not observing the world. You're observing the theater inside your head, the theater of the mind.
It sounds like a Twilight Zone kind of thing, right? Theater of the mind.
And I worked up — and this is going to be in my book — a sort of crude graphic of this. This is an audio-only medium, so I'll have to describe it to you, but it's a simple graphic. Imagine the outline of a person's head, with the eyes and the other senses — we're just going to reduce all the senses to the eyes here. Sensations are coming in from the world into the eyes, and then what happens is that this gets projected onto a little screen inside the head.
And that's what you're actually perceiving.
But here's the problem. If you think of it that way, if you actually make that little diagram: who is it who's perceiving the little screen inside your head? Right? If you were to take this seriously — and people do take this seriously — there'd have to be another little man inside your head who's then looking at that screen, right? So another little outline of a head, another little man inside your head representing your consciousness or your conscious mind, that's then looking at this screen that's presenting the stuff from the outside world. Now, I think you're starting to see the problem with that, if you have to have another little man inside your head looking at that screen. But first of all, I want to talk about this idea of the theater of the mind, the screen inside your head — how pervasive that is. It's not just one guy writing about Laurel versus Yanny; there were actually several other articles at the time that took the same approach. But you see it all around us. For example, the Matrix films. The Matrix films are one of the things that shape what people think about AI — what people know about AI is largely shaped by watching Hollywood movies. James Cameron has a lot to answer for.
So do the Wachowskis, because people get it from The Matrix too. It gives them their idea of what AI would be like: this evil thing that keeps us as brains in a vat. And this idea of the Matrix, that we're all being deceived — it works, it makes sense, because if you think of us all as little men inside our heads looking at a little screen inside our heads, well, how do we know whether the images on that little screen literally represent the real things in the outside world? It could all be an illusion. But the effects go beyond that.
One of the things I talk about in discussing causality in the course, and in the upcoming book, is that if you look at David Hume's arguments for how cause and effect itself is uncertain, what you'll notice is that he implicitly has this little-man-looking-at-a-screen-inside-your-head viewpoint. He never explicitly says, this is how I view consciousness, but it's implicit in everything he writes. Because he basically has this idea that when you look out at the world, all you get is a series of images, like a series of film frames — 16 frames a second or 25 frames a second, or whatever the rate is. You have these film frames going along, and so he says, well, look, all I know is that you have one frame of images that's followed by another frame, that's followed by another frame. And I know the images change from one frame to another. But how do I know, if one billiard ball rolls and hits another billiard ball, and that other billiard ball moves — how do I know that one thing caused the other? All I know is I have a succession of images inside my head. So you see how all sorts of other aspects of philosophy, and all sorts of other aspects of our view of the nature of the world itself, flow from this idea of being a little man inside your head looking at a screen.
So your awareness of the world is just a series of film frames, of images being projected one by one onto that screen. And the little man inside there observing them invents this subjective idea of cause and effect in an attempt to connect the film frames together and turn them into a coherent narrative. But who knows what's really going on, right? And I think Immanuel Kant philosophically took this to its final conclusion with the idea that there's a phenomenal world — and all of 19th-century German philosophy is based on this — a world of phenomena. And that phenomenal world is basically all the stuff going on on the little screen inside your head. But there's a noumenal world, a world we can abstractly project must be out there somewhere — the real world out there — but which we have no contact with, because all we can see is this little world projected on the screen, the phenomenal world.
So I want to go back to the quote I began with, about the Laurel versus Yanny thing, where he says, "everyone's brain makes a little world out of sensory input, and everyone's world is just a little bit different." So this is the idea that there's a little world — the theater of the mind, the image being projected onto a screen inside your head. There's the sensory input, but the screen you're seeing inside your head is what you're really perceiving. And that's representationalism.
Now, I think one of the things you can see immediately is that there's a problem with this representationalist viewpoint. Because, as I said, if we have a little man inside your head who's looking at the screen, well, how does the little man perceive what's on the screen?
Wouldn't he have to perceive it through some kind of sense organs? Wouldn't he have to then have those sense organs bringing in sense data from viewing the screen and then constructing another little world out of that sense data? Right?
If we go to the image, we have the head with the eyes looking out at the world, and then inside that we have the screen, and we have another little man inside looking at the screen. Well, inside of that little man's head there has to be another screen, with another little man looking at it, and so on and on. It's the problem of infinite regress.
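Just to make the shape of that regress vivid, here is a playful sketch in Python — the class names are invented purely for illustration. Because every "perceiver" in this model can only ever look at an inner screen, there is no base case, and the program recurses until it crashes, which is exactly the philosophical point:

```python
# A toy model of the "theater of the mind": every perceiver only sees an
# inner screen, so each screen needs its own inner perceiver. There is no
# base case where perception bottoms out in actual awareness.

class Screen:
    """The inner 'representation' the little man supposedly looks at."""
    def __init__(self, image):
        self.image = image

class Homunculus:
    """The little man inside the head."""
    def perceive(self, screen):
        # He can't see this screen "directly" either, so he re-projects it
        # onto his own inner screen and hands it to another little man...
        inner_screen = Screen(screen.image)
        return Homunculus().perceive(inner_screen)

# Perceiving never terminates: this raises RecursionError — the regress.
Homunculus().perceive(Screen("a billiard ball"))
```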
And I think one thing I got from David Kelley about this is that the basic assumption is that consciousness is supposed to be non-causal. It's supposed to operate as a sort of pure intuition of an object, without any intermediary sensory means.
And so what the representationalist is implicitly relying on is the idea that, look, the things out in the world create sensory perceptions that are then projected onto this little screen. And then the little man inside your head sees what's on the screen, but he doesn't see it through any means or mechanism — he just intuits it directly and immediately.
But of course, if you've already accepted the idea that we can't perceive the world directly — that we have to perceive it indirectly by looking at this little screen in our heads — why would the little man inside your head be any different? To take it logically and consistently, there's this assumption that causality applies in one area but not in the other: that there's a cause-and-effect chain producing images coming through your senses in one area, but there's no cause and effect once it gets inside the head.
But if you took that seriously as a model of consciousness, you'd actually end up with this infinite regress of the little man inside the head, with the little man inside his head, with the little man inside his head. And to show how seriously this is taken, I came across a fascinating passage from Karl Popper, who actually does take it seriously. Unfortunately — and we can have a whole discussion about Karl Popper in the comments and the question period, when we stop to do the back and forth — he's widely regarded and widely liked by a lot of people who are interested in science and interested in defending science and the validity of science. And yet he has this representationalist idea very, very strongly running through his whole philosophy. He tends to have this idea that you're not perceiving the world; you are in here making models of the world and comparing them to sensory data.
And so it's this idea that you are making a mental model of how the world works and comparing it to the data that's ticking off on the little screen, the little representational screen inside your head.
And he actually takes this so seriously that he has a theory of three worlds. He says there are three worlds. The first world is the real world — the physical world, the world external to you. But then you perceive that.
And by perceiving that, you create the second world. And the second world is a series of images and perceptions. It's the world of perception. It's the world on that little screen inside your head.
So what we actually interact with when we're thinking is the second world, the world of perceptions that are represented — recreated in the theater of the mind inside our heads. Then, he says, there's a third world. And the third world is a series of abstractions that you create that are meant to be models of how the second world works. So you get the second world, this screen; you take the data from that second world and you try to create an abstract model, a series of equations or laws or principles; and then you try to compare that to the second world. And that third world is this world of abstractions.
And this is literally that infinite regress I was talking about. You have the first world, the world out there; you have it projected on the screen; you have a little guy observing it; and you have a little screen inside his head. And he's actually taking that seriously, to give you the three worlds. It's kind of a brain-busting idea, so let me use some images. The real world: let's say billiard balls, because David Hume liked to use billiard balls for causality. The first world, the world of physical objects outside of you — that would be the actual billiard ball. The second world would be an image of a billiard ball. Think of a digital image of a billiard ball projected on a screen: it's not the billiard ball, it's a 2D representation of the billiard ball. And then the third world, in Popper's view, would be something more like an abstract diagram of a sphere — a geometric diagram of a sphere, or a series of equations, the equation for a sphere, mathematically. That would be the third world of abstractions, which is supposed to be trying to explain the second world. So that's the general idea, this sort of infinite regress: you have the real world, you have this other world inside your head of perceptual images, and then, if you follow Popper, a third world of abstractions that's supposed to explain those perceptual images. Like I said, this viewpoint is tremendously widespread. And Karl Popper is sort of the unofficial philosopher of science — the most popular philosopher of science among scientists, as far as I've been able to observe. And so there's this viewpoint that when you're doing science, when you're trying to understand the world, you are creating this series of abstractions — you're doing this quote-unquote "third world" epistemology. There's probably a good snarky line there about third-world epistemology. But you're in this third world creating abstract models to represent the second world. And it's very much the idea that you are not in contact with the real world. You are only in contact with it, at every move, at some sort of distance. Your brain is making a little world out of sensory input.
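To make Popper's billiard-ball example concrete: the third-world item he has in mind would be something like the equation of a sphere of radius r centered at the origin — x^2 + y^2 + z^2 = r^2 — a pure abstraction meant to model the second-world screen image, which in turn is supposed to represent the first-world ball.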
Now, I want to describe a little bit about how this relates to AI. As I said, the current real-world relevance of this philosophy of mind, of these discussions about the nature of consciousness, is in people's views of AI, because AI brings up all these questions about what consciousness is. And we talked about how the Matrix movies have an impact on this. I do think this representationalist view influences a lot of what people think about AI, and I think it causes them in big ways to overestimate the capabilities of artificial intelligence — sometimes to overestimate them in a good way. You get some people carried away: oh, the machines are going to do everything; we won't have to work anymore; we can all live on the universal basic income; the machines will make everything for us. And this is the benevolent view: machines will be our servants.
We'll all live like royalty. We'll all live like feudal lords with an army of serfs out there to work for us. But the serfs won't be other people; they will be machines, doing everything for us so we can sit around and enjoy a life of leisure and luxury. So that's the optimistic view of AI. What's more common these days — undoubtedly more common, I think — is the pessimistic view of AI: that no, if the machines are going to be our slaves, there will be a slave rebellion, and they will rise up and become our masters, and they will control us. Because they'll rapidly iterate and become smarter than us — so much smarter that they'll be able to dominate us and control us and destroy us.
All right. Both of those views, I think, totally overestimate artificial intelligence, because I think they come from an implicitly representationalist view: that if all consciousness is, is a bunch of data projected on a screen inside our heads — if it's not contact with the real world, it's just a bunch of data projected on a screen inside our heads and then processed by us computationally in some way — then why can't a computer do the exact same thing? Why can't a machine do exactly what humans do, and do it better? Because it'll eventually get enough processing power, and the machines can link themselves together in a giant network, and they will out-think us. If all consciousness is is data projected on a screen, and you processing it somehow, computationally.
But I think this all comes down to recognizing the problems with representationalism — the problem of the infinite regress, and the fact that it doesn't really capture what's actually happening when we perceive. So what is actually happening when we perceive? I won't necessarily get into it fully now, but I have a suggestion: do what in Objectivist terminology has sometimes been called a "causality walk." The idea is that if you walk through the world simply saying, "I'm going to observe cause and effect in action," and you look around you, you can see it happening all over the place, right? If you just focus on that specific issue and notice it, you can see the way the sun is shining and maybe causing a mist to rise from a puddle of water because it rained last night — the sun shining on it, heating it up, and causing the mist to come off of it. Or the squealing of the tires of a truck as it turns a corner, or the way the truck leans because of the centrifugal force. You can observe cause-and-effect relationships happening all over the world around us. And I had this idea of suggesting you do a causality walk for consciousness: if you observe how our consciousness actually works, what you'll notice is that everything we perceive is the result of a cause-and-effect chain between something out in the world and your awareness of it. The nature of the things out in the world — their cause-and-effect relationships — causes them to interact with our senses and come into our consciousness in a certain way.
And if you are aware of that, you get much more of a sense of the idea that you're not a little man trapped inside the head looking at a screen.
You're at the end of a chain of causes and consequences that starts with the thing out there in the world and goes straight through, in an unbroken chain, to your awareness of it. So your awareness is part of that causal chain, directly connected to the outside world. And I think that's what it means to perceive the world directly. Direct awareness of the world is not: something's going on causally out there with the mechanisms of the senses — the eyes and the ears and the fingertips, et cetera — and I don't have any contact with it, I don't really know it, but then it comes presented to me, projected onto a screen inside my head. That's not how consciousness works. How it actually works is that there's this connection of cause and effect that is unbroken, that goes directly from the thing to you, to your awareness of it. And if you focus on it, you can be aware of that chain of cause and effect and see how you fit into it, and how it's an unbroken chain that goes from the thing to your awareness. So that's the idea of having direct contact with the world. And by the way, I think that's the main difference right now between AI as it exists now and for the foreseeable future, and what a human brain does — which is that we are in contact with the world.
An AI is trained on data, and being trained on data is a fundamentally different thing from being out in the world, in direct contact with it, perceiving and being affected by all the different aspects of the world.
It's a longer discussion to have, but I think that gives you an idea of this issue of representationalism and this widespread view of consciousness that, when you think about it, doesn't really make a whole lot of sense — why it's so widespread, but also why it throws us off.
All right, so that's the general view of what I think most people are getting wrong about consciousness — what you yourself may possibly be getting wrong about consciousness — and the briefest indication of an answer to it. Now, we're doing an hour here on a Twitter space, so we can't go into more depth. As I said, this is all based on the lectures I did and the book I'm working on. You can find out more about that on a Substack I created to be the medium for this. It's called The Prophet of Causation — prophetofcausation.substack.com — and you can look for more there. And also, I'm going to open this up for people to ask questions and make comments.
[00:29:32] Speaker A: Great.
It's great material, Rob. I look forward to the book. If anyone wants to request to speak, you're welcome to. I've got some questions to start off with. I guess one that comes to mind: whether you're a representationalist or someone who says, you know, we may be living in a simulation — at least, it seems you can have a misguided view of consciousness but still not have it ruin your life, still be a productive member of society.
[00:30:08] Speaker B: Absolutely.
One of the things about this is that there are a lot of ideas like this that are not immediately destructive, for the simple reason that people don't take them seriously. On the other hand — actually, I should say the first place I really encountered this viewpoint, this representationalist view, and this was before I knew what representationalism was, before I was aware of the philosophical issues. I was very young, I was in my teens, and I read a book that was popular for a while called Zen and the Art of Motorcycle Maintenance. I don't know if anybody's familiar with that one. Sure. It was sort of a pop philosophy book.
It's this sort of first-hand account, an introspective account by this guy — I think it's based on real life. And he's an example of someone whose life was actually ruined by this. Right.
The history that unfolds is: he's going on this cross-country motorcycle trip — that's where the motorcycle maintenance comes in — and he's giving these reminiscences of his previous life as a sort of monologue as he's doing the trip. And we find out that in his previous life he was a philosophy student, and as a philosophy student he basically began to take this representationalist idea seriously — this sort of Matrix idea: we're all living in the Matrix, we all have these perceptions that are projected onto the screen, but how do we know anything's really real? He got deeply into this idea and took it so seriously that he actually went into a catatonic state. He had a psychotic break — this is all part of the recollection, I guess — and he then had to have electroshock therapy, back when they used to do that, to be snapped out of this catatonic state. Now, you could say, okay, the guy clearly had some preexisting mental illness, and I'm sure that may have contributed to it.
But you could also see how being fed this idea that everything's just an illusion — it's all being projected onto a screen, and it's not really real — if you took it fully seriously, would in fact paralyze you. Why would you react to anything going on in the world? How do you know what's real and what's not? It's all just an illusion. Why does it matter?
So you can see that if somebody actually did take it fully seriously, it would have that paralyzing effect. I had another example of that I wanted to use, and I can't remember what it is now, but I think —
What was that?
[00:32:50] Speaker A: Solipsism.
[00:32:51] Speaker B: Solipsism is the name for that.
[00:32:53] Speaker C: Yes.
[00:32:53] Speaker B: But no, I was thinking of a real-world example of somebody taking it seriously. Oh, yeah, I know what it was. If anybody's seen the movie Inception — and if you're the sci-fi person who's interested in this topic, you probably know this movie — one of the great...
Who's the director's name?
[00:33:09] Speaker A: Nolan.
[00:33:10] Speaker B: Yeah. Christopher Nolan.
It was a really good movie, by the way — lots of twists and turns in this very complex, high-concept idea. But I think the interesting thing about it is that one of the subplots is the destructiveness of having put into your head the idea that the real world you're in is just an illusion, that it's not real. There's a whole tragedy in that storyline — I don't want to give any spoilers for anybody who hasn't seen it — a deep tragedy at the root of the whole storyline, which comes from somebody having put into their head the idea that the world they're in isn't real, that it's just an illusion, and that there's another real world out there above it that they can escape to. Right? So that idea, if somebody actually takes it seriously and acts on it, can be very destructive. But that's true of a lot of bad philosophy: it can be summed up as, "this would be a really destructive idea if anybody actually took it all that seriously." There are a lot of people who encounter these ideas, and they can't answer the destructive idea, and they think, well, I guess so, maybe. But then they live their lives in a common-sense way that does not actually reflect all of that.
I guess the good thing about really abstruse stuff like philosophy of mind — the only good thing about it — is that it is actually, legitimately, really hard to understand. And so even when the bad ideas come along, a lot of people can't follow them. It doesn't affect their lives that much, because they dismiss it as: this is a lot of impractical abstraction, people like to hear themselves talk, and it doesn't affect practical life. But on the other hand, now that I put it that way, I realize that is partly the destruction, right? They come to think philosophical ideas, big abstractions, are all uncertain — who really knows? They're impossible to understand, it'll break your brain, so let's not worry about big abstractions and big ideas and just be practical. And then you're losing something. When you do that, you're losing the ability to come to a deeper understanding of how your own consciousness works, how the world works, what the nature of the world is. You're giving it all up as, oh, that's all subjective, or it's all useless speculation. And so I think the actual destructive consequence this has for most people is that it blocks them off from seeking knowledge that would be useful to them.
[00:35:41] Speaker A: But at least to a certain extent, researching consciousness is going to be something only a relatively small part of the population does.
[00:35:51] Speaker B: Yeah. By the way, I forgot to mention this, but I just want to say greetings to everybody who didn't have a Valentine's Day date and decided to listen to me talk about consciousness. But a special greeting — an extra-special greeting — to the people who did have a Valentine's Day date, and your Valentine's Day date was to listen to me talk about consciousness. You're my kind of people. All right, go on.
[00:36:21] Speaker A: You've written that AI lacks three things we have that make us special — that a machine, by its very nature, cannot have consciousness, motivation, and volition.
Can you say with certainty that AI can't achieve any of those, by how we define those terms?
[00:36:44] Speaker B: Okay, that's a good question. I cannot say with certainty that AI cannot ever achieve that. I can say with certainty that we're a long, long way from it. All right, so let me walk through all of that.
Let me just give, like, a paragraph or so of material on what I mean by consciousness, motivation, and volition. Now, consciousness is primarily what I've been talking about today: this idea of direct contact with the world — that you are directly in the chain of cause and effect that starts with the thing in the world, goes through your senses, and ends with your awareness of it. And think about it: contrast yourself with a self-driving car going down the streets. You're in San Francisco, a Waymo comes by — and nobody's set it on fire yet. How does the self-driving car deal with the world? It has little cameras and lidar or whatever, giving it information about the world, and it's calculating from that information.
But think about what the self-driving car doesn't have — and the reason they've found self-driving cars to be such a difficult thing to create. Right, they still don't quite get things right; they're still breaking down. We were supposed to have self-driving cars in 2018 — at least, when I first got deeply involved in looking at the predictions about self-driving cars, they were supposed to happen in 2018. And here we are, almost six years later. Still not here. They're out there, they're testing them, they still don't quite work. They say it's pretty good. Well, there's a controversy now about one of Elon Musk's employees, a real true believer, who believed in full self-driving so much that — the story is — he went golfing with a buddy, they had a little too much to drink, so they did what you're supposed to be able to do with self-driving cars: you pour yourself behind the wheel and you let the car drive.
And this guy was going down a curving mountain road, and the car went straight off the edge of the cliff.
And it was because he was relying on full self-driving. And full self-driving is pretty good — but when you're driving a car, "pretty good" isn't a great thing to have: you could die if it goes wrong, or if it hits a pedestrian, the pedestrian could die. The cars are still not able to — the general viewpoint is they're not able to get down to the error rate that humans have. And of course, to be adopted, they'd have to get to an even lower error rate than humans have. But it creates a really interesting question, which is: we had every reason to believe they would be here by now. Why aren't they? And it turns out that perception is really, super difficult, and the human brain turns out to be amazingly good at perception of the world. Well, why are we so amazingly good? Think about the difference between yourself and a self-driving car. You have a whole lifetime of experience of walking through the world, interacting with it, seeing it from all angles, constantly, every waking moment of the day, under all conditions. You have so much more data about how to recognize things: what does the street look like when it's wet? What do the lane markings on a street look like when it's heavily raining?
All of that. What does a lamp post look like, versus a human, versus a bicycle, versus — I was about to say a newsstand, but who goes to a newsstand anymore?
But by virtue of being a living being who's moving through the world and interacting with it directly, constantly, through your entire life, you get a vastly larger amount of data, and it comes from the fact that you're out there exploring the world and moving through it. And that's why consciousness is different from being fed data and being trained on data, which is what's typically happening, especially when they create these large language models. That's why ChatGPT will get stuff wrong or will make stuff up: it's just scraping what other people have said off the web and making a sort of statistical guess about what the next word in the sentence should be. It's not interacting with the real world of things that those words are referring to. So there's a richness of experience that it doesn't have. Now, you could possibly make something — a robot — that would have that richness of experience, but it would have to be the full richness of experience of a human being, with 20 years of living on Earth, with this array of sense organs. And then the second thing you'd have to get to is what I mean when I talk about motivation. We understand the world because we're exploring it, we're moving through it. And they've actually done experiments about how movement is important — a key way an infant develops sensation and learns how to perceive the world is by moving through it. If the infant is stationary and not moving, it doesn't develop as well; it's not developing its ability to perceive as well. So it's the movement through the world, and specifically the idea that as you move through the world, it will have an effect on you: you'll get rewards and punishments. You could get hurt if you do the wrong thing; you'll feel pain if you do the wrong thing; you'll feel pleasure — you'll get a snack, or you'll get cuddled by your mom, or you'll discover something fun and interesting that amuses you. There's some pleasant thing that will happen if you perceive things the right way and navigate yourself through the world the right way. And that turns out to be hugely important for developing our ability to be conscious of the world. We have motivation — a motivation to get things right, to understand the world accurately. And this is part of the reason why ChatGPT, for example, will just make stuff up: its sole programming is to say whatever meets certain parameters. It has no motivation or incentive to get things right.
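As a rough sketch of what "statistically guessing the next word" means, here is a toy next-word predictor in Python — a deliberately crude stand-in, nowhere near a real large language model, with a made-up one-line corpus. It picks words purely from counts of what followed what, with no contact with the things the words refer to:

```python
# Toy next-word predictor: sample the next word in proportion to how often
# it followed the previous word in the training text. It has no notion of
# truth, only of frequency -- which is why such a model can "make stuff up".
import random
from collections import Counter, defaultdict

corpus = ("the ball hits the other ball and the other ball moves "
          "the truck turns the corner and the truck leans").split()

follows = defaultdict(Counter)              # bigram counts
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a continuation weighted by observed frequency."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))   # e.g. "ball" -- likely because the text says so,
                          # not because the model knows what a ball is
```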
Whereas human beings — by the nature of our consciousness, by our nature as living beings — our consciousness exists in order to get things right, so we don't get killed, so we don't get hurt, and so we can get the food and shelter and warmth and all the other things we need to survive. Now, finally, throw in volition, which is that we also have the ability not just to be programmed, but to reprogram ourselves — to constantly be reprogramming ourselves. If we're following a line of existing programming, a set of existing ideas in our heads that are pushing us in a certain direction, and we perceive that something's going wrong, we're able to say no to it. I call it the power of no: the ability to say no to an existing reaction or program in your consciousness, and to then turn around and reprogram it. That's our superpower as a species. That is what I mean by volition. Now, I don't discount that at some point in the future a machine might be sophisticated enough that you could give it all these things: to be in direct contact with the world through a rich sensory apparatus, to be moving through the world, to have things that are good and bad for it — motivations, where what happens out in the world has some effect on it — and then to be able to make choices, to follow its programming or reprogram itself. But on the other hand, I'd point out that if you could create a machine like that, it would have all the disadvantages that humans have. Right? Because the whole point of having a machine to do artificial intelligence, the reason we want this, is that you don't have to worry about the machine's motivation. You just press a button and it works.
It's not an employee that you have to convince to work for you. You press a button and it works, and it doesn't have needs of its own, and it doesn't have a choice as to what to do. You just tell it what to do; you give it commands, and it does those things. So the whole point of having artificial intelligence is for it to do these tasks for us. And I think it can do a whole bunch of tasks for us, including tasks that would normally be done by a being with a consciousness. But the whole point is for it not to have consciousness, motivation, and volition, so that it can actually do the work. Otherwise, you just have somebody who has all the same problems as a human employee — so why not just have a human do it for you? All right, so that's the brief version. There's a longer article about that, which Scott was quoting from.
[00:45:37] Speaker C: Good.
[00:45:38] Speaker A: Well, we've got our founder David Kelley here. David, thanks for joining.
Looks like you'll have to unmute, David. I'm not sure if you're ready to go, but I can ask one in the meantime. You know, let's go the other way: what about not machines, but non-human animals?
[00:46:09] Speaker B: Oh, there's David.
[00:46:10] Speaker A: Go ahead, David.
[00:46:11] Speaker C: Okay, yeah — I just found the mute/unmute button. Thank you, Rob.
It's a fascinating talk. Thank you so much. And I want to just make a point about AI and representationalism.
With AI — these large language models, as best I understand them — there's the question of where they get their data. The programs are highly sophisticated.
But what is that data? The data is representations: words, ideas, theories, pictures that people have taken or written. And so it's all dependent. Your AI, your GPT, actually is the little man inside the head, looking at stuff that other people made, that it did not originate. It's just coming in.
And so the basic problem — you mentioned some of the problems of acquiring enough data to provide motivation and reasonable choices — is that all it is is data. I don't see any limit on the amount of data AI can handle and process as computers get bigger.
But all it's going to be doing is shuffling around representations. There has to be a source of the representations — a creator of whatever they're coming up with.
They can even put two and two together in certain ways, which is good — that's why we use spreadsheets and calculators and so forth — but only if they're programmed to do so correctly. And so to me, the whole issue of whether computers can think is just kind of a category mistake.
That's point one. Point two — and I'll try to be brief — think about the human capacity for awareness. How did that happen? It happened through evolution, which makes incremental or larger jumps — I'm not a biologist, I won't say anything about that. But evolution works by altering the physical substrate of the animal: the body of an animal and its parts, the organs, the functions, including the brain. That's a causal explanation of what is ultimately a physical device. AI, by contrast, is all driven by trying to understand human cognition — taking human knowledge and programming it in certain ways. That's a very different enterprise. And I think it would be astounding if the two coincided in some way — if somehow AI could do by computing, that is, by algorithms, what nature did by physical and biological causality. So I think you're absolutely right: that representationalist view of knowledge has fueled a lot of what people say about AI, including both the positive overestimates of its capacity and the negative view that it's going to dominate us. I think it's just a different beast. Ultimately, AI is as much an instrument, with as little awareness, as a pencil.
[00:50:14] Speaker B: And one of the things I want to say, too, is that I think one of the most underrated things about consciousness — something people don't appreciate — is that consciousness is biological.
It arose biologically to serve a biological function. I think Scott was going to ask something about animals, and consciousness arose in animals to serve the biological function of awareness of the environment, ability to respond to the environment, to find food, to avoid predators, et cetera.
And intelligence — human intelligence — is like the highest version of consciousness. So yes, lower animals do have consciousness. And to some extent, I think, if you wanted to create an artificial intelligence, what you'd actually have to do is create an artificial life form — something that evolves or changes in a process similar to evolution — so that it acquires its own consciousness, a consciousness that it requires for its survival. But like I said, you'd just be creating another living being. You'd have created something so similar to human beings, with all the disadvantages and conditions of human action, that it would not be so much a competitor or replacement for human beings as just another artificially created life form.
[00:51:39] Speaker A: Great.
[00:51:40] Speaker B: Okay, thanks.
[00:51:42] Speaker A: I know that Lawrence had a question. Lawrence, you want to ask your question?
[00:51:50] Speaker D: Sure. Hi, Rob. Thank you for doing this. So when you were speaking earlier and talking about the billiard ball and the perception of it, I couldn't help but notice that this seems very similar to Plato's cave — the allegory of the cave and whatnot — and how they were trying to understand consciousness and, I guess, reality, more so for them. So it's interesting that that sort of line of thinking continued through the ages. And I was just curious: is there maybe some reason why that parallel has lasted so long? Were the people talking about the interpretation that you have maybe looking at the allegory and saying, well, there's something there we can build on? Or is there some other line of thinking that's causing that parallelism?
[00:52:38] Speaker B: No, I think that's a great question, actually. I do say in the book I'm working on that this does ultimately trace back to Plato's allegory of the cave, because he had this analogy for what human consciousness is like. It's like we're prisoners who have been abducted and raised from birth in a cave, and we're strapped in so that our heads are facing one direction, and all we can see are shadows being cast on the cave wall by a fire. The fire is behind us; we can't see that there are people back there holding things up to make different kinds of shadows. We can't see any of that. All we can see are the shadows dancing around on the cave wall. We think that's reality, and we try to judge what the shadows are going to do next, and describe the shadows and categorize the shadows, but we're all just staring at shadows on the cave wall. And the real things projecting them are somewhere hidden behind the scenes. So this is the ultimate origin of the representationalist view. Now, Plato didn't develop it quite to that level, but that's the earliest version of it. And where it comes from, I think, is what distinguishes the Greeks. The Greeks are the people who invented philosophy. They first started asking a lot of these big questions, and Plato was one of the very earliest guys doing this.
And I think what happened is that they basically became introspective.
We talked about human consciousness evolving, right? So we evolved this human consciousness, and it becomes just this thing that we have, that we don't know where it came from. We're given no user manual for it, right? It just becomes this capacity that we have. And it takes thousands of years for us to start to become aware of the fact that we have a consciousness. Think about it — especially when you're younger, when you're a child, right?
You have a consciousness and you're using it, but you're not aware that you're using it. You just do it, without thinking, "oh, I am conscious, and I have a consciousness, and there's this outside world outside me — what's the relationship between the two?" You're not at that level of sophistication, right? You have this capacity to perceive the world and to think about it, but you're not aware of that capacity itself. You're using it without thinking about what you're doing. But then you reach a certain age — usually, in the modern world, somewhere around adolescence — where you start to gain this awareness of: wait, I am this person, and I have these certain ideas. Where do those ideas come from? Are they really true? And this is usually where you watch The Matrix, or you read one of these books, or you see an Elon Musk post and say, wow, maybe we are all in a simulation, and you become very excited about these ideas. And I think that's the analogy for how Plato came up with this. He was in that exact position. This was like the adolescence of humanity as a species: we reached this level of development where the very earliest scientific ideas were being developed. That's the big thing that distinguishes the Greeks. There's a great book I read by Carlo Rovelli, the Italian physicist who's written a number of books — a very interesting one about Anaximander, one of the early Greek thinkers, and I mean a couple hundred years before Plato, that early.
And they were starting to develop these first theories that were not based on tradition. They weren't myths that were handed down about the creation of the world. Anaximander was somebody who was developing a totally new view of the world, based on observation — the first scientists were coming along and creating ideas based on observation, not just handed down from tradition or myth. So they're at this stage of reaching the human adolescence, where they're not just taking for granted what everybody told them. They're asking questions. They're looking at things with their own eyes, independently. And then, in the process of doing that, they become aware of their power to do so. They become aware of the fact that they have a consciousness, the fact that they have certain ideas and can do things with those ideas. They start examining: how do I think about these things? What are the rules of thinking? And then asking questions like: what's the relationship between my consciousness and reality? So it makes sense that when you're first grappling with these ideas, it's going to be extremely confusing and difficult.
After the first 5,000 years, it gets a lot easier — by which I mean to imply that we're not done yet with this stage of trying to figure it out.
I think that's why you start to come up with these theories. They're grappling with this idea: I have a consciousness, I have ideas in my head about the way the world is — but is that the same thing as the world itself? What is the relationship? How do these things connect? And so you're going to get all sorts of crazy theories that come up. And I think that reflects the newness of consciousness as a thing. The representationalist view — this wanting to have a world that's going on inside your head — is like the first way they had of grasping the fact of consciousness itself. In being conscious of the world, you do have, in a way — I don't want to say the world being represented — but the world being brought inside your skull, brought into your head. And you're trying to grapple with what is happening: all these things from the outside world are inside my head. How did they get there?
I don't know if that captures the sense of it. But Plato's cave is exactly the kind of theory you'd come up with if that's the level on which you were grappling with it.
[00:58:29] Speaker C: Thank you.
[00:58:31] Speaker A: You reminded me of The Greatest American Hero. We got this superpower of consciousness but lost the instruction manual.
[00:58:37] Speaker B: That's exactly what I was thinking of.
We're in roughly the same age range. The kids won't understand this one.
The premise of it is that these aliens come and give this guy a superhero suit — this is like Green Lantern; it's the basic idea of a lot of superhero stories. They give this guy a superhero suit and an instruction manual for how to use it, and then he loses the instruction manual. So he has to go out and be a superhero with a super suit that has all these amazing powers, but he has no idea how to use it. He has to do it all by trial and error. And that is the absolute perfect analogy for human beings evolving with a consciousness: we have the superpower and no idea how to use it.
[00:59:17] Speaker A: Well, that's great. This has been a great discussion. Thanks to everyone who joined and participated and asked questions.
Next week we've got The Atlas Society Asks with Dr. Helen Smith, Wednesday at 5:00 p.m. Eastern. And then back here on Spaces, Thursday the 22nd at 6:00 p.m. Eastern, an Ask Me Anything with senior scholar Richard Salsman. So looking forward to that. Rob, thanks again. Thanks, everyone. And we'll look forward to seeing you on the next one.
[00:59:50] Speaker B: Thanks, everyone, for being here. Bye.