Episode Transcript
Speaker 0 00:00:00 All right. Well, we have Rob Tracinski here with us for today's conversation. We also have The Atlas Society's founder, David Kelley, in the room, and a few people starting to come in. So Rob, before we even get started, maybe you can tell me: what is Betteridge's law of headlines? You want to take yourself off mute?
Speaker 1 00:00:32 Yeah, sorry about that. You'd think we'd know by now, after all this time on Zoom and everything. Anyway, on Twitter I posted a little notice about this particular discussion and mentioned Betteridge's law of headlines. Betteridge's law says that a newspaper headline that takes the form of a question can usually be answered with no. So when my question for tonight is, "Are robots going to take away our jobs?", my answer is going to be kind of yes and no, but mostly no.
Speaker 0 00:01:10 All right. Well, why don't you set us up and tell us why you chose this topic? I know it's definitely in the news, and a lot of our recent honorees at our galas have been in that space, in terms of automation and artificial intelligence. Clearly we also have minimum wage laws, mandates that are putting pressure on the service industry to automate. But increasingly, with artificial intelligence, this is coming, or at least the fear is that it's coming, for white-collar jobs. So tell us about it.
Speaker 1 00:02:01 So I want to start with the overall context. Just as I was tweeting that out about Betteridge's law of headlines, I noticed that Andrew Yang, the guy who ran for the Democratic Party nomination for president despite having no prior political experience, this Silicon Valley utopian guy who has been trying to run for various political offices, tweeted something out at almost the exact same time about a book claiming that artificial intelligence is going to change everything. And it probably is going to have a lot of wide effects. But Yang is a guy who indicates the political significance of this, because there's either a utopian or a dystopian attitude toward artificial intelligence, and people kind of alternate between the two. The dystopian version is that artificial intelligence is going to become so powerful.
Speaker 1 00:02:57 It's going to be able to do all the things humans do, both in manufacturing and in white-collar jobs, and therefore we're all going to be out of work. That's the dystopian version. And the utopian flip side, which is what Andrew Yang represents, is: well, actually, this is a great opportunity. What we'll do is tax all the owners of the AI and the robots and use that money to provide everyone with a guaranteed universal basic income. Everybody gets a guaranteed living provided for them, and nobody has to work. The robots will take all of our jobs, and that will be okay, because the government will pay us all to sit home and do whatever we feel like. So that's the utopian aspect. And basically, I am against both the dystopian and the utopian approach to this.
Speaker 1 00:03:42 You know, I think when we talk about AI, the problem we have is that the only framework the average person has for dealing with this is Hollywood movies, and in Hollywood movies it always ends up being that the robots take over and try to kill us all. The Terminator and the various other variations on the theme of the robot apocalypse are your literary frame of reference for what artificial intelligence is. So let me back up a little. When we think about artificial intelligence, we think about the Terminators; we think about a human-like brain capable of full abstract thinking. But the actual definition and meaning of artificial intelligence, as it's used by the people who are creating it and using it in the technology field,
Speaker 1 00:04:34 Is it simply anything in which you can get a machine to do something that would otherwise require human intelligence? So it doesn't mean that the machine is thinking, it means that you have somehow created a complex mechanical system for doing something that would normally require a human brain to do so by this definition, a pocket calculator is artificial intelligence, right? Because it can add numbers, even though it's adding numbers by means of a fairly simple set of electrics of electrical circuits, nothing near like an actual abstract intelligence, the only intelligence has evolved in the design of a pocket calculator. Uh, no, we don't have pocket calculators anymore because you have a thing in your pocket that has a calculator and a billion other things it can do. But I remember the days of the calculator. So let's keep that simple. The only intelligence evolved in a, in a calculator is the intelligence of the person who created those circuits and programmed it.
Speaker 1 00:05:34 All right? So that's why artificial intelligence is a very wide term. Now, the reason people are right to be concerned about this (I wrote a piece, gosh, about eight or ten years ago now, called "We Are All Obsolete Now") is that a lot of work people do that used to be considered too complex to be automated is going to be automated. The big example people usually use is LegalZoom, a website where, if you want to do some fairly simple legal process, you go and answer a bunch of questions, and it gives you the forms you need to fill out in order to write a will or to incorporate a business.
Speaker 1 00:06:26 So this is a way of putting a bunch of white-collar people, a bunch of lawyers, "out of work", quote unquote. I don't think the number of lawyers in the country is actually decreasing, but the idea is that it's replacing the work of lawyers, because somebody has come up with a complex enough algorithm, a programmed decision tree and knowledge base, so that without interacting with a human being, just by answering what is essentially a long questionnaire, you can be directed to the forms you need to fill out in order to accomplish some legal filing.
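LegalZoom's actual system isn't public, but the questionnaire-as-decision-tree idea Rob describes can be sketched in a few lines. The questions, answers, and form names below are invented for illustration:

```python
# Toy sketch of a questionnaire-driven decision tree: the kind of
# "complicated form filling" being automated. Every question and form
# name here is hypothetical, not LegalZoom's real content.
TREE = {
    "question": "What do you want to do?",
    "answers": {
        "write a will": {"form": "Simple Will Worksheet"},
        "start a business": {
            "question": "Will the business have more than one owner?",
            "answers": {
                "no":  {"form": "Single-Member LLC Articles"},
                "yes": {"form": "Multi-Member LLC Operating Agreement"},
            },
        },
    },
}

def recommend(node, answers):
    """Walk the tree using the user's answers; return the recommended form."""
    for answer in answers:
        node = node["answers"][answer]
        if "form" in node:
            return node["form"]
    raise ValueError("questionnaire incomplete")

print(recommend(TREE, ["start a business", "yes"]))
# → Multi-Member LLC Operating Agreement
```

Note that the "lawyering" was done once, up front, by whoever encoded the tree; the program itself just routes answers to forms, which is why this kind of work is the low-hanging fruit of automation.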
Speaker 1 00:07:11 Uh, now this comes from the fact that there's a lot of work that we've considered white collar work that requires a college education that is really just complicated form filling, right? You know, th th there are people who have these white collar desk jobs, and they basically said, spend all day sitting around and doing the equivalent of just filling out a form. It's not something that requires high level abstract thinking. It just requires that you be familiar with the forms that need to be filled out. And that's sort of the low hanging fruit of things that can be automated. So there's going to be a lot of stuff like this that will get automated. And of course, that's following on the, you know, the, the industrial revolution and the application of computers to the industrial evolution, which is also greeting bringing greater automation to manufacturing production.
Speaker 1 00:08:02 So that's the grain of truth to all of this. And I think it's all going to be extremely beneficial, because the more of that stuff you automate, the form filling, the rote, mechanical, repetitive things, the more you free people up to do the far more productive work that requires thinking. But the hysteria, in both the dystopian and the utopian views, comes in where people vastly overestimate what artificial intelligence can accomplish. And, strangely enough, they also underestimate its value to us as workers and producers. The actual way artificial intelligence will affect our work is not by putting us out of our jobs, but by augmenting our ability to do our jobs. Now, a couple of years ago, I think it was on Twitter,
Speaker 1 00:08:59 I put out a challenge sort of to all comers to say, name to me one job that has actually disappeared because of automation, oh, you over the 200 year, 200 plus year history of the industrial revolution, names me a job that has disappeared because of automation. And of course, a bunch of people, uh, came up with, well candlemakers. And I pointed out that, well, there's, you know, the couple coming Yankee candle and a couple other candles, a couple other companies out there that manufacture candles to this day and they have, you know, thousands of employees. So there are people actually working as in, in these factories as candlemakers, uh, or people, somebody pointed out a Sawyer, a Sawyer is an old term. You know, it's sort of a name now that people don't that's like Cooper people don't think about what it originally was. But a Sawyer was a guy who literally operated a handsaw to, to cut a large piece of wood, a large plank of wood, or to cut down, you know, if you felt trees and you were cutting them down to make them into lumber, is a Sawyer, was a person who operated a large hands-off.
Speaker 1 00:10:01 Well, the funny thing is, when somebody said that to me, I had recently talked with a sawyer. What a sawyer means in the 21st century is a guy who has a small mobile automated sawmill. If you cut down a big tree on your property, he will come and cut it up for you and turn it into lumber you can use. So if you have, say, a rare and unusual species of wood that you want to harvest in order to make furniture from it, you hire this guy to come out, cut it up, and mill it into lumber for you. So the sawyer still exists, but he's not a guy with a handsaw anymore; he's a guy with an expensive piece of machinery that does the sawmilling.
Speaker 1 00:10:47 And that's the basic pattern for how automation is actually, if you look at, in reality, how it's actually affecting people's jobs is that people don't stop doing the jobs. They do the jobs in a new way, uh, and that the, the jobs still exist, but it exists. It consists of working with the machinery and the automation that augments the judgment of the actual worker who's doing it. Uh, the other example I pointed to is somebody who has to worry about type centers, type centers no longer exist in this modern age. And I said, yes, but what's the Mo if you think about what's the modern equivalent of a typesetter, there are 1,000,001 web designers out there. And a web designer is basically just a 21st century type setter, or he's somebody who figures out how to take the words that you want to publish and put them into publishable form in this case, by creating this structure and, um, uh, operations for, for your website.
Speaker 1 00:11:46 So again and again, when you look at this and how automation and technology actually affects jobs, and we have, you know, like I said, centuries worth of data, of, of evidence to go from what you actually see is that the machines come along and they augment human activity. They change how you do the job and they change, and they greatly increase, increase the productivity with what you do the job, but they don't actually take away everybody's jobs. So the bad news is the fantasy that the robots are going to do all of our work, and we will sit back and do nothing. That's not going to happen. The good news is that we will also be in a job. And in fact, we'll have better paying and more productive jobs in the future because of technology and automation. So that's my short spiel on that. And I just want to open it up now.
Speaker 0 00:12:37 Great. Well, we also have David Kelley, our founder, here. David, did you have any thoughts on what Rob just shared?
Speaker 2 00:12:47 Not yet. Thanks, Jack, for asking. But, you know, I take all of Rob's points as very interesting: the examples of the candlemaker and the sawyer. And I'm assuming you would say something similar about the horse and buggy after the automobile.
Speaker 1 00:13:11 Well, I would, actually. I just want to throw in a point there. I came across something recently about how a whole bunch of companies that exist today as suppliers and makers of parts for automobiles started out as suppliers and makers of parts for horse-drawn buggies.
Speaker 2 00:13:29 Interesting. Yeah. Rob, one thing, and I don't know how much we want to get into the underlying philosophy of AI, but AI has long been a part of cognitive science, which I've worked in and taught, going back in time. One of the issues there is whether machines are actually intelligent, whether they can actually reproduce all the functions of human consciousness. Some philosophers think yes, because the mind is just a computer; our cognitive functions are just software running on wetware instead of silicon. And I think that's crazy, because there's no reason to believe that computers are conscious or have direct contact with reality independently of their creators and the uses they're put to. But I guess my question would be: in your reading and thought about this topic, have you come across the idea that we're actually going to have machines that are smarter than humans in the sense of having intelligence like emotional intelligence, the ability to understand context, the ability to literally see and be perceptually aware of the world? Not to have cameras that film things and feed them into an algorithm, but to actually see and experience the way we do, consciously?
Speaker 1 00:15:21 All right, yeah. So this is actually the basis for the whole robot-apocalypse idea, right? The computers are going to become capable of intelligence the same way we are, and then, because of Moore's law, because computers keep getting more and more powerful, that will continue. Not only will they match us, but they will eventually exceed us. They'll become so much more intelligent than we are that we will basically be doomed; they will become the rulers of the planet and the rulers of the universe, and we will become their pets, or become enslaved, or they'll simply kill us off. Pick your favorite Hollywood movie version. I think the most interesting philosophical issue here, to me, is that all of this underestimates, and sort of denies, something that was a longstanding philosophical issue way before Silicon Valley and all of that.
Speaker 1 00:16:17 And the big thing is what they don't realize is the extent to which human intelligence and the ability to abstract is connected to the requirements of a living being. And that's something that I think is iron ranch is particularly strong about not that she ever thought about much about she never thought or studied or wrote much about computers or AI. That was all very, very new stuff in, in, late in her life. Uh, but one thing she did really grasp that we think was revolutionary about her philosophy was CA was the idea that essentially that, that, that intelligence and abstraction and the ability to reason has a biological function, right? That it exists to serve a biological need. It exists to serve it exists by a living being as its means of survival. And I think that I've only just begun to sort of putter around with us and write a few things about it.
Speaker 1 00:17:11 I've got more some plan, but I think when you go into it in a lot of detail, you see that this is, this is absolutely essential to intelligence from the very, very beginning, um, for the beginning of human consciousness. So for example, when my kids were very young, I took to studying a little bit of stuff about, you know, how, because I was watching it happen, right? How, how a child, I would infant develops, uh, its ability to, to see and to move and to understand objects around it. I was watching all this happen and I read some things about how it works. And one of the things that really intrigued me was that the importance of motion. So when a child trying to figure out his environment to basically, why are his ability to just to perceive the world motion is hugely important, uh, that he has to move back and forth and see things from different perspectives.
Speaker 1 00:18:02 But it wasn't just, motion was also self motion is really important. So he has to move himself and then see how the way things he perceives things, the way things appeared changes as he moves himself. So this idea of a being that's interacting with the world and moving around in it is essential to the very way we wire our ability to perceive. But I would also add that what's also essential is motivation. It's essential. What's essential to the ability of consciousness to function is the fact that you need to know things that you want to know, things that knowing things makes a difference to your life in terms of, you know, whether you're hungry or, or, uh, or fed, or whether you're hot or cold, or whether you get rained on or whether you're protected, et cetera, that the, the fact that thinking is a means of survival, that it has an impact on your wellbeing is essential to the ability of a consciousness to function. And I think that's the thing that's sort of discounted, I think, has been discounted false solidly for a long time. Um, and it's discounted in a lot of these sort of utopian or dystopian schemes about how we're going to have the ability to have machines have intelligence. You know what my question is, not just why can't a machine think, but why would it think, you know, what, what would be, what would that machine's motivation be
Speaker 0 00:19:34 Great point; I agree, Rob. Well, first of all, I want to welcome Scott Schiff to the stage. It would not be a Clubhouse chat without you, Scott. Second, I wanted to remind everybody that we are going to record this and make it available on our podcast platform. I had a question, Rob, related to what you were reviewing in your setup about Andrew Yang's premise and his proposal for a universal basic income. Maybe give a little background on universal basic income, where it came from, and your thoughts on the pros and cons.
Speaker 1 00:20:22 All right. So UBI, or universal basic income, is the new name for what used to be called a guaranteed minimum income, back during the McGovern campaign in 1972. It's like a lot of these ideas: they change the name and suddenly it sounds modern and fresh again instead of old and tired and debunked. But it's just the age-old dream of living without working. I call it democratized aristocracy, because if you read Tocqueville, he talks about the difference between the American attitude and the aristocratic attitude in Europe. He says that in Europe, work is viewed as ignoble, because in Europe the distinguishing mark of a gentleman or an aristocrat is that you don't have to work. You have an income provided to you, money provided to you, because you are the owner of a giant estate, or you have some sort of hereditary source of income, and you don't have to manage it.
Speaker 1 00:21:15 You don't have to do anything. You don't have to work in the field. Somebody else does that. And you are able to live a life of leisure and devoted to politics and war. And, um, uh, and maybe in the best case, you devoted to science in the worst case, you devoted to, you know, gambling on horses. Uh, but you know, the idea that these are the noble activities of the, of the, of the elite and only the ignoble only the, the, the, the, the base, uh, commoners are the people who have to grub about working. And he talks about being a fundamentally different attitude that Europeans had towards work versus Americans. Well, I think it's invaded America, but it's become democratized. And what that means is the idea that work is still viewed as ignoble. It's viewed as an imposition, as something you should want to avoid if you possibly can, but it shouldn't be in the democratized version.
Speaker 1 00:22:09 The ability to avoid working, isn't viewed as the province of a small elite of aristocrats, it should be available to everyone. And the basic income is basically trying to realize that, that great aristocratic, democratized, aristocratic dream, without that everyone will have the privilege of living like a sex, like a 17th century, French aristocrat, and, you know, and sitting around and having servants who to provide him with everything that he needs. Um, so the, uh, the problem I see with the UBI now, the sort of the, the way it, the really vicious part about it, I think, especially in the case of automation and the advances in technology is what it actually does. Like a lot of quote, unquote progressive ideas. What it actually tends to do is create a two tier class society. And what it does is it creates a, uh, uh, one group of people who are encouraged not to work, not to develop any of the habits of work, not to develop any of the skills of work and simply to, to live without having to think about taking any thought for how they're going to provide for themselves, but they live on whatever it is, the minimum that the government chooses to provide stuff.
Speaker 1 00:23:26 So they basically like are permanent underclass that's created. And we've seen this already with the welfare state, you know, with, uh, the Cabrini green housing projects in Chicago. When I was young, that you had this group of people who were set alert into a life of idleness, but living in squalor and the slums. Now, the province of promise of UBI is, will be a better version of that. You won't be living a Scott and squalor and a high-rise housing project. You'll be living in a nice apartment and you'll have more money, but it's the same, the idea that you will be paid basically to stay out of work, to stagnate, to have no skills, to not advance your life in any way, and to be lured into the life of idleness. Meanwhile, there are small minority of people who are still ambitious and interested in who still want to work.
Speaker 1 00:24:11 They will go out and do all the actually productive things in a society. And of course, in a society that's increasingly automated and high tech and advanced, those are going to be extremely remunerative, well paying jobs. Uh, so what you're basically doing is you're creating an overclass of technological overlords. If you could actually implement this, which I doubt they could actually do, but if you could do it, you'd have it. Overclass a small number of technological overlords off at Google managing all the automated systems and then an underclass of people who are paid to be idle, and to basically live at this sort of bare minimum. What strikes me is just totally perverse. Now it gets to an issue I'm going to talking about it when a clubhouse, a couple of weeks from now, which is work ism, which is this idea, this exactly, this aristocratic idea, that work is ignoble. That work shouldn't be something that offers meaning and value to your life. And that's really what this is what this is sort of based on is this idea that work is work is not something that's a value in your life. Whereas I look at a high-tech super industrialized, super automated system, and they will, I think of all the things that will be possible to us if we keep working at this much, much higher or much more productive level.
Speaker 0 00:25:29 Got it. Um, Scott,
Speaker 3 00:25:33 Hi there. Thank you. Good. Uh, topic. Um, I read an article, I guess a couple of years ago about, uh, this AI playing go and versus championship players. And, uh, one thing that struck me was the players saying that, um, you know, even though I think they may have drawn, you know, versus the number of games, he said that the AI actually helped him think about the game differently because of how he approached it. And so I guess I see that in some ways, you know, there's, there's potential of, of us just almost merging with machines. Um, some of the, you know, transhumanism sees that as like a way to functional immortality, uh, you know, just were, you would be able to exist as long as the machine did. And I mean, there's some incremental, you know, what you're talking about, about how there, you know, there is a push right now to have few of us work and for there to be a technocratic overclass, um, you know, even, you know, just us being all in our phones and everything is, is almost like a first incremental step of, uh, you know, just us.
Speaker 3 00:26:53 Uh, it, it just being something natural that that happens.
Speaker 1 00:26:59 Yeah. I mean, I, I want to talk a little bit about that merging with machines thing. Now, first of all, we are a way, way, way, way, way far off from actually doing that. Because the technology about that around that involves some extremely difficult questions. And the main one was, is there's a lot of people doing work on BCI brain, computer interfaces, basically. How do you get your brain to interact directly with a computer, not through your eyes and fingers or not through your thumbs, if you have a phone, but by means of just simply thinking they've done some experiments with this, where they have people who are paralyzed, for example, uh, who are able to think about a word, and then the word appears, uh, on, uh, on the computer screen. Uh, so they've been able to do some very, some early things with that.
Speaker 1 00:27:44 It's very difficult though, because hardware doesn't like to work with wetware, right? You, you put electrodes inside your brain and the electrodes tend not to do not to work and play well with the biochemical stuff. Uh, so that's a big problem. That's going to need to be solved. And of course the brain is so in or enormously complex that we only understand a very little bit of out works. So having those two things work together seamlessly is going to, because it's going to take a hundred years to do it. It's not going to happen in 20 years, it's going to happen in a hundred years. I mean, that's just, I mean, any prediction would be predictions are, are, you know, no, nobody can make an exact Bridget about this sort of thing, but we could simply give orders of magnitude that this is many decades away.
Speaker 1 00:28:30 But I do think that in the longterm, that is the answer that if we create all these amazing new capabilities, part of the assumption of the machines taking away our jobs is that we're going to create all these amazing capabilities for the machines and let the machines have them and not want them for ourselves. Well, eventually what we're going to want is we're going to want to implant that. And I, you know, I say, you know, sometime in the future, in the far future, uh, you know, some 19 year old, some 14 year old kid will walk into a museum and they'll have displayed an iPhone, you know, in the museum of antique technology, they'll have an iPhone displayed and he'll look at it with disappointment and say, oh, it's external. Uh, cause he'll have an implant in his brain that allows him to call people and talk to people and, and, uh, look up information and what have you.
Speaker 1 00:29:16 But like I said, that's, that's a long way away again, it's another thing where science fiction has really prepared us well for that. Cause what's you think about what's our big model for what it would be like to have a human being combined with machinery? Well, as a star Trek fan, I immediately think of the Bork. Right. And they're the bad guys. So, uh, th again, there's science fiction and it's been the trend of our science fiction for the last 50 years or so that it's extremely dystopian that, uh, um, you know, it's like, there's a TV show, a British TV show a couple of years ago called dark mirror and that the word dark there, it was all the, it was sort of a Twilight zone style thing of these, each, each show was a vignette, but it was always about how some new technology is going to change our lives in some way that makes it horrible.
Speaker 1 00:30:05 And so it was just total dystopian science fiction, um, practically every episode. And that's sort of been the model for, for how we look at these things and we don't look at well, you know, what would be the, what would be the upsides? Um, you know, I just think it's funny that, you know, star Trek as a model that, you know, one thing that really jumps out at you when you watch the original series, you know, filmed in the late sixties, is they're sitting at the wheel at the, up behind the hell with this giant star ship that can travel faster than the speed of light. And they're all flipping these little Bakelite toggle switches, which, which, you know, by the time, uh, I was watching it when I was a kid that was like an obsolete way of controlling something. Uh, and so, you know, it's hard to, hard to project what the technology, the actual 23rd century star ship probably won't have any displays or any controls at all. And people would just simply control it, you know, through a brain computer interface. But like I said, that's sort of, you know, that's very far off in the future. And the idea of could you use that to encase a human intelligence and live forever? That's also still very much in the science section, speculative stage.
Speaker 3 00:31:20 It may be, uh, I, my vision is for that to be the thing that, that objectivism of philosophy with life as the standard should be focused on. And once Kennedy directed our purpose of getting to the moon, we did it in 10 years and that, uh, you know, um, that can be something to unite us and against the people of death.
Speaker 1 00:31:52 Yeah. I, I do think though that they mean you have to not underestimate the extreme, uh, there's so many things that we, not only that we don't know, but that we don't know that we don't know. And I said, was it Don, Don Rumsfeld I'm channeling, Don Rhonda solitary, the unknown unknowns, uh, somebody things about the brain and about how to work with the brain and about how consciousness, I mean, the whole mystery of what is the physical basis for consciousness. And then of course, you know, the connection of that to the biological requirements of human life, uh, you know, how do you code for volition? Okay. So I wanted to expand a little bit of what I talked about with David earlier, because talking about the biologic, um, intelligence as a, uh, human intelligence as a biological function and the importance that that has.
Speaker 1 00:32:37 So I broke it down to three issues that saw as being important. Uh, three things that we have that machines don't have the first is consciousness and by things, Greg Hodgson, I mean specifically a direct connection to reality. So for example, a lot of AI that's being done right now is trained on datasets that are curated by human beings. So they'll say, okay, we want to train an AI to recognize birds. So what we do is we come up with this whole, this set of, you know, 5,000 images of birds, and we feed those images to the AI and we teach it how to recognize birds. Well, that's a very different thing from the AI, you know, opening it up, opening its eyelids and looking out and interacting with the world and observing birds. So, uh, AI, as it exists right now is something that has no direct connection to reality of its own.
Speaker 1 00:33:27 Its connection is generally mediated through human researchers who are feeding it, you know, curated collections of data. That first issue could possibly be overcome. But the second issue, which is much thornier, is volition. And I think volition is absolutely essential to abstraction, to the process of abstraction, because it means you have to be able to make choices about what kind of abstract connections you're going to make. You know, machines get stuck in feedback loops, where they start out with some initial condition and it pushes them in a certain way. And we humans, because we possess volition, are able to break out of those feedback loops and direct them in different directions when we realize it's not working properly. So how you build a machine that has volition, that's an enormous metaphysical mystery, not just a technological one.
Speaker 1 00:34:25 And then the last issue, so it's consciousness, it's volition, and the last one is motivation. You know, why would a machine do something? Well, motivation for humans is very easy. We want to stay alive. We want to enjoy the various things that we can enjoy in life. We want to survive and thrive, whereas a machine doesn't have that characteristic. So again, the creation of an artificial general intelligence, an intelligence capable of doing everything a human can do, is a philosophical and technological problem of the very first order. Going to the moon is easy by comparison. You know, going to the moon was achieved mostly using existing physics, mostly using existing mathematics, and a little bit of new technology, mostly in very primitive computers. It was a much smaller order of magnitude technological and philosophical problem than creating artificial intelligence, let alone merging human beings with it. I don't want to rule out the far future; I just want to tamp people's expectations down for the near future.
Speaker 2 00:35:47 Thanks, JAG. In response to Scott, and also to the general issue of, you know, artificial intelligence surpassing human beings in intelligence, there's an analogy that I've used in connection with anarchism, but I think it applies here. Think of an engineer who is working hard to reduce the friction in a machine, getting it down as close to zero as possible. That's a highly productive activity, with potentially great gains in the saving of energy and the efficiency of machines. But someone who's trying to invent a perpetual motion machine, one that never stops, that has no friction and no loss of energy, is a crank.
Speaker 2 00:36:50 The application to anarchism I'll state just briefly: reducing the size of government and outsourcing as many things as possible is a great idea, and highly productive, but getting rid of government altogether is, in my analogy, the perpetual motion machine, the crank. Well, the same, I think, is true of artificial intelligence. Sure, it will take over more and more functions that human beings are capable of performing, thanks to human efforts to improve the algorithms and everything else that goes into it. But the idea that it's going to actually replicate human intelligence is the crank version, not the productive version. And so, you know, every time I hear of one of these advances, like a computer program beating someone at chess or Go, or algorithms that supersede doctors' judgments in diagnosing disease, on and on and on, there have been a lot of advances, but they're all helping us, lightening our burden and increasing our income, because we can all be much more productive. The idea that any of this will actually equal and replace human intelligence is, to me, like someone who believes in perpetual motion.
Speaker 2 00:38:29 So I'll stop after that. But, um, yeah.
Speaker 3 00:38:35 I look at it more from the, you know, Randian position that it's all solvable, that every, you know, disease of aging is subject to repair under some aspect of the law of identity, or, you know, that merging with machines may take a while. But one of the things is that capitalism increases the rate of technological progress, and so we can be, you know, calling for that to get there faster.
Speaker 1 00:39:06 Well, here's where I agree with you on one thing, which is, I do think that there's a certain complacency that has set in, where we spend our time worrying about what pronouns people use. And, um, it's because we're a wealthy and advanced nation. Two hundred years ago the urgency was: how do we build houses? How do we carve farms out of virgin forests? How do we feed ourselves? How do we clothe ourselves? How do we provide ourselves with the basic necessities of life? And now it's like we've become so wealthy that we've become complacent, and we now have the luxury to argue about, well, literally these first world problems: what pronouns we use, how big the welfare state should be, and all of that.
Speaker 1 00:39:57 And I think there's sort of a drumbeat out there of people saying, well, how come we're not doing these big ambitious things? I mean, there are still people out there doing big ambitious things, but how come we culturally don't regard it as important anymore, as glamorous, as exciting, to do these big moonshot kinds of ideas? So I definitely think that's a reorientation we need to do culturally, to say, wait a minute. You know what? We are not at the end of the development of human technology and science. We're still at the beginning. We've got so much more to do, so let's get all hands on deck and let's get everybody racing as fast as possible into this science fiction future. And if it is actually at all possible (there are some physical limits), we should be trying to have, you know, warp drive.
Speaker 1 00:40:50 We should be trying to have supercomputers that can, uh, create ever greater approximations to our intelligence, or augmentations of our intelligence. We should be trying to do all these amazing things. And I think where I'm sympathetic with you is the idea that culturally, we need to reorient ourselves toward that as the real, fundamental, main issue of life. All this other stuff we spend 90% of our time arguing about, especially in politics and in the culture wars, is stupid and irrelevant compared to the question of how we race into this science fiction future.
Speaker 3 00:41:32 Absolutely. But without a higher purpose, people get out of growth mode and into redistribution mode, and into looking back to our past
Speaker 1 00:41:43 to see who's to blame for what. And this would get us back into more of a, you know, mindset of what we can gain instead of what we're trying to keep. Yeah. Now, the other thing I want to mention, by the way: we talk about these high-tech things as being the future, and we've talked about high-tech jobs being taken away. One of the things I want to throw in is a theory I've been bouncing around for a while, and I'm working on an article on it. I call it Say's law of robots. It's an application of Say's law, which is a principle from economics; Jean-Baptiste Say was the creator of it. I don't think Richard's here today, or I'd have him jump in on that, Richard Salsman. I learned about Say's law from Richard Salsman.
Speaker 1 00:42:25 So, uh, but say us law basically says, well, in the version it's usually given, it says supply creates its own demand. What it actually means to supply is demand, which is, let's say the example that I got from Richard is let's say you were on a deserted island. There are only two people on the island. One grows oranges, and one grows apples, right? And you, if they want to trade together, what does the guy who grows oranges have to trade in order to get apples while all he has to what he, if he, if he demands apples, the only supply he has to trade it to trade for it is oranges. So in other words, everything you produce consists of constitutes your demand for what, uh, for what other people produce. Now, this is sort of seems like you might be thinking, okay, why is this important?
Speaker 1 00:43:15 Well, it means there can never be any such thing as overproduction, right? The more a society produces as a whole, the wealthier everybody gets, because the more other people produce, the more they increase their productivity, the more demand they have to buy whatever it is that you produce. Now, a real-life, widely recognized application of this is something that has an awful name, but it's actually an interesting principle: it's called Baumol's cost disease. The "disease" here is supposedly costs spiraling upward, but what Baumol's cost disease actually says is that when productivity in the economy as a whole increases, pay also increases for jobs whose productivity has not increased. The usual example given is a string quartet. Say you want a string quartet to play Vivaldi for you.
Speaker 1 00:44:14 Right? Well, a string quartet has not gotten any more productive since Vivaldi wrote these things 300 year, 300 and some years ago, right? Uh, the, the basic mechanisms of a string quartet, the, uh, the ability to play, how, how many players it takes, how long it takes them, the number amount of time they have to take to learn, to play their instruments. None of that has fundamentally changed in hundreds of years. So a string quartet has not increased as productivity fundamentally since, you know, in the way that a weaving weaving a weave in cloth at us in a textile mill is vastly more productive than doing it on a handloom 300 years ago. So, so, you know, certain things have increased their productivity enormously, and other things like the string quartet have not really fundamentally increased their productivity. And yet the string quartet will benefit from the wealth that's created in the rest of the society. Because in order to hire a string quartet, you need to pay the musicians enough money that they can afford to live on. You're competing with all the other potential jobs that those, those musicians could have. So you have to pay them more. You have to bid them up, even though their, their productivity hasn't increased. You still have to pay them more because you need to give them an incentive to put in the time and effort to develop the skills of a professional musician.
Speaker 1 00:45:41 And I view this as an application of say's law, because what it basically needs, if, if the everybody else's production is the demand for your work. What that means is as everything becomes more and more automated, the vast amount of wealth Preuss by all the automation constitutes the demand for everything that is not automated. So take a string quartet to take, um, you want to go get a massage, right? Uh, now maybe we'll until you develop robots that can give you a massage, but you might like the idea of getting a massage from an actual human being. So anything that you actually wants to be done by an actual human being, anything that still has to be done by an actual human being will benefit from automatic automization, because the demand for that good or service will be fueled by all the increased production from the things that are automated. So I'd want to point out that in this high tech future, it's not just that the people who are, who own the robots are programmed, the robots are designed the robots. They're not just going to get raw wealthy. Everybody's going to get wealthy, including people who are doing things who, you know, like, like musicians or massage therapists or whatever things whose productivity has not fundamentally increased.
Speaker 3 00:46:59 Yeah. Uh, you brought up Star Trek, and, um, you know, by The Next Generation there were replicators, at which point you can 3D print anything you want. It does start to make scarcity seem obsolete. You know, who knows how long it'll take.
Speaker 1 00:47:17 Okay. So I've written about this too, and I want to just stop you there because scarcity and economic scarcity does not mean the, um, presence of something in a small quantity. It means that everything PR everything exists in a limited quantity, whatever that quantity is. So scarcity will never cease to exist in the economic sense, take a replicator. So when you think about it, it creates matter out of energy. Can you imagine the extraordinary amount of energy required to do that? So what, you know, let's say you weren't able to event the star Trek replicator. You then have the problem of, well, you need this massive, enormous quantity of energy that you're able to use. And as somebody wrote an article about it with the, with the title who minds that I lithium, meaning that somebody has gotta be working to create all the technology and all the material, the rare and unusual materials required to produce all that energy. So you just, you kick the scarcity question back to a, to a different level.
Speaker 3 00:48:16 It's for functional purposes. You know, it can, will poverty go way for fun, you know, when you can get assuming they have the energy thing solved. Um, but I think it's important because it, uh, you know, it's part of what's going on today. That's giving people the illusion that socialism is working, because for now they're able to, you know, hand things out and, and the economy seems to still be growing
Speaker 1 00:48:48 Well. That is true. You become wealthy enough that you can, you can have a lot of looting going on in the economy, uh, without, uh, uh, without necessarily noticing without the effects becoming immediately disastrous. You know, that if you, if you have a, you have a stagnant economy, instead of people starving, let's put it that way. You know, that, that that's your, your bad outcome is that you have a stagnant economy, whereas in a less advanced economy, a bad outcome of bad economic policies as people are starving. Um, but I also say that, you know, one of the things I've been writing about the basic income, one of the calculations I made is what if you had decided at the beginning of early in the industrial revolution, and there were people who had this idea, by the way, this is not a new idea. There are people who had this idea early in the industrial revolution in the early 19th century that, oh, with all this, I mean, this is part of what was behind marks that with all of this enormous new productive capacity of, of modern factories, we could have for each, according to his ability, to each, according to his need, we could have these utopian schemes of redistribution because now we have the technological means to produce this.
Speaker 1 00:49:56 So this is not a new idea by any means. It's been proposed for the early 19th century to the mid 20th century and all the way up to now, we're saying, oh, we're going to have star Trek replicators there. We're projecting it into the future. So it's, it's, it's not the technological means that have changed. And I did a cut back. You then flip calculation and say, okay, suppose you had done this at the time. When say, uh, you know, 150 years ago, you had introduced a basic income. Well, you could continue support people possibly at the standard of living that they would expect from somebody living in 1880. But, but it's the standard of living with somebody living in 1880 is cloud obviously way far below the standard of living people have people living in 2020 21. So what you're giving up, when you do that, assuming you make the system work at all, what you're giving up is all the future potential of the system, all the future things that you don't even know you want yet, because nobody's invented them and nobody will invent them if everybody stops working and the economy stops growing.
Speaker 1 00:51:06 So, you know, th th that's what I think is sort of the absurdity of this socialist schemes is you're really giving up is you're sacrificing the star Trek future that's I guess that that's the, that's a good formulation. That's the ultimate perversity does the guy who wrote a book called star Trek economics. And it's proposing precisely this idea that, oh, well, I'll have replicators scarcity will no longer exist. And so we can live in the socialist utopia. And the thing is that in promoting this idea of, we should all be able to live without working. What you're actually doing is you're sacrificing that star Trek future. Well, not that you're doing it too, because the thing is in the 24th century, let's say where it start the world of star Trek, the next generation. And by the way, you have to stop me, Jennifer, you have to stop me if I start talking about star Trek too much, because I will just do that,
Speaker 0 00:51:55 Uh, fellow fellow Trekkie here, uh, one of the earliest adopters of a whole seven of nine last place I would go ahead.
Speaker 1 00:52:06 Sorry. I was distracted there. Um, alright. So, uh, yeah, so let's project ourselves into the 24th century and into, um, you know, the, the world of star Trek, the next generation think about what they would be sacrificing if they actually now, they they've sort of in the, in the series, they say, oh, we no longer work for money. And it never makes any sense. And they never show you how that works. And nobody ever thought that through, but let's say they were actually to have that socialist system where everybody stagnates and nobody works, then you, they would be giving up whatever the amazing future will come after the 24th century technology. Uh, so, you know, there's the, every time you say, oh, we're done now with the, the basic promise premise of, of, of UBI. And if any sort of anti-worker ethnos like that is that well, we've done, you know, human, human, creativity and creation and productivity is over.
Speaker 1 00:53:02 We've done. We created everything there is to create our current standard of living is the final form that human human, that human life will take. And therefore we're going to stop now and stagnant. And that's the basic premise behind that. And, uh, think about all the things that you're giving us. Now, first of all, I don't think you can actually stagnate. I think when you start to stagnate, you begin to decline because people aren't learning what they need even to maintain the system, you know, very calm, the more complex the system, the more work it takes to maintain it. Um, but the big thing is you're giving up all the amazing things you could be doing beyond that. So I think we will, you know, maybe, you know, a thousand years, 3000 years from now we'll reach the limit, but, uh, uh, but you know, we're not anywhere close to that. We're more at the beginning of the growth of technology and wealth, uh, then at the end of it.
Speaker 0 00:53:57 Well, on that very, I think hopeful note, uh, we, we've got a couple of minutes left, so I want to thank you, Rob. Thanks David. Thanks Scott. Thanks everybody for joining us today. Um, I hope you will join me tomorrow. Uh, I'm going to have a live interview with Eric Prince. He's the founder of Blackwater and an iron Rand fan. So I have been really looking forward to that. And then we'll be back on clubhouse, same time, same bat channel with, uh, our senior scholar professor Jason Hill. It's going to be an ask me anything. So, um, bring all your questions about objectivism, about robots, about politics, foreign policy. What have you, and we will supply some of our own from our 60,000 followers on, uh, on Instagram. So, um, thanks everybody for making this a fun conversation and I'll see you tomorrow or Thursday.