Cowen: Mainly what they have done is tricked people. The Apollo program was a big trick. It was not intended as a trick. I’m pretty sure almost everyone behind it was quite sincere that it would lead to whatever. It was vague all along, but everyone was truly excited back then. I even remember those times, but it didn’t lead to what we were promised at all.
And you see that when you compare science fiction over time. So I think the norm is that new technology comes and people are tricked. Again, it doesn’t have to be a sinister, devious, conspiracy-laden thing, but in fact, they’re tricked. And then it happens anyway. And then we clean up the mess, deal with it, and move on to the next set of problems.
And that’s what I think it will be with AI as well.
Murphy: What is the trick with AI?
Cowen: It’s the old paradox of adding grains of sugar to your coffee. Every extra grain is fine, or it may even taste better, but at some point, you’ve just added too many grains. That’s the way it is with change. People use ChatGPT. It diagnoses your dog. Do I need to take the dog to the vet? What’s with this rash?
You take the photo… You get a great answer. Everyone’s happy. But they’re not actually going to be happy with all the changes that will bring. And here I’m talking about positive ones. I’m not saying, oh, it’s going to kill us all. People just don’t like change that much. So they’ll be sold on the immediate, concrete things, and they’ll end up seeing changes that feel like too much because it will devalue their human capital. And we’ll adjust, get over it, and move on to the next set of tricks. That’s my forecast.
Murphy: People don’t like change, but also people are bad at long-term planning. Yeah. You’ve spoken before about how faith is a key requirement for being able to plan over the long term. How do you bring that idea to policymakers?
Cowen: I don’t know. I think things will get pushed through for myopic reasons, like “we must outpace China,” which might even be true, to be clear, but it’s a somewhat myopic reason, and that will be the selling point. You know, I’ve read a lot of texts from the early days of the Industrial Revolution. Adam Smith is one of them, but there are many others, and a lot of people are for what’s going on. They understand they will be richer, maybe healthier.
They do see the downsides, but they have a pretty decent perspective. What no one from then understood is that you’d have this second-order fossil fuel revolution, say in the 1880s, where things just explode and the world becomes very much different. And whether they would have liked that, you can debate, but they just didn’t see it at all.
We’re probably in a somewhat analogous position. I would say the Second Industrial Revolution was the more important one. It was a very good thing, even though climate change is a big problem, and it really built the modern world. And with something like AI, or any advance, there’s probably some second-order version of it coming in our equivalent of 1880 that we just don’t see, and it will be wonderful for us.
But if you told us, we’d be terrified. So how should you feel about myopia? I think as an intellectual, you should be willing to talk about it openly and honestly. But at the end of the day, I think myopia still will rule. And I’m not in a big panic about that.