I began the day catching up on Sam Harris' "Waking Up" podcast. He interviewed Garry Kasparov, who spoke clearly about the threat of Russian President/Dictator Vladimir Putin. Socialism, Kasparov observed at one point, cannot value the individual in the way capitalism does. Whereas free, open, capitalist societies see any individual human death as a terrible tragedy, socialism sees the "value of life" quite differently: sometimes individuals must be sacrificed. The death of innocent individuals is not necessarily a tragedy. This huge asymmetry between the values of the two systems when it comes to life is absolutely crucial to keep in view when discussing the apparent merits of socialism. So mainstream has the fawning over socialism become that ABC reporters/comedians in Australia are now sneeringly admonishing politicians because they are not socialists (see, for example, Tom Gleeson's interview on "The Weekly" with Senator Cory Bernardi). The supremely high value capitalism places upon human life (compared to socialism, which places many other things, like "preserving the system as it is", higher than any individual) is a consequence, I would argue, of the creative output of producers: creativity - making things better; progress - is valued highly, and because the next improvement can come from anywhere, all human life is especially sacred. But socialism is the idea that there is a utopia we can enact: a system where problems (like inequity) can be once-and-for-all eliminated. And this might require the elimination of some individuals. Not so with open, free, capitalist societies, which must recognise the inevitability of problems and, therefore, the possibility of their creative solutions. But I digress.
What was also wonderful about Sam's interview was Garry explaining how, in their first match, Deep Blue was beaten by a human (him). Then, in the rematch, the computer won. They did not play a third match. They should have. Jaron Lanier, in his first book "You Are Not a Gadget", speaks in a similar way to Garry when assessing that loss: it was largely about psychology. Garry got spooked by the computer and the environment. The poker element of chess undid the human player because there was no face on the computer. Eventually, of course, the computers all got so much better that they could beat a human player more often than not. But what is often left out of these discussions of apparent machine "superiority" in this narrow area is: a human player with a computer can always beat a computer with no human. Garry spoke of this with such confidence, almost as if it were a law of nature. So the combination of raw computing power (and a fixed algorithm) is always worse than raw computing power AND the creativity of the human mind. People will always beat machines - just so long as people get to use machines too. And why shouldn't they? The whole point of machines is to serve people. They're just dumb machines. We owe nothing to our microwaves, iPhones or AlphaGos. Given some of Sam's remarks, I'm still not sure he quite understands what the point of "universality" is when it comes to the ability of humans to be creative and explain anything. Sam thinks that you can just keep adding module after module after module to an AI so that, for example, it's the best chess-playing machine, then the best Go player, then the best car driver, then the best manufacturer of Coca-Cola, and you keep doing this for every automated task you can think of and - well - eventually, once you've done this for "everything", you have an "intelligence" that, by definition, exceeds us at every task.
No. That's false. It can't possibly exceed us at every task because you cannot describe "every task" the way you can describe the rules of chess. You can't program in: tasks not yet conceived. You can't program in the ability to use imagination to construct creative explanations. Or if you can: you don't need to program in all those other things! You only need the one thing (the program for creating explanations - i.e. a program for actual universal learning), in which case you have a person. A genuine AGI. And at that point you're not allowed to go forcing it to play chess or drive your car or anything else, because people have human rights. Right?!
Hours after this I began to read "The Undoing Project" by Michael Lewis, on a completely unrelated topic (eventually it turns out to be about how two Israeli psychologists did some Nobel Prize-winning work in psychology, for which they were awarded the economics prize). I'll read anything by Michael Lewis: he's a great writer. You can read my page here about his book "The Big Short". This new book of Lewis' begins with a parable about how the top national basketball teams in the USA choose (buy!) their players. Traditionally it was simply "expert opinion" that was used to help teams choose. People with lots of experience (called "scouts") would choose from the list of available players (the draft). This method was riddled with error (it was basically just guessing, although sometimes things like the number of points scored in college were taken into account). But then people started to make "mathematical models" where lots of factors, like "time on court" and "number of points scored", were weighted. In a basketball match sometimes players don't play the whole game. Or even factors like: how high could the player jump, and how fast were they over 100m? Things like that. Some factors might have been given a 0.5 weighting, others 2x, or perhaps a square or some other exponent - who knows how? But whatever the case, those who relied on models and not just expert opinion started to beat out the experts. Then everyone started to get models, and awful mistakes were found with the models (so some team would invest in a new player based on a model alone and the player would turn out to be a terrible choice). Long story short: the solution was to use both - the creative theory formation of experts, augmented with lots of data and a model. This way the expert and the model could refute each other. Now Lewis doesn't say this - but that, it seemed to me, was clearly what was now going on.
Where the expert and the data/model agreed, the choice was better than where they disagreed.
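Lewis doesn't give the actual factors or weightings teams used, so as a sketch only: here is the kind of weighted scoring model described above, with the expert and the model acting as checks on each other. All the player names, statistics and weights below are invented for illustration.

```python
# Hypothetical sketch of an "expert + model" draft process.
# Factors, weights and players are all made up for illustration.

def model_score(player, weights):
    """Weighted sum of a player's measured factors."""
    return sum(weights[factor] * player[factor] for factor in weights)

def rank_players(players, weights):
    """Rank candidates from best to worst by model score."""
    return sorted(players, key=lambda p: model_score(p, weights), reverse=True)

def draft_pick(players, weights, expert_choice):
    """Return (pick, agreed). Take the player only when the model's top
    pick and the expert agree; otherwise flag the disagreement so both
    sides can scrutinise (and possibly refute) each other."""
    model_choice = rank_players(players, weights)[0]["name"]
    if model_choice == expert_choice:
        return model_choice, True
    return (model_choice, expert_choice), False

players = [
    {"name": "A", "points_per_minute": 0.6, "vertical_jump_cm": 80},
    {"name": "B", "points_per_minute": 0.5, "vertical_jump_cm": 95},
]
# Invented weights: scoring rate counts for much more than athleticism.
weights = {"points_per_minute": 2.0, "vertical_jump_cm": 0.01}

pick, agreed = draft_pick(players, weights, expert_choice="A")
```

Here the model's top pick and the (hypothetical) expert both land on player A, so the pick goes through; had they disagreed, the disagreement itself is the useful output - a signal that either the model or the expert is missing something.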
So today, in the space of hours, in two quite divergent situations, I encountered something really important that I've known for some time but which now seems to be entering the cultural "intellectual" vernacular: human creativity adds something qualitatively different. Because it is something qualitatively different. Pure data, or "mindless" computing (which is what all computers, except us, are), will always be worse than a person (with some relevant knowledge) equipped with a good predictive theory based on a good explanation and a powerful computer to crunch out numbers fast.
But the lesson: human creativity is qualitatively different to all other kinds of computation we know about in the universe. Simply adding ever more data, processing speed or memory to a computer cannot possibly get you AGI. For that you really do need a jump to universality, as David Deutsch explains in The Beginning of Infinity. And it will be a jump. It won't happen gradually (even as AI gradually becomes more and more useful in more and more places); it will come abruptly. Just as those chess champions were suddenly able to beat the best computers, and those scouts to beat the mathematical models... once they too had computers.