Superintelligence
(Part 2: seductive philosophers)
My intention here is not to be merely antagonistic or simply contrarian. I have another purpose. I believe AGI is eminently possible - indeed I know it is, for reasons I will come to. But - and here are the two crucial ways I differ from Bostrom and those who follow him - it is neither a bad thing (in the sense of being an existential threat) nor is it "close" or "around the technological corner". Surprisingly, these opinions do not come to me from any scientific theory currently in existence, nor from shoehorning Moore's Law into a polemic. These disputes are purely philosophical. Which is to say: these are matters of rationality where I think the arbiter of the dispute is reason. Merely pointing out Bostrom's philosophical mistakes is tantamount to heresy in some circles. To even articulate the alternative point of view - that AGI is a necessary good which is nonetheless elusive because of bad philosophy - is to push back against what is now an entrenched orthodoxy that has swept Silicon Valley and institutions such as The Singularity University (yes, there is such a place, and yes, they take themselves very seriously).
Bostrom is one of the favorite philosophers of technologists with much money (Elon Musk, Ray Kurzweil and others). He is, in other words, an uber geek's kind of nerd. That is no insult: he is an accomplished philosopher and original thinker loved by creative pioneers. If only we had more analytical Western philosophers with at least some input into the direction business and power take. Bostrom's book "Superintelligence", which I will be criticising, is important. It is a must-read for anyone interested in these matters - not because it is right (I will argue here that it is almost entirely wrong - root and branch) but because it motivates so much of the talk amongst otherwise sober technologists, philosophers and scientists. It is, in brief, the working philosophical manual for the pessimistic attitude that pervades talk about artificial general intelligence - an attitude that has hamstrung the whole field, partly by sending it down blind alleys but more importantly by fixating it on false philosophies, strange doomsday scenarios and weird singularitarian eschatologies. Importantly, he is, so it seems, the philosopher that philosopher Sam Harris defers to on this matter. And that is no small thing: Sam has a strong following of smart people, and on almost all other topics he has developed his own unique and critical attitude. Almost every time Sam has put finger to key, pen to paper or opened his mouth publicly, he has articulated an eloquently rational critique of nonsense.
But philosophers can be alluring - even to other philosophers. A man of wide reading - like Sam Harris - is rarely, I would guess, surprised these days by what he reads. That is to say: intellectual thrill can become elusive the more one learns. One might reach a point where, rather than "you learn something every day", the interval between truly novel, challenging concepts becomes weekly...then monthly. One hungers for the thrill of first experiencing a good philosophy lecture (if you've ever been to one), or of first encountering some of the more esoteric truths of quantum theory or relativity, where blow after blow your preconceptions are demolished. Or perhaps the first time you watched one of the better TED talks. That feeling of intellectual vertigo, where the foundations seem to fall out from under your mental feet, can be so disorienting - and exciting. Someone like Sam might become so comfortable in the answers they possess to the deepest questions that, when they encounter something truly new for which they lack clear answers, they can feel that any answer given in the vocabulary most familiar to them just might be correct. My hunch is that this is how Sam has felt in the presence of arguments like those found in Bostrom's book, and more broadly on the topic of AGI. For the first time in some time, Sam was at a loss to find holes. And that can be a passionately thrilling intellectual feeling. For Bostrom provides just a large enough answer-to-question ratio to keep even a great intellectual spellbound. At least for a time. I want to break that spell. I truly do not think Sam has the philosophical or mathematical facts in hand that he needs.
Nick Bostrom articulates only one point of view that can be constructed around AGI. Others have been here first - and in a far more sober way - and Bostrom takes no account of their prior work, which, had he consulted it, would have saved him considerable time in following bad premises to false conclusions. Surprisingly (for a philosopher), Bostrom's book is not grounded in solid philosophical principles - and where philosophy does make an appearance, it is to ignore deep, fundamental truths of epistemology that are as well established as the laws of thermodynamics (if not as well known). What is perhaps most striking about "Superintelligence" is that it reads like a dystopian fairytale peppered with just the right amount of the latest jargon from physics, genetics and mathematics to give it the flavour of popular science, if not the substance. Yet it is not science. It is not philosophy. It is a work of science fiction (only lacking a coherent narrative). So what is so wrong with it and, by extension, with the whole movement that sees imminence and danger lurking around the corner with AGI?
Let me first concede that Bostrom begins his book with hedges - and caveat after caveat fills page after page. And yet one of the more frustrating things about the book (and there are many frustrating things) is that the hedging later becomes interleaved with the most egregious claims of certain or near-certain truth. It is clear Bostrom does not take epistemological fallibilism seriously - but this is common. He seems to be a justificationist and a foundationalist (he believes final truths can be discovered - or proved). This is significant: it colours how he thinks machines will think, because he mistakenly believes this is how people do think. So while we read "I might be wrong about everything" early on, we find some pages later "it is unavoidable that..." or other words to that effect. And he is rarely speaking about some law of nature, but rather just his opinions about machine psychology. That the first chapter of the book includes the overarching hedge that the entire thesis could be fundamentally flawed is no obstacle whatever to Bostrom claiming that some very dubious claims are "probably" or "certainly" true, or that there is "little doubt" about, for example, the motivations of an AGI. Strong modal language is a feature of anti-fallibilist thinking and indicative of a deep philosophical error: Bayesian reasoning. Indeed Bostrom makes explicit mention of Bayesian reasoning and assumes a priori that this is precisely how intelligent machines of the future will reason. Bayesian reasoning is important to understand, and it is worth observing that Bostrom never includes a good explanation of it; I will explain it later. For now, let us keep in view the objective: undermining the central pessimistic assumptions (imminence and danger) in "Superintelligence", and understanding why so many are convinced he is correct.
For those who follow Bostrom (like Harris), one might assume an empirical discovery of the apocalypse had been excavated, fossil-like, from the Earth itself - rather than merely imagined. So, Bostrom has an out - right there in chapter one: "I might be wrong," he claims. That is, of course, a statement of quite reasonable fallibilist philosophy. Unfortunately, what follows has been taken by many to be derivations from some sort of physical law, not mere prophecies of a learned man. Thankfully there are philosophies to help us grapple with these questions - and it is against those more fundamental truths about reality that we can judge Bostrom's thesis. That will be my project here.
Here is what we know about computers: they get better at what they do given improvements in two basic abstract things - memory and speed - which can be instantiated in different physical substrates (solid-state devices with more transistors, silicon processors with higher clock speeds, or more neurons, faster switching times, and so on). Whatever the physical substrate, increases in memory and speed have so far come in lock step with increases in the number of transistors per unit area and in processor clock speed. All of this is described by what is really a catch-all term, "Moore's Law" - an empirical observation (not an actual law of nature derived from anything more fundamental) that approximately every 18 months computer "power" (by some measure) doubles. This trend has held for decades and is predicted to continue until some physical limit is reached. If we do not have quantum computers that avoid the decoherence problem, that physical limit will be reached when transistors become too small to prevent electrons leaking from one part of a circuit to another. If we do have quantum computers, the limit will be switching speeds at the speed of light.
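To make the arithmetic of that observation concrete, here is a minimal sketch in Python. The baseline of one "unit" of computing power and the time horizons are my own illustrative assumptions, not measurements from any source; the sketch only shows what "doubling every 18 months" amounts to if extrapolated.

```python
# Toy extrapolation of the "doubling every 18 months" observation described above.
# The baseline and horizons are illustrative assumptions, not data.

def projected_power(initial_power: float, years: float,
                    doubling_period_years: float = 1.5) -> float:
    """Extrapolate a doubling trend: initial * 2^(years / doubling period)."""
    return initial_power * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (3, 9, 15, 30):
        print(f"after {years:>2} years: ~{projected_power(1.0, years):,.0f}x")
    # Thirty years of such doublings is 2^20, roughly a million-fold - which is
    # why the extrapolation is so seductive, and why it matters that it is an
    # observed trend rather than a law of nature.
```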
More important than all of this is a deeper truth about reality: the universality of computation. Whatever a physical process does, a program must - somehow - be able to simulate it. David Deutsch's extension of this idea brought computer science into physics: a universal quantum computer must be able to simulate any physical process. Our brain is performing a physical process, therefore whatever human intelligence is, it must be reproducible in a computer. That constitutes a law of physics. So it must be possible for a computer to think, because we are physical things and we think. If you do not think this is true, then you think there is something non-physical that our thinking depends upon (or perhaps also depends upon). That essentially amounts to belief in the supernatural.
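As a trivial illustration of that claim - and it is only an illustration I am supplying, not anything from Bostrom or Deutsch - here is one humble physical process, an object falling under gravity with drag, rendered as a short program that computes its evolution step by step. The numbers are made up; the point is only that a physical process becomes something a computer can run.

```python
# Toy simulation of a physical process: an object falling under gravity with
# simple linear air drag, stepped forward in time with Euler integration.
# All parameter values are illustrative assumptions.

def simulate_fall(height_m: float = 100.0, dt: float = 0.001,
                  g: float = 9.81, drag: float = 0.1) -> float:
    """Return the approximate time (seconds) for the object to reach the ground."""
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        a = g - drag * v      # net downward acceleration (gravity minus drag)
        v += a * dt           # update velocity
        y -= v * dt           # update height
        t += dt
    return t

if __name__ == "__main__":
    print(f"hits the ground after ~{simulate_fall():.2f} s")
```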
Part 3