Superintelligence
Part 3: Deutsch's Principle
So we know that brains are physical and that brains allow abstract minds to think. This must be the case, following Deutsch's discovery of the universality of quantum computation. But that something is physically possible does not make it imminent. What could predict its imminence? Here is where Bostrom seems to have severed ties with good philosophy: Moore's Law is not a ladder to human-like intelligence. Simply doubling speeds, multiplying transistors and increasing memory is not the same as thinking. Thinking, for all we know, is not a function of any of these things. We know we do it. We don't know how we do it. Absent an explanation of how thinking works in humans, we cannot write a program which represents that process. And without that - we cannot program a computer to think. If this seems obvious - it is because it is. If it is easy to overlook - it is because it is. And not only is it easy to overlook: pointing out to some people that the missing piece is not engineering but a philosophy (of thinking and learning) seems to do absolutely nothing to inform the present debate. Those who have read Bostrom happily skip over this objection to say "No - it's just around the corner. You don't understand. We have programs that can learn." But those people do not know what learning actually is. For a machine to learn - actually learn, in the way humans do - we need a philosophy of knowledge creation. How is knowledge created, both at the level of the individual and at the level of society? There is only one way. And it is notable chiefly for being almost ubiquitously ignored by those actually engaged in the question of what knowledge is and how it is constructed. Teachers, philosophers (even of epistemology), AI programmers - people whose business it is to create or impart knowledge - are typically not familiar with the deepest explanation we have of how knowledge is created.
And this is something that is absolutely essential if we want to program something that can create knowledge itself just as we can (that is to say: solve problems).
I must dwell on this point because of its deep significance. This is no mere speed bump on the road to AGI - this is the veritable philosophical wall yet to be surmounted. David Deutsch has a crucial principle, from "The Beginning of Infinity", that unites epistemology and computer science in a fundamental way and undermines the entire "imminence" argument. That principle is:
If you can't program it, you haven't understood it
That is a profound, and exciting, claim. In just 9 words it captures what is wrong with thinking that ever faster and better hardware is the key to creating just the right software. Because that is what we are talking about when it comes to thinking: the abstract rules - the algorithm(s) - needed for a thing to be able to think. We humans can program computers to do all sorts of amazing things: we can simulate galaxies of stars colliding (because we understand many-body gravitational interactions), we can program computers to model the structure of buildings or how populations of rabbits might grow. We understand those things (first), then construct an algorithm, and then code that algorithm as a program. In that order. We can even program computers to simulate (very narrowly) aspects of evolution by natural selection, because we understand some of what this is about. It is important to note here that some people are unaccountably impressed by evolutionary programs. For example: a program is written to simulate a pair of legs that cannot walk; after many iterations, each randomly generating new ways for the legs to move, the legs gradually improve their walking. Of course the legs don't really evolve as such. They very narrowly improve in some limited domain. We understand that much about the algorithm governing natural selection. This is good work - but it is not quite the same as programming an artificial DNA molecule with the reach to manufacture, from non-living materials, lifeforms from bacteria to humans to dinosaurs. We don't understand everything about evolution - so we cannot program computers to create genuinely new simulated "living" things.
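The kind of evolutionary program described above is easy to sketch, which is rather the point: everything it can "discover" is fixed in advance by the programmer. The toy hill-climber below is my own illustration - the leg parameters and the fitness function are invented stand-ins, not any real simulator:

```python
import random

def fitness(legs):
    # A stand-in for "how far the simulated legs walk" - a fixed measure
    # the programmer already understands, peaking at stride=0.7, lift=0.3.
    stride, lift = legs
    return -((stride - 0.7) ** 2 + (lift - 0.3) ** 2)

def evolve(generations=500, seed=0):
    rng = random.Random(seed)
    legs = (rng.random(), rng.random())  # random initial "legs"
    best = fitness(legs)
    for _ in range(generations):
        # Variation: random mutation of the parameters, no insight involved.
        mutant = tuple(p + rng.gauss(0, 0.05) for p in legs)
        score = fitness(mutant)
        # Selection: keep the mutant only if it "walks" further.
        if score > best:
            legs, best = mutant, score
    return legs, best

legs, best = evolve()
```

The program improves only along the single axis the programmer already built in ("walk further"); it cannot step outside the search space it was given, let alone invent a new one.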
Returning for a moment to my parable of flight in 300BC: imagine there were already people among us (Philosophers of Flight) who understood quite a bit about flight while the builders toiled away at tower construction. Imagine there was the equivalent of a theory of what flight entailed. Some upstart actually dared to suggest that what all flying things hitherto discovered seemed to possess was wings. Flapping seemed important but not essential. And anyone who tried flying without some kind of wing was doomed to fail.
"No" said the naysayers "Wings are not that important. Look at seed pods from plants: they are carried by the wind with no wings! Sometimes a bird tucks its wings away and still flies when you look closely. And even if wings were useful sometimes, it can't be the key! Ostriches are huge birds with wings, penguins do too, and chickens don't do well either - and they don't fly at all! So - we can rule out wings."
But the Philosophers of Flight thought wings were at least part of the solution: "Wings are important. We don't know all the details - but there is some combination of wings coupled with lift (something to do with the weight of the beast and the power of those wings) that creates flight. Wings are the key to progress here! We need to look more into the philosophy of how wings work. Perhaps, especially, how some birds spread their wings without flapping and yet still climb. There must be something more to learn there."
"So you do it!" comes the retort. "Build a wing that will help a person fly!" But Philosophers of Flight are not engineers. And they have other interests. And so no one wants to help. Or to see the way through. All the Philosophers have is the problem - not a complete engineering solution. Importantly they seem to have spotted a flaw with purported solutions (height=flight) and so have found that far from making progress in that direction, it is wasted energy. But they lack a detailed enough plan and need help with creative engineering. They know they have made crucial progress. It's not a plan detailed enough to construct an aircraft: but progress all the same - although no one outside their circle will admit it. And the building - and debates over the towers - just continue.
This is precisely where we are in the AGI debate. An AGI is not just around the corner, and we know that because everyone working on AI, or talking about it, seems to have the wrong philosophy - a philosophy that is actually preventing them from solving the problems that need to be addressed. We know all this because we know that we do not know how 'intelligence' (creative problem solving) within us works - how knowledge creation, problem solving, that whole suite of characteristics we share with other people, arises in a human brain. But, again, we know something. A program has never been created that learns something its programmer did not already know. We are not close to that. We don't know how far we are from it, because we don't even know what it would take to begin. Knowledge creation - problem solving - requires two things: creativity and criticism.
In order to solve an as yet unsolved problem we need to conjecture new solutions (an act of creation). And then we need to criticise those solutions. This may (sometimes) involve testing through experiment. Other times the criticism comes from 'testing' against some other real-world feature (a deeper theory, say). This is how knowledge creation works. This is the only way knowledge creation works.
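The conjecture-and-criticism structure just described is itself trivial to write down; what no one knows how to write is a genuinely creative conjecture step. A minimal sketch (the names are mine, and the conjecture below is blind random guessing precisely because we cannot do better):

```python
import random
from typing import Callable, Iterable, Optional

def knowledge_creation(
    conjecture: Callable[[], int],
    criticisms: Iterable[Callable[[int], bool]],
    attempts: int = 1000,
) -> Optional[int]:
    """Propose candidate solutions and discard any that fail a criticism."""
    tests = list(criticisms)
    for _ in range(attempts):
        candidate = conjecture()  # the 'creative' step - here, mere guessing
        if all(test(candidate) for test in tests):  # criticism weeds out errors
            return candidate
    return None  # every conjecture was refuted

# Demo on a toy problem: find a number over 50 that is divisible by 7.
rng = random.Random(1)
answer = knowledge_creation(
    conjecture=lambda: rng.randint(1, 100),
    criticisms=[lambda n: n % 7 == 0, lambda n: n > 50],
)
```

The loop works only because the problem and its criticisms were handed to it ready-made; nothing here conjectures a new explanation, which is exactly the missing piece the text is pointing at.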
So why not just program a computer with that? We can't - because we do not know how. Creating new solutions to problems is a creative act, and we cannot express that creative act as an algorithm. If we could - we would. We would program a computer to be creative, and then it would create knowledge - it would learn. But no computer has ever created a new explanation. And there is the nub of it. Not only has no computer ever done such a thing, none has even come close to creating explanatory knowledge. And it is the creation of explanatory knowledge that is the hallmark of human intelligence. Nothing else. Humans are universal knowledge creators. They need not remain unique in this forever. But understanding why they are unique, at the moment, is key to programming a computer with human-like intelligence.
Part 4