The above image links to the original video, or just click here. As always, a wonderfully insightful talk. This is not a summary of the talk but rather my two favourite quotes (passages?). The talk contains important material on what an AGI must be and why we must not treat AGIs differently from any other "race" of people. There is a wonderful part about how the feared, existentially deadly "black balls" in urns can be turned into white balls - given the existence of people who create the requisite knowledge. The black balls are supposed to be problems we cannot solve in time - as though our inventions were random things we pluck from the unknown and could not learn more about over time, let alone in time. I leave those matters aside for now.
Emphasis (in bold and/or underlined) is my own.
Quote 1 (From 13 mins 30 secs)
“Once we have a universal constructor, all construction, all repetitive labour, will be replaced by writing computer programs to control the universal constructor. And wealth will consist of our library of programs. The universal constructor can be programmed to self-reproduce, so once you have one you soon have 2^n of them, and it can be programmed to perform self-maintenance too, all from scratch: starting with mining the raw materials – perhaps from the asteroid belt using solar energy or whatever. The program may be hard to write, but once it’s written, and if you own the rights to those asteroids, you can sit back and watch your 2^n Teslas roll in, with zero additional effort. And no, we are not going to have a universal constructor apocalypse and be converted to grey goo. A universal constructor is just an appliance – it can’t think – it doesn’t know that its present job is to make 2^n Teslas and it doesn’t want anything. Unless, of course, you put an AGI program into it. Then it does indeed become potentially dangerous without limit. But that’s for the same reason that you are. Each of you is precisely one of those universal constructors endowed with an AGI program. Or should I say GI? It makes no difference.”
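The "2^n" in the quote is just the arithmetic of doubling: if each constructor copies itself once per replication cycle, the population doubles each cycle, so a single constructor becomes 2^n of them after n cycles. A minimal sketch (the function name and cycle model are my own illustration, not anything from the talk):

```python
def constructors_after(n_cycles: int, initial: int = 1) -> int:
    """Population size after n doubling cycles, starting from `initial`
    constructors. Each cycle, every constructor builds one copy of itself,
    so the population doubles: initial * 2**n_cycles."""
    return initial * 2 ** n_cycles

# One constructor, thirty replication cycles: over a billion constructors.
for n in (1, 10, 30):
    print(n, constructors_after(n))
```

The striking point is how fast this grows: thirty cycles already takes one constructor past a billion, which is why "once you have one" the marginal cost of the rest is essentially zero.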
Quote 2 (from 17 mins exactly; this comes after a description of two kinds of dangers most people are concerned about - known dangers such as deadly viruses without cures, for example).
“The third category of dangers are the ones to which most efforts should be devoted, and yet they are the ones that are currently least feared, because they are ones that are not yet known. Like in 1900 no one knew that smoking was dangerous. By the time the knowledge…that it was dangerous had been created decades later, cigarettes had killed hundreds of millions of people. Again – if that had been an existential danger, whom could we sue? So: how can we create the knowledge to protect ourselves from existential or near-existential dangers that we do not know? How to address the risk that by the time we do know, we won’t have time enough to create the requisite knowledge? The answer is: by creating general-purpose knowledge – deep and fundamental knowledge – as fast as possible. The more we know of the world, the faster we can create new knowledge about novel aspects of it that turn up and become urgent. This is important – I don’t think it’s widely appreciated – the survival of our species depends absolutely on progress in fundamental research in science and on the speed at which we make progress there. And here the key thing in the medium term is understanding the theory of universal constructors, so that we shall know in principle – in theory – how to program them to produce, say, a billion spaceships in a hurry, customised to deflect an approaching shard of neutronium*, or 10 billion doses of a new vaccine in a hurry against a sudden and deadly disease. So that’s how we deal with the third category, the unknown: by rapid progress of every kind, especially the fundamental. The fourth category is at once even more dangerous and yet, in a sense, less worrisome, because we already have the knowledge – at least the theoretical knowledge – to deal with it. This fourth category is not the unknown but the unknowable.
It’s a bit paradoxical that the unknowable is less dangerous than the merely unknown, but that’s because the only thing that is unknowable is the content of explanatory knowledge that hasn’t been created yet, and so the only truly dangerous things in that sense in the universe are entities that create explanatory knowledge – us – people – AGIs too are people. Now, the knowledge of how to prevent people from being dangerous is very counterintuitive – it took our species many millennia to create it – but now we do have that knowledge. The only way to prevent people from being dangerous is to make them free. Specifically, it is the knowledge of liberal values, individual rights, the open society, the Enlightenment and so on. In such societies, the overwhelming majority of people – regardless of their hardware characteristics – are decent. Perhaps there will always be individuals who aren’t: enemies of civilisation, people who take it into their head to program a universal constructor to convert everything in sight into paperclips, and they may devote their creativity to doing that – but the great majority of the population of such a society will devote some of their creativity to thwarting that, and they will win, provided they keep creating knowledge fast in order to stay ahead of the bad guys."
*Note: neutronium is the kind of matter of which neutron stars are made. It is composed entirely of neutrons and is thus exceedingly dense; a shard of it would, for example, destroy a planet made of normal matter through severe gravitational effects (by, say, converting the planet itself into neutronium on contact).