After listening to the first half of the “Donald Hoffman with The Harrises” podcast I wrote down a bunch of thoughts here. I found little to disagree with, because so much sounded like something between mainstream ancient philosophy (we don't have direct access to ontological reality) and the work of David Deutsch in "The Fabric of Reality". So in the second half I was looking forward to hearing something controversial. It did not disappoint. What I want to say at the outset is that although I ultimately disagree in places on points of epistemology and science – and even on whether Donald's theory constitutes science at all – I think his overall approach is interesting and there’s no easy way to refute what he is saying. But this is just to say: almost any theory of consciousness is not easy to refute. Panpsychism is not easy to refute. As we have no theory of consciousness, nor even a good working definition of what it is, all approaches are more or less welcome at this point. So I want to say upfront: I think this is useful work. It’s worth trying. Almost anything is worth trying. That said, let’s get into explaining what he is saying and what criticisms one might conjure in response. Also: I actually read a couple of Donald’s papers to supplement his podcast. At times he seemed to deviate (especially under questioning from Annaka) from what he has said in published material. I’m going to stick with his spoken words rather than try to argue with “the two Donalds” as I see them (spoken and written Donald). For what it's worth I read http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf and also link.springer.com/content/pdf/10.1007%2FBF03379572.pdf and may comment on those in more detail at some other time.
Donald wants to find “a mathematically precise theory of consciousness” – and by this he means something like looking at all the ways consciousnesses interact (note the plural there: we are conscious of course, but it’s not clear “how far down the phylogenetic tree” Donald sees consciousness as running. At times he even speaks as though he agrees with panpsychism, and Sam basically seemed to put him in the panpsychist box at times – and I tend to agree) – but whatever the case he wants a way of mathematizing the relationships between individual instances of consciousness. So he talked about, for example, big data and how we might use that to analyse how humans interact online and everywhere else. If he could analyse these interactions or model them in some way then he could have some sort of mathematical description of what human consciousnesses are doing. This was an interesting approach to my mind because he was contrasting it with the “usual” way of doing things which, as he explained, is basically to look “in” matter for consciousness. So most people begin with neurons and try to work their way to consciousness. They start with the physical and try to get to the mental. Donald said he was trying to take the opposite approach. He’s doing this, he says, because he’s motivated to find a “dynamics” of consciousness. But for this he needs a mathematically precise definition of consciousness. He says he has such a precise definition. Now I should say: definitions are one thing – but people have tried defining something like “the electron” again and again and it’s no easy task. And, as Popper argued, definitions are possibly misleading and always fallible anyway. What we need are explanations. So right away Donald is sort of in an instrumentalist camp. We will return to this. Whatever his "mathematically precise" definition is, we never actually hear what the rough-in-English version of it is.
Whatever the case – at the 1 hour and 14 minute point in the podcast I hear the first remarks I can really begin to raise my eyebrows at – although not disagree with out of hand. What he claims is that his notional theory of consciousness – his “conjecture” one might say – is that consciousness is the most fundamental thing in the universe. So it is more fundamental than quantum field theory and general relativity and evolution by natural selection and indeed basically everything else we know from science. So his theory, if correct, must be able to generate quantum theory, general relativity and evolution by natural selection and everything else. And indeed he'd be right that the latter follows from a theory purporting to be the most fundamental of all. He says at one point “All of these scientific theories are wonderful tools but they have only been the science of our interface, not science of objective reality.” On this view even spacetime is part of our “user interface” that he defined in the previous hour.
Now here is where I may part company with him. What is the role of science then? And explanatory knowledge generally? If we are trapped inside this user interface – a sort of Platonic Cave – then science becomes about us - a parochial arm of psychology. It's about what we are doing and what our brains (or minds – I'm not clear which) are doing. But I'm not sure of his thoughts on the relationship between brain and mind (see below). Even general relativity, on this view, is mainly about us – and not a separate spacetime. It seems on Hoffman’s view spacetime is not “out there” but rather “in us” in some way - as the manifestation of an interface with objective reality, or an interface between conscious creatures, or something. Now I am of the view: general relativity is a theory of stuff out there – it describes a really existing spacetime. Now could it be the case that general relativity gets refuted? Not only do I think this could be the case, I think it must be the case. In some ultimate sense it has to be false because (1) it disagrees with quantum theory so there must be a deeper theory than both and (2) as a matter of principle all theories are improvable – we cannot have a final theory. If general relativity is the final word we must ask: WHY? And if we can find no answer to that question, progress stops and we are doomed, because some problems become insoluble – not because a physical law says so, but because our capacity to understand the universe is bounded by a brain that is unable to grok something about physical laws. This would mean our mind is not universal. (As an aside I admit here that in some places – namely here: http://cogsci.uci.edu/~ddhoff/Chapter17Hoffman.pdf – Hoffman straight up admits: the mind is not what the brain does (see page 11).) I am unconvinced, and he never grapples with the universality of computation anywhere - not in the podcast and not in the two papers of his that I read.
This is a deep problem for his thesis, so far as I can see it. And his quick dismissal of actual, literal, realist quantum theory is probably also disqualifying. But then, few physicists take quantum theory as being literally true. We’ll come back to this also.
The idea that spacetime is part of us, or of “consciousness’ user interface”, seems to me to be the mistake of thinking that a scientific theory is at best a fiction – perhaps a useful model – but not really connected with reality. This is an epistemological claim. A realist says: no – although ultimately false, scientific theories are not merely fictional though useful models. They are accounts of what is actually there. There is, therefore, a connection to ontology. Hoffman seems to be discounting this. Even if some consciousness theory is deeper than all other scientific theories, this does not mean those other scientific theories have no basis in objective reality. Consider by analogy other demonstrably false theories. I’ll go to my favorite example: Newtonian Gravity. It says that there exists a force between particles that varies inversely as the square of the distance between them. Now that theory has been shown by experiment to be false. But is it utterly false? A complete fiction? No. There really is a way that particles influence each other in space (so that's correct) but it is via the spacetime between them. Einstein’s general relativity replaces Newton’s theory but it does not say of it: utter fiction. The motion of things in space does indeed approximately obey the inverse square law. It’s not a random guess. Approximations are approximations to something: they cannot be approximations to other complete fictions. They have to be getting at something if the predictions being made are reliable to some extent (better than random guessing).
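The “approximations to something” point can be made concrete. Here is a minimal sketch (my own illustration, using standard textbook values for Mercury’s orbit as assumed inputs – nothing from the podcast): Newton’s inverse-square law predicts a closed ellipse, while general relativity adds a tiny, testable correction to it.

```python
import math

# Assumed standard values (not from this post): solar gravitational parameter,
# speed of light, and Mercury's orbital elements.
G_M_SUN = 1.32712440018e20   # m^3/s^2
C       = 2.99792458e8       # m/s
A       = 5.7909e10          # semi-major axis of Mercury's orbit, m
E       = 0.2056             # eccentricity of Mercury's orbit
PERIOD_DAYS = 87.969         # Mercury's orbital period in days

# General relativity's leading-order correction to the Newtonian orbit:
# perihelion advance per orbit, in radians.
dphi_per_orbit = 6 * math.pi * G_M_SUN / (A * (1 - E**2) * C**2)

# Accumulate over a century and convert radians to arcseconds.
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = dphi_per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec_per_century:.1f} arcseconds per century")  # ~43.0
```

The ~43 arcseconds per century is the famous anomalous precession of Mercury’s perihelion: Newton’s theory is a superb approximation to the real dynamics, not a fiction, and the residual is exactly where general relativity says it should be.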
Whatever the case: Hoffman argues that because his theory of consciousness means consciousness is more fundamental than space and time that it means consciousness itself is outside space and time. Well…that’s rather neat. I don’t think there is the slightest bit of evidence refuting the claim that consciousness is IN space and time, but there we have it for now. We need an experiment (and more precise theories too) in order to decide between the two claims: consciousness is inside space time vs consciousness is outside space time.
Annaka asked a question about “The Fundamental Nature of Reality” and this comes up more than once. Hoffman responds at this point that his theory of consciousness would indeed answer to that label. So, ultimately, Hoffman believes in an end to science: he thinks there can be a final theory and that he is on the way to finding it. It won’t explain “everything” – he concedes that – but within the realm of fundamental physics he would have a final theory. It would be non-improvable (he doesn’t say this: I’d like to ask him – but his language is telling: “final” and “fundamental nature of reality” and so on). So while he doesn’t think quantum theory or general relativity or natural selection can be the final word (quite right) he does think his theory could be.
But once we have that theory, why can we not ask: why isn’t it some other theory? Like if someone came up with an alternative – why must it be ruled out a priori? Will his theory explain why all other theories are not possible? And if so, will it then explain why his mathematics is infallible? Why he cannot possibly have made a mistake and so on? These claims to “absolute final theories” all run into these questions. The alternative is: however deep your theory goes, that is no guarantee you’ve reached the bottom. There may be no bottom. There may just be better and deeper places to delve.
Hoffman speaks of his theory of consciousness as describing consciousness as a “field” not unlike the fields in a physical theory. I suppose he might think – though he didn’t say this – that particular instances – your mind and mine – are excitations of that field rather like the way electrons and photons are excitations of the quantum field. I’ll need to read more about this to find out but that could be a neat idea. Annaka rightly observed that this means this would be a field outside fields – so a field outside spacetime. But Hoffman says he is using the word field in a different sense to that in quantum field theory but admits the conscious field will be outside spacetime (see the 1 hour and 18 minute mark).
Soon after, Sam interjects with a question about panpsychism. Incidentally Sam says a thing he often says: consciousness is the one thing that cannot be an illusion (following Descartes). Hoffman is no fan of panpsychism and rejects the label because, he says, unlike the panpsychists he is not starting with the material world as an assumption. So Sam later suggests he is an idealist. Hoffman rejects that label too. On the "cannot be an illusion" thing, I used to think the same. I was talked out of it by David Deutsch. My own way of putting this now is: what's not an illusion? Consciousness? What about that string of letters? Does it infallibly label what you think it does? Is that an illusion? If it could possibly be, then what exactly is not an illusion? And given we know that this moment is actually being experienced in the future in some sense (because neurons take time to fire), consciousness is already a complicated thing that could be illusory. But I digress.
Hoffman makes some interesting remarks to further explain his “user interface theory”. He says: when you look in the mirror you see skin, hair, teeth, a face and so on and yet you know there are emotions, sensations, hopes, aspirations there. So the face is like a portal – it’s an icon so to speak – which gives you access to consciousness. So too with other people. Now I quite like that. But he then goes on to say that a cat is an icon for consciousness too although it’s “dimmer” (his word) – he thinks this as he can tell his cat likes some food but not others. Hoffman then goes even further: a rock or a glass of water is an icon of consciousness. Now at one moment he rejects the idea that the rock or water are themselves conscious – instead he says the consciousness is “behind” those icons in some way. And some of those consciousnesses are greater and lesser than ours. This is where he lost me. He seems to be saying that what we see as rocks or glasses of water and so on are the means and ways in which we conscious entities interact. At least that’s my steelman version of what he is getting at. After all – were there no physical objects out there, what point would there be in consciousness and what would we have to interact about?
Sam rightly suggests that this is all panpsychism but it’s in virtual reality.
Hoffman dismisses the quantum multiverse for poor, sadly incoherent reasons revealing a level of ignorance about quantum theory. He says, for example, that it “does not explain the measurement problem”. I do not have the time here and now to explain entirely why that’s wrong but here is a brief sketch:
The so-called “measurement problem” is the question of why, before a measurement is taken, the mathematical formalism of quantum theory describes a system (say a single electron passing through a slit) as taking all physically possible paths – yet upon measuring it on the other side of the slit (the observation) we see only ONE path having been taken. But the very point of the multiverse is to take seriously all those paths described by the formalism as equally real before, during and after the measurement. They all exist. But you only ever see one because you only occupy the one universe (in a sense, speaking loosely). But quantum mechanics describes many many universes. This is really not that mysterious. It’s rather like how even Kepler’s laws describe an orbit as consisting of many many points in space…and yet Earth is only ever observed at some point in time to be at one of them. The others really exist though – but we’ll never observe the Earth at all of them. Sure: we can take photos, perhaps when it’s somewhere on January 1st and then the 2nd and so on. But we can do the same for the electron – repeating the experiment and noticing it really does take all the possible paths. Whatever the case, Hoffman fundamentally – deeply – misunderstands what quantum theory says and how the multiverse solves these misconceptions people have. (Look forward to an upcoming ToKCast on this in just two episodes when we go through “The Multiverse” chapter of “The Beginning of Infinity”…or if you can’t wait – read it yourself now!). For anyone playing along I've written a number of short articles on The Multiverse like this one.
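To make the “all paths are taken” point slightly more concrete, here is a toy calculation (my own sketch, with an assumed two-path setup and made-up equal amplitudes – not anything from the podcast): quantum theory sums the complex amplitudes of both paths and only then squares, which produces the bright and dark fringes we actually observe. Treating the paths as mutually exclusive alternatives cannot produce dark fringes.

```python
import cmath
import math

def detection_probability(phase_difference: float) -> float:
    """Relative intensity at a detector fed by two equal-amplitude paths."""
    amp_slit_1 = 0.5                                      # amplitude via path 1 (assumed)
    amp_slit_2 = 0.5 * cmath.exp(1j * phase_difference)   # amplitude via path 2
    return abs(amp_slit_1 + amp_slit_2) ** 2              # square the SUM of the paths

# If only ONE path were real, intensities would simply add - flat, no dark fringes:
single_path_accounting = abs(0.5) ** 2 + abs(0.5) ** 2    # = 0.5 everywhere

print(detection_probability(0.0))       # constructive interference: 1.0 (bright fringe)
print(detection_probability(math.pi))   # destructive interference: ~0 (dark fringe)
print(single_path_accounting)           # no interference term: 0.5, uniform
```

The dark fringe at a phase difference of π is the kind of “problematic observation” the formalism was built to explain: both paths must contribute, or the zeros never appear.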
Hoffman is thus also confused about the relationship between quantum theory and probability. I recommend https://www.youtube.com/watch?v=wfzSE4Hoxbc which, in a strongly competitive field of amazing talks by David Deutsch, is right near the top for me. The long and short of it here is that randomness is always just subjective. In the multiverse everything that can happen really does happen. So it happens with Probability = 1. Probability was invented to explain games of chance but has been repurposed and perhaps even perverted for all sorts of things it perhaps shouldn't be. Now in the case of the multiverse some things happen more often (with so-called “greater measure”) than others but happen they do. And doing probability on infinite sets doesn’t make sense because we can never “do infinite probability experiments” to check if our theory of probability is actually correct. If the theory is supposed to be about the physical world we need to be able to test it against the physical world. So probability theory runs into this deep problem of being untestable. And if it's not supposed to be science but rather a branch of pure mathematics, then it doesn't apply to the real physical world. Whatever the case: Hoffman says that quantum theory simply does give us objective randomness, which is a terrible mistake and a gross misunderstanding of the theory. But these misconceptions are all intimately tied together and I think a conversation between him and David Deutsch on this point would be fascinating. But for now, if you do read this Donald – please watch that video and read David’s book “The Beginning of Infinity”. I say this not because I'm wanting to refute your entire thesis but rather - like Annaka - I think there could be something there which might be improved by fixing up these quibbles I have. Moving on…
Free Will: Hoffman broadly agrees with Sam about free will but at the same time uses the word “agent” with rather wild abandon in places. He is tripped up on this by Annaka and it’s not entirely clear what he means by “agent”. He does say he has a mathematical description of what “choice” is and how choice must arise from unconscious factors and hence these choices cannot possibly be real (free) choices. He goes so far as to say (on his theory): you cannot be conscious of making a choice because the mathematical formalism says as much. But I think most of us refute this by personal experience. I am conscious of making a choice and no matter what your mathematics seems to say: I'm choosing to finish this sentence.
This brings me to my most major gripe from the podcast and Hoffman’s approach:
Hoffman very heavily emphasizes the mathematics of his theory whilst deemphasizing experimental testing. He insists upon the formalism. It’s an interesting “tell”, by which I mean: many of us have bemoaned so-called “scientism” – the application of scientific methods to problems that are not science (like purely philosophical problems, such as the very question: are there philosophical problems?). Scientism is usually thought of as the stance that no knowledge is “valid”, or some such, except scientific knowledge. Now I know this isn’t a word - and I tend to hate neologisms (a point I've made over and again) - but I’ve often accused some of “mathematism”: the claim that mathematical theories have some special status, or that without some formalism a theory has less status. I think this is a mistake. Now I’m not (yet!) accusing Hoffman of making this mistake necessarily – he might think the mathematics just arises naturally. But the issue I have is that a scientific theory can typically be explained in natural language at least somewhat well, to some crude resolution perhaps. Even string theory is about...well, strings - which can be rather simply described using words. I concede the pioneers of quantum theory often failed at this. But even they were able to say something in English. Hoffman doesn’t say much about what consciousness is exactly in natural language (i.e. simple English) – he just says it’s “mathematically precise”. Now this may be all well and good to some extent. For example: we should take something like the Schrödinger wave equation seriously. And that equation – the equation that describes (for example) the motion of an electron over time as having a wave-type shape – tells us that the electron really does occupy many possible places simultaneously and can take many possible paths simultaneously. It is taking this equation – the mathematics - seriously that leads us inexorably to the multiverse (among other things).
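For concreteness, the equation being gestured at here is the standard time-dependent Schrödinger equation (textbook form for a single particle in one dimension – nothing specific to this discussion):

```latex
i\hbar \frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t) = \left[ -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) \right]\psi(x,t)
```

The point is that \(\psi(x,t)\) assigns an amplitude to every position at once; taking that seriously, rather than treating it as a mere calculational tool, is what leads toward the multiverse picture.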
I hasten to add here: the electron itself is not a wave: it is a thing that can occupy a “sharp” position in space – but the wave aspect means that it has many (initially fungible) instances that can take all the different possible paths. Now the difference here is that there exist experiments in quantum theory. Many many experiments and observations. We don’t start with the formalism: we start with certain problems prompted by strange observations. In the case of quantum theory some of those observations are the patterns generated by certain interference experiments. So then we seek an explanation - a solution to our problem. And THEN we seek to formalize all that. This is science – this is reason. It seems to me that Hoffman has no problematic observations he’s really trying to explain. I mean: he has consciousness, but it’s not an observation of the form: if we see that, then my theory is refuted (or not refuted). Instead he seems to be defining into existence something he calls consciousness and then doing some mathematics. But we don’t merely define electrons (for example) into existence – we see their actual marks on photographic plates and so on. Experiments matter, so we absolutely need some experiments to check Hoffman’s thesis. But what would these look like? I cannot even imagine. And crucially we need so-called “crucial experiments” (see here for more about that): an experiment that decides between at least two theories about what’s going on. If the result goes one way, one is refuted and the other theory is not. But without a good working definition of consciousness, how can we even begin to know how to test Hoffman’s theory? Test against what? People fundamentally misunderstand the nature of science on this point, which is why David Deutsch’s original paper on all this is so important. (And again, if you want the crib notes on that, see here.)
So Hoffman uses the phrase “mathematically precise” theory/definition over and again, as though if he could just get this robust, self-consistent mathematical theory he will have explained consciousness and other things. But we must recall: Newtonian Gravity is “a mathematically precise theory of gravity”. It is also demonstrably false (which, by the way, is a strength of the theory: it makes predictions we can observe in the real world, and when we fail to observe what is predicted while general relativity is shown to be accurate, we have refuted Newton). So “mathematically precise” does not mean true. Indeed mathematicians make up mathematics all the time that has nothing to do with reality. Famously, Hardy in his "A Mathematician's Apology" celebrated doing this. Like so many pure mathematicians, he regarded the subject as more an art than anything else.
Now I hasten to add here that Hoffman is no crank (so far as I can tell!) because he readily admits that, since he makes the strong claim that consciousness is more fundamental than quantum theory, general relativity and so on, if his theory cannot produce quantum theory, general relativity and so on as “limiting cases” then his theory is absolutely false. So that’s something at least.
Hoffman invokes Gödel’s incompleteness theorems (always a bit of a red flag to me) but not in a too egregious way. He concedes, with prompting from The Harrises, that taking LSD or mushrooms might help him – Hoffman says he has never so much as smoked a cigarette. He has spoken to friends who are “experienced psychonauts” and what they report back are certain visualisations and the feeling of being “outside spacetime” that are in very good keeping with his own theory. But of course: that would be some kind of empiricism: knowledge coming from the external world…wherever that world might be.
The final question from Annaka is perhaps the best one as it reveals something about the thought processes of everyone in the conversation, so I’m just going to transcribe the question and the answer in full and make some remarks afterwards. Italics and boldface are my emphasis.
Annaka: “Doesn’t it seem that where we’re left with all of this is that even if you’re able to make great progress with the math and with the theory and that you’re able to come to some point of feeling that you’re confident that this is correct or likely to be correct, what would constitute proof? It seems to me to be outside the realm of scientific experiment. When we’re talking about consciousness you could tell me there’s some theory that says that there is consciousness in the atoms of the sofa I am sitting on but how would we be able to confirm it just seems to me it’s impossible to confirm, so I’m just wondering if there’s some sense in which this can result in some type of scientific experiment or what would result in proof?”
Donald: Right. That’s a critical question for a scientist and the bottom line for me is that if my theory of consciousness when I project it back – so I have to have a theory of consciousness and its dynamics that is absolutely precise. I have to have a mathematically precise function that projects it into our spacetime interface because that’s where we’re going to get any data for our experiments. And then the first thing – the first test of my theory will be – if the dynamics when it’s projected into spacetime does not give me back all of the consequences of evolution by natural selection, general relativity, quantum field theory…if I cannot do that, I’m wrong. I’m flat out – I’m wrong. So already all the sciences we have right now are going to be an acid test for this deeper theory.
Annaka: Except that you’re starting with this base assumption that you’re calling this foundational math – you’re saying that it represents consciousness but I’m just wondering if there’s any way that that can be confirmed or will that always be an assumption?
Donald: So…what I will be able to do is to see if I’m wrong. Right? So if there are things that if I cannot do them, the theory is absolutely wrong. What I cannot do is prove that I’m right. But that’s not unique to my theory. No scientist can ever prove that they’re right. The best we can do is get evidence that either disconfirms our theory or reduces our credence in the theory or that in some sense increases our credence in the theory. But it’s sort of an elementary point in the philosophy of science that no scientist can ever prove their theory. And even falsification in the strict Popper sense – you may not strictly be able to falsify a theory, because there are so many auxiliary hypotheses that go into a theory that you don’t know which one is the faulty one. But, my theory *can be tested as much as any scientific theory can be* and one test that I would take as absolutely critical is if I can’t get back evolution by natural selection and say quantum field theory when I project into our spacetime interface I would take that as clear evidence I’ve got the wrong theory of consciousness.
My comments: I’m going to leave aside all of the anti-Popperian stuff there more or less: lots of talk about “prove” and “confirm” (anti-fallibilist notions of science, or confusion between purely deductive systems like mathematics and explanatory sciences like physics) and “confidence” (reducing epistemology to feelings) and even “credence” – the Bayesian mistake (I explain some of that here). As to “cannot strictly falsify” – this is the Duhem-Quine thesis and is all very well. And we cannot prove it true. Indeed.
But this claim that the test of “getting back” quantum field theory is an experimental test is a little dubious. I mean, he’s strictly correct that if he claims everything else is emergent and his theory is fundamental, then if he cannot produce quantum field theory he’s wrong. But quantum field theory rests not just upon the fact that classical mechanics is a limiting case of sorts but rather upon many many real world observations only explicable by quantum theory. So can his theory *be tested as much as any scientific theory can be*? I don't think so. Consider the case of quantum theory alone. It is testable against: the photoelectric effect, Compton scattering, radioactive decay, interference experiments, quantum computation, etc, etc. I mean, a deep theory seeks to solve many discrete problems and is tested against novel observations and experiments. At least typically. Even evolution by natural selection solves problems about homologous structures, the similarity of DNA, embryos, the fossil record, Galapagos finches, etc, etc, etc. And general relativity: neutron stars and black holes, the precession of Mercury’s orbit, Eddington’s observations of starlight during an eclipse, the GPS system, etc, etc. But Hoffman’s theory – so far as I can tell – does nothing like this. So it's not as testable as any other scientific theory. Not by a long shot. What are its PREDICTIONS? It defines into existence something he labels consciousness – but he has not told us in the entire podcast what consciousness is in simple terms, or what we might observe out there to notice if it's on the right track, or problematic because it's (say) postulating disembodied consciousnesses which direct the motion of clouds (or some such).
So without a simple definition – and without explaining WHY that definition is correct except to say: here is some mathematical formalism from which we can generate quantum field theory, general relativity and natural selection and whatever else – this seems more like a mathematical trick than actual science. What predictions does it make BEYOND just: here is a theory that has as limiting cases all other theories? That’s certainly interesting…but is it consciousness?
General relativity explains what gravity is: the bending of spacetime by mass/energy. And then it predicts things like black holes. Which are observed! And it predicts the bending of light during an eclipse – which is observed. And sure it “gives us back” Newtonian gravity as a limiting case…but that is almost beside the point.
So for Hoffman’s theory to fit this scheme I expect not only for his theory of consciousness to “give us back” quantum field theory (and the rest) – it has to tell us what consciousness is exactly (as general relativity tells us what gravity is) AND it has to predict things we hitherto do not know about, like black holes, and allow us to test this theory by making precise predictions about what we should observe. And if the response is: but that would be about “the interface” and not “objective reality”, then we have a theory immune from experimental criticism. And that ceases to be science.
But this does not disqualify it because it could be very interesting metaphysics and the beginnings of an actual research project.
I have to read more of his material.
A number of people have recommended the work of Donald Hoffman to me. I am not very familiar with him – I have not read any of his books nor listened to his talks (like his TED talk). I think I will soon. What I have done is listen to the first 1 hour and 7 minutes of his interview with Sam Harris.
I was expecting to encounter something other than I did – but I’m not sure what. Long story short: I did not find anything he said particularly controversial (I emphasize this as someone who has read Popper and Deutsch). I imagine what he says would be controversial to mainstream thinking about the relationship between mind, our mental models and objective reality – but linguistic quibbles aside, I found what he said to be refreshingly familiar to me. In particular what he said could almost have been conjured by anyone who has read Chapter 5 of "The Fabric of Reality" by David Deutsch. It was that close...some minor linguistic quibbles aside and perhaps some hints of relativism.
So, some notes on the conversation, roughly in chronological order in which they occurred in the podcast:
Hoffman says he is reacting against his colleagues (in psychology, I presume) who he says regard evolution as having shaped us to “see truths about the world”. In other words to see the actual, final truth in a sense – or objective reality as it is. This is all well and good. In Popperian terms: he objects to empiricism (the idea we derive knowledge about reality from our senses) – and quite right too.
Hoffman gives an account of his “user interface” idea – which, it seems to me on a cursory understanding of what he said in this podcast, is perfectly harmonious with the idea that we do not have direct access to reality. Indeed. We will come back to this later with a discussion of virtual reality. This is in line also with what Plato, Descartes and the Wachowskis (in "The Matrix" and various other movies) have told us over the years. What we tend to experience isn't reality exactly and we could be terribly deceived. Thinking we're actually in a Matrix is, of course, a step far too far. But the point is: our senses can deceive us and may not always (or even typically) give us reliable knowledge of objective reality.
Annaka Harris points out during the discussion that “we don’t know what light is” and “we don’t know the fundamental nature of reality.” I only have a linguistic quibble on the first point: we do know – so long as one takes on board the Deutsch/Popper notion of what knowledge and “to know” is. It means: to have a fallible explanation of. It does not mean “grok the final theory”. So we know what light is – but we don’t know what light ultimately is. We never shall. All theories must be improvable because we don’t have direct access to reality. But we can gain fallible knowledge about light (and everything else) to some extent. As for not knowing the fundamental nature of reality: yes, correct. And uncontroversial so far as that goes for a critical rationalist/fallibilist. For more about what "to know" means see here for a brief explanation.
Hoffman says that “belief in science is not a helpful attitude” – which is to say believing scientific theories is not a helpful attitude. Wonderful. If he has not read Deutsch or Popper, then he has independently converged on that epistemological worldview to a large extent. Hoffman regards scientific theories as “the best tools we have so far”. A minor quibble on the use of “tool”. Tool conjures up a device for just solving certain problems. Now it depends on one’s emphasis here: if the only purpose of science is to predict (a tool for prediction) – I disagree. That is one thing science can do. But science gives us some approximation to reality as well. So I regard scientific theories as tools that tell us something about objective reality – they give us some account of what is really there. Hoffman seems to more or less agree with this in some later remarks.
At 41 mins in, Hoffman invokes “the virtual reality metaphor”. He talks about putting on a virtual reality headset and seeing race cars in a computer game. Obviously the cars don’t actually exist. Now he says this is basically the state we are always in. Yes – exactly. I guess Plato would not disagree about this (the shadows on the cave wall being not the objects in themselves – and we are trapped in the cave (the VR simulation)). Now in “The Fabric of Reality” chapter 5 is titled “Virtual Reality” and it is a long thesis about exactly this point and some other things. This "virtual reality" explanation is no mere metaphor either: we really are simulating the external world in our minds using sense data. David writes on page 120 in that chapter, “So it is not just science – reasoning about the physical world – that involves virtual reality. All reasoning, all thinking, all external experience are forms of virtual reality.” I actually think David is making a stronger claim even than Hoffman. David published his book in 1997 – that seems to predate anything of Hoffman’s on the topic by over a decade (I can see the user interface theory was published in around 2010: https://link.springer.com/article/10.1007%2FBF03379572) and of course Plato wrote about his cave in around 400BC – so these kinds of ideas, or aspects of them, have been refined and rediscovered or reformulated again and again.
Hoffman admits “something” is there in reality but he says he just doesn’t know exactly what. The word “exactly” does a lot of work there. The sentence is true with it, and false without it. Annaka says “We do not understand the deeper reality” (the reality deeper than the one more fundamental than what quantum theory says) – and of course this too is correct and uncontroversial: we do not know what we do not yet know (in this case, what the successor to quantum theory will say…and what the successor to the successor will say and so on).
David engaged with at least some of Hoffman’s ideas here: https://twitter.com/stealthytooth/status/723555137478905856?s=20
And that language Hoffman uses which seems to deny the existence of objective reality (or parts of it – they mention the moon and a glass of water on the table) he tightens up – or is corrected on, I should say, by Annaka Harris. It seems Hoffman steps back from the claim that there is no objective reality, or that the moon or glasses of water in particular don't have an objective reality. So he can have a habit of sliding into a sort of soft-relativism or something…yet when pressed he seems to concede realism. At one point Annaka rightly interjects when Hoffman uses the word "miracle" to describe "premises of a scientific theory". Hoffman seems to needlessly confuse the reader or listener with terms that make things seem more controversial than they are. A premise need not be regarded as a "miracle" (some mystical violation of physical law) just because it cannot be derived from any known theory. Hoffman admits he should just use the word "premise" or "axiom" instead. It is this kind of style that, it seems to me, may make some of his ideas sound more controversial or counterintuitive than they really are.
Overall, I liked what he had to say, with some minor quibbles over his use of language – especially that which touches upon anti-fallibilist thinking (but this is true of almost anyone who hasn't taken Popper on board). So, so far, what he says seems roughly in line with what David wrote in Chapter 5 of “The Fabric of Reality”. But I stopped the podcast just as they were beginning a discussion of free will and consciousness. I guess what comes in the next hour and a half of the podcast could prompt something else...but so far I am nodding in broad agreement and didn't once roll my eyes.
In around 300BC, in Euclid's "Elements", the oldest known proof of Pythagoras' theorem was published. There are possibly more proofs of this theorem – that in a right-angled triangle "the square of the hypotenuse is equal to the sum of the squares of the other two sides" – than of any other. Early on in high school, c^2 = a^2 + b^2 is usually given without proof and students (sadly) "drill" special cases. Like: if c = 5 and b = 4, what is a? The theorem is assumed as true. But what if an inquisitive student asks "How do we know it's true for all cases?" Shorthand, we can say: it's been proven. And if the student asks how, we can provide any of the approximately 367 proofs there are.
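That drilled special case can be worked through directly. A minimal sketch in Python (the function name is mine, purely for illustration):

```python
import math

# Rearranging c^2 = a^2 + b^2 to find a missing side,
# given the hypotenuse c and the other side b.
def missing_side(c, b):
    return math.sqrt(c**2 - b**2)

print(missing_side(5, 4))  # the classic 3-4-5 triangle: prints 3.0
```

Of course, running this a million times tells us nothing about *why* the theorem holds for all cases: for that we need one of the proofs.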
Now E = mc^2, also called the "mass-energy equivalence", can be considered a law of physics. (I just note here as an aside that E = mc^2 is the special case for a body at rest: it's not generally applicable because it only "works" in the rest frame of the object with mass. But this is irrelevant to my point, so we can skip past this complication). Einstein was the first to discover it. But how? Well, via a proof. He derived it. How do we know E = mc^2? Because it's provably the case. How? Well, what Einstein did was begin with his two "postulates" (first: the laws of physics are the same for all observers and second: the speed of light is constant for all observers). From these postulates he proved the so-called "Lorentz transformations". And from there the kinetic energy of particles can be derived and...long story short, he concluded – he proved – E = mc^2. So he didn't begin with the assumption that E = mc^2. He began with the two postulates. Everything else just followed as a matter of mathematical necessity.
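One standard textbook way to see where the rest energy sits (a sketch of the modern route, not Einstein's original 1905 argument) is to expand the relativistic energy for small speeds:

```latex
E = \gamma m c^{2} = \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
  \approx m c^{2} + \tfrac{1}{2} m v^{2} + \cdots \qquad (v \ll c)
```

The second term is the familiar Newtonian kinetic energy; set v = 0 and only the rest energy E = mc^2 remains.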
A proof just means, in "critical rationalist terms", that we have found a very good (exceedingly hard to vary) mathematical explanation of something (we may call it a theorem in mathematics, or a conclusion in logic, or a law in physics – these are not strict terms; they are rough). Of course, if we find that the premisses upon which Euclid's Elements are based are false, or the premisses upon which even deeper theories of geometry are based are false, then we would refute Pythagoras' theorem. And if we found Einstein's postulates were false, this too could refute E = mc^2. But this whole – quite reasonable – notion that we should expect things we know not to be perfect and thus to be improvable is not a prescription for not taking seriously what we know (fallibly) today. We can use Pythagoras' theorem and E = mc^2 to solve actual problems today. So too with David Deutsch's proof of Turing's Principle in the context of quantum theory.
So when we say "David Deutsch proved the Turing Principle(1) in 1985" – that "every finitely realizable physical system can be simulated to arbitrary precision by a universal model computing machine operating by finite means"(2) – we mean that, beginning with some uncontroversial assumptions from mainstream quantum theory along with what was already known from classical computing as discovered by Turing, he was able to reach that conclusion. So he didn't begin there. He reached there on the assumption of known physics.
So to say something is "proved mathematically" is not to give it a special "less-than-fallible" status. But it is to give it its due as a good explanation. And the full explanation as to WHY the "principle" is to be regarded as a "law of physics" isn't just any old "good explanation": it's an explanation not only in terms of natural language but also largely in terms of a mathematical deductive system. So that gets a special name: a proof.
Now incidentally the proof has a kind of transitive nature to it: on the assumption that quantum theory is true, any physical system can be simulated (to arbitrary precision) by a universal computer. But also: given the principle, all physical processes can be regarded as computations; that is to say: quantum theory satisfies the Turing Principle.
The significance of the proof (and so the principle) for the nature of personhood (and thus "cognition") is that any physical system (so that includes us...in the form of our brains and minds) can be simulated by a computer - or to "arbitrary precision" by a quantum computer. This does not mean the human brain is a quantum computer (many of us guess it will not be, because the human brain is warm and hence noisy - an environment quantum computers appear not to like) - we guess the human brain is just a classical computer. But it's a classical computer running a special kind of software. Whatever the case, if a quantum computer - or indeed just a classical computer, but this is beside the point - can simulate a working human brain then it will be a working human brain - just made out of other stuff. It will be computing - performing the physical processes a brain does. Nothing spooky - nothing requiring new physics. And so - it will be running a mind. It will have a mind and thus it will be a person.
Perhaps take a moment to just consider again that sentence above. It is rather a profound claim. Not only does it take "spookiness" out of the "what is a mind?" question, it regards so-called "Artificial General Intelligence" or AGI as a person - and thus with the full legal rights and moral status of a person. Anything less is genuinely a form of racism. Removing the spookiness also removes caveats about an "analogy". That "the brain is a computer" is demonstrably the case for the reasons stated above: it is no analogy. It is mainstream physics. The mind is a kind of software: it's what brains do. It's the abstract software running on the brain. A mind in a human brain already is a simulation: it is simulating the reality delivered to it by the sense data that it interprets. So a computer that is able to simulate a mind really is a mind. Minds are abstract things. This is important because a quantum computer - or any computer - that simulates, let's say, a bullet...has not created an actual bullet. Simulated bullets are not real physical bullets. There's a difference there. If a computer gamer is playing "Call of Duty" and shooting bullets from a gun, neither the gun nor bullets are real. This should go without saying. But minds are not physical. They're abstract. So simulating them "to arbitrary precision" is to create them in reality. It's rather more akin to the person at the warehouse doing stocktake and adding up by hand all the products. They have many sums to calculate. Now if they do the calculation by hand with pen and paper or by using an abacus, that's one thing. It's a real calculation. But if they then take the calculation and use a computer to do it: they are in a real sense "simulating" the action of the abacus or ink-and-paper calculation. In either case it's a real calculation, and one is not more or less "real" than the other for being done with the hands or with a computer.
So we (humans, in the form of David Deutsch and anyone else who wants to try) can prove the "Turing Principle" and we can notice that, as humans are made of atoms, the principle applies to us just as it applies to any other physical system in reality. We can be simulated. But more than that: we are computers too. And more than that still: we are much more than computers. For more on that, see here: http://www.bretthall.org/physics-and-learning-styles.html or here http://www.bretthall.org/alien-intelligence.html
I've sometimes been told the principle is "just an assumption". It isn't. It's been proved. We are told time and time and time again that it's an analogy. It isn't. It's been proved. That doesn't mean it's "infallibly the case" - it just means it's a conclusion...mathematically derived from what is already known about physics. If I were asked "How is it proved?" I'd be unable to do better than the paper itself. And that runs for around 17 pages and has to this date won the author a number of prestigious prizes in physics...so it's not something that can be easily summarized in a blog nor on Twitter. So I refer the reader to the original paper.
One of those times above is from the 2014 Edge Question "What scientific idea is ready for retirement?", where the author argues that "The brain is a computer" is an idea ready for retirement. But actually this thesis seems very poorly subscribed: if one does a cursory search on Google for "brain is a computer" AND "neuroscience" we get stuff like this: faculty.washington.edu/chudler/bvc.html - and that's for kids. Now I am unfamiliar with the present state of actual neuroscience and the professional literature there, but if popular accounts are anything to go by, there is still a strain of mystical thinking lurking there. The truth is, the brain is mysterious - but not mystical. We know it must be a computer of some sort, given the Turing Principle - but we don't have even the first clue as to what the software is that is running on that computer. That, so far as we know, entirely unique creative software that generates consciousness and the experience of free will and - most importantly - new explanations. We know there must be some code that can be captured in an algorithm. We just don't know what it is. That of course is another story.
(*) Note that "It's been proved" is on the assumption quantum theory is true. So of course if quantum theory is refuted, then the proof is worthless. Like any proof, the soundness of the conclusion is only as good as the premisses one begins with. Now I can imagine scenarios where quantum theory is, technically, refuted, but the newer, improved, deeper theory has quantum theory as a limiting case - in which case the conclusion may still indeed hold.
(1) Note that what the principle is called is a matter of some confusion. David Deutsch refers to the principle as the "Turing Thesis" in his original paper and the "Turing Principle" elsewhere. Some - like mathematicians Roger Penrose and Robin Gandy - have insisted that Alonzo Church conjectured (guessed!) the same thing and so have called the same principle the "Church-Turing" Principle, and still others have suggested it be called the "Church-Turing-Deutsch" principle as David actually proved the conjecture beginning with quantum theory as a premise. The upshot of that was that computer science then became a branch of physics, because computers were no longer the ideal mathematical objects supposed by Turing (or Church for that matter) but rather real physical objects that obeyed the physical laws known as quantum physics. And of course they must, because all computers are made of matter and not Platonic Ideals.
(2) Note that this is not the way the principle is put in the original paper found here: http://www.daviddeutsch.org.uk/wp-content/deutsch85.pdf I have changed the phrasing. The original formulation is "every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means". David knows better than most that "perfectly" isn't correct and, though I cannot find it, I recall a tweet exchange between himself and quantum physicist Michael Nielsen on exactly this point.
Somebody asked for some comments about this video: https://youtu.be/OoIcsj9ysvs . It is an attempt to criticise the optimism of people like David Deutsch - but the more explicit target seems to be Steven Pinker. It is not an easy watch for people who are familiar with optimism. Not because there is any substance to what the speaker says but because it is (a) frustrating and (b) one knows that this sort of thing will be latched onto by many. It purports to be a reasonable analysis. It is anything but. Here are some quick responses, because something longer, I am afraid, isn't worth the time. In line with David Deutsch's own approach to these things, time is far more fruitfully spent working on solutions - and therefore having an actual optimistic approach - than dealing with all the false ideas out there. This was just a litany of false ideas. But the motivation behind them I must address. The speaker is a socialist, so he despises capitalism (because he doesn't understand it). That is the tone and that is the motivation. So he must attack people who see the great good that freedom in law and in markets (i.e.: capitalism) does for people, while socialism actually tends towards severe shortages and a lack of growth in both societal and individual health. But let's move onto specifics:
The speaker (Roland Paulsen) has concerns about what he calls “reality” but by this he means inequality. The concern is completely misplaced. But that’s an anticapitalist, antifreedom socialist for you.
I concede: from the perspective of a continental European, one may well think things are getting worse. In much of *Europe* they are. Also unsurprising for a sociologist to think things are getting bad. In that profession...perhaps.
He says means are not reality. He's wrong. Means are reality. He is actually worried about the standard deviation – that is indeed getting bigger. But, and I’ve made this point many times before, inequality could get way worse while everything in fact gets way better. E.g.: in the distant future the poorest people could be multi-billionaires who all own their own islands and get Amazon to deliver stuff by drone, while the richest people ($10^20-aires) own entire planets and 3D print literally anything they want…from fusion reactors to food. The inequality will be way, way greater than today (when the poorest people have close to zero and the richest are mere billionaires).
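A toy numerical illustration of that point (all figures hypothetical, chosen only to make the shape of the argument plain): every position in the "future" column is richer than its counterpart today, yet the spread – the inequality – is vastly larger.

```python
import statistics

# Hypothetical net worths (USD): [poorest, median, richest]
today  = [100, 50_000, 100_000_000_000]    # richest today: a mere $10^11-aire
future = [10_000_000_000, 10**15, 10**20]  # poorest in future: a multi-billionaire

# Everyone, at every position in the distribution, is better off...
assert all(f > t for f, t in zip(future, today))

# ...yet inequality, measured as standard deviation, has exploded.
print(statistics.stdev(future) > statistics.stdev(today))  # prints True
```

So a growing standard deviation is perfectly compatible with everyone becoming enormously better off.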
He says we shouldn't be concerned about the averages anyways - say the average increase in income in a nation because:
Income might remain the same (as he says) for 4 decades - but this is silly. A person who was on the average income 4 decades ago isn't earning the same today. His point, though, is that in some places the average doesn't change. But who cares? The people on the average income or at the lower level are NOT the same people. And besides: their *quality of life* is far, far better. Those people on the same income 40 years ago did not have:
-Access to good medical care
-The internet and all the world’s information (for goodness sake!)
-CHEAPER food and energy and so on as a percentage of their wage
-Greater mobility. The people at the average or median or even bottom *do not remain there* in capitalism.
-Far more leisure time.
Only under non-capitalist systems are people *condemned to poverty*. Under capitalism people are free to change jobs and to create and earn more by working more. Under socialism, how much you work has no effect on how much you earn. So his concern about “distributions” is just the usual socialist academic misconception and hatred of wealth. Any amount of “inequality” in income is seen as evil. But as I’ve just explained: everyone might be massively more wealthy than people today, but people like that guy will say inequality is evil. Period. But it’s not. Only absolute poverty is – and it’s declining.
People really are not starving like they once were. No one in the USA will ever starve.
His stuff on medicine was ridiculous – people are getting healthier and happier. All such studies attempt to measure things that cannot possibly be measured (say, the nebulous “well being”). That section is pseudoscientific nonsense. Those happiness indices are, again, a socialist trope that ask leading questions. It’s not science, and it gives social science the bad name it has. It’s bad sociology where the answer is known before the questions are asked. They are motivated in their research: they *want* rich nations to be more depressed and poorer nations to be happy. So what questions do they ask? #notscience
Extremely dishonest stuff about Mao towards the end. Tries to say that there were some good things about Mao. Now we’re into actual evil. Nothing more worth responding to.
So does he have a point? Not even half a point, sorry.
Eric Weinstein is a very intelligent person. I'm on his side in many things - but absolutely not the top-down control he simply assumes MUST be a part of the global economy and "free" market. See here for example: https://www.edge.org/conversation/lee_smolin-stuart_a_kauffman-zoe-vonna_palmrose-mike_brown-can-science-help-solve-the#21964 (in a sense he may even have earned much fame for calling for an "economic Manhattan project". If only he meant: let's have a huge government program to get government out of the business of tinkering with economies - it'd be great. But actually? He means something more like the opposite...). Whatever the case, Eric does have lots to say and many people listen. He can't be dismissed as a postmodernist - but one could understandably make the mistake, because his use of the English language has a style that eschews clarity because of its idiosyncrasies. We all use language idiosyncratically, of course - but the desire to almost continually invent new words or usages for old ones is a strong impulse in some. Eric is simply a prominent example.
Take the talk by Eric at https://bigthink.com/videos/eric-weinstein-capitalism-is-in-trouble-socialist-principles-can-save-it (the transcript is available there). At one point he says, “Now the danger of that is that what we didn’t realize is that our technical training for occupations maneuvers the entire population into the crosshairs of software.” Translation: everyone might lose their jobs to computers. Now aside from the fact this is flat out false (creativity, from what we know, is a unique feature people have and will always be needed), it’s just expressed in such tortuous, clunky language as to muddy the meaning. Anyways, that’s just one example. False philosophy shrouded in jargon. It’s not postmodernist nonsense. But it’s flirting with the style if not the substance. The whole talk, by the way, is an appeal for power and influence. He wants scientists to have more authority and bemoans the fact politicians are from “softer disciplines”. He’s upset and demands change. He says, “One of the things that I find most perplexing is that our government is still populated by people who come from sort of softer disciplines if you will. Whether that’s law, whether these people come from poli-sci, very few people in government come from a hard core technical background. There are very few senators who could solve a partial differential equation or representatives who could program a computer.” That’s clear and lacks jargon! He should stick with that style (though the substance itself this time is terrible: no thought is given to how useful those things are in creating legislation or making decisions - the task of politicians. There are probably very few engineers or scientists who could effectively debate, consult widely, speak clearly and publicly, and simultaneously manage large groups of people. Eric himself may be one of the rare exceptions, granted). But I digress.
The following is meant purely as friendly fun (ok, to make a point and help out allies, perhaps). Again, Eric makes some excellent points when speaking and writing. Yet I think sometimes those points would be so much more powerful if only they were clearer. To that end, here is the beginnings of a generator for creating your own Eric-sounding neologisms. I was going to name it after him, or make fun of his name - but that seemed to cross a line. So, instead, the name of my generator commits the very sin that it perpetuates.
Here's my advertisement:
Do you have something insightful to say but want to cloud it in strange idiosyncratic nomenclature? Or perhaps you've no real point to make, and just feel a little "postmodern"? With the idiosyncratic neologism generator you can cloak any clear message in obtuse usage of otherwise pedestrian words. Take any term from the left hand column and pick any term on the right - it's that easy. Maybe you want to observe that sometimes people tend to waste some of their time by making silly bordering-on-mean blog posts about famous intellectuals? Need a term for that? What about...hmmm..."inversion gimmickry". And right there, you're done. Just take a pinch of column A and a random sprinkling of column B and you can spice up any vanilla concept. Turn any mundane turn of phrase into something cosmically momentous now!
"I just wanted to complain about being unable to find a date on Friday night, but didn't want to take any personal responsibility. Now I can attribute it to "ubiquitous dispersed network effects" and I feel so much better about myself!" - Terry, 19, Dubbo
"I've always been a communist but because promoting those ideas is so very difficult I just complain about "amplified late capitalism" instead and people now nod judiciously! Thanks idiosyncratic neologism generator!" Jill, 52, New York
"My final undergrad project counted how many times the word "man" was used in Time magazine between 1989 and 2016. But "How many times the word "man" was used in Time magazine between 1989 and 2016" as a title was rejected by my supervisor. With "Institutional hamiltonian calculus of gendered language in popular media: 1989 to 2016" I was able to get Honours First Class!" - Summer Clouds, No age, Citizen of the Universe
I recall watching this speech Sam Harris gave at the Aspen Ideas Festival well over a decade ago now: https://youtu.be/-j8L7p-76cU I found it amazing then and I must have watched it more than a dozen times since. I recall wanting to learn to speak like that. Even now I haven’t seen a clearer, more good-humoured and more forceful defence of reason against faith. There’s a strong sense in which I feel I owe Sam some gratitude for having taught me to talk. His style is an ideal to move towards: speak clearly, with good humour, and concede where concession is warranted. In that speech you can hear for yourself all the ways in which Sam’s most vociferous detractors and opponents lie about his positions and have misrepresented his motivation. And you can hear where he concedes religion can indeed be very useful and consoling and more besides.
Sam has had to defend himself many times against the charge he’s unusually or even unfairly focussed on religion. And one religion in particular. He has been absolutely right to respond that in fact this doesn’t quite get to the heart of what truly motivates him. Actually, what Sam is concerned about rather often - and this comes through in his talk and in his books - is dogma. Religion is not the centre of the bullseye (even if it’s on the dart board). The central concern is dogma. It is just that religion is, rather typically, one of the largest, most robust repositories of dogma. And this focus on dogma exists precisely because it can cause such harm - and we often don’t realise how until the harm is or has been done. Almost always it’s unintentional. A great example Sam uses is how the Catholic Church teaches that “human life begins at the moment of conception”. This seems quaint - even sweet and good. In one sense it’s true (zygotes are alive) but on the other hand zygotes are not people. And Sam observes that if the argument is “they are potential people” then, given the right conditions, so is any cell in your body. So when one scratches their nose, on this view, they’re engaged in a veritable genocidal level of murder of “potential humans”.
But this Catholic doctrine – this dogma – is a foundational claim. It is from here that they build moral structure – they reach other conclusions about the rightness and wrongness of many other things; for example, abortion and the use of stem cells. This foundational claim about human life* beginning at conception does real harm. But the harm isn’t due primarily to the fact it’s false (and it is false – zygotes are not people; at a minimum a nervous system is needed to encode the knowledge that makes a person a person) – it’s damaging because the church will not even consider the possibility that there might be a way to learn more on this topic or to consider it differently. Because it’s a foundational claim. It’s Church doctrine. A dogma. And this is why it can result in terrible suffering in ways the early church scholars could never possibly have foreseen. For in the context of a world that can treat actual suffering of actual people if only we could use embryonic stem cells, we have a problem. (Now, by the way, I don’t think it’s at all clear WHEN a zygote becomes a person. I know it’s not one. Nor would a blastocyst be one. But an embryo? Now I don’t know. This is a sorites problem of real consequence.) So the moral foundation “human life begins at the moment of conception” – good though it sounds as a way of enshrining the sanctity of life – turns out, in the context of modern medical procedures, to cause real harm. Or in the case of abortion, where early-term abortions are made unavailable to victims of rape, the foundation would seem to be a perfect engine of suffering.
So Sam is absolutely right to root out and condemn dogma. Dogmas are irrational. But it’s how religions build belief systems. They build upon axiomatic claims – foundations. It is purported to be somewhat like a mathematical system. Here are the axioms: now, let’s see what follows. Of course nothing can ever show axioms are true, and indeed they may be false. So what follows in such a case is liable to be false also. Some mathematicians – it must be said – can sometimes admit (in better moods) that they aren’t interested in what is actually true in reality. Rather: just what follows as a matter of logical necessity. Quite right too!
(*Note: by “human life” the Catholic Church means the life of the zygote is a human. They mean: there are human souls in those zygotes. )
So Sam rejects dogma because it’s dogma. He understands that dogmas are those things we cannot improve if we take seriously the idea they must be true. He’s focussed on that. And I couldn’t agree more. But what is the difference between a foundation (even a weak one) and a dogma? Moreover, what exactly follows from Sam’s axioms? Can they be the basis of some nascent all-encompassing moral system of a kind?
One thing we might observe is that if morality is about “the well being of conscious creatures” then this reduces morality to a domain of feelings. Indeed Sam’s other axiom – “we should avoid the worst possible misery for everyone” – is explicitly about the feeling of suffering. But this *central* focus on feelings in objective matters is a mistake. It takes what should be an objective domain of enquiry (morality) and reduces it to questions of “how do you feel?” or “how do we feel?” and so on. Now very often our feelings of pain or joy are indeed relevant. But are feelings the best guide in all cases? Could we formulate moral systems without these axioms? Let us consider other objective domains of enquiry and the relationship there between knowledge creation (i.e: the solving of problems) and the existence of “foundations” or axioms.
In physics there exist postulates for various reasons. So Einstein “built” special relativity upon two postulates: the speed of light is constant for all observers and the laws of physics are the same for all observers. But this hardly helps with thermodynamics. And large parts of quantum theory were created to solve problems without being concerned about the postulates of special relativity. That’s physics. As to mathematics – well, there is the preeminent example of a domain where axiomatic systems rule the day. Yet Gödel showed that in mathematics no consistent set of axioms (rich enough to include arithmetic) can ever settle all mathematical problems.
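The Gödel point can be put a little more precisely. This is the standard, informal statement of the first incompleteness theorem, not anything peculiar to this essay:

```latex
% First incompleteness theorem (informal statement):
% for any consistent, effectively axiomatizable theory $T$
% that includes basic arithmetic, there is a sentence $G_T$ with
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
% So $G_T$ is undecidable in $T$: no fixed set of axioms,
% however carefully chosen, settles every mathematical question.
```

Adding $G_T$ as a new axiom does not escape the theorem: the enlarged system has its own undecidable sentence, so no finished foundation is ever reached.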
So in physics: not everything *follows from* the two postulates of special relativity. And in mathematics it is provably the case that we cannot prove everything from any given set of axioms. So much for axiomatic systems being needed to create knowledge and solve problems. Instead of a focus on axioms, the truth is that in all cases creativity is how we find solutions. It does not happen via derivation. If this is true in mathematics and physics – that the majority of what we know *cannot be derived* from a fixed set of axioms – why should we think it possible in morality?
As to Sam’s two premises – I have no great criticisms against morality being concerned with the problems of conscious creatures, nor against the claim that we need to avoid the worst possible misery for everyone. But I’ve no criticisms against Einstein’s postulates either, or indeed against many of the best ideas. That’s why they’re best. I just don’t ever elevate my best ideas to foundations or dogma, nor regard them as any kind of “necessary starting point”. So while I lack “coherent criticisms” of Sam’s axioms, they’re not necessary as a foundation or a starting point for any moral discussion. They’re just useful if our interlocutor tries to assert that x is better than y even when x causes lots more suffering. Or that feelings never matter at all. If we actually tried to build a system of ethics upon them, we’d be talking about suffering and feelings constantly. We’d descend into subjective debates about subjectivity.
We don’t need foundations – just claims that remain tentative. As indeed Einstein’s postulates in special relativity are. I cannot conceive how Einstein’s postulates might be false (in our actual universe). They must be true, it seems to me, given what else I know. Likewise “the worst possible misery for everyone is bad” is an excellent critique against those who would push a moral relativism. There is no argument I know of against that claim, so I cannot conceive of how it might be false.
But now from here what do we do? If this is the starting point, where then? Do we move left or right, north or south away from the worst possible misery? While we agree we must move – is it a coin toss? If not what should we do? That’s the real moral question that the foundation simply cannot help with.
Sam’s foundational claims may seem unproblematic. But then so too did the claims of the early Church scholars who laid the foundation that “Human life begins at the moment of conception”. In both cases the mistake is the same: deriving consequences from firm foundations isn’t the way problem solving works. The way forward is in rejecting dogma and embracing fallibilism.
“Effective altruism is about answering one simple question: how can we use our resources to help others the most?” – The first sentence at https://www.effectivealtruism.org
Altruism isn’t generosity. Altruism is about acting specifically for others at some cost to yourself. There is sacrifice involved. Many people think sacrifice is good. If you give a lot to a poor person – that’s great. But if you give to the poor until it starts to hurt, so you cannot afford the latest iPhone, that’s even better. If you’re forced to go without “frivolous things” you are virtuous, on this moral take. And the more you go without in your quest to help others – the better. There’s a religious asymptote we are admonished to pursue here. As Jesus Christ is said to have done himself by sacrificing his whole life, and as he implored his followers in Luke 18:22: “Sell everything you have and give to the poor, and you will have treasure in heaven. Then come, follow me.” So that’s the very best you can do: be as altruistic – selfless – as possible. Give it away, and the more it hurts, the more moral you are. But most of us can only manage a little altruism. So we're a little better than those who are not altruistic at all. Right?
Altruism goes beyond mere generosity. As the opening sentence at effectivealtruism.org implores us: how can we use our resources to help others the most? Others. Helping yourself isn’t really a part of the picture. That’s selfish. So long as you have just enough – well, that’s optimal. Indeed to help others the most means, logically, helping yourself the least. Well – so long as you’re physically able to keep helping others, everything else can go by the wayside. It’s Jesus at his best.
There was a complaint made by Christopher Hitchens one time about Mother Teresa. He said it wasn’t that she loved the poor so much as that she loved poverty. There’s a sense in which the new "Effective Altruism" (EA) movement suffers from this too. The "take action" section of their website is about giving money to their designated charities. To give to those less well off – typically via organisations that address poverty. So the focus is on poverty. But we shouldn’t love poverty. We should hate it and want to eradicate it, not merely try to alleviate some of it. How can we do this? Should we give away money to the poor? Redistribute? Or should we create wealth as fast as possible by making progress? By all of us doing what we are, in our own ways, best at?
Let’s consider the case of the great Bill Gates. A very wealthy man – the co-founder of Microsoft – who made a lot of progress and who is also very generous. His charity is now his primary focus in life and so he does great work in helping those less fortunate improve their position. And he is solving problems. So he invests in actual cures – solutions – for things like malaria. (As an aside: I happen to agree with the sentiments of Yaron Brook: it might’ve been even better for the world had Gates stuck to making even more money through producing even more widgets and software rather than giving the money away. Imagine an alternate universe where Gates didn’t focus all his time and wealth on charity, and instead used them to direct the production of an even better next generation of Microsoft Windows – one that provided just the right boost to a computer at a medical institute that then found the cheap cure for malaria.) But Gates can give away much without hurting himself much. No doubt he’s having fun and that’s the main thing. But what about the rest of us?
If you’ve $3000 and want to help fix, say, malaria, what can you do? Here’s one thing: donate that money to a charity and buy a bunch of mosquito bed nets. Very well. Good. A focus on helping the individual. On other people you do not know and will never meet. Or what about this: donate the money to a pharmaceutical company working on treatments for malaria? I’d say: better. Most people would say: dubious. Those “evil” companies would treat your paltry $3000 as a joke and it’d barely cover the bar tab at their next company picnic. (Cynicism never much helped anyone.) Or what about this: invest the money in yourself and whatever you are good at, and work on solving your very own problems – whatever they are. Perhaps you’re a software developer. Perhaps you’re working on database software which is interesting enough but not your primary passion. But that $3000 – maybe you just invest it in giving yourself a few weeks away from the office, on sabbatical, where you can focus solely on figuring out how to improve the accuracy of 3D modelling in a computer game you’re working on in your spare time. You solve the problem. Now the thing is: the growth of knowledge is unpredictable. Your improved 3D modelling technique just might be the kind of thing pharmaceutical companies need. Maybe they buy your little bit of code for $300,000 and you can quit your other job and focus solely on computer games for a while. Oh, and that code the pharmaceutical company bought? It was used to model drugs, and a cure for malaria was found 5 years sooner than it otherwise would have been. And you were instrumental in this in a way you wouldn’t have been had you donated the money to nets.
I am not saying: “stop the nets!” I am saying sacrificing yourself, your money, your time is not inherently the highest moral good. We’ve been blinded by the supposed moral good of altruism. John 8:12: “When Jesus spoke again to the people, he said, "I am the light of the world. Whoever follows me will never walk in darkness, but will have the light of life."” Sometimes that light is so bright as to be blinding. Even to avowed atheists. The idea that sacrifice is good – that selflessness is good rather than a rational interest in your own self – is pervasive. And false. And ultimately – an evil. It is a cause of many problems and a solution to very few. And any solution that creates more, and worse, problems than it solves is no solution.
What is actually effective is solving problems and there are many ways problems are solved. Mostly the path to a solution cannot be predicted beforehand.
So what is moral here? Let us compare altruism to generosity and compassion.
Firstly, compassion (as others have observed, “empathy” is morally misleading also). Compassion lets you understand the suffering of others and think about how to help. (Empathy, on the other hand, asks you to feel something of their suffering.) Compassion, properly construed, can be seen as dispassionate. It’s appreciating that the suffering of someone else really exists, and it includes something of a desire to help find a solution. We’d want our surgeons to be compassionate – but not empathetic. The latter would be distracting. Empathy is moreover misleading because objective morality cannot be primarily about feelings. But nevertheless compassion can be useful in order to be motivated to act to help others, especially in those situations where those others seem not to be directly connected to us and so we cannot immediately expect some kind of reciprocity. (But perhaps we live in a community, and so compassion of this kind does indeed help us in the long run.)
Now generosity. Consider that people are often praised for being “generous” with their time. But no one is expected to be “altruistic” with their time. Indeed in that context you can see altruism as the morally dubious principle it is. We’ve only a finite amount of time each day and if anything is our own – it’s our time. So people who are generous with their time act out of compassion and love for their friends and family or others they care for in order to help. “How generous you’ve been!” people say if we spend some hours with them helping them on some project or to reach some goal. In those cases of generosity we – the giver – really are getting something in return. Good conversation with another person. Other people are great – the most valuable things in the world. Spending time with them is one of the most amazing gifts of life.
But altruistic? That would be something like: well, now I’ve given you all the time I want to – but I’ll give you some more because that’d be the noble thing to do. I need to sacrifice. This needs to hurt a little (or a lot). I’m not getting as much from you as I really want, but I’ll continue to give because, well, that’s altruism! Expecting nothing whatever in return but a warm glow of self satisfaction later. If you were a believer it’d be because God was watching and will reward you “with treasure in heaven”, as Jesus said. Altruists like Peter Singer argue for giving away some percentage of our wages or salaries to charity – just as Christian tithing prescribes, and as other religions similarly enjoin. But rarely do they say: when you've helped a person some, give 10% more of your time still. Or your free time each week, or your sleep – give 10% of that to someone who needs it more.
Let’s consider why money is regarded so differently to time in this case. It seems that being altruistic with your money is seen as moral in a way that being altruistic with your time is not. Here is a guess: because the prevailing view in the West for some millennia now has been that money is an evil – a corrupting influence. Rich people are rarely seen as good people until they give their money away (like Gates. Gates was an evil industrialist for most of his business life. Until he started giving away all his money. Now, in the eyes of many, he's made up for some of his evil richness.) Of course this is just another Christian hang up. 1 Timothy 6:10: “For the love of money is the root of all kinds of evil. And some people, craving money, have wandered from the true faith and pierced themselves with many sorrows.” And of course Jesus in Matthew 19:24: “Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God." Money isn’t good on this view. It’s a path to evil. So, it’s perfectly logical given those biblical premises that the conclusion follows as a matter of rigorous deduction: “give it away”. To give away your money must be a great virtue, it is thought. One of the highest moral goods. For money is an evil liable to corrupt. So you can be altruistic with it. Be generous with your time (for it is yours – you own it and have moral claim to it) but be altruistic with your money (for you’ve probably, somewhere in your history, inherited some by ill-gotten means. It was a sinful acquisition. You were born with some wealth – undeserved. So the only way to make penance is to give it up and approach the greater purity that is closer to poverty).
Altruism doesn’t expect anything in return. Indeed, to expect anything in return is itself a moral failing (on the altruistic view). Yet the exact opposite is true. Reciprocity, sometimes maligned, is actually an important means by which progress is made. People cooperate and find solutions faster when working together on the occasions they want to. So this anti-reciprocity (and, on careful examination, anti-cooperation) sentiment is another reason altruism is a kind of moral failing. With generosity we actually participate in reciprocity: we get as we give. But with altruism – nothing is ever expected in return. Indeed that would be to pollute altruism. The genuine altruist would reject all thank-yous – even if the recipient wanted to pay back the altruist, the altruist should never accept. Because then they’d get payment for services rendered. They'd turn into a capitalist! Especially if the reward was very great. But the generous benefactor (to be contrasted with the altruist)? Well, if one day the recipient arrived at the door with payment and interest? They’d take the gift and reinvest, and the cycle of generosity and wealth creation could continue.
Morality should not be regarded primarily as a focus on others. The focus should remain on finding solutions to problems. To answer: what should we do? The question is not “What should we do to help others?”; it is “What should we do?”. It simply is the case that making progress as fast as possible cannot involve altruism as any kind of deep principle – rather, the deep rule is more like its antithesis. Because when people focus on themselves and the problems they are genuinely passionate about, they make progress faster. And that’s our situation: to solve problems as fast as possible. And as a consequence, somewhere down the road, other people get helped as a by-product – and so much faster. Bill Gates never set out to solve problems in medicine and chemistry, physics, engineering and pollution and a thousand other things. He aimed to write software. That’s it. And people bought it. And he became very wealthy because so very many people found what he created useful and valuable. And many of his buyers went on to solve important problems using Microsoft machines in medicine, science, engineering and everything else, and as a consequence countless lives were saved and improved. All because Gates (being self interested) aimed for progress in one area, on problems he cared about, and created wealth. And that wealth bootstrapped more wealth creation and problem solving across the world. If we aim to solve problems and create wealth as great industrialists do and have done, then problems get solved so much faster. And more people get helped. And that’s so much better than other methods that solve fewer problems and help far fewer people.
We have to make progress as fast as possible. It’s the best thing for everyone. Giving wealth away – taking it from where progress is happening fastest and gifting it to where it’s not – hurts more people than it ever helps.
So if you think morality is about helping the most people as fast as possible, altruism is not that. It’s the opposite, and so by a utilitarian standard is actually evil. This is the moral blindspot and evil kernel at the heart of calls for “redistribution”. It steals from the children of the future to help some people today. It says: those who produce wealth have always done so by some corrupt means, and though they make some progress, that virtue cannot make up for the sin of wealth creation by ill-gotten means. Of course all the arguments that the wealth was not ill-gotten but heroically created – through discovering the knowledge that solves the problems people are willing to pay for – are ignored.
So if altruism is about helping other people as the EA people claim...then EA isn’t maximally altruistic in the long run. But creating wealth would be.
If we put aside altruism and utilitarianism as our moral compass then we can simply consider solving moral problems directly, not merely mitigating some of their effects. But moral problems require that solutions be found quickly so suffering can be alleviated for everyone. And this means: fast progress. The creation of knowledge. To do that we need time, and because “time is money” we need wealth. And we need to go faster. That needs improvements to technology. Better technology. And we need research – scientific and other kinds. All of this requires more wealth. Wealth has to be created: it’s not a finite amount to be split up and distributed more fairly. It is a thing people create and then solve problems with to the extent they know how. We must continually create more wealth to discover more knowledge and make progress fast enough so the rate of solution finding always outpaces the rate of problem encountering. If things slow and stagnate we risk it all. We risk everyone.
Consistent with every speech he gives, this is a wonderful talk by Douglas Murray. The center of the bullseye for Douglas is, as always, a concern about politics and existentially important cultural issues. He is not really doing philosophy (much less epistemology). So this may seem terribly unfair and pedantic. Nevertheless my interest is epistemology, and so hearing the grave intonations of Douglas Murray utter such a philosophical cliché so early on, I felt the need to say something on the matter. At around the 40 second mark of the speech above, Douglas says:
“It’s very easy to be a critic. It’s very hard to create. Yet it’s creation, not criticism that builds societies and indeed inspires people. And gives life meaning.”
The irony is that Douglas is one of the most brilliant critics of our time! His books are excellent critiques of much received wisdom, of politics and politicians, and of some of the most pressing global issues. The cliché I wish to highlight is this problem where people distinguish creation from criticism with a bright line and regard criticism as somehow bad – or easy – and creativity as only ever good. What Douglas means, I assume, and what I guess most people mean when they have a go at “criticism”, is something more like “insults”. Insults are not criticism. Nor is mere contradiction. “You’re wrong” barely makes the grade as actual criticism.
So what is criticism actually? Well, firstly it’s a creative act. Hence the way in which it cannot be divorced from creativity. (And creativity, for what it’s worth, can only become useful innovation when a careful application of criticism is applied. Not all flights of imaginative “creativity” are good.) Criticism is an explanation of how something is wrong or bad or deficient, and why. Of course this is the ideal case. Sometimes criticisms fall short and might be “bad explanations”, or only partially make the case that some idea or creative thing has a weakness or flaw. The criticism might not be valid. Or even when it is valid it might not be fatal, because there may be no alternatives on offer.
What Douglas does in the rest of his speech is criticize. He’s a critic! He criticizes politicians and political systems, he criticizes lots of ideas and practices. He criticizes whole cultures (even his own) – in short he is a grand critic in the great tradition of British orators. But he creates all these wonderful criticisms and defends them with good explanations. Some I disagree with, but the overwhelming majority are good observations of actual things going wrong, and how, and why. And that’s what great criticism is.
When Douglas devised this speech, or speeches like it, and wrote his books – he created. But I’m sure he made more than one draft. He criticized his own work. He was a critic of his own work. Did he find that easy, I wonder? I doubt it. And to come up with this long list of deeply insightful criticisms of European Union policies – did that not take great creativity?
Here is the key: someone who says, “Douglas, you’re wrong. You’re a fool” is not an actual critic. They’re something else. Absent further good explanation they’re just a mean person! Critics are not necessarily mean. And being a mean, cruel or insulting person doesn’t make you a critic.
So we need both. What builds society is indeed creation. But only when coupled with criticism. An imaginative architect can conjure the most fantastic design. “How wonderfully creative you are!” people may exclaim. But when the engineer arrives to say “That wall there is not physically possible. It simply cannot support the roof (for reasons x, y and z)” this criticism is neither bad nor easy. The engineer may have to call on specific pieces of physics and other sciences to create an explanation of how the design fails. Applying general principles to specific cases takes creativity. The creative design in this case may indeed have been the easier thing and, ultimately, the bad thing. Creativity uncoupled from criticism is just imagination. Creativity coupled with criticism brings innovation.
So let us alter Douglas’ introduction just a little,
“It’s not easy to be a critic. Here I stand, bravely pointing out some difficult truths of our time. It’s very hard to create such criticisms of ideas some people hold so dear. Yet it’s this kind of creative criticism that builds societies and indeed inspires people. And gives life meaning.”
I looked into Universal Basic Income (UBI) as it has been a hot topic recently. Here's what I found: it’s welfare. So it’s Socialism. There is absolutely nothing whatsoever new about this idea. It is money taken from the taxpayer and given, without conditions, to people who do not work.
Except it’s worse than normal socialist welfare because it applies to absolutely everyone regardless. So it’s closer to Communism.
Except it’s worse even than that. At least with communism people are ostensibly required to do something productive, even if most of the wealth they create is confiscated. With UBI you aren’t expected even to do that much. You don’t have to produce anything.
None of this would prevent people from actually being creative, of course. But it will eliminate one of the important motivations people have for being so. Namely: so they can produce something of value to others and gain income from doing so. If they gain income for doing nothing, at least some will decide not to produce anything of value. Not everyone. Some. This is a much more difficult life decision to make if your survival depends on your creating something of value.
UBI begins with the assumption that robots – AI – will take almost all the jobs that presently exist. UBI ignores that the only jobs that can possibly be taken by AI are ones that can be automated. This has always been the case. It is exactly the same situation we have been in since the loom or the computer first appeared. Yet unemployment hasn’t risen. It’s remained stable or even decreased. And living standards continue to rise wherever economic freedom is implemented.
People have moved from drudgery - work that can be automated - into creative work and continue to do so. We are all creative. Anyone who asserts otherwise simply doesn’t understand what a person actually is. We are creative entities. Not draught horses. A draught horse just pulls a heavy load. The "work" they do is very much the way physics defines work: it is the product of a force over some distance. The draught horse drags a load across the ground moving it from place to place. It is drudgery.
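The physics definition of work alluded to above can be written down directly. For a constant force applied along the direction of motion:

```latex
% Mechanical work: force applied over a distance.
W = \vec{F} \cdot \vec{d} = F d \cos\theta
% For the draught horse pulling its load straight ahead
% ($\theta = 0$), this is simply $W = Fd$: repetitive,
% fully specifiable in advance, and hence automatable.
```

The contrast is the point: work in this physical sense is mechanically specifiable, which is exactly why machines can take it over, while creative work is not.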
People are above that. We should all be moving away from draught horse type work (anything that can be automated) into creative work. Work that requires problems to be identified and then solved. Ugliness that needs to be made beautiful. Evil that needs to be made good. This is what we do.
If AGI arrives, all the better. AGI are people too. They won’t take "our" jobs. They’ll be people - like us. And the more people, the better. The more ideas. The more solutions. The faster we can address the problems of the world. And the problems of the world cannot be known in advance. We need to produce knowledge to create the wealth so we can fund the solutions of tomorrow. So we all need to be directed towards creative output. Not engaged in pulling loads like horses.
People are worried about job losses as industries change. But it has always been the case that industries change. "But now is different" they say. It's not. That too has been said before. Change and progress are inevitable and good in an open society – in a culture of criticism. People are, right now, particularly worried about industries like transportation. All those truck drivers, taxi and Uber drivers, train drivers, couriers, delivery people – anyone involved in driving as an occupation. The fear is this will all soon be automated – and all those people out of a job. And then: crisis. But people move from job to job all the time. Again: there is nothing new here. Indeed more and more people spend less and less time in a single job. Why people think truck drivers are especially unable to learn new skills, I do not know. They can – as much as anyone else. But we are told the crisis is coming. Millions of people out of work overnight. Crisis. Upheaval. Discontinuities.
Hence the need for UBI.
But here’s another solution if you really are concerned that truck drivers and the like are some special case. Actually, here is a solution regardless of where you stand on the "almost all people are soon to be automated out of their jobs" end-times scenario. If you are genuinely concerned about this – are a serious politician, say – then cut taxes now. Cut taxes on vehicles – now. Cut income tax – now. Allow those drivers – or indeed anyone engaged in a non-creative job – to save their money and not have it extracted by the government NOW. Let them save a “nest egg” so when something foreseen or unforeseen happens (like job loss) they’ve sufficient wealth saved in cash or property to support themselves. And they don't have to turn to the taxpayer for restitution. Take out the middle man. Why tax these people so heavily now, only to give it back to them when they become redundant? Let them save their own wealth now.
This then shifts the burden of “who is responsible for providing income to an individual?" from the collective back to the individual.
Socialist memes are deeply entrenched. Even when people begin to appreciate that communism (or at least some aspects of communism) was in error, and so begin to question and criticise these terrible dogmas, the dogmas rise up again in new forms, repackaged. Thus it is with “UBI” – it is no more than a repackaging of the old idea that people should earn the same amount of money regardless of what they do. But as I said – it is even worse than this because it does not even require that you work. It assumes people are not creative – but rather cogs in a great machine. We exist in order to perform labour (i.e: arduous work). But this is not our nature. We are creative. The Marxists are simply wrong that arduous, difficult work is what people do and is what creates wealth. No. What creates wealth is ideas. The rest can be automated. How can we move from a mindset of "people need to labour and sweat to earn money" to "people need to be creative and have fun and find solutions – leave the ‘labour’ to the robots"? We simply need to allow people more opportunity to be creative. And they will have this if they can keep the money they earn and not have it in large part confiscated by the government.
Creative people need freedom, and the only system that allows people to be free – the only economic and social system that has at its heart a principle not to use force, not to engage in theft of wealth created, and to allow people to trade (or not) with those they choose – is Capitalism. Only Capitalism explicitly has an injunction against the extraction, by force, of the wealth created by Alice in order to give it to Bob regardless of what Bob has done with his life.
UBI rejects all this. UBI takes from Alice the wealth she has created because of the pessimistic assumption that Bob simply cannot create wealth. It views Alice as somehow having gained her wealth through illegitimate means. As such - Bob, no matter what he has been up to, actually deserves some of it. And the only people who can ensure that Alice does indeed hand over the products of her labour are the government. And should Alice refuse, then men with guns will come to her door and demand her wealth. Wealth she might otherwise have used to create more wealth.
The alternative to this dystopian view of people and civilisation is an open society of optimism and kindness. People can create wealth. All of us. Even Bob. It is our nature. It is what we do: create. And as a community we enjoy and value the creations of others and engage in kind and generous exchanges of ideas, creations, services and goods. Not in equal measure – but this too is good: that some may succeed through extra hard work and great inspiration and rise up and change the whole civilisation. Others can find success in fertile little subcultures, which arise where everyone does their own little (but valuable!) thing and people trade with one another because they want to. Money is exchanged for goods desired, and the people we want to pay get paid. The only real factors that slow this wonderful flourishing of ideas are force and the threat of it. When criminals or the government come with weapons to take some of what we have created and use it to purchase goods and services we were not in the market for, to gift it to people we do not know – that’s wrong. That's theft. That's evil.
We people are, most of us, kind and generous, and had we wanted to gift the money to a charity or indeed to an individual in need, we are now unable to. Because what we had has been taken from us at the point of a gun by people who claim they know better.
UBI is not needed. What is needed is an understanding that people are creative. In particular they create wealth. And if they are allowed to keep the wealth they create through their hard work - creative or otherwise, then they will be able to save. And if they were permitted to save sufficiently, UBI wouldn’t be on the cards at all. It would be seen for what it actually is: theft.