In around 300 BC, Euclid's "Elements" was published, containing the oldest known proof of Pythagoras' theorem - that in a right-angled triangle "the square of the hypotenuse is equal to the sum of the squares of the other two sides". There are possibly more proofs of this theorem than of any other. Early on in high school, c^2 = a^2 + b^2 is usually given without proof, and students (sadly) "drill" special cases. Like: if c = 5 and b = 4, what is a? The theorem is assumed to be true. But what if an inquisitive student asks "How do we know it's true for all cases?" In shorthand we can say: it's been proven. And if the student asks how, we can provide any one of the approximately 367 proofs there are.
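The drill-style special case mentioned above takes a few lines to check. Here is a minimal sketch in Python (the function name is my own, and the numbers are the classic 3-4-5 triangle from the example):

```python
import math

def missing_side(c, b):
    """Given the hypotenuse c and one other side b of a right-angled
    triangle, return the remaining side a via a^2 = c^2 - b^2."""
    if c <= b:
        raise ValueError("the hypotenuse must be the longest side")
    return math.sqrt(c**2 - b**2)

print(missing_side(5, 4))  # the 3-4-5 triangle: prints 3.0
```

Of course, running this for any number of special cases is exactly what a proof is not: the proof covers all right-angled triangles at once.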
Now E = mc^2, also called the "mass-energy equivalence", can be considered a law of physics. (I just note here as an aside that E = mc^2 is the special case that "works" only in the rest frame of the object with mass; the fully general formula includes a momentum term. But this is irrelevant to my point, so we can skip past this complication.) Einstein was the first to discover it. But how? Well, via a proof. He derived it. How do we know E = mc^2? Because it's provably the case. How? Well, what Einstein did was begin with his two "postulates" (first: the laws of physics are the same for all observers, and second: the speed of light is constant for all observers). From these postulates he proved the so-called "Lorentz transformations". And from there the kinetic energy of particles can be derived and...long story short, he concluded - he proved - E = mc^2. So he didn't begin with the assumption that E = mc^2. He began with the two postulates. Everything else just followed as a matter of mathematical necessity.
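For readers who want the skeleton of that chain of reasoning, here is a compressed sketch (not Einstein's original 1905 argument, just the standard textbook route): the postulates yield the Lorentz factor, and expanding the relativistic energy for small velocities recovers both the rest energy and the familiar Newtonian kinetic energy:

```latex
% Lorentz factor, derived from the two postulates:
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% Relativistic energy of a body of mass m:
E = \gamma m c^2
% Taylor expansion for v \ll c:
E \approx m c^2 + \tfrac{1}{2} m v^2 + \cdots
% The second term is Newtonian kinetic energy; at rest (v = 0)
% only the first term survives, giving E = mc^2.
```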
A proof just means, in "critical rationalist" terms, that we have found a very good (exceedingly hard to vary) mathematical explanation of something (we may call it a theorem in mathematics, a conclusion in logic, or a law in physics - these are not strict terms; they are rough). Of course, if we find that the premisses upon which Euclid's "Elements" is based are false, or the premisses upon which even deeper theories of geometry are based are false, then we would refute Pythagoras' theorem. And if we found Einstein's postulates were false, this too could refute E = mc^2. But this whole - quite reasonable - notion that we should expect the things we know not to be perfect, and thus to be improvable, is not a prescription for failing to take seriously what we know (fallibly) today. We can use Pythagoras' theorem and E = mc^2 to solve actual problems today. So too with David Deutsch's proof of Turing's Principle in the context of quantum theory.
So when we say David Deutsch proved the Turing Principle(1) in 1985 - that "every finitely realizable physical system can be simulated to arbitrary precision by a universal model computing machine operating by finite means"(2) - we mean that, beginning with some uncontroversial assumptions from mainstream quantum theory, along with what was already known from classical computing as discovered by Turing, he was able to reach that conclusion. So he didn't begin there. He reached there on the assumption of known physics.
So to say something is "proved mathematically" is not to give it a special "less-than-fallible" status. But it is to give it its due as a good explanation. And the full explanation as to WHY the "principle" is to be regarded as a "law of physics" isn't just any old "good explanation", because it's an explanation not only in terms of natural language but also largely in terms of a mathematical deductive system. So that gets a special name: a proof.
Now, incidentally, the proof has a kind of two-way nature to it: on the assumption that quantum theory is true, any physical system can be simulated (to arbitrary precision) by a universal computer. But also: given the principle, all physical processes can be regarded as computations; that is to say, quantum theory satisfies the Turing Principle.
The significance of the proof (and so the principle) for the nature of personhood (and thus "cognition") is that any physical system - and that includes us, in the form of our brains and minds - can be simulated by a computer, or to "arbitrary precision" by a quantum computer. This does not mean the human brain is a quantum computer (many of us guess it is not, because the human brain is warm and hence noisy - an environment quantum computers appear not to like); we guess the human brain is just a classical computer. But it's a classical computer running a special kind of software. Whatever the case, if a quantum computer - or indeed just a classical computer, but this is beside the point - can simulate a working human brain, then it will be a working human brain - just made out of other stuff. It will be computing - performing the physical processes a brain does. Nothing spooky - nothing requiring new physics. And so it will be running a mind. It will have a mind, and thus it will be a person.
Perhaps take a moment to consider again that sentence above. It is rather a profound claim. Not only does it take the "spookiness" out of the "what is a mind?" question, it regards so-called "Artificial General Intelligence" or AGI as a person - and thus as possessing the full legal rights and moral status of a person. Anything less is genuinely a form of racism. Removing the spookiness also removes the caveats about an "analogy". That "the brain is a computer" is demonstrably the case, for the reasons stated above. It is no analogy. It is mainstream physics. The mind is a kind of software: it's what brains do. It's the abstract software running on the brain. A mind in a human brain already is a simulation: it is simulating the reality delivered to it by the sense data that it interprets. So a computer that is able to simulate a mind really is a mind.

Minds are abstract things. This is important because a quantum computer - or any computer - that simulates, let's say, a bullet has not created an actual bullet. Simulated bullets are not real physical bullets. There's a difference there. If a computer gamer is playing "Call of Duty" and shooting bullets from a gun, neither the gun nor the bullets are real. This should go without saying. But minds are not physical. They're abstract. So simulating them "to arbitrary precision" is to create them in reality. It's rather more akin to the person at the warehouse doing stocktake, adding up all the products by hand. They have many sums to calculate. If they do the calculation by hand with pen and paper, or by using an abacus, that's one thing: it's a real calculation. But if they then take the calculation and use a computer to do it, they are in a real sense "simulating" the action of the abacus or the ink-and-paper calculation. In either case it's a real calculation, and one is not more or less "real" than the other for being done with the hands or with a computer.
So we (humans, in the form of David Deutsch and anyone else who wants to try) can prove the "Turing Principle" and we can notice that as humans are made of atoms the principle applies to us just as it applies to any other physical system in reality. We can be simulated. But more than that we are computers too. But more than that: we are much more than computers. For more on that, see here: http://www.bretthall.org/physics-and-learning-styles.html or here http://www.bretthall.org/alien-intelligence.html
I've sometimes been told the principle is "just an assumption". It isn't. It's been proved. We are told time and time and time again that it's an analogy. It isn't. It's been proved. That doesn't mean it's "infallibly the case" - it just means it's a conclusion...mathematically derived from what is already known about physics. If I were asked "How is it proved?" I'd be unable to do better than the paper itself. And that runs to around 17 pages and has to date won the author a number of prestigious prizes in physics...so it's not something that can easily be summarized in a blog or on Twitter. So I refer the reader to the original paper.
One of those times above was the 2014 Edge Question "What scientific idea is ready for retirement?", where the author argues for retiring "The brain is a computer". But actually this thesis seems to have few subscribers anyway: if one does a cursory search on Google for "brain is a computer" AND "neuroscience", we get stuff like this: faculty.washington.edu/chudler/bvc.html - and that's for kids. Now I am unfamiliar with the present state of actual neuroscience and the professional literature there, but if popular accounts are anything to go by, there is still a strain of mystical thinking lurking there. The truth is, the brain is mysterious - but not mystical. We know it must be a computer of some sort, given the Turing Principle - but we have almost no clue, not even the first clue, as to what the software is that runs on that computer. That - so far as we know, entirely unique - creative software that generates consciousness, the experience of free will and, most importantly, new explanations. We know there must be some code that can be captured in an algorithm. We just don't know what it is. That, of course, is another story.
(*) Note that "It's been proved" is on the assumption that quantum theory is true. So of course, if quantum theory is refuted, then the proof is worthless. Like any proof, the soundness/truth of the conclusion is only as good as the premisses one begins with. Now I can imagine scenarios where quantum theory is, technically, refuted, but the newer, improved, deeper theory has quantum theory as a limiting case - in which case the proof may still indeed be valid.
(1) Note that what the principle is called is a matter of some confusion. David Deutsch refers to the principle as the "Turing Thesis" in his original paper and the "Turing Principle" elsewhere. Some - like the mathematicians Roger Penrose and Robin Gandy - have insisted that Alonzo Church conjectured (guessed!) the same thing and so have called the same principle the "Church-Turing Principle", and still others have suggested it be called the "Church-Turing-Deutsch Principle", as David actually proved the conjecture beginning with quantum theory as a premise. The upshot of that was that computer science then became a branch of physics, because computers were no longer the ideal mathematical objects supposed by Turing (or Church, for that matter) but rather real physical objects that obeyed the physical laws known as quantum physics. And of course they must, because all computers are made of matter and not Platonic ideals.
(2) Note that this is not the way the principle is put in the original paper found here: http://www.daviddeutsch.org.uk/wp-content/deutsch85.pdf I have changed the phrasing. The original formulation is "every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means". David knows better than most that "perfectly" isn't correct and, though I cannot find it, I recall a tweet exchange between himself and quantum physicist Michael Nielsen on exactly this point.
Somebody asked for some comments about this video: https://youtu.be/OoIcsj9ysvs . It is an attempt to criticise the optimism of people like David Deutsch - though the more explicit target seems to be Steven Pinker. It is not an easy watch for people who are familiar with optimism. Not because there is any substance to what the speaker says, but because it is (a) frustrating and (b) one knows this sort of thing will be latched onto by many. It purports to be a reasonable analysis. It is anything but. Here are some quick responses, because something longer, I am afraid, isn't worth the time. In line with David Deutsch's own approach to these things, time is far more fruitfully spent working on solutions - and therefore taking an actually optimistic approach - than dealing with all the false ideas out there. This was just a litany of false ideas. But the motivation behind them I must address. The speaker is a socialist, so he despises capitalism (because he doesn't understand it). That is the tone and that is the motivation. So he must attack people who see the great good that freedom in law and in markets (i.e.: capitalism) does for people, while socialism tends towards severe shortages and a lack of growth in both societal and individual health. But let's move onto specifics:
The speaker (Roland Paulsen) has concerns about what he calls “reality”, but by this he means inequality. The concern is completely misplaced. But that’s an anticapitalist, antifreedom socialist for you.
I concede: from the perspective of a continental European, one may well think things are getting worse. In much of *Europe* they are. Also unsurprising for a sociologist to think things are getting bad. In that profession...perhaps.
He says means are not reality. He's wrong. Means are reality. He is actually worried about the standard deviation – that is indeed getting bigger. But, and I’ve made this point many times before, inequality could get way worse while everything in fact gets way better. E.g.: in the distant future the poorest people could be multi-billionaires who all own their own islands and get Amazon to deliver stuff by drone, while the richest people ($10^20-aires) own entire planets and 3D print literally anything they want…from fusion reactors to food. The inequality will be way, way greater than today (when the poorest people have close to zero and the richest are mere billionaires).
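To make the arithmetic of that point explicit, here is a small sketch with made-up illustrative figures (the dollar amounts are hypothetical, chosen only to mirror the scenario above): both the absolute gap and the ratio between richest and poorest explode, even though the poorest person ends up a billionaire.

```python
# Hypothetical wealth figures, in dollars, for the scenario above.
today = {"poorest": 1e3, "richest": 1e9}
future = {"poorest": 1e9, "richest": 1e20}

for era, w in (("today", today), ("future", future)):
    gap = w["richest"] - w["poorest"]      # absolute inequality
    ratio = w["richest"] / w["poorest"]    # relative inequality
    print(f"{era}: gap = {gap:.3g}, ratio = {ratio:.3g}")

# Inequality by either measure is vastly greater in the future
# scenario - yet the poorest person there is as rich as the
# richest person today. Everyone is better off.
```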
He says we shouldn't be concerned about the averages anyway - say, the average increase in income in a nation - because:
Income might remain the same (as he says) for four decades - but this is silly. A person on the average income four decades ago isn't earning the same today. His point is that in some places the average doesn't change. But who cares? The people on the average income, or at the lower level, are NOT the same people. And besides: their *quality of life* is far, far better. Those people on the same income 40 years ago did not have:
-Access to good medical care
-The internet and all the world’s information (for goodness sake!)
-CHEAPER food and energy and so on as a percentage of their wage
-Greater mobility. The people at the average or median or even bottom *do not remain there* in capitalism.
-Far more leisure time.
Only under non-capitalist systems are people *condemned to poverty*. Under capitalism people are free to change jobs and to create and earn more by working more. Under socialism, working more has no effect on how much you earn. So his concern about “distributions” is just the usual socialist academic misconception and hatred of wealth. Any amount of “inequality” in income is seen as evil. But as I’ve just explained: everyone might be massively more wealthy than people today, and people like that guy will still say inequality is evil. Period. But it’s not. Only absolute poverty is – and it’s declining.
People really are not starving like they once were. No one in the USA will ever starve.
His stuff on medicine was ridiculous – people are getting healthier and happier. All such studies attempt to measure things that cannot possibly be measured (say, the nebulous “well-being”). That section is pseudoscientific nonsense. Those happiness indices are, again, a socialist trope resting on leading questions. It’s not science, and it gives social science the bad name it has. It’s bad sociology, where the answer is known before the questions are asked. The researchers are motivated: they *want* rich nations to be more depressed and poorer nations to be happy. So what questions do they ask? #notscience
Extremely dishonest stuff about Mao towards the end. Tries to say that there were some good things about Mao. Now we’re into actual evil. Nothing more worth responding to.
So does he have a point? Not even half a point, sorry.
Eric Weinstein is a very intelligent person. I'm on his side in many things (but absolutely not on the top-down control he simply assumes MUST be a part of the global economy and "free" market - see here for example: https://www.edge.org/conversation/lee_smolin-stuart_a_kauffman-zoe-vonna_palmrose-mike_brown-can-science-help-solve-the#21964). In a sense he may even have earned much of his fame by calling for an "economic Manhattan project". If only he meant: let's have a huge government program to get government out of the business of tinkering with economies - that'd be great. But actually? He means something more like the opposite. Whatever the case, Eric does have lots to say and many people listen. He can't be dismissed as a postmodernist - but one could understandably make that mistake, because his use of the English language has a style whose idiosyncrasies eschew clarity. We all use language idiosyncratically, of course - but the desire to almost continually invent new words, or new usages for old ones, is a strong impulse in some. Eric is simply a prominent example.
Take the talk by Eric at https://bigthink.com/videos/eric-weinstein-capitalism-is-in-trouble-socialist-principles-can-save-it (the transcript is available there). At one point he says, “Now the danger of that is that what we didn’t realize is that our technical training for occupations maneuvers the entire population into the crosshairs of software.” Translation: everyone might lose their jobs to computers. Now, aside from the fact this is flat out false (creativity, from what we know, is a feature unique to people and will always be needed), it’s just expressed in such tortuous, clunky language as to muddy the meaning. Anyway, that’s just one example. False philosophy shrouded in jargon. It’s not postmodernist nonsense, but it’s flirting with the style if not the substance. The whole talk, by the way, is an appeal for power and influence. He wants scientists to have more authority and bemoans the fact that politicians come from “softer disciplines”. He’s upset and demands change. He says, “One of the things that I find most perplexing is that our government is still populated by people who come from sort of softer disciplines if you will. Whether that’s law, whether these people come from poli-sci, very few people in government come from a hard core technical background. There are very few senators who could solve a partial differential equation or representatives who could program a computer.” That’s clear and lacks jargon! He should stick with that style (though the substance this time is terrible: no thought is given to how useful those skills are in creating legislation or making decisions - the task of politicians. There are probably very few engineers or scientists who could effectively debate, consult widely, speak clearly and publicly, and simultaneously manage large groups of people. Eric himself may be one of the rare exceptions, granted). I digress:
The following is meant purely as friendly fun (ok, to make a point and help out allies, perhaps). Again, Eric makes some excellent points when speaking and writing. Yet I think sometimes those points would be so much more powerful if only they were clearer. To that end, here is the beginnings of a generator for creating your own Eric-sounding neologisms. I was going to name it after him, or make fun of his name - but that seemed to cross a line. So, instead, the name of my generator commits the very sin it perpetuates.
Here's my advertisement:
Do you have something insightful to say but want to cloud it in strange idiosyncratic nomenclature? Or perhaps you've no real point to make, and just feel a little "postmodern"? With the idiosyncratic neologism generator you can cloak any clear message in obtuse usage of otherwise pedestrian words. Take any term from the left hand column and pick any term on the right - it's that easy. Maybe you want to observe that sometimes people tend to waste some of their time by making silly bordering-on-mean blog posts about famous intellectuals? Need a term for that? What about...hmmm..."inversion gimmickry". And right there, you're done. Just take a pinch of column A and a random sprinkling of column B and you can spice up any vanilla concept. Turn any mundane turn of phrase into something cosmically momentous now!
"I just wanted to complain about being unable to find a date on Friday night, but didn't want to take any personal responsibility. Now I can attribute it to "ubiquitous dispersed network effects" and I feel so much better about myself!" - Terry, 19, Dubbo
"I've always been a communist but because promoting those ideas is so very difficult I just complain about "amplified late capitalism" instead and people now nod judiciously! Thanks idiosyncratic neologism generator!" Jill, 52, New York
"My final undergrad project counted how many times the word "man" was used in Time magazine between 1989 and 2016. But "How many times the word "man" was used in Time magazine between 1989 and 2016" as a title was rejected by my supervisor. With "Institutional hamiltonian calculus of gendered language in popular media: 1989 to 2016" I was able to get Honours First Class!" - Summer Clouds, No age, Citizen of the Universe
I recall watching this speech Sam Harris gave at the Aspen Ideas Festival well over a decade ago now: https://youtu.be/-j8L7p-76cU I found it amazing then, and I must have watched it more than a dozen times since. I recall wanting to learn to speak like that. Even now I haven’t seen a clearer, better humoured or more forceful defence of reason against faith. There’s a strong sense in which I feel I owe Sam some gratitude for having taught me to talk. His style is an ideal to move towards: speak clearly, with good humour, and concede where concession is warranted. In that speech you can hear for yourself all the ways in which Sam’s most vociferous detractors and opponents have lied about his positions and misrepresented his motivation - and where he concedes religion can indeed be very useful and consoling, and more besides.
Sam has had to defend himself many times against the charge that he’s unusually or even unfairly focussed on religion - and one religion in particular. He has been absolutely right to respond that in fact this doesn’t quite get to the heart of what truly motivates him. What Sam is concerned about rather often - and this comes through in his talk and in his books - is dogma. Religion is not the centre of the bullseye (even if it’s on the dart board). The central concern is dogma. It is just that religion is, rather typically, one of the largest, most robust repositories of dogma. And this focus on dogma exists precisely because it can cause such harm - and we often don’t realise how until the harm is or has been done. Almost always it’s unintentional. A great example Sam uses is how the Catholic Church teaches that “human life begins at the moment of conception”. This seems quaint - even sweet and good. In one sense it’s true (zygotes are alive), but on the other hand zygotes are not people. And Sam observes that if the argument is “they are potential people” then, given the right conditions, so is any cell in your body. So when one scratches one's nose, on this view, one is engaged in a veritably genocidal level of murder of “potential humans”.
But this Catholic doctrine – this dogma – is a foundational claim. It is from here that they build moral structure – they reach conclusions about the rightness and wrongness of many other things; for example, abortion and the use of stem cells. This foundational claim about human life* beginning at conception does real harm. But the harm isn’t due primarily to the fact that it’s false (and it is false - zygotes are not people; at a minimum, a nervous system is needed to encode the knowledge that makes a person a person) - it’s damaging because the church will not even consider the possibility that there might be a way to learn more on this topic or to consider it differently. Because it’s a foundational claim. It’s Church doctrine. A dogma. And this is why it can result in terrible suffering in ways the early church scholars could never possibly have foreseen. For in the context of a world that could treat actual suffering of actual people if only we could use embryonic stem cells, we have a problem. (Now, by the way, I don’t think it’s at all clear WHEN a zygote becomes a person. I know it’s not one. Nor would a blastocyst be one. But an embryo? I don’t know. This is a sorites problem of real consequence.) So the moral foundation “human life begins at the moment of conception” - good though it sounds as a way of enshrining the sanctity of life - turns out, in the context of modern medical procedures, to cause real harm. Or in the case of abortion: where early-term abortions are made unavailable to victims of rape, the foundation would seem to be a perfect engine of suffering.
So Sam is absolutely right to root out and condemn dogma. Dogmas are irrational. But dogma is how religions build belief systems. They build upon axiomatic claims – foundations. It is purported to be somewhat like a mathematical system: here are the axioms; now let’s see what follows. Of course nothing can ever show the axioms are true, and indeed they may be false. In such a case, what follows is liable to be false also. Some mathematicians - it must be said - will sometimes admit (in better moods) that they aren’t interested in what is actually true in reality; rather, just in what follows as a matter of logical necessity. Quite right too!
(*Note: by “human life” the Catholic Church means that the life of the zygote is the life of a human. They mean: there are human souls in those zygotes.)
So Sam rejects dogma because it’s dogma. He understands that dogmas are those things we cannot improve if we take seriously the idea that they must be true. He’s focussed on that. And I couldn’t agree more. But what is the difference between a foundation (even a weak one) and a dogma? Moreover, what exactly follows from Sam’s axioms? Can they be the basis of some nascent, all-encompassing moral system of a kind?
One thing we might observe is that if morality is about “the well-being of conscious creatures” then this reduces morality to a domain of feelings. Indeed Sam’s other axiom - “we should avoid the worst possible misery for everyone” - is explicitly about the feeling of suffering. But this *central* focus on feelings in objective matters is a mistake. It takes what should be an objective domain of enquiry (morality) and reduces it to questions of “how do you feel?” or “how do we feel?” and so on. Now, very often our feelings of pain or joy are indeed relevant. But are feelings the best guide in all cases? Could we formulate moral systems without these axioms? Let us consider other objective domains of enquiry and the relationship there between knowledge creation (i.e.: the solving of problems) and the existence of “foundations” or axioms.
In physics, postulates exist for various reasons. So Einstein “built” special relativity upon two postulates: the speed of light is constant for all observers, and the laws of physics are the same for all observers. But this hardly helps with thermodynamics. And large parts of quantum theory were created to solve problems without any concern for the postulates of special relativity. That’s physics. As to mathematics – well, there is the preeminent example of a domain where axiomatic systems rule the day. But Gödel showed that in mathematics no consistent set of axioms (rich enough to include arithmetic) can ever solve all mathematical problems.
So in physics: not everything *follows from* the two postulates of special relativity. And in mathematics it is provably the case that we cannot prove everything from any given set of axioms. So much for axiomatic systems being needed to create knowledge and solve problems. Instead of a focus on axioms, the truth is that in all cases creativity is how we find solutions. It does not happen via derivation. If this is true in mathematics and physics - if the majority of what we know *cannot be derived* from a fixed set of axioms - why should we think it possible in morality?
As to Sam’s two premises - I have no great criticisms of morality being concerned with the problems of conscious creatures, or of the claim that we should avoid the worst possible misery for everyone. But then I’ve no criticisms of Einstein’s postulates either, or indeed of many of the best ideas. That’s why they’re the best. I just don’t ever elevate my best ideas to foundations or dogmas, nor indeed regard them as any kind of “necessary starting point”. So while I lack “coherent criticisms” of Sam’s axioms, they’re not necessary as a foundation or a starting point for any moral discussion. They’re just useful if our interlocutor tries to assert that x is better than y even when x causes lots more suffering, or that feelings never matter at all. If indeed we tried to build a system of ethics upon them, we’d be talking about suffering and feelings constantly. We’d descend into subjective debates about subjectivity.
We don’t need foundations – just claims that remain tentative. As indeed Einstein’s postulates in special relativity are. I cannot conceive how Einstein’s postulates might be false (in our actual universe). They must be true, it seems to me, given what else I know. Likewise, “the worst possible misery for everyone is bad” is an excellent critique of those who would push a moral relativism. There is no argument I know of against that claim, so I cannot conceive of how it might be false.
But now from here what do we do? If this is the starting point, where then? Do we move left or right, north or south away from the worst possible misery? While we agree we must move – is it a coin toss? If not what should we do? That’s the real moral question that the foundation simply cannot help with.
Sam’s foundational claims may seem unproblematic. But then so too did the claims of the early Church scholars who laid the foundation that “Human life begins at the moment of conception”. In both cases the mistake is the same: deriving consequences from firm foundations isn’t the way problem solving works and the way forward is in rejecting dogma and embracing fallibilism.
“Effective altruism is about answering one simple question: how can we use our resources to help others the most?” – The first sentence at https://www.effectivealtruism.org
Altruism isn’t generosity. Altruism is about acting specifically for others at some cost to yourself. There is sacrifice involved. Many people think sacrifice is good. If you give a lot to a poor person – that’s great. But if you give to the poor until it starts to hurt, so that you cannot afford the latest iPhone, that’s even better. If you’re forced to go without “frivolous things” you are virtuous, on this moral take. And the more you go without in your quest to help others – the better. There’s a religious asymptote we are admonished to pursue here. As Jesus Christ is said to have done himself by sacrificing his whole life, and as he implored his followers in Luke 18:22: “Sell everything you have and give to the poor, and you will have treasure in heaven. Then come, follow me.” So that’s the very best you can do: be as altruistic – selfless – as possible. Give it away, and the more it hurts, the more moral you are. But most of us can only manage a little altruism. So we're a little better than those who manage none. Right?
Altruism goes beyond mere generosity. As the effectivealtruism.org starting sentence implores us: how can we use our resources to help others the most? Others. To help yourself isn’t really a part of the picture. That’s selfish. So long as you have just enough – well, that’s optimal. Indeed to help others the most means, logically, helping yourself the least. Well - so long as you’re physically able to keep helping others, everything else can go by the wayside. It’s Jesus at his best.
There was a complaint Christopher Hitchens once made about Mother Teresa: it wasn’t that she loved the poor so much as that she loved poverty. There’s a sense in which the new "Effective Altruism" (EA) movement suffers from this too. The "take action" section of their website is about giving money to their designated charities - giving to those less well off - typically via organisations that address poverty. So the focus is on poverty. But we shouldn’t love poverty. We should hate it and want to eradicate it, not merely try to alleviate some of it. How can we do this? Should we give away money to the poor? Redistribute? Or should we create wealth as fast as possible by making progress? By all of us doing what we are, in our own ways, best at?
Let’s consider the case of the great Bill Gates. A very wealthy man - the founder of Microsoft - who made a lot of progress and who is also very generous. His charity is now his primary focus in life and so he does great work in helping those less fortunate improve their position. And he is solving problems. So he invests in actual cures – solutions – for things like malaria. (As an aside: I happen to agree with the sentiments of Yaron Brook: it might’ve been even better for the world had Gates stuck to making even more money through producing even more widgets and software rather than giving the money away. Imagine an alternate universe where Gates didn’t focus all his time and wealth on charity, and instead used that time and wealth to direct the production of an even better next-generation Microsoft Windows – one that provided just the right boost to the computer at a medical institute that found the cheap cure for malaria.) But Gates can give away much without hurting himself much. No doubt he’s having fun and that’s the main thing. But what about the rest of us?
If you’ve $3000 and want to help fix, say, malaria, what can you do? Here’s one thing: donate that money to a charity and buy a bunch of mosquito bed nets. Very well. Good. A focus on helping the individual. On other people you do not know and will never meet. Or what about this: donate the money to a pharmaceutical company working on treatments for malaria? I’d say: better. Most people would say: dubious. Those “evil” companies would treat your paltry $3000 as a joke and it’d barely cover the bar tab at their next company picnic. Cynicism never much helped anyone. What about this: invest the money in yourself and whatever you are good at, and work on solving your very own problems – whatever they are. Perhaps you’re a software developer. Perhaps you’re working on database software which is interesting enough but not your primary passion. But that $3000 – maybe you just invest it in giving yourself a few weeks away from the office, on sabbatical, where you can focus solely on figuring out how to improve the accuracy of 3D modelling in a computer game you’re working on in your spare time. You solve the problem. Now the thing is: the growth of knowledge is unpredictable. Your improved 3D modelling technique just might be the kind of thing pharmaceutical companies need. Maybe they buy your little bit of code for $300,000 and you can quit your other job and focus solely on computer games for a while. Oh, and that code the pharmaceutical company bought? It was used to model drugs, and a cure for malaria was found 5 years sooner than it otherwise would have been. And you were instrumental in this in a way you wouldn’t have been had you donated the money for nets.
I am not saying: “stop the nets!” I am saying that sacrificing yourself, your money, your time is not inherently the highest moral good. We’ve been blinded by the supposed moral good of altruism. John 8:12: “When Jesus spoke again to the people, he said, ‘I am the light of the world. Whoever follows me will never walk in darkness, but will have the light of life.’” Sometimes that light is so bright as to be blinding. Even to avowed atheists. The idea that sacrifice is good – that selflessness is good rather than a rational interest in your own self – is pervasive. And false. And ultimately – an evil. It is a cause of many problems and a solution to very few. And any solution that creates more, and worse, problems than it solves is no solution.
What is actually effective is solving problems and there are many ways problems are solved. Mostly the path to a solution cannot be predicted beforehand.
So what is moral here? Let us compare altruism to generosity and compassion.
First, compassion (as others have observed, “empathy” is morally misleading also). Compassion lets you understand the suffering of others and think about how to help. (Empathy, on the other hand, asks you to feel something of their suffering.) Compassion, properly construed, can be seen as dispassionate. It’s appreciating that the suffering of someone else really exists, and it includes something of a desire to help find a solution. We’d want our surgeons to be compassionate – but not empathetic. The latter would be distracting. Empathy is misleading, moreover, because objective morality cannot be primarily about feelings. But nevertheless compassion can be useful in motivating us to act to help others, especially in those situations where those others seem not to be directly connected to us and so we cannot immediately expect some kind of reciprocity. (But perhaps we live in a community, and so compassion of this kind does indeed help us in the long run.)
Now generosity. Consider that people are often praised for being “generous” with their time. But no one is expected to be “altruistic” with their time. Indeed in that context you can see altruism as the morally dubious principle it is. We’ve only a finite amount of time each day and if anything is our own – it’s our time. So people who are generous with their time act out of compassion and love for their friends and family or others they care for in order to help. “How generous you’ve been!” people say if we spend some hours with them helping them on some project or to reach some goal. In those cases of generosity we – the giver – really are getting something in return. Good conversation with another person. Other people are great – the most valuable things in the world. Spending time with them is one of the most amazing gifts of life.
But altruistic? That would be something like: well, now I’ve given you all the time I want to – but I’ll give you some more because that’d be the noble thing to do. I need to sacrifice. This needs to hurt a little (or a lot). I’m not getting as much from you as I really want, but I’ll continue to give because, well, that’s altruism! Expecting nothing whatever in return but a warm glow of self-satisfaction later. If you were a believer it’d be because God was watching and will reward you “with treasure in heaven” as Jesus said. Altruists like Peter Singer argue that we should give away some percentage of our wages or salaries to charity - just as Christian tithing is intended to do and other religions similarly prescribe. But rarely do they say: when you’ve helped a person some, give 10% more of your time still. Or any free time you have each week, or sleep - give 10% of that to someone who needs it more.
Let’s consider why money is regarded so differently to time in this case. It seems that being altruistic with your money is seen as moral in a way that being altruistic with your time is not. Here is a guess: because the prevailing view in the West for some millennia now has been that money is an evil – a corrupting influence. Rich people are rarely seen as good people until they give their money away (like Gates. Gates was an evil industrialist for most of his business life. Until he started giving away all his money. Now, in the eyes of many, he’s made up for some of his evil richness.) Of course this is just another Christian hang-up. 1 Timothy 6:10: “For the love of money is the root of all kinds of evil. And some people, craving money, have wandered from the true faith and pierced themselves with many sorrows.” And of course Jesus in Matthew 19:24: “Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God.” Money isn’t good on this view. It’s a path to evil. So, it’s perfectly logical given those biblical premises that the conclusion follows as a matter of rigorous deduction: “give it away”. Giving away your money, it is thought, must be a great virtue. One of the highest moral goods. For money is an evil liable to corrupt. So you can be altruistic with it. Be generous with your time (for it is yours – you own it and have moral claim to it) but be altruistic with your money (for you’ve probably, somewhere in your history, inherited some by ill-gotten means. It was a sinful acquisition. You were born with some wealth – undeserved. So the only way to make penance is to give it up and approach the greater purity that is closer to poverty).
Altruism doesn’t expect anything in return. Indeed, to expect anything in return is itself a moral failing (on the altruistic view). Yet the exact opposite is true. Reciprocity, sometimes maligned, is actually an important means by which progress is made. People cooperate and find solutions faster when working together on the occasions they want to. So this anti-reciprocity (and, really, on careful examination, anti-cooperation) sentiment is another reason altruism is a kind of moral failing. With generosity we actually participate in reciprocity: we get as we give. But with altruism – nothing is ever expected in return. Indeed, that would be to pollute altruism. The genuine altruist would reject all thank-yous – even if the recipient wanted to pay the altruist back, the altruist should never accept. Because then they’d get payment for services rendered. They’d turn into a capitalist! Especially if the reward was very great. But the generous benefactor (to be contrasted with the altruist)? Well, if one day the recipient arrived at the door with payment and interest? They’d take the gift and reinvest, and the cycle of generosity and wealth creation could continue.
Morality should not be regarded primarily as a focus on others. The focus should remain on finding solutions to problems. To answer: what should we do? The question is not “What should we do to help others?” It is “What should we do?” It simply is the case that making progress as fast as possible cannot involve altruism as any kind of deep principle; rather the deep rule is more like its antithesis. Because when people focus on themselves and the problems they are genuinely passionate about, they make progress faster. And that’s our situation: to solve problems as fast as possible. And as a consequence, somewhere down the road, other people get helped as a by-product – and so much faster. Bill Gates never set out to solve problems in medicine and chemistry, physics, engineering and pollution and a thousand other things. He aimed to write software. That’s it. And people bought it. And he became very wealthy because so very many people found what he created useful and valuable. And many of his buyers went on to solve important problems using Microsoft machines in medicine, science, engineering and everything else, and as a consequence countless lives were saved and improved. All because Gates (being self-interested) aimed for progress in one area, on problems he cared about, and created wealth. And that wealth bootstrapped more wealth creation and problem solving across the world. If we aim to solve problems and create wealth as great industrialists do and have done, then problems get solved so much faster. And more people get helped. And that’s so much better than other methods that solve fewer problems and help far fewer people.
We have to make progress as fast as possible. It’s the best thing for everyone. Giving wealth away - taking it from where progress is happening fastest and gifting it to where it’s not - hurts more people than it ever helps.
So if you think morality is about helping the most people as fast as possible, altruism is not that. It’s the opposite, and so by a utilitarian standard it is actually evil. This is the moral blindspot and evil kernel at the heart of calls for “redistribution”. It steals from the children of the future to help some people today. It says: those who produce wealth have always done so by some corrupt means, and though they make some progress, that virtue cannot make up for the sin of wealth creation by ill-gotten means. Of course, all the arguments that the wealth was not ill-gotten but heroically created through discovering the knowledge that solves the problems people are willing to pay for are simply ignored.
So if altruism is about helping other people as the EA people claim...then EA isn’t maximally altruistic in the long run. But creating wealth would be.
If we put aside altruism and utilitarianism as our moral compass then we can simply consider solving moral problems directly and not merely mitigating some of their effects. But moral problems require that solutions are found quickly so suffering can be alleviated for everyone. And this means: fast progress. The creation of knowledge. To do that we need time, and because “time is money” we need wealth. And we need to go faster. That needs improvements to technology. Better technology. And we need research - scientific and other kinds. All of this requires more wealth. Wealth has to be created: it’s not a finite amount to be split up and distributed more fairly. It is a thing people create and then solve problems with, to the extent they know how. We must continually create more wealth to discover more knowledge and make progress fast enough so the rate of solution finding always outpaces the rate of problem encountering. If things slow and stagnate we risk it all. We risk everyone.
Consistent with every speech he gives, this is a wonderful talk by Douglas Murray. The centre of the bullseye is for Douglas, as always, a concern about politics and existentially important cultural issues. He is not really doing philosophy (much less epistemology). So this may seem terribly unfair and pedantic. Nevertheless, my interest is epistemology, and so, hearing the grave intonations of Douglas Murray utter such a philosophical cliché so early on, I felt the need to say something on the matter. At around the 40-second mark of the speech above, Douglas says:
“It’s very easy to be a critic. It’s very hard to create. Yet it’s creation, not criticism that builds societies and indeed inspires people. And gives life meaning.”
The irony is that Douglas is one of the most brilliant critics of our time! His books are excellent critiques of much received wisdom, of politics and politicians, and of some of the most pressing global issues. The cliché I wish to highlight is this problem where people distinguish creation from criticism with a bright line and regard criticism as somehow bad – or easy – and creativity as only ever good. What Douglas, I assume, means, and what I guess most people mean when they have a go at “criticism”, is something more like “insults”. Insults are not criticism. And mere contradiction is not criticism. “You’re wrong” barely makes the grade for actual criticism.
So what is criticism, actually? Well, firstly it’s a creative act. Hence the way in which it cannot be divorced from creativity. (And creativity, for what it’s worth, can only become useful innovation when criticism is carefully applied. Not all flights of imaginative “creativity” are good.) Criticism is an explanation of how something is wrong or bad or deficient and why. Of course this is the ideal case. Sometimes criticisms fall short and might be “bad explanations”, or only partially make the case that some idea or creative thing has a weakness or flaw in some way. The criticism might not be valid. Or even when it is valid it might not be fatal, because there may be no alternatives on offer.
What Douglas does in the rest of his speech is criticize. He’s a critic! He criticizes politicians and political systems, he criticizes lots of ideas and practices. He criticizes whole cultures (even his own) – in short, he is a grand critic in the great tradition of British orators. But he creates all these wonderful criticisms and defends them with good explanations. Some I disagree with, but the overwhelming majority are good observations of actual things going wrong, and how, and why. And that’s what great criticism is.
When Douglas devised this speech, or speeches like it, and wrote his books – he created. But I’m sure he made more than one draft. He criticized his own work. He was a critic of his own work. Did he find that easy, I wonder? I doubt it. And to come up with this long list of deeply insightful criticisms of European Union policies – did that not take great creativity?
Here is the key: someone who says, “Douglas, you’re wrong. You’re a fool” is not an actual critic. They’re something else. Absent further good explanation they’re just a mean person! Critics are not necessarily mean. And being a mean, cruel or insulting person doesn’t make you a critic.
So we need both. What builds society is indeed creation. But only when coupled with criticism. An imaginative architect can conjure the most fantastic design. “How wonderfully creative you are!” people may exclaim. But when the engineer arrives to say “That wall there is not physically possible. It simply cannot support the roof (for reasons x, y and z)” this criticism is neither bad nor easy. The engineer may have to call on specific pieces of physics and other sciences to create an explanation of how the design fails. Applying general principles to specific cases takes creativity. The creative design in this case may indeed have been the easier thing and, ultimately, the bad thing. Creativity uncoupled from criticism is just imagination. Creativity coupled with criticism brings innovation.
So let us alter Douglas’ introduction just a little,
“It’s not easy to be a critic. Here I stand, bravely pointing out some difficult truths of our time. It’s very hard to create such criticisms of ideas some people hold so dear. Yet it’s this kind of creative criticism that builds societies and indeed inspires people. And gives life meaning.”
I looked into Universal Basic Income (UBI) as it has been a hot topic recently. Here's what I found: it’s welfare. So it’s Socialism. There is absolutely nothing whatsoever new about this idea. It is money taken from the taxpayer given without conditions to people who do not work.
Except it’s worse than normal socialist welfare because it applies to absolutely everyone regardless. So it’s closer to Communism.
Except it’s worse even than that. At least with communism people are ostensibly required to do something productive, even if most of the wealth they create is confiscated. With UBI you aren’t expected even to do that much. You don’t have to produce anything.
None of this would prevent people from actually being creative, of course. But it will eliminate one of the important motivations people have for being so. Namely: so they can produce something of value to others and gain income from doing so. If they gain income for doing nothing, at least some will decide not to produce anything of value. Not everyone. Some. This is a much more difficult life decision to make if your survival depends on your creating something of value.
UBI begins with the assumption that robots - AI - will take almost all the jobs that presently exist. UBI ignores that the only jobs that can possibly be taken by AI are ones that can be automated. This has always been the case. It is exactly the same situation we have always been in since the loom or the computer first appeared. Yet unemployment hasn’t risen. It’s remained stable or even decreased. And living standards continue to rise anywhere economic freedom is implemented.
People have moved from drudgery - work that can be automated - into creative work and continue to do so. We are all creative. Anyone who asserts otherwise simply doesn’t understand what a person actually is. We are creative entities. Not draught horses. A draught horse just pulls a heavy load. The "work" they do is very much the way physics defines work: it is the product of a force over some distance. The draught horse drags a load across the ground moving it from place to place. It is drudgery.
People are above that. We should all be moving away from draught horse type work (anything that can be automated) into creative work. Work that requires problems to be identified and then solved. Ugliness that needs to be made beautiful. Evil that needs to be made good. This is what we do.
If AGI arrives, all the better. AGI are people too. They won’t take "our" jobs. They’ll be people - like us. And the more people, the better. The more ideas. The more solutions. The faster we can address the problems of the world. And the problems of the world cannot be known in advance. We need to produce knowledge to create the wealth so we can fund the solutions of tomorrow. So we all need to be directed towards creative output. Not engaged in pulling loads like horses.
People are worried about job losses as industries change. But it has always been the case that industries change. "But now is different" they say. It's not. That too has been said before. Change and progress are inevitable and good in an open society - in a culture of criticism. People are, right now, particularly worried about industries like transportation. All those truck drivers, taxi and Uber drivers, train drivers, couriers, delivery people - anyone involved in driving as an occupation. The fear is this will all soon be automated - and all those people out of a job. And then: crisis. But people move from job to job all the time. Again: there is nothing new here. Indeed, more and more people spend less and less time in a single job. Why people think truck drivers are especially unable to learn new skills, I do not know. They can - as much as anyone else. But we are told the crisis is coming. Millions of people out of work overnight. Crisis. Upheaval. Discontinuities.
Hence the need for UBI.
But here’s another solution if you really are concerned that truck drivers and the like are some special case. Actually, here is a solution regardless of where you stand on the "almost all people are soon to be automated out of their jobs" end-times scenario. If you are genuinely concerned about this - are a serious politician, say - then cut taxes now. Cut taxes on vehicles - now. Cut income tax - now. Allow those drivers - or indeed anyone engaged in a non-creative job - to save their money and not have it extracted by the government NOW. Let them save a “nest egg” so that when something foreseen or unforeseen happens (like job loss) they’ve sufficient wealth saved in cash or property to support themselves. And they don't have to turn to the taxpayer for support. Take out the middle man. Why tax these people so heavily now, only to give the money back to them when they become redundant? Let them save their own wealth now.
This then shifts the burden of “who is responsible for providing income to an individual?" from the collective back to the individual.
Socialist memes are deeply entrenched. Even if people begin to appreciate that communism (or some aspects of communism) was in error, and so begin to question and criticise these terrible dogmas - they rise up again in new forms, repackaged. Thus it is with “UBI” - it is no more than a repackaging of the old idea that people should earn the same amount of money regardless of what they do. But as I said - it is even worse than this, because it does not even require that you work. It assumes people are not creative - but rather cogs in a great machine. We exist in order to perform labour (i.e. arduous work). But this is not our nature. We are creative. The Marxists are simply wrong that arduous, difficult work is what people do and is what creates wealth. No. What creates wealth is ideas. The rest can be automated. How can we move from a mindset of "people need to labour and sweat to earn money" to "people need to be creative and have fun and find solutions - leave the labour to the robots"? We simply need to allow people more opportunity to be creative. And they will have this if they can keep the money they earn and not have it in large part confiscated by the government.
Creative people need freedom, and the only system that allows people to be free - the only economic and social system that has at its heart a principle not to use force, not to engage in theft of the wealth people create, and to allow people to trade (or not) with those they choose - is Capitalism. Only Capitalism explicitly has an injunction against the extraction by force of the wealth that has been created by Alice to give to Bob, regardless of what Bob has done with his life.
UBI rejects all this. UBI takes from Alice the wealth she has created because of the pessimistic assumption that Bob simply cannot create wealth. It views Alice as somehow having gained her wealth through illegitimate means. As such - Bob, no matter what he has been up to, actually deserves some of it. And the only people who can ensure that Alice does indeed hand over the products of her labour are the government. And should Alice refuse, then men with guns will come to her door and demand her wealth. Wealth she might otherwise have used to create more wealth.
The alternative to this dystopian view of people and civilisation is an open society of optimism and kindness. People can create wealth. All of us. Even Bob. It is our nature. It is what we do: create. And as a community we enjoy and value the creations of others and engage in kind and generous exchanges of ideas, creations, services and goods. Not in equal measure - but it is good, too, that some may succeed through extra hard work and great inspiration and rise up and change the whole civilisation. Others can find success in fertile little subcultures which arise where everyone does their own little (but valuable!) thing and people trade with each other because they want to. Money is exchanged for goods desired, and the people we want to pay get paid. The only real factors that slow this wonderful flourishing of ideas are force and the threat of it. When criminals or the government come with weapons to take some of what we have created and use it to purchase goods and services we were not in the market for, to gift it to people we do not know - that’s wrong. That's theft. That's evil.
We people are, most of us, kind and generous and had we wanted to gift the money to a charity or indeed to an individual in need, we now are unable to. Because what we had, has been taken from us at the point of a gun by people who claim they know better.
UBI is not needed. What is needed is an understanding that people are creative. In particular they create wealth. And if they are allowed to keep the wealth they create through their hard work - creative or otherwise, then they will be able to save. And if they were permitted to save sufficiently, UBI wouldn’t be on the cards at all. It would be seen for what it actually is: theft.
All sorts of unconscious phenomena enter into our considerations, decisions and choices. If you are waiting for the 9:47am bus and it fails to arrive - this event enters into your consciousness unbidden by you. You had no control over it. But now you are thinking “Oh no, I may be late.” At that moment a taxi approaches. Again: unbidden by you, and more thoughts, also that you did not author, enter your mind. You now consider: “should I hail the taxi?” You deliberate. You try to create a good explanation.
Was your meeting to be at 10:30am or 11:30am? Maybe you’ve time enough for the next bus. But maybe you shouldn’t risk it and take the taxi.
Parts of this process are unconscious. Much indeed. But parts are conscious as you think and reason to form (create!) a good explanation of what to do next. You have a choice before you. The world need not be one way or another. “Bus or taxi?” You must think quickly. You must choose. The meeting is at 11:30am, you recall in a few milliseconds. “I’ll just wait”. You’ve chosen by reason. Nothing has forced your hand. The decision was a free choice. An exercise of your free will.
Had a terrorist come behind you and pressed to your side a gun that you could see and said “Get that taxi” then new information would come. Now, I would say, when you obey this is different. Certainly you might object - but really you are doing OTHER THAN YOU WANT. Other than you desire. Other than you would have chosen. You are being COERCED. When there is coercion it is not the exercise of FREE WILL. It is something else. It is a decision under duress. Your creativity is being impeded. It is subservient to your survival and emotion and fear especially. You aren’t thinking clearly.
Now, in the scenario of the late bus where you just wait peacefully for the next one, notice that this account has required: creativity, choice and free will. I don’t think we can easily remove any of those. Or if we can, they simply “pop up” as another mystery. You may deny free will or even choice. But surely creativity is something you cannot deny. But what are we creating? Explanations. Why one explanation rather than another? We desire - surely. But why? Why desire anything? Do we just slavishly obey impulses or is there deliberation? What is this deliberation? An illusion? So it doesn’t matter if we deliberate? Surely it does matter if we take time to reflect. Surely we create better things? Make better decisions? And isn’t that decision to take time itself something that can be learned? And doesn’t it become a choice? And isn’t choosing to do so a free choice? You aren't being coerced?
What makes people unique? What is this thing? Is it creativity alone? There is something there - something fundamentally different about humans compared to other animals. Whatever it is seems to allow us to break free of our genes and our instincts. Cities, computers, our languages - in short our explanatory knowledge is not encoded in our genes. So that stuff we accomplish that is not encoded in our genes is being generated by our minds by a process we barely understand. We call it "creativity". But it's a thing we direct. We choose to direct our attention, and thus our creativity to this or that thing. And that conscious act of direction is an exercise of free will. What we're often creating is knowledge about how to solve our problems. But what knowledge to create isn't something that is in our genes and it's not "in" the laws of physics. But somehow it nonetheless is "in" the universe - it's part of reality. So when we choose to use this creativity of ours it is a parsimonious technique to simply call this an exercise of our free will.
Exploration of what properly constrains the production of knowledge is a very interesting topic, and ethics forms but a part of our considerations of what limits the creation of knowledge. Those constraints are, however, far broader than what is dictated by parochial concerns about what *should* be done in terms of generating knowledge. Because the growth of knowledge is inherently unpredictable, an argument looms that perhaps the only ethical principle one requires here is: do not apply ethical prohibitions upon the creation of knowledge. Of course, practically speaking, we should not seek to discover the most hurtful thing we can do to make people suffer. That would be abhorrent. Or the most dangerous risk we can take. We can play games like this and suggest that therefore we need tight restrictions on what problems people should try to solve. But such concerns are not genuine limits upon the growth of knowledge; they are rather silly moral thought experiments about how values seem to conflict (on the one hand the value of knowledge production, and on the other valuing personal autonomy, for example), and they are always resolvable with a little bit of critical enquiry.
So ethics, typically, is not - or should never be - the biggest constraint upon the growth of knowledge. The growth of knowledge is motivated by problems that arise. That is what the growth of knowledge is: the search for solutions to some problem situation we find ourselves in, personally or as a community or civilization.
But there are other constraints upon knowledge. From logic, for example: we cannot hope to discover that eggs are simultaneously good to eat and also deadly poison (modulo logic games like: some people are lethally allergic to eggs, or eating 100 of them might kill a person).
Knowledge production is of course limited by physical law: there are limitations due to time, space and energy, and there are perhaps limits yet to be explored (like the so-called “no go” theorems found in pure mathematics and physics - and perhaps there are more we’ve yet no notion of). David Deutsch has explained the great dichotomy when it comes to the limits of knowledge: whatever is not prohibited by physical law is possible. So the only thing preventing us from accomplishing something we want to, and which we've decided is good to do, is *knowing* how. That's an amazing thing. Resources are almost always plentiful - the universe is vast. So, taking a cosmic perspective on these things, it is not matter and energy and time that are scarce (the universe provides these in abundance, as it happens) but rather knowledge that is always scarce. (See his books for this - or his TED Talk.)
But now, in the other direction: it is not only constraints upon knowledge that matter but also the availability of knowledge itself - the limiting reagent in both the universe and our lives. Lack of knowledge is the constraint that prevents us personally, as families, communities and whole civilisations from accomplishing what we want. When we lack *that* resource - knowledge - everything else (importantly, progress) stagnates. Civilisations do, and so do our own personal lives.
This idea of "constraints" as a theme through which to view knowledge can be a useful one. Ethics, on this view, is but one example of the constraints on knowledge; there are many ways the production of knowledge is constrained, and many constraints that result from our lack of knowledge and lack of progress in creating it. “Constraints” might seem a gloomy lens through which to view a thing, but on analysis this is an uplifting lesson to learn. Creating knowledge - learning more - is typically, in our world as it now is, the only thing (or at worst the main thing) limiting each of us personally and as a civilisation from accomplishing our goals. Your choice to know more really is the way to move forward.
*Credit goes to Ric Sims (@sharpcomposer) for remarks inspiring parts of this piece.
The Search For Truth
The prevailing view of “knowledge” - handed down from Plato - is that knowledge is some kind of justified true belief. Modern incarnations, descended with mutations to fill the niche occupied by this desire for justified truth, include Bayesianism (a more mathematically inclined twin of inductivism), where the idea is that knowledge is justified as close to true by repeated confirming instances. Whether Bayesian or inductivist, these kinds of justificationism, applied to science, hold that the more frequently one observes a hypothesis to work, the more confident one can be that it is actually true, more true, or probably true compared to its rivals.
But Bayesianism, in claiming that some theory has some quantifiable (indeed calculable) and precise amount of truth we can discover, cannot explain how, despite repeated “confirmations” increasing one’s confidence in the truth of a theory, that theory can still be shown utterly false by an observation it cannot accommodate. Indeed it cannot explain why it is precisely when confidence in a theory is at its highest that theories are typically shown false. In other words, on Bayesianism, when we have every reason to expect the theory to be true, it is shown false. So for example, every single observation that occurred prior to around 1919 was a “confirming instance” that would have granted “Bayesian credibility” to Newton’s theory of gravity. (If this date is in dispute, we need only move it back to around 1859, before which Newton’s theory had never been known to produce any anomalous predictions. It was in that year that Urbain Le Verrier published data dating from around 1697 to 1842 which, when investigated carefully, appeared to reveal anomalies in Mercury’s orbit. In principle these could reasonably, at that time, have been interpreted as consistent with Newton’s theory on the assumption the orbit was being perturbed by some other massive body - not unprecedented, given that the discovery of Neptune relied on something quite similar.) Whatever the case, absent any other theory, the Bayesian method of increasing confidence that a theory is true, given repeated instances consistent with it, meant that Newton’s theory of gravity was at its highest confidence right before it was shown false. At which point all of those observations that it was correct now “flowed”, in some sense, to its replacement: Einstein’s General Theory of Relativity. Or, if they did not “flow”, then the count started again and Einstein’s General Theory - being without rivals - just continues to grow and grow in truth to this day.
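To make the argument concrete, here is a toy sketch of Bayesian updating. It is not a model of real science: every likelihood number is invented purely for illustration. A run of confirming observations pushes confidence in a theory T toward certainty, yet a single observation T cannot accommodate at all sends that confidence to zero regardless of how high it was.

```python
# A toy sketch, not a model of real science: all likelihood numbers are
# invented for illustration. "T" stands for a theory such as Newtonian gravity.
def update(prior, p_obs_given_t, p_obs_given_not_t):
    """One application of Bayes' rule: returns P(T | observation)."""
    evidence = p_obs_given_t * prior + p_obs_given_not_t * (1 - prior)
    return p_obs_given_t * prior / evidence

p = 0.5                       # start agnostic about T
for _ in range(20):           # a run of confirming observations...
    p = update(p, 0.99, 0.5)  # ...each fitting T better than its rivals
confidence_before_refutation = p   # now extremely close to 1

p = update(p, 0.0, 1.0)  # one observation T cannot accommodate at all
# p collapses to exactly 0 - and it does so at the very moment the theory's
# "Bayesian credibility" was at its highest.
```

Note that nothing in the repeated confirmations protected the theory: the collapse is total and instantaneous, which is the point being made above.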
And with each passing day we should be more confident, not less, that it is true. But nothing in Bayesianism - no matter how many confirmations there are - can rule out the possibility that Einstein’s General Theory will be ruled out by a process similar to the one Newton’s went through: some observation inconsistent with Einstein’s General Theory but consistent with some other theory that does everything Einstein’s does while also predicting accurately where Einstein’s cannot. Indeed we should expect it to be shown false, because we should always expect some deeper theory to explain everything the currently accepted best theory does...and more. That is: we should admit theories are improvable and progress is always possible because knowledge continues to grow. In particular we should expect a theory in physics to be found that is deeper than both quantum theory and general relativity - one single theory that can explain why both work and which also does something new that neither is able to: perhaps explain dark matter and dark energy, or something like that. Something at a deeper level. That is what we should expect. We should expect falsity to be shown, and so we should expect that General Relativity is, now, strictly, false. We just don't know in what way, and cannot yet show it. One day we will, because we will have both a replacement for it and a test to distinguish the replacement from General Relativity by comparing it against reality in some way (we call such comparisons "crucial tests" or "crucial experiments").
To remain with Bayesianism for a moment, it is also important to note that Bayesianism alone cannot explain how an ad-hoc modification to a theory is not also “verified” to the same degree. As explained in “The Fabric of Reality” by David Deutsch: if the currently accepted theory of gravity is justified as true, or probably true, by all the observations people have ever made consistent with it, then those very same observations equally justify the modified theory that says the prevailing theory holds except on those occasions when objects levitate for reasons the theory does not account for. The theory “our best theory of gravity is true except when things levitate” is justified by precisely all of those observations that justify the currently accepted theory of gravity.
So it cannot be the case that theories are justified by repeated observations - no matter how many there are. If they were, the ad-hoc modification that “things sometimes also levitate” would also be justified - even if we have never (yet!) witnessed such levitation that would be inconsistent with the first theory (that the best theory of gravity always applies everywhere).
This is an argument against induction and against Bayesianism. Repeated observations are not needed. That is not how knowledge is produced. Instead theories are guessed (conjectured) and then attempts are made to refute these theories. This is the rare best case scenario: there are multiple competing theories. All these theories then get tested against reality by some means. The means - the methods of criticism - along with the subject matter itself - are what define a “discipline” or “subject area” or “domain of inquiry” or any other such synonym for fields like “Science” as compared to “Mathematics” and “Philosophy” and “History” and “Morality” and so on.
So let us recap all of this in light of the broad brush strokes of where the majority of people interested in this topic of epistemology sit - no matter where they are on the spectrum between Plato’s JTB and Bayesianism.
Knowledge, they sometimes argue, is some kind of belief (not all Bayesians do this: some believe in knowledge that need not be about personal thoughts). But belief cannot be a property needed for knowledge, as Karl Popper observed and David Deutsch has clarified in many places. Knowledge is not only something that is in minds. It is also in objects. A telescope contains the knowledge of how to focus light. A jet engine contains the knowledge of how to convert chemical energy into heat and thrust and motion. The DNA molecule contains knowledge of how to construct an organism. A book contains knowledge, as does a computer. But none of these dumb, unthinking objects have beliefs.
So knowledge is not about belief. Must it nevertheless be justified true? “Justified true” means “shown to be true” - but we have just seen that there is no method by which a theory can ever be shown to be finally, once and for all, true. There is always some way it might be shown false (and we cannot rule this out). This is so in science, but also in mathematics, and it is basically the philosophy of "fallibilism" - the claim that error is always possible and cannot be ruled out. Mathematicians make mistakes, and (this is poorly understood but absolutely crucial to appreciate) proofs in mathematics are computations. Proofs are done by something. They are done by a mathematician (or a computer) using some physical object (their brain, or pen and paper, or a calculator), and physical objects obey the laws of physics. And if the laws of physics say that physical processes are necessarily error prone (cannot be shown to produce the same outcome 100% of the time - a consequence of the laws of quantum theory, our deepest physical theory), then methods of proof will likewise not be 100% perfect in all cases. More than that - for reasons stated above about Bayesianism - we cannot even put a “close to 100%” number on it, or any probability at all. My favourite example here remains Euclid’s demonstration of the obvious - clear to everyone - fact that through any two points a unique straight line can be drawn. We know this now to be false, because there exist curved (“non-Euclidean”) geometries, and in some of these many straight lines can be drawn through the same two points. For more on that, see here.
Knowledge is likewise never justified, because if it could be, the justifications would themselves have to be justified. If they could not be, then our original claim would not be justified as true. But if the justifications for the justifications were justified in turn, and so on, this leads to an infinite regress. So “justification” cannot work as some kind of deep truth about how knowledge works: it rests either on an infinite regress of justifying justifications, or on stopping at some point where the justifications are themselves unjustified - meaning that “justificationism” is no kind of deep and universal truth about knowledge.
And finally “true”. When people use “true” here they seem to mean “certain”. And we cannot be certain, because we can never be without doubt. Besides, certainty is just a feeling - one feels certain or not. And objective knowledge cannot be about one’s subjective feelings.
So there we have it for the moment: knowledge is not justified, it is not true, and it is not about belief. Everything about Plato’s definition is wrong. Instead, knowledge is about guessing theories (that solve some problem we have) and then criticising those theories. If we’re fortunate (because we’ve been sufficiently creative and critical and perhaps have cooperated with other similarly creatively critical people) we manage to have many such theories. Then, through the critical process of experimenting (in science) or disproving (in mathematics) or simply arguing (in all areas) to reveal weaknesses and flaws and contradictions, we whittle away all the theories that fail to meet our criticisms, and - again if we are fortunate - we’re left with just one theory standing. If we are not left with only one, this, in science, is where we can do a crucial experiment: an experiment where the outcome is predicted to be one way given one theory but another way given another, which allows us to decide which is false. Whatever the case, in whatever the domain, usually we’re left with exactly one theory that does what we want it to: solve our problem. And we call that The Explanation.
So we have jettisoned “justified” and “belief” in their entirety from this conception of knowledge. But what about “truth”? Is knowledge nonetheless a quest for “truth”, as Popper says? Above I seemed unable to avoid the word, or its negation, more than once. We have seen that the quest for knowledge cannot be a quest for certainty (100% infallible truth) - but can it be a quest for something lesser? Well, for the same reason that it cannot be a quest for 100% certain truth, it cannot be a quest for 99.99% truth or 99% truth or 50% truth.
So is truth a chimera?
Let us return to mathematics briefly. Surely it is about proving things true? What things? Well, in mathematics what we assume we have are propositions (claims that are definitely true, false or undecidable), and we use rules of inference to reach conclusions. But many pure mathematicians understand that, because one needs to start somewhere (with axioms) that must themselves remain unproven assertions, mathematics is actually not about proving things true. Rather it is a domain of showing what necessarily follows from the axioms. If you assume the axioms are true then you can assume what is proved from them is true. But it is all just an assumption. If the axioms are false, well, so much for your conclusion. Now, because we have no method for showing that our axioms are actually true - rather than mere assumptions - when we move from one mathematical claim (like an assumption/premise) to the next by following some rule of inference, we are not really moving from proposition to proposition (demonstrably true "meaningful sentences"); we may more accurately say we are moving from statement to statement (approximations to such propositions). So mathematics is about showing that claims (which we cannot know are true) do proceed logically (necessarily) one from another.
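The point can be put schematically. Here is a minimal sketch, using the simplest rule of inference (modus ponens), of what a derivation actually licenses:

```latex
% A derivation establishes only that the conclusion follows from the axioms.
\begin{align*}
A_1 &: P                 && \text{(axiom: assumed, not proven)} \\
A_2 &: P \rightarrow Q   && \text{(axiom: assumed, not proven)} \\
Q   &                    && \text{(by modus ponens from } A_1 \text{ and } A_2\text{)}
\end{align*}
% What is shown is the conditional A_1, A_2 \vdash Q -
% not "Q is true" outright. If the axioms turn out to be
% false, so much for the conclusion Q.
```

The derivation is necessary and mechanical, but the truth of $Q$ is inherited entirely from the unproven status of $A_1$ and $A_2$.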
This works also for any domain of knowledge outside of mathematics, and follows from what is called in the business “Tarski’s theory of truth” (named for Alfred Tarski). Tarski is the person Popper refers to in “Objective Knowledge” (p. 44 onwards), where he makes some “Remarks on Truth”. He makes the distinction there, following Tarski, that truth is “correspondence with the facts”, and so this is sometimes also called the “correspondence” theory of truth (this is the commonsense view, Popper says). I would add that this distinguishes it from competing claims like the “consensus” theory - that a thing can be deemed true when some group of people agree that it is (a rather relativist notion if ever there was one: each group, by this measure, when they disagree, has merely agreed upon contradictory “truths”) - and the “coherence” theory of truth, where a thing is true if it coheres (agrees) with some other known true propositions. Of course, how those propositions are known true is that they agree with each other and with some other “true” claims, and so on. At no point need anything correspond with reality.
Popper begins this section on truth with the claim that “Our main concern in philosophy and science should be the search for truth…We should seek to see or discover the most urgent problems, and we should try to solve them by proposing true theories…or at any rate by proposing theories which come a little nearer to the truth than those of our predecessors.”
Is he wrong about some of that? Namely the first sentence? Should that - the search for truth - be our main concern? It would seem our main concern is solving problems. But does Popper suggest there that solving problems is to be identified with the search for truth? We cannot ask him, so I propose that this is indeed what we are doing in solving problems. We are searching for truth by eliminating error to bring us a little closer to truth - by uncovering tiny parts of it and eliminating falsehoods.
If we consider that statements are approximations to propositions (the latter being what we cannot utter, because those are actual truths or actual falsehoods), then the statement - being an approximation - is an approximation to a truth or an approximation to a falsehood. In general terms, to correct errors is to make progress - to improve. But improvement or progress occurs in some direction. When we solve a problem, things get actually better. There is a direction. The direction is in bringing the approximation closer in line with reality. That is to say, the statement comes to reflect reality with increased fidelity. And this increased fidelity - this better way of capturing reality with the statement or the theory - is an objective improvement. How is it objective? Well, the new theory solves the problem that a previous theory could not - that previous statements were unable to explain. The previous theory is shown wanting. In what way? The successfully criticised theory, the one refuted, cannot be the truth, because it has been refuted - shown false by observation (or other criticism). Cannot be the final truth? No - of course, as always, we may be mistaken. But having to make this caveat each time one uses the word "true" or "truth" is cumbersome and violates Popper's injunction to "speak clearly...and avoid...complications", and to regard brevity as important (p. 44, "Objective Knowledge").
Theories solve problems. That is their purpose. But how can you know your problem is solved? Well - the solution has worked. Say the problem was that the planet was observed at point Y but you predicted point X because of theory “A”. You have now solved that problem with your replacement theory “B”: when you do the calculation, "B" gives you the answer Y, where the old theory gave you a calculation leading to X. So the solution worked. The new theory worked. This is what “worked” means: it corresponded to something in the world. You compared it to something in reality. Reality matters: it is the adjudicator between your theories. Now of course you might have made a mistake. But modulo that, what do we say about theory A? It has been refuted. What does that mean? It means it cannot account for the observation that your planet was predicted to be at X but was not.
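As a minimal sketch of this adjudication - with invented numbers and a hypothetical planet position measured in degrees - reality agrees with one prediction and not the other:

```python
# A minimal sketch of theories "A" and "B" being compared against reality.
# All numbers are invented for illustration.
def refuted(prediction, observation, tolerance=0.01):
    """A theory is refuted by an observation it cannot account for."""
    return abs(prediction - observation) > tolerance

theory_a_predicts = 41.2   # hypothetical planet position (degrees), theory A
theory_b_predicts = 42.7   # the same position as predicted by theory B
observed_position = 42.7   # what reality says (modulo error in the observation)

a_refuted = refuted(theory_a_predicts, observed_position)  # A fails the test
b_refuted = refuted(theory_b_predicts, observed_position)  # B survives
# B surviving is not the same as B being shown finally true - it has merely
# not been refuted, yet.
```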
We cannot jettison truth. Knowledge has something to do with truth. But what? Well, knowledge creation is about solving problems, and that involves correcting errors. And correction of errors brings us closer to reality, such that our statements about it are approximations to the actual truth. Now, what it could mean to “hit on” the actual truth (some call this the “ontological truth”) is difficult to say. Could it be that “triangles have 3 sides” is in some sense the actual ontological truth? No. It can always be the case that this could be improved in some way. Being unable to imagine a way is no refutation of the idea that people improve their ideas. We cannot rule out the possibility that some future civilisation will agree (because, I don’t know - let's be fantastical for a moment - they have uploaded themselves into some holographic higher-dimensional space) that triangles, it turns out, are rough approximations to figures that, when viewed from our meagre 4-dimensional spacetime, only appear to have 3 sides and in fact, viewed from a broader and deeper perspective available only to more enlightened higher-dimensional beings, actually have more sides. This might seem bizarre, but I’d say it’s no more bizarre than, having mathematically proved from the “self evidently true axioms” that triangles have an internal angle sum of 180°, then learning about geometries where this “self evident truth” does NOT hold. So claims in mathematics - shown true - are sometimes overturned. We cannot know, when we think we’ve got it correct, that we’re not going to be shown moments later how we’ve been in error. That there’s a problem.
So is knowledge a search for truth at all? So long as we solve problems and correct our errors such that the new theory - the one that solves the problem by correcting the errors - corresponds better to reality than all its rivals, isn’t this enough? Yes - but there is a succinct way to put this.
The new theory contains more truth. Or: the new theory is more true. The old theory is demonstrably false and we know it’s false. Do we know it is once-and-for-all certainly false? No. Do we know the new one is once and for all true? No.
Is it true at all? Yes.
Can we say one is more true than the other? Yes!
Can we say by how much more? No. It’s merely a binary distinction. But it’s convenient. One theory has more truth to it than the other.
Are we sure?
No. We never need to be.
Can we say a theory is “true”? Yes - so long as we understand that “true” there is shorthand for “fallibly, provisionally true” or “pragmatically true”, which we can take to mean: we act as if it is true. And why not? If the proverbial life-and-death situation is before us, we should not act any other way. The patient’s heart has stopped and the epistemologically savvy emergency doctor calls for the (external defibrillator) paddles STAT(!). Those assisting need not debate whether it’s true that the paddles will work. They act as if “they work” is a true claim.
“Is it true those paddles work?” someone asks our critical rationalist doctor later. “Yes” he says - and quite right too. To say “Well, I don’t know if it’s true they do. But I do know they work” is not only cumbersome, it misses something important in fallibilist critical rationalism: the word “true” should come to be understood to mean “provisionally true” - this is the default position. Someone who thinks “true” means “certainly true” is the one making the mistake. That’s the error. And it doesn't matter if the majority are making the error and only a minority understand how epistemology actually works. After all, most people think "knowledge" means "justified true belief", but we can still use the words "know" and "knowledge" without being overly concerned to provide the caveats each and every time. When we spot the errors, we point them out; and in the case of "truth", if we want to highlight or criticise that error, then affix the adverb “certainly” yourself, to remind people that is not what the word "true" should be thought to mean in common day-to-day usage. Why should dogmatists be able to claim the word? Let's not cede that territory.
It is quite right to say that General Relativity has more truth than Newton’s theory of gravity (it corresponds more closely to reality, solves more problems and corrects errors in it), which itself contains more truth than some “law” of gravity like F = 2GMm/r^4 - but we cannot measure the quantity of truth. Truth is not a quantity that can be measured, but it is a quality that a theory possesses compared to some other. There are many things we cannot measure and yet about which we can make reasonable and sensible claims as to difference in kind. For example, in biology it is a routine matter to distinguish one species from another, or even one breed from another. There may be edge cases, but in general the identification that a particular organism belongs to this species and not that is done largely on the basis of appearance - of kind or type. These days we can do this with greater precision using genetic analysis. In epistemology we are not there yet, but there is some symmetry here (and that is no coincidence).
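The binary (not quantitative) character of the comparison can be illustrated with a rough back-of-envelope check: pitting Newton's F = GMm/r^2 against the made-up "law" F = 2GMm/r^4 above, using approximate textbook values for the Earth, against the measured free-fall acceleration of about 9.8 m/s^2.

```python
# Rough illustration with approximate textbook values for the Earth.
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24    # Earth's mass (kg)
r = 6.371e6     # Earth's mean radius (m)

g_newton = G * M / r**2      # free-fall acceleration per F = GMm/r^2
g_toy    = 2 * G * M / r**4  # per the made-up "law" F = 2GMm/r^4

# g_newton lands near the measured ~9.8 m/s^2; g_toy misses by roughly
# thirteen orders of magnitude. Both "laws" are falsifiable; only one survives
# the comparison with reality. That is the binary sense in which one theory
# has "more truth" than another - no percentage attaches to it.
```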
As in any domain, in epistemology we want to solve a problem. The problem before us here and now is: how can we most effectively - that is to say, clearly and efficiently and accurately - convey the epistemology that is critical rationalism? Should we jettison the idea that we are seeking truth? Or should we look at ways of preserving what is useful in that word and modifying what most understand the term to mean? In part this is what I have attempted above. We must be cautious not to be misunderstood as denying the possibility of truth - that may be viewed as a kind of relativism. Of course we can always be misunderstood. I return once more to Popper in “Remarks on Truth”, where he says - as in many other places, in words to the same effect - what I quoted only partially above: “…aiming at simplicity and lucidity is a moral duty of all intellectuals: lack of clarity is a sin and pretentiousness is a crime. (Brevity is also important…but it is of lesser urgency, and it sometimes is incompatible with clarity).” Preserving not only the word truth but also the idea that we are engaged in a search for it helps with brevity and with clarity. Rather than avoiding the claim that science and reason broadly are a search for truth, we can merely correct people when they think it is a search for certain truth, or final truth, or “complete” truth (or a “complete science”, as Sam Harris is fond of saying). We just correct them to “provisional truth”. Provisional truth that solves our problems.
So is it true that "We aren't seeking truth"? Well, is "seeking truth" synonymous with "solving problems"? Might it not be parsimonious to use these interchangeably, given the facility of both terms? "I'm looking for the truth!" exclaims the exasperated scientist trying to uncover whether the wobbly motion of their planet is a sign of yet another, as yet unobserved, planet. Are they wrong to do so? Should it be "I'm trying to solve this problem!"?
I don't think it matters.
Do theories need to be falsifiable to be science?
That theories need to be falsifiable is a necessary but not sufficient condition for science. For example, the claims "eating 1.00000 kg of grass cures the common cold" or "the world will end at 2am UTC on 2/2/22" are falsifiable theories. But they are not scientific. Without a good explanation to accompany them, they are not science; they are just “falsifiable claims”. A scientific theory should be a good explanation that also happens to be testable/falsifiable. Popper figured out that falsifiability is an improvement on the verifiability criterion of the logical positivists. It is falsifiability that better separates science from non-science. This includes separating science from pseudo-science like astrology and homeopathy, as well as from things like morality and philosophy broadly. But it has never been the case that all falsifiable theories are scientific theories - for example, the two claims I started this paragraph with.
But is it nevertheless necessary that scientific theories be falsifiable? Well, the scientific theory for some phenomenon - or any theory that purports to be the scientific theory for some phenomenon - must be a good (hard to vary) explanation of that phenomenon. Part of this “hard to vary” quality is that the theory is falsifiable - testable by experiment. In principle. It need not be in practice; but that doesn’t change its testability in principle. So, for example: many people have observed that string theory is very, very difficult to test. Some have asserted that to observe the predictions of string theory would take a particle accelerator half the size of the galaxy. Now this is impractical. So does this mean the theory is unfalsifiable? No! In practice we cannot build such a particle accelerator. But in principle it could be done. So it’s still falsifiable in principle. And perhaps there exist "natural" particle accelerators such as quasars - observations of which might rule out string theory? We do not know.
So, it’s science. It makes predictions. We need not jettison falsifiability on the basis of that. What we might do is search for better ways to test it. If it’s a claim about the physical world, then the physical world must be the adjudicator of the truth about string theory. Can we rule it out? Can we refute it? Then it’s falsifiable. But notice there are two kinds of falsifiability: in principle and in practice. In-principle falsifiability is a black-and-white quality of a theory that is required for science. It is just the claim that some observation of physical reality could in principle rule out the theory. But if no such observation can - that is to say, no such observation exists in any possible world - then the theory is not about the physical world. There is no comparison to be made between the actual physical world where the theory holds and an imaginary, fictitious physical reality where the theory does not. Or vice versa.
Let us take an even more extreme case than string theory (which I argue is science - but, for reasons I will come to, is not necessarily “good” or “optimal” science): the theory that there exist other universes where the very laws of physics are themselves different. So universes outside our own, but where the laws are different. It was once thought that such universes are in principle unobservable, therefore not testable, and that this makes them unfalsifiable and not science. After all: another universe? Outside our own? How can we access that? Well, as it turns out - in principle - we could see such a universe. A universe where the laws are different will have different physical constants, and as far back as 1999 physicists claimed to have observed a changing fine structure constant. This would be evidence of a region of space where the laws were different - another universe (by some definitions). It turned out they were wrong (see that very same link above) - but it is this kind of observation that, in principle, could allow us to observe other universes beyond our own. (Or force us to change what we mean by “universe”.)
But this “falsifiable in principle” criterion (necessary as it is) to demarcate science from metaphysics (for example) is also not sufficient to make something a good, hard to vary, explanation. Let us return to string theory. What we’re interested in is solving problems in physics, and string theory is an attempt to unify quantum mechanics (a physics of discrete entities like particles and energy) with general relativity (a physics of continuous entities like space and time). As we have already seen, string theory could in principle be tested with a particle accelerator half the size of the galaxy. That's the worst case scenario - likely things are not that grim. But say they were. There is probably not enough matter for several lightyears around to construct such an experiment. It’s impractical. So “in practice” we would have to say it’s not falsifiable: it’s “not falsifiable in practice”. But this is not a black-and-white, all-or-nothing thing. "In practice" means something like “we lack the wealth to do so” - we cannot actually perform the physical transformation of the matter required. Indeed we do not even know how - using the technology we presently have - to gather enough matter to build one. So knowledge is also a problem here.
The fact that string theory makes this assertion of itself (as being practically not testable right now), as things currently stand, makes it an “easy to vary” theory even though it’s testable in principle. This is because minor modifications of the theory making similar predictions cannot be distinguished by experiment. And many such varieties of string theory exist. So “untestable” in practice is a weakness. This does not make string theory unscientific - it just makes it a poor explanation. For now. Maybe someone will think up a better test. Maybe someone will make a prediction that operates at lower energies requiring a smaller particle accelerator. Or - and this is key - maybe someone will come up with a theory that makes all similar predictions string theory can but which itself can be tested by some routine means available here on Earth - making that new theory a good explanation and worthy replacement for both quantum theory and the general theory of relativity. Such a theory - testable in practice as well as principle - would be a very good explanation. And string theory would then not be.
What is an example of a good theory within science that is unfalsifiable in principle? I do not know of one. Why is it important to distinguish between science and other subjects or disciplines anyway? It is largely a matter of convenience, but it is also important for distinguishing efficiently and effectively between science on the one hand and pseudoscience and scientistic arguments on the other (that is, arguments that claim something like: science can tell us what we ought to do - that there can be a science of morality, or of economics, or of politics). Knowledge is some kind of unified whole, it is true. But "falsifiability" is a useful necessary criterion for science. And it is useful to know that demanding that, say, moral theories be testable would be a terrible error. It would mean requiring experiments to be conducted on people (say) in order to determine whether an even purer version of Communism than anything China or North Korea has ever tried would be a good idea because - science! No, we do not need to experiment. We begin instead with moral explanations about people, and we reject the demand that "a falsification is required here before we can properly reject this theory". Morality is not science and we should not require it to be. But science is a place where experiments - conducted on the physical world - are necessary. I think it's necessary we preserve this distinction.
A note on evolution
Quite rightly I was alerted to, and corrected upon, a misconception I had about a particular kind of exception to this strict requirement for falsification in science. For reasons we shall see, this does not undermine the central idea that scientific theories must be falsifiable. Some theories (a large number!) are not practically testable because there are no viable alternatives to them. This means we need to split the meanings of "falsifiable" and "testable in practice". Because there are no viable alternatives to Neo-Darwinian "evolution by natural selection", it cannot be "tested" - because to be testable it needs to be tested against something, and there is nothing. As David Deutsch observes in The Beginning of Infinity: if we observed something inconsistent with the prevailing theory of evolution by natural selection, nothing could be concluded except that the test we used to find the inconsistency was faulty. It is often said, following Haldane, that "rabbits in the Precambrian" would refute evolution by natural selection. But they would not. They would be a problem - but they could be explained away: as a rare complex organism that somehow got there earlier than anything else (unlikely), or as a mistake made by our geologists or paleontologists, or as evidence of a prankster. Many things would need to be ruled out (and how?) before we ruled out evolution on the basis of rabbits in the Precambrian. But this untestability does not mean unfalsifiability in principle. These are different things.
If an organism (or many organisms, across many different species) were found to undergo only or mainly favorable mutations, then this would be better explained by Lamarckism and would rule out Darwinism. But then there are all those organisms we already know of that would refute Lamarckism. The point is that Darwinism would be refuted as a universal explanation (one that applies to all cases, everywhere) for the evolution of life. It would become a special case - presumably of some deeper explanation that accounted for why both Lamarckism and Darwinism worked within their less-than-universal domains. So testability and falsifiability are not synonyms. While the latter is needed (and Darwinism has it), the former is about the practical ability to perform some test (an experiment) and having somewhere else to "jump to": some viable alternative theory to test our theory against.
Not everything in science, it should also be noted, is falsifiable. Some eminently scientific claims are unfalsifiable. In physics we say "work is a form of energy". That's a scientific claim. It's also unfalsifiable, because it's essentially a definition. One will never calculate the physical work done (using a classical formula like Work = Force x Distance) and discover that it is not a form of energy. These are just words and terms: scientific, yet untestable and unfalsifiable. So some things in science are unfalsifiable. But they are not explanatory theories as such - they are more like frameworks within which we do science. In chemistry, "the 6th element on the periodic table is carbon" - or, equivalently, "the element with 6 protons is carbon" - is a scientific claim. But it too is unfalsifiable. No one can possibly ever, in any world, discover an atom containing exactly 6 protons and conclude it is not an atom of carbon. No one will find some element which, upon analysis, is carbon but contains 7 protons in every nucleus (that would be nitrogen). And no one will find an element that contains 5.5 protons in the nucleus, bumping carbon up one position on the periodic table. These things are ruled out by the definitions of words like "element", "atom", "proton" and "carbon". So unfalsifiable claims in science are common. But the explanatory theories in which these definitions are used, and themselves explained, make predictions that can turn out to be false. A failed prediction would not falsify the definitions - it would falsify the theories. In particular, all existential claims of the form "X exists" are unfalsifiable. So the claim "gravity exists" is not falsifiable. But the claim "gravity is a force" is - and it was falsified. Gravity still existed; it just turned out not to be a force but rather, as Einstein showed, the manifestation of spacetime being warped by energy and matter. "Gravity" is a word used to describe some phenomenon that exists.
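The definitional point can be made concrete with a toy sketch (this is my own illustration, not anything from the text or from a real physics library; the function names and the tiny element table are assumptions of the sketch). The mapping from proton count to element is built into the definition itself, so "the element with 6 protons is carbon" can no more come out false than the lookup below can return otherwise:

```python
# Toy illustration: definitional claims cannot be falsified by observation.
# (Hypothetical names; the element table is deliberately incomplete.)

def work(force_newtons: float, distance_metres: float) -> float:
    """Classical work, W = F x d. That the result counts as energy
    (in joules) is true by definition of 'work', not by experiment."""
    return force_newtons * distance_metres

# An element simply IS its proton count (atomic number), by definition.
ELEMENTS = {1: "hydrogen", 2: "helium", 5: "boron", 6: "carbon", 7: "nitrogen"}

def element_for(protons: int) -> str:
    # No observation could yield a 6-proton atom that is "not carbon":
    # the table encodes the definition, not an empirical hypothesis.
    return ELEMENTS.get(protons, "unknown")

print(work(10.0, 3.0))   # 30.0 (joules)
print(element_for(6))    # carbon
```

What *could* be falsified is an explanatory theory that uses these definitions - say, a prediction about how much work a given engine does - not the definitions themselves.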
The concept of gravity cannot be "falsified" - only what it appears to be, or what it is claimed to be, can be. In an extreme case, the idea "Matter exists" cannot be falsified, though matter may not be the most fundamental thing, in the final analysis. Maybe it is true that there is something deeper - a Platonic realm of sorts from which the appearance of matter arises. But that would just be to explain that matter is an emergent feature. The appearance of it - which is to say its measurable qualities - would still be real emergent things.
So falsifiability is a necessary quality for scientific theories to possess. But not all claims in science are falsifiable. And falsifiability is not the same as testability. In particular, the theory of evolution is not obviously testable in practice, though it is in principle falsifiable. What we call science is, in the final analysis, an open question. It is a domain of study focused on discovering how the physical world works - the patterns in nature, their beauty and their dangers - in part so we can control our environment and use it to our advantage. We guess what's true and compare our guess against physical reality in some way. So long as we are making progress and solving our problems, that is what matters. But when progress is slow, that is when these debates can be extra useful to understand.
This post has been motivated by some inspiring Tweets by Lulie Tannet (@reasonisfun) which then resulted in a subsequent exchange of ideas with David Deutsch (@daviddeutschoxf) and others. As always, "The Beginning of Infinity" and "The Fabric of Reality" underpin much of what I say - but errors are my own and nothing I say should be seen as an endorsement by David Deutsch. You can (and should!) buy both books here: https://www.daviddeutsch.org.uk/books/the-beginning-of-infinity/