Eric Weinstein is a very intelligent person. I'm on his side in many things, though absolutely not on the top-down control he simply assumes MUST be a part of the global economy and "free" market (see, for example: https://www.edge.org/conversation/lee_smolin-stuart_a_kauffman-zoe-vonna_palmrose-mike_brown-can-science-help-solve-the#21964). In a sense he may even have earned much of his fame by calling for an "economic Manhattan project". If only he meant "let's have a huge government program to get government out of the business of tinkering with economies", it'd be great. But actually he means something more like the opposite. Whatever the case, Eric does have lots to say and many people listen. He can't be dismissed as a postmodernist - but one could understandably make that mistake, because his use of English has a style whose idiosyncrasies eschew clarity. We all use language idiosyncratically, of course - but the desire to almost continually invent new words, or new usages for old ones, is a strong impulse in some. Eric is simply a prominent example.
Take Eric's talk at https://bigthink.com/videos/eric-weinstein-capitalism-is-in-trouble-socialist-principles-can-save-it (the transcript is available there). At one point he says, “Now the danger of that is that what we didn’t realize is that our technical training for occupations maneuvers the entire population into the crosshairs of software.” Translation: everyone might lose their jobs to computers. Now, aside from the fact that this is flat-out false (creativity, from what we know, is a uniquely human feature and will always be needed), it’s expressed in such tortuous, clunky language as to muddy the meaning. Anyway, that’s just one example: false philosophy shrouded in jargon. It’s not postmodernist nonsense, but it’s flirting with the style if not the substance. The whole talk, by the way, is an appeal for power and influence. He wants scientists to have more authority and bemoans the fact that politicians come from “softer disciplines”. He’s upset and demands change. He says, “One of the things that I find most perplexing is that our government is still populated by people who come from sort of softer disciplines if you will. Whether that’s law, whether these people come from poli-sci, very few people in government come from a hard core technical background. There are very few senators who could solve a partial differential equation or representatives who could program a computer.” That’s clear and lacks jargon! He should stick with that style (though the substance this time is terrible: no thought is given to how useful those skills actually are in creating legislation or making decisions - the task of politicians. There are probably very few engineers or scientists who could effectively debate, consult widely, speak clearly in public and simultaneously manage large groups of people. Eric himself may be one of the rare exceptions, granted). I digress:
The following is meant purely as friendly fun (ok, to make a point and help out allies, perhaps). Again, Eric makes some excellent points when speaking and writing. Yet I think sometimes those points would be so much more powerful if only they were clearer. To that end, here are the beginnings of a generator for creating your own Eric-sounding neologisms. I was going to name it after him, or make fun of his name - but that seemed to cross a line. So, instead, the name of my generator commits the very sin it perpetuates.
Here's my advertisement:
Do you have something insightful to say but want to cloud it in strange idiosyncratic nomenclature? Or perhaps you've no real point to make and just feel a little "postmodern"? With the idiosyncratic neologism generator you can cloak any clear message in abstruse usage of otherwise pedestrian words. Take any term from the left-hand column and pick any term from the right - it's that easy. Maybe you want to observe that people sometimes waste their time making silly, bordering-on-mean blog posts about famous intellectuals? Need a term for that? What about...hmmm..."inversion gimmickry". And right there, you're done. Just take a pinch of column A and a random sprinkling of column B and you can spice up any vanilla concept. Turn any mundane turn of phrase into something cosmically momentous now!
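For fun, here's a minimal sketch of what such a generator might look like in Python. The column entries are hypothetical examples of my own, riffing on the jokes above - not anyone's real vocabulary list:

```python
import random

# Hypothetical columns for the "idiosyncratic neologism generator".
# Any resemblance to real neologisms is purely satirical.
COLUMN_A = ["inversion", "ubiquitous", "amplified", "institutional", "dispersed"]
COLUMN_B = ["gimmickry", "network effects", "late capitalism", "hamiltonian calculus"]

def neologism(rng=random):
    """Take a pinch of column A and a random sprinkling of column B."""
    return f"{rng.choice(COLUMN_A)} {rng.choice(COLUMN_B)}"

print(neologism())  # e.g. "institutional gimmickry"
```

Run it a few times and you too can sound cosmically momentous.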
"I just wanted to complain about being unable to find a date on Friday night, but didn't want to take any personal responsibility. Now I can attribute it to "ubiquitous dispersed network effects" and I feel so much better about myself!" - Terry, 19, Dubbo
"I've always been a communist but because promoting those ideas is so very difficult I just complain about "amplified late capitalism" instead and people now nod judiciously! Thanks idiosyncratic neologism generator!" - Jill, 52, New York
"My final undergrad project counted how many times the word "man" was used in Time magazine between 1989 and 2016. But "How many times the word "man" was used in Time magazine between 1989 and 2016" as a title was rejected by my supervisor. With "Institutional hamiltonian calculus of gendered language in popular media: 1989 to 2016" I was able to get Honours First Class!" - Summer Clouds, No age, Citizen of the Universe
I recall watching this speech Sam Harris gave at the Aspen Ideas Festival well over a decade ago now: https://youtu.be/-j8L7p-76cU I found it amazing then and I must have watched it more than a dozen times since. I recall wanting to learn to speak like that. Even now I haven’t seen a clearer, better-humoured or more forceful defence of reason against faith. There’s a strong sense in which I feel I owe Sam some gratitude for having taught me to talk. His style is an ideal to move towards: speak clearly, with good humour, and concede where concession is warranted. In that speech you can hear for yourself all the ways in which Sam’s most vociferous detractors and opponents have lied about his positions and misrepresented his motivation - and where he concedes religion can indeed be very useful and consoling and more besides.
Sam has had to defend himself many times against the charge that he’s unusually, even unfairly, focussed on religion - and on one religion in particular. He has been absolutely right to respond that this doesn’t quite get to the heart of what truly motivates him. What Sam is actually concerned about rather often - and this comes through in his talk and in his books - is dogma. Religion is not the centre of the bullseye (even if it’s on the dartboard). The central concern is dogma. It is just that religion is, rather typically, one of the largest and most robust repositories of dogma. And this focus on dogma exists precisely because it can cause such harm - and we often don’t realise how until the harm is done. Almost always it’s unintentional. A great example Sam uses is how the Catholic Church teaches that “human life begins at the moment of conception”. This seems quaint, even sweet and good. In one sense it’s true (zygotes are alive) but on the other hand zygotes are not people. And Sam observes that if the argument is “they are potential people” then, given the right conditions, so is any cell in your body. So when you scratch your nose, on this view, you’re engaged in a veritable genocidal level of murder of “potential humans”.
But this Catholic doctrine – this dogma – is a foundational claim. It is from here that they build moral structure – they reach conclusions about the rightness and wrongness of many other things; for example, abortion and the use of stem cells. This foundational claim about human life* beginning at conception does real harm. But the harm isn’t due primarily to the fact it’s false (and it is false - zygotes are not people; at a minimum a nervous system is needed to encode the knowledge that makes a person a person). It’s damaging because the church will not even consider the possibility that there might be a way to learn more on this topic or to consider it differently. Because it’s a foundational claim. It’s Church doctrine. A dogma. And this is why it can result in terrible suffering in ways the early church scholars could never possibly have foreseen. For in the context of a world that could treat actual suffering of actual people if only we could use embryonic stem cells, we have a problem. (Now, by the way, I don’t think it’s at all clear WHEN a zygote becomes a person. I know a zygote is not one. Nor would a blastocyst be one. But an embryo? There I don’t know. This is a sorites problem of real consequence.) So the moral foundation “human life begins at the moment of conception” - good though it sounds as a way of enshrining the sanctity of life - turns out, in the context of modern medical procedures, to cause real harm. Or in the case of abortion, where early-term abortions are made unavailable to victims of rape, the foundation would seem to be a perfect engine of suffering.
So Sam is absolutely right to root out and condemn dogma. Dogmas are irrational. But this is how religions build belief systems. They build upon axiomatic claims – foundations. It is purported to be somewhat like a mathematical system. Here are the axioms: now, let’s see what follows. Of course nothing can ever show axioms are true, and indeed they may be false. So what follows in such a case is liable to be false also. Some mathematicians - it must be said - can sometimes admit (in better moods) that they aren’t interested in what is actually true in reality; rather, just what follows as a matter of logical necessity. Quite right too!
(*Note: by “human life” the Catholic Church means the life of the zygote is a human. They mean: there are human souls in those zygotes.)
So Sam rejects dogma because it’s dogma. He understands that dogmas are those things we cannot improve upon if we take seriously the idea they must be true. He’s focussed on that. And I couldn’t agree more. But what is the difference between a foundation (even a weak one) and a dogma? Moreover, what exactly follows from Sam’s axioms? Can they be the basis of some nascent all-encompassing moral system of a kind?
One thing we might observe is that if morality is about “the well-being of conscious creatures” then this reduces morality to a domain of feelings. Indeed Sam’s other axiom - “we should avoid the worst possible misery for everyone” - is explicitly about the feeling of suffering. But this *central* focus on feelings in objective matters is a mistake. It takes what should be an objective domain of enquiry (morality) and reduces it to questions of “how do you feel?” or “how do we feel?” and so on. Now very often our feelings of pain or joy are indeed relevant. But are feelings the best guide in all cases? Could we formulate moral systems without these axioms? Let us consider other objective domains of enquiry and the relationship there between knowledge creation (i.e. the solving of problems) and the existence of “foundations” or axioms.
In physics there exist postulates for various reasons. Einstein “built” special relativity upon two postulates: the speed of light is constant for all observers, and the laws of physics are the same for all observers. But this hardly helps with thermodynamics. And large parts of quantum theory were created to solve problems without being concerned about the postulates of special relativity. That’s physics. As to mathematics – well, there is the preeminent example of a domain where axiomatic systems rule the day. But Gödel showed that no consistent axiomatic system rich enough to contain arithmetic can ever decide all mathematical problems.
So in physics: not everything *follows from* the two postulates of special relativity. And in mathematics it is provably the case that we cannot prove everything from any given set of axioms. So much for axiomatic systems being needed to create knowledge and solve problems. Instead of a focus on axioms, the truth is that in all cases creativity is how we find solutions. It does not happen via derivation. If this is true in mathematics and physics - if the majority of what we know *cannot be derived* from a fixed set of axioms - why should we think it possible in morality?
As to Sam’s two premises - I have no great criticisms of morality being concerned with the problems of conscious creatures, or of the claim that we need to avoid the worst possible misery for everyone. But I’ve no criticisms of Einstein’s postulates either, or indeed of many of the best ideas. That’s why they’re the best. I just don’t ever elevate my best ideas to foundational or dogmatic status, nor indeed regard them as any kind of “necessary starting point”. So while I lack “coherent criticisms” of Sam’s axioms, they’re not necessary as a foundation or a starting point for any moral discussion. They’re just useful if our interlocutor tries to assert that x is better than y even when x causes lots more suffering, or that feelings never matter at all. If indeed we tried to build a system of ethics upon them, we’d be talking about suffering and feelings constantly. We’d descend into subjective debates about subjectivity.
We don’t need foundations – just claims that remain tentative, as indeed Einstein’s postulates in special relativity are. I cannot conceive how Einstein’s postulates might be false (in our actual universe). They must be true, it seems to me, given what else I know. Likewise “the worst possible misery for everyone is bad” is an excellent critique of those who would push a moral relativism. There is no argument I know of against that claim, so I cannot conceive of how it might be false.
But now from here what do we do? If this is the starting point, where then? Do we move left or right, north or south away from the worst possible misery? While we agree we must move – is it a coin toss? If not what should we do? That’s the real moral question that the foundation simply cannot help with.
Sam’s foundational claims may seem unproblematic. But then so too did the claims of the early Church scholars who laid the foundation that “human life begins at the moment of conception”. In both cases the mistake is the same: deriving consequences from firm foundations isn’t the way problem solving works. The way forward lies in rejecting dogma and embracing fallibilism.
“Effective altruism is about answering one simple question: how can we use our resources to help others the most?” – The first sentence at https://www.effectivealtruism.org
Altruism isn’t generosity. Altruism is about acting specifically for others at some cost to yourself. There is sacrifice involved. Many people think sacrifice is good. If you give a lot to a poor person – that’s great. But if you give to the poor until it starts to hurt, so you cannot afford the latest iPhone, that’s even better. If you’re forced to go without “frivolous things” you are virtuous, on this moral take. And the more you go without in your quest to help others – the better. There’s a religious asymptote we are admonished to pursue here. As Jesus Christ is said to have done himself by sacrificing his whole life, and as he implored his followers in Luke 18:22, “Sell everything you have and give to the poor, and you will have treasure in heaven. Then come, follow me.” So that’s the very best you can do: be as altruistic – selfless – as possible. Give it away, and the more it hurts, the more moral you are. But most of us can only manage a little altruism. So we're a little better than those who manage none. Right?
Altruism goes beyond mere generosity. As the effectivealtruism.org opening sentence implores us: how can we use our resources to help others the most? Others. Helping yourself isn’t really a part of the picture. That’s selfish. So long as you have just enough – well, that’s optimal. Indeed to help others the most means, logically, helping yourself the least. Well - so long as you’re physically able to keep helping others, everything else can go by the wayside. It’s Jesus at his best.
Christopher Hitchens once complained about Mother Teresa that it wasn’t that she loved the poor so much as she loved poverty. There’s a sense in which the new "Effective Altruism" (EA) movement suffers from this too. The "take action" section of their website is about giving money to their designated charities – giving to those less well off, typically via organisations that address poverty. So the focus is on poverty. But we shouldn’t love poverty. We should hate it and want to eradicate it, not merely try to alleviate some of it. How can we do this? Should we give away money to the poor? Redistribute? Or should we create wealth as fast as possible by making progress? By all of us doing what we are, in our own ways, best at?
Let’s consider the case of the great Bill Gates. A very wealthy man – a co-founder of Microsoft – who made a lot of progress and who is also very generous. His charity is now his primary focus in life, and so he does great work in helping those less fortunate improve their position. And he is solving problems. So he invests in actual cures – solutions – for things like malaria. (As an aside: I happen to agree with the sentiments of Yaron Brook: it might’ve been even better for the world had Gates stuck to making even more money by producing even more widgets and software rather than giving the money away. Maybe in an alternate universe Gates didn’t focus all his time and wealth on charity, and instead directed the production of an even better next-generation Microsoft Windows that provided just the right boost to the computer at a medical institute that found the cheap cure for malaria.) But Gates can give away much without hurting himself much. No doubt he’s having fun, and that’s the main thing. But what about the rest of us?
If you’ve $3000 and want to help fix, say, malaria, what can you do? Here’s one thing: donate that money to a charity and buy a bunch of mosquito bed nets. Very well. Good. A focus on helping individuals – other people you do not know and will never meet. Or what about this: donate the money to a pharmaceutical company working on treatments for malaria? I’d say: better. Most people would say: dubious. Those “evil” companies would treat your paltry $3000 as a joke; it’d barely cover the bar tab at their next company picnic. But cynicism never much helped anyone. Or what about this: invest the money in yourself and whatever you are good at, and work on solving your very own problems – whatever they are. Perhaps you’re a software developer. Perhaps you’re working on database software, which is interesting enough but not your primary passion. But that $3000 – maybe you just invest it in giving yourself a few weeks away from the office, on sabbatical, where you can focus solely on figuring out how to improve the accuracy of 3D modelling in a computer game you’re working on in your spare time. You solve the problem. Now the thing is: the growth of knowledge is unpredictable. Your improved 3D modelling technique just might be the kind of thing pharmaceutical companies need. Maybe they buy your little bit of code for $300,000 and you can quit your other job and focus solely on computer games for a while. Oh, and that code the pharmaceutical company bought? It was used to model drugs, and a cure for malaria was found 5 years sooner than it otherwise would have been. And you were instrumental in this in a way you wouldn’t have been had you donated the money for nets.
I am not saying “stop the nets!” I am saying that sacrificing yourself, your money, your time is not inherently the highest moral good. We’ve been blinded by the supposed moral good of altruism. John 8:12: “When Jesus spoke again to the people, he said, "I am the light of the world. Whoever follows me will never walk in darkness, but will have the light of life."” Sometimes that light is so bright as to be blinding – even to avowed atheists. The idea that sacrifice is good – that selflessness is good rather than a rational interest in your own self – is pervasive. And false. And ultimately an evil. It is a cause of many problems and a solution to very few. And any solution that creates more and worse problems than it solves is no solution.
What is actually effective is solving problems and there are many ways problems are solved. Mostly the path to a solution cannot be predicted beforehand.
So what is moral here? Let us compare altruism to generosity and compassion.
Firstly, compassion (as others have observed, “empathy” is morally misleading also). Compassion lets you understand the suffering of others and think about how to help. (Empathy, on the other hand, asks you to feel something of their suffering.) Compassion, properly construed, can be seen as dispassionate. It’s appreciating that the suffering of someone else really exists, and it includes something of a desire to help find a solution. We’d want our surgeons to be compassionate – but not empathetic. The latter would be distracting. Empathy is moreover misleading because objective morality cannot be primarily about feelings. Nevertheless, compassion can be useful as motivation to act to help others, especially in those situations where those others seem not to be directly connected to us and so we cannot immediately expect some kind of reciprocity. (But perhaps we live in a community, and so compassion of this kind does indeed help us in the long run.)
Now generosity. Consider that people are often praised for being “generous” with their time. But no one is expected to be “altruistic” with their time. Indeed in that context you can see altruism as the morally dubious principle it is. We’ve only a finite amount of time each day and if anything is our own – it’s our time. So people who are generous with their time act out of compassion and love for their friends and family or others they care for in order to help. “How generous you’ve been!” people say if we spend some hours with them helping them on some project or to reach some goal. In those cases of generosity we – the giver – really are getting something in return. Good conversation with another person. Other people are great – the most valuable things in the world. Spending time with them is one of the most amazing gifts of life.
But altruistic? That would be something like: well, now I’ve given you all the time I want to – but I’ll give you some more, because that’d be the noble thing to do. I need to sacrifice. This needs to hurt a little (or a lot). I’m not getting as much from you as I really want, but I’ll continue to give because, well, that’s altruism! Expecting nothing whatever in return but a warm glow of self-satisfaction later. If you were a believer it’d be because God was watching and would reward you “with treasure in heaven”, as Jesus said. Altruists like Peter Singer argue for giving away some percentage of our wages or salaries to charity – just as Christian tithing is intended to do, and as other religions similarly prescribe. But rarely do they say: when you've helped a person some, give 10% more of your time still. Or any free time you have each week, or sleep – give 10% of that to someone who needs it more.
Let’s consider why money is regarded so differently to time here. It seems that being altruistic with your money is seen as moral in a way that being altruistic with your time is not. Here is a guess: because the prevailing view in the West for some millennia now has been that money is an evil – a corrupting influence. Rich people are rarely seen as good people until they give their money away (like Gates. Gates was an evil industrialist for most of his business life – until he started giving away all his money. Now, in the eyes of many, he's made up for some of his evil richness). Of course this is just another Christian hang-up. 1 Timothy 6:10: “For the love of money is the root of all kinds of evil. And some people, craving money, have wandered from the true faith and pierced themselves with many sorrows.” And of course Jesus in Matthew 19:24: “Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God." Money isn’t good on this view. It’s a path to evil. So it’s perfectly logical, given those biblical premises, that the conclusion follows as a matter of rigorous deduction: “give it away”. To give away your money must be a great virtue, it is thought – one of the highest moral goods. For money is an evil liable to corrupt, so you can be altruistic with it. Be generous with your time (for it is yours – you own it and have moral claim to it) but be altruistic with your money (for you’ve probably, somewhere in your history, inherited some by ill-gotten means. It was a sinful acquisition. You were born with some wealth – undeserved. So the only way to make penance is to give it up and approach the greater purity that is closer to poverty).
Altruism doesn’t expect anything in return. Indeed, to expect anything in return is itself a moral failing (on the altruistic view). Yet the exact opposite is true. Reciprocity, sometimes maligned, is actually an important means by which progress is made. People cooperate and find solutions faster when working together on the occasions they want to. So this anti-reciprocity (and, really, on careful examination, anti-cooperation) sentiment is another reason altruism is a kind of moral failing. With generosity we actually participate in reciprocity: we get as we give. But with altruism nothing is ever expected in return. Indeed, that would be to pollute altruism. The genuine altruist would reject all thank-yous – even if the recipient wanted to pay back the altruist, the altruist should never accept. Because then they’d get payment for services rendered. They'd turn into a capitalist! Especially if the reward was very great. But the generous benefactor (to be contrasted with the altruist)? Well, if one day the recipient arrived at the door with payment and interest? They’d take the gift and reinvest, and the cycle of generosity and wealth creation could continue.
Morality should not be regarded primarily as a focus on others. The focus should remain on finding solutions to problems. To answer: what should we do? The question is not “What should we do to help others?”; it is “What should we do?”. It simply is the case that making progress as fast as possible cannot involve altruism as any kind of deep principle – rather, the deep rule is more like its antithesis. Because when people focus on themselves and the problems they are genuinely passionate about, they make progress faster. And that’s our situation: to solve problems as fast as possible. And as a consequence, somewhere down the road, other people get helped as a by-product – and so much faster. Bill Gates never set out to solve problems in medicine, chemistry, physics, engineering, pollution and a thousand other things. He aimed to write software. That’s it. And people bought it. And he became very wealthy because so very many people found what he created useful and valuable. And many of his buyers went on to solve important problems using Microsoft machines in medicine, science, engineering and everything else, and as a consequence countless lives were saved and improved. All because Gates (being self-interested) aimed for progress in one area, on problems he cared about, and created wealth. And that wealth bootstrapped more wealth creation and problem solving across the world. If we aim to solve problems and create wealth as great industrialists do and have done, then problems get solved so much faster. And more people get helped. And that’s so much better than other methods that solve fewer problems and help far fewer people.
We have to make progress as fast as possible. It’s the best thing for everyone. Giving wealth away – taking it from where progress is happening fastest and gifting it to where it’s not – hurts more people than it ever helps.
So if you think morality is about helping the most people as fast as possible, altruism is not that. It’s the opposite, and so by a utilitarian standard it is actually evil. This is the moral blindspot and evil kernel at the heart of calls for “redistribution”. It steals from the children of the future to help some people today. It says: those who produce wealth have always done so by some corrupt means, and though they make some progress, that virtue cannot make up for the sin of acquiring wealth by ill-gotten means. Of course, all the arguments that the wealth was not ill-gotten but heroically created – through discovering the knowledge that solves the problems people are willing to pay for – are ignored.
So if altruism is about helping other people as the EA people claim...then EA isn’t maximally altruistic in the long run. But creating wealth would be.
If we put aside altruism and utilitarianism as our moral compass, then we can simply consider solving moral problems directly and not merely mitigating some of their effects. But moral problems require that solutions be found quickly so suffering can be alleviated for everyone. And this means fast progress. The creation of knowledge. To do that we need time, and because “time is money” we need wealth. And we need to go faster. That needs improvements to technology. Better technology. And we need research – scientific and other kinds. All of this requires more wealth. Wealth has to be created: it’s not a finite amount to be split up and distributed more fairly. It is a thing people create and then solve problems with, to the extent they know how. We must continually create more wealth to discover more knowledge and make progress fast enough that the rate of solution finding always outpaces the rate of problem encountering. If things slow and stagnate, we risk it all. We risk everyone.
Consistent with every speech he gives, this is a wonderful talk by Douglas Murray. The centre of the bullseye for Douglas is, as always, a concern about politics and existentially important cultural issues. He is not really doing philosophy (much less epistemology). So this may seem terribly unfair and pedantic. Nevertheless, my interest is epistemology, and so, hearing the grave intonations of Douglas Murray utter such a philosophical cliché so early on, I felt the need to say something on the matter. At around the 40-second mark of the speech above, Douglas says:
“It’s very easy to be a critic. It’s very hard to create. Yet it’s creation, not criticism that builds societies and indeed inspires people. And gives life meaning.”
The irony is that Douglas is one of the most brilliant critics of our time! His books are excellent critiques of much received wisdom, of politics, politicians and some of the most pressing global issues. The cliché I wish to highlight is this: people distinguish creation from criticism with a bright line, and regard criticism as somehow bad – or easy – and creativity as only ever good. What Douglas means, I assume, and what I guess most people mean when they have a go at “criticism”, is something more like “insults”. Insults are not criticism. Mere contradiction is not criticism. “You’re wrong” barely makes the grade as actual criticism.
So what is criticism, actually? Well, firstly, it’s a creative act – hence the way in which it cannot be divorced from creativity. (And creativity, for what it’s worth, can only become useful innovation when criticism is carefully applied. Not all flights of imaginative “creativity” are good.) Criticism is an explanation of how something is wrong or bad or deficient, and why. Of course this is the ideal case. Sometimes criticisms fall short and might be “bad explanations”, or only partially make the case that some idea or creative work has a weakness or flaw. The criticism might not be valid. Or even when it is valid, it might not be fatal, because there may be no alternatives on offer.
What Douglas does in the rest of his speech is criticize. He’s a critic! He criticizes politicians and political systems, he criticizes lots of ideas and practices. He criticizes whole cultures (even his own) – in short, he is a grand critic in the great tradition of British orators. But he creates all these wonderful criticisms and defends them with good explanations. Some I disagree with, but the overwhelming majority are good observations of actual things going wrong, and how, and why. And that is what great criticism is.
When Douglas devised this speech, or speeches like it, and wrote his books – he created. But I’m sure he made more than one draft. He criticized his own work. He was a critic of his own work. Did he find that easy, I wonder? I doubt it. And to come up with this long list of deeply insightful criticisms of European Union policies – did that not take great creativity?
Here is the key: someone who says, “Douglas, you’re wrong. You’re a fool” is not an actual critic. They’re something else. Absent further good explanation they’re just a mean person! Critics are not necessarily mean. And being a mean, cruel or insulting person doesn’t make you a critic.
So we need both. What builds society is indeed creation. But only when coupled with criticism. An imaginative architect can conjure the most fantastic design. “How wonderfully creative you are!” people may exclaim. But when the engineer arrives to say “That wall there is not physically possible. It simply cannot support the roof (for reasons x, y and z)” this criticism is neither bad nor easy. The engineer may have to call on specific pieces of physics and other sciences to create an explanation of how the design fails. Applying general principles to specific cases takes creativity. The creative design in this case may indeed have been the easier thing and, ultimately, the bad thing. Creativity uncoupled from criticism is just imagination. Creativity coupled with criticism brings innovation.
So let us alter Douglas’ introduction just a little,
“It’s not easy to be a critic. Here I stand, bravely pointing out some difficult truths of our time. It’s very hard to create such criticisms of ideas some people hold so dear. Yet it’s this kind of creative criticism that builds societies and indeed inspires people. And gives life meaning.”
I looked into Universal Basic Income (UBI) as it has been a hot topic recently. Here's what I found: it’s welfare. So it’s Socialism. There is absolutely nothing whatsoever new about this idea. It is money taken from the taxpayer given without conditions to people who do not work.
Except it’s worse than normal socialist welfare because it applies to absolutely everyone regardless. So it’s closer to Communism.
Except it’s worse even than that. At least with communism people are ostensibly required to do something productive, even if most of the wealth they create is confiscated. With UBI you aren’t expected even to do that much. You don’t have to produce anything.
None of this would prevent people from actually being creative, of course. But it will eliminate one of the important motivations people have for being so – namely, so they can produce something of value to others and gain income from doing so. If they gain income for doing nothing, at least some will decide not to produce anything of value. Not everyone. Some. Deciding not to produce is a much more difficult life decision to make if your survival depends on your creating something of value.
UBI begins with the assumption that robots - AI - will take almost all the jobs that presently exist. UBI ignores that the only jobs that can possibly be taken by AI are ones that can be automated. This has always been the case. It is exactly the same situation we have always been in since the loom or the computer first appeared. Yet unemployment hasn’t risen. It’s remained stable or even decreased. And living standards continue to rise anywhere economic freedom is implemented.
People have moved from drudgery - work that can be automated - into creative work and continue to do so. We are all creative. Anyone who asserts otherwise simply doesn’t understand what a person actually is. We are creative entities. Not draught horses. A draught horse just pulls a heavy load. The "work" they do is very much the way physics defines work: it is the product of a force over some distance. The draught horse drags a load across the ground moving it from place to place. It is drudgery.
People are above that. We should all be moving away from draught horse type work (anything that can be automated) into creative work. Work that requires problems to be identified and then solved. Ugliness that needs to be made beautiful. Evil that needs to be made good. This is what we do.
If AGI arrives, all the better. AGI are people too. They won’t take "our" jobs. They’ll be people - like us. And the more people, the better. The more ideas. The more solutions. The faster we can address the problems of the world. And the problems of the world cannot be known in advance. We need to produce knowledge to create the wealth so we can fund the solutions of tomorrow. So we all need to be directed towards creative output. Not engaged in pulling loads like horses.
People are worried about job losses as industries change. But it has always been the case that industries change. "But now is different" they say. It's not. That too has been said before. Change and progress are inevitable and good in an open society - in a culture of criticism. People are, right now, particularly worried about industries like transportation. All those truck drivers, taxi and Uber drivers, train drivers, couriers, delivery people - anyone involved in driving as an occupation. The fear is this will all soon be automated - and all those people out of a job. And then: crisis. But people move from job to job all the time. Again: there is nothing new here. Indeed more and more people spend less and less time in a single job. Why people think truck drivers are especially unable to learn new skills, I do not know. They can - as much as anyone else. But we are told the crisis is coming. Millions of people out of work overnight. Crisis. Upheaval. Discontinuities.
Hence the need for UBI.
But here’s another solution if you really are concerned that truck drivers and the like are some special case. Actually, here is a solution regardless of where you stand on the "almost all people are soon to be automated out of their jobs" end-times scenario. If you are genuinely concerned about this - are a serious politician, say - then cut taxes now. Cut taxes on vehicles - now. Cut income tax - now. Allow those drivers - or indeed anyone engaged in a non-creative job - to save their money and not have it extracted by the government NOW. Let them save a “nest egg” so that when something foreseen or unforeseen happens (like job loss) they’ve sufficient wealth saved in cash or property to support themselves, and they don't have to turn to the taxpayer for support. Take out the middle man. Why tax these people so heavily now, only to give it back to them when they become redundant? Let them save their own wealth now.
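The arithmetic behind the "nest egg" argument can be made concrete with a sketch. All the figures here (salary, tax rates, savings rate, years) are invented for illustration and are not drawn from any real tax schedule:

```python
# Hypothetical illustration of the "let them save now" argument.
# Every number below is an assumption made up for the sketch.

def nest_egg(gross_income, tax_rate, savings_rate, years):
    """Cash saved over `years` if a fixed share of after-tax income is put aside."""
    after_tax = gross_income * (1 - tax_rate)
    return after_tax * savings_rate * years

# A driver earning 50,000/year who saves 10% of take-home pay for a decade:
high_tax = nest_egg(50_000, tax_rate=0.35, savings_rate=0.10, years=10)
low_tax = nest_egg(50_000, tax_rate=0.20, savings_rate=0.10, years=10)

print(f"Nest egg at 35% tax: {high_tax:,.0f}")  # 32,500
print(f"Nest egg at 20% tax: {low_tax:,.0f}")   # 40,000
```

On these made-up numbers, the lower tax rate leaves the driver several thousand more in reserve against redundancy - the "cut out the middle man" point in miniature.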
This then shifts the burden of “who is responsible for providing income to an individual?" from the collective back to the individual.
Socialist memes are deeply entrenched. Even if people begin to appreciate that communism (or some aspects of it) was in error, and so begin to question and criticise these terrible dogmas, they rise up again in new forms, repackaged. Thus it is with “UBI” - it is no more than a repackaging of the old idea that people should earn the same amount of money regardless of what they do. But as I said - it is even worse than this because it does not even require that you work. It assumes people are not creative - but rather cogs in a great machine. We exist in order to perform labour (i.e. arduous work). But this is not our nature. We are creative. The Marxists are simply wrong that arduous, difficult work is what people do and is what creates wealth. No. What creates wealth is ideas. The rest can be automated. How can we move from a mindset of "people need to labour and sweat to earn money" to "people need to be creative and have fun and find solutions - leave the 'labour' to the robots"? We simply need to allow people more opportunity to be creative. And they will have this if they can keep the money they earn and not have it in large part confiscated by the government.
Creative people need freedom, and the only system that allows people to be free - the only economic and social system that has at its heart a principle not to use force, not to engage in theft of the wealth people create, and to allow people to trade (or not) with those they choose - is Capitalism. Only Capitalism explicitly has an injunction against the extraction by force of the wealth created by Alice in order to give it to Bob, regardless of what Bob has done with his life.
UBI rejects all this. UBI takes from Alice the wealth she has created because of the pessimistic assumption that Bob simply cannot create wealth. It views Alice as somehow having gained her wealth through illegitimate means. As such - Bob, no matter what he has been up to, actually deserves some of it. And the only people who can ensure that Alice does indeed hand over the products of her labour are the government. And should Alice refuse, then men with guns will come to her door and demand her wealth. Wealth she might otherwise have used to create more wealth.
The alternative to this dystopian view of people and civilisation is an open society of optimism and kindness. People can create wealth. All of us. Even Bob. It is our nature. It is what we do: create. And as a community we enjoy and value the creations of others and engage in kind and generous exchanges of ideas, creations, services and goods. Not in equal measure - but it is good, too, that some may succeed through extra hard work and great inspiration, rise up, and change the whole civilisation. Others can find success in fertile little subcultures, which arise wherever everyone does their own little (but valuable!) thing and people trade with one another because they want to. Money is exchanged for goods desired, and the people we want to pay get paid. The only real factors that slow this wonderful flowering of ideas are force and the threat of it. When criminals or the government come with weapons to take some of what we have created and use it to purchase goods and services we were not in the market for, to gift to people we do not know - that’s wrong. That's theft. That's evil.
Most of us are kind and generous, and had we wanted to gift the money to a charity, or indeed to an individual in need, we are now unable to - because what we had has been taken from us at the point of a gun by people who claim they know better.
UBI is not needed. What is needed is an understanding that people are creative. In particular they create wealth. And if they are allowed to keep the wealth they create through their hard work - creative or otherwise, then they will be able to save. And if they were permitted to save sufficiently, UBI wouldn’t be on the cards at all. It would be seen for what it actually is: theft.
All sorts of unconscious phenomena enter into our considerations, decisions and choices. If you are waiting for the 9:47am bus and it fails to arrive - this event enters into your consciousness unbidden by you. You had no control over it. But now you are thinking “Oh no, I may be late.” At that moment a taxi approaches. Again: unbidden by you, and more thoughts, also not authored by you, enter your mind. You now consider: “should I hail the taxi?” You deliberate. You try to create a good explanation.
Was your meeting to be at 10:30am or 11:30am? Maybe you’ve time enough for the next bus. But maybe you shouldn’t risk it and take the taxi.
Parts of this process are unconscious. Much indeed. But parts are conscious as you think and reason to form (create!) a good explanation of what to do next. You have a choice before you. The world need not be one way or another. “Bus or taxi?” - you must think quickly. You must choose. The meeting is at 11:30am, you recall, within a few milliseconds. “I’ll just wait.” You’ve chosen by reason. Nothing has forced your hand. The decision was a free choice - an exercise of your free will.
Had a terrorist come up behind you, pressed a gun to your side where you could see it, and said “Get that taxi”, then new information would have arrived. Now, I would say, when you obey, this is different. Certainly you might object - but really you are doing OTHER THAN YOU WANT. Other than you desire. Other than you would have chosen. You are being COERCED. When there is coercion it is not the exercise of FREE WILL. It is something else. It is a decision under duress. Your creativity is being impeded. It is subservient to your survival and your emotions - fear especially. You aren’t thinking clearly.
Now notice that in the scenario of the late bus - where you just wait peacefully for the next one - this account has required: creativity, choice and free will. I don’t think we can easily remove any of those. Or if we can, they simply “pop up” as another mystery. You may deny free will or even choice. But surely creativity is something you cannot deny. But what are we creating? Explanations. Why one explanation rather than another? We desire - surely. But why? Why desire anything? Do we just slavishly obey impulses or is there deliberation? What is this deliberation? An illusion? So it doesn’t matter if we deliberate? Surely it does matter if we take time to reflect. Surely we create better things? Make better decisions? And isn’t that decision to take time itself something that can be learned? And doesn’t it become a choice? And isn’t choosing to do so a free choice? You aren't being coerced?
What makes people unique? What is this thing? Is it creativity alone? There is something there - something fundamentally different about humans compared to other animals. Whatever it is seems to allow us to break free of our genes and our instincts. Cities, computers, our languages - in short our explanatory knowledge is not encoded in our genes. So that stuff we accomplish that is not encoded in our genes is being generated by our minds by a process we barely understand. We call it "creativity". But it's a thing we direct. We choose to direct our attention, and thus our creativity to this or that thing. And that conscious act of direction is an exercise of free will. What we're often creating is knowledge about how to solve our problems. But what knowledge to create isn't something that is in our genes and it's not "in" the laws of physics. But somehow it nonetheless is "in" the universe - it's part of reality. So when we choose to use this creativity of ours it is a parsimonious technique to simply call this an exercise of our free will.
Exploring what properly constrains the production of knowledge is a very interesting topic, and ethics forms but a part of our considerations of what limits the creation of knowledge. Those constraints are, however, far broader than what is dictated by parochial concerns about what *should* be done in terms of generating knowledge. Because the growth of knowledge is inherently unpredictable, an argument looms that perhaps the only ethical principle one requires here is: do not apply ethical prohibitions to the creation of knowledge. Of course, practically speaking, we should not seek to discover the most hurtful thing we can do to make people suffer - that would be abhorrent - or the most dangerous risk we can take. We can play games like this and suggest that we therefore need tight restrictions on what problems people should try to solve. But such concerns are not genuine limits upon the growth of knowledge. They are rather silly moral thought-experiments about how values seem to conflict (on the one hand valuing knowledge production, on the other valuing personal autonomy, for example), and they are always resolvable with a little critical enquiry.
So ethics, typically, is not - or should never be - the biggest constraint upon the growth of knowledge. The growth of knowledge is motivated by problems that arise. That is what the growth of knowledge is: the search for solutions to some problem situation we find ourselves in, personally or as a community or civilization.
But there are other constraints upon knowledge. From logic, for example: we cannot hope to discover that eggs are simultaneously good to eat and also deadly poison (modulo logic games like: some people are lethally allergic to eggs, or eating 100 of them might kill a person).
Knowledge production is of course limited by physical law: there are limitations due to time, space and energy, and there are perhaps limits yet to be explored (like the so-called “no go” theorems found in pure mathematics and physics - and perhaps more we’ve yet no notion of). David Deutsch has explained the great dichotomy when it comes to the limits of knowledge: whatever is not prohibited by physical law is possible. So the only thing preventing us from accomplishing something we want to do, and which we've decided is good to do, is *knowing* how. That's an amazing thing. Resources are almost always plentiful - the universe is vast. Taking a cosmic perspective, it is not matter, energy and time that are scarce (the universe provides these in abundance, as it happens) but rather knowledge that is always scarce. (See his books for this - or his TED talk.)
But there is also the other direction: it is not only constraints upon knowledge that matter, but the availability of knowledge - the limiting reagent in both the universe and our lives. A lack of knowledge is itself the constraint that prevents us personally, as families, communities and whole civilisations, from accomplishing what we want. When we lack *that* resource - knowledge - everything else (importantly, progress) stagnates. Civilisations do, most especially, and so do our own personal lives.
This idea of "constraints" as a theme through which to view knowledge can be a useful one. Ethics, on this view, is but one example of a constraint on knowledge; there are many ways the production of knowledge is constrained, and many constraints that result from our lack of knowledge and lack of progress in creating it. “Constraints” might seem a gloomy lens through which to view a thing, but on analysis this is an uplifting lesson to learn. Creating knowledge - learning more - is typically, in our world as it now is, the only thing (or at worst the main thing) limiting each of us personally, and as a civilisation, from accomplishing our goals. Your choice to know more really is the way to move forward.
*Credit goes to Ric Sims (@sharpcomposer) for remarks inspiring parts of this piece.
The Search For Truth
The prevailing view of “knowledge” - handed down from Plato - is that knowledge is some kind of justified true belief. Modern incarnations, descended with mutations to fill the niche occupied by this desire for justified truth, include Bayesianism (a more mathematically inclined twin of inductivism), where the idea is that knowledge is justified as close to true by repeated confirming instances. Whether Bayesian or Inductivist, these kinds of justificationism, applied to science, hold that the more frequently one observes an hypothesis to work, the more confident one can be in expecting it is actually true, more true, or probably true compared to its rivals.
But Bayesianism, in claiming that a theory has some quantifiable (indeed calculable) and precise amount of truth we can discover, cannot explain how, despite repeated “confirmations” increasing one’s confidence in the truth of a theory, that theory can still be shown utterly false by an observation it cannot accommodate. Indeed it cannot explain why it is precisely when confidence in a theory's truth is at its highest that theories are typically shown false. In other words, on Bayesianism, when we have every reason to expect the theory to be true, it is shown false. For example, every single observation made prior to around 1919 was a “confirming instance” granting “Bayesian credibility” to Newton’s theory of gravity. (If that date is in dispute, we need only move it back to around 1859, before which Newton’s theory had never been known to produce any anomalous predictions. It was in that year that Urbain Le Verrier published an analysis of observations of Mercury dating from around 1697 to 1842 which, investigated carefully, appeared to reveal anomalies in its orbit. In principle these could reasonably, at the time, have been interpreted as consistent with Newton’s theory on the assumption that the orbit was being perturbed by some other massive body - not an unreasonable move, given that the discovery of Neptune relied on something quite similar.) Whatever the case, absent any other theory, the Bayesian method of increasing confidence that a theory is true, given repeated instances consistent with it, meant that Newton’s theory of gravity enjoyed its highest confidence right before it was shown false. At that point all of the observations counted in its favour “flowed”, in some sense, to its replacement: Einstein’s General Theory of Relativity. Or, if they did not “flow”, then the count started again, and Einstein’s General Theory - being without rivals - just continues to grow and grow in truth to this day.
And with each passing day we should be more confident, not less, that it is true. But nothing in Bayesianism - no matter how many confirmations there are - can rule out the possibility that Einstein’s General Theory will be ruled out by a process similar to the one Newton’s went through: some observation inconsistent with Einstein’s General Theory but consistent with some other theory that does everything Einstein’s does while also predicting accurately where Einstein’s fails. Indeed we should expect it to be shown false, because we should always expect some deeper theory to explain everything a currently accepted best theory does...and more. That is: we should admit theories are improvable and that progress is always possible, because knowledge continues to grow. In particular we should expect a theory in physics to be found that is deeper than both quantum theory and general relativity - one single theory that explains why both work and which also does something new that neither is able to: perhaps explain dark matter and dark energy, or something like that. Something at a deeper level. That is what we should expect. We should expect falsity to be shown, and so we should expect that General Relativity is, now, strictly, false. We just don't know how it is, and cannot yet show that it is. One day we will, because we will have both a replacement for it and a test to distinguish the replacement from General Relativity by comparing both against reality in some way (we call such comparisons "crucial tests" or "crucial experiments").
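The paradox described above can be sketched numerically with a toy Bayesian update. The priors and likelihoods here are invented for illustration only: repeated "confirming" observations drive confidence in a theory arbitrarily close to 1, yet a single observation the theory cannot accommodate still collapses that confidence entirely:

```python
# Toy Bayesian updating: confidence climbs with each confirmation,
# then collapses on one anomaly. All numbers are invented assumptions.

def update(prior, likelihood, likelihood_under_rival):
    """One step of Bayes' rule for a theory T versus a catch-all rival."""
    numerator = likelihood * prior
    evidence = numerator + likelihood_under_rival * (1 - prior)
    return numerator / evidence

confidence = 0.5
for _ in range(40):  # forty "confirming" observations in a row
    confidence = update(confidence, likelihood=0.99, likelihood_under_rival=0.5)
print(confidence)  # all but certain: ~0.999999999999

# Then one anomaly (think of Mercury's orbit) to which the theory
# assigns probability zero:
confidence = update(confidence, likelihood=0.0, likelihood_under_rival=0.5)
print(confidence)  # 0.0 - highest confidence right before total collapse
```

The point is not the particular numbers but the shape: no count of confirmations, however long, insures the theory against the next observation.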
To remain with Bayesianism for a moment, it is also important to note that Bayesianism alone cannot explain why an ad-hoc modification to a theory is not “verified” to the same degree. As David Deutsch explains in “The Fabric of Reality”: if the currently accepted theory of gravity is justified as true, or probably true, by all the observations people have ever made consistent with it, then exactly the same observations justify the rival theory that the prevailing theory holds except on those occasions when objects levitate for reasons it does not account for. The theory that “our best theory of gravity is true except when things levitate” is justified by precisely the observations that justify the currently accepted theory of gravity.
So it cannot be the case that theories are justified by repeated observations - no matter how many there are. If they were, the ad-hoc modification that “things sometimes also levitate” would also be justified - even if we have never (yet!) witnessed such levitation that would be inconsistent with the first theory (that the best theory of gravity always applies everywhere).
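The levitation point can be made mechanical. In the sketch below (observations and likelihood functions are invented for the illustration), the ad-hoc variant agrees with the standard theory on every observation actually made, so the likelihood ratio between them is always 1 - and Bayesian updating can therefore never favour one over the other, no matter how much data accumulates:

```python
# Sketch of the ad-hoc-modification problem: two theories that agree on
# all past observations receive identical Bayesian support from them.

def likelihood_standard(observation):
    # Probability the standard theory assigns to an actual observation.
    return 1.0 if observation == "fell" else 0.0

def likelihood_adhoc(observation):
    # The ad-hoc variant ("gravity, except for as-yet-unobserved
    # levitations") agrees with the standard theory on every observation
    # that has actually been made.
    return likelihood_standard(observation)

def posterior_odds(prior_odds, observations):
    """Bayes' rule in odds form: each observation multiplies the odds by
    the likelihood ratio of the two theories."""
    odds = prior_odds
    for obs in observations:
        odds *= likelihood_standard(obs) / likelihood_adhoc(obs)
    return odds

million_confirmations = ["fell"] * 1_000_000
print(posterior_odds(1.0, million_confirmations))  # 1.0 - the data cannot decide
```

A million confirmations leave the odds exactly where they started: whatever discriminates between the theories, it is not the accumulated observations.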
This is an argument against induction and against Bayesianism. Repeated observations are not needed. That is not how knowledge is produced. Instead, theories are guessed (conjectured) and then attempts are made to refute them. In the rare best case there are multiple competing theories. All these theories then get tested against reality by some means. The means - the methods of criticism - along with the subject matter itself are what define a “discipline” or “subject area” or “domain of inquiry” or any other such synonym for fields like “Science” as compared to “Mathematics” and “Philosophy” and “History” and “Morality” and so on.
So let us recap all of this, in broad brush strokes, as it applies to the majority of people interested in this topic of epistemology - no matter where they sit on the spectrum between Plato’s JTB and Bayesianism.
Knowledge, they sometimes argue, is some kind of belief (not all Bayesians do this: some believe in knowledge that need not be about personal thoughts). But belief cannot be a property needed for knowledge, as Karl Popper observed and David Deutsch has clarified in many places. Knowledge is not only something that is in minds. It is also in objects. A telescope contains the knowledge of how to focus light. A jet engine contains the knowledge of how to convert chemical energy into heat and thrust and motion. The DNA molecule contains knowledge of how to construct an organism. A book contains knowledge, as does a computer. But none of these dumb, unthinking objects have beliefs.
So knowledge is not about belief. Must it nevertheless be justified and true? “Justified true” means “shown to be true” - but we have just seen that there is no method whereby a theory can ever be shown to be finally, once and for all, true. There is always some way it might be shown false (and we cannot rule this out). This holds in science, and even in mathematics, and is basically the philosophy of "fallibilism" - the claim that error can never be finally ruled out. Mathematicians make mistakes, and (this is poorly understood but absolutely crucial to appreciate) proofs in mathematics are computations. Proofs are done by something: by a mathematician (or a computer) using some physical object (a brain, or pen and paper, or a calculator), and physical objects obey the laws of physics. And if the laws of physics say that physical processes are necessarily error prone (they cannot be shown to produce the same outcome every time with 100% reliability - a consequence of the laws of quantum theory, our deepest physical theory), then methods of proof will likewise not be absolutely perfect in all cases. More than that - for the reasons stated above about Bayesianism - we cannot even put a “close to 100%” number on it, or any probability at all. My favourite example here remains Euclid’s demonstration of the obvious fact - clear to everyone - that through any two points a unique straight line can be drawn. We now know this to be false, because there exist curved (“non-Euclidean”) geometries, and in some of these many straight lines can be drawn through a single pair of points (on a sphere, for instance, infinitely many great circles pass through two antipodal points). For more on that, see here.
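The non-Euclidean counterexample can even be checked numerically. On the sphere, the "straight lines" (geodesics) are great circles, and through two antipodal points - here the north and south poles - infinitely many distinct great circles pass, unlike the unique line of the Euclidean plane. This sketch samples a few meridians and verifies each passes through both poles:

```python
# Numerical check: many distinct great circles ("straight lines" of
# spherical geometry) pass through the same two antipodal points.

import math

def meridian_point(longitude, t):
    """Point on the great circle through both poles at the given longitude,
    parametrised by the angle t from the north pole. Unit sphere, (x, y, z)."""
    return (math.sin(t) * math.cos(longitude),
            math.sin(t) * math.sin(longitude),
            math.cos(t))

north, south = (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)

def close(p, q, tol=1e-12):
    return all(abs(a - b) < tol for a, b in zip(p, q))

# Three distinct great circles, all passing through the same two points:
for lon in (0.0, 1.0, 2.5):
    assert close(meridian_point(lon, 0.0), north)      # t = 0  -> north pole
    assert close(meridian_point(lon, math.pi), south)  # t = pi -> south pole

print("many 'straight lines' pass through two antipodal points")
```

Euclid's "obvious" uniqueness claim fails here not because his reasoning was sloppy but because his axioms do not hold on the sphere - which is the fallibilist point.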
Knowledge is likewise never justified, because if it could be, the justifications themselves would have to be justified. If they could not be, then our original claim would not be justified as true. But if the justifications for the justifications were in turn justified, and so on, we are led into an infinite regress. So “justification” cannot work as some kind of deep truth about how knowledge works: it rests either on an infinite regress of justifying the justifications, or on stopping at some point with justifications that are themselves unjustified - meaning “justificationism” is no kind of deep and universal truth about knowledge.
And finally “true”. When people here use “true” they seem to mean “certain”. And we cannot be certain because we can never be without doubt. And besides, certainty is just a feeling - one feels certain or not. And objective knowledge cannot be about one’s subjective feelings.
So there we have it for the moment: knowledge is not justified, it is not true, and it is not about belief. Everything about Plato’s definition is wrong. Instead, knowledge is about guessing theories (that solve some problem we have) and then criticising those theories. If we’re fortunate (because we’ve been sufficiently creative and critical, and perhaps have cooperated with other similarly creatively critical people), we manage to have many such theories. Then, through the critical process of experimenting (in science), or disproving (in mathematics), or simply arguing (in all areas) to reveal weaknesses, flaws and contradictions, we whittle away all the theories that fail to survive our criticisms, and - again, if we are fortunate - we’re left with just one theory standing. If we are not left with only one, this, in science, is where we can do a crucial experiment: the experiment where the outcome is predicted to be one way given one theory but another way given another, allowing us to decide which is false. Whatever the case, in whatever domain, usually we’re left with exactly one theory that does what we want it to: solve our problem. And we call that The Explanation.
So we have jettisoned “justified” and “belief” in their entirety from this conception of knowledge. But what about “truth”? Is knowledge nonetheless a quest for “truth”, as Popper says? Above, I seemed unable to avoid the word, or its negation, more than once. Of course, we have seen the quest for knowledge cannot be a quest for certainty (100% infallible truth) - but can it be a quest for something lesser? Well, for the same reason that it cannot be a quest for 100% certain truth, it cannot be a quest for 99.99% truth, or 99% truth, or 50% truth.
So is truth a chimera?
Let us return to mathematics briefly. Surely it is about proving things true? What things? Well, in mathematics what we assume we have are propositions (claims that are identically true, false or undecidable), and we use rules of inference to reach conclusions. But many pure mathematicians understand that because one needs to start somewhere - with axioms, which must remain unproven assertions - mathematics is actually not about proving things true. Rather, it is a domain for showing what necessarily follows from the axioms. If you assume the axioms are true, then you can assume what is proved from them is true. But it is all just an assumption: if the axioms are false, well, so much for your conclusion. And because we have no method for showing that our axioms are actually true (rather than merely assumed), what we move between, in following a rule of inference from one mathematical claim to the next, are not propositions (actual, demonstrably true or false "meaningful sentences") but statements - approximations to such propositions. So mathematics is about showing that claims (which we cannot know to be true) do proceed logically - necessarily - one from another.
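The "what follows from the axioms" picture can be sketched with a toy inference engine. The axiom names and the single rule (modus ponens) are invented for illustration; the point is only that the derivation establishes the conditional "if the axioms hold, the conclusion holds", never the conclusion outright:

```python
# A tiny modus-ponens engine: derivation shows what follows from assumed
# axioms; drop the axiom and the whole chain of conclusions goes with it.

def derive(axioms, implications):
    """Close a set of statements under modus ponens: from P and (P -> Q), get Q."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

implications = [("P", "Q"), ("Q", "R")]

print(sorted(derive({"P"}, implications)))  # ['P', 'Q', 'R'] - R follows, *if* P holds
print(sorted(derive(set(), implications)))  # [] - remove the axiom, lose the chain
```

Nothing in the engine tells us whether "P" is true of the world; it only propagates the assumption - which is the sense in which mathematics shows what follows from axioms rather than proving its conclusions true.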
This works also for any domain of knowledge outside of mathematics, and follows from what is called in the business “Tarski’s theory of truth” (named for Alfred Tarski). Tarski is the person Popper refers to in “Objective Knowledge” (p 44 onwards), where he makes some “Remarks on Truth”. He makes the distinction there, following Tarski, that truth is “correspondence with the facts”, and so this is sometimes also called the “Correspondence” theory of truth (the commonsense view, Popper says). I would add that this distinguishes it from competing claims like the “Consensus” theory - that a thing can be deemed true when some group of people agree that it is (a rather relativist notion if ever there was one: each group, by this measure, when they disagree, has merely agreed upon contradictory “truths”) - and from the “Coherence” theory of truth, where a thing is true if it coheres (agrees) with other known true propositions. Of course, on that view, those propositions are known true because they agree with each other and with some further “true” claims, and so on. At no point need anything correspond with reality.
Popper begins this section on truth with the claim that “Our main concern in philosophy and science should be the search for truth…We should seek to see or discover the most urgent problems, and we should try to solve them by proposing true theories…or at any rate by proposing theories which come a little nearer to the truth than those of our predecessors.”
Is he wrong about some of that? Namely the first sentence? Should that - the search for truth - be our main concern? It would seem our main concern is solving problems. But does Popper suggest there that solving problems is to be identified with the search for truth? We cannot ask him, so I propose that this is indeed what we are doing in solving problems. We are searching for truth by eliminating error to bring us a little closer to truth. By uncovering tiny parts of it and eliminating falsehoods.
If we consider that statements are approximations to propositions (the latter being what we cannot utter, because those are actual truths or actual falsehoods), then the statement, being an approximation, is an approximation to truth or an approximation to falsehood. In general terms, to correct errors is to make progress - to improve. But improvement or progress occurs in some direction. When we solve a problem, things get actually better. There is a direction: bringing the approximation closer in line with reality. That is to say, the statement comes to reflect that reality with increased fidelity. And this increased fidelity - this better way of capturing reality with the statement or the theory - is an objective improvement. How is it objective? Well, it solves a problem that a previous theory could not - that previous statements were unable to explain. The previous theory is shown wanting. In what way? The successfully criticised theory, the one refuted, cannot be the truth, because it has been shown false by observation (or other criticism). Might our refutation itself be mistaken? Of course - as always, we may be in error. But having to make this caveat each time one uses the word "true" or "truth" is cumbersome and violates Popper's injunction to "speak clearly...and avoid...complications" and to regard brevity as important (p 44, "Objective Knowledge").
Theories solve problems. That is their purpose. But how can you know your problem is solved? Well: the solution has worked. What was a problem (the planet was observed at point Y, but you predicted point X because of theory "A") is now solved by your replacement theory "B": when you do the calculation, "B" gives you the answer Y, while the old theory gave you a calculation leading to X. So the solution worked. The new theory worked. This is what "worked" means: it corresponded to something in the world. You compared it to something in reality. Reality matters: it is the adjudicator between your theories. Now of course you might have made a mistake. But modulo that, what do we say about theory A? It has been refuted. What does that mean? It means it cannot account for the observation: the planet was predicted to be at X but was not.
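The bare logic of such a refutation can be sketched in a few lines of code. This is only a toy illustration - the theories, positions and measurement tolerance are all made-up numbers, not any real astronomical calculation:

```python
# Toy sketch of a crucial observation adjudicating between two theories.
# All numbers here are hypothetical illustrations.

def refuted(prediction: float, observation: float, tolerance: float = 0.01) -> bool:
    """A theory is (provisionally!) refuted when its prediction disagrees
    with the observation by more than the measurement tolerance."""
    return abs(prediction - observation) > tolerance

theory_a = 10.0   # theory A predicts the planet at position X
theory_b = 12.5   # theory B predicts the planet at position Y
observed = 12.5   # reality: the planet is found at Y

print(refuted(theory_a, observed))  # True  -> A cannot account for the observation
print(refuted(theory_b, observed))  # False -> B survives the test (not thereby proven!)
```

Note the asymmetry the sketch captures: the observation can rule theory A out, but theory B merely survives - it is corroborated, never verified.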
We cannot jettison truth. Knowledge has something to do with truth. But what? Knowledge creation is about solving problems, and that involves correcting errors. And correcting errors brings us closer to reality, such that our statements about it are approximations to the actual truth. Now what it could mean to "hit on" the actual truth (some call this the "ontological truth") is difficult to say. Could it be possible that "triangles have 3 sides" is in some sense the actual ontological truth? No. It can always be the case that this could be improved in some way. Being unable to imagine a way is no refutation of the idea that people improve their ideas. We cannot rule out the possibility that some future civilisation will agree (because, let's be fantastical for a moment, they have uploaded themselves into some holographic higher-dimensional space) that triangles are rough approximations to figures which, when viewed from our meagre 4-dimensional spacetime, only appear to have 3 sides and in fact, viewed from the broader and deeper perspective available only to more enlightened higher-dimensional beings, actually have more sides. This might seem bizarre, but I'd say it's no more bizarre than, having mathematically proved from the "self evidently true axioms" that triangles have an internal angle sum of 180º, then learning about geometries where this "self evident truth" is NOT the case. So claims in mathematics, once shown true, are sometimes overturned. We cannot know, when we think we've got it correct, that we won't moments later be shown how we've been in error. That there's a problem.
So is knowledge a search for truth at all? So long as we solve problems and correct our errors such that the new theory, which solves the problem by correcting the errors, better corresponds to reality than all rivals - isn't this enough? Yes - but there is a succinct way to put this.
The new theory contains more truth. Or: the new theory is more true. The old theory is demonstrably false and we know it’s false. Do we know it is once-and-for-all certainly false? No. Do we know the new one is once and for all true? No.
Is it true at all? Yes.
Can we say one is more true than the other? Yes!
Can we say by how much more? No. It’s merely a binary distinction. But it’s convenient. One theory has more truth to it than the other.
Are we sure?
No. We never need to be.
Can we say a theory is "true"? Yes - so long as we understand "true" there as shorthand for "fallibly, provisionally true" or "pragmatically true", which we can take to mean: we act as if it is true. And why not? If the proverbial life-and-death situation is before us, we should not act any other way. The patient's heart has stopped and the epistemologically savvy emergency doctor calls for the (external defibrillator) paddles STAT(!). Those assisting need not debate whether it's true that the paddles will work. They act as if "they work" is a true claim.
"Is it true those paddles work?" someone asks our critical rationalist doctor later. "Yes" he says - and quite right too. To say "Well, I don't know if it's true they do. But I do know they work" is not only cumbersome, but it misses something important in fallibilist critical rationalism: the word "true" should come to be known to mean "provisionally true" - this is the default position. Someone who thinks "true" means "certainly true" is making the mistake. That's the error. And it doesn't matter if the majority are making the error and only a minority understand how epistemology actually works. After all, most people think "knowledge" means "justified true belief", but we can still use the words "know" and "knowledge" without being overly concerned to provide the caveats each and every time. When we spot the errors, we point them out; in the case of "truth", if you want to highlight or criticise that error, then affix the adjective "certainly" yourself, to remind people that this is not what the word "true" should be thought to mean in common day-to-day usage. Why should dogmatists be able to claim the word? Let's not cede that territory.
It is quite right to say that General Relativity has more truth than Newton's theory of gravity (it corresponds more closely to reality, solves more problems and corrects errors of Newton's theory), which itself contains more truth than some "law" of gravity like F = 2GMm/r^4 - but we cannot measure the quantity of truth. Truth is not a quantity that can be measured; it is a quality that a theory possesses compared to some other. There are many things we cannot measure and yet we can make reasonable and sensible claims as to differences in kind. For example, in biology it is a routine matter to distinguish one species from another, or even one breed from another. There may be edge cases, but in general the identification that a particular organism belongs to this species and not that one is done largely on the basis of appearance of kind or type. These days we can do this with greater precision using genetic analysis. In epistemology we are not there yet, but there is some symmetry here (and that is no coincidence).
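To make the comparison concrete, here is a quick calculation using standard textbook values for the Earth (the "false law" is just the one invented above): Newton's F = GMm/r^2 reproduces the familiar surface weight of a 1 kg mass, while F = 2GMm/r^4 misses by many orders of magnitude - refutable with a bathroom scale:

```python
# Comparing Newton's law of gravity with the invented false "law" above,
# using standard values for Earth's mass and radius.
G = 6.674e-11   # gravitational constant, N m^2 kg^-2
M = 5.972e24    # mass of the Earth, kg
m = 1.0         # a 1 kg test mass
r = 6.371e6     # radius of the Earth, m

newton = G * M * m / r**2          # Newton: F = GMm/r^2
false_law = 2 * G * M * m / r**4   # invented "law": F = 2GMm/r^4

print(round(newton, 1))   # 9.8 -> the measured weight (in newtons) of 1 kg at the surface
print(false_law)          # ~5e-13 N: wrong by about 13 orders of magnitude
```

Both "laws" are falsifiable; reality adjudicates between them, and only one survives. But no such calculation tells us *how much* more truth Newton's law contains - only that it contains more.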
As in any domain, in epistemology we want to solve a problem. The problem before us here and now is: how can we most effectively - that is to say clearly, efficiently and accurately - convey the epistemology that is critical rationalism? Should we jettison the idea that we are seeking truth? Or should we look at ways of preserving what is useful in that word and modifying what most understand the term to mean? In part this is what I have attempted above. We must be cautious that we are not misunderstood as denying the possibility of truth - that may be viewed as a kind of relativism. Of course we can always be misunderstood. I return once more to Popper in "Remarks on Truth", where he says (as he does, in words to the same effect, in many other places) what I quoted only partially above: "…aiming at simplicity and lucidity is a moral duty of all intellectuals: lack of clarity is a sin and pretentiousness is a crime. (Brevity is also important…but it is of lesser urgency, and it sometimes is incompatible with clarity)." Preserving not only the word truth, but also the idea that we are engaged in a search for it, helps with brevity and with clarity. Rather than avoiding the claim that science and reason broadly is a search for truth, we can merely correct people when they think it is a search for certain truth, or final truth, or "complete" truth (or a "complete science", as Sam Harris is fond of saying). Rather, we just correct them to "provisional truth". Provisional truth that solves our problems.
So is it true "We aren't seeking truth"? Well is "seeking truth" synonymous with "solving problems"? Might it not be parsimonious to use these interchangeably given the facility of both terms? "I'm looking for the truth!" exclaims the exasperated scientist trying to uncover if the wobbly motion of their planet is a sign of yet another, as yet, unobserved planet. Are they wrong to do so? Should it be "I'm trying to solve this problem!".
I don't think it matters.
Do theories need to be falsifiable to be science?
That theories need to be falsifiable is a necessary but not sufficient condition for science. For example, the claims "Eating 1.00000 kg of grass cures the common cold" or "The world will end at 2am UTC on 2/2/22" are falsifiable theories. But they are not scientific. Without a good explanation to accompany them, they are not science; they are just "falsifiable claims". A scientific theory should be a good explanation that also happens to be testable/falsifiable. Popper figured out that falsifiability is an improvement on the verifiability criterion of the logical positivists. It is falsifiability that better separates science from non-science. This includes separating science from pseudo-science like astrology and homeopathy, as well as from things like morality and philosophy broadly. But it has never been the case that all falsifiable theories are scientific theories - for example, the two claims I started this paragraph with.
But is it nevertheless necessary that scientific theories be falsifiable? Well, the scientific theory for some phenomenon - or any theory that purports to be the scientific theory for some phenomenon - must be a good (hard to vary) explanation of that phenomenon. Part of this "hard to vary" quality is that the theory is falsifiable - testable by experiment. In principle. Now it need not be in practice. But that doesn't change its testability in principle. So, for example: many people have observed that string theory is very, very difficult to test. Some have asserted that to observe the predictions of string theory would take a particle accelerator half the size of the galaxy. Now this is impractical. So does this mean the theory is unfalsifiable? No! In practice we cannot build such a particle accelerator. But in principle it could be done. So it's still falsifiable in principle. And perhaps there exist "natural" particle accelerators such as quasars, observations of which might rule out string theory? We do not know.
So, it's science. It makes predictions. We need not jettison falsifiability on the basis of that. What we might do is search for better ways to test it. If it's a claim about the physical world, then the physical world must be the adjudicator of the truth about string theory. Can we rule it out? Can we refute it? Then it's falsifiable. But notice there are two kinds of falsifiability: in principle and in practice. In principle is a black-and-white quality of a theory that is required for science. It is just the claim that some observation of physical reality could, in principle, rule out the theory. But if no such observation can - that is to say, no such observation exists in any possible world - then the theory is not about the physical world. There is no comparison to be made between the actual physical world where the theory holds and a fictitious physical reality where the theory does not. Or vice versa.
Let us take an even more extreme case than string theory (which I argue is science - though for reasons I will come to it is not necessarily "good" or "optimal" science): the theory that there exist other universes, outside our own, where the very laws of physics are themselves different. Now it was once thought that such universes are in principle unobservable, therefore not testable, and that this makes them unfalsifiable and not science. After all: another universe? Outside our own? How can we access that? Well, as it turns out, in principle we could see such a universe. A universe where the laws are different will have different physical constants, and as far back as 1999 physicists claimed to have observed a changing fine structure constant. This would be evidence of a region of space where the laws were different - another universe (by some definitions). It turned out they were wrong (see that very same link above) - but it is this kind of observation that, in principle, could allow us to observe other universes beyond our own. (Or force us to change what we mean by "universe".)
But this “falsifiable in principle” (necessary as it is) as a criterion to demarcate science from metaphysics (for example) is also not sufficient to make something a good, hard to vary, explanation. Let us return to string theory. What we’re interested in is solving problems in physics and string theory is an attempt to unify quantum mechanics (a physics of discrete entities like particles and energy) with general relativity (a physics of continuous entities like space and time). As we have already seen, string theory could in principle be tested with a particle accelerator half the size of the galaxy. That's the worst case scenario - likely things are not that grim. But say they were. There is probably not enough matter for several lightyears to construct such an experiment. It’s impractical. So “in practice” we would have to say it’s not falsifiable. It’s “not falsifiable in practice”. But this is not a black-and-white all-or-nothing thing. In practice means something like “we lack the wealth to do so” - we cannot actually perform the physical transformation of the matter to do this. Actually we do not even know how to gather enough matter - using the technology we have presently - to build one. So knowledge is also a problem here.
The fact that string theory makes this assertion of itself (as being practically not testable right now), as things currently stand, makes it an “easy to vary” theory even though it’s testable in principle. This is because minor modifications of the theory making similar predictions cannot be distinguished by experiment. And many such varieties of string theory exist. So “untestable” in practice is a weakness. This does not make string theory unscientific - it just makes it a poor explanation. For now. Maybe someone will think up a better test. Maybe someone will make a prediction that operates at lower energies requiring a smaller particle accelerator. Or - and this is key - maybe someone will come up with a theory that makes all similar predictions string theory can but which itself can be tested by some routine means available here on Earth - making that new theory a good explanation and worthy replacement for both quantum theory and the general theory of relativity. Such a theory - testable in practice as well as principle - would be a very good explanation. And string theory would then not be.
What is an example of a good theory within science that is unfalsifiable in principle? I do not know of one. Why is it important to distinguish between science and other subjects or disciplines anyway? It is largely a matter of convenience, but it is also important to distinguish efficiently and effectively between science, pseudoscience and scientistic arguments (arguments that claim something like: science can tell us what we ought to do - that there can be a science of morality, or a science of economics or politics). Knowledge is some kind of unified whole, it is true. But "falsifiability" is a useful necessary criterion for science. And it is useful to know that demanding that, say, moral theories be testable would be a terrible error. This would mean requiring experiments to be conducted on people (say) in order to determine whether an even purer version of Communism than anything China or North Korea has ever tried would be a good idea - because, science! No, we do not need to experiment. We begin instead with moral explanations about people and reject the demand that "a falsification is required here before we can properly reject this theory". Morality is not science and we should not require it to be. But science is a place where experiments, conducted on the physical world, are necessary. I think it's necessary we preserve this distinction.
A note on evolution
Quite rightly, I was alerted to and corrected upon a misconception I had about a particular kind of exception to this strict requirement for falsification in science. For reasons we shall see, this does not undermine the central idea that scientific theories must be falsifiable. Now in the case of some (large number of!) theories, they are not practically testable because there are no viable alternatives. This means we need to split the meanings of "falsifiable" and "testable in practice". Because there are no viable alternatives to neo-Darwinian "evolution by natural selection", it cannot be "tested" - because to be testable it needs to be tested against something. And there is nothing. As David Deutsch observes in The Beginning of Infinity: if we observed something inconsistent with the prevailing theory of evolution by natural selection, nothing could be said except that the test we used to find the inconsistency was faulty. It is often said, following Haldane, that "rabbits in the pre-Cambrian" would refute evolution by natural selection. But they would not. They would be a problem - but they could be explained as a rare complex organism that somehow got there earlier than anything else (unlikely), or as a mistake made by our geologist or paleontologist, or as evidence of a prankster. Many things would need to be ruled out (and how?) before we ruled out evolution on the basis of rabbits in the pre-Cambrian. But this untestability does not mean unfalsifiability in principle. These are different things.
If an organism (or many organisms, many different species) were found to undergo only or mainly favourable mutations, then this would be better explained by Lamarckism and would rule out Darwinism. But then there are all those organisms we already know of that would refute Lamarckism. The point is that Darwinism would be refuted as a universal (applies to all cases, everywhere) explanation for the evolution of life. It would just be a special case - presumably of some deeper explanation that accounted for why both Lamarckism and Darwinism worked within their less-than-universal domains. So testability and falsifiability are not synonyms. The latter is needed (and Darwinism has it); the former is about the practical ability to perform some test (experiment) and have somewhere else to "jump to" - some viable alternative theory to test our theory against.
Not everything in science, it should also be noted, is falsifiable. Some eminently scientific claims are unfalsifiable. In physics we say "work is a form of energy". That's a scientific claim. It's also unfalsifiable, because it's essentially a definition. One will never calculate the physical work done (by using a classical formula for work like Work = Force × Distance) and discover that it is not a form of energy. These are just words and terms: though scientific, they are untestable and unfalsifiable. So some things in science are unfalsifiable. But they are not explanatory theories as such; they are more like frameworks within which we do science. In chemistry, the claim that "the 6th element on the periodic table is carbon", or "the element with 6 protons is carbon", is a scientific claim. But it too is unfalsifiable. No one can possibly ever, in any world, discover an atom containing only 6 protons and conclude it is not an atom of carbon. No one will find some element which, upon analysis, is carbon but contains 7 protons in every nucleus (because that would be nitrogen). And no one will find an element that contains only 5.5 protons in the nucleus, bumping carbon up one position on the periodic table. These things are ruled out by the definitions of words like "element" and "atom" and "proton" and "carbon". So unfalsifiable claims in science are common. But the explanatory theories in which these definitions are used, and themselves explained, make predictions that can turn out to be false. That would not falsify the definitions - but it would falsify the theories. In particular, all existential claims of the form "X exists" are unfalsifiable. So the claim "gravity exists" is not falsifiable. But the claim "gravity is a force" is, and it was falsified. Gravity still existed - it just turned out not to be a force but rather, as Einstein showed, the manifestation of space being warped by energy and matter. "Gravity" is a word used to describe some phenomenon that exists.
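The point about definitional claims can be illustrated with a trivial (and entirely hypothetical) program. In the mapping below, "the element with 6 protons is carbon" is true by construction, so no "lookup experiment" on it could ever come out otherwise:

```python
# "Unfalsifiable by definition": this toy mapping *defines* which name
# goes with which proton count, so no query can contradict it.
ELEMENT_BY_PROTON_COUNT = {5: "boron", 6: "carbon", 7: "nitrogen"}

def element_with(protons: int) -> str:
    """Look up an element by its defining proton count."""
    return ELEMENT_BY_PROTON_COUNT[protons]

# There is no possible result here that "falsifies" the definition:
# finding 6 protons *just is* finding carbon.
print(element_with(6))  # carbon
```

The explanatory theory (why elements have the properties they do, how nuclei hold together) makes falsifiable predictions; the naming convention it uses does not.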
The concept of gravity cannot be "falsified" - only what it appears to be, or what it is claimed to be, can be. In an extreme case, the idea "Matter exists" cannot be falsified, though matter may not be the most fundamental thing, in the final analysis. Maybe it is true that there is something deeper - a Platonic realm of sorts from which the appearance of matter arises. But that would just be to explain that matter is an emergent feature. The appearance of it - which is to say its measurable qualities - would still be real emergent things.
So falsifiability is a necessary quality for scientific theories to possess. But not all claims in science are falsifiable. And falsifiability is not the same as testability. In particular, the theory of evolution is not obviously testable in practice, though it is in principle falsifiable. What we call science is, in the final analysis, an open question. It is a domain of study focussed on discovering how the physical world works - the patterns in nature, their beauty and their dangers - in part so we can control our environment and use it to our advantage. We guess what's true and compare our guess against that physical reality in some way. So long as we are making progress and solving our problems, that is what matters. But when progress is slow, that can be when these debates are extra useful to understand.
This post has been motivated by some inspiring Tweets by Lulie Tannet (@reasonisfun) which then resulted in a subsequent exchange of ideas with David Deutsch (@daviddeutschoxf) and others. As always, "The Beginning of Infinity" and "The Fabric of Reality" underpin much of what I say - but errors are my own and nothing I say should be seen as an endorsement by David Deutsch. You can (and should!) buy both books here: https://www.daviddeutsch.org.uk/books/the-beginning-of-infinity/
My full view is expressed here, but this post repeats some specific remarks about Singer, since I do not engage with his position in my main piece because I was so disappointed to read his work. An example can be found here: http://www.animal-rights-library.com/texts-m/singer03.htm
Titled "Do animals feel pain?" I do not want to engage much with his conclusions. Let us concentrate primarily on his methods - that is to say, the philosophical techniques he uses to establish his position. They need to be valid arguments, or we can ignore his conclusions (which will be as bad as simply false, or as good as mere assertions). He does write "We also know that the nervous systems of other animals were not artificially constructed--as a robot might be artificially constructed--to mimic the pain behavior of humans.", which I agree with, as I stated. But when he asks the question "If it is justifiable to assume that other human beings feel pain as we do, is there any reason why a similar inference should not be justifiable in the case of other animals?" he answers "no". He argues, "It is surely unreasonable to suppose that nervous systems that are virtually identical physiologically, have a common origin and a common evolutionary function, and result in similar forms of behavior in similar circumstances should actually operate in an entirely different manner on the level of subjective feelings." but, as I have argued, this is completely false. You can indeed share almost identical hardware architecture (as, say, chimps and humans do with respect to their brains) but the software (the mind!) can be altogether different. And yes, there are hardware differences, of course - and perhaps those hardware differences contain the specialised processing and memory capacity required to run the special "universal knowledge creation" software of a person - but the point is: similar hardware says nothing about software. Two identical Apple Mac computers can run totally different software: one might be running a computer game, another a spreadsheet, and the two look nothing alike. The brain of a chimp might superficially look kind of like the brain of a human: but the mind? Totally different. And so the experiences might be totally different.
Indeed, I argue they are totally different. But Singer, like most people concerned with this topic, is completely confused about (because he is ignorant of) the relationship between the physical and the abstract; between hardware and software. The brain-mind connection. The mind really is a causal agent, as software controls the hardware. He does not know about universal knowledge creators and the morally central role this concept plays in our understanding of the potential for a creature to suffer. Of course, this is no fault of his at the time of writing (that article predates "The Beginning of Infinity" by over 20 years), but I think most people agree with the "animals can feel pain and all pain is bad so that's that" kind of thing. More worrying to me is the following, where Singer writes: "The overwhelming majority of scientists who have addressed themselves to this question agree. Lord Brain, one of the most eminent neurologists of our time, has said: 'I personally can see no reason for conceding mind to my fellow men and denying it to animals…'"
So Singer resorts to *appeal to authority*, and the authority he appeals to resorts to *argument from ignorance*. Singer says "Look, other scientists agree with me" (the inference being: scientists are clever people who get things right - always, though?). And the authority, "Lord Brain", says "I don't see any reason to suggest animals don't have minds like people do", which means "I don't understand the differences". Now if I read this from a journalist, or even a scientist, I could perhaps forgive these sorts of mistakes. But Singer purports to be a professional *philosopher* - one who constructs arguments and explanations in order to establish conclusions; one who knows the logical fallacies and how to avoid them. But he has not avoided them here. He has deployed them!
“…there are no good reasons, scientific or philosophical, for denying that animals feel pain. If we do not doubt that other humans feel pain we should not doubt that other animals do so too. Animals can feel pain.”
As I have argued: animals may well feel pain. But so does a person exercising: and it feels good, even if painful. An animal that feels pain does not suffer - that is a philosophical position that no science experiment can undermine (yet). These are critical distinctions that, if you are engaged in arguing for so-called "animal rights" and talking about something as ethically important as the morality of pain: you need to take seriously. But given the terrible philosophical arguments made by Singer we must, unfortunately, conclude he is not actually philosophically serious about one of his most cherished areas of expertise. He resorts to arguments from authority, arguments from ignorance and a good measure of the emotive thrown in. Philosophers should be far more cautious because if they have important points to make, people might just stop listening if they demonstrate they cannot "ply their own trade" with competence.
Science and democracy share the feature that they are error-correction systems. The former is about correcting errors in our knowledge of the physical world; the latter, in our choice of rulers and their policies. With science, on the rare occasion when we have two theories competing to explain the same phenomena, we can rule one out through a "crucial experiment" (for more on crucial experiments, see here). With democracy, candidates compete to win elections by putting forward policies; if the one who wins, and so has the power to actually enact their policies, fails to meet our expectations, the next election is an opportunity to correct our mistake and try another candidate.
But in neither case - science or democracy - can we ensure that the theory we have, or the candidate we vote for, cannot possibly fail. And we must expect them to fail in ways we could not have foreseen.
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.” - Google Programmer François Chollet (@fchollet), Twitter, 4 Feb 2018
If the average person were educated and informed to a standard that François Chollet approved of, that would not guarantee that, by his lights, the government was not "terribly dysfunctional" (that it was made up of terrible people, or that it never got anything done - see note 1 below), or even that the system itself was not "dysfunctional", if by that we meant something like: incapable in principle of enabling the worst people, by our personal standards, to be elected. Or perhaps it means something deeper: that there is corruption that makes the democracy rotten to the core. But well educated, well informed people are still liable to fall into error, and nothing can guarantee they cannot be deceived. Indeed, here lurks an irony, but it's true: the more well educated you are about a thing, the more blind you can be to the most common errors. You might simply be "used to" making the same mistake over and over again. Expertise can sometimes be a liability, even and perhaps especially in your domain of expertise. The reason is that you often cannot think as creatively, because you think of all the criticisms - that's what makes you an expert, after all! So you think of all the criticisms against the idea that you are wrong, because you know them. Isn't that strange? It's like an expert Korean linguist who is teaching someone the Korean word for (say) computer (which, as it turns out, is "computer" with a Korean accent: "keompyuteo"). Say the (ignorant!) person they teach comes to them one day and says: "I heard from a Korean that that's not the only word. There is another word, and it's 'gaesangi'", they insist. But the expert knows they're correct - there's one word only - and they consult with some native Korean speakers who agree, and besides, they're the expert after all. So they return to the learner and insist: "You're mistaken - there is one word. I've researched this. You can trust me. And I've checked with other native speakers."
But experts can be mistaken, and in this case the learner just happened to overhear some older North Koreans speaking and using that word... which is indeed an older North Korean word for "computer" and not well known by South Koreans. So as it turns out, the ignorant, less educated person knew the truth. There was more than one word in Korean for computer in existence, and no amount of checking with the typical South Korean expert would have fixed that. More education doesn't mean you won't make mistakes that those with less education will not make. We are all equally fallible. There is always an infinite amount we do not know, and we must expect that others know things we do not. Even (perhaps especially) the "less well" educated and "less well" informed. No system of education can ensure errors of this kind become less frequent. No democratic system can ensure that, for example, terrible rulers will not get elected. So even if President Trump really is/was a terrible mistake, no democratic system - which is to say no democratic institution - could have prevented his election in principle if he was a legally qualified candidate.
Of course at the extremes that exact criticism is made: he is not legally qualified. But those accusations seem to be just par for the Presidential election course in the United States now. Obama was not born in the United States, or Hillary Clinton was actually a criminal who should have been in gaol, and so on. If the institutions investigate and you regard them as having worked in those cases, then it is a poor, ad-hoc explanation that says they only ever fail, are corrupt, and are evidence of a broken or "dysfunctional" system when applied to the candidates and parties you do not support.
Now this may seem a bizarre diversion, but bear with me. The average person probably doesn’t think much about the intricacies of how science generates the knowledge that it does. That’s a rarefied kind of interest, of concern only to philosophers of science and some scientists. Then again, so far as “interests” go, there is no "average person" - there are few academic interests all average people share. Does the average person enjoy learning maths, or engaging in deeply refined literary criticism, or history lessons, or do they want to have a deep understanding of civics and constitutional law? Hint: ask a school-aged student to find out. But the average person is indeed interested in knowledge of all sorts - it may be academic knowledge of a subject of interest to them or some project they are working on (both of these often wrongly and dismissively referred to as "hobbies"), or it can be knowledge of their own lives, those of their friends and family, how to do their job well and better, and other day-to-day things. The average person has concerns and interests - perhaps not shared by philosophers of science in Sydney, or Google programmers in Silicon Valley, say.
It’s not really of great importance, though it may be of some use, for the average person to learn that the process that is science is in large part defined by the creation of hard-to-vary explanations of the physical world that can be tested against physical reality. These “tests” are known as experiments - but they are not the only way we have of criticising scientific explanations. It is just that explanations of the physical world that can be tested against physical reality - by experiments - are precisely the scientific ones. The experiment should be able to be performed in practice, which is to say we should possess an explanation of how the experiment can be conducted by us.
Some versions of string theory, which postulate entities that can only be resolved with the energy of a particle accelerator the diameter of the galaxy, would be an example of a possible explanation of the physical world that is, in my view, not scientific. Although there is some kind of test possible “in principle”, the lack of an “in practice” explanation of how to build such a device - given the transformations people can actually make in order to test the theory - should remove it from serious contention as a way forward in making progress in physical science (as useful as the mathematical techniques discovered from explorations of string theory have been in mathematics).
Sometimes this process of science generates theories that are false. Indeed this is rather the rule and not the exception. We should expect that the vast majority of scientific theories will turn out to be false. This is simply a claim that the scientific enterprise is unbounded: we will always be able to improve upon any explanation we do discover. And any improvement will show how flawed the unimproved version was and why.
The “average person” might think that science is an engine for generating truths about the world: that once the authority of science, in the form of some professorial scientist, has deigned to profess a truth, we can trust such claims to stand as “scientific truth”. But science is very much a catalogue of errors. As David Deutsch has said, it would have been preferable if scientific theories had been called scientific misconceptions from the start.
Science, for example, has at various times produced theories such as “spontaneous generation”, an attempt to explain how non-living matter can become living. Some of the earliest theories in chemistry included the “phlogiston” idea, where this substance inhabited all matter and it was this that was combustible. Earthquakes, volcanoes, moving continents and other eruptions of the Earth were explained as evidence the planet was expanding. And for centuries it was believed that an instantaneous-acting gravitational force existed between all masses in the universe, and that this explained the motion of objects from orbiting planets to falling apples. And these are just some of the more prominent examples from just biology, chemistry, geology and physics. Astronomy is a catalogue of bold conjectures about the nature of the cosmos being utterly decimated by the light of observation. Literally. And we are all familiar with supposedly rock-solid medical and nutritional advice seemingly turning on a dime to advise the precise opposite of what we were once taught (cf: eat more carbohydrates and less protein (becomes) eat more protein and fewer carbohydrates).
So is this system of producing explanations in science flawed? Why should it consistently throw up utter falsehoods? Why won’t it simply provide us with the final correct answer? Of course there is no such answer. Only better and better answers. Approximations of increasing fidelity, reach and depth. So although any given explanation must be expected to be flawed, the system itself cannot be blamed for those flaws. This process where a creative scientist tries to solve a problem with what is known by producing a new theory is roughly the way knowledge generation in all domains works. An idea is guessed and then anyone interested attempts to refute that guess by careful criticism. The criticism might be how the idea is false, or ugly or not so useful compared to some other. But if the criticisms all fail, and the new idea accomplishes everything any competing idea does - and perhaps more (and more elegantly) - the idea survives to earn the moniker “The explanation of…”.
The system must be expected to produce utter falsehood. Indeed it is required to. If science is about generating beautiful explanations, then for each beautiful explanation that becomes “The Scientific explanation of…”, defeated rivals will lie in its wake. The decimation of opponents - typically through experiment - is a constant in science. It reveals how what we once thought was correct actually always was utterly false and flawed. And how blind we were not to see. But we are fallible, and it is no sin to keep on making these mistakes. That is our nature. We are fallible. Our fallibility is tied intimately to our creativity - that feature of us that strives to make bold conjectures - majestic guesses - in an attempt to improve our lot and what we know. But that process is an undirected one, for we cannot know in which direction the ultimate ontological truth about reality lies. We set out from our island of what is known and sail into the unknown, hoping to find a better place. If we fail, we can always find our way back, but there is no guarantee we will land somewhere better. That is our nature. Science cannot provide sure answers - it can only provide the conditions under which those answers can possibly arise.
Now all of that, if absorbed, might make someone somewhat better informed about the process of science and some of its history. And they might learn a little about epistemology besides. But would that do anything to sway them in an election? Precisely what kind of information could make the average person “well informed” enough such that the system was not broken? Should it be about who should be elected?
The process that is democracy is in large part defined by the conditions under which the successes and failures of the rulers of a society can be tested against the expectations of an electorate, such that, if those expectations are not met, the rulers can be removed without violence. The ultimate expression of such “tests” is the election - but elections are not, of course, the only way of criticising elected rulers. Rulers are criticised every single day - the media and much of the electorate are obsessed by it. It is just that elections are the means by which rulers who fail to meet the expectations of the electorate - which is to say, judged by some comparison of the politicians' stated policies with what they actually achieved - can be removed. Democracy is, or should be seen as, a system whereby we trial some leader (on the basis of their stated policy) and, should this leader fail to meet our expectations, we can remove that leader through a process that allows us to install some other leader with different policies, should we so choose.
Now people are all very different. We are fallible and have different values, different knowledge and different circumstances. This kaleidoscope of differences ensures that we cannot possibly agree all the time on every topic. Some people are more or less knowledgeable about this or that thing, and that different knowledge will come to bear when it comes to making decisions about whether this course or that might best suit their own interests or the interests they care about. And this, it must be said, is a wonderful thing. It means that there will always be wildly divergent ideas about how to proceed in life. Each of us, as rulers of our own lives, guess, trial and correct the courses we take, amending our paths and trying to plot out a better course. Often, many of us fail terribly. We are fallible. We lack the knowledge to know what to do next.
Sam Harris and Russell Brand had a conversation recently on Russell’s podcast, “Under the Skin”. That two-hour conversation was an impressive display of just how far apart two people could be, and what entirely different “language games” they could play, while somehow keeping the conversational ball in the air. At times they really weren’t even playing the same game, the disagreement was so great. So while there seemed to be little common ground at any point on almost any issue of substance (except that there exist mysteries in the world and that human beings are important), both nevertheless found an opportunity at the 1h 50min mark for a point of enthusiastic agreement:
Harris: “Democracy seems impressively broken to me and capitalism seems impressively broken to me…except the alternatives seem worse…this is Churchill, right?”
But why? Why does Sam think this? One need only listen to the Waking Up podcast to get a taste. Donald Trump’s election is a clear sign of a broken system, in Sam’s eyes. Though Sam would have been no fan of Hillary Clinton either, so perhaps the “broken system” is evidenced by the dearth of choice on offer - as though the choices were particularly abhorrent. What is remarkable about this is how Harris notices - mere minutes after making the claim that capitalism is broken - that today we live in a wonderful age that seems to keep getting better, where only 10% of people are in extreme poverty, while a mere 150 years ago those numbers were flipped. Now why is this? Is it the spread of socialism, or is it free trade (capitalism)? What makes the difference?
But Sam is very worried. He agrees, he says, at the end of that podcast, with some experts that we are basically in a new "Cuban Missile Crisis" - but no one has noticed. That now is particularly dangerous. America is at a particularly unstable epoch - irrationality rules, fake news has proliferated, the experts have been shown to be wrong time and again, and there is mistrust all around. Congress and the Senate seem incapable of passing legislation (again, see note 1). There is deadlock. All of this: a sign of a broken system.
Sam's idea that our systems are broken is a common underlying thought of our times. It is shared by many in Europe where Brexit too is seen as evidence of a terribly broken system. These “populist” uprisings. People voting against their own economic interests. The system is broken. The outcomes are unjust and unfair - especially for the least powerful. Those people have been deceived by corrupt double-speakers. Political charlatans interested only in lining their own pockets and those of the powerful corporations. The system is broken.
But when did it break? In the case of the American system: Did it break sometime during Obama’s term? Did it break at the moment Trump was elected? Perhaps when he won the nomination? What exactly is broken, except the expectations of those who do not agree with the outcome of these elections and referendums?
Let us remind ourselves of François Chollet's (@fchollet) tweet in full:
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.”
Let us observe (before we return to this shortly) how wondrous is the claim that something can be simultaneously "free" and "costly". This is a tactic employed by those who believe government is the best provider of some service - especially something like education. What is meant here is: the education is "free" to the user and "costly" to the taxpayer. (It's not quite like this, of course - because many of us were indeed taxpayers when we were users - so we paid.) "Free" and "costly" means: the government extracts taxes so that for some the system is (apparently) free while for everyone else it is costly. That is what is meant by "free" yet "costly". And this is why I argue that it entails (logically implies, assuming the preceding holds) that "we need government funded institutions to ensure people vote the right way."
The process works like this: the taxpayer has money extracted from them under penalty of force by the government, which then allocates some of it to educational institutions. Governments don't do this without conditions. After all, if there were no conditions, anyone at all could claim they were an educational institution and demand money from the government. So governments require that "standards" be met in the institutions they fund. Meeting "standards" requires a comparison between the content the institutions provide and a set of criteria designed by government. So "standards" shape content - which is to say the curriculum. In reality it's far more prescriptive: standards are the curriculum, and also how the curriculum is taught and assessed. Standards - conditions for funding - are extremely restrictive; inspections occur, and schools and other educational institutions are closed if government requirements for what is taught are not met. And some of that content must include things like: particular interpretations of history, how economic systems and commerce should operate, what the normative response to social and environmental issues is, how a legal system should be set up, the place of religion in society, the proper role and function of government, and so on and on. This is a terrible conflict of interest. If the purpose of education is to help young people foster and explore their own creativity and become better critical thinkers, this cannot happen when the government is mandating standards. As governments must do - else how can they possibly decide between the many institutions competing for funding so that education can be provided "free" to students? Hence any simultaneously "free" and "costly" system of education must amount to a government funded system of indoctrination. A system which, in part, has at its core an objective of helping to influence how people view the government and, therefore in democracies, how they choose to vote.
Returning to the Tweet under discussion. That view - popular in some circles - suggests that the outcome of an election is an indication of the “functionality” of the system itself. Which is to say if the outcome is bad, then the system that produced it must be faulty. But that would be rather like arguing that the production of a demonstrably faulty theory is a demonstration that the process of science itself is faulty. But as we have seen: science is in the business of producing faulty theories only to be replaced by better (though we must expect ultimately faulty) theories.
Now you may or may not think that Donald Trump is a great thing for America. But let us go with some of the more common positions preferred by his opponents: Donald Trump is a terrible president. He is altogether unsuitable for the position.
Does that mean the system in America is broken? No - it can merely mean Donald Trump is terrible, people elected someone who does not deserve to be there (so they made an error), and he needs to go. Happily, the system is perfectly designed to solve that problem. There is an election every four years in America, and a terrible president can be removed. That is what happens. And so far in the history of America that process has occurred without violence, except where presidents have been assassinated.
So the system works. What is the alternative?
Now maybe you think: but no! Trump is corrupt and is not entitled to be there and never was. People were hoodwinked by a liar. Now of course accusing politicians of lying is hardly the uncovering of some deep truth. But can't people who voted for Trump decide for themselves?
"But no! They cannot," perhaps the retort may come. "They are incapable. They are too poorly educated. The average person is not well-educated and not well-informed. So that is why a charlatan can be elected."
But that cannot be so. People are better informed than ever before. And they have always been fallible and gullible. Those things are constants - but information is now more easily accessed and people can choose among sources and choose criteria for judging those sources.
Back to @fchollet's tweet. What would “free high quality education for all” really entail? Well firstly - it cannot be “free”.
There is no such thing as "free" except, perhaps, the air.
Free here, as it always does in these cases, is a euphemism for “taxpayer funded”. Teachers do not work for free. And government funded education is necessarily indoctrination. He who pays the piper calls the tune, after all. North Korea provides “free high-quality education for all”. They really do. Education and learning are not at all identical, as I say here. Some North Korean children are excellent at mathematics and some other subjects, and of course they can recite all sorts of “facts” about what it is “right” to think when it comes to the government. The system works! It's not broken. It is doing exactly what the government wants it to do. And the system is a terrible travesty and tragedy.
What can it mean for a system in a free (in the philosophical, libertarian sense) and open society to provide a high quality education?
Firstly - again - it cannot possibly be free. Whatever a child wants to learn, they should be able to. And that might include: no school at all. It might include doing little more than attending the local park each day with their iPad and their friends. Through the internet they have access to more knowledge than anyone has ever had. And if they have a loving set of parents and friends and other wise adults around, they can have conversations to correct any errors they might encounter in their learning travels. Children do indeed love to do this (only forced school manages to switch off this natural love of learning). But iPads aren’t free. Or maybe they would like to have swimming lessons instead, or piano lessons, or Korean language lessons, or, or, or… whatever the case, those lessons won’t typically be free. They cannot be. People become experts at things at high cost to themselves, and so they are entitled to sell their services. They shouldn’t be forced to provide their services for free. And likewise, nor should the rest of us be required to pay for someone else’s children to have swimming lessons. Maybe we can barely afford to pay for our own child’s swimming lessons - or whatever.
So “free high quality education for all” cannot be free. That makes zero sense.
High quality will mean children must pursue their own interests and therefore will necessarily form very different views about the world and have wildly different preferences, such as for things like who to vote for in elections.
And as for “for all” - we don’t want everyone to do the same thing, let alone be forced to. Especially children. The future is in the other direction entirely. Some small number of students might choose to pursue a traditional course of study of the kind François Chollet might approve.
As Popper writes in “The Open Society” (you can find the whole context at www.theopensociety.net/2017/12/what-democratic-institutions-may-be-expected-to-do/, thanks to Peter Monnerjahn @PeterMonnerjahn):
“The idea that this problem can be tackled, in turn, by an institutional eugenic and educational control is, I believe, mistaken; some reasons for my belief will be given below.) It is quite wrong to blame democracy for the political shortcomings of a democratic state.”
(The problem in question of which Popper speaks is “dissatisfaction with “democratic institutions because they find that these do not necessarily prevent a state or a policy from falling short of some moral standards or of some political demands which may be urgent as well as admirable.”)
And I agree. When the state or policy falls short, it cannot be that ever more education of the people is needed in order to fix the democratic institutions (the system). The system of democracy - like the system of science - cannot prevent flaws and faults and “falling shorts”.
And with respect to education, anyway, the “average person” is now more educated and well informed than at any point in history. The “average person” was once an illiterate person who, even if they could read, had access to almost zero books and little of the current goings-on of the day. Now the average person can read. They have access to news, to the views of their family and friends dispersed throughout the world, and - amazingly - to the views of some of the best thinkers on the planet, instantly. Some look only at the Instagram and Facebook feeds of young popstars or celebrities famous for being famous, sure. But even the most banal of those people comment on the day's news and inform their followers of trends. The “average person” is an amazingly knowledgeable, creative nexus of opinion and contradiction and fallibility and knowledge.
If you actually listened to them, you just may find they’ve thought things through. They’ve got reasons. Yes, they might have been mistaken. And the reasons they had were flawed. And they voted based on a mistake.
But when has this never been the case? And how could it possibly be otherwise?
(1) The idea that it is a bad thing when a bicameral legislature - such as exists in the United States (the House of Representatives and the Senate), in Australia (the House of Representatives and the Senate), or in the United Kingdom (the Commons and the Lords) - is at loggerheads and no legislation is being passed is, typically, false. Government is a powerful, dangerous and (at its most mundane) simply annoying institution that intrudes into lives and livelihoods. The less it does to interfere, the better. So it is *good* when government, in its best moods, reduces its own powers and lessens the intrusions it makes. But this is rarely the case. Mostly it is legislating to make regulations and ban this or that thing, or prevent this or that thing from occurring or being tried, and taking money from these people to give to those people, and so on and on. The best it can do is pass laws eliminating regulations and reducing taxes. But the second best thing it can do is, as a broad rule: nothing. So when there is a “deadlock”, don’t despair. Realise that is government *working* - the two houses working together to prevent the overall government from doing more to hurt people and intrude into their lives. That system is the one that has survived meta-government trials over millennia. It works better than alternatives that have been tried. And when it’s “not working”, it’s working.