All sorts of unconscious phenomena enter into our considerations, decisions and choices. If you are waiting for the 9:47am bus and it fails to arrive - this event enters your consciousness unbidden by you. You had no control over it. But now you are thinking “Oh no, I may be late.” At that moment a taxi approaches. Again: unbidden by you, more thoughts that you did not author enter your mind. You now consider “should I hail the taxi?”. You deliberate. You try to create a good explanation.
Was your meeting to be at 10:30am or 11:30am? Maybe you’ve time enough for the next bus. But maybe you shouldn’t risk it and take the taxi.
Parts of this process are unconscious. Much indeed. But parts are conscious as you think and reason to form (create!) a good explanation of what to do next. You have a choice before you. The world need not be one way or another. “Bus or taxi?” You must think quickly. You must choose. The meeting is at 11:30am, you recall in a few milliseconds. “I’ll just wait”. You’ve chosen by reason. Nothing has forced your hand. The decision was a free choice. An exercise of your free will.
Had a terrorist come up behind you, pressed to your side a gun that you could see, and said “Get that taxi”, then new information would have arrived. Now, I would say, when you obey this is different. Certainly you might object - but really you are doing OTHER THAN YOU WANT. Other than you desire. Other than you would have chosen. You are being COERCED. When there is coercion it is not the exercise of FREE WILL. It is something else. It is a decision under duress. Your creativity is being impeded. It is subservient to your survival and your emotions - fear especially. You aren’t thinking clearly.
Now, in the scenario of the late bus where you just wait peacefully for the next one, notice that this account has required: creativity, choice and free will. I don’t think we can easily remove any of those. Or if we can, they simply “pop up” as another mystery. You may deny free will or even choice. But surely creativity is something you cannot deny. But what are we creating? Explanations. Why one explanation rather than another? We desire - surely. But why? Why desire anything? Do we just slavishly obey impulses or is there deliberation? What is this deliberation? An illusion? So it doesn’t matter if we deliberate? Surely it does matter if we take time to reflect. Surely we create better things? Make better decisions? And isn’t that decision to take time itself something that can be learned? And doesn’t it become a choice? And isn’t choosing to do so a free choice? You aren't being coerced?
What makes people unique? What is this thing? Is it creativity alone? There is something there - something fundamentally different about humans compared to other animals. Whatever it is seems to allow us to break free of our genes and our instincts. Cities, computers, our languages - in short our explanatory knowledge is not encoded in our genes. So that stuff we accomplish that is not encoded in our genes is being generated by our minds by a process we barely understand. We call it "creativity". But it's a thing we direct. We choose to direct our attention, and thus our creativity to this or that thing. And that conscious act of direction is an exercise of free will. What we're often creating is knowledge about how to solve our problems. But what knowledge to create isn't something that is in our genes and it's not "in" the laws of physics. But somehow it nonetheless is "in" the universe - it's part of reality. So when we choose to use this creativity of ours it is a parsimonious technique to simply call this an exercise of our free will.
Exploration of what properly constrains the production of knowledge is a very interesting topic, and ethics forms but a part of our considerations of what limits the creation of knowledge. Those constraints are, however, far broader than what is dictated by parochial concerns about what *should* be done in terms of generating knowledge. Because the growth of knowledge is inherently unpredictable, an argument looms that perhaps the only ethical principle one requires here is: do not apply ethical prohibitions upon the creation of knowledge. Of course, practically speaking, we should not seek to discover the most hurtful thing we can do to make people suffer. That would be abhorrent. Or the most dangerous risk we can take. We can play games like this and suggest that therefore we need tight restrictions on what problems people should try to solve. Such concerns are not genuine limits upon the growth of knowledge but rather silly moral thought experiments about how values seem to conflict (on the one hand the value of knowledge production and on the other valuing personal autonomy, for example) and they are always resolvable with a little bit of critical enquiry.
So ethics, typically, is not - or should never be - the biggest constraint upon the growth of knowledge. The growth of knowledge is motivated by problems that arise. That is what the growth of knowledge is: the search for solutions to some problem situation we find ourselves in, personally or as a community or civilization.
But there are other constraints upon knowledge. From logic, for example: we cannot hope to discover that eggs are simultaneously good to eat and also deadly poison (modulo logic games like: some people are lethally allergic to eggs, or that eating 100 of them might kill a person).
Knowledge production is of course limited by physical law. There are limitations due to time, space and energy, and there are perhaps limits yet to be explored (like the so-called “no go” theorems found in pure mathematics and physics - but perhaps there are more we’ve yet no notion of). David Deutsch has explained the momentous dichotomy when it comes to the limits of knowledge: whatever is not prohibited by physical law is possible, given the right knowledge. So the only thing preventing us from accomplishing something we want to, and which we've decided is good to do, is *knowing* how. That's an amazing thing. Resources are almost always plentiful - the universe is vast. So taking a cosmic perspective on these things, it is not matter and energy and time that are scarce (the universe provides these in abundance, as it happens) but rather it is knowledge that is always scarce. (Of course, see his books for this - or his TED Talk).
But now, in the other direction: it is not only constraints upon knowledge that matter but also the availability of knowledge - which is the limiting reagent in both the universe and our lives. Lack of knowledge is itself the constraint that prevents us personally, as families, communities and whole civilisations from accomplishing what we want. When we lack *that* resource - knowledge - everything else (importantly progress) stagnates. Most especially, civilisations do, and so do our own personal lives.
This idea of "constraints" as a theme through which to view knowledge can be a useful one. Ethics, on this view, is but one example of the many ways the production of knowledge is constrained...and there are also many constraints resulting from our lack of knowledge and lack of progress in our creation of knowledge. “Constraints” might seem a gloomy lens through which to view a thing, but on analysis this is an uplifting lesson to learn. Creating knowledge - learning more - is typically, in our world as it now is, the only thing (or at worst the main thing) limiting each of us personally and as a civilisation from accomplishing our goals. Your choice to know more really is the way to move forward.
*Credit goes to Ric Sims (@sharpcomposer) for remarks inspiring parts of this piece.
The Search For Truth
The prevailing view of “knowledge” - handed down from Plato - is that knowledge is some kind of justified true belief. Modern incarnations, descended with mutations to fill the niche occupied by this desire for justified truth, include Bayesianism (a more mathematically inclined twin of inductivism) - where the idea is that knowledge is justified as close to true by repeated confirming instances. Whether Bayesian or Inductivist, these kinds of justificationism applied to science hold that the more frequently one observes an hypothesis to work, the more confident one can be in expecting it is actually true, more true, or probably true compared to its rivals.
But Bayesianism, in claiming that some theory has some quantifiable (indeed calculable) and precise amount of truth we can discover, cannot explain how, despite repeated “confirmations” increasing one’s confidence in the truth of a theory, it can nevertheless still be shown utterly false by an observation the theory cannot accommodate. Indeed it cannot explain how it is that when confidence in truth is at its highest, this is when theories are typically shown false. In other words, on Bayesianism, when we have every reason to expect the theory to be true, it is shown false. So, for example, every single observation that occurred prior to around 1919 was a “confirming instance” that would have granted “Bayesian credibility” to Newton’s theory of gravity. (If this date is in dispute, we need only move it back to around 1859, before which Newton’s theory had never been known to produce any anomalous predictions. It was in that year that Urbain Le Verrier published an analysis of observations dating from around 1697 to 1842 which, when investigated carefully, appeared to reveal some anomalies with Mercury’s orbit. In principle these could reasonably, at that time, have been interpreted as consistent with Newton’s theory on the assumption the orbit was being perturbed by some other massive body - not unprecedented, given the discovery of Neptune relied on something quite similar.) Whatever the case, absent any other theory, the Bayesian method of increasing confidence that a theory is true, given repeated instances consistent with the theory, meant that Newton’s theory of gravity was at its highest confidence right before it was shown false. At which point all of those observations that it was correct now “flowed” in some sense to its replacement: Einstein’s General Theory of Relativity. Or if they did not “flow” then the count started again and Einstein’s General Theory - being without rivals - just continues to grow and grow in truth to this day.
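To see the structure of this argument in miniature, consider a toy Bayesian calculation (my own illustration - the likelihood numbers are arbitrary and not drawn from this text): confirming observations push the posterior arbitrarily close to 1, yet a single observation the theory deems impossible collapses it to 0.

```python
# Toy illustration (not from the source): Bayesian updating between a
# theory T and a single rival R. Likelihood values are arbitrary.
def update(prior, p_obs_given_theory, p_obs_given_rival):
    # Bayes' rule with the rival holding the remaining prior probability.
    numerator = p_obs_given_theory * prior
    evidence = numerator + p_obs_given_rival * (1 - prior)
    return numerator / evidence

posterior = 0.5  # start undecided between T and R
for _ in range(20):  # twenty "confirming instances" for T
    posterior = update(posterior, 0.9, 0.5)
print(f"after 20 confirmations: {posterior:.6f}")  # ≈ 0.999992

# One observation T says is impossible (likelihood 0 under T):
posterior = update(posterior, 0.0, 0.5)
print(f"after one refutation:   {posterior}")  # 0.0
```

No number of further confirmations before the refutation would have changed that final value - which is exactly the asymmetry between confirmation and refutation described above.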
And with each passing day we should be more confident, not less, that it is true. But nothing in Bayesianism - no matter how many confirmations there are - can rule out the possibility that Einstein’s General Theory will be ruled out by a process similar to the one Newton’s went through: some observation inconsistent with Einstein’s General Theory but consistent with some other theory that does everything Einstein’s does while also accurately predicting where Einstein’s fails. Indeed we should expect it to be shown false because we should always expect some deeper theory to explain everything some currently accepted best theory does...and more. That is: we should admit theories are improvable and progress is always possible because knowledge continues to grow. In particular we should expect a theory in physics to be found that is deeper than both quantum theory and general relativity - one single theory that can explain why both work and which also does something new that neither is able to: perhaps explain dark matter and dark energy or something like that. Something at a deeper level. That is what we should expect. We should expect falsity to be shown and so we should expect that General Relativity is, now, strictly, false. We just don't know how it is, and cannot show it is, yet. One day we will, because we will have both a replacement for it and a test to distinguish the replacement from General Relativity by comparing it against reality in some way (we call such comparisons "crucial tests" or "crucial experiments").
To remain with Bayesianism for a moment, it is also important to note that Bayesianism alone cannot explain why an ad-hoc modification to a theory is not also “verified” to the same degree. As explained in “The Fabric of Reality” by David Deutsch, every observation ever made that is consistent with the currently accepted theory of gravity - and which supposedly justifies it as true or probably true - is equally consistent with a rival: the theory that the prevailing theory holds everywhere except on those occasions when objects levitate for reasons not accounted for by the theory of gravity. The theory “our best theory of gravity is true except when things levitate” is justified by precisely all of those observations that justify the currently accepted theory of gravity.
So it cannot be the case that theories are justified by repeated observations - no matter how many there are. If they were, the ad-hoc modification that “things sometimes also levitate” would also be justified - even if we have never (yet!) witnessed such levitation that would be inconsistent with the first theory (that the best theory of gravity always applies everywhere).
This is an argument against induction and against Bayesianism. Repeated observations are not needed. That is not how knowledge is produced. Instead theories are guessed (conjectured) and then attempts are made to refute these theories. This is the rare best case scenario: there are multiple competing theories. All these theories then get tested against reality by some means. The means - the methods of criticism - along with the subject matter itself - are what define a “discipline” or “subject area” or “domain of inquiry” or any other such synonym for fields like “Science” as compared to “Mathematics” and “Philosophy” and “History” and “Morality” and so on.
So let us recap all of this in light of the broad brush strokes held by the majority of people interested in this topic of epistemology - no matter where on the spectrum between Plato’s JTB and Bayesianism they sit.
Knowledge, they sometimes argue is some kind of belief (not all Bayesians do this: some believe in knowledge that need not be about personal thoughts). But belief cannot be a property needed for knowledge as Karl Popper observed and David Deutsch has clarified in many places. Knowledge is not only something that is in minds. It is also in objects. A telescope contains the knowledge of how to focus light. A jet engine contains the knowledge of how to convert chemical energy into heat and thrust and motion. The DNA molecule contains knowledge of how to construct an organism. A book contains knowledge, as does a computer. But none of these dumb, unthinking objects have beliefs.
So knowledge is not about belief. Must it nevertheless be justified true? Justified true means “shown to be true” - but we have just seen that there is no method whereby a theory can ever be shown to be finally, once and for all, true. There is always some way it might be shown false (and we cannot rule this out). This is true in science, but even in mathematics, and is basically the philosophy of "fallibilism" - the claim that error is impossible to avoid. Mathematicians make mistakes and (this is poorly understood but absolutely crucial to appreciate) proofs in mathematics are computations. Proofs are done by something. They are done by a mathematician (or a computer) using some physical object (their brain, or pen and paper, or a calculator) and physical objects obey the laws of physics. And if the laws of physics say that physical processes are necessarily error prone (they cannot be shown with 100% certainty to produce the same outcome every time - a consequence of the laws of quantum theory, our deepest physical theory) then methods of proof will likewise not be absolutely perfect in all cases. More than that - for reasons stated above about Bayesianism - we cannot even put a “close to 100%” number on it, or any probability at all. My favourite example here remains Euclid’s demonstration of the obvious - clear to everyone - fact that through any two points a unique straight line can be drawn. We know this now to be false because there exist such things as curved (“non-Euclidean”) geometries, and in these many straight lines can sometimes be drawn through two points (on a sphere, infinitely many “straight lines” - great circles - pass through any pair of antipodal points).
Knowledge is likewise never justified, because if it could be, the justifications would themselves have to be justified. And if they could not be, then our original claim would not be justified as true. But if the justifications were in turn justified, this would only be by further justifications, and so on, leading to an infinite regress. So “justification” cannot work as some kind of deep truth about how knowledge works: it rests either on an infinite regress of justifying justifications, or on stopping at some point with justifications that are themselves unjustified - meaning that “justificationism” is no kind of deep and universal truth about knowledge.
And finally “true”. When people here use “true” they seem to mean “certain”. And we cannot be certain because we can never be without doubt. And besides, certainty is just a feeling - one feels certain or not. And objective knowledge cannot be about one’s subjective feelings.
So there we have it for the moment: knowledge is not justified and it is not true and it is not about belief. Everything about Plato’s definition is wrong. Instead what is the case is that knowledge is about guessing theories (that solve some problem we have) and then criticising those theories. If we’re fortunate (because we’ve been sufficiently creative and critical and perhaps have cooperated with other similarly creatively critical people) - we manage to have many such theories. And then, through the critical process of experimenting (in science) or disproving (in mathematics) or simply arguing (in all areas) to reveal weaknesses and flaws and contradictions, we whittle away all the theories that fail to meet our criticisms and - again if we are fortunate - we’re left with just one theory standing. If we are not left with only one, this, in science, is where we can do a crucial experiment: the experiment where the outcome is predicted to be one way given one theory but another way given another, and that allows us to decide which is false. Whatever the case, in whatever the domain, usually we’re left with exactly one theory that does what we want it to: solve our problem. And we call that The Explanation.
So we have jettisoned “justified” and “belief” in their entirety from this conception of knowledge. But what about “truth”? Is knowledge nonetheless a quest for “truth” as Popper says? Above I seemed unable to avoid the word, or its negation, more than once. Of course we have seen the quest for knowledge cannot be a quest for certainty (100% infallible truth) but can it be a quest for something lesser? Well, for the same reason that it cannot be a quest for 100% certain truth, it cannot be a quest for 99.99% truth or 99% truth or 50% truth.
So is truth a chimera?
Let us return to mathematics briefly. Surely it is about proving things true? What things? Well, in mathematics what we assume we have are propositions (claims that are identically true, false or undecidable) and we use rules of inference to reach conclusions. But many pure mathematicians understand that because one needs to start somewhere - with axioms that must themselves remain unproven assertions - mathematics is actually not about proving things true. Rather it is just a domain of showing what necessarily follows from the axioms. Now if you assume the axioms are true then you can assume what is proved from them is true. But it is all just an assumption. If the axioms are false, well, so much for your conclusion. And because we have no method for showing that our axioms are actually true, when we move from one mathematical claim (like an assumption/premise) to the next by following some rule of inference, we are not moving from proposition to proposition (demonstrably true "meaningful sentences"); we may more accurately say we are moving from statement to statement (approximations to such propositions). So mathematics is about showing that claims - which we cannot know are true - do proceed logically (necessarily) one from another.
This works also for any domain of knowledge outside of mathematics and follows from what is called in the business “Tarski’s theory of truth” (named for Alfred Tarski). This is the person Popper refers to in “Objective Knowledge” (p 44 onwards) where he makes some “Remarks on Truth”. He makes the distinction there, following Tarski, that truth is “correspondence with the facts”, and so this is sometimes also called the “Correspondence” theory of truth (the commonsense view, Popper says). I would add that the label distinguishes it from competing claims like the “Consensus” theory - that a thing can be deemed true when some group of people agree that it is (a rather relativist notion if ever there was one: each group, by this measure, when they disagree, has merely agreed upon contradictory “truths”) - and the “Coherence” theory of truth, where a thing is true if it coheres (agrees) with some other known true propositions. Of course, how those propositions are known true is that they agree with each other and with some other “true” claims, and so on. But at no point need anything correspond with reality.
Popper begins this section on truth with the claim that “Our main concern in philosophy and science should be the search for truth…We should seek to see or discover the most urgent problems, and we should try to solve them by proposing true theories…or at any rate by proposing theories which come a little nearer to the truth than those of our predecessors.”
Is he wrong about some of that? Namely the first sentence? Should that - the search for truth - be our main concern? It would seem our main concern is solving problems. But does Popper suggest there that solving problems is to be identified with the search for truth? We cannot ask him, so I propose that this is indeed what we are doing in solving problems. We are searching for truth by eliminating error to bring us a little closer to truth. By uncovering tiny parts of it and eliminating falsehoods.
If we consider that statements are approximations to propositions (the latter being what we cannot utter, because those are actual truths or actual falsehoods) then the statement - being an approximation - is an approximation to truth or an approximation to falsehood. And in general terms, to correct errors is to make progress - to improve. But improvement or progress occurs in some direction. When we solve a problem, things get actually better. There is a direction. The direction is in bringing the approximation closer in line with reality. That is to say, the statement comes to reflect that reality with increased fidelity. But this increased fidelity - this better way of capturing reality with the statement or the theory - is an objective improvement. How is it an objective improvement? Well, it solves the problem that a previous theory could not. That previous statements were unable to explain. The previous theory is shown wanting. In what way? Well, the successfully criticised theory, the one refuted, cannot be the truth because it has been refuted - shown false by observation (or other criticism). Can we be certain it is not the truth? No. Of course, as always, we may be mistaken. But having to make this caveat each time one uses the word "true" or "truth" is cumbersome and violates Popper's injunction to "speak clearly...and avoid...complications" and to regard brevity as important (p 44, "Objective Knowledge").
Theories solve problems. That is their purpose. But how can you know your problem is solved? Well - the solution has worked. That is to say: what was a problem (the planet was observed at point Y but you predicted point X because of theory “A”) has been solved by your replacement theory “B”, because when you do the calculation, "B" gives you the answer Y, while the old theory gave you a calculation leading to X. So the solution worked. The new theory worked. This is what “worked” means. It means it corresponded to something in the world. You compared it to something in reality. Reality matters: it is the adjudicator between your theories. Now of course you might have made a mistake. But modulo that, what do we say about theory A? It has been refuted. What does that mean? It means it cannot account for the observation that your planet was predicted to be at X but was not.
We cannot jettison truth. Knowledge has something to do with truth. But what? Well, knowledge creation is about solving problems and that involves correcting errors. And correction of errors brings us closer to reality such that our statements about it are approximations to the actual truth. Now what it could mean to “hit on” the actual truth (some call this the “ontological truth”) is difficult to say. Could it be possible that “triangles have 3 sides” is in some sense the actual ontological truth? No. It can always be the case that this could be improved in some way. Being unable to imagine a way is no refutation of the idea that people improve their ideas. We cannot rule out the possibility that some future civilisation will agree (because, I don’t know - let's be fantastical for a moment - they have uploaded themselves into some holographic higher dimensional space) that triangles, it turns out, are rough approximations to figures that, when viewed from our meagre 4-dimensional spacetime, only appear to have 3 sides and in fact, viewed from a broader and deeper perspective available only to more enlightened higher-dimensional beings, actually have more sides. This might seem bizarre but I’d say it’s no more bizarre than, having mathematically proved from the “self evidently true axioms” that triangles have an internal angle sum of 180º, then learning about geometries where this “self evidently true” claim turns out NOT to hold. So claims in mathematics - shown true - are sometimes overturned. We cannot know, when we think we’ve got it correct, that we’re not going to be shown moments later how we’ve been in error. That there’s a problem.
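That overturned angle-sum claim can even be checked by direct computation. Here is a small illustrative sketch (my own example, not from the text): the spherical triangle whose vertices are the three mutually perpendicular points (1,0,0), (0,1,0), (0,0,1) on the unit sphere has three right angles, so its internal angles sum to 270°, not the Euclidean 180°.

```python
import math

def vertex_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors)."""
    def tangent(p, q):
        # Direction of the geodesic from p toward q: the component of q
        # orthogonal to p, normalised.
        d = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - d * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(ti * ti for ti in t))
        return [ti / n for ti in t]
    u, v = tangent(a, b), tangent(a, c)
    cos_angle = sum(ui * vi for ui, vi in zip(u, v))
    return math.acos(max(-1.0, min(1.0, cos_angle)))

A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = vertex_angle(A, B, C) + vertex_angle(B, A, C) + vertex_angle(C, A, B)
print(math.degrees(total))  # ≈ 270 — the "self-evident" 180° overturned
```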
So is knowledge a search for truth at all? So long as we solve problems and correct our errors such that the new theory - the one that solves the problem by correcting the errors - corresponds better to reality than all its rivals, isn’t this enough? Yes - but there is a succinct way to put this.
The new theory contains more truth. Or: the new theory is more true. The old theory is demonstrably false and we know it’s false. Do we know it is once-and-for-all certainly false? No. Do we know the new one is once and for all true? No.
Is it true at all? Yes.
Can we say one is more true than the other? Yes!
Can we say by how much more? No. It’s merely a binary distinction. But it’s convenient. One theory has more truth to it than the other.
Are we sure?
No. We never need to be.
Can we say a theory is “true”? Yes - so long as we understand that “true” there is shorthand for “fallibly, provisionally true” or “pragmatically true”, which we can take to mean: we act as if it is true. And why not? If the proverbial life-and-death situation is before us, we should not act any other way. The patient’s heart has stopped and the epistemologically savvy emergency doctor calls for the (external defibrillator) paddles STAT(!). Those assisting need not debate whether it’s true that the paddles will work. They act as if “they work” is a true claim.
“Is it true those paddles work?” someone asks our critical rationalist doctor later. “Yes” he says - and quite right too. To say “Well, I don’t know if it’s true they do. But I do know they work” is not only cumbersome, but it misses something important in fallibilist critical rationalism. And that is that the word “true” should come to be known to mean “provisionally true” - this is the default position. Someone who thinks “true” means “certainly true” is making the mistake. That’s the error. And it doesn't matter if the majority are making the error and only a minority understand how epistemology actually works. After all, most people think "knowledge" means "justified true belief", but we can still use the words "know" and "knowledge" without being overly concerned to provide the caveats each and every time. When we spot the errors, we point them out, and in the case of "truth", if we want to highlight or criticise that error, then affix the “certainly” qualifier yourself to remind people that is not what the word "true" should be thought to mean in common day-to-day usage. Why should dogmatists be able to claim the word? Let's not cede that territory.
It is quite right to say that General Relativity has more truth than (corresponds closer to reality than, solves more problems than, and corrects the errors of) Newton’s theory of gravity, which itself contains more truth than some “law” of gravity like F = 2GMm/r^4 - but we cannot measure the quantity of truth. Truth is not a quantity that can be measured; it is a quality that a theory possesses compared to some other. There are many things we cannot measure and yet we can make reasonable and sensible claims as to difference in kind. For example, in biology it is a routine matter to distinguish one species from another, or even one breed from another. There may be edge cases, but in general the identification that a particular organism belongs to this species and not that one is done largely on the basis of appearance of kind or type. These days we can do this with greater precision using genetic analysis. In terms of epistemology we are not there yet, but there is some symmetry here (and that is no coincidence).
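Though we cannot measure truth, we can still see why Newton's law has more of it than the made-up rival: compare their predictions against reality. A toy sketch (my own illustration, with rounded physical constants) pitting F = GMm/r^2 against F = 2GMm/r^4 on the measured free-fall acceleration at Earth's surface:

```python
# Toy "crucial test" (illustrative only): which law of gravity accounts
# for the measured free-fall acceleration (~9.81 m/s^2) at Earth's surface?
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
r = 6.371e6     # radius of Earth, m

g_newton = G * M / r**2      # Newton: F = GMm/r^2, so a = GM/r^2
g_rival = 2 * G * M / r**4   # rival "law": F = 2GMm/r^4, so a = 2GM/r^4

print(f"Newton predicts {g_newton:.2f} m/s^2")  # ≈ 9.82 — survives the test
print(f"Rival predicts  {g_rival:.2e} m/s^2")   # ≈ 4.8e-13 — refuted
```

The rival is not "less probable"; it is refuted outright, while Newton's law survives this particular comparison with reality.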
As in any domain, in epistemology we want to solve a problem. The problem before us here and now is: how can we most effectively - that is to say clearly and efficiently and accurately - convey the epistemology that is critical rationalism? Should we jettison the idea that we are seeking truth? Or should we look at ways of preserving what is useful in that word and modifying what most understand the term to mean? In part this is what I have attempted above. We must be cautious that we are not misunderstood as denying the possibility of truth - that may be viewed as a kind of relativism. Of course we can always be misunderstood. I return once more to Popper in “Remarks on Truth” - he says in many other places words to the same effect - and to the passage I quoted only partially above: “…aiming at simplicity and lucidity is a moral duty of all intellectuals: lack of clarity is a sin and pretentiousness is a crime. (Brevity is also important…but it is of lesser urgency, and it sometimes is incompatible with clarity).” Preserving not only the word truth, but also the idea that we are engaged in a search for it, helps with brevity and with clarity. Rather than avoiding the claim that science and reason broadly is a search for truth, we can merely correct people when they think it is about the search for certain truth, or final truth or “complete” truth (or a “complete science” as Sam Harris is fond of saying). Rather, we just correct them to “provisional truth”. Provisional truth that solves our problems.
So is it true "We aren't seeking truth"? Well is "seeking truth" synonymous with "solving problems"? Might it not be parsimonious to use these interchangeably given the facility of both terms? "I'm looking for the truth!" exclaims the exasperated scientist trying to uncover if the wobbly motion of their planet is a sign of yet another, as yet, unobserved planet. Are they wrong to do so? Should it be "I'm trying to solve this problem!".
I don't think it matters.
Do theories need to be falsifiable to be science?
That theories need to be falsifiable is a necessary but not sufficient condition for science. For example, the claims "Eating 1.00000 kg of grass cures the common cold" or "The world will end at 2am UTC on 2/2/22" are falsifiable theories. But they are not scientific. Without a good explanation to accompany them, they are not science. They are just “falsifiable claims”. A scientific theory should be a good explanation that also happens to be testable/falsifiable. Popper figured out that falsifiability is an improvement on the verifiability criterion of the logical positivists. It is falsifiability that better separates science from non-science. This includes separating science from pseudo-science like astrology and homeopathy, as well as from things like morality and philosophy broadly. But it has never been the case that all falsifiable theories are scientific theories. For example: the two claims I started this paragraph with.
But is it nevertheless necessary that scientific theories be falsifiable? Well, the scientific theory for some phenomenon - or any theory that purports to be the scientific theory for some phenomenon - must be a good (hard to vary) explanation of that phenomenon. Part of this “hard to vary” quality is that the theory is falsifiable - testable by experiment. In principle. Now it need not be in practice. But that doesn’t change its testability in principle. So, for example: many people have observed that string theory is very, very difficult to test. Some have asserted that to observe the predictions of string theory would take a particle accelerator half the size of the galaxy. Now this is impractical. So does this mean the theory is unfalsifiable? No! In practice we cannot build such a particle accelerator. But in principle it could be done. So it’s still falsifiable in principle. And perhaps there exist "natural" particle accelerators such as quasars - observations of which might rule out string theory? We do not know.
So, it’s science. It makes predictions. We need not jettison falsifiability on that basis. What we might do is search for better ways to test it. If it’s a claim about the physical world, then the physical world must be the adjudicator of the truth about string theory. Can we rule it out? Can we refute it? Then it’s falsifiable. But notice there are two kinds of falsifiability: in principle and in practice. In-principle falsifiability is a black-and-white quality of a theory that is required for science. It is just the claim that some observation of physical reality could in principle rule out the theory. But if no such observation can - that is to say, no such observation exists in any possible world - then the theory is not about the physical world. There is no comparison to be made between the actual physical world where the theory holds and an imaginary, fictitious physical reality where the theory does not. Or vice versa.
Let us take an even more extreme case than string theory (which I argue is science - but, for reasons I will come to, not necessarily “good” or “optimal” science) - and that is the theory that there exist other universes, outside our own, where the very laws of physics are themselves different. Now it was once thought that such universes are in principle unobservable, therefore not testable, and that this makes them unfalsifiable and not science. After all: another universe? Outside our own? How can we access that? Well, as it turns out - in principle - we could see such a universe. A universe where the laws are different will have different physical constants, and as far back as 1999 physicists claimed to have observed a changing fine structure constant. This would be evidence of a region of space where the laws were different. Another universe (by some definitions). It turned out they were wrong (see that very same link above) - but it is this kind of observation that, in principle, could allow us to observe other universes beyond our own. (Or force us to change what we mean by “universe”).
But this “falsifiable in principle” criterion (necessary as it is) to demarcate science from metaphysics (for example) is also not sufficient to make something a good, hard to vary, explanation. Let us return to string theory. What we’re interested in is solving problems in physics, and string theory is an attempt to unify quantum mechanics (a physics of discrete entities like particles and energy) with general relativity (a physics of continuous entities like space and time). As we have already seen, string theory could in principle be tested with a particle accelerator half the size of the galaxy. That's the worst case scenario - likely things are not that grim. But say they were. There is probably not enough matter for several lightyears around to construct such an experiment. It’s impractical. So “in practice” we would have to say it’s not falsifiable. It’s “not falsifiable in practice”. But this is not a black-and-white, all-or-nothing thing. “In practice” means something like “we lack the wealth to do so” - we cannot actually perform the physical transformation of the matter to do this. Actually we do not even know how to gather enough matter - using the technology we have presently - to build one. So knowledge is also a problem here.
The fact that string theory makes this assertion of itself (as being practically untestable right now), as things currently stand, makes it an “easy to vary” theory even though it’s testable in principle. This is because minor modifications of the theory making similar predictions cannot be distinguished by experiment. And many such varieties of string theory exist. So being “untestable in practice” is a weakness. This does not make string theory unscientific - it just makes it a poor explanation. For now. Maybe someone will think up a better test. Maybe someone will make a prediction that operates at lower energies, requiring a smaller particle accelerator. Or - and this is key - maybe someone will come up with a theory that makes all the predictions string theory can but which itself can be tested by some routine means available here on Earth - making that new theory a good explanation and a worthy replacement for both quantum theory and the general theory of relativity. Such a theory - testable in practice as well as in principle - would be a very good explanation. And string theory would then not be.
What is an example of a good theory within science that is unfalsifiable in principle? I do not know of one. Why is it important to distinguish between science and other subjects or disciplines anyway? It is largely a matter of convenience, but it is also important to distinguish efficiently and effectively between pseudoscience and scientistic arguments (arguments that claim something like: science can tell us what we ought to do - that there can be a science of morality or a science of economics or politics). Knowledge is some kind of unified whole, it is true. But "falsifiability" is a useful necessary criterion for science. And it is useful to know that demanding, say, that moral theories be testable would be a terrible error. This would mean requiring the conducting of experiments on people (say) in order to determine whether an even more pure version of Communism than anything China or North Korea has ever tried would be a good idea because - science! No, we do not need to experiment. We begin instead with moral explanations about people, and we rule out the demand that "a falsification is required here before we can properly reject this theory". Morality is not science and we should not require it to be. But science is a place where experiments - conducted on the physical world - are necessary. I think it's necessary we preserve this distinction.
A note on evolution
Quite rightly I was alerted to, and corrected upon, a misconception I had about a particular kind of exception to this strict requirement for falsification in science. For reasons we shall see, this does not undermine the central idea that scientific theories must be falsifiable. Now in the case of some (large number of!) theories, they are not practically testable because there are no viable alternatives. This means we need to split the meanings of "falsifiable" and "testable in practice". Because there are no viable alternatives to neo-Darwinian "evolution by natural selection" it cannot be "tested" - because to be testable it needs to be tested against something. And there is nothing. As David Deutsch observes in The Beginning of Infinity: if we observed something inconsistent with the prevailing theory of evolution by natural selection, nothing could be said except that the test we used to find the inconsistency was faulty. It is often said, following Haldane, that "rabbits in the pre-Cambrian" would refute evolution by natural selection. But they would not. They would be a problem - but they could be explained as a rare complex organism that somehow got there earlier than anything else (unlikely), or as a mistake made by our geologist or paleontologist, or as evidence of a prankster. Many things would need to be ruled out (and how?) before we ruled out evolution on the basis of rabbits in the pre-Cambrian. But this untestability does not mean unfalsifiability in principle. These are different things.
If an organism (or many organisms, many different species) were found to undergo only or mainly favorable mutations then this would be better explained by Lamarckism and would rule out Darwinism. But then there are all those organisms we already know of that would refute Lamarckism. The point is that Darwinism would be refuted as a universal (applies to all cases, everywhere) explanation for the evolution of life. It would just be a special case - presumably of some deeper explanation that accounted for why both Lamarckism and Darwinism worked within their less-than-universal domains. So testability and falsifiability are not synonyms. While the latter is needed (and Darwinism has it), the former is about the practical ability to perform some test (experiment) and have somewhere else to "jump to". Some viable alternative theory to test our theory against.
Not everything in science, it should also be noted, is falsifiable. Some eminently scientific claims are unfalsifiable. In physics we say "Work is a form of energy". That's a scientific claim. It's also unfalsifiable, because it's essentially a definition. One will never calculate the physical work done (by using a classical formula for work like Work = Force x Distance) and discover that it is not a form of energy. These are just words and terms and, though scientific, untestable and unfalsifiable. So some things in science are unfalsifiable. But they are not explanatory theories as such - they are more like frameworks within which we do science. In chemistry, the claim that "The 6th element on the periodic table is carbon" - or "The element with 6 protons is carbon" - is a scientific claim. But it too is unfalsifiable. No one can possibly ever, in any world, discover an atom containing only 6 protons and conclude it is not an atom of carbon. No one will find some element which, upon analysis, is carbon but contains 7 protons in every nucleus (because that would be nitrogen). And no one will find an element that contains only 5.5 protons in the nucleus, bumping carbon up one position on the periodic table. These things are ruled out by the definitions of words like "element" and "atom" and "proton" and "carbon". So unfalsifiable claims in science are common. But the explanatory theories in which these definitions are used and themselves explained make predictions that can turn out to be false. It would not falsify the definitions - but the theories. In particular, all existential claims of the form "X exists" are unfalsifiable. So the claim "gravity exists" is not falsifiable. But the claim "Gravity is a force" is, and was falsified. Gravity still existed - it just turned out not to be a force but rather, as Einstein showed, the manifestation of space being warped by energy and matter. "Gravity" is a word used to describe some phenomenon that exists.
The concept of gravity cannot be "falsified" - only what it appears to be, or what it is claimed to be, can be. In an extreme case, the idea "Matter exists" cannot be falsified, though matter may not be the most fundamental thing, in the final analysis. Maybe it is true that there is something deeper - a Platonic realm of sorts from which the appearance of matter arises. But that would just be to explain that matter is an emergent feature. The appearance of it - which is to say its measurable qualities - would still be real emergent things.
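Returning to the earlier physics example: the definitional character of "Work is a form of energy" can be made vivid with a little dimensional analysis (a sketch, using standard SI units):

```latex
W = Fd \;\Rightarrow\; [W] = \mathrm{N}\cdot\mathrm{m}
      = \mathrm{kg}\,\mathrm{m}^{2}\,\mathrm{s}^{-2} = \mathrm{J},
\qquad
E_{k} = \tfrac{1}{2}mv^{2} \;\Rightarrow\; [E_{k}]
      = \mathrm{kg}\,\bigl(\mathrm{m}\,\mathrm{s}^{-1}\bigr)^{2} = \mathrm{J}.
```

No possible measurement could come out otherwise: that work and kinetic energy share the unit of joules is fixed by how "work", "force" and "energy" are defined, not by any fact we might discover about the world.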
So falsifiability is a necessary quality for scientific theories to possess. But not all claims in science are falsifiable. And falsifiability is not the same as testability. In particular, the theory of evolution is not obviously testable in practice, though it is in principle falsifiable. What we call science is, in the final analysis, an open question. It is a domain of study focussed on discovering how the physical world works - the patterns in nature, their beauty and their dangers. In part so we can control our environment and use it to our advantage. We guess what's true and compare our guess against that physical reality in some way. So long as we are making progress and solving our problems, that is what matters. But when progress is slow, that can be when these debates become extra useful to understand.
This post has been motivated by some inspiring Tweets by Lulie Tannet (@reasonisfun) which then resulted in a subsequent exchange of ideas with David Deutsch (@daviddeutschoxf) and others. As always, "The Beginning of Infinity" and "The Fabric of Reality" underpin much of what I say - but errors are my own and nothing I say should be seen as an endorsement by David Deutsch. You can (and should!) buy both books here: https://www.daviddeutsch.org.uk/books/the-beginning-of-infinity/
My full view is expressed here, but this post is just a repeat of some specific remarks about Singer, as I do not engage with his position in my piece because I was so disappointed to read his work. An example can be found here: http://www.animal-rights-library.com/texts-m/singer03.htm
It is titled “Do animals feel pain?”. I do not want to engage much with his conclusions. Let us concentrate primarily on his methods - that is to say, the philosophical techniques he uses to establish his position. They need to be valid arguments, or we can ignore his conclusions (which will be, at worst, simply false, or, at best, mere assertions). He does write “We also know that the nervous systems of other animals were not artificially constructed--as a robot might be artificially constructed--to mimic the pain behavior of humans.”, which I agree with, as I stated. But when he asks the question “If it is justifiable to assume that other human beings feel pain as we do, is there any reason why a similar inference should not be justifiable in the case of other animals?” he answers “no”. He argues, “It is surely unreasonable to suppose that nervous systems that are virtually identical physiologically, have a common origin and a common evolutionary function, and result in similar forms of behavior in similar circumstances should actually operate in an entirely different manner on the level of subjective feelings.” But as I have argued, this is completely false. You can indeed share almost identical hardware architecture (as, say, chimps and humans do with respect to their brains) but the software (the mind!) can be altogether different. And yes, there are hardware differences, of course - and perhaps those hardware differences contain the specialised processing and memory capacity required to run the special “universal knowledge creation” software of a person - but the point is: similar hardware says nothing about software. Two identical Apple Mac computers can run totally different software. One might be running a computer game; another, a spreadsheet. They look nothing alike. The brain of a chimp might superficially look kind of like the brain of a human: but the mind? Totally different. And so the experiences might be totally different.
Indeed I argue they are totally different. But Singer, like most people concerned about this topic, is completely confused about (because he is ignorant of) the relationship between the physical and the abstract; between hardware and software. The brain-mind connection. The mind really is a causal agent - like software controls the hardware. He does not know about universal knowledge creators and the morally central role this concept plays in our understanding of the potential for a creature to suffer. Of course, this is no fault of his at the time of writing (that article predates “The Beginning of Infinity” by over 20 years), but I think most people agree with the “animals can feel pain and all pain is bad so that’s that” kind of thing. More worrying to me is the following, where Singer writes: “The overwhelming majority of scientists who have addressed themselves to this question agree. Lord Brain, one of the most eminent neurologists of our time, has said: “I personally can see no reason for conceding mind to my fellow men and denying it to animals…”
So Singer resorts to *appeal to authority*, and the authority he appeals to resorts to *argument from ignorance*. Singer says “Look, other scientists agree with me” (the inference being: scientists are clever people who get things right. Always, though?) And the authority “Lord Brain” says “I don’t see any reason to suggest animals don’t have minds like people do”, which means “I don’t understand the differences”. Now if I read this from a journalist, or even a scientist, I could perhaps forgive these sorts of mistakes. But Singer purports to be a professional *philosopher*. One who constructs arguments and explanations in order to establish conclusions. One who knows the logical fallacies - and how to avoid them. But he has not avoided them here. He has deployed them!
“…there are no good reasons, scientific or philosophical, for denying that animals feel pain. If we do not doubt that other humans feel pain we should not doubt that other animals do so too. Animals can feel pain.”
As I have argued: animals may well feel pain. But so does a person exercising: and it feels good, even if painful. That an animal which feels pain does not thereby suffer is a philosophical position that no science experiment can undermine (yet). These are critical distinctions that, if you are engaged in arguing for so-called "animal rights" and talking about something as ethically important as the morality of pain, you need to take seriously. But given the terrible philosophical arguments made by Singer we must, unfortunately, conclude he is not actually philosophically serious about one of his most cherished areas of expertise. He resorts to arguments from authority, arguments from ignorance and a good measure of the emotive thrown in. Philosophers should be far more cautious, because even if they have important points to make, people might just stop listening once they demonstrate they cannot "ply their own trade" with competence.
Science and democracy share the feature that they are error correction systems. The former is about correcting errors in our knowledge of the physical world; the latter, errors in our choice of rulers and their policies. In science, on the rare occasion when we have two theories competing to explain the same phenomena, we can rule one out through a "crucial experiment" (for more on crucial experiments, see here). In democracy, candidates compete to win elections by putting forward policies, and if the one who wins - and so has the power to actually enact their policies - fails to meet our expectations, an election is an opportunity to correct our mistake and try out another candidate.
But in neither case - science or democracy - can we ensure that the theory we have, or the candidate we vote for, cannot possibly fail. And we must expect them to fail in ways we could not have foreseen.
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.” - Google Programmer François Chollet (@fchollet), Twitter, 4 Feb 2018
If the average person was educated and informed to a standard that François Chollet approved of, that would not guarantee that, by his lights, the government was not “terribly dysfunctional” (that it was made up of terrible people, or that it never got anything done (see note 1 below)), or even that the system itself was not “dysfunctional”, if by that we meant something like: incapable in principle of enabling the worst people - by our personal standards - to be elected. Or perhaps it means something deeper: that there is corruption that makes the democracy rotten to the core. But well educated, well informed people are still liable to fall into error and nothing can guarantee they cannot be deceived. Indeed here lurks an irony, but it's true: the more well educated you are about a thing, the more blind you can be to the most common errors. You might simply be "used to" making the same mistake over and again. Expertise can sometimes be a liability - even, and perhaps especially, in your domain of expertise. The reason is that you often cannot think as creatively, because you think of all the criticisms. That's what makes you an expert, after all! So you think of all the criticisms against the idea that you are wrong - because you know them. Isn't that strange? It's like an expert Korean linguist who is teaching someone the Korean word for (say) computer (which, as it turns out, is "computer" with a Korean accent: "keompyuteo"). And say the (ignorant!) learner comes to them one day and insists "I heard from a Korean and they said that's not the only word. There is another word and it's 'gaesangi'". But the expert knows they're correct - there's one word only - and they consult with some native Korean speakers who agree, and besides, they're the expert after all. So they return to the learner and insist "You're mistaken - there is one word. I've researched this. You can trust me. And I've checked - with other native speakers."
But experts can be mistaken, and in this case the learner just happened to overhear some older North Koreans speaking and using that word...which is indeed an older North Korean word for "computer" and not well known by South Koreans. So as it turns out the ignorant, less educated person knew the truth. There was more than one word in Korean for computer in existence, and no amount of checking with the typical South Korean expert would have fixed that. More education doesn't mean you won't make mistakes that those with less education will not make. We are all equally fallible. There is always an infinite amount we do not know, and we must expect others to know things we do not. Even (perhaps especially) the "less well" educated and "less well" informed. No system of education can ensure errors of this kind become less frequent. No democratic system can ensure that, for example, terrible rulers will not get elected. So even if President Trump really is/was a terrible mistake, no democratic system - which is to say no democratic institution - could have prevented his election in principle if he was a legally qualified candidate.
Of course at the extremes exactly that criticism is made: he is not legally qualified. But those accusations seem to be just par for the Presidential election course in the United States now: that Obama was not born in the United States, or that Hillary Clinton was actually a criminal who should have been in gaol, and so on. If the institutions investigate, and you regard them as having worked in those cases, then it is a poor, ad-hoc explanation that says they only ever fail, are corrupt and are evidence of a broken or "dysfunctional" system when applied to the candidates and parties you do not support.
Now this may seem a bizarre diversion, but bear with me. The average person probably doesn’t think much about the intricacies of how science generates the knowledge that it does. That’s a rarefied kind of interest, of concern only to philosophers of science and some scientists. Then again, so far as “interests” go, there is no "average person" - there are few academic interests all average people share. Does the average person enjoy learning maths, or engaging in deeply refined literary criticism, or history lessons, or do they want a deep understanding of civics and constitutional law? Hint: ask a school-aged student to find out. But the average person is indeed interested in knowledge of all sorts - it may be academic knowledge of a subject of interest to them, or some project they are working on (both of these often wrongly and dismissively referred to as "hobbies"), or it can be knowledge of their own lives, those of their friends and family, and how to do their job well and better, and other day to day things. The average person has concerns and interests - perhaps not shared by philosophers of science in Sydney, or Google programmers in Silicon Valley, say.
It’s not really of great importance, though it may be of some use, for the average person to learn that the process that is science is in large part defined by the creation of hard to vary explanations of the physical world that can be tested against physical reality. These “tests” are known as experiments - but they are not the only way we have of criticising scientific explanations. It is just that explanations of the physical world that can be tested against physical reality - by experiments - are precisely the scientific ones. The experiment should be able to be performed in practice, which is to say we should possess an explanation of how the experiment can be conducted by us.
Some versions of string theory that postulate entities that can only be resolved with the energy of a particle accelerator the diameter of the galaxy would be an example of a possible explanation of the physical world that is, in my view, not scientific. Although there is some kind of test possible “in principle”, the lack of an “in practice” explanation of how to build such a device - given the possible transformations people can actually make in order to test the theory - should remove it from serious contention as a way forward in making progress in physical science (as useful as the mathematical techniques discovered through explorations of string theory have been in mathematics).
Sometimes this process of science generates theories that are false. Indeed this is rather the rule and not the exception. We should expect that the vast majority of scientific theories will turn out to be false. This is simply a claim that the scientific enterprise is unbounded: we will always be able to improve upon any explanation we do discover. And any improvement will show how flawed the unimproved version was and why.
The “average person” might think that science is an engine for generating truths about the world that, once the authority of science in the form of some professorial scientist has deigned to profess that truth that we can trust such claims to stand as “scientific truth”. But science is very much a catalogue of errors. As David Deutsch has said - it would have been much preferred if scientific theories were called scientific misconceptions from the start.
Science, for example, at various times has produced theories such as “spontaneous generation” as an attempt to explain how non-living matter can become living. Some of the earliest theories in chemistry included the “phlogiston” idea, where this substance inhabited all matter and it was this that was combustible. Earthquakes, volcanoes, moving continents and other eruptions of the Earth were explained as evidence the planet was expanding. And for centuries it was believed that an instantaneously-acting gravitational force existed between all masses in the universe and that this explained the motion of objects from orbiting planets to falling apples. And these are just some of the more prominent examples from just biology, chemistry, geology and physics. Astronomy is a catalogue of bold conjectures about the nature of the cosmos being utterly decimated by the light of observation. Literally. And we are all familiar with supposedly rock solid medical and nutritional advice seemingly turning on a dime to advise the precise opposite of what we were once taught (cf: "eat more carbohydrates and less protein" becomes "eat more protein and fewer carbohydrates").
So is this system of producing explanations in science flawed? Why should it consistently throw up utter falsehoods? Why won’t it simply provide us with the final correct answer? Of course there is no such answer. Only better and better answers. Approximations of increasing fidelity, reach and depth. So although any given explanation must be expected to be flawed, the system itself cannot be blamed for those flaws. This process where a creative scientist tries to solve a problem with what is known by producing a new theory is roughly the way knowledge generation in all domains works. An idea is guessed and then anyone interested attempts to refute that guess by careful criticism. The criticism might be how the idea is false, or ugly or not so useful compared to some other. But if the criticisms all fail, and the new idea accomplishes everything any competing idea does - and perhaps more (and more elegantly) - the idea survives to earn the moniker “The explanation of…”.
The system must be expected to produce utter falsehood. Indeed it is required to. If science is about generating beautiful explanations, then for each beautiful explanation that becomes “The Scientific explanation of…”, defeated rivals will lie in its wake. The decimation of opponents - typically through experiment - is a constant in science. It reveals how what we once thought was correct actually always was utterly false and flawed. And how blind we were not to see it. But we are fallible and it is no sin to keep on making these mistakes. That is our nature. We are fallible. Our fallibility is tied intimately to our creativity - that feature of us that strives to make bold conjectures - majestic guesses - in an attempt to improve our lot and what we know. But that process is an undirected one, for we cannot know in which direction the ultimate ontological truth about reality lies. We set out from our island of what is known and sail into the unknown, hoping to find a better place. If we fail, we can always find our way back, but there is no guarantee we will land somewhere better. That is our nature. Science cannot provide sure answers - it can only provide the conditions under which those answers can possibly arise.
Now all of that, if absorbed, might make someone somewhat better informed about the process of science and some of its history. And they might learn a little about epistemology besides. But would that do anything to sway them in an election? Precisely what kind of information could make the average person “well informed” enough such that the system was not broken? Should it be about who should be elected?
The process that is democracy is in large part defined by the conditions under which the successes and failures of the rulers of a society can be tested against the expectations of an electorate such that, should those expectations not be met, the rulers can be removed without violence. The ultimate expression of such “tests” are elections - but of course they are not the only way of criticising elected rulers. Rulers are criticised every single day - the media and much of the electorate are obsessed by it. It is just that elections are the means by which rulers who fail to meet the expectations of the electorate - measured, say, by comparing politicians' stated policies with what they actually achieved - can be removed. Democracy is, or should be seen as, a system whereby we trial some leader (on the basis of their stated policy) and, should this leader fail to meet our expectations, we can remove that leader through a process that allows us to install some other leader with different policies should we so choose.
Now people are all very different. We are fallible and have different values, different knowledge and different circumstances. This kaleidoscope of differences ensures that we cannot possibly agree all the time on every topic. Some people are more or less knowledgeable about this or that thing, and that different knowledge will come to bear when it comes to making decisions about whether this course or that might best suit their own interests or interests they care about. And this, it must be said, is a wonderful thing. It means that there will always be wildly divergent ideas about how to proceed in life. Each of us, as rulers of our own lives, guesses, trials and corrects the courses we take, amending our paths and trying to plot out a better course. Often, many of us fail terribly. We are fallible. We lack the knowledge to know what to do next.
Sam Harris and Russell Brand had a conversation recently on Russell’s podcast radio show called “Under the Skin”. That two hour conversation was an impressive display of just how far apart two people can be and what entirely different “language games” they can play while somehow keeping the conversational ball in the air. At times they really weren’t even playing the same game, the disagreement was so great. So while there seemed to be little common ground at any point on almost any issue of substance (except that there exist mysteries in the world and human beings are important), both nevertheless found an opportunity at the 1h 50 min mark to find a point of enthusiastic agreement:
Harris: “Democracy seems impressively broken to me and capitalism seems impressively broken to me…except the alternatives seem worse…this is Churchill, right?”
But why? Why does Sam think this? One need only listen to the Waking Up podcast to get a taste. Donald Trump’s election is a clear sign of a broken system, in Sam’s eyes. Though Sam would have been no fan of Hillary Clinton either, so perhaps the “broken system” is evidenced by the dearth of choice on offer - as though the choices on offer were particularly abhorrent. What is remarkable about this is how Harris notices - mere minutes after making the claim that capitalism is broken - that today we live in a wonderful age that seems to keep getting better, where only 10% of people are in extreme poverty, while a mere 150 years ago those numbers were flipped. Now why is this? Is it the spread of socialism or is it free trade (capitalism)? What makes the difference?
But Sam is very worried. He agrees, he says, at the end of that podcast, with some experts that we are basically in a new "Cuban Missile Crisis" but no one has noticed. That now is particularly dangerous. America is at a particularly unstable epoch - irrationality rules, fake news has proliferated, the experts have been shown to be wrong time and again and there is mistrust all around. Congress and the Senate seem incapable of passing legislation (again, see note 1). There is deadlock. All of this: a sign of a broken system.
Sam's idea that our systems are broken is a common underlying thought of our times. It is shared by many in Europe where Brexit too is seen as evidence of a terribly broken system. These “populist” uprisings. People voting against their own economic interests. The system is broken. The outcomes are unjust and unfair - especially for the least powerful. Those people have been deceived by corrupt double-speakers. Political charlatans interested only in lining their own pockets and those of the powerful corporations. The system is broken.
But when did it break? In the case of the American system: Did it break sometime during Obama’s term? Did it break at the moment Trump was elected? Perhaps when he won the nomination? What exactly is broken, except the expectations of those who do not agree with the outcome of these elections and referendums?
Let us remind ourselves of François Chollet’s (@fchollet) Tweet in full:
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.”
Let us observe (before we return to this shortly) how wondrous is the claim that something can be simultaneously "free" and "costly". This is a tactic employed by those who believe government is the best provider of some service - especially something like education. What is meant here is: the education is "free" to the user and "costly" to the taxpayer. (It's not quite like this, of course - because many of us were indeed taxpayers when we were users - so we paid). "Free" and "costly" means: the government extracts taxes so that for some the system is (apparently) free while for everyone else it is costly. That is what is meant by "free" yet "costly". And this is why I argue that it entails (logically implies, assuming the preceding holds) that "we need government funded institutions to ensure people vote the right way."
The process works like this: the taxpayer has money extracted from them under penalty of force by the government, which then allocates some of that to educational institutions. They don't do this without conditions. After all, if there were no conditions anyone at all could claim they were an educational institution and demand money from the government. So governments require that "standards" are met in those institutions they fund. Meeting "standards" requires a comparison between the content the institutions provide and a set of criteria designed by government. So "standards" shape content - which is to say the curriculum. In reality it's far more prescriptive: standards are the curriculum and also how the curriculum is taught and assessed. Standards - conditions for funding - are extremely restrictive, inspections occur, and schools and other educational institutions are closed if government requirements for what is taught are not met. And some of that content must include things like: particular interpretations of history, how economic systems and commerce should operate, what the normative response to social and environmental issues is, how a legal system should be set up, the place of religion in society and the proper role and function of government and so on and on. This is a terrible conflict of interest. If the purpose of education is to help young people foster and explore their own creativity and become better critical thinkers, this cannot happen when the government is mandating standards. As governments must do - else how can they possibly decide between the many institutions competing for funding so that education can be provided "free" to students? Hence any simultaneously "free" and "costly" system of education must amount to a government funded system of indoctrination. A system which, in part, has at its core an objective of helping to influence how people view the government and, therefore in democracies, how they choose to vote.
Returning to the Tweet under discussion. That view - popular in some circles - suggests that the outcome of an election is an indication of the “functionality” of the system itself. Which is to say if the outcome is bad, then the system that produced it must be faulty. But that would be rather like arguing that the production of a demonstrably faulty theory is a demonstration that the process of science itself is faulty. But as we have seen: science is in the business of producing faulty theories only to be replaced by better (though we must expect ultimately faulty) theories.
Now you may or may not think that Donald Trump is a great thing for America. But let us go with some of the more common positions preferred by his opponents: Donald Trump is a terrible president. He is altogether unsuitable for the position.
Does that mean the system in America is broken? No. It can merely mean Donald Trump is terrible, people elected someone who does not deserve to be there (so they made an error) and he needs to go. Happily, the system is perfectly designed to solve that problem. There is an election every four years in America, and a terrible president can be removed. That is what happens. And so far in the history of America that process has occurred without violence, except where presidents have been assassinated.
So the system works. What is the alternative?
Now maybe you think: but no! Trump is corrupt and is not entitled to be there and never was. People were hoodwinked by a liar. Now of course accusing politicians of lying is hardly the uncovering of some deep truth. But can't people who voted for Trump decide for themselves?
"But no! They cannot." perhaps the retort may come, "They are incapable. They are too poorly educated. The average person is not well-educated and not well informed. So that is why a charlatan can be elected."
But that cannot be so. People are better informed than ever before. And they have always been fallible and gullible. Those things are constants - but information is now more easily accessed and people can choose among sources and choose criteria for judging those sources.
Back to @fchollet's tweet. What would “free high quality education for all” really entail? Well firstly - it cannot be “free”.
There is no such thing as "free" except, perhaps, the air.
Free of course here, as it always does in these cases, is a euphemism for “taxpayer funded”. Teachers do not work for free. And government funded education is necessarily indoctrination. He who pays the piper calls the tune, after all. North Korea provides “free high-quality education for all”. They really do. Education and learning are not at all identical as I say here. Some North Korean children are excellent at mathematics and some other subjects and of course they can recite all sorts of “facts” about what it is “right” to think when it comes to the government. The system works! It's not broken. It is doing exactly what the government want it to do. And the system is a terrible travesty and tragedy.
What can it mean for a system in a free (in the philosophical, libertarian sense) and open society to provide a high quality education?
Firstly - again - it cannot possibly be free. Whatever a child wants to learn - they should be able to. And that might include - no school at all. It might include doing little more than attending the local park each day with their iPad and their friends. Through the internet they have access to more knowledge than anyone has ever had. And if they have a loving set of parents and friends and other wise adults around, they can have conversations to correct any errors they might encounter in their learning travels. Children do indeed love to do this (only forced school manages to switch off this natural love of learning). But iPads aren’t free. Or maybe they would like to go and have swimming lessons instead, or piano lessons or Korean language lessons, or, or, or…whatever the case, those lessons won’t typically be free. They cannot be. People become experts at things at high cost to themselves and so they are entitled to sell their services. They shouldn’t be forced to provide their services for free. And likewise nor should the rest of us be required to pay for someone else’s children to have swimming lessons. Maybe we can barely afford to pay for our own child’s swimming lessons - or whatever.
So “free high quality education for all” cannot be free. That makes zero sense.
High quality will mean children must pursue their own interests and therefore will necessarily form very different views about the world and have wildly different preferences, such as for things like who to vote for in elections.
And as for “for all” - we don’t want everyone to do the same thing, let alone be forced to. Especially children. The future is in the other direction entirely. Some small number of students might choose to pursue a traditional course of study of the kind François Chollet might approve.
As Popper writes in “The Open Society” (you can find the whole context at www.theopensociety.net/2017/12/what-democratic-institutions-may-be-expected-to-do/ thanks to Peter Monnerjahn @PeterMonnerjahn):
“The idea that this problem can be tackled, in turn, by an institutional eugenic and educational control is, I believe, mistaken; some reasons for my belief will be given below.) It is quite wrong to blame democracy for the political shortcomings of a democratic state.”
(The problem in question of which Popper speaks is “dissatisfaction with “democratic institutions because they find that these do not necessarily prevent a state or a policy from falling short of some moral standards or of some political demands which may be urgent as well as admirable.”)
And I agree. When the state or policy falls short, it cannot be that ever more education of the people is needed in order to fix the democratic institutions (the system). The system of democracy - like the system of science - cannot prevent flaws and faults and “falling shorts”.
And with respect to education, anyway, the “average person” is now more educated and better informed than at any point in history. The “average person” was once an illiterate person who, even if they could read, had access to almost zero books and the current goings on of the day. Now the average person can read. They have access to news and the views of their family and friends dispersed throughout the world and - amazingly - the views of some of the best thinkers on the planet - instantly. Some look only at the Instagram and Facebook feeds of young popstars or celebrities famous for being famous, sure. But even the most banal of those people comment on the day’s news and inform their followers of trends. The “average person” is an amazingly knowledgeable, creative nexus of opinion and contradiction and fallibility and knowledge.
If you actually listened to them, you just may find they’ve thought things through. They’ve got reasons. Yes, they might have been mistaken. And the reasons they had were flawed. And they voted based on a mistake.
But when has this never been the case? And how could it possibly be otherwise?
(1) The idea that it is a bad thing when the two houses of a bicameral legislature - such as exists in the United States (the House of Representatives and the Senate), Australia (the House of Representatives and the Senate) or the United Kingdom (the Commons and the Lords) - are at loggerheads and no legislation is being passed is, typically, false. Government is a powerful, dangerous and (at its most mundane) simply annoying institution that intrudes into lives and livelihoods. The less it does to interfere, the better. So it is *good* when government, in its best moods, reduces its own powers and lessens the intrusions it makes. But this is rarely the case. Mostly it is legislating to make regulations and ban this or that thing or prevent this or that thing from occurring or being tried and taking money from these people to give to those people and so on and on. The best it can do is pass laws eliminating regulations and reducing taxes. But the second best thing it can do is, as a broad rule: nothing. So when there is a “deadlock” don’t despair. Realise that is government *working* - the two houses working together to prevent the overall government from doing more to hurt people and intrude into their lives. That system is the one that has survived meta-government trials over millennia. It works better than alternatives that have been tried. And when it’s “not working” it’s working.
If you heat water on a stove and monitor the temperature as time passes - if the heat source is more or less constant and the environment is reasonably controlled (no strong breeze, say) - then you will end up gathering data that looks something quite close to this (the details will, of course, depend on how powerful that heat source is and how much water you have).
Say you keep heating the water. What happens next? What your answer is depends entirely on what knowledge you already have. If you genuinely think you do not know, you can guess. Note this: if you are in that position, it doesn't matter how "educated" you are. Your guess is as wild as anyone else's. "I have a PhD in Science," someone might say, "but although I've no experience ever with this experiment, or anything like it, let me make an educated, expert guess..." Now that person's guess is no better than if they didn't have a PhD.

Now there's a sense in which all knowledge is "guessed". But some guesses are made because they are derived from some existing theory. This is called a prediction. And if the theory being used is scientific we call such a conjecture a scientific prediction. Now a prediction is not a theory - a theory is an explanation - an account of why some phenomenon happens in the way that it does. A prediction is where that general theory is taken and applied to a specific case. In science there is typically one "scientific theory" - namely the explanation - of any given phenomenon. Sometimes there is no such theory (what is consciousness, for example?) and rarely there are competing theories (how do we resolve situations where quantum theory and general relativity conflict?) - often, as in this example of heating water, there is one explanation known.

So...what happens next? If you're a person who thinks "induction" is a thing - guess now what happens next. If you already actually know what happens next, guess what someone who does not know would guess happens next. Many people think something like: well, to make a prediction you "extrapolate", don't you? That's the rational thing to do. You have some data, so now continue the trend, right? It's a nice straight line - a "linear trend" so they say - so why not use what data we have and just continue the pattern?
Why not indeed:
Here, what is predicted is that the temperature of the water just continues to climb. We just follow the pattern previously and guess that the straight line just continues without limit. That might be called "pattern recognition" and is supposedly something like a sign of intelligence. A computer that can make that kind of guess might be well on the way to being smart like us, so we're told. In that context it can also be given the fancy title "Bayesian inference generation" (or something like this) and some people think that this is the kind of prediction that artificially intelligent machines are increasingly able to do. I criticise that line of thinking here: http://www.bretthall.org/superintelligence-4.html I should say: this guess seems quite reasonable. And it's even partially correct. From 80 to 90 degrees that next data point is correct. And so too is the one from 90 to 100 degrees. It is indeed very close to linear there. But anyone who has taken high school or even primary school science, or read a book about this, or seen the graph on the internet, or perhaps even done this experiment themselves, knows this isn't true for values above 100 Celsius (yes, I'm assuming we're at sea level and the conditions are just such that "100 Celsius" is indeed the boiling point of water). Anyway, if you already know, you might guess something strange happens at the boiling point.
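To make the "just continue the pattern" move concrete, here is a minimal Python sketch of fitting a straight line to some hypothetical pre-boiling readings and extrapolating it past the boiling point. All the numbers (readings every minute, warming at roughly 20 degrees per minute) are invented for illustration.

```python
# A sketch of the "inductive" prediction: fit a straight line to heating
# data gathered below boiling, then just continue the trend.

# Hypothetical readings: time (minutes) vs temperature (Celsius),
# all taken while the water is still below its boiling point.
times = [0, 1, 2, 3, 4]
temps = [20.0, 40.0, 60.0, 80.0, 100.0]  # a clean linear trend, ~20 C/min

# Least-squares slope and intercept, computed by hand (no libraries needed).
n = len(times)
mean_t = sum(times) / n
mean_T = sum(temps) / n
slope = (sum((t - mean_t) * (T - mean_T) for t, T in zip(times, temps))
         / sum((t - mean_t) ** 2 for t in times))
intercept = mean_T - slope * mean_t

def extrapolate(minute):
    """The 'pattern continues' prediction: the line goes on without limit."""
    return intercept + slope * minute

# The extrapolation happily predicts superheated liquid water:
print(extrapolate(6))   # 140.0 - but real water at 1 atm never gets there
print(extrapolate(10))  # 220.0
```

The fit itself is flawless on the data it was given; the failure only appears when the line is asked about a regime the data never covered - which is exactly the point being made here.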
That "strange thing" is that as you heat beyond the boiling point (here assumed to be exactly 100 degrees Celsius) the temperature does not increase. It plateaus and will stay like that until all the water boils away (at which point your thermometer, if it keeps monitoring the temperature of the empty vessel, will then start to register an increase in temperature again). Now could you predict this? Yes - if you already know the theory (or if you get wildly lucky). Most people who make this prediction have seen it before (they know what actually happens) and they may even know a deeper explanation involving something about a thing called "latent heat", which is part of a general theory about how pure materials broadly speaking (like water) behave when they change state. The heating doesn't cause a temperature rise, but instead goes into breaking particle bonds, and this takes energy not available to increase the kinetic energy of the particles (and hence the temperature). So even if you'd never had any experience with monitoring the temperature of water over time, if you knew about latent heat - and that water was a pure substance - you could make this prediction. You might not get the exact time and temperature when the graph flattens out correct, but you could at least make this prediction roughly speaking and far more accurately than the plain straight line "extrapolation from induction".
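By contrast, the explanatory prediction can be sketched directly from the theory. The snippet below encodes the textbook explanation - specific heat governs the rise, latent heat of vaporisation governs the plateau. The specific heat and latent heat figures are standard textbook values; the burner power, mass of water and starting temperature are made up for illustration.

```python
# A sketch of the prediction-from-explanation: temperature climbs with
# heating until the boiling point, then plateaus while latent heat is
# being supplied, until all the water has boiled away.

SPECIFIC_HEAT = 4186.0    # J per kg per Celsius degree (liquid water)
LATENT_HEAT_VAP = 2.26e6  # J per kg (water at 100 C, 1 atm)
BOILING_POINT = 100.0     # Celsius, assuming sea-level pressure

def temperature(energy_joules, mass_kg, start_temp=20.0):
    """Water temperature after a given amount of heat has been delivered.

    Below boiling: energy raises the temperature. At boiling: energy goes
    into breaking particle bonds (latent heat), so the temperature stays
    flat until all the water has boiled away.
    """
    energy_to_boil = mass_kg * SPECIFIC_HEAT * (BOILING_POINT - start_temp)
    if energy_joules < energy_to_boil:
        return start_temp + energy_joules / (mass_kg * SPECIFIC_HEAT)
    energy_left = energy_joules - energy_to_boil
    if energy_left < mass_kg * LATENT_HEAT_VAP:
        return BOILING_POINT  # the plateau: still heating, no temperature rise
    return None  # water all boiled away; this model no longer applies

# 1 kg of water on a hypothetical 1 kW burner: one reading every 5 minutes
# (60 kJ delivered per minute). The printout rises, then flattens at 100.
for minute in range(0, 45, 5):
    print(minute, temperature(minute * 60_000, 1.0))
```

Notice the prediction of the plateau falls out of the explanation (latent heat) rather than out of any pattern in the pre-boiling data, which contained no hint of it.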
But absent that theory about latent heat that you already know, you are just wildly guessing. And it wouldn't matter who you were. The thing about a guess (that's a prediction) and a guess (that's wild) is that in the former case you can provide some deeper-than-surface account of why you choose this over that. In the first prediction, where you just continue the straight line (some people call this kind of thing "induction"), well, you're just superficially assuming the pattern continues. But why? No reason. It just seems as though it should and perhaps you've heard the word "extrapolate" before? But you're guessing. You're actually creatively trying to come up with something reasonable. You're not using "induction" (but if you were, we've just shown it leads you straight into error) - you are guessing: making a conjecture. In your mind you might think "the water gets to 100 then boils away, getting hotter and hotter, because - well, what else could happen?" If you don't know, you just don't know. And your guess will be uncoupled from - in this case - actual science. Namely: the best known explanation that has been discovered.
The second prediction is a prediction from a theory itself creatively conjectured some time in the past and tested over and again under many different conditions. It did not come from induction either. Many people over many years, working together, had to explain that matter was made of ever smaller particles that themselves were held together by forces, and that energy was required to break the forces of attraction between these particles, and that this caused changes of state. And this theory isn't contradicted by lots of other science - but instead is essential for progress in other areas too. But its development had nothing to do with collecting lots of data and then "extrapolating" to the "best hypothesis". If that were the case, we shouldn't expect any more accuracy than we see in that straight line graph above.
Now people in hard sciences like chemistry and physics are well aware of this kind of thing. But strangely, when it comes to other areas it is almost a rule that to "extrapolate" is the very height of sophisticated data-informed, evidence-based reasoning. But we have seen that with even the simplest system we can imagine (heating water on a stove) extrapolation cannot work. So how can we possibly expect it to work when things are more complicated and there are more variables?

And yet doomsday prophecies about population www.bretthall.org/cosmological-economics.html are common and rest on how graphs monitoring the growth of people "suggest" or "imply" terrible things to come. But how can such a prediction be made on the basis of data alone? On the basis of data alone, liquid water would seem to continue to get ever hotter even after it boils. And if you are trying to program your robot to be ever more clever, it can only be clever like a person if it can make conjectures like a person. And as we have seen here: if it is required to only make guesses as a "Bayesian Inference Generator" then it will forever be restricted to just those things it has been programmed to be able to predict. It won't be able to genuinely create new knowledge because it is programmed not to be creative but rather to implement an uncreative extrapolation algorithm that pattern matches.

This, by the way, is why some of us question the conclusions that psychologists (and related researchers) draw. They have lots of data and lots of models and even "predictions" from those models. But to us, their graphs often look like that first graph above: a very good set of data - excellent correlation coefficients (a measure of how closely the data matches the "model", i.e. how closely the points sit on the line) - all very carefully collected and precisely reported on. But no explanation. The data leads to a model that is explanationless.
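To illustrate the "excellent correlation, no explanation" worry, here is a small sketch. The correlation coefficient on the hypothetical pre-boiling readings used earlier is perfect, and even after plateau readings contradict the straight-line model it still looks impressively high - which is exactly why a good-looking fit, by itself, explains nothing.

```python
# Pearson correlation coefficient, computed by hand: a measure of how
# closely the data points sit on a straight line.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Pre-boiling readings: a textbook-perfect linear trend.
below_boiling = ([0, 1, 2, 3, 4], [20.0, 40.0, 60.0, 80.0, 100.0])
print(pearson_r(*below_boiling))  # essentially 1.0: a "perfect" model

# Add readings from the plateau at 100 C - the physics has changed
# completely, yet the correlation still looks impressive (about 0.95).
with_plateau = ([0, 1, 2, 3, 4, 5, 6],
                [20.0, 40.0, 60.0, 80.0, 100.0, 100.0, 100.0])
print(pearson_r(*with_plateau))
```

The lesson: a high correlation coefficient measures fit to the data already gathered, not whether the underlying explanation is right - the straight-line "model" keeps a respectable-looking score even while being wildly wrong about what the water does next.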
It does not account for why that graph and not some other, and it does not explain whether and how it might be wrong, or how it might be just a small part of a much larger phenomenon. Therefore attempts to draw conclusions using such a model are in truth "pure guesswork" and nothing like a "scientific prediction". Lots and lots of precise data is not what science is about - or else science would just be about those first two graphs. Science is about the deep explanations of the world - those that accompany that third graph. The purpose of additional data gathered is to rule out the second graph and its accompanying explanation (if there is one), with a complete accounting as to why it's false in terms of latent heat, moreover allowing us to make an infinite number of predictions about not only water but all substances. And at no point did we ever need the chimera of "induction".
@CriticalRationalist (CR) on Twitter took the time to respond to my post here http://www.bretthall.org/humans-and-other-animals.html with his own here https://thecriticalrationalist.weebly.com/philosophy.html CR is a good thinker and writer and his site is certainly worth reading.
Let me quickly state my position for those who tl;dr my own page above. “Suffering” as we understand the term (so, as the word applies to us) is categorically a bad thing. My position is that to “suffer” entails being able to create some explanation about why you are in pain. But although all suffering begins with pain and is caused by it, not all pain is suffering. Pain is morally neutral for humans without some accompanying explanation. For more on this, read my post. (An example of “good pain” might be the injection at the doctor’s that you know will cure your illness. But better: things like the pain of exercise that many of us learn to enjoy, or the pain from a fun but scary bumpy ride at a theme park, etc).
Now this is not quite the same as saying “All pain that is not suffering is good”. There could be kinds of pain we as humans do not yet have access to, or understand, that are nonetheless bad. My explanation of the subjective conscious experience of other humans rests on what my own is like. A problem here is that I cannot do the same for other animals. I don’t know what it’s like to be a bat.
I say in my piece at one point “The philosopher Ludwig Wittgenstein famously remarked that “If a lion could speak we could not understand him”. He did not mean that the lion could not speak English: he meant that the internal workings of the *mind* of the lion may have been so far removed from our own as to have no analogue that could be captured by our own vocabulary.”
I also wrote “I guess animals experience pain but I also concede in saying this that though they experience something (we call pain), it might not be like what we experience as pain - at all. “
I don’t know what that pain might be like. It cannot be suffering of the sort we experience. It also cannot be “pleasurable” either for the same reason. It might be neutral - but because “what it’s like” to be a lion or bat or cow isn’t something we know yet (because we have no explanation of consciousness) I say that causing pain to animals is bad.
But CR says “He (That’s me) argues that meat eaters do not face any ethical dilemmas.” This is not quite right. They do. If they didn’t, my post would have been far more brief. I don’t think animals eaten for meat should (ideally!) experience any pain because of our farming, etc., because we do not know what the quality of that pain is, yet. I don’t know what “fear” feels like to a cow. I guess it must feel like something. I don’t know if it would be “bad”, but because I cannot know, for now, I withhold judgement and I argue to not do evil (like cause unnecessary pain of a kind we don’t understand to a creature whose internal subjective state we do not know about). Now my explanation is that their pain cannot be suffering. And whatever it is, it cannot be bad in the way it is for us. Does it have a moral valence at all? I simply don’t know. But I guess it’s neutral. But that’s all I have. My argument is also that some vegans claim to have a positive explanation that non-human animals actually suffer. I’m arguing they are actually wrong (for the reasons outlined in my post and summarised here and by CR). The argument that “animals suffer, therefore eating meat is wrong” is, I am saying, false. There might be other reasons not to eat meat from animals - but I’m yet to hear them and I won’t make those arguments for a vegan.
CR writes: “Consider, for instance, why people think animals suffer in the first place: people think animals suffer because the behaviour of animals seems to indicate a subjective experience to us. This should strike a thinking person as odd; most animals did not evolve their facial expression for the purpose of communicating to humans how they feel.” CR also here summarises my position about how not all pain is suffering because some - like the pain of a workout if you’re after massive gains or whatever at the gym - is pleasurable. And yet those people who work out (experience the pleasure of pain) have weird facial expressions too. But it doesn’t signify suffering. So I am not moved here by CR’s reply. If the “understanding” that CR writes about because of our co-evolution with animals is to stand, there must be a way to distinguish between “suffering” facial expressions and “just in pain” facial expressions. Is there? (This reminds me of an old Adam Sandler sketch: “Having sex or working out?” - it was just recordings of people moaning and, of course, you can’t tell what they’re up to. Similarly, facial expressions don’t tell us much about how a thing feels inside. Facial expressions probably evolved to scare other animals away. If damage is being caused, here’s my facial expression: run away. Whether that signifies suffering or not - well, that just returns me full force to the original problem.)
I think CR’s argument here demonstrates that animals experience pain. But this was never in dispute. I agree with him that “If animals turn out to suffer, i.e. if they have subjective experiences that are morally bad, then the factory farming of today must be evil.” But I don’t think they do. What someone needs to show myself, or someone who holds my position, is how suffering can be divorced from requiring “explanatory knowledge creation” (it requires that you can *understand why* you are in pain), and this seems to require “universal knowledge creation” (universal because there must be the potential to understand the cause of *any* pain). And the only being we know of that is a universal knowledge creator is a human being - a person. I don’t think animals sharing some facial expressions with human beings is at all relevant, let alone decisive, in the question about how similar their internal subjective states must be, much less their ability to generate explanations about the sources of pain that give rise to those facial expressions.
A person asked @peez (David) about whether if Hitler got into the Star Trek transporter and an identical copy of him was copied, would the copy be “morally responsible”? @peez posted this question to @paulbloomatyale. The discussion was interesting and I think highlights something about how people decide what personhood is all about. You can find the discussion around this Tweet here: https://twitter.com/ToKTeacher/status/919001168629481472
Paul said the copy of Hitler “…didn’t do anything. This person is just a minutes-old baby who looks just like Hitler.” He further said “Sorry to disagree with some of your other respondents -- but you don't punish a guy for having the delusion that he's Hitler”
Now I think this is wrong. I said “If he's a *fungible* copy, he is identical in all respects. Including memories & motivations. He literally is Hitler and thus responsible.”
David said “…If you create an identical copy of me, it is not me. That is what I’m saying.”
Paul said in response to this “Right. True for other things too. If you copy my favorite chair, maybe we can't tell them apart. But it's not my chair.”
I asked “Say the transporter room is sealed/opaque to the world. A perfect copy is created along with the original. When they exit the room…Neither "copy" or original nor any person or *any physical process* can distinguish them. Would you try neither Hitler for war crimes, both?”
Paul said “Neither, since you can't tell which one = original. (Similar to arresting identical twins, knowing one did the crime but not which one)”
Here is where I wanted to pause. Notice that I am trying to maintain the structure of the “thought experiment”. The Star Trek transporter creates copies of people that are absolutely identical in all respects, down to the very atoms. In other words, *fungible* (or absolutely perfect) copies. This matters. I'm using the word in a sense close to that which appears in David Deutsch's book "The Beginning of Infinity" (page 265 in particular).
So perfectly identical (fungible) copies are substantively different to chairs that *look* identical (but whose atoms might be quite different), and especially different to “identical twins” (who are never identical, even in their DNA, it turns out). I wouldn’t punish an innocent identical twin for the crimes of his brother. They really are different people with different histories and - most important - different minds.
But now: a fungible copy of Hitler *is* Hitler. But why is this?
Consider "the original" (not the copy) - it was atoms in his vocal cords that gave the orders. It was the atoms of his hand (and not his copy's) that gave all those salutes. It was his body that was there in Berlin, and not the body or the atoms in the body of his copy, that did all those bad things. And it’s for this reason, I guess, that Paul and others argue that the copy is not culpable.
My position here is: the atoms are not what we’re actually concerned with. The hardware (the body) is irrelevant.
What matters, instead, and crucially, is the mind.
The mind is the software that runs on the hardware of the brain. And that software, if we really did make a perfect copy of Hitler, is identical in all respects. The mind - the software - is just a pattern. It’s the arrangement of the atoms in the brain (constituting the neural connections or whatnot, it doesn’t matter) - something that in principle could, at some future time (when we’ve Star Trek transporters, say), be instantiated in a silicon computer. Or written down on paper. It is a code of some sort that we don't yet understand - the software, the mind of Hitler - that is responsible for Hitler’s actions, and not his body. That mind contains the memories and all the motivations to keep on killing that the original Hitler has because - and this is key - it *is* the original Hitler. Being the original Hitler isn’t about which atoms were there at the time of the invasion of Poland. It is about which mind was there. And the mind that was there was Hitler’s mind. And just because there are now two identical, indistinguishable versions in two bodies (one body with a history and one without) does not mean only one is culpable, because each *mind* has the same history.
And that is why the copy is equally culpable.
Postscript: This is not merely a thought experiment. According to quantum theory, there really do exist “fungible copies” of you because of what we know about how particles (and therefore everything made of matter) behave. The laws of quantum theory compel us towards a vision of reality bigger than what we are familiar with. This forces upon us the idea that not only are electrons and other subatomic particles in two places simultaneously (because they occupy different universes) but so is everything, including: you. What does it feel like for multitudes of "you" to exist right now? Exactly as you feel right now. And each instant, universes "differentiate". For the facts about this read “The Beginning of Infinity” by David Deutsch - in particular the chapter on "The Multiverse".
This is a blog post. It's clearly not a Tweet*. What's the difference? Is it just that this is on my private blog and a tweet is on the platform we call Twitter? Or has there arisen, since Twitter began, a technique - indeed a culture - of how to construct an effective tweet? I think a tweet usually has a style - a style that quickly evolved from the constraint that is 140 characters. Forced into that environment, language took on the form that it did; tweets evolved to be succinct in a way that other mediums did not promote. Here, on my blog, where resources are plentiful and my thoughts can eat up all the characters that are available to them, a more verbose and descriptive style abounds. Ideas are expressible in certain environments, and this means species of communication can evolve to suit them. We should value those species. If someone wants to write long form: get a blog. If you want to sample many ideas quickly: look at Twitter. If you want to combine the two: link to your blog from a tweet.
David Deutsch made the point elegantly in a couple of tweets when Twitter decided that it would experiment with 280 character Tweets for some people. David wrote:
Would you redefine
A haiku to have double
The syllable count?
The point here being: any small change (to the number of syllables) makes things worse. And also: there's simply a tradition. And why do traditions last? Because they work. They are ideas that survive. Twitter has survived as long as it has for a reason. Perhaps not as enduring, thus far, as the haiku.
In another Tweet:
The contrarian natives of Limerick
Thought the rules of the eponymous verse form arbitrary.
They tried to break loose,
But what is the use
Of free form that just makes the thing humourless?
This makes the point even more powerfully. Here it is obvious (if you are familiar with the form of a Limerick) that something has gone terribly wrong. Why change what already works? Rhyme and meter are what make a limerick a limerick. If you don't follow the meter and the rhyming pattern that a limerick demands, you get something worse. A limerick simply *is* of the form:
There was a young man called @jack
And characters he felt he did lack
So up went the limit
Much better now innit?
More room for everyone's craic.
Anything that deviates from that style isn't a limerick. Anything more than 140 characters isn't a Tweet. It's something else. 140 characters forces upon people a style. Especially for thoughts that cannot normally be easily expressed in 140 characters or less.
Choosing some Tweets rather randomly - from Sam Harris (who Tweets far less than he once did) and David Deutsch respectively - we get:
We can oppose all extremism and dogmatism, while recognizing that not all extremes and dogmas are the same. The fine print still matters.
Knowledge is created by conjecture and criticism—in Darwin's theory, mutation and natural selection. Lamarckism tries to do without either.
Tweets are very dense. The first is 137 characters and the second is 139. Sam has actually used articles ("the"), though these are often dropped in tweets as unnecessary. Sam has attempted to explain a complex idea succinctly. He's forced into being clear because he is limited. There are differences between dogmas. The details matter. The second tweet, by David, is even denser. It makes a bold claim about two kinds of knowledge and contrasts this with an alternative. An important point lurks here: in both cases the tweet serves as a starting point for engaging with the broader work of its author. Just pick up their books to find out more.
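For the pedantic, those character counts check out. Here is a quick sanity check in Python (note that `len` counts Unicode characters, so the em dash in David's tweet counts as a single character, just as old 140-character Twitter counted it):

```python
# The two tweets quoted above, verbatim.
harris = ("We can oppose all extremism and dogmatism, while recognizing "
          "that not all extremes and dogmas are the same. The fine print "
          "still matters.")
deutsch = ("Knowledge is created by conjecture and criticism\u2014in Darwin's "
           "theory, mutation and natural selection. Lamarckism tries to do "
           "without either.")

# len() counts Unicode characters, so the em dash ("\u2014") is one character.
print(len(harris))   # 137
print(len(deutsch))  # 139
```

Both squeeze in just under the 140-character limit: no characters to spare for padding words.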
If people want to tweet longer, there is actually a service called http://www.twitlonger.com or even http://talltweets.com (just google "tweet longer"). My preference is to simply link to my own blog. Or sometimes take a screenshot of a longer bit of text and post it as a picture. But I do this rarely. It's cheating!
*I don't know when to capitalise Tweet. I've mixed things up here with tweet and Tweet. Probably not ideal...
It makes sense to be a Republican in Australia. It seems eminently logical: after all, why shouldn’t the Head of State of Australia be Australian? Moreover, shouldn’t we have a system that doesn’t simply allow people to be born into power? Born into power? What an ancient - and ridiculous - concept. The "divine right of kings"? Isn't that an outdated religious notion? Democracy - the idea that the people vote for the best leader based on merit - is surely preferable. And modern. And good.
Those conservatives - and worse - those conservative monarchists must simply be set in their ways. There are no good arguments for the monarchy. And in 1999, when I voted "Yes" for a Republic in the Referendum on this issue, I heard all the arguments. And as I heard them they were weak. "Don’t fix what is not broken" seemed to be the refrain. Others argued in return: the horse and buggy were never broken. They were simply superseded: taken over by a better, more modern way of doing things. So too it must be with governments and systems of democracy. Sometimes things might still work, but nevertheless we can improve things.
Since that time I’ve read more. And interacted with more people. Not so much about the law or politics, but about philosophy and how knowledge is constructed. Here is something remarkable: so much of what we know is inexplicit. We find it very difficult to put into words much of what we know. Here’s a way of thinking about explicit versus inexplicit knowledge:
A good chef has lots and lots of explicit knowledge. If the recipe is written reasonably well, then almost anyone can replicate their dish to a very high degree of accuracy with the right tools and ingredients. Indeed this was the basis for a competition on the television reality show “Masterchef”. A professional chef shows some amateur cooks their special, complicated creation. Then the cooks get the recipe from the chef. What always surprised me was that, no matter how complicated the dish, the cooks managed to get there - and replicate it - quite well. The words alone - with some visual cues - enabled unorganised ingredients to come together, often in highly complex and artistic presentations of food. That knowledge about how to cook is highly, highly explicit.
But now consider the great tennis player Roger Federer. He must know lots about how to play tennis really well and serve the ball - and return it - better than almost anyone who has ever lived. But if he were just to use some words to try to explain to you how to serve a ball, you’d never manage it. Even if you spent a whole day watching him and talking to him - though you’d perhaps get a little better - chances are your serve would look nothing like Roger Federer’s. Yes: there’s genetics involved - some sort of “innate” capability his body has that yours may not. But still: you really would show very little improvement over the course of a day.
So there is inexplicit knowledge that Roger Federer has about tennis. You have inexplicit knowledge too: perhaps you know how to drive a car. It feels a certain way. Or a bike: you just know how to balance. You can explain some of it, but not all. Words capture it somewhat, but not all of it. People don't learn to ride bikes from reading books - though they can learn to bake cakes that way. But the point is: just because you cannot articulate precisely, using words, how to ride a bike doesn’t mean your bike riding is somehow especially dubious. There are some things we can explain with words - explicit knowledge - like a recipe or scientific theories. But other things - like how to serve a tennis ball really well, or ride a bike, or play a piece of music well - are inexplicit. That kind of knowledge is not easily articulated: some is, much is not. I've written a fair bit more about inexplicit vs explicit knowledge here in another context.
There is another thing to say about “knowledge”: where it might be found, as well as what kind it is. For example, we all know it can appear in our minds, because we know things. Knowledge also appears in books. And in computers. But while in a mind it's represented by electricity flowing along neurones in our brains, in books it's ink on paper. In computers: silicon chips. Knowledge is a rather strange kind of "substance" - it's abstract: the physical stuff that represents it can be completely different from one situation to the next, but the knowledge itself can be the same. Knowledge can even appear in systems. For example: the knowledge of the physics of how light and glass interact is “instantiated” (so we say) in a telescope. “Instantiate” means something like “appears there in a certain form” or “is represented within”. So although there can be a book written all about the physics of light - and in that form it's, physically speaking, ink on paper which is “instantiating” the knowledge - that same knowledge can be instantiated in an actual physical thing like a telescope.
So complex things like telescopes can instantiate the very explicit knowledge of how to gather and then focus light.
But what does any of this have to do with the republican vs constitutional monarchy debate? This is the thing about societies: as a rule, historically, they are terribly unstable things. We live at an unusually peaceful time - notwithstanding the chaos in various places. But we shouldn't forget how badly wrong things can go. The author Douglas Murray likes to think of societies as "fragile ecosystems" and I think that's quite right. The majority of societies and whole civilisations throughout all of human history have fallen into chaos and ruin and disappeared from the face of the planet. Whole empires and nations and city-states. Human beings have tried very many different ways of organising, ordering and running societies: absolute monarchs, democracies where people work in coalitions, democracies where the person at the top has more or less power. Many kinds of democracies. Many kinds of unelected tyrannies. Democracy is, of course, no perfect shield against tyranny and disaster: indeed we may well say democracy is a kind of tyranny - the tyranny of the many over the few. And of course we have famously seen democracy turned against itself in many places - Nazi Germany, of course - but one need not go so far back in history. The nations of South America are testimony to the instability of democracy, as is the continent of Africa: coups and violent overthrowings of parliaments. But is any of this an argument against democracy? Not really. We should keep in mind Winston Churchill, to whom are attributed words to the effect: “Democracy is the worst form of government. Except for all those other forms that have been tried from time to time.”
The point there is this: while there is no better system than democracy that we know of, it is terribly imperfect. It is liable to fall into chaos and even tyranny if we are not very careful about how it is set up and run. Given the chance, as Sam Harris has observed, some people will quickly and democratically vote away their rights and democracy itself. We might reasonably wonder, as I write this in 2017, whether this is not indeed happening in many places the world over. Democracy is fragile. A fragile system for organising the fragile ecosystems that are modern societies.
So with these dangers looming over any society at any time, what can we do? Surely we should look at what actually works. What systems have been stable over time? In particular, what systems allow stability under change - systems that, despite huge changes and challenges, have nonetheless not suffered terrible chaos or tyranny? We can look to the United States perhaps: a great nation of relatively stable democracy. Relatively: they did have a civil war, of course, so great internal violence within that system is not unknown. There have been hiccups. But it is certainly a beacon to look to. Where else? Let us look to England: a democracy of a different kind. There the Head of State is not elected, and that position has far less power than that of the American President (who is both Head of State and Head of Government and can, for example, launch nuclear weapons), but there we find a remarkable degree of stability. The British system is ancient - stretching back, one might presume, to the Magna Carta of 1215 and before. But why should these places be especially stable and others not? We cannot articulate all the reasons. Both instantiate inexplicit knowledge: their traditions and customs contain within them rules about how to keep a society “stable under change”. Great change. Dynamic societies are the rare exception: most societies that have ever been have been “static” - they have not made great progress. But England - for example - led the industrial revolution. Science made great leaps there. Society itself underwent great changes, and democracy reached its most inclusive form with the head of state having among the most diluted of powers. And yet the system of governance itself weathered all that came - including a brief period of republicanism from 1649-1660. Notwithstanding all that, the system has persisted and thrived. It was the framework within which so much change took place safely and to the net benefit of all in that great nation.
David Deutsch explains in "The Beginning of Infinity" that a "tradition" has - until now - always been a way of preventing things from changing. Traditions are usually the ways things are done so that things remain more or less static. But in modern "dynamic" societies there is now a different kind of tradition - a unique and powerful one - a "tradition of criticism". That is a monumental difference between this tradition of ours and the traditions of the past. It is a tradition that allows for change. And how that tradition works exactly - what the conditions in a society are that allow for it - is not easy to articulate. There must be other traditions and customs in a society that allow a tradition of criticism to flourish. Those other traditions - preconditions for creating that favourable environment for progress - are not easily articulated. Were they, we would more easily export our peaceful democracies to places like North Korea and Iran and Russia. But it is not easy to explain.
So, now to Australia and our Constitutional Monarchy. It has clearly allowed stability under great change. It has actually worked. The nation can be a dynamic and changing one, but this type of democracy allows that change to occur while the whole project remains in place, functioning and thriving. The system we have embodies knowledge - of an inexplicit sort - of how to keep the nation stable. Those customs and traditions of democracy that we have actually work. We know some of the reasons, but we cannot articulate all of them. Should we change this? Can we improve it? Perhaps we could. But how? We do not know why it works, and we cannot know how to improve systems we do not fully understand. So we could change things, intending to improve them, but be completely mistaken and cause damage instead. Rather than an automobile replacing a horse and buggy, think instead of a vibrant and healthy person offered the chance to take a drug which has never been tried before, and for which no explanation is given as to how it might work. Yet an "expert" assures you: this drug will make you even more healthy and vibrant. There is a risk, they are reluctant to admit, that it might make you terribly sick - but we've no reason to assume that either. What would you do? It all turns on how well you currently feel. And if you look around and most other people are pretty unhealthy by comparison, perhaps appreciating your good fortune is enough, and you should pursue greater wisdom, knowledge, satisfaction and progress elsewhere, rather than take the risky pill.
An elected president, even an appointed one (appointed by the Parliament or some committee, say) would shift some power away from the Parliament to another seat. We would actually not know what systems we are changing if we made this change. Perhaps those systems would not be too much affected. But perhaps it would be a tragic mistake. Shifting whole systems from one to another is no small thing.
And ultimately it does not matter “Who rules?”, as Popper argues here in a paper that should be required reading for anyone interested in these issues. Because democracy simply isn't about electing and installing rulers - be they presidents or prime ministers. It's actually about ensuring rulers can do very little damage, so that we can correct their errors if need be. The Monarch - or their representative - is simply prohibited from doing much damage, and we have seen this (1975 notwithstanding).
The question before us when considering changes to our system of government is: how can we most easily undo mistakes that are made by rulers? Our system already satisfies this criterion to a level that leads the world. Popper’s criterion of error correction is met no better elsewhere, and we might guess it cannot easily be improved upon. Again: we should not fall back into the mistake of thinking democracy is about putting particular rulers into positions - and therefore the question of whether the head of state is Australian, or not, is the wrong question for a democracy to consider. And it is true, a monarchist cannot properly articulate all the reasons that a monarchy is preferable, because much of the reason why is tied up in a type of inexplicit knowledge instantiated in the traditions of governing. But just because these cannot be explained in clear language does not make the knowledge more dubious. Remember: the knowledge of how to ride a bike is of a similar sort - real, yet not easily explained in words. But we know that the knowledge works, because the bike stays balanced and you get to where you need to go. So it is with systems of government and great democratic traditions: means of safely, and with stability, changing our society to allow us to make progress together. The analogy is not, in this case, replacing a horse and buggy with a car. It's repairing the best bike we've ever had, one that shows absolutely no sign of wear and tear. It's taking off the front wheel and replacing it with another: never tested, and for no reason other than that it was, for example, made in Australia.
So, in summary: our Constitutional Monarchy maintains the constant stability that allows for the change that the Parliament brings. To remove the stability that is the very thing that has facilitated our dynamic society is dangerous. We’d then have two seats - the Parliament and the Presidency - both subject to change.
The Crown is the Dignified and the Parliament is the Efficient, said Walter Bagehot in “The English Constitution”, separating out the symbolic from the way things are actually achieved. In modern science-type language: the Crown is the Constant and the Parliament is the Variable.
We change this at our peril.