Science and democracy share the feature that they are error-correction systems. The former corrects errors in our knowledge of the physical world; the latter, errors in our choice of rulers and their policies. With science, on the rare occasion when we have two theories competing to explain the same phenomenon, we can rule one out through a "crucial experiment" (for more on crucial experiments, see here). With democracy, candidates compete to win elections by putting forward policies, and if the one who wins - and so has the power to actually enact those policies - fails to meet our expectations, the next election is an opportunity to correct our mistake and try another candidate.
But in neither case - science or democracy - can we ensure that the theory we hold or the candidate we vote for cannot possibly fail. And we must expect them to fail in ways we could not have foreseen.
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.” - Google Programmer François Chollet (@fchollet), Twitter, 4 Feb 2018
If the average person were educated and informed to a standard François Chollet approved of, that would not guarantee that, by his lights, the government was not "terribly dysfunctional" (that it was made up of terrible people, or that it never got anything done (see note 1 below)), or even that the system itself was not "dysfunctional", if by that we meant something like: incapable in principle of allowing the worst people - by our personal standards - to be elected. Or perhaps it means something deeper: that there is corruption that makes the democracy rotten to the core. But well-educated, well-informed people are still liable to fall into error, and nothing can guarantee they cannot be deceived.

Indeed here lurks an irony, but it's true: the more well educated you are about a thing, the more blind you can be to the most common errors. You might simply be "used to" making the same mistake over and again. Expertise can sometimes be a liability - even, and perhaps especially, in your domain of expertise. The reason is that you often cannot think as creatively, because you think of all the criticisms. That's what makes you an expert, after all! So you think of all the criticisms against the idea that you are wrong - because you know them. Isn't that strange?

It's like an expert Korean linguist who is teaching someone the Korean word for (say) computer (which, as it turns out, is "computer" with a Korean accent: "keompyuteo"). Say the (ignorant!) person they teach comes to them one day and says: "I heard from a Korean and they said that's not the only word. There is another word, and it's 'gaesangi'," they insist. But the expert knows they're correct - there's one word only - and they consult with some native Korean speakers who agree, and besides, they're the expert after all. So they return to the learner and insist: "You're mistaken - there is one word. I've researched this. You can trust me. And I've checked - with other native speakers." But experts can be mistaken, and in this case the learner just happened to overhear some older North Koreans speaking and using that word... which is indeed an older North Korean word for "computer", not well known by South Koreans. So as it turns out, the ignorant, less educated person knew the truth: there was more than one Korean word for computer in existence, and no amount of checking with the typical South Korean expert would have fixed that.

More education doesn't mean you won't make mistakes that those with less education would not make. We are all equally fallible. There is always an infinite amount we do not know, and we must expect others to know things we do not - even (perhaps especially) the "less well" educated and "less well" informed. No system of education can ensure errors of this kind become less frequent. No democratic system can ensure that, for example, terrible rulers won't get elected. So even if President Trump really is/was a terrible mistake, no democratic system - which is to say no democratic institution - could have prevented his election in principle if he was a legally qualified candidate.
Of course at the extremes exactly that criticism is made: he is not legally qualified. But such accusations now seem to be just par for the course in United States presidential elections: Obama was not born in the United States, or Hillary Clinton was actually a criminal who should have been in gaol, and so on. If the institutions investigate, and you regard them as having worked in those cases, then it is a poor, ad hoc explanation that says they only ever fail, are corrupt, and are evidence of a broken or "dysfunctional" system when applied to the candidates and parties you do not support.
Now this may seem a bizarre diversion, but bear with me. The average person probably doesn't think much about the intricacies of how science generates the knowledge that it does. That's a rarefied kind of interest, of concern only to philosophers of science and some scientists. Then again, so far as "interests" go, there is no "average person" - there are few academic interests all average people share. Does the average person enjoy learning maths, or engaging in deeply refined literary criticism, or history lessons? Do they want a deep understanding of civics and constitutional law? Hint: ask a school-aged student to find out. But the average person is indeed interested in knowledge of all sorts - it may be academic knowledge of a subject of interest to them, or some project they are working on (both of these often wrongly and dismissively referred to as "hobbies"), or it can be knowledge of their own lives, those of their friends and family, how to do their job well and better, and other day-to-day things. The average person has concerns and interests - perhaps not shared by philosophers of science in Sydney, or Google programmers in Silicon Valley, say.
It’s not really of great importance, though it may be of some use, for the average person to learn that the process that is science is in large part defined by the creation of hard-to-vary explanations of the physical world that can be tested against physical reality. These “tests” are known as experiments - but they are not the only way we have of criticising scientific explanations. It is just that explanations of the physical world that can be tested against physical reality - by experiments - are precisely the scientific ones. The experiment should be able to be performed in practice, which is to say we should possess an explanation of how the experiment can be conducted by us.
Some versions of string theory, which postulate entities that can only be resolved with the energy of a particle accelerator the diameter of the galaxy, are an example of a possible explanation of the physical world that is, in my view, not scientific. Although there is some kind of test possible “in principle”, the lack of an “in practice” explanation of how to build such a device - given the transformations people can actually make in order to test the theory - should remove it from serious contention as a way forward in making progress in physical science (as useful as the mathematical techniques discovered through explorations of string theory have been in mathematics).
Sometimes this process of science generates theories that are false. Indeed this is rather the rule and not the exception. We should expect that the vast majority of scientific theories will turn out to be false. This is simply a claim that the scientific enterprise is unbounded: we will always be able to improve upon any explanation we do discover. And any improvement will show how flawed the unimproved version was and why.
The “average person” might think that science is an engine for generating truths about the world, and that once the authority of science - in the form of some professorial scientist - has deigned to profess a truth, we can trust such claims to stand as “scientific truth”. But science is very much a catalogue of errors. As David Deutsch has said, it would have been much better had scientific theories been called scientific misconceptions from the start.
Science, for example, at various times has produced theories such as “spontaneous generation” as an attempt to explain how non-living matter can become living. Some of the earliest theories in chemistry included the “phlogiston” idea, where this substance inhabited all matter and it was this that was combustible. Earthquakes, volcanoes, moving continents and other eruptions of the Earth were explained as evidence the planet was expanding. And for centuries it was believed that an instantaneously acting gravitational force existed between all masses in the universe, and that this explained the motion of objects from orbiting planets to falling apples. And these are just some of the more prominent examples from just biology, chemistry, geology and physics. Astronomy is a catalogue of bold conjectures about the nature of the cosmos being utterly decimated by the light of observation. Literally. And we are all familiar with supposedly rock-solid medical and nutritional advice seemingly turning on a dime to advise the precise opposite of what we were once taught (cf: “eat more carbohydrates and less protein” becomes “eat more protein and fewer carbohydrates”).
So is this system of producing explanations in science flawed? Why should it consistently throw up utter falsehoods? Why won’t it simply provide us with the final correct answer? Of course there is no such answer - only better and better answers: approximations of increasing fidelity, reach and depth. So although any given explanation must be expected to be flawed, the system itself cannot be blamed for those flaws. This process, where a creative scientist tries to solve a problem with what is known by producing a new theory, is roughly the way knowledge generation in all domains works. An idea is guessed, and then anyone interested attempts to refute that guess through careful criticism. The criticism might be that the idea is false, or ugly, or not so useful compared to some other. But if the criticisms all fail, and the new idea accomplishes everything any competing idea does - and perhaps more (and more elegantly) - the idea survives to earn the moniker “The explanation of…”.
The system must be expected to produce utter falsehoods. Indeed it is required to. If science is about generating beautiful explanations, then for each beautiful explanation that becomes “The Scientific explanation of…”, defeated rivals will lie in its wake. The decimation of opponents - typically through experiment - is a constant in science. It reveals how what we once thought was correct actually always was utterly false and flawed, and how blind we were not to see it. But we are fallible, and it is no sin to keep on making these mistakes. That is our nature. Our fallibility is tied intimately to our creativity - that feature of us that strives to make bold conjectures, majestic guesses, in an attempt to improve our lot and what we know. But that process is an undirected one, for we cannot know in which direction the ultimate ontological truth about reality lies. We set out from our island of what is known and sail into the unknown, hoping to find a better place. If we fail, we can always find our way back, but there is no guarantee we will land somewhere better. That is our nature. Science cannot provide sure answers - it can only provide the conditions under which those answers can possibly arise.
Now all of that, if absorbed, might make someone somewhat better informed about the process of science and some of its history. And they might learn a little about epistemology besides. But would that do anything to sway them in an election? Precisely what kind of information could make the average person “well informed” enough such that the system was not broken? Should it be about who should be elected?
The process that is democracy is in large part defined by the conditions under which the successes and failures of the rulers of a society can be tested against the expectations of an electorate such that, if those expectations are not met, the rulers can be removed without violence. The ultimate expression of such “tests” are elections - but of course they are not the only way of criticising elected rulers. Rulers are criticised every single day - the media and much of the electorate are obsessed by it. It is just that elections are the means by which rulers who fail to meet the expectations of the electorate - which is to say, by some measure of comparing the politicians’ stated policies with what they actually achieved - can be removed. Democracy is, or should be seen as, a system whereby we trial some leader (on the basis of their stated policy) and, should this leader fail to meet our expectations, we can remove that leader through a process that allows us to install some other leader with different policies, should we so choose.
Now people are all very different. We are fallible and have different values, different knowledge and different circumstances. This kaleidoscope of differences ensures that we cannot possibly agree all the time on every topic. Some people are more or less knowledgeable about this or that thing, and that different knowledge will come to bear when it comes to deciding whether this course or that might best suit their own interests, or interests they care about. And this, it must be said, is a wonderful thing. It means that there will always be wildly divergent ideas about how to proceed in life. Each of us, as ruler of our own life, guesses, trials and corrects the courses we take, amending our paths and trying to plot a better course. Often many of us fail terribly. We are fallible. We lack the knowledge to know what to do next.
Sam Harris and Russell Brand had a conversation recently on Russell’s podcast, “Under the Skin”. That two-hour conversation was an impressive display of just how far apart two people can be, and what entirely different “language games” they can play, while somehow keeping the conversational ball in the air. At times they really weren’t even playing the same game, the disagreement was so great. So while there seemed to be little common ground at any point on almost any issue of substance (except that there exist mysteries in the world and that human beings are important), both nevertheless found an opportunity at the 1h 50min mark for a point of enthusiastic agreement:
Harris: “Democracy seems impressively broken to me and capitalism seems impressively broken to me…except the alternatives seem worse…this is Churchill, right?”
But why? Why does Sam think this? One need only listen to the Waking Up podcast to get a taste. Donald Trump’s election is a clear sign of a broken system, in Sam’s eyes. Though Sam would have been no fan of Hillary Clinton either, so perhaps the “broken system” is evidenced by the dearth of choice on offer - as though the choices on offer were particularly abhorrent. What is remarkable is that Harris notices - mere minutes after claiming that capitalism is broken - that today we live in a wonderful age that seems to keep getting better, where only 10% of people are in extreme poverty, while a mere 150 years ago those numbers were flipped. Now why is this? Is it the spread of socialism, or is it free trade (capitalism)? What makes the difference?
But Sam is very worried. He agrees, he says, at the end of that podcast, with some experts that we are basically in a new "Cuban Missile Crisis" but no one has noticed. That now is particularly dangerous. America is at a particularly unstable epoch - irrationality rules, fake news has proliferated, the experts have been shown to be wrong time and again, and there is mistrust all around. The House and the Senate seem incapable of passing legislation (again, see note 1). There is deadlock. All of this: a sign of a broken system.
Sam's idea that our systems are broken is a common underlying thought of our times. It is shared by many in Europe where Brexit too is seen as evidence of a terribly broken system. These “populist” uprisings. People voting against their own economic interests. The system is broken. The outcomes are unjust and unfair - especially for the least powerful. Those people have been deceived by corrupt double-speakers. Political charlatans interested only in lining their own pockets and those of the powerful corporations. The system is broken.
But when did it break? In the case of the American system: Did it break sometime during Obama’s term? Did it break at the moment Trump was elected? Perhaps when he won the nomination? What exactly is broken, except the expectations of those who do not agree with the outcome of these elections and referendums?
Let us remind ourselves of François Chollet's (@fchollet) tweet in full:
“Until the average person is well-educated and well-informed, you will always have a dysfunctional political system. I agree that free high-quality education for all would be costly to implement, but rich economies can afford it. In fact, I think they can't afford not to do it.”
Let us observe (before we return to this shortly) how wondrous is the claim that something can be simultaneously "free" and "costly". This is a tactic employed by those who believe government is the best provider of some service - especially something like education. What is meant here is: the education is "free" to the user and "costly" to the taxpayer. (It's not quite like this, of course - because many of us were indeed taxpayers when we were users - so we paid.) "Free" and "costly" means: the government extracts taxes so that for some the system is (apparently) free while for everyone else it is costly. That is what is meant by "free" yet "costly". And this is why I argue that it entails (logically implies, assuming the preceding holds) that "we need government funded institutions to ensure people vote the right way."
The process works like this: the taxpayer has money extracted from them under penalty of force by the government, which then allocates some of that to educational institutions. Governments don't do this without conditions. After all, if there were no conditions, anyone at all could claim they were an educational institution and demand money from the government. So governments require that "standards" be met in those institutions they fund. Meeting "standards" requires a comparison between the content the institutions provide and a set of criteria designed by government. So "standards" shape content - which is to say, the curriculum. In reality it's far more prescriptive: standards are the curriculum, and also how the curriculum is taught and assessed. Standards - conditions for funding - are extremely restrictive; inspections occur, and schools and other educational institutions are closed if government requirements for what is taught are not met. And some of that content must include things like: particular interpretations of history, how economic systems and commerce should operate, what the normative response to social and environmental issues is, how a legal system should be set up, the place of religion in society, the proper role and function of government, and so on and on. This is a terrible conflict of interest. If the purpose of education is to help young people foster and explore their own creativity and become better critical thinkers, this cannot happen when the government is mandating standards. As governments must - else how could they possibly decide between the many institutions competing for funding so that education can be provided "free" to students? Hence any simultaneously "free" and "costly" system of education must amount to a government funded system of indoctrination: a system which, in part, has at its core the objective of influencing how people view the government and, therefore, in democracies, how they choose to vote.
Returning to the Tweet under discussion. That view - popular in some circles - suggests that the outcome of an election is an indication of the “functionality” of the system itself. Which is to say if the outcome is bad, then the system that produced it must be faulty. But that would be rather like arguing that the production of a demonstrably faulty theory is a demonstration that the process of science itself is faulty. But as we have seen: science is in the business of producing faulty theories only to be replaced by better (though we must expect ultimately faulty) theories.
Now you may or may not think that Donald Trump is a great thing for America. But let us go with some of the more common positions preferred by his opponents: Donald Trump is a terrible president. He is altogether unsuitable for the position.
Does that mean the system in America is broken? No. It can merely mean Donald Trump is terrible, that people elected someone who does not deserve to be there (so they made an error), and that he needs to go. Happily, the system is perfectly designed to solve that problem. There is an election every four years in America, and a terrible president can be removed. That is what happens. And so far in the history of America that process has occurred without violence, except where presidents have been assassinated.
So the system works. What is the alternative?
Now maybe you think: but no! Trump is corrupt and is not entitled to be there and never was. People were hoodwinked by a liar. Now of course accusing politicians of lying is hardly the uncovering of some deep truth. But can't people who voted for Trump decide for themselves?
"But no! They cannot." perhaps the retort may come, "They are incapable. They are too poorly educated. The average person is not well-educated and not well informed. So that is why a charlatan can be elected."
But that cannot be so. People are better informed than ever before. And they have always been fallible and gullible. Those things are constants - but information is now more easily accessed and people can choose among sources and choose criteria for judging those sources.
Back to @fchollet's tweet. What would “free high quality education for all” really entail? Well firstly - it cannot be “free”.
There is no such thing as "free" except, perhaps, the air.
Free here, as it always does in these cases, is a euphemism for “taxpayer funded”. Teachers do not work for free. And government funded education is necessarily indoctrination. He who pays the piper calls the tune, after all. North Korea provides “free high-quality education for all”. They really do. Education and learning are not at all identical, as I say here. Some North Korean children are excellent at mathematics and some other subjects, and of course they can recite all sorts of “facts” about what it is “right” to think when it comes to the government. The system works! It's not broken. It is doing exactly what the government wants it to do. And the system is a terrible travesty and tragedy.
What can it mean for a system in a free (in the philosophical, libertarian sense) and open society to provide a high quality education?
Firstly - again - it cannot possibly be free. Whatever a child wants to learn, they should be able to. And that might include no school at all. It might include doing little more than attending the local park each day with their iPad and their friends. Accessing the internet, they have access to more knowledge than anyone has ever had. And if they have loving parents and friends and other wise adults around, they can have conversations to correct any errors they might encounter in their learning travels. Children do indeed love to do this (only forced schooling manages to switch off this natural love of learning). But iPads aren’t free. Or maybe they would like to go and have swimming lessons instead, or piano lessons, or Korean language lessons, or, or, or… Whatever the case, those lessons won’t typically be free. They cannot be. People become experts at things at high cost to themselves, and so they are entitled to sell their services. They shouldn’t be forced to provide their services for free. And likewise nor should the rest of us be required to pay for someone else’s children to have swimming lessons. Maybe we can barely afford to pay for our own child’s swimming lessons - or whatever.
So “free high quality education for all” cannot be free. That makes zero sense.
High quality will mean children pursue their own interests, and therefore they will necessarily form very different views about the world and have wildly different preferences - including about whom to vote for in elections.
And as for “for all” - we don’t want everyone to do the same thing, let alone be forced to. Especially children. The future is in the other direction entirely. Some small number of students might choose to pursue a traditional course of study of the kind François Chollet might approve of.
As Popper writes in “The Open Society” (you can find the whole context at www.theopensociety.net/2017/12/what-democratic-institutions-may-be-expected-to-do/, thanks to Peter Monnerjahn @PeterMonnerjahn):
“The idea that this problem can be tackled, in turn, by an institutional eugenic and educational control is, I believe, mistaken; some reasons for my belief will be given below.) It is quite wrong to blame democracy for the political shortcomings of a democratic state.”
(The problem in question, of which Popper speaks, is dissatisfaction with democratic institutions “because they find that these do not necessarily prevent a state or a policy from falling short of some moral standards or of some political demands which may be urgent as well as admirable”.)
And I agree. When the state or policy falls short, it cannot be that ever more education of the people is needed in order to fix the democratic institutions (the system). The system of democracy - like the system of science - cannot prevent flaws and faults and “falling shorts”.
And with respect to education anyway, the “average person” is now more educated and better informed than at any point in history. The “average person” was once illiterate, and even those who could read had access to almost no books, nor to the current goings-on of the day. Now the average person can read. They have access to news, to the views of their family and friends dispersed throughout the world and - amazingly - to the views of some of the best thinkers on the planet, instantly. Some look only at the Instagram and Facebook feeds of young popstars or celebrities famous for being famous, sure. But even the most banal of those people comment on the day’s news and inform their followers of trends. The “average person” is an amazingly knowledgeable, creative nexus of opinion and contradiction and fallibility and knowledge.
If you actually listened to them, you just may find they’ve thought things through. They’ve got reasons. Yes, they might have been mistaken. And the reasons they had were flawed. And they voted based on a mistake.
But when has this never been the case? And how could it possibly be otherwise?
(1) The idea that it is a bad thing when the two houses of a bicameral legislature - such as exists in the United States (the House of Representatives and the Senate), Australia (the House of Representatives and the Senate) or the United Kingdom (the Commons and the Lords) - are at loggerheads and no legislation is being passed is, typically, false. Government is a powerful, dangerous and (at its most mundane) simply annoying institution that intrudes into lives and livelihoods. The less it does to interfere, the better. So it is *good* when government, in its best moods, reduces its own powers and lessens the intrusions it makes. But this is rarely the case. Mostly it is legislating to make regulations, to ban this or that thing, to prevent this or that thing from occurring or being tried, and to take money from these people to give to those people, and so on and on. The best it can do is pass laws eliminating regulations and reducing taxes. But the second best thing it can do is, as a broad rule: nothing. So when there is a “deadlock”, don’t despair. Realise that is government *working* - the two houses working together to prevent the overall government from doing more to hurt people and intrude into their lives. That system is the one that has survived meta-government trials over millennia. It works better than the alternatives that have been tried. And when it’s “not working”, it’s working.
If you heat water on a stove and monitor the temperature as time passes - if the heat source is more or less constant and the environment is reasonably controlled (no strong breeze, say) - then you will end up gathering data that looks quite close to this (the details will, of course, depend on how powerful the heat source is and how much water you have):
Say you keep heating the water. What happens next? What your answer is depends entirely on what knowledge you already have. If you genuinely think you do not know, you can guess. Note this: if you are in that position, it doesn't matter how "educated" you are. Your guess is as wild as anyone else's. "I have a PhD in Science," someone might say, "but although I've no experience ever with this experiment, or anything like it, let me make an educated, expert guess..." Now that person's guess is no better than if they didn't have a PhD.

Now, there's a sense in which all knowledge is "guessed". But some guesses are made because they are derived from some existing theory. This is called a prediction. And if the theory being used is scientific, we call such a conjecture a scientific prediction. A prediction is not a theory - a theory is an explanation: an account of why some phenomenon happens in the way that it does. A prediction is where that general theory is taken and applied to a specific case. In science there is typically one "scientific theory" - namely the explanation - of any given phenomenon. Sometimes there is no such theory (what is consciousness, for example?), and rarely there are competing theories (how do we resolve situations where quantum theory and general relativity conflict?). Often, as in this example of heating water, there is one known explanation.

So... what happens next? If you're a person who thinks "induction" is a thing, guess now what happens next. If you already know what happens next, guess what someone who does not know would guess. Many people think something like: well, to make a prediction you "extrapolate", don't you? That's the rational thing to do. You have some data, so now continue the trend, right? It's a nice straight line - a "linear trend", so they say - so why not use what data we have and just continue the pattern?
Why not indeed:
Here, what is predicted is that the temperature of the water just continues to climb. We follow the previous pattern and guess that the straight line continues without limit. That might be called "pattern recognition" and is supposedly something like a sign of intelligence. A computer that can make that kind of guess might be well on the way to being smart like us, so we're told. In that context it can also be given the fancy title "Bayesian inference generation" (or something like this), and some people think that this is the kind of prediction that artificially intelligent machines are increasingly able to do. I criticise that line of thinking here: http://www.bretthall.org/superintelligence-4.html I should say: this guess seems quite reasonable. And it's even partially correct. From 80 to 90 degrees, that next data point is correct. And so too is the one from 90 to 100 degrees. It is indeed very close to linear there. But anyone who has taken high school or even primary school science, or read a book about this, or seen the graph on the internet, or perhaps even done this experiment themselves, knows this isn't true for values above 100 Celsius (yes, I'm assuming we're at sea level and conditions are just such that 100 Celsius is indeed the boiling point of water). Anyway, if you already know, you might guess that something strange happens at the boiling point.
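To make the naive move concrete, here is a minimal sketch in Python. The readings are invented for illustration (they are not real measurements): we fit a straight line to data gathered below the boiling point, admire how well it fits, and then simply continue the pattern.

```python
import numpy as np

# Invented, illustrative readings taken while the water is still below boiling:
# time in minutes, temperature in degrees Celsius.
time_min = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
temp_c = np.array([20.0, 36.0, 52.0, 68.0, 84.0, 100.0])

# Fit a straight line to the data we have...
slope, intercept = np.polyfit(time_min, temp_c, 1)

# ...and note how "good" the fit looks on this data alone.
r = np.corrcoef(time_min, temp_c)[0, 1]
print(f"fit: T = {slope:.1f} * t + {intercept:.1f}  (correlation r = {r:.3f})")

# The naive "extrapolation" step: continue the pattern past the boiling point.
for t in (6.0, 7.0, 8.0):
    print(f"t = {t:.0f} min: extrapolated {slope * t + intercept:.0f} C")
```

The fit looks perfect on the data alone, and the extrapolation confidently predicts 116, 132 and 148 degrees - and, as explained next, every one of those numbers is wrong.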
That "strange thing" is that as you heat beyond the boiling point (here assumed to be exactly 100 degrees Celsius) the temperature does not increase. It plateaus and will stay like that until all the water boils away (at which point your thermometer, if it keeps monitoring the temperature of the empty vessel will then start to increase in temperature again). Now could you predict this? Yes - if you already know the theory (or if you get wildly lucky). Most people who make this prediction either have seen it before (they know what actually happens) and they may even know a deeper explanation involving something about a thing called "latent heat" which is part of a general theory about how pure materials broadly speaking (like water) behave when they change state. The heating doesn't cause a temperature rise, but instead goes into breaking particle bonds and this takes energy not available to increase the kinetic energy of the particles (and hence the temperature). So even if you'd never had any experience with monitoring the temperature of water over time, if you knew about latent heat - and that water was a pure substance - you could make this prediction. You might not get the exact time and temperature when the graph flattens out correct, but you could at least make this prediction roughly speaking and far more accurately than the plain straight line "extrapolation from induction".
But absent that theory about latent heat, you are just wildly guessing. And it wouldn't matter who you were. The difference between a guess that's a prediction and a guess that's wild is that in the former case you can provide some deeper-than-surface account of why you chose this over that. In the first prediction, where you just continue the straight line (some people call this kind of thing "induction"), you're just superficially assuming the pattern continues. But why? No reason. It just seems as though it should, and perhaps you've heard the word "extrapolate" before? But you're guessing. You're actually creatively trying to come up with something reasonable. You're not using "induction" (and if you were, we've just shown it leads you straight into error); you are guessing: making a conjecture. In your mind you might think "the water gets to 100 then boils away, getting hotter and hotter, because - well, what else could happen?". If you don't know, you just don't know. And your guess will be uncoupled from - in this case - actual science. Namely: the best known explanation that has been discovered.
The second prediction is a prediction from a theory, itself creatively conjectured some time in the past and tested over and again under many different conditions. It did not come from induction either. Many people over many years, working together, had to explain that matter was made of ever smaller particles, that those particles were held together by forces, that energy was required to break the forces of attraction between them, and that this caused changes of state. And this theory isn't contradicted by lots of other science - rather, it is essential for progress in other areas too. But its development had nothing to do with collecting lots of data and then "extrapolating" to the "best hypothesis". If that were the case, we shouldn't expect any more accuracy than we get from that straight-line graph above.
Now people in hard sciences like chemistry and physics are well aware of this kind of thing. But strangely, in other areas it is almost a rule that to "extrapolate" is the very height of sophisticated, data-informed, evidence-based reasoning. We have seen that with even the simplest system we can imagine (heating water on a stove), extrapolation cannot work. So how can we possibly expect it to work when things are more complicated, with more variables? And yet doomsday prophecies about population www.bretthall.org/cosmological-economics.html are common, and rest on how graphs monitoring the growth of population "suggest" or "imply" terrible things to come. But how can such a prediction be made on the basis of data alone? On the basis of data alone, liquid water would seem to continue to get ever hotter even after it boils.

And if you are trying to program your robot to be ever more clever, it can only be clever like a person if it can make conjectures like a person. As we have seen here: if it is required to make guesses only as a "Bayesian inference generator", then it will forever be restricted to just those things it has been programmed to be able to predict. It won't be able to genuinely create new knowledge, because it is programmed not to be creative but rather to implement an uncreative extrapolation algorithm that pattern-matches.

This, by the way, is why some of us question the conclusions that psychologists (and related researchers) draw. They have lots of data and lots of models and even "predictions" from those models. But to us, their graphs often look like that first graph above: a very good set of data - excellent correlation coefficients (a measure of how closely the points sit on the line of the "model") - all very carefully collected and precisely reported. But no explanation. The data leads to a model that is explanationless. It does not account for why that graph and not some other; it does not explain whether and how it might be wrong, or how it might be just a small part of a much larger phenomenon. Therefore attempts to draw conclusions using such a model are in truth "pure guesswork" and nothing like a "scientific prediction". Lots and lots of precise data is not what science is about - or else science would just be about those first two graphs. Science is about the deep explanations of the world that accompany the third graph. The purpose of the additional data gathered is to rule out the second graph and its accompanying explanation (if there is one), with a complete accounting of why it's false in terms of latent heat - moreover allowing us to make an infinite number of predictions about not only water but all substances. And at no point did we ever need the chimera of "induction".
@CriticalRationalist (CR) on Twitter took the time to respond to my post here http://www.bretthall.org/humans-and-other-animals.html with his own here https://thecriticalrationalist.weebly.com/philosophy.html CR is a good thinker and writer and his site is certainly worth reading.
Let me quickly state my position for those who tl;dr'd my own page above. “Suffering” as we understand the term (that is, as the word applies to us) is categorically a bad thing. My position is that to “suffer” entails being able to create some explanation about why you are in pain. But although all suffering begins with pain and is caused by it, not all pain is suffering. Pain is morally neutral for humans absent some accompanying explanation. For more on this, read my post. (An example of “good pain” might be the injection at the doctor's that you know will cure your illness. Better still: the pain of exercise that many of us learn to enjoy, or the pain of a fun but scary bumpy ride at a theme park, etc.)
Now this is not quite the same as saying “All pain that is not suffering is good”. There could be kinds of pain we as humans do not yet have access to, or do not understand, that are nonetheless bad. My explanation of the subjective conscious experience of other humans rests on what my own is like. A problem here is that I cannot do the same for other animals. I don’t know what it’s like to be a bat.
I say in my piece at one point “The philosopher Ludwig Wittgenstein famously remarked that “If a lion could speak we could not understand him”. He did not mean that the lion could not speak English: he meant that the internal workings of the *mind* of the lion may have been so far removed from our own as to have no analogue that could be captured by our own vocabulary.”
I also wrote “I guess animals experience pain but I also concede in saying this that though they experience something (we call pain), it might not be like what we experience as pain - at all. “
I don’t know what that pain might be like. It cannot be suffering of the sort we experience. It also cannot be “pleasurable”, for the same reason. It might be neutral - but because “what it’s like” to be a lion or bat or cow isn’t something we know yet (because we have no explanation of consciousness), I say that causing pain to animals is bad.
But CR says “He (that’s me) argues that meat eaters do not face any ethical dilemmas.” This is not quite right. They do. If they didn’t, my post would have been far briefer. I don’t think animals eaten for meat should (ideally!) experience any pain because of our farming, etc., because we do not yet know what the quality of that pain is. I don’t know what “fear” feels like to a cow. I guess it must feel like something. I don’t know if it would be “bad”, but because I cannot know, for now, I withhold judgement and I argue to not do evil (like causing unnecessary pain of a kind we don’t understand to a creature whose internal subjective state we do not know). Now my explanation is that their pain cannot be suffering. And whatever it is, it cannot be bad in the way it is for us. Does it have a moral valence at all? I simply don’t know. But I guess it’s neutral. And that’s all I have. My argument is also that some vegans claim to have a positive explanation that non-human animals actually suffer. I’m arguing they are wrong (for the reasons outlined in my post, summarised here and by CR). The argument “animals suffer, therefore eating meat is wrong” I am saying is false. There might be other reasons not to eat meat from animals - but I’m yet to hear them, and I won’t make those arguments for a vegan.
CR writes: “Consider, for instance, why people think animals suffer in the first place: people think animals suffer because the behaviour of animals seems to indicate a subjective experience to us. This should strike a thinking person as odd; most animals did not evolve their facial expression for the purpose of communicating to humans how they feel.” CR also summarises here my position about how not all pain is suffering, because some - like the pain of a workout if you’re after massive gains or whatever at the gym - is pleasurable. And yet those people who work out (experiencing the pleasure of pain) have weird facial expressions too. But those expressions don’t signify suffering. So I am not moved here by CR’s reply. If the “understanding” that CR writes about, arising from our co-evolution with animals, is to stand, there must be a way to distinguish between “suffering” facial expressions and “just in pain” facial expressions. Is there? (This reminds me of an old Adam Sandler sketch, “Having sex or working out?”: it was just recordings of people moaning and, of course, you can’t tell which they’re up to.) Similarly, facial expressions don’t tell us much about how a thing feels inside. Facial expressions probably evolved to scare other animals away: if damage is being caused, here’s my facial expression - run away. Whether that signifies suffering or not - well, that just returns me full force to the original problem.
I think CR’s argument here demonstrates that animals experience pain. But this was never in dispute. I agree with him that “If animals turn out to suffer, i.e. if they have subjective experiences that are morally bad, then the factory farming of today must be evil.” But I don’t think they do. What someone needs to show me, or someone who holds my position, is how suffering can be divorced from requiring “explanatory knowledge creation” (it requires that you can *understand why* you are in pain) - and this seems to require “universal knowledge creation” (universal because there must be the potential to understand the cause of *any* pain). And the only being we know of that is a universal knowledge creator is a human being - a person. I don’t think animals sharing some facial expressions with human beings is at all relevant, let alone decisive, in the question of how similar their internal subjective states must be, much less their ability to generate explanations about the sources of pain that give rise to those facial expressions.
A person asked @peez (David) whether, if Hitler got into the Star Trek transporter and an identical copy of him was created, the copy would be “morally responsible”. @peez posted this question to @paulbloomatyale. The discussion was interesting, and I think it highlights something about how people decide what personhood is all about. You can find the discussion around this Tweet here: https://twitter.com/ToKTeacher/status/919001168629481472
Paul said the copy of Hitler “…didn’t do anything. This person is just a minutes-old baby who looks just like Hitler.” He further said “Sorry to disagree with some of your other respondents -- but you don't punish a guy for having the delusion that he's Hitler”
Now I think this is wrong. I said “If he's a *fungible* copy, he is identical in all respects. Including memories & motivations. He literally is Hitler and thus responsible.”
David said “…If you create an identical copy of me, it is not me. That is what I’m saying.”
Paul said in response to this “Right. True for other things too. If you copy my favorite chair, maybe we can't tell them apart. But it's not my chair.”
I asked “Say the transporter room is sealed/opaque to the world. A perfect copy is created along with the original. When they exit the room…Neither "copy" or original nor any person or *any physical process* can distinguish them. Would you try neither Hitler for war crimes, both?”
Paul said “Neither, since you can't tell which one = original. (Similar to arresting identical twins, knowing one did the crime but not which one)”
Here is where I wanted to pause. Notice that I am trying to maintain the structure of the “thought experiment”. The Star Trek transporter creates copies of people that are absolutely identical in all respects - down to the very atoms. In other words, *fungible* (or absolutely perfect) copies. This matters. I'm using the word in a sense close to that which appears in David Deutsch's book "The Beginning of Infinity" (page 265 in particular).
So perfectly identical (fungible) copies are substantively different from chairs that *look* identical (but whose atoms might be quite different), and especially different from “identical twins” (who are never identical, even in their DNA, it turns out). I wouldn’t punish an innocent identical twin for the crimes of his brother. They really are different people, with different histories and - most important - different minds.
But now: a fungible copy of Hitler *is* Hitler. But why is this?
Consider "the original" (not the copy) - it was atoms in his vocal chords that gave the orders. It was the atoms of his hand (and not his copy) that gave all those salutes. It was his body that was there in Berlin and not the body or the atoms in the body of his copy that did all those bad things. And it’s for this reason, I guess, that Paul and others argue that the copy is not culpable.
My position here is: the atoms are not what we’re actually concerned with. The hardware (the body) is irrelevant.
What matters, instead, and crucially, is the mind.
The mind is the software that runs on the hardware of the brain. And that software, if we really did make a perfect copy of Hitler, is identical in all respects. The mind - the software - is just a pattern. It’s the arrangement of the atoms in the brain (constituting the neural connections or whatnot, it doesn’t matter) - something that in principle, at some future time (when we’ve Star Trek transporters, say), could be instantiated in a silicon computer. Or written down on paper. It is a code of some sort that we don't yet understand - the software, the mind of Hitler - that is responsible for Hitler’s actions, and not his body. That mind contains the memories and all the motivations to keep on killing that the original Hitler has because - and this is key - it is the original Hitler. Being the original Hitler isn’t about which atoms were there at the time of the invasion of Poland. It is about which mind was there. And the mind that was there was Hitler’s mind. And just because there are now two identical, indistinguishable versions in two bodies (one body with a history and one without) does not mean only one is culpable, because in both cases each *mind* has the same history.
And that is why the copy is equally culpable.
Postscript: this is not merely a thought experiment. According to quantum theory, there really do exist “fungible copies” of you, because of what we know about how particles (and therefore everything made of matter) behave. The laws of quantum theory compel us towards a vision of reality bigger than what we are familiar with. This forces upon us the idea that not only are electrons and other subatomic particles in two places simultaneously (because they occupy different universes), but so is everything, including: you. What does it feel like for multitudes of "you" to exist right now? Exactly as you feel right now. And at each instant, universes "differentiate". For the facts about this read “The Beginning of Infinity” by David Deutsch - in particular the chapter on "The Multiverse".
This is a blog post. It's clearly not a Tweet*. What's the difference? Is it just that this is on my private blog and a tweet is on the platform we call Twitter? Or, since Twitter began, has there arisen a technique - indeed a culture - of how to construct an effective tweet? I think a tweet usually has a style - a style that quickly evolved from the constraint that is 140 characters. Forced into that environment, language took on the form that it did; tweets evolved to be succinct in a way that other mediums did not promote. Here on my blog, where resources are plentiful and my thoughts can eat up all the characters available to them, a certain verbose and descriptive style abounds. The metaphor here is that ideas are expressible in certain environments, and this means species of communication can evolve. We should value those species. If someone wants to write long form: get a blog. If you want to sample many ideas quickly: look at Twitter. If you want to combine the two: link to your blog from a tweet.
David Deutsch made the point elegantly in a couple of tweets when Twitter decided that it would experiment with 280 character Tweets for some people. David wrote:
Would you redefine
A haiku to have double
The syllable count?
The point here being: any small change (to the number of syllables) makes things worse. And also: there's simply a tradition. And why? Well, traditions last because they work. They are ideas that survive. Twitter has survived as long as it has for a reason. Perhaps not as enduring, thus far, as the haiku.
In another Tweet:
The contrarian natives of Limerick
Thought the rules of the eponymous verse form arbitrary.
They tried to break loose,
But what is the use
Of free form that just makes the thing humourless?
This makes the point even more powerfully. Here it is obvious (if you are familiar with the form of a limerick) that something has gone terribly wrong. Why change what already works? Rhyming is part of what makes a limerick a limerick. If you don't follow the metre and the pattern of rhyme that a limerick demands, you get something worse. A limerick simply *is* of the form:
There was a young man called @jack
And characters he felt he did lack
So up went the limit
Much better now innit?
More room for everyone's craic.
Anything that deviates from that style isn't a limerick. Anything more than 140 characters isn't a Tweet. It's something else. 140 characters forces upon people a style. Especially for thoughts that cannot normally be easily expressed in 140 characters or less.
Rather randomly choosing some Tweets from Sam Harris (who Tweets far less than he once did) and David Deutsch respectively, we get:
We can oppose all extremism and dogmatism, while recognizing that not all extremes and dogmas are the same. The fine print still matters.
Knowledge is created by conjecture and criticism—in Darwin's theory, mutation and natural selection. Lamarckism tries to do without either.
Tweets are very dense. The first is 137 characters and the second is 139. Sam has actually used an article ("the"), but articles are rarer in tweets because they are typically unnecessary. Sam has attempted to explain a complex idea succinctly. He's forced into being clear because he is limited. There are differences between dogmas; the details matter. The second tweet, by David, is even denser. It makes a bold claim about two kinds of knowledge and contrasts this with an alternative. An important point lurks here: in both cases the tweet serves as a starting point for engaging with the broader work of both authors. Just pick up their books to find out more.
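Incidentally, the character counts are easy to check with a throwaway Python snippet (note that the em dash in David's tweet counts as a single character):

```python
tweets = [
    "We can oppose all extremism and dogmatism, while recognizing that "
    "not all extremes and dogmas are the same. The fine print still matters.",
    "Knowledge is created by conjecture and criticism\u2014in Darwin's theory, "
    "mutation and natural selection. Lamarckism tries to do without either.",
]
for text in tweets:
    print(len(text), "characters")  # prints 137, then 139
```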
If people want to tweet longer, there are actually services like http://www.twitlonger.com or http://talltweets.com (just google "tweet longer"). My preference is to simply link to my own blog. Or sometimes to take a screenshot of a longer bit of text and post it as a picture. But I do this rarely. It's cheating!
*I don't know when to capitalise Tweet. I've mixed things up here with tweet and Tweet. Probably not ideal...
It makes sense to be a Republican in Australia. It seems eminently logical: after all why shouldn’t the Head of State of Australia be Australian? Moreover, shouldn’t we have a system that doesn’t simply allow people to be born into power? Born into power? What an ancient - and ridiculous - concept. The "divine right of kings"? Isn't that an outdated religious notion? Democracy: the idea that the people vote for the best leader based on merit, is surely preferable. And modern. And good.
Those conservatives - and worse - those conservative monarchists must simply be set in their ways. There are no good arguments for the monarchy. And in 1999, when I voted "Yes" for a Republic in the Referendum on this issue, I heard all the arguments. And as I heard them they were weak. "Don’t fix what is not broken" seemed to be the refrain. Others argued in return: the horse and buggy were never broken. They were simply superseded: taken over by a better, more modern way of doing things. So too it must be with governments and systems of democracy. Sometimes things might still work, but nevertheless we can improve things.
Since that time I’ve read more. And interacted with more people. Not about the law, or politics so much: but philosophy and how knowledge is constructed. Here is something remarkable: So much of what we know is inexplicit. We find it very difficult to put into words much of what we know. Here’s a way of thinking about explicit versus inexplicit knowledge:
A good chef has lots and lots of explicit knowledge. If the recipe is written reasonably well, then most anyone can replicate their dish to a very high degree of accuracy with the right tools and ingredients. Indeed this was the basis for a competition on the television reality show “Masterchef”. A professional chef shows some amateur cooks their special, complicated creation. Then the cooks get the recipe from the chef. What always surprised me was that, no matter how complicated, the cooks managed to get there - and replicate the complicated dish - quite well. The words alone, with some visual cues, enabled unorganised ingredients to come together, often in highly complex and artistic presentations of food. That knowledge about how to cook is highly, highly explicit.
But now consider the great tennis player Roger Federer. He must know lots about how to play tennis really well and how to serve the ball - and return it - better than almost anyone who has ever lived. But if he were just to use words to try to explain to you how to serve a ball, you’d never manage it. Even if you spent a whole day watching him and talking to him - though you’d perhaps get a little better - chances are your serve would look nothing like Roger Federer’s serve. Yes: there’s genetics involved, some sort of “innate” capability his body has that yours may not. But still: you really would show very little improvement over the course of a day.
So there is inexplicit knowledge that Roger Federer has about tennis. You have inexplicit knowledge too: perhaps you know how to drive a car. It feels a certain way. Or a bike: how to balance. You just know how to balance. You can explain some of it, but not all. Words capture it somewhat, but not all of it. People don't learn to ride bikes from reading books - though they can learn to bake cakes that way. But the point is: just because you cannot articulate precisely, using words, how to ride a bike, doesn't mean your bike riding is somehow especially dubious. There are some things we can explain with words - explicit knowledge - like a recipe, or scientific theories. But there exist other things - like how to serve a tennis ball really well, or ride a bike, or play a piece of music well - that are inexplicit. That kind of knowledge is not all easily articulated. Some is, much is not. I've written a fair bit more about inexplicit vs explicit knowledge here in another context.
There is another thing to notice about "knowledge": where it might be found, as well as what kind it is. For example, we all know it can appear in our minds, because we know things. Knowledge also appears in books. And in computers. But while in a mind it's represented by electricity flowing along the neurones in our brains, in books it's ink on paper. In computers: patterns on silicon chips. Knowledge is a rather strange kind of "substance" - it's abstract: the physical stuff that represents it can be completely different from one situation to the next, but the knowledge itself can be the same. Knowledge can even appear in systems. For example: the knowledge of the physics of how light and glass interact is "instantiated" (so we say) in a telescope. "Instantiate" means something like "appears there in a certain form" or "is represented within". So although there can be a book written all about the physics of light - and in that form it's, physically speaking, basically ink on paper (it's this which is "instantiating" the knowledge) - that same knowledge can be instantiated in an actual physical thing like a telescope.
So complex things like telescopes can instantiate the very explicit knowledge of how to gather and then focus light.
But what does any of this have to do with the republican vs constitutional monarchy debate? This is the thing about societies: as a rule, historically, they are terribly unstable things. We live at an unusually peaceful time - notwithstanding the chaos in various places. But we shouldn't forget how badly wrong things can go. The author Douglas Murray likes to think of societies as "fragile ecosystems" and I think that's quite right. The majority of societies and whole civilisations throughout all of human history have fallen into chaos and ruin and disappeared from the face of the planet. Whole empires and nations and city-states. Human beings have tried a great many different ways of organising, ordering and running societies: absolute monarchs, democracies where people work in coalitions, democracies where the person at the top has more or less power. Many kinds of democracies. Many kinds of unelected tyrannies. Democracy is, of course, no perfect shield against tyranny and disaster: indeed we may well say democracy is a kind of tyranny. The tyranny of the many over the few. And of course we have famously seen democracy turned against itself in many places - Nazi Germany, of course - but one need not go so far back in history. The nations of South America are testimony to the instability of democracy, as is the continent of Africa. Coups and violent overthrowings of parliaments. But is any of this an argument against democracy? Not really. We should keep in mind Winston Churchill, to whom are attributed words to the effect: "Democracy is the worst form of government. Except for all those other forms that have been tried from time to time."
The point there is this: while there is no better system than democracy that we know of, it is terribly imperfect. It is liable to fall into chaos and even tyranny if we are not very careful about how it is set up and run. Given the chance, as Sam Harris has observed, some people will quickly and democratically vote away their rights and democracy itself. We might reasonably wonder right now, as I write this in 2017, if this is not indeed happening in many places the world over. Democracy is fragile. A fragile system for organising the fragile ecosystems that are modern societies.
So with these dangers looming over any society at any time, what can we do? Surely we should look at what actually works? What systems have been stable over time? In particular, what systems allow stability under change - systems that, despite huge changes and challenges, have nonetheless not suffered terrible chaos or tyranny? We can look to the United States perhaps: a great nation of relatively stable democracy. Relatively: they did have a civil war, of course, so great internal violence within that system is not unknown. There have been hiccups. But it is certainly a beacon to look to. Where else? Let us look to England: a democracy of a different kind. There the Head of State is not elected, and that position has far less power than that of the American President (who is both Head of State and Head of the Government and, for example, can launch nuclear weapons), yet there we find a particularly remarkable degree of stability. The British system is ancient - stretching back, one might presume, to the Magna Carta of 1215 and before. But why should these places be especially stable and others not? We cannot articulate all the reasons. Both instantiate inexplicit knowledge: their traditions and customs contain within them rules about how to keep a society "stable under change". Great change. Dynamic societies are the rare exception: most societies that have ever existed have been "static" - they have not made great progress. But England, for example, led the industrial revolution. Science made great leaps there. Society itself underwent great changes, and democracy reached its most inclusive form with the head of state having among the most diluted of powers. And yet the system of governance itself weathered all that came - including a brief period of republicanism from 1649-1660. Notwithstanding all that, the system has persisted and thrived. It was the framework within which so much change took place safely and to the net benefit of all in that great nation.

David Deutsch explains in "The Beginning of Infinity" that a "tradition" has - until now - always been a way of preventing things from changing. Traditions are usually the ways things are done so that things remain more or less static. But in modern "dynamic" societies there is now a different kind of tradition - a unique and powerful one - a "tradition of criticism". That is a monumental difference between a tradition we have and the traditions of the past. It is a tradition that allows for change. And how that tradition works exactly - what the conditions in a society are that allow for it - is not easy to articulate. There must be some other traditions and customs in a society that allow a tradition of criticism to flourish. Those other traditions - preconditions for creating that favourable environment for progress - are not easily articulated. Were they, we could more easily export our peaceful democracies to places like North Korea and Iran and Russia. But it is not easy to explain.
So, now to Australia and our Constitutional Monarchy. It has clearly allowed stability under great change. It has actually worked. The nation can be a dynamic and changing one, but this type of democracy allows for that change to occur while the whole project remains in place, functioning and thriving. The system we have embodies knowledge - of an inexplicit sort - of how to keep the nation stable. Those customs and traditions of democracy that we have actually work. We know some of the reasons, but we cannot articulate all of them. Should we change this? Can we improve it? Perhaps we could. But how? We do not know all the reasons why it works, and we cannot know how to improve systems we do not fully understand. So we could change things, intending to improve them, but be completely mistaken and cause damage instead. Rather than an automobile replacing a horse and buggy, we should think instead of a vibrant and healthy person who is offered the chance to take a drug which has never been tried before and for which no explanation is given as to how it might work. Yet an "expert" assures them: this drug will make you even more healthy and vibrant. There is a risk, the expert is reluctant to admit, that it might make you terribly sick - but we've no reason to assume that either. What would you do? It all turns on how well you currently feel. And if you look around and most other people are pretty unhealthy by comparison, perhaps appreciating your good fortune is enough, and you should pursue greater wisdom, knowledge, satisfaction and progress elsewhere, rather than take the risky pill.
An elected president, even an appointed one (appointed by the Parliament or some committee, say) would shift some power away from the Parliament to another seat. We would actually not know what systems we are changing if we made this change. Perhaps those systems would not be too much affected. But perhaps it would be a tragic mistake. Shifting whole systems from one to another is no small thing.
And ultimately it does not matter "Who rules?", as Popper argues here in a paper that should be required reading for anyone interested in these issues. Because democracy simply isn't about electing and installing rulers - be they presidents or prime ministers. It's actually about ensuring rulers can do very little damage, so that we can correct their errors if need be. The Monarch - or their representative - is simply prohibited from doing much damage, and we have seen this (1975 notwithstanding).
The question before us when considering changes to our system of government is: how can we most easily undo mistakes that are made by rulers? Our system has already satisfied this criterion to a level that leads the world. Popper's criterion of error correction is met no better elsewhere and, we might guess, cannot easily be improved upon. Again: we should not fall back into the mistake of thinking democracy is about putting particular rulers into positions, and therefore the question of whether the head of state is Australian or not is the wrong question for a democracy to consider. And it is true, a monarchist cannot properly articulate all the reasons that a monarchy is preferable, because much of the reason why is tied up in a type of inexplicit knowledge instantiated in the traditions of governing. But just because these cannot be explained in clear language does not make the knowledge more dubious. Remember: the knowledge of how to ride a bike is of a similar sort - real, yet not easily explained in words. But we know that the knowledge works because the bike stays balanced and you get to where you need to go. So it is with systems of government and great democratic traditions: they are means of changing our polity safely and with stability, allowing us to make progress together. The analogy is not, in this case, replacing a horse and buggy with a car. It's taking the best bike we've ever had - one with absolutely no sign of wear and tear - removing the front wheel, and replacing it with another: never tested, and for no reason other than that it was, for example, made in Australia.
So, in summary: our Constitutional Monarchy maintains the constant stability that allows for the change that the Parliament brings. To remove that stability - the very thing that has facilitated our dynamic society - is dangerous. We'd then have two seats - the Parliament and the Presidency - both subject to change.
The Crown is the Dignified and the Parliament is the Efficient, said Walter Bagehot in "The English Constitution", separating out the symbolic from the way things are actually achieved. In modern science-type language: the Crown is the Constant and the Parliament is the Variable.
We change this at our peril.
Not all of DNA appears to be "code" - much of it seems to be "junk", or non-coding. But the parts that do code for something - the genes, collectively the genome - constitute a program, or better: many subprograms for the construction of various organs, and therefore organ systems, that together constitute a complicated organism. In the case of the human being there must exist subprograms for the structure of the eye, the skin, the liver, the bones and so on - for every part of the body that can be distinguished from every other part. This includes, of course, that most important part of the human: the brain. While our eyes share many features - though not all - with those of other animals, our brains, though anatomically similar in many ways to those of some other large mammals, must be in one sense sharply different. What they are doing is different - which is to say, what the mind is, is different. The software running on the hardware that is the brain is different. If animals have minds at all (I tend to think they must) those minds are completely different from our own. They are not a little different (as their eyes or brains are): they are very, very different. Qualitatively different. Of an order altogether unlike anything else in nature. There is something very special about human brains compared to insect brains or dog brains or dolphin and chimp brains. Our brains - or at least the minds that run upon them - are universal. While dogs and chimps and dolphins may have some internal subjective experience of the world, their minds are limited. They can solve problems only of the sort already coded for in their genes. A dog may learn tricks - but the repertoire of possible tricks it can learn is forever bounded by the behaviours coded in its genes. It cannot, for example, learn to write a sonnet, or a computer program, or how to engineer itself a better home.
It is important to recognise that brains are not minds: minds are the software, brains are the physical stuff made of neurones. Minds are made of thoughts (both conscious and unconscious). Brains and minds are different things, even if necessarily connected in humans. Our minds - the minds of humans, running on those human brains - are universal. This means they can turn their attention to any problem whatever and attempt a solution. What they possess is the ability to create new knowledge. This unique creative ability makes them universal. This is why the human environment is shaped by humans to a degree not seen in any other species. Most other species are victims of their environment. Not us: we are masters of it, and gaining ever greater mastery over it.
Whatever the program is for this universal intelligence, it is instantiated in our genes. That code, written in the so-called "AGTC" code of base pairs, codes in part, somewhere, somehow, for this special universal capacity only humans possess: the capacity to create new explanatory knowledge that allows us to control the world around us. An amazing code, but a code it is. And so it must be possible, one day, for software engineers to write that code as an algorithm and so produce an artificial general intelligence (AGI). What an AGI is not, is simply an AI that is better than us at chess, and at doing long division, and at translating Japanese into Spanish and... (iterate for all other human intellectual abilities). No: it's not like a million AIs working in parallel. Instead an AGI will be a universal explainer. It won't be preprogrammed with all possible games and languages and mathematical things we know of (though it could be, I suppose) - what makes the crucial difference is the (as yet unknown) code that allows it to be a universal explainer. It, like us, will be able to learn things not already in its code. Whatever that code is - being universal (which means able, in principle, to creatively solve any solvable problem, including especially problems not yet known) - there cannot be fundamental differences between alternative ways of writing it. Different ways of coding the same algorithm might appear different (as the numeral 1 appears different to the characters "one", which again seem different to the strings 2/2 or 4-3) but ultimately they all represent exactly the same underlying abstract structure. "4-3" is not more "1" than 2/2 - both "code for" the same thing. So it is with all humans and future AGI: we will all share this same ability to be universal learners, or knowledge creators, or problem solvers. Call it what you like. The code will be equivalent. Which is to say the software has the same capacity: a capacity for universality. And given that the code (the software) will be equivalent (though it's unlikely, of course, to be identical in humans and AGI), it cannot be the case that intelligence is purely a matter of genetics. After all: AGI won't have genes.
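To make the "different code, same abstract thing" point concrete, here is a toy sketch of my own (not drawn from any of the sources above): two programs written quite differently that nonetheless compute exactly the same mathematical function. Neither is "more" that function than the other, just as 4-3 is no more "1" than 2/2 is.

```python
# Two syntactically different programs for the same abstract function
# (the factorial) - analogous to "1", "2/2" and "4-3" all denoting one.

def factorial_recursive(n: int) -> int:
    """Factorial, written recursively."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Factorial, written as a loop."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Different code, identical input-output behaviour:
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```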
What can, however, be different between two humans, or between humans and AGI, are two bits of hardware that might, between people, be heritable: speed (of our processors) and/or memory. Both of these are, of course, improvable. But the software cannot have its capacity improved. If it's universal, it's universal. It can solve any problem. You can't, in principle, solve more problems than all the physically solvable problems that there are, given enough resources (like matter, energy and time). So why don't we solve all those problems, or even attempt to? Well, at the level of civilisation we simply don't know about all the problems, or don't have the resources. But as individuals it's usually simply a mundane lack of interest. So is there a genetic component to intelligence? Yes: in that, so far, the only code we know of for intelligence is written into the genes. We don't know what it is. When we do, and when we can decode the genetic code for intelligence, we will be able to write that code (perhaps far more efficiently) into an AGI. Or maybe we'll discover - using our creativity - a program for creativity before then. But it must be logically equivalent to the code in the DNA. And there can't be "degrees" of the kind of intelligence we and AGI will have, any more than there are "degrees" of "oneness". 2/2 and 4-3 and 1 all have precisely the same amount of "1" - namely, a complete difference from 0 and 2 and all the other numbers that don't have it at all.
IQ tests are said to be tests of intelligence. But they cannot be, unless one argues that "intelligence" is something like "interest in doing the kind of tasks associated with success in IQ tests". So, for example: doing lots of practice IQ-test-type questions would be one such task. So would mathematics and logic and reading lots of books. But none of this implies greater intelligence. It implies more knowledge, sure: knowing how. But not intelligence. Real intelligence should be regarded as the capacity (in principle) to creatively solve problems not before encountered. And this will depend in part on (1) how interested one is in attempting to solve such a problem, (2) one's memory and (3) one's processing speed.
So call IQ tests a test of intelligence if you like, but recognise it's like calling the results of your "height and weight" test a test of your beauty. Sure, there's possibly a relationship. Sure, those results are correlated with all sorts of other things. But to say the test is truly, actually, a test of that phenomenon is to ignore the simple fact that you've arbitrarily renamed - and therefore narrowly restricted - a word with lots and lots of history and social baggage and generality. So call the people with high IQ "more intelligent" and call the tall people with low weight "more beautiful", but then appreciate: we're all equally intelligent in the most important way (we're universal) and we're all equally beautiful in the most important way too. And that should matter far more than measuring (then judging) people with numbers ascribed by psychologists.
I began the day catching up on Sam Harris' "Waking Up Podcast". He interviewed Garry Kasparov, who spoke clearly about the threat of Russian President/Dictator Vladimir Putin. Socialism, Kasparov observed at one point, cannot value the individual in the way capitalism does. Whereas free, open, capitalist societies see any individual human death as a terrible tragedy, socialism sees the "value of life" quite differently: sometimes individuals must be sacrificed. The death of an innocent individual is not necessarily a tragedy. This huge asymmetry in the values of the two systems when it comes to life is absolutely crucial to keep in view when discussing the apparent merits of socialism. So mainstream has the fawning over socialism become that ABC reporters/comedians in Australia now sneeringly admonish politicians for not being socialists (see, for example, Tom Gleeson's interview on "The Weekly" with Senator Cory Bernardi). The supremely high value capitalism places upon human life (compared to socialism, which places many other things, like "preserving the system as it is", higher than any individual) is a consequence, I would argue, of the creative output of producers; creativity - making things better; progress - is valued highly, and because the next improvement can come from anywhere, all human life is especially sacred. But socialism is the idea that there is a utopia we can enact: a system where problems (like inequity) can be once-and-for-all eliminated. And this might require the elimination of some individuals. Not so with open, free capitalist societies, which must recognise the inevitability of problems and, therefore, the possibility of their creative solutions. But I digress.
What was also wonderful about Sam's interview was Garry explaining how at first Deep Blue was beaten by a human (him). Then the computer won. They did not play a third match. They should have. Jaron Lanier, in his first book "You Are Not a Gadget", speaks in a similar way to the way Garry did when assessing that loss: it was largely about psychology. Garry got spooked by the computer and the environment. The poker element of chess undid the human player because there was no face on the computer. Eventually, of course, the computers all got so much better that they can more often than not beat a human player. But what is often left out of these discussions of apparent machine "superiority" in this narrow area is: a human player with a computer can always beat a computer with no human. Garry spoke of this with such confidence it was almost as if it were a law of nature. So the combination of raw computing power (and a fixed algorithm) is always worse than raw computing power AND the creativity of the human mind. People will always beat machines - just so long as people get to use machines too. And why shouldn't they? The whole point of machines is to serve people. They're just dumb machines. We owe nothing to our microwaves, iPhones or AlphaGos. Given some of Sam's remarks, I'm still not sure he quite understands what the point of "universality" is when it comes to the ability of humans to be creative and explain anything. Sam thinks that you can just keep adding module after module after module to an AI so that, for example, it's the best chess-playing machine, then the best Go player, then the best car driver, then the best manufacturer of Coca-Cola, and you keep doing this for every automated task you can think of and - well - eventually, once you've done this for "everything", you have an "intelligence" that by definition exceeds us at every task.
No. That's false. It can't possibly exceed us at every task because you cannot describe "every task" the way you can describe the rules of chess. You can't program in tasks not yet conceived. You can't program in the ability to use imagination to construct creative explanations. Or if you can: you don't need to program in all those other things! You only need the one thing (the program for creating explanations - i.e. a program for actual universal learning), in which case you have a person. A genuine AGI. And at that point you're not allowed to go forcing it to play chess or drive your car or anything else, because people have human rights. Right?!
Hours after this I began to read "The Undoing Project" by Michael Lewis, on a completely unrelated topic (eventually it turns out to be about how two Israeli psychologists did some Nobel Prize winning work in psychology, for which they were awarded the economics prize). I'll read anything by Michael Lewis: he's a great writer. You can read my page here about his book "The Big Short". This new book of Lewis' begins with a parable about how the top national basketball teams in the USA choose (buy!) their players. Traditionally it was simply "expert opinion" that was used to help teams choose. People with lots of experience (called 'scouts') would choose from the list of available players (the draft). This method was riddled with error (it was basically just guessing, although sometimes things like the number of points scored in college were taken into account). But then people started to make "mathematical models" in which lots of factors, like time on court and number of points scored, were weighted. (In a basketball match players don't always play the whole game.) Or even factors like: how high could the player jump, and how fast were they over 100m? Things like that. Some factors might have been given a 0.5 weighting, others 2x, or perhaps a square or some other exponent - who knows exactly how? But whatever the case, those who relied on models and not just expert opinion started to beat out the experts. Then everyone started to get models, and awful mistakes were found with the models (so some team would invest in a new player based on a model alone and the player would turn out to be a terrible choice). Long story short: the solution was to use both - the creative theory formation of experts, augmented with lots of data and a model. This way the expert and the model could refute each other. Now Lewis doesn't say this - but that, it seemed to me, was clearly what was now going on. Where the expert and the data/model agreed, the choice was better than where at least one of them disagreed.
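To make the kind of weighted model being described concrete, here is a toy sketch. The statistics, weights and players are all invented for illustration - the real draft models were of course far more elaborate:

```python
# A toy "draft model" of the weighted-factors kind described above.
# Every stat, weight and name here is invented purely for illustration.

def model_score(player):
    points_per_minute = player["points"] / player["minutes"]
    return (2.0 * points_per_minute          # scoring rate, weighted 2x
            + 0.5 * player["jump_cm"] / 100  # athleticism, weighted less
            - 0.1 * player["sprint_s"])      # a slower sprint lowers the score

prospects = [
    {"name": "Prospect A", "points": 620, "minutes": 900, "jump_cm": 85, "sprint_s": 11.9},
    {"name": "Prospect B", "points": 540, "minutes": 700, "jump_cm": 70, "sprint_s": 12.4},
]

# Rank by the model - then, as the text suggests, let this ranking and the
# scouts' judgement criticise each other, rather than trusting either alone.
for p in sorted(prospects, key=model_score, reverse=True):
    print(p["name"], round(model_score(p), 3))
```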
So today, in the space of hours, in two quite divergent situations, I encountered something really important that I've known for some time but which now seems to be entering the cultural "intellectual" vernacular: human creativity adds something qualitatively different. Because it is something qualitatively different. Pure data, or "mindless" computing (which is what all computers, except us, are), will always be worse than a person (with some relevant knowledge) armed with a good predictive theory based on a good explanation and a powerful computer to crunch out some numbers fast.
But the lesson: human creativity is qualitatively different to all other kinds of computation we know about in the universe. Simply adding ever more data, or processing speed, or memory to a computer cannot possibly get you AGI. For that you really do need a jump to universality, as David Deutsch explains in The Beginning of Infinity. And it will be a jump. It won't happen gradually (even as AI gradually gets more and more useful in more and more places): it will come abruptly. Just as those chess champions were suddenly able to beat the best computers, and those scouts beat the mathematical models... when they too had computers.
Have we reached peak “critical thinking” articles yet? In the weeks after the US election it seemed almost every (respectable) online media outlet had some story - written by an “educationalist” (or some other academic) - about critical thinking and its place in education. The spike has come in lock-step with news about so-called "fake-news" in this so-called "post-truth" era while grenades are being hurled from either side of the politicised-media divide.
On November 23rd "The Conversation" asked "What is critical thinking? And do universities really teach it?" A week earlier National Public Radio (NPR) titled a report "From Hate Speech to Fake News: the content crisis facing Mark Zuckerberg". They ran this the same day as an article titled "Students have dismaying inability to tell fake news from real, study finds" - a report on a Stanford University study. One wonders who the control group was (hint: there couldn't possibly be one). This is hardly a new phenomenon - it is no more than a specific instance of a general problem: in a world where children are taught to trust authority, how do those same children discern true from false without being told? In other words: which authority should they trust?
This, in crystalline form, is the problem. Students are indoctrinated with the idea that one should trust one's elders and betters, and raised on a diet of authorities who tell them what is true (parents, teachers, media). So when the traditional authorities turn out not to be trustworthy, to what new authorities do they turn?
This is the wrong question as Karl Popper explained. We should not be looking for authorities to tell us the truth. That is not what the quest for truth and knowledge amounts to. No - the real question is:
How can we detect and eliminate error?
That is Popper's critical method (what "critical thinking" amounts to) and it is almost universally unappreciated, ignored and, where it's not ignored: misunderstood.
Since those NPR articles bemoaning the not-new observation that some news sites are hard to trust and people might lack the tools to discern true from false, the articles on so-called "critical thinking" have kept coming. All of them have one thing in common: they fail to clarify what they mean by critical thinking. I'm yet to see one that mentions, at all, "Popper" or "Critical Rationalism". Some are quite explicit that defining terms is the whole problem, and yet also declare "it is, in the end, unimportant to define what we mean by critical thinking." This, and statements like it, make frustrating reading for anyone interested in actual "critical thinking" as an important general skill for any human being to possess - and as it has been understood for decades and more by people actually interested in critical thinking (and not just the broad, nebulous project of "education"). It's a simple idea: critical thinking skills allow one to sift the true from the false, the useful from the useless, the valuable from the worthless. It consists of methods of categorising what works and what fails to work. It was largely explained by Karl Popper in his corpus of work decades ago (building on the work of some of the ancient Greeks, early scientists and others) and is broadly referred to as "critical rationalism". That epistemology explains under what conditions one can evaluate propositions as either provisionally true or demonstrably false. So there is a long, wise tradition that contains genuine knowledge about how knowledge works and about the centrality of "criticism" to the whole project. People have known this just as experts have understood germ theory or the laws explaining gravity. Just because most people are ignorant of it does not make it any less true. We must remember: most people are still ignorant of Darwin's theory of evolution by natural selection, but their ignorance does not make it less true.
There is no third option between "true and false". This is the law of the excluded middle. And this is the beauty of critical rationalism: it rests upon a simple logical truth. If something is successfully criticised - it is refuted. It is, simply, the method of criticism. That's critical thinking.
And that is how I will come to define critical thinking. It must be about criticism.
So, for my part, I have been attempting to explain the simplicity of critical thinking for some years now. Yes, some so-called "educationalists" are coming late to this whole area - but this should be no surprise. There are few, if any, engaged in the study of "education" who have any deep appreciation of what learning actually is, and so they are not really interested in critical thinking. For a sketch of what learning is, following Popper, see my own website here or my video here.
I do this because I genuinely think that these are not difficult concepts. Sure they can lead to amazingly complex places - but just because an idea is deep does not mean it must be "hard to define". So because academics won't define it, let me:
Critical Thinking (noun): thinking aimed at producing criticisms.
That's really all there is to it. Criticism is how we sift the true from the false. It is called "critical" thinking because it's about being critical. And being critical means producing criticisms. And applying those criticisms to ideas and works and other notions.
If the criticisms are valid, the claim being made is false - or less worthy than (hopefully) some other competing claim.
If the claim being made survives every attempt at criticism, we take the claim as provisionally true, so long as it's also useful to our purposes.
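If it helps, that two-step procedure can even be caricatured in a few lines of code. This is a whimsical sketch of my own - the claims and criticisms below are invented placeholders, and real criticism is of course a creative act, not a lookup:

```python
# A whimsical sketch of the critical method: keep only those conjectures
# that survive every criticism we can produce. The claims and criticisms
# here stand in for real arguments and experiments.

conjectures = [
    "all swans are white",
    "water boils at 100 C at sea-level pressure",
]

known_refutations = {
    # A valid criticism refutes a claim outright.
    "all swans are white": "black swans were observed in Australia",
}

for claim in conjectures:
    criticism = known_refutations.get(claim)
    if criticism:
        print(f"Refuted: {claim!r} ({criticism})")
    else:
        print(f"Provisionally true (survives criticism so far): {claim!r}")
```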
Again, it really is as simple as that. There's no need for obfuscating or using big words or providing long lists of educational buzz words.
But yes, of course, one can go very, very deep with this philosophy, and there truly is an ocean of material to explore. But to "define" critical thinking is not difficult. And just how important it is, is likewise not hard to appreciate. It is absolutely crucial for learning: it's the cornerstone - and the more critical one is in one's approach to ideas, the faster, better and deeper one learns.
The articles recently have obfuscated the entire concept of critical thinking, for reasons that elude me (though I will attempt to provide some possible answers in a moment). Let's begin with one of those articles above - possibly my favourite:
November 2016, "The Conversation", by an academic whose title is "Associate Professor of Higher Education". Let me, in the words of Darth Vader on his visit to the Death Star in Return of the Jedi, "dispense with the pleasantries". One would expect someone who possesses such a lofty title to know much about learning and therefore about the critical method. The article is titled "What is critical thinking and do universities really teach it?" and again we have high hopes for uncovering what so many regard as an elusive concept. The author begins well enough, asking: "But what is critical thinking? If we do not have a clear idea of what it is, we can't teach it."
And I could not agree more! And then:
"It is hard to define things like critical thinking: the concept is far too abstract."
It's not hard. I did it. In 7 words. And the concept is not "far too abstract" at all. But let us keep in view the first, valid, claim: if we do not have a clear idea of what it is, we can't teach it. Right. Now, moving further on in the article - after the author has offered us nothing clear about what critical thinking might be - we find he writes:
"But perhaps a definition of it is, in the end, unimportant. The important thing is that it does need to be taught, and we need to ensure graduates emerge from university being good at it."
But the author has said it themselves: if they do not have a clear idea (some would say this is what a definition amounts to!) then it cannot be taught. I am not surprised the author is, apparently, satisfied with this contradiction between claims. They do not know what critical thinking is, they deny it can be defined and (this is almost like a mathematical proof) therefore they cannot apply it to their own writing. I feel a little sorry for a person who devotes their life, ostensibly, to the pursuit of expertise in "education" (and one would think "learning") and does not know what "critical thinking" is, or how central it is to that business. But worse: I feel more for the students. We need to clarify this. Simplify this. And help people who want to learn: learn it.
This lack of shared understanding - culturally, but especially institutionally, among universities and even secondary and primary educators - is a scandal. They should know what learning is. They should know what "critical thinking" is. That they reject notions like Popper's epistemology as being absolutely central to this whole project is a huge problem. (There is a worse problem: coercion in education, and I write about that here.)
Now that article, bad as it was, has been outdone. And together all of these articles have propelled me to write this piece in response. So let me get to the grand-daddy of poorly thought out articles on critical thinking. The title of that article is "Why schools should not teach general critical-thinking skills." Now yes, it's true I argue schools should teach students what they want to learn. But given a coercive system, one must then ask: what is it we value? The choice really is as stark as this: indoctrinate students, or provide them with the knowledge (tools/skills) to think for themselves. Now the article is right about a number of things: not least that the silly so-called "brain training" exercises many schools now use (because: neuroscience!) are supposed to provide "exercise" for the "brain" to help it "think critically". That is very wrong-headed, I agree. But the lesson is not: stop trying to help young people think critically. The lesson is: figure out what critical thinking actually is, and help students who want to learn it, learn it!
So this article takes things to an even higher level of misconception, confusion and misdirection. It, just like the previous article, argues both that critical thinking does not exist and that, despite this, we should not learn what it is. So both articles deny the thing is real or worth defining and then, for good measure, just in case they were wrong the first time, insist that if anyone suggests it's a specific thing, we should not try to learn what it is.
If one were in a conspiratorial frame of mind one might well think that education, increasingly dogmatic and partisan in its way of providing a narrow perspective on all issues (especially political ones), would have an interest in denying the utility of "critical thinking skills" - and specifically in denying that such a thing even exists (so don't even bother looking).
So let me offer my first tentative explanation for why universities and other educational institutions speak so much about how they promise to provide their charges with "critical thinking skills" while at the same time denying that such broad skills could be useful, or even defined: could it be because denying the usefulness of critical thinking serves - and protects - those who promulgate dogma?
Critical thinking skills are THE most important tool in the anti-dogma tool box. Logically prior to scientific reasoning, mathematical skill or even philosophical knowledge - critical thinking skills are THE way in which we sift all true from false ideas. They are the means by which we learn when coupled with our human creativity.
Another, related possibility: this new war on critical thinking from some sectors (like some university academics and others) is becoming more brazen. In some cases it seems to be an attempt to demarcate territory: we are the guardians of teaching, learning and education (so the implicit thesis goes). But why should it concern those who are ostensibly engaged in teaching and education to deny the deep usefulness of a critical attitude? Precisely because a critical attitude - the aim of "critical thinking" - is how learning proceeds. Period. It is possible for people (students!) to learn how to learn all on their own. And, of course, if people genuinely take this on board it is a terrible threat to institutionalised learning in an age of such free access to high-quality information. Yes, of course, learned people are useful to those doing some learning - no one denies this except, it would seem, some academics who might realise that their own highly specialised "knowledge" in some areas is, well, rather useless. If one wants to learn something really, genuinely useful - like chemistry or history, say - a chemistry or history professor is going to be one of the best resources possible. Some of the time, anyway. But someone who is, say, a "Professor of Literacy Education" (a genuine title) may find that students interested in becoming expert in that field need do little more than spend an afternoon or two on Google and Wikipedia and the odd journal before they discover what, if anything, is worth learning about the field.
Why else might there be this (unwarranted) criticism of (actual) critical thinking? Because critical thinking - general, broad-based critical thinking skills - is the very means by which entrenched dogmas come under attack and are subverted and shown to be false or lacking. Mere science is not enough. It's important - essential, perhaps - but it is far from sufficient (there is no shortage of scientists who are also, on the weekends, young earth creationists, for example). Reading some philosophy is not enough. Nor is doing maths, collecting data and understanding trends. These things are not enough to make a genuinely critical thinker. The deeper skill of figuring out "what is wrong with this claim?" across any domain whatsoever is a threat to anyone who thinks the answer must be "there is nothing possibly wrong with this." That has been the response of priests and rabbis in the past, and of mullahs and evangelists in the present. It is also, scandalously, an implicit claim of too many in academia.
Let us return once more to "Why schools should not teach general critical-thinking skills." It was published in "aeon" magazine and authored by Carl Hendrick (head of learning and research at Wellington College in Berkshire; he is also, we are told, an English teacher completing a PhD in English education at King's College London).
Hendrick asks: "Should critical thinking be taught as a general skill at school?" There is no evidence in the article that Mr. Hendrick has ever come to grips with what "critical thinking" is - my constant refrain about articles of this type. He tends to deny it really exists. It would be interesting to know if he has read anything by Karl Popper and, if so, why he is so dismissive of an entire philosophy built around the critical attitude.
Hendrick writes, "Teaching students generic 'thinking skills' separate from the rest of their curriculum is meaningless and ineffective." But no one is proposing that critical thinking not be applied to anything! I propose, below, that it should be applied to education itself. And yes: by the students! But Hendrick quotes another "educationalist", Daniel Willingham, who writes "There is not a set of critical thinking skills that can be acquired and deployed regardless of context." And so there we have it! A denial of the basic epistemology that underpins how learning works.
The deeper the skill, the more broadly its techniques will apply across diverse situations. If one knows how epistemology (the production of knowledge; learning) actually works, as Karl Popper explains, then one has a powerful toolbox of techniques (knowledge) for criticising claims no matter the domain. Is the person claiming that their claim has never been, and could not possibly be, subject to criticism? That's a dubious claim! Has the person considered how the claim might possibly be false? No? That's a criticism. And that is the general technique: how can we determine if this claim is possibly false? The "educationalist" may claim that questions like this simply aren't possible, because they have never come to grips with what "critical thinking" might be, or because they have heard people define it out of existence - but that is no reason for the rest of us to throw the baby out as well. Just because "educationalists" are now so skeptical of "critical thinking" that they are no longer critical of their own thinking or writing does not mean we need to follow their lead into relativism and nihilism. Let us keep in view: there is truth here to be discovered.
Hendrick goes on, "Instead of teaching generic critical-thinking skills, we ought to focus on subject-specific critical-thinking skills that seek to broaden a student's individual subject knowledge and unlock the unique, intricate mysteries of each subject." Again, no one has ever suggested that one comes at the expense of the other. We can do two things at once. But one of the examples Hendrick uses simply underscores the misconception.

He writes, "A physics student investigating why two planes behave differently in flight might know how to 'think critically' through the scientific method but, without solid knowledge of contingent factors such as outside air temperature and a bank of previous case studies to draw upon, the student will struggle to know which hypothesis to focus on and which variables to discount." This difficult-to-parse passage seems to be saying that physics students need knowledge about physics to make progress. It also suggests (wrongly!) that science proceeds by looking at the past - that a student ostensibly doing an experiment needs first to draw upon "a bank of previous case studies". Of course no one denies some knowledge of physics is useful in making progress in physics! But the whole point of science - and this can be learned in school - is that we want, when doing science, to make progress beyond that "bank of previous case studies". Yes - even students! Simply building a model and doing repeated experiments is itself a critical "subject-specific" technique. The plane that flies worse over repeated trials is disqualified from being the best. The investigation as to why must provide a general explanation of WHY one plane is better than another. That is both a subject-specific and a general requirement for claims in science (and beyond): what is the explanation? If there is no explanation, one has not really learned anything. The absence of a good explanation - that is a criticism. If an experiment is done comparing two planes, and plane 1 is definitely better at flying than plane 2, but one has no clue why, one is not really doing science. One is going through a process that looks like science in form only - but science is not just about predicting the outcome of experiments. It's about explaining the evidence. Who would want to get in a plane whose engineer had no better reason for believing it stays aloft than attributing magical forces to it, rather than Newton's laws in the form of the Bernoulli effect? Appreciating that is not part of the corpus of scientific knowledge but rather belongs to the general set of skills called the "critical method" in epistemology. And anyone can learn that! Subject-specific critical thinking in science allows us to use an *experiment* to criticise ideas, and how to properly design experiments is a whole subject in itself. But the general idea - that we need to ask the question "What is wrong with this, and why?" - is not subject specific. Yet, simple though it is, it is so very rarely ever applied. To anything. Ever. Not by teachers and students in schools. But it should be! It is true intellectual self-defence against nonsense and dogma.
The articles on critical thinking we have seen - and any that a quick Google search will turn up - are absolutely riddled with misconception. This has now reached the point where one might conceivably be concerned that the very term "critical thinking" is so mired in confusion as to be unsalvageable. I want to push back against this before the battle is lost completely. The obfuscation on this point brings to mind a wonderful piece by Professor Richard Dawkins, written about the Sokal Hoax (itself an application of "critical thinking" techniques to postmodern literary nonsense): "Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content." The original piece can be found here. So it is in education as much as in those areas of postmodern philosophy Dawkins was criticising: writers do not cultivate a clear literary style. They say things are difficult to define, and perhaps not possible to define at all. And why? It disguises a lack of content. It disguises a lack of clarity on their part about the thing they are supposed to be experts in: learning.
Critical thinking teaches us to ask "where are the mistakes?" and "can I explain why this is false?", and to claim "I have a better idea!" and "this is the best explanation, and here is why". It is logically prior to knowledge in any other subject area: science, maths, history, philosophy. Education theory. So much of what passes for "knowledge" in many, many domains rests, according to some, upon one or more dogmas. In reality there can be no dogmas (i.e. propositions that must be held immune from criticism). And yet, in many places there are implicit dogmas (religion is simply the rare case where so many knowledge claims consist of explicit dogmas). Implicit dogmas especially plague many social sciences - and most especially education theory. Explicitly, educators speak about how the "bucket theory of mind" (for example) is not true. But the implicit dogma is: it's true. And this is why we have coercive education with fixed curricula. It assumes that the knowledge written down in school (and university) syllabi can be transmitted faithfully by the technique of classroom settings, lectures and seminars. For this reason, school has changed little over the centuries. Sure, chalkboards are now electronic smart-boards and slates are now iPads - but students are still in classes at certain times of the day doing, almost always, what the teacher requests (sure, more often now there are games and hands-on activities and building - but this is an incremental improvement, not a revolution). And they do what they do because the system requires it of them. It is often legislated what is to be understood. This is not a system for fostering critical and creative thinking. It's a system which seeks to indoctrinate. It is impolitic to say this - but most students understand it. Most critical thinkers recognise it.
Educators need to recognise this implicit dogma. It’s really there. The education system is a machine for indoctrination. So of course people in that system baulk at the idea of genuinely critical thinking in their students (though they shouldn't! They should bravely seek to change it from the inside - however slowly this happens!). But they baulk because that’s what they’ve been taught their whole lives: do not think for yourself. Learn what you are told to learn. Memorise. Solve this puzzle. Jump through this hoop. Earn high marks. Now you are qualified to pass on that understanding to those younger than you - the importance of learning what you are told to learn. Memorising. Solving puzzles. Jumping through hoops and earning high marks to earn a qualification and be successful so you can pass on your wisdom to younger people about how to be successful by learning what you are told to learn. Memorising and solving puzzles by jumping through hoops…(and so it goes).
Critical thinking skills reject almost all of that vicious cycle of uncritically learning what people before you were taught. Instead, critical thinking teaches students to question at every single turn. Does this mean people won't learn if they are questioning everything constantly? No - the opposite: they learn. Better. To criticise effectively, one must in the long run learn deeply about whatever it is that's being criticised. But one is genuinely allowed to say: this is boring and useless for me. And be quite right about that.
Within that system, mandated and policed by government agencies, a teacher may ask: what scope do I have to genuinely help students think critically about all this? One can be honest about it all: about how the system is set up. It is set up to "meet standards" and "achieve outcomes". One can tell students: if you think critically about the standards and outcomes - because you've come to understand them so well that you find flaws in them - in all but the rarest of cases the system will not reward you for uncovering the problems. It will punish you. That is simply how an "education system" mandated by the state must work. It must be an indoctrination engine. So you can learn to "game the system" if you want to "succeed" by the lights of the culture in which you are raised (and there is nothing wrong with this, if this is what you want to do).
But - and this is the central key to teaching critical thinking in a coercive environment - a student can understand all of that about the system while still questioning it and everything they learn within that system. But they need help from the rare souls who are willing to show how flawed and provisional all the knowledge learned in school (and university) truly is.
We need critical thinkers: people not taken in uncritically by leaders, religious or political, nor enamoured of the importance of their own area of expertise. We need people who share values not because they are indoctrinated with certain ideals but because critical thinkers naturally cohere on the truth! Critical thinkers naturally arrive at the same conclusions because there is an underlying objective reality to be discovered. It must be the case that if people openly and honestly pursue truth, they will find it. Because the truth exists. Both science and religion agree on this fundamental point and are correct about it: the truth exists. Take that seriously. Now all we need do is be critical thinkers and we'll each find that same truth together.
Here is the problem "for school children" that, it is claimed, has "stumped the world". I wanted to write here not just a solution, but also a clear (I hope) line-by-line explanation of the reasoning, which repeats itself multiple times, as this is how people best learn this kind of thing. It's easy to miss just one line of reasoning. I also want to make some comments about how being able to do this question - or this type of question - is a measure of nothing more than how interested you are in doing this question, or questions like it. Just a quick note about what this question is all about: it's called logical deduction. That's worth saying because some articles are claiming this is about something called "induction". Induction, as a method of reasoning, doesn't exist. The best that can be said for "induction" is that it is a fancy way of referring to "guessing" - and even that is being too generous. Logic is actually a subject all its own. It is taught in some places, in some schools, as a part of mathematics (which it is), but it is also part of philosophy. At universities with philosophy departments you can take subjects in logic. Sometimes logic is also taught in schools of mathematics at universities. It is a learnable thing, taught at schools and universities. Like French. Or Chinese. Or the piano. If you can't understand this question, or the solution, it's kind of like not understanding Chinese. Or how to play the piano. It means: you've not learned how. And not having ever learned how is no reflection on your intelligence. And yet, for some strange reason, some people (usually teachers and academic administrators) see questions like this one as a measure of "IQ" in a way that knowing how to speak Chinese, or play the piano, is not.
So to the question. The first thing is that, like many of these sorts of questions, it was made up by people who like to do these kinds of questions, to test other people who like to do these kinds of questions. Like I said: if you don't get it, don't worry. It just means you're not practised at doing these sorts of things. That is entirely normal, and does not mean you are less smart than anyone else. Think of something you are good at and do often. Okay - now imagine there are people who do these kinds of logic questions as much as that, and enjoy it just as much.
In other words - to do this question is just another kind of skill - like understanding a language, drawing portraits or playing a musical instrument.
There are some "useful hints" you need before you start to learn a language or play the piano - we can call these kinds of hints "implicit" knowledge: things that "go without saying" (unlike "explicit" knowledge, which is stated up front, clearly). People who are skilled at certain things know the implicit knowledge (the subconscious hints) and are sometimes so familiar with it that they can't see how others could not be. For example, a useful "implicit" rule some language learners know is this: if you find a certain sound in a foreign language very difficult to make, sometimes some combination of syllables or words in English sounds remarkably like that really foreign sound. With a good teacher, if you struggle with a certain sound in one language, they might say "well, we have exactly the same sound in English... it sounds like the first one and a half syllables of the word Hawaii" (or something like that). Once you know that kind of thing, it makes it easier to learn. So what is the implicit knowledge in this logic problem? Before I get to that - why don't they state it, so everyone is on an equal playing field? Obviously it is so they (the teachers, the exam writers, whoever) can weed out the 'smart people who know how to do this because they practise' from the 'normal people not interested in doing this kind of thing'. It's like a pre-test. If people haven't practised this sort of thing, they are immediately eliminated, and we can move on to testing those who have. A person who has not practised might still get the answer - but they will usually take longer - so there's an inherent penalty in the test. So here's what you need to know:
Everyone involved in the question - Albert, Bernard and Cheryl (A, B and C) - is perfectly rational and thinks absolutely logically all the time. If that seems strange, it is. And it's not stated in the problem.
The problem involves A knowing what B is thinking, and B knowing what A is thinking. They can do this because they all think in exactly the same way (namely, absolutely logically, all the time). And this is not stated either.
They are speaking out loud and can hear one another. This is crucial - but not stated in the problem. Again, you'd know this if you'd done things like this before, in situations (like classrooms) where you can ask questions of someone who already knows. But if you read this on a website - or worse, in an exam - there's no one to ask questions of! People who do well at these things have done lots of them.
Note the word "respectively" in the question: A knows the month, B knows the day. This is a key point, easily overlooked. Again - if you don't do these kinds of questions much, you will not realize that there are certain absolutely crucial logic-type words.
So now, to what is written in the problem: both A and B have the 10 candidate dates in front of them. Here they are (each one turns up in the reasoning below):
May 15, 16, 19
June 17, 18
July 14, 16
August 14, 15, 17
A gets told the month, but not the day. And B gets the day - not the month.
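For anyone who likes to see things as a program, here is that same setup as data - a little Python sketch, nothing official, just the ten dates above and a note of who is told what:

```python
# The ten candidate dates Cheryl gives to Albert and Bernard.
# Albert (A) is told only the month; Bernard (B) is told only the day.
DATES = [
    ("May", 15), ("May", 16), ("May", 19),
    ("June", 17), ("June", 18),
    ("July", 14), ("July", 16),
    ("August", 14), ("August", 15), ("August", 17),
]
```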
Bored yet? If you are, then notice that feeling for a moment: it’s not that you are less intelligent. You’re just bored by this. Imagine someone not bored - who is as interested in this as you are in your favorite thing. That's all so-called "intelligence" amounts to: other people naming certain kinds of interests as being those constituting intelligence. So that’s that. Let’s keep going:
Let's say C said to B "the day is the 18th". Then B would immediately know the whole date - because there is only one "18th" on the list, so her birthday would have to be June 18. The same goes for the 19th: there is only one, May 19. So if B's number were 18 or 19, the game would be over at once. But A announces that he knows B does not know - which means B's number cannot be 18 or 19. So we are now down to 8 possibilities.
But notice A (who has the month) says more than that he doesn't know: he says he knows B doesn't know too. Think about what that rules out. If A's month were June, then for all A knows, B's number could be 18 - and a B holding 18 would know the answer instantly (there's only one 18th on the list!). So an A holding June could never be sure that "B does not know". Since A is sure, his month cannot be June - and so June is gone entirely, both dates at once. Again, this business of who knows what, and when, and what each announcement lets the other person deduce, comes up in a very formulaic way with these puzzles. It's in the same class of problems as the guard who always lies (or always tells the truth) at some door, or the prisoners forced to wear silly hats (http://en.wikipedia.org/wiki/Prisoners_and_hats_puzzle).
Anyways - let's keep going. We've eliminated June and we've eliminated May 19th. Notice how this goes? We are eliminating dates one by one. It’s just a puzzle.
Remember, A has the month and B has the day, and both of them have eliminated the same dates we have: the 18th and the 19th because those numbers are unique, and the rest of June by the deduction above. Now the very same move knocks out May. A says that B "does not know" - but if A's month were May, B's number could be 19, and a B holding 19 would know straight away (there is only one 19th on the list). So an A holding May could not be sure that B doesn't know. A is sure - so A does not have May at all. You might have to read that bit of reasoning again - it can be tricky.
So all of June is gone. And now all of May is gone because A says B doesn't know. So we whittle it down to this:
July 14, 16
August 14, 15, 17
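If you like seeing the same elimination a second way, here is that whole first step as a few lines of Python - continuing the sketch from before (the date list is repeated so the snippet runs on its own):

```python
from collections import Counter

DATES = [
    ("May", 15), ("May", 16), ("May", 19),
    ("June", 17), ("June", 18),
    ("July", 14), ("July", 16),
    ("August", 14), ("August", 15), ("August", 17),
]

# A day B would recognise instantly is one that appears exactly once: 18, 19.
day_counts = Counter(day for _, day in DATES)
unique_days = {day for day, n in day_counts.items() if n == 1}

# A can only be SURE that B doesn't know if A's month contains no such day.
# May contains the 19th and June the 18th, so both months are ruled out.
ruled_out = {month for month, day in DATES if day in unique_days}
remaining = [(m, d) for m, d in DATES if m not in ruled_out]
print(remaining)  # July 14 and 16; August 14, 15 and 17
```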
Keep in mind now that A and B think perfectly rationally - so they have thought much the way I have presented this and eliminated things, just like I have.
So they can treat this shortened list as a brand new, smaller puzzle - and apply the same style of reasoning as before.
Now it is B's turn to speak, and he says "I know now". How can he? Look at the shortened list: the day 14 appears twice (July 14 and August 14), but the days 15, 16 and 17 each appear exactly once.
So if B's number were 14, he would still be torn between July and August and could not possibly say he knows. But he does say he knows! So his day is not 14, and July 14 and August 14 are eliminated in one go. Progress. We (by which I mean A and B and you and I) are down to:
July 16, August 15 or August 17
How did B work that out from his own side of the table? Remember, B has only one number in front of him - and A's announcement told him everything he needed.
When A said "B does not know", B was able to reason: "A has just told me, in effect, that the month is neither May nor June. So the date must be one of:"
July 14, 16
August 14, 15, 17
"And if my number were 14, I really could still be confused - 14 sits in both the July row and the August row. But my number is not 14, so there is now exactly one date on this list carrying it." And so he is able to say "I know now". And so he does!
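Here is B's move in the same Python sketch, picking up from the five dates we were down to:

```python
from collections import Counter

# The five dates left once May and June are gone.
remaining = [("July", 14), ("July", 16),
             ("August", 14), ("August", 15), ("August", 17)]

# B says "I know now" - so his day must pick out exactly one of these.
# The day 14 appears twice, so it cannot be his number.
day_counts = Counter(day for _, day in remaining)
after_b = [(m, d) for m, d in remaining if day_counts[d] == 1]
print(after_b)  # July 16, August 15, August 17
```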
And A, who is an equally boring person (who thinks in an identical way (probably why they are friends)) says "Then I also know". Why can he? A has the month, and of the three dates still standing - July 16, August 15 and August 17 - August still has two candidates, while July has exactly one. If A's month were August, he would still be stuck between the 15th and the 17th. He isn't stuck. So the month must be July, and Cheryl's birthday is July 16.
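And the final move, rounding out the sketch: A's "Then I also know" means his month must now pick out exactly one date.

```python
from collections import Counter

# The three dates left after B's announcement.
after_b = [("July", 16), ("August", 15), ("August", 17)]

# A says "Then I also know" - so his month must appear exactly once here.
# August still has two candidates; July has just one.
month_counts = Counter(month for month, _ in after_b)
answer = [(m, d) for m, d in after_b if month_counts[m] == 1]
print(answer)  # [('July', 16)] - Cheryl's birthday
```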
Still don’t get it?
Don't worry - you would get it if you were really, genuinely interested. Which means: you would sit down for (literally) many minutes - probably hours - doing this problem and ones like it until it just became second nature. People do this! And weirdly, lots of the people who do it say "I find logic puzzles so easy" - and they do well at them, because they have fun doing them, and so they do well in tests made of logic puzzles. And they think, and other people think, that it is somehow an inherent (i.e. not learned) capacity - that it is something about their brains. But it's not. It's just a skill of their minds. Like speaking Chinese. Or playing the piano. Or dancing. Or any one of a million other things human beings can do. It's not a measure of intelligence. It's just a certain kind of learned skill. But it is a learned skill that is quickly, and highly, rewarded in schools, because schools value this kind of thing and some students value getting gold stars or good marks or whatever. And so they do well and the cycle continues.
Postscript: I have looked at other solutions. They are different from mine - typically much shorter - and, I think, for someone who doesn't already know how to do these things, almost entirely unhelpful. Mine may be a little more long-winded, but that is because I try to explain my reasoning line by line and to show that this is just something one learns: there are things you need to know first (tricks, implicit knowledge, whatever). People who solve these things are typically good at solving them, but not necessarily at explaining why the answer is what it is. And why should they be? That is a whole other skill entirely - and equally (un)important for almost all of us, almost all of the time. So if you didn't understand my solution - no worries. If you don't understand the other solutions out there (like the so-called "official" answer), even less need to worry. It's like if you cannot read music and don't really want to learn how: it probably won't help if someone really articulate tries to explain it to you. You probably still won't understand, because you won't really pay attention. But that doesn't mean you lack some capability someone else has, or have a lower IQ - it means you are not interested. Truly, some people are interested in doing, for fun, lots of logic-type puzzles of the sort they ask in IQ tests. That doesn't make those people smarter than someone who teaches themselves to speak Mandarin or Russian - it just gives them different interests. Would you think someone who learned to speak Mandarin completely fluently was really smart? Well, every child in mainland China (more or less) learns it to a very high level of proficiency.
If people were rewarded socially for doing IQ tests the way they are for learning languages, we would all do really, really well on IQ tests. We'd all be "fluent" in IQ-test questions and their answers. Instead, only some of us are ever interested in doing lots of silly little mathematical and logical puzzles - because we find them fun and rewarding.
PPS: This is why I think IQ should stand for “Interest Quotient” - it’s a measurement of how interested people are in trying to do well at answering questions on IQ tests. In other words: a completely useless number.
The most valuable thing you can offer to an idea