A person asked @peez (David) whether, if Hitler stepped into the Star Trek transporter and an identical copy of him were created, the copy would be “morally responsible”. @peez posted this question to @paulbloomatyale. The discussion was interesting and I think it highlights something about how people decide what personhood is all about. You can find the discussion around this Tweet here: https://twitter.com/ToKTeacher/status/919001168629481472
Paul said the copy of Hitler “…didn’t do anything. This person is just a minutes-old baby who looks just like Hitler.” He further said “Sorry to disagree with some of your other respondents -- but you don't punish a guy for having the delusion that he's Hitler”
Now I think this is wrong. I said “If he's a *fungible* copy, he is identical in all respects. Including memories & motivations. He literally is Hitler and thus responsible.”
David said “…If you create an identical copy of me, it is not me. That is what I’m saying.”
Paul said in response to this “Right. True for other things too. If you copy my favorite chair, maybe we can't tell them apart. But it's not my chair.”
I asked “Say the transporter room is sealed/opaque to the world. A perfect copy is created along with the original. When they exit the room…Neither "copy" or original nor any person or *any physical process* can distinguish them. Would you try neither Hitler for war crimes, both?”
Paul said “Neither, since you can't tell which one = original. (Similar to arresting identical twins, knowing one did the crime but not which one)”
Here is where I wanted to pause. Notice that I am trying to maintain the structure of the “thought experiment”. The Star Trek transporter creates copies of people that are absolutely identical in all respects - down to the very atoms. In other words *fungible* (or absolutely perfect) copies. This matters. I'm using the word in a sense close to that which appears in David Deutsch's book "The Beginning of Infinity" (page 265 in particular).
So perfectly identical (fungible) copies are substantively different to chairs that *look* identical (but whose atoms might be quite different) and especially different to “identical twins” (who are never identical, even in their DNA, it turns out). I wouldn’t punish an innocent identical twin for the crimes of his brother. They really are different people with different histories and - most importantly - different minds.
But now: a fungible copy of Hitler *is* Hitler. But why is this?
Consider "the original" (not the copy) - it was the atoms in his vocal cords that gave the orders. It was the atoms of his hand (and not his copy's) that gave all those salutes. It was his body that was there in Berlin - and not the body, or the atoms in the body, of his copy - that did all those bad things. And it’s for this reason, I guess, that Paul and others argue that the copy is not culpable.
My position here is: the atoms are not what we’re actually concerned with. The hardware (the body) is irrelevant.
What matters, instead, and crucially, is the mind.
The mind is the software that runs on the hardware of the brain. And that software, if we really did make a perfect copy of Hitler, is identical in all respects. The mind - the software - is just a pattern. It’s the arrangement of the atoms in the brain (constituting the neural connections or whatever it happens to be - it doesn’t matter). It is something that could, in principle, at some future time (when we’ve Star Trek transporters, say) be instantiated in a silicon computer. Or written down on paper. It is a code of some sort that we don't yet understand. It is the software - the mind of Hitler - that is responsible for Hitler’s actions, and not his body. That mind contains the memories and all the motivations to keep on killing that the original Hitler has because - and this is key - it *is* the original Hitler. Being the original Hitler isn’t about which atoms were there at the time of the invasion of Poland. It's about which mind was there. And the mind that was there was Hitler’s mind. And just because there are now two identical, indistinguishable versions in two bodies (one body with a history and one without) does not mean only one is culpable, because each *mind* has the same history.
And that is why the copy is equally culpable.
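If you like, here is the same point as a toy sketch in Python. The data structure is a cartoon, not a model of a mind, and the names are mine; the point is only that two bit-for-bit identical copies of the same software state occupy different hardware yet are indistinguishable by any examination of their content:

```python
import hashlib
import pickle

# A cartoon "mind": pure software state (memories and motivations).
original = {
    "memories": ["gave the orders", "was in Berlin"],
    "motivations": ["keep going"],
}

# A perfect, bit-for-bit duplicate of that state.
copy = pickle.loads(pickle.dumps(original))

# The two instances occupy different memory addresses - different "bodies".
print(id(original) != id(copy))  # True: distinct hardware

# But no examination of the content itself can tell them apart.
def fingerprint(mind):
    return hashlib.sha256(pickle.dumps(mind)).hexdigest()

print(fingerprint(original) == fingerprint(copy))  # True: identical software
```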
Postscript: This is not merely a thought experiment. According to quantum theory, there really do exist “fungible copies” of you because of what we know about how particles (and therefore everything made of matter) behave. The laws of quantum theory compel us towards a vision of reality bigger than what we are familiar with. This forces upon us the idea that not only are electrons and other subatomic particles in two places simultaneously (because they occupy different universes) but so is everything - including you. What does it feel like for multitudes of "you" to exist right now? Exactly as you feel right now. And each instant, universes "differentiate". For the facts about this read “The Beginning of Infinity” by David Deutsch - in particular the chapter on "The Multiverse".
This is a blog post. It's clearly not a Tweet*. What's the difference? Is it just that this is on my private blog and a tweet is on the platform we call Twitter? Or, since Twitter began, has there arisen a technique - indeed a culture - of how to construct an effective tweet? I think a tweet usually has a style - a style that quickly evolved from the constraint that is 140 characters. Forced into that environment, language took on the form that it did; tweets evolved to be succinct in a way that other mediums did not promote. Here, on my blog, where resources are plentiful and my thoughts can eat up all the characters that are available to them, a certain kind of verbose and descriptive style abounds. The metaphor here: ideas are expressible in certain environments, and so species of communication evolve to suit them. We should value those species. If someone wants to write long form: get a blog. If you want to sample many ideas quickly: look at Twitter. If you want to combine the two: link to your blog from a tweet.
David Deutsch made the point elegantly in a couple of tweets when Twitter decided that it would experiment with 280-character Tweets for some people. David wrote:
Would you redefine
A haiku to have double
The syllable count?
The point here being: any small change (to the number of syllables) makes things worse. And also: there's simply a tradition. And why? Well, traditions last because they work. They are ideas that survive. Twitter has survived as long as it has for a reason - perhaps not as enduring, thus far, as the haiku.
In another Tweet:
The contrarian natives of Limerick
Thought the rules of the eponymous verse form arbitrary.
They tried to break loose,
But what is the use
Of free form that just makes the thing humourless?
This makes the point even more powerfully. Here it is obvious (if you are familiar with the form of a limerick) that something has gone terribly wrong. Why change what already works? Rhyming is part of what makes a limerick a limerick. If you don't follow the meter and the rhyme scheme that a limerick demands, you get something worse. A limerick simply *is* of the form:
There was a young man called @jack
And characters he felt he did lack
So up went the limit
Much better now innit?
More room for everyone's craic.
Anything that deviates from that style isn't a limerick. Anything more than 140 characters isn't a Tweet. It's something else. 140 characters forces upon people a style. Especially for thoughts that cannot normally be easily expressed in 140 characters or less.
Choosing some Tweets rather at random - from Sam Harris (who Tweets far less than he once did) and David Deutsch respectively - we get:
We can oppose all extremism and dogmatism, while recognizing that not all extremes and dogmas are the same. The fine print still matters.
Knowledge is created by conjecture and criticism—in Darwin's theory, mutation and natural selection. Lamarckism tries to do without either.
Tweets are very dense. The first one is 137 characters and the second is 139. Sam has actually used articles ("the"), which is rarer in tweets because they are typically unnecessary. Sam has attempted to explain a complex idea succinctly. He's forced into being clear because he is limited. There are differences between dogmas. The details matter. The second tweet, by David, is even denser. It makes a bold claim about how two kinds of knowledge are created and contrasts this with an alternative. An important point lurks here: in both cases the tweet serves as a starting point for engaging with the broader work of both authors. Just pick up their books to find out more.
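(If you want to check those counts, a couple of lines of Python will do it. The tweet text below is copied verbatim from above; note that len() counts the em dash in David's tweet as a single character.)

```python
tweets = [
    "We can oppose all extremism and dogmatism, while recognizing that "
    "not all extremes and dogmas are the same. The fine print still matters.",
    "Knowledge is created by conjecture and criticism—in Darwin's theory, "
    "mutation and natural selection. Lamarckism tries to do without either.",
]
for t in tweets:
    print(len(t))  # 137, then 139: both just under the old 140 limit
```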
If people want to tweet longer, there is actually a service called http://www.twitlonger.com or even http://talltweets.com (just google "tweet longer"). My preference is to simply link to my own blog. Or sometimes take a screenshot of a longer bit of text and post it as a picture. But I do this rarely. It's cheating!
*I don't know when to capitalise Tweet. I've mixed things up here with tweet and Tweet. Probably not ideal...
It makes sense to be a republican in Australia. It seems eminently logical: after all, why shouldn’t the Head of State of Australia be Australian? Moreover, shouldn’t we have a system that doesn’t simply allow people to be born into power? Born into power? What an ancient - and ridiculous - concept. The "divine right of kings"? Isn't that an outdated religious notion? Democracy - the idea that the people vote for the best leader based on merit - is surely preferable. And modern. And good.
Those conservatives - and worse - those conservative monarchists must simply be set in their ways. There are no good arguments for the monarchy. And in 1999, when I voted "Yes" for a Republic in the Referendum on this issue, I heard all the arguments. And as I heard them they were weak. "Don’t fix what is not broken" seemed to be the refrain. Others argued in return: the horse and buggy were never broken. They were simply superseded: taken over by a better, more modern way of doing things. So too it must be with governments and systems of democracy. Sometimes things might still work, but nevertheless we can improve things.
Since that time I’ve read more. And interacted with more people. Not about the law, or politics so much, but philosophy and how knowledge is constructed. Here is something remarkable: so much of what we know is inexplicit. We find it very difficult to put into words much of what we know. Here’s a way of thinking about explicit versus inexplicit knowledge:
A good chef has lots and lots of explicit knowledge. If the recipe is written reasonably well, then almost anyone can replicate their dish to a very high degree of accuracy with the right tools and ingredients. Indeed this was the basis for a competition on the television reality show “Masterchef”. What happens is that a professional chef shows some amateur cooks their special, complicated creation. Then the cooks get the recipe from the chef. What always surprised me was that, no matter how complicated the dish, the cooks managed to get there - and replicate it - quite well. The words alone - with some visual cues - enabled unorganised ingredients to come together, often into highly complex and artistic presentations of food. That knowledge about how to cook is highly, highly explicit.
But now consider the great tennis player Roger Federer. He must know lots about how to play tennis really well and serve the ball - and return it - better than almost anyone who has ever lived. But if he were just to use some words to try to explain to you how to serve a ball, you’d never manage it. Even if you spent a whole day watching him and talking to him - though you’d perhaps get a little better - chances are your serve would look nothing like Roger Federer’s serve. Yes: there’s genetics involved, some sort of “innate” capability that his body has that yours may not. But still: you really would show very little improvement over the course of a day.
So there is inexplicit knowledge that Roger Federer has about tennis. You have inexplicit knowledge too: perhaps you know how to drive a car. It feels a certain way. Or a bike: how to balance. You just know how to balance. You can explain some, but not all. Words capture it somewhat. But not all of it. People don't learn to ride bikes from reading books - but they can learn to bake cakes. The point is: just because you cannot articulate precisely, using words, how to ride a bike, doesn’t mean your bike riding is somehow especially dubious. There are some things we can explain with words - explicit knowledge, like a recipe or scientific theories. But there exist other things, like how to serve a tennis ball really well, or ride a bike, or play a piece of music well - these are inexplicit. That kind of knowledge is not all easily articulated. Some is, much is not. I've written a fair bit more about inexplicit vs explicit knowledge here in another context.
Another thing about “knowledge” concerns where it might be found as well as what kind it is. For example, we all know it can appear in our minds, because we know things. Knowledge also appears in books. And in computers. But while in a mind it's represented by electricity flowing along neurones in our brains, in books it's ink on paper. In computers: silicon chips. Knowledge is a rather strange kind of "substance" - it's abstract: the physical stuff that represents it can be completely different from one situation to the next, but the knowledge itself can be the same. Knowledge can even appear in systems. For example: the knowledge of the physics of how light and glass interact is “instantiated” (so we say) in a telescope. “Instantiate” means something like “appears there in a certain form” or “is represented within”. So although there can be a book written all about the physics of light - and in that form it is, physically speaking, ink on paper which is “instantiating” the knowledge - that same knowledge can be instantiated in an actual physical thing like a telescope.
So complex things like telescopes can instantiate the very explicit knowledge of how to gather and then focus light.
But what does any of this have to do with the republican vs constitutional monarchy debate? This is the thing about societies: as a rule, historically, they are terribly unstable things. We live at an unusually peaceful time - notwithstanding the chaos in various places. But we shouldn't forget how badly wrong things can go. The author Douglas Murray likes to think of societies as "fragile ecosystems" and I think that's quite right. The majority of societies and whole civilisations throughout all of human history have fallen into chaos and ruin and disappeared from the face of the planet. Whole empires and nations and city-states. Human beings have tried very many different ways of organising, ordering and running societies: absolute monarchs, democracies where people work in coalitions, democracies where the person at the top has more or less power. Many kinds of democracies. Many kinds of unelected tyrannies. Democracy is, of course, no perfect shield against tyranny and disaster: indeed we may well say democracy is a kind of tyranny. The tyranny of the many over the few. And of course we have seen, famously, in many places democracy turned against itself - Nazi Germany of course - but one need not go so far back in history. The nations of South America are testimony to the instability of democracy, as is the continent of Africa. Coups and the violent overthrowing of parliaments. But is any of this an argument against democracy? Not really. We should keep in mind Winston Churchill, to whom are attributed words to the effect: “Democracy is the worst form of government. Except for all those other forms that have been tried from time to time.”
The point there is this: while there is no better system than democracy that we know of, it is terribly imperfect. It is liable to fall into chaos and even tyranny if we are not very careful about how it is set up and run. Given the chance, as Sam Harris has observed, some people will quickly and democratically elect to vote away their rights and democracy itself. We might reasonably wonder, as I write this in 2017, whether this is not indeed happening in many places the world over. Democracy is fragile. A fragile system for organising the fragile ecosystems that are modern societies.
So with these dangers looming over any society at any time, what can we do? Surely we should look at what actually works. What systems have been stable over time? In particular, what systems allow stability under change - systems that, despite huge changes and challenges, have nonetheless not suffered terrible chaos or tyranny? We can look to the United States perhaps: a great nation of relatively stable democracy. Relatively: they did have a civil war, of course, so great internal violence within that system is not unknown. There have been hiccups. But it is certainly a beacon to look to. Where else? Let us look to England: a democracy of a different kind. There the Head of State is not elected, and that position has far less power than that of the American President (who is both Head of State and Head of the Government and, for example, can launch nuclear weapons), yet there we find a particularly remarkable degree of stability. The British system is ancient - one might presume stretching back to the Magna Carta of 1215 and before. But why - why should these places be especially stable and others not? We cannot articulate all the reasons. Both instantiate inexplicit knowledge: their traditions and customs contain within them rules about how to keep a society “stable under change”. Great change. Dynamic societies are the rare exception: most societies that have ever been have been “static” - they have not made great progress. But England - for example - led the industrial revolution. Science made great leaps there. Society itself underwent great changes, and democracy reached its most inclusive form with the head of state having among the most diluted of powers. And yet the system of governance itself weathered all that came - including a brief period of republicanism from 1649-1660. Notwithstanding all that, the system has persisted and thrived. It was the framework within which so much change took place safely and to the net benefit of all in that great nation. David Deutsch explains in "The Beginning of Infinity" that a "tradition" has - until now - always been a way of preventing things from changing. Traditions are usually the ways things are done so that things remain more or less static. But in modern "dynamic" societies there is now a different kind of tradition - a unique and powerful one - a "tradition of criticism". That is a monumental difference between a tradition we have and the traditions of the past. It is a tradition that allows for change. And how that tradition works exactly - what the conditions in a society are that allow for it - is not easy to articulate. There must be some other traditions and customs in a society that allow a tradition of criticism to flourish. Those other traditions - preconditions for creating that favourable environment for progress - are not easily articulated. Were they, we would more easily export our peaceful democracies to places like North Korea and Iran and Russia. But it is not easy to explain.
So, now to Australia and our Constitutional Monarchy. It has clearly allowed stability under great change. It has actually worked. The nation can be a dynamic and changing one, but this type of democracy allows for that change to occur while the whole project remains in place, functioning and thriving. The system we have embodies knowledge - of an inexplicit sort - of how to keep the nation stable. Those customs and traditions of democracy that we have actually work. We know some of the reasons but we cannot articulate all of them. Should we change this? Can we improve it? Perhaps we could. But how? We do not know all the reasons why it works, and we cannot know how to improve systems we do not fully understand. So we could change things, intending to improve them, but we might be completely mistaken and cause damage instead. Rather than an automobile replacing a horse and buggy, we should think instead of a vibrant and healthy person who is offered the chance to take a drug which has not been tried before and for which no explanation is given as to how it might work. Yet an "expert" assures you: this drug will make you even more healthy and vibrant. There is a risk, they are reluctant to admit, that it might make you terribly sick. But we've no reason to assume that either. What would you do? It all turns on how well you currently feel. And if you look around and most other people are pretty unhealthy by comparison, perhaps appreciating your good fortune is enough, and you should pursue greater wisdom, knowledge, satisfaction and progress elsewhere, rather than take the risky pill.
An elected president, even an appointed one (appointed by the Parliament or some committee, say) would shift some power away from the Parliament to another seat. We would actually not know what systems we are changing if we made this change. Perhaps those systems would not be too much affected. But perhaps it would be a tragic mistake. Shifting whole systems from one to another is no small thing.
And ultimately it does not matter “Who rules?”, as Popper argues here in a paper that should be required reading for anyone interested in these issues. Because democracy simply isn't about electing and installing rulers - be they presidents or prime ministers. It's actually about ensuring rulers can do very little damage, so that we can correct their errors if need be. The Monarch - or their representative - is simply prohibited from doing much damage, and we have seen this (1975 notwithstanding).
The question before us when considering changes to our system of government is: how can we most easily undo mistakes that are made by rulers? Our system already satisfies this criterion to a level that leads the world. Popper’s criterion of error correction is satisfied no better elsewhere, and we might guess it cannot easily be improved. Again: we should not fall back into the mistake of thinking democracy is about putting particular rulers into positions, and therefore the question of whether the head of state is Australian, or not, is the wrong question for a democracy to consider. And it is true, a monarchist cannot properly articulate all the reasons that a monarchy is preferable, because much of the reason why is tied up in a type of inexplicit knowledge instantiated in the traditions of governing. But just because these reasons cannot be explained in clear language does not make the knowledge more dubious. Remember: the knowledge of how to ride a bike is of a similar sort - real, yet not easily explained in words. But we know that the knowledge works because the bike stays balanced and you get to where you need to go. So it is with systems of government and great democratic traditions: means of changing our country safely, and with stability, to allow us to make progress together. The analogy is not, in this case, replacing a horse and buggy with a car. It's repairing the best bike we've ever had - one showing absolutely no sign of wear and tear. It's taking off the front wheel and replacing it with another: never tested, and for no reason other than that it was, for example, made in Australia.
So, in summary: our Constitutional Monarchy maintains the constant stability that allows for the change that the Parliament brings. To remove that stability - the very thing that has facilitated our dynamic society - is dangerous. We’d then have two seats - the Parliament and the Presidency - both subject to change.
The Crown is the Dignified and the Parliament is the Efficient, said Walter Bagehot in “The English Constitution”, separating out the symbolic from the way things are actually achieved. In modern science-type language: the Crown is the Constant and the Parliament is the Variable.
We change this at our peril.
Not all of DNA appears to be “code” - much of it seems to be “junk” or non-coding. But the parts that do code for something - the genes, collectively the genome - constitute a program, or better: many subprograms for the construction of various organs and therefore organ systems that together constitute a complicated organism. In the case of the human being there must exist subprograms for the structure of the eye, the skin, the liver, the bones and so on, for every part of the body that can be distinguished from every other part. This includes, of course, that most important part of the human: the brain. While our eyes share many features - though not all - with other animals, our brains, though anatomically similar in many ways to some other large mammals, must be in one sense sharply different. What they are doing is different - which is to say, what the mind is, is different. The software running on the hardware that is the brain is different. If animals have minds at all (I tend to think they must) they are completely different from our own. They are not a little different (as their eyes or brains are); they are very, very different. Qualitatively different. Of an order altogether unlike anything else in nature. There is something very special about human brains compared to insect brains or dog brains or dolphin and chimp brains. Our brains - or at least the minds that run upon them - are universal. While dogs and chimps and dolphins may have some internal subjective experience of the world, their minds are limited. They can solve problems only of the sort already coded for in their genes. A dog may learn tricks - but the repertoire of possible tricks it can learn is forever bounded by the behaviours coded in its genes. It cannot, for example, learn to write a sonnet, or a computer program, or how to engineer itself a better home.
It is important to recognise that brains are not minds: minds are the software, brains are the physical stuff made of neurones. Minds are made of thoughts (both conscious and unconscious). Brains and minds are different things, even if necessarily connected in humans. Our minds - the minds of humans running on those human brains - are universal. This means they (those minds) can turn their attention to any problem whatever and attempt a solution. What they possess is the ability to create new knowledge. This unique creative ability makes them universal. This is why the human environment is shaped by humans to a degree not seen in other species. Most other species are victims of their environment; not us: we are masters of it, and gaining ever greater mastery over it.
Whatever the program is for this universal intelligence, it is instantiated in our genes. That code, written in the so-called "AGTC" alphabet of base pairs, codes in part, somewhere, somehow, for this special universal capacity only humans possess: the capacity to create new explanatory knowledge that allows us to control the world around us. An amazing code, but a code it is. And so it must be the case that the code can be written as an algorithm that software engineers could one day implement to produce an artificial general intelligence (AGI). What an AGI is not, is simply an AI that is better than us at chess, and at doing long division, and at translating Japanese into Spanish, and... (iterate for all other human intellectual abilities). No: it’s not like a million AIs working in parallel. Instead an AGI will be a universal explainer. It won’t be preprogrammed with all possible games and languages and mathematical things we know of (though it could be, I suppose) - what makes the crucial difference is the (as yet unknown) code that allows it to be a universal explainer. It, like us, will be able to learn things not already in its code. Whatever that code is - being universal (which means able, in principle, to creatively solve any solvable problem, including especially problems not yet known) - there cannot be fundamental differences between alternative ways of writing it. Different ways of coding the same algorithm might appear different (as the numeral 1 appears different to the word “one”, which again seems different to the strings 2/2 or 4-3) but ultimately they will represent exactly the same underlying abstract structure. “4-3” is not more “1” than 2/2 - both “code for” the same thing. So it is with all humans and future AGI: we will all share this same ability to be universal learners, or knowledge creators, or problem solvers. Call it what you like. The code will be equivalent. Which is to say the software has the same capacity. A capacity for universality. And given that the code (the software) will be equivalent (though it's unlikely, of course, to be identical in humans and AGI), it cannot be the case that intelligence is purely a matter of genetics. After all: AGI won't have genes.
What can, however, differ between two humans, or between humans and AGI, are two bits of hardware that might, between people, be heritable: speed (of our processors) and/or memory. Both of these are, of course, improvable. But the software cannot have its capacity improved. If it’s universal, it’s universal. It can solve any problem. You can’t, in principle, solve more problems than all the physically solvable problems there are, given enough resources (like matter, energy and time). So why don’t we solve all those problems, or even attempt to? Well, at the level of civilisation we simply don’t know about all the problems, or have the resources. But as individuals it’s usually simply a mundane lack of interest. So is there a genetic component to intelligence? Yes: in that, so far, the only code we know of for intelligence is written into the genes. We don’t know what it is. When we do - when we can decode the genetic code for intelligence - we will be able to write that code (perhaps far more efficiently) into an AGI. Or maybe we’ll discover - using our creativity - a program for creativity before this. But it must be logically equivalent to the code in the DNA. And there can’t be “degrees” of intelligence of the kind we and AGI will have, any more than there are “degrees” of “oneness”. 2/2 and 4-3 and 1 all have precisely the same amount of “1”-ness - namely, complete, unlike 0 and 2 and all the other numbers that don’t have it at all.
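The numeral analogy is easy to make concrete - a trivial sketch in Python, where the final expression is my own addition, purely for illustration:

```python
# Different "codes" for the same abstract thing: each expression below
# is a distinct string of symbols, yet all represent exactly one value.
representations = ["1", "2/2", "4-3", "(5+5)/10"]

values = [eval(expr) for expr in representations]
print(values)  # [1, 1.0, 1, 1.0]

# However differently written, there is only one underlying value...
print(len({float(v) for v in values}))  # 1

# ...and none of them is "more 1" than the others.
```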
IQ tests are said to be tests of intelligence. But they cannot be, unless one argues that "intelligence" is something like "interest in doing the kind of tasks associated with success in IQ tests". So, for example: doing lots of practice IQ-test-type questions would be one such task. So would mathematics and logic and reading lots of books. But none of this implies greater intelligence. It implies more knowledge, sure: knowing how. But not intelligence. Real intelligence should be regarded as the capacity (in principle) to creatively solve problems not before encountered. And this will depend in part on (1) how interested one is in attempting to solve such a problem, (2) one's memory and (3) one's processing speed.
So call IQ tests a test of intelligence if you like, but recognise it's like calling the results of your "height and weight" test a test of your beauty. Sure, there's possibly a relationship. Sure, those results are correlated with all sorts of other things. But to say the test is truly, actually, a test of that phenomenon is to ignore the simple fact that you've arbitrarily renamed - and therefore narrowly restricted - a word with lots and lots of history and social baggage and generality. So call the people with high IQ "more intelligent" and call the tall people with low weight "more beautiful", but then appreciate: we're all equally intelligent in the most important way (we're universal) and we're all equally beautiful in the most important way too. And that should matter far more than measuring (then judging) people with numbers ascribed by psychologists.
I began the day catching up on Sam Harris' "Waking Up Podcast". He interviewed Garry Kasparov, who spoke clearly about the threat of Russian President/Dictator Vladimir Putin. Socialism, Kasparov observed at one point, cannot value the individual in the way capitalism does. Whereas free, open, capitalist societies see any individual human death as a terrible tragedy, socialism will see the "value of life" quite differently: sometimes we must sacrifice individuals. The individual death of innocents is not necessarily a tragedy. This huge asymmetry in the values of the two systems when it comes to life is absolutely crucial to keep in view when discussing the apparent merits of socialism. So mainstream has the fawning over socialism become that ABC reporters/comedians in Australia are now sneeringly admonishing politicians because they are not socialists (see, for example, Tom Gleeson's interview on "The Weekly" with Senator Cory Bernardi). The supremely high value capitalism places upon human life (compared to socialism, which places many other things - like "preserving the system as it is" - higher than any individual) is a consequence, I would argue, of the creative output of producers: creativity - making things better; progress - is valued highly, and because the next improvement can come from anywhere, all human life is especially sacred. But socialism is the idea that there is a utopia that we can enact: a system where problems (like inequity) can be once-and-for-all eliminated. And this might require the elimination of some individuals. Not so with open, free, capitalist societies, which must recognise the inevitability of problems and, therefore, the possibility of their creative solutions. But I digress.
What was also wonderful about Sam's interview was Garry explaining how, at first, Deep Blue was beaten by a human (him). Then the computer won. They did not play a third match. They should have. Jaron Lanier in his first book "You Are Not a Gadget" speaks in a similar way to Garry when assessing that loss: it was largely about psychology. Garry got spooked by the computer and the environment. The poker element of chess undid the human player because there was no face on the computer. Eventually, of course, the computers all got so much better that they can more often than not beat a human player. But what is often left out of these discussions of apparent machine "superiority" in this narrow area is: a human player with a computer can always beat a computer with no human. Garry spoke of this with such confidence it was almost as if it were a law of nature. So raw computing power (and a fixed algorithm) alone is always worse than raw computing power AND the creativity of the human mind. People will always beat machines - just so long as people get to use machines too. And why shouldn't they? The whole point of machines is to serve people. They're just dumb machines. We owe nothing to our microwaves, iPhones or AlphaGos. Given some of Sam's remarks, I'm still not sure he quite understands what the point of "universality" is when it comes to the ability of humans to be creative and explain anything. Sam thinks that you can just keep adding module after module after module to an AI so that, for example, it's the best chess-playing machine, then the best Go player, then the best car driver, then the best manufacturer of Coca-Cola, and you keep doing this for every automated task you can think of and - well - eventually, once you've done this for "everything", you have an "intelligence" that by definition exceeds us at every task.
No. That's false. It can't possibly exceed us at every task because you cannot describe "every task" as you can the rules of chess. You can't program in tasks not yet conceived. You can't program in the ability to use imagination to construct creative explanations. Or if you can: you don't need to program in all those other things! You only need the one thing (the program for creating explanations - i.e. a program for actual universal learning) in which case you have a person. A genuine AGI. And at that point you're not allowed to go forcing it to play chess or drive your car or anything, because people have human rights. Right?!
Hours after this I began to read "The Undoing Project" by Michael Lewis, on a completely unrelated topic (eventually it turns out to be about how two Israeli psychologists did some Nobel Prize-winning work in psychology - for which Daniel Kahneman was later awarded the economics prize, Amos Tversky having died before the award). I'll read anything by Michael Lewis: he's a great writer. You can read my page here about his book "The Big Short". This new book of Lewis' begins with a parable about how the top national basketball teams in the USA choose (buy!) their players. Traditionally it was simply "expert opinion" that was used to help teams choose. People with lots of experience (called 'scouts') would choose from the list of available players (the draft). This method was riddled with error (it was basically just guessing, although sometimes things like the number of points scored in college were taken into account). But then people started to make "mathematical models" where lots of factors, like "time on court and number of points scored", were weighted. (In a basketball match sometimes players don't play the whole game.) Or even factors like: how high could the player jump and how fast were they over 100m? Things like that. Some factors might have been given a 0.5 weighting, others 2x, or perhaps a square or some other power - who knows? But whatever the case, those who relied on models and not just expert opinion started to beat out the experts. Then everyone started to get models, and awful mistakes were found with the models (so some team would invest in a new player based on a model alone and the player turned out to be a terrible choice). Long story short: the solution was to use both - the creative theory formation of experts, augmented with lots of data and a model. This way the expert and the model could refute each other. Now Lewis doesn't say this - but that, it seemed to me, was clearly what was going on. Where the expert and the data/model agreed, the choice was better than when at least one of them disagreed.
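To make the kind of model Lewis describes concrete, here is a toy sketch in Python. The factor names, weights and threshold are all invented for illustration (Lewis gives no formula); what matters is the structure - a hand-tuned weighted sum, checked against independent expert judgement so that each can refute the other:

```python
# A toy "draft model": a hand-tuned weighted sum of measurable factors.
# Every factor name and weight here is hypothetical.
WEIGHTS = {
    "points_per_minute": 2.0,   # scoring weighted heavily
    "minutes_played":    0.05,  # playing time matters, but less
    "vertical_jump_cm":  0.01,
    "sprint_100m_s":    -0.1,   # a slower sprint lowers the score
}

def draft_score(player):
    return sum(w * player[factor] for factor, w in WEIGHTS.items())

prospect = {"points_per_minute": 0.6, "minutes_played": 30,
            "vertical_jump_cm": 85, "sprint_100m_s": 12.4}

model_says_draft = draft_score(prospect) > 2.0  # threshold: also invented
expert_says_draft = True                        # the scout's judgement

# The lesson of the parable: act confidently only when the two independent
# methods agree - disagreement is a signal that one is refuting the other.
print("draft" if model_says_draft and expert_says_draft else "investigate further")
```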
So today, in the space of hours, in two quite divergent situations, I encountered something really important that I've known for some time but which now seems to be entering the cultural "intellectual" vernacular: human creativity adds something qualitatively different. Because it is something qualitatively different. Pure data, or "mindless" computing (which is what all computers, except us, are), will always be worse than a person (with some relevant knowledge) armed with a good predictive theory based on a good explanation and a powerful computer to crunch out some numbers fast.
But the lesson: human creativity is qualitatively different to all other kinds of computation we know about in the universe. Simply adding ever more data, or processing speed, or memory to a computer cannot possibly get you AGI. For that you really do need a jump to universality, as David Deutsch explains in The Beginning of Infinity. And it will be a jump. It won't happen gradually (as AI is gradually getting more and more useful in more and more places); it will come abruptly. Just as those chess champions were suddenly able to beat the best computers, and those scouts beat the mathematical models... when they too had computers.
Have we reached peak “critical thinking” articles yet? In the weeks after the US election it seemed almost every (respectable) online media outlet had some story - written by an “educationalist” (or some other academic) - about critical thinking and its place in education. The spike has come in lock-step with news about so-called "fake-news" in this so-called "post-truth" era while grenades are being hurled from either side of the politicised-media divide.
On November 23rd “The Conversation” asked “What is critical thinking? And do universities really teach it?” A week earlier National Public Radio (npr) titled a report “From Hate Speech to Fake News: the content crisis facing Mark Zuckerberg”. They ran this the same day as an article titled “Students have dismaying inability to tell fake news from real, study finds.” That was a report on a Stanford University study. One wonders who the control group was (hint: there couldn't possibly be one). This is hardly a new phenomenon - it is no more than a specific instance of a general problem: in a world where children are taught to trust authority, how do those same children discern true from false without being told? In other words: which authority should they trust?
This, in crystalline form, is the problem. Students are indoctrinated with the idea that one should trust one's elders and betters, raised on a diet of authorities who tell them what is true (parents, teachers, media). So when the traditional authorities turn out not to be trustworthy, to what new authorities do they turn?
This is the wrong question as Karl Popper explained. We should not be looking for authorities to tell us the truth. That is not what the quest for truth and knowledge amounts to. No - the real question is:
How can we detect and eliminate error?
That is Popper's critical method (what "critical thinking" amounts to) and it is almost universally unappreciated, ignored and, where it's not ignored, misunderstood.
Since those npr articles bemoaning the not-new observation that some news sites are hard to trust and people might lack the tools to discern true from false, the articles on so-called "critical thinking" have kept coming. All of them have one thing in common: they fail to clarify what they mean by critical thinking. I'm yet to see one that mentions, at all, "Popper" or "Critical Rationalism". Some are quite explicit that defining terms is the whole problem and yet also declare "it is, in the end, unimportant to define what we mean by critical thinking." This, and statements like it, are frustrating reads for anyone interested in actual "critical thinking" as an important general skill for any human being to possess, and as it has been understood for decades and more by people actually interested in critical thinking (and not just the broad, nebulous project of "education"). It’s a simple idea: critical thinking skills allow one to sift the true from the false, the useful from the useless, the valuable from the worthless. They consist of methods for sorting what works from what fails to work. All this was largely explained by Karl Popper in his corpus of work decades ago (building on the work of some of the ancient Greeks and early scientists and some others) and is broadly referred to as “critical rationalism”. That epistemology explains under what conditions one can evaluate propositions as either provisionally true or demonstrably false. So there is a long, wise tradition that contains genuine knowledge about how knowledge works and the centrality of "criticism" to the whole project. People have known this in the way experts have understood germ theory or the laws explaining gravity. But just because most people are ignorant of it does not make it any less true. We must remember: most people are still ignorant about Darwin's theory of evolution by natural selection, but their ignorance does not make it less true.
There is no third option between "true and false". This is the law of the excluded middle. And this is the beauty of critical rationalism: it rests upon a simple logical truth. If something is successfully criticised - it is refuted. It is, simply, the method of criticism. That's critical thinking.
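Schematically - and this is a sketch of the logic only - the two pieces at work are the law of the excluded middle and the modus tollens by which a successful criticism refutes:

```latex
P \lor \lnot P
\quad \text{(excluded middle: true or false, no third option)}

\frac{T \to O \qquad \lnot O}{\lnot T}
\quad \text{(modus tollens: claim } T \text{ implies } O\text{; criticism shows } \lnot O\text{; so } T \text{ is refuted)}
```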
And that is how I will come to define critical thinking. It must be about criticism.
So, for my part, I have been attempting to explain the simplicity of critical thinking for some years now. Yes, some so-called "educationalists" are coming late to this whole area - but this should be no surprise. There are few, if any, engaged in the study of "education" who have any deep appreciation of what learning actually is, and so they are not really interested in critical thinking. For a sketch answer to what learning is, following Popper, see my own website here or my video here.
I do this because I genuinely think that these are not difficult concepts. Sure they can lead to amazingly complex places - but just because an idea is deep does not mean it must be "hard to define". So because academics won't define it, let me:
Critical Thinking (noun): thinking aimed at producing criticisms.
That's really all there is to it. Criticism is how we sift the true from the false. It is called "critical" thinking because it's about being critical. And being critical means producing criticisms. And applying those criticisms to ideas and works and other notions.
If the criticisms are valid, the claim being made is false, or less worthy than (hopefully) some other competing claim.
If the claim being made survives every attempt at criticism, we take it as provisionally true, so long as it’s also useful to our purposes.
Again, it really is as simple as that. There's no need for obfuscation, or big words, or long lists of educational buzzwords.
But yes, of course, one can go very, very deep with this philosophy, and there truly is an ocean of material to explore. But to “define” critical thinking is not difficult. And just how important it is, is likewise not hard to appreciate. It is absolutely crucial for learning: it's the cornerstone - and the more critical one is in one's approach to ideas, the faster, better and deeper one learns.
The recent articles have attempted to obfuscate the entire concept of critical thinking for reasons that elude me (but I will attempt to provide some possible answers in a moment). Let’s begin with one of those articles above - possibly my favourite:
November 2016, “The Conversation”, by an academic whose title is “Associate Professor of Higher Education”. Let me, in the words of Darth Vader on his visit to the Death Star in Return of the Jedi, "dispense with the pleasantries". One would expect someone who possesses such a lofty title to know much about learning and therefore about the critical method. The article is titled "What is critical thinking and do universities really teach it?" and again we have high hopes for uncovering what so many regard as an elusive concept. The author begins well enough, asking: "But what is critical thinking? If we do not have a clear idea of what it is, we can’t teach it."
And I could not agree more! And then:
"It is hard to define things like critical thinking: the concept is far too abstract."
It's not hard. I did it. In 7 words. And the concept is not "far too abstract" at all. But let us keep in view the first, valid, claim: if we do not have a clear idea of what it is, we can't teach it. Right. Now, moving further on in the article, after the author offers us nothing clear about what critical thinking might be, he writes:
"But perhaps a definition of it is, in the end, unimportant. The important thing is that it does need to be taught, and we need to ensure graduates emerge from university being good at it."
But the author has said so themselves: if they do not have a clear idea (some would say this is what a definition amounts to!) then it cannot be taught. I am not surprised the author is, apparently, satisfied with this contradiction in claims. They do not know what critical thinking is, deny it can be defined and (this is almost like a mathematical proof) therefore cannot apply it to their own writing. I feel a little sorry for a person who devotes their life, ostensibly, to the pursuit of expertise in "education" (and one would think "learning") yet does not know what "critical thinking" is, or how central it is to that business. But worse: I feel more for the students. We need to clarify this. Simplify this. And help people who want to learn: learn it.
This lack of shared understanding - culturally - but especially institutionally among universities and even secondary and primary educators is a scandal. They should know what learning is. They should know what "critical thinking" is. That they reject notions like Popper's epistemology as being absolutely central to this whole project is a huge problem. (There is a worse problem: coercion in education and I write about that here.)
Now that article, bad as it was, has been outdone. And together all of these articles have propelled me to write this piece in response. So let me get to the grand-daddy of poorly-thought-out articles on critical thinking. The title of that article is "Why schools should not teach general critical-thinking skills." Now yes, it's true I argue schools should teach students what they want. But given a coercive system, one must then ask: what is it we value? The choice really is as stark as this: indoctrinate students, or provide them with the knowledge (tools/skills) to think for themselves. Now the article is right about a number of things: not least of which is that the silly so-called "brain training" exercises that many schools now use (because: neuroscience!) are supposed to provide "exercise" for the "brain" to help it "think critically". No, that's very wrong-headed. I agree. But the lesson is not: stop trying to help young people think critically. The lesson is: figure out what critical thinking actually is and help students who want to learn it, learn it!
So this article takes things to an even higher level of misconception, confusion and misdirection. It, just like the previous article, argues both that critical thinking does not exist and that, despite this, we should not learn what it is. So they both deny the thing is real or worth defining and then, for good measure, just in case they were wrong the first time: if anyone suggests it's a specific thing, let’s not try to learn what it is.
If one were in a conspiratorial frame of mind one could well think that education, increasingly dogmatic and partisan in its way of providing a narrow perspective on all issues (especially political ones), would have an interest in denying the utility of “critical thinking skills” and specifically denying that such a thing even exists (so don’t even bother looking).
So let me offer my first tentative explanation for why universities and other educational institutions speak so much about how they promise to provide their charges with “critical thinking skills” while at the same time denying such broad skills could be useful and insisting such skills are too difficult to define: could it be because denying the usefulness of critical thinking serves - and protects - those who promulgate dogma?
Critical thinking skills are THE most important tool in the anti-dogma tool box. Logically prior to scientific reasoning, mathematical skill or even philosophical knowledge - critical thinking skills are THE way in which we sift all true from false ideas. They are the means by which we learn when coupled with our human creativity.
Another related possibility: this new war on critical thinking from some sectors (like some university academics and others) is becoming more brazen. In some cases it seems to be an attempt to demarcate territory: we are the guardians of teaching, learning and education (so the implicit thesis goes). But why should those who are ostensibly engaged in teaching and education want to deny the deep usefulness of a critical attitude? Precisely because a critical attitude - the aim of “critical thinking” - is how learning proceeds. Period. It is possible for people (students!) to learn how to learn all on their own. And, of course, if people genuinely take this on board, it is a terrible threat to institutionalised learning in an age of such free access to high quality information. Yes, of course, learned people are useful to those doing some learning - no one denies this except, it would seem, some academics who might realise that their own highly specialised "knowledge" in some areas is, well, rather useless. If one wants to learn something really, genuinely useful, like chemistry or history, say, a chemistry professor or history major is going to be one of the best resources possible. Some of the time, anyway. But someone who is, say, a “Professor of Literacy Education” (genuine title) may indeed find that students interested in becoming expert in that field need do little more than spend an afternoon or two on Google and Wikipedia and the odd journal before they discover what, if anything, is worth learning about the field.
Why else might there be this (unwarranted) criticism of (actual) critical thinking? Because critical thinking - general, broad-based, critical thinking skills - is the very means by which entrenched dogmas come under attack and are subverted and shown to be false or lacking. Mere science is not enough. It’s important - essential perhaps - but it is far from sufficient (there is no shortage of scientists who are also, on the weekends, young-earth creationists, for example). Reading some philosophy is not enough. Nor is doing maths, collecting data and understanding trends. These things are not enough to make a genuinely critical thinker. The deeper skill of figuring out “what is wrong with this claim?” across any domain whatsoever is a threat to anyone who thinks the answer must be “there is nothing possibly wrong with this.” That has been the response of priests and rabbis in the past, and of mullahs and evangelists in the present. It is also, scandalously, an implicit claim of too many in academia.
Let us return once more to "Why schools should not teach general critical thinking skills." It was published in “aeon” magazine and authored by Carl Hendrick (head of learning and research at Wellington College in Berkshire; he is also, we are told, an English teacher completing a PhD in English education at King’s College London).
Hendrick asks: “Should critical thinking be taught as a general skill at school?” There is no evidence in the article that Mr. Hendrick has ever come to grips with what “critical thinking” is - which is my constant refrain about articles of this type. He tends to deny it really exists. It would be interesting to know if he has read anything by Karl Popper and, if so, why he is so dismissive of an entire philosophy built around the critical attitude.
Hendrick writes, “Teaching students generic ‘thinking skills’ separate from the rest of their curriculum is meaningless and ineffective.” But no one is proposing that critical thinking not be applied to anything! I propose, below, that it should be applied to education itself. And yes: by the students! But Hendrick quotes another “educationalist”, Daniel Willingham, who writes “There is not a set of critical thinking skills that can be acquired and deployed regardless of context.” And so there we have it! A denial of the basic epistemology that underpins how learning works.
The deeper the skill, the more broadly its techniques will apply across diverse situations. If one knows how epistemology (the production of knowledge; learning) actually works, as Karl Popper explains, then one has a powerful toolbox of techniques (knowledge) for criticising claims no matter the domain. Is the person claiming that their claim has never been, and could not possibly be, subject to criticism? That’s a dubious claim! Has the person considered how the claim might possibly be false? No? That’s a criticism. And that is the general technique: how can we determine if this claim is possibly false? The “educationalist” may never have come to grips with what “critical thinking” might be, having heard people define it out of existence - but this is no reason for the rest of us to throw the baby out as well. Just because "educationalists" are now so skeptical of "critical thinking" that they are no longer critical of their own thinking or writing does not mean we need to follow their lead into relativism and nihilism. Let us keep in view: there's truth here to be discovered.
Hendrick goes on, “Instead of teaching generic critical-thinking skills, we ought to focus on subject-specific critical-thinking skills that seek to broaden a student’s individual subject knowledge and unlock the unique, intricate mysteries of each subject.” Again, no one has ever suggested that one comes at the expense of the other. We can do two things at once. But one of the examples Hendrick uses simply underscores the misconception. He writes, “A physics student investigating why two planes behave differently in flight might know how to ‘think critically’ through the scientific method but, without solid knowledge of contingent factors such as outside air temperature and a bank of previous case studies to draw upon, the student will struggle to know which hypothesis to focus on and which variables to discount.” This difficult-to-parse passage seems to be saying that physics students need knowledge about physics to make progress. It also suggests (wrongly!) that science proceeds by looking at the past - that a student ostensibly doing an experiment needs first to draw upon "a bank of previous case studies". Of course no one denies some knowledge of physics is useful in making progress in physics! But the whole point of science - and this can be learned in school - is that we want, when doing science, to make progress beyond that "bank of previous case studies". Yes - even students! Simply building a model and doing repeated experiments is a critical "subject-specific" technique. The plane that flies worse after repeated trials is disqualified from being the best. The investigation must then provide a general explanation of WHY one plane is better than another. That is both a subject-specific and a general requirement for claims in science (and beyond): what is the explanation? If there is no explanation, one has not really learned anything. The absence of a good explanation is itself a criticism. If an experiment is done comparing two planes, and plane 1 is definitely better at flying than plane 2, but one has no clue why, one is not really doing science. One is going through a process that looks like science in form only - but science is not just about predicting the outcome of experiments. It's about explaining the evidence. Who would want to get in a plane whose engineer had no better explanation for what keeps it aloft than magical forces, rather than Newton's laws and the Bernoulli effect? And appreciating that is not part of the corpus of scientific knowledge, but rather part of the general set of skills called the "critical method" in epistemology. And anyone can learn that! Subject-specific critical thinking in science allows us to use an *experiment* to criticise ideas, and how to properly design experiments is a whole subject in itself. But the general idea - that we need to ask the question "What is wrong with this and why?" - is not subject-specific. And, simple though it is, it is so very rarely ever applied. To anything. Ever. Not by teachers and students in schools. But it should be! It is true intellectual self-defence against nonsense and dogma.
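As a sketch of what “build a model and do repeated experiments” amounts to in practice - the flight distances below are invented numbers; the method, not the data, is the point:

```python
import statistics

# Hypothetical repeated trials: flight distances (metres) for two designs.
plane_1 = [8.2, 7.9, 8.5, 8.1, 8.4]
plane_2 = [6.1, 6.5, 5.9, 6.3, 6.0]

mean_1, mean_2 = statistics.mean(plane_1), statistics.mean(plane_2)
print(mean_1, mean_2)  # 8.22 6.16

# The repeated trials criticise (and here refute) the claim that the
# two designs fly equally well...
print("plane 1 flies further" if mean_1 > mean_2 else "plane 2 flies further")

# ...but this alone is not yet science. Until we can explain WHY design 1
# is better (wing shape, lift, and so on) we have only predicted, not explained.
```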
Articles on critical thinking - the ones we have seen, and the ones a quick Google search will turn up - are absolutely riddled with misconceptions. This has now reached the point where one might reasonably worry that the very term “critical thinking” is so mired in confusion as to be unsalvageable. I want to push back against this before the battle is lost completely. The obfuscation on this point brings to mind a wonderful piece by Professor Richard Dawkins writing about the Sokal Hoax (itself an application of "critical thinking" techniques to postmodern literary nonsense). “Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content." The original piece can be found here. So it is in education as much as in those areas of postmodern philosophy Dawkins was criticising: writers do not cultivate a clear literary style. They say things are difficult to define and perhaps not possible to define at all. And why? It disguises a lack of content. It disguises a lack of clarity on their part about what they are supposed to be experts in: learning.
Critical thinking teaches us to ask: “where are the mistakes?” and "can I explain why this is false?” and to claim “I have a better idea!” and “this is the best explanation and here is why”. It is logically prior to knowledge in any other subject area: science, maths, history, philosophy. Education theory. So much of what passes for “knowledge” in many, many domains rests, according to some, upon one or more dogmas. In reality there can be no dogma (i.e. propositions that must be held immune from criticism). And yet, in many places there are implicit dogmas (religion is simply the rare case where so many knowledge claims consist of explicit dogmas). Implicit dogmas especially plague many social sciences - and most especially education theory. Explicitly, educators speak about how the “bucket theory of mind” (for example) is not true. But the implicit dogma is: it’s true. And this is why we have coercive education with fixed curricula. It assumes that the knowledge written down in school (and university) syllabi can be transmitted faithfully by the techniques of the classroom setting, lectures and seminars. For this reason, school has changed little over the centuries. Sure, chalkboards are now electronic smart-boards and slates are now iPads - but students are still in classes at certain times of the day doing, almost always, what the teacher requests (sure, more often there are games and hands-on activities and building - but this is an incremental improvement, not a revolution). And they do what they do because the system requires it of them. What is to be understood is often legislated. This is not a system for fostering critical and creative thinking. It’s a system which seeks to indoctrinate. It is impolitic to say this - but most students understand this. Most critical thinkers recognise this.
Educators need to recognise this implicit dogma. It’s really there. The education system is a machine for indoctrination. So of course people in that system baulk at the idea of genuinely critical thinking in their students (though they shouldn't! They should bravely seek to change it from the inside - however slowly this happens!). But they baulk because that’s what they’ve been taught their whole lives: do not think for yourself. Learn what you are told to learn. Memorise. Solve this puzzle. Jump through this hoop. Earn high marks. Now you are qualified to pass on that understanding to those younger than you - the importance of learning what you are told to learn. Memorising. Solving puzzles. Jumping through hoops and earning high marks to earn a qualification and be successful so you can pass on your wisdom to younger people about how to be successful by learning what you are told to learn. Memorising and solving puzzles by jumping through hoops…(and so it goes).
Critical thinking skills reject almost all of that vicious cycle of uncritically learning what people before you were taught. Instead critical thinking teaches students to question at every single turn. Does this mean people won't learn if they are questioning everything constantly? No - the opposite - they learn. Better. In the long run, to criticise effectively one must learn deeply about whatever it is that is being criticised. But one is genuinely allowed to say: this is boring and useless for me. And be quite right about that.
Within that system, mandated and policed by government agencies, a teacher may ask: what scope do I have to genuinely help students think critically about all this? One can be honest about how the system is set up: it exists to “meet standards” and “achieve outcomes”. One can tell students: if you think critically about the standards and outcomes - if you come to understand them so well that you find flaws in them - then in all but the rarest of cases the system will not reward you for uncovering the problems. It will punish you. That is simply how an “education system” mandated by the state must work. It must be an indoctrination engine. So you can learn to “game the system” if you want to “succeed” by the lights of the culture in which you are raised (and there is nothing wrong with this if this is what you want to do).
But - and this is the central key to teaching critical thinking in a coercive environment - a student can understand all of that about the system while still questioning it and everything they learn within that system. But they need help from the rare souls who are willing to show how flawed and provisional all the knowledge learned in school (and university) truly is.
We need critical thinkers: people not taken in uncritically by leaders - religious, political, or those enamoured of the importance of their own area of expertise. We need people who share values not because they are indoctrinated with certain ideals but because critical thinkers naturally cohere on the truth! Critical thinkers naturally arrive at the same conclusions because there is an underlying objective reality to be discovered. It must be the case that if people are openly, honestly pursuing truth, they will find it. Because the truth exists. Both science and religion agree on this fundamental point and are correct about it: the truth exists. Take that seriously. Now all we need to do is be critical thinkers and we’ll each find that same truth together.
Here is the problem “for school children” that it is claimed has “stumped the world”. I wanted to write here not just a solution, but also a clear (I hope) line-by-line explanation of the reasoning, which repeats itself multiple times, as this is how people best learn this kind of thing. It's easy to miss just one line of reasoning. I also want to make some comments about how being able to do this question - or not - is a measure of nothing more than how interested you are in doing this question, or questions like it. Just a quick thing about what this question is all about: it’s called logical deduction. That’s worth saying because some articles are claiming this is about something called “induction”. Induction, as a method of reasoning, doesn’t exist. The best that can be said for “induction” is that it is a fancy way of referring to “guessing” - but even that is being too generous. Logic is actually a subject all its own. It is taught in some places, in some schools, as a part of mathematics (which it is) but it is also part of philosophy. At universities with philosophy departments you can take subjects in logic; sometimes logic is taught in schools of mathematics at universities as well. It is a learnable thing, taught at schools and universities. Like French. Or Chinese. Or the piano. If you can’t understand this question, or the solution, it’s kind of like not understanding Chinese. Or how to play the piano. It means: you’ve not learned how. And not having ever learned how is no reflection on your intelligence. And yet, for some strange reason, some people (usually teachers and academic administrators) see questions like this one as a measure of "IQ" in a way that knowing how to speak Chinese, or play the piano, is not.
So to the question. The first thing is that, like many of these sorts of questions, it was made up by people who like to do these kinds of questions, to test other people who like to do these kinds of questions. Like I said: if you don’t get it, don’t worry. It just means you’re not practicing doing these sorts of things much. That is entirely normal - it does not mean you are less smart than anyone else. Think of something you are good at and do often. Okay - now imagine there are people who do these kinds of logic questions as much as that, and enjoy it just as much.
In other words - to do this question is just another kind of skill - like understanding a language, drawing portraits or playing a musical instrument.
There are some “useful hints” you need before you start to learn a language or play a piano. We can call these kinds of hints "implicit" knowledge - things that "go without saying" (unlike "explicit" knowledge, which is stated up front, clearly). People who are skilled at certain things know the implicit knowledge (the subconscious hints), and sometimes are so familiar with it that they cannot imagine how others could not be. For example, a useful "implicit" rule some language learners know: if you find a certain sound in a foreign language very difficult to make, sometimes some combination of syllables or words in English sounds remarkably like it. A good teacher, seeing you struggle with a certain sound, might say “well, we have almost exactly the same sound in English - it sounds like the first one and a half syllables of the word Hawaii” (or something like that). Once you know that kind of thing, it becomes easier to learn. So what is the implicit knowledge in this logic problem? Before I get to that - why don't they state it, so everyone is on a level playing field? Obviously it is so they (the teachers, the exam writers, whoever) can weed out the ‘smart people who know how to do this because they practise’ from the ‘normal person not interested in doing this kind of thing’. It's like a pre-test. If people haven't practiced this sort of thing, they are immediately eliminated, and we can move on to testing those who have practiced. A person who has not practiced might still get the answer - but they will usually take longer - so there's an inherent penalty built into the test. So here’s what you need to know:
Everyone involved in the question (Albert, Bernard and Cheryl - A, B & C) is a perfectly rational person who thinks absolutely logically all the time. If that seems strange, it is. And it's not stated in the problem.
The problem involves A knowing what B is thinking and B knowing what A is thinking. And they can do this because they all think exactly the same way. (Namely, absolutely logically, all the time). And this is not stated.
They are speaking out loud and can hear one another. This is crucial - but not stated in the problem. Again - you’d know this if you’d done things like this before, namely in situations (like classrooms) where you can ask questions of someone who already knows. But if you read this on a website or, worse, in an exam situation somewhere, there’s no one to ask! People who do well at these things do lots of them.
Note in the question the word “respectively” - A knows the month. B knows the day. This is a key point, easily overlooked. Again - if you don't do these kinds of questions much, you will not realize that there are certain absolutely crucial logic-type words.
So now - what is written in the problem: both A and B have the 10 dates in front of them.
A gets told the month, but not the day. And B gets the day... not the month.
Bored yet? If you are, then notice that feeling for a moment: it’s not that you are less intelligent. You’re just bored by this. Imagine someone not bored - who is as interested in this as you are in your favorite thing. That's all so-called "intelligence" amounts to: other people naming certain kinds of interests as being those constituting intelligence. So that’s that. Let’s keep going:
Let’s say C had said to B “the day is the 18th”. Well, immediately he knows - because there is only one “18th” there. Her birthday would be June 18th. The same goes for May the 19th. So if either the 18th or the 19th were the day, the game would be over immediately. B says nothing of the sort - so we are now down to 8 possibilities.
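As a quick check of that first step, you can count the days mechanically. Here's a sketch in Python - note the list of ten dates is an assumption on my part, taken from the standard published version of the puzzle, since the problem statement itself isn't reproduced above:

```python
from collections import Counter

# Day-numbers of the ten candidate dates (standard version of the puzzle):
# May 15, 16, 19; June 17, 18; July 14, 16; August 14, 15, 17.
days = [15, 16, 19, 17, 18, 14, 16, 14, 15, 17]

# A day "gives the game away" if it appears exactly once on the list.
unique = [d for d, n in Counter(days).items() if n == 1]
print(unique)   # -> [19, 18]: only the 19th and the 18th are unique
```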
But now A (who has the month) says that he doesn’t know, and that he knows B doesn’t know either. This tells everyone - B included - that A can’t have June at all. Why? Well, if A had June, then for all A knows B could have been told 18 - and B would then already know the answer (there is only one 18th on the list). So A could not be *sure* that B doesn’t know. (There’s a second route to the same conclusion: there are only 2 June dates on the list. If A saw “June” then B would either say “I know” - because he sees the unique 18 - or stay silent. If B stayed silent, A would know it must be the other June date and would say “I know” - but A says he doesn’t know.) Again, this issue of who is silent and who is saying what, when, is something that comes up in a very formulaic way with these puzzles. It's in the same class of problems as others where people are only allowed to say certain things, like the person who always lies or always tells the truth and is guarding some door, or prisoners forced to wear silly hats (http://en.wikipedia.org/wiki/Prisoners_and_hats_puzzle).
Anyways - let's keep going. We've eliminated June and we've eliminated May 19th. Notice how this goes? We are eliminating dates one by one. It’s just a puzzle.
Remember, A has the month and B has the day, and both have now eliminated 3 dates, just as we have: May 19 and June 18 because 19 and 18 are unique numbers, and the other June date by the deduction above. The same reasoning that removed June also removes May. A says that B “does not know” - but if A had May, then B could have been told 19, and B would then already know the answer. A can only be *sure* that B doesn’t know if his month contains no uniquely-identifying day. May contains the 19th, so A cannot have May. You might have to read that bit of reasoning again - it can be tricky.
So all of June is gone. And now all of May is gone because A says B doesn't know. So we whittle it down to this:
July 14, 16
August 14, 15, 17
Keep in mind now that A and B think perfectly rationally - so they have thought much the way I have presented this and eliminated things, just like I have.
So they can pick the problem up from this point - and reason just as carefully as before:
Now B speaks: “I know it now!” Remember, B has the day. If his day were 14 he could not know - 14 appears in both July and August, and he would still be torn between July 14 and August 14. So the moment B announces that he knows, everyone can cross off the 14ths. B’s day must be one of the numbers appearing exactly once on the remaining list: 15, 16 or 17. Progress. We (by which I mean A and B and you and I) are down to:
July 16, August 15 or August 17
Now A speaks again: “Then I also know.” But A has only the month in front of him. If his month were August, he would still be stuck - August 15 and August 17 both remain. The only month that now picks out a single date is July. So the month is July, and the birthday is July 16.
It is worth replaying this from B’s point of view, to see that each announcement really was forced. When A said “B does not know”, B reasoned: “A is sure I don’t know, so A’s month can’t contain a uniquely-identifying day - May and June are out. I hold 16, and 16 now appears only in July. So I know: July 16.” Had B instead held the number 14, he would still have been confused between July 14 and August 14 - and he could not have said “I know now”. But he did say it. And so the 14ths fall.
And A, who is an equally boring person (who thinks in an identical way - probably why they are friends), says “Then I also know”. A holds July, and B’s announcement has just ruled out the 14th - leaving July 16 as the only July date standing. (Had A held August instead, B’s announcement would have left both August 15 and August 17, and A could not have said he knew.)
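And if you would rather check the entire chain of reasoning mechanically, here is a short Python sketch that applies the three announcements as successive filters. (Again, the date list is the one from the standard published version of the puzzle - it matches every date this walkthrough refers to.)

```python
from collections import Counter

dates = [("May", 15), ("May", 16), ("May", 19),
         ("June", 17), ("June", 18),
         ("July", 14), ("July", 16),
         ("August", 14), ("August", 15), ("August", 17)]

def unique_days(cands):
    """Days appearing exactly once - a B holding one would say 'I know'."""
    counts = Counter(d for _, d in cands)
    return {d for d, n in counts.items() if n == 1}

# A: "I don't know, and I know B doesn't know."
# So A's month cannot contain a uniquely-identifying day (kills May, June).
u = unique_days(dates)
months_out = {m for m, d in dates if d in u}
step1 = [(m, d) for m, d in dates if m not in months_out]

# B: "Now I know."
# So B's day must pick out exactly one remaining date (kills the 14ths).
step2 = [(m, d) for m, d in step1 if d in unique_days(step1)]

# A: "Then I also know."
# So A's month must now pick out exactly one remaining date
# (kills August, which still has both the 15th and the 17th).
month_counts = Counter(m for m, _ in step2)
answer = [(m, d) for m, d in step2 if month_counts[m] == 1]

print(answer)   # -> [('July', 16)]
```

Each filter is one announcement, applied in order - the same three eliminations the prose above walks through.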
Still don’t get it?
Don’t worry - you would if you were really, genuinely interested. Which means: you would sit down for (literally) many minutes - probably hours - doing this problem and ones like it until it just became second nature. People do this! And, weirdly, lots of the people who do it say "I find logic puzzles so easy". They do well at them because they have fun doing them - and so they do well in tests with logic puzzles. And they think, and other people think, that it is somehow an inherent (i.e. not learned) capacity. That it is something about their brains. But it's not. It's just a skill of their minds. Like speaking Chinese. Or playing the piano. Or dancing. Or any one of a million other things human beings can do. It's not a measure of intelligence. It's just a certain kind of learned skill. But it is a learned skill that is quickly, and highly, rewarded in schools, because schools value this kind of thing and some students value getting gold stars or good marks or whatever. And so they do well and the cycle continues.
Postscript: I have looked at other solutions. They are different to mine, in a way - typically much shorter. And I think, for someone who doesn’t know how to do these things, almost entirely unhelpful. Mine is maybe a little more long-winded, but (I hope) less confusing, because I try to explain my reasoning line by line and to show that it's just something to learn. There are things you need to know (tricks, implicit knowledge, whatever). People who solve these things are typically good at solving them but not necessarily at explaining why the answer is what it is. And why should they be? That is a whole other skill entirely - also equally (un)important for almost all of us almost all of the time. So if you didn’t understand my solution - no worries. If you don’t understand other solutions out there (like the so-called “official” answer), even less need to worry. It’s like if you cannot read music and don’t really want to learn how: it probably won’t help if someone really articulate tries to explain it to you. You probably still won’t understand, because you won’t really pay attention. But that doesn’t mean you lack some capability someone else has, or have a lower IQ - it means you are not interested. Truly, some people are interested in, for fun, doing lots of logic-type puzzles of the sort they ask in IQ tests. That doesn't make these people smarter than someone who teaches themselves how to speak Mandarin or Russian - it just gives them different interests. Would you think someone who learned how to speak Mandarin completely fluently was really smart? Well, every child in mainland China (more or less) learns this to a very high level of proficiency.
If people were rewarded socially for doing IQ tests the way they are for learning languages, we would all do really, really well on IQ tests. We'd all be "fluent" in IQ testing questions and their answers. Instead, only some of us are ever interested in doing lots of silly little mathematical and logical puzzles - because we find it fun and rewarding.
PPS: This is why I think IQ should stand for “Interest Quotient” - it’s a measurement of how interested people are in trying to do well at answering questions on IQ tests. In other words: a completely useless number.
If there are multiple points of view on a particular issue - especially in politics - a good rule of thumb is: they are all wrong. Nowhere is this more obvious than in the application of politics to the science of climate change.
One thing that sets scientific theories apart from other kinds of knowledge is their capacity to make predictions about the physical world. A prediction is a logical consequence that one can derive from a scientific theory. For example, one can predict the times when and places where solar eclipses will occur on the Earth. One can predict the outcome of administering penicillin to a person suffering bacterial meningitis. While predictions are never certain (nothing is, because our knowledge is never complete) they are nonetheless possible. And they follow. That is to say, there are logical reasons why they should be the case (or not be the case, as the case may be).
On the other hand, if you don’t have a scientific theory to explain some phenomenon, you cannot predict the outcome of your experiment. The only way of describing future states of affairs then is to guess wildly. In that case what you are doing is not predicting - in any scientific sense - you are prophesying. This distinction between scientific prediction and irrational prophecy was one made by Karl Popper, and it marks an important difference in the way people view the world when they lack knowledge. In politics, attempts to describe future states of affairs are often nothing but prophecies devoid of any genuine predictions. In politics the decisions people make can always change the outcomes some person or group expects. In particular, discoveries in science can affect the decisions people make and the policies they enact. And the content of scientific theories is unpredictable before we discover them.
We cannot know what the future content of our better theories will be. Knowledge improves. In science this progress comes about by the gradual, incremental improvement of existing theories. Older theories are eventually shown false by criticism through experiment and replaced by better, deeper - closer to truth - theories. But our best theories can never predict the content of their successors.
Where this becomes critically important is in the realm of social policy. People, unlike electrons obeying the laws of physics, make choices. And those choices affect outcomes. This might seem so obvious as to be not worth stating - but the problem is that many people - seemingly many scientists and many with an interest in science - forget this simple truth. The fact that people make choices which affect outcomes makes them unpredictable in the strict scientific sense. Psychology, it is true, makes claims about the predictability of people, at least to some extent. But that is merely a misconception. Physical systems - including people - are explained by good scientific theories or they are not. If they are, then those systems are predictable in the scientific sense. But we do not have a scientific theory that can predict the behavior of people. Instead, the best we have is prophecy: attempts to foretell the future in the absence of explanatory theories.
This is not only a problem for psychology, but also sociology and politics. In those fields there is a tendency to make more-or-less confident statements about the future by appealing to scientific theories while ignoring the choices that people might make which would affect the outcome of those very predictions. Let us make this more concrete: it is common today to debate the effects of global warming on things like sea level rise and the various economic impacts while ignoring what people in the future might do and what knowledge they might discover which would affect global warming.
So, for example, we hear that economic sanctions (like an increase in the cost of carbon-dioxide-producing industries) are needed in order to help curb the use of fossil fuels. This, it is said, could help to reduce the amount of carbon dioxide in the atmosphere. In Australia the target is somewhere between a reduction of 5% if you are on the right of politics (Labor and the Coalition) and 40% on the left (the Australian Greens). If you are on the far green left you might even hope for a 100% reduction in carbon dioxide emissions from anthropogenic sources, and a quick cessation of all fossil fuel combustion.
But what we know is that the human-induced greenhouse effect seems to be approaching a tipping point and may already have passed a number of points of no return. A 5% reduction is not going to stop, let alone reverse, global warming. It will barely slow it. A 40% reduction will also have little impact - even if it were global. Even if we cut emissions in half by 2050, the planet will still warm by 2 degrees Celsius. This is more than enough to cause massive melting of the polar ice caps. And it is possible a 100% reduction would not occur fast enough (even if it were to occur tomorrow) because the amount of carbon dioxide in the atmosphere is already too much to be removed by natural processes (namely rain) in time to stop global warming being accelerated by feedback mechanisms like melting permafrost and methane venting. So it looks, from all angles, as though if nothing drastic is done, significant amounts of the ice in Greenland and Antarctica will melt, leading to sea level rises. If nothing drastic is done. What we need is not a way to slow global warming, or even a way to stop global warming. These are not solutions to the problem: the problem being that the ice is already melting, the sea levels are rising and the climate is changing. If we agree all of that is a problem, we have only two options, as physicist David Deutsch points out: to devise ways of surviving at the higher temperature or ways of reversing climate change (that is to say, ways of actively cooling the globe).
Economic "solutions" like taxes do not even pretend to do anything like this. They are only proposed to slow the inevitable - not solve the problem. Nor, indeed, do alternative energy targets. While alternative energy sources like solar and wind should be pursued and more money put into research in this and other areas generally (for reasons I will come to) none of this will cool the globe, if that is what your aim is: to solve the problem of climate change.
The left and right have always been divided on taxation - at least in principle if not in practice. But because the problem is that the world is already too warm and sea level rises have already begun, merely reducing the amount of carbon dioxide by the very cumbersome and slow technique of increasing the price of fossil fuels - which (we hope) somehow persuades people to use less energy - will not solve the problem. As far as climate change is concerned, all that taxation is purported to achieve is to delay the warming, not reverse it (which is what is needed). If we want global warming to stop, and reverse, we need to cool the globe: actively reduce the amount of carbon dioxide in the atmosphere, not merely slow its rate of increase. Increasing taxes cannot do that. Merely reducing fossil fuel combustion - or even eliminating it altogether - does not seem able to do that now either. It's already too late. Take note of what Eric Rignot and other NASA climate scientists are saying about the fact that glacial melting in the Antarctic is now unstoppable. Of course that sounds pessimistic - but glacial melting is just another problem, and problems are soluble. It's just that it is unstoppable if the only thing you do is stop using fossil fuels.
So stopping fossil fuel combustion cannot reverse global warming. We seem to have reached a tipping point, and natural global warming is likely to continue anyway. Even if those climate scientists are wrong, they may not be far off. If the tipping point has not already passed, as they claim, it may be just around the corner. Given the growth of civilization, the amount of carbon dioxide that has been put into the atmosphere, and that which is and will continue to be put into the atmosphere, it is likely that it was already too late to stop global warming decades ago. The seas were already warming, the ice already melting. The sea levels were always going to rise - no matter how quickly we stopped burning fossil fuels.
This all sounds very pessimistic. But it is not. Problems are not pessimistic - but proposed solutions can be. The idea that ineffective solutions - ones we know cannot work - should be tried anyway, at great expense and at great cost to individuals, industries, economies and societies, is pessimistic. The idea that we should search for better solutions and implement ones that just might work is optimistic. And the principle of optimism - that whatever the problem is, we can solve it - should be motivating us.
And so this is why the left and right of politics are both completely wrong on this issue, as they are on so many others. The left is wrong because they think economic sanctions can work and that fossil fuels are evil. And the right is wrong because they will not take this issue seriously enough - unwilling to acknowledge that the danger is to large parts of civilization itself. If the sea level rises, millions will be displaced, and weather patterns may become dangerously unstable as the atmosphere is loaded up with additional energy. Climate change will cause death and destruction on a scale that is almost unimaginable - if nothing effective is done. And that’s the problem with the left: they are not suggesting anything seriously effective.
It’s possible (indeed common) for all sides of an important debate in politics to be completely wrong. The strange thing is - even if global warming wasn't man made - it would still be a serious problem. But, again - the left and the right typically do not see it that way. The right seems to want to argue "well because it's natural there is nothing we can do to stop it" and the left would seem to want to say "Well that's nature and we shouldn't interfere with nature". Again - both misconceptions argue for inaction and are about problem denial, not problem solving.
So what should be done? We should prepare for the growing problem as we would for any natural or man-made disaster. We should look for solutions. We should admit that the problem is already here and we must either devise ways of living at a higher temperature, or think of ways of cooling the globe.
So we need to invest - heavily - in scientific solutions. This means pure science - across the board. Not just in fields directly related to global warming (like climate science) but more broadly. Solutions come from places we typically cannot even guess at. The solution of how to get your email and Facebook wirelessly from the server to your computer did not come from technology companies trying to build ever better modems. No. WiFi came (in no small part) from astronomy - in particular from pure research into black holes. The solution of how to scan the body for tumors at super high resolution, before they become dangerous, did not come from investing directly in cancer research. No. MRI technology came from pure research in quantum physics and rested in large part upon the curiosity of a “lazy” Austrian physicist interested in how silver atoms behaved in a magnetic field.
But pure research requires money. And the more urgent the problem, the more brains we need working on it, directly as well as on the periphery. We need science. And that costs. Equipment costs. Salaries cost. Education costs. We need to make lots of money - and fast - to pay for the research that will solve our problems. We need to believe that there are solutions or we won’t try. Or we will try non-scientific things that simply make us feel good without actually achieving anything. We might, for example, complain that profits are evil and wealth is evil and taxation is great, and so decide to slow or reduce the private investment that just might solve the problems taxation never can. This is not to say public funding has no place: of course it does. But change must come incrementally. With science it would be good to increase both public and private funding at once - not one at the expense of the other. When it comes to public funding of pure science, the words of astrophysicist Neil deGrasse Tyson ring true here: "...it's not that you don't have enough money, it's that the distribution of money that you're spending is warped in some way that you are removing the only thing that gives people something to dream about tomorrow." So it's not that we need more taxation; we just need to think of better ways to spend the money that governments already have. And as nations become wealthier and more educated, we can rely less on the few (those in power in governments) and more on the many (individuals) to decide where to invest. That way politicians do not need to be in control of as much money (that is to say, they do not need to collect as much tax) and individuals can become better off, and better at helping to solve both their own problems and the problems of their society more broadly.
There will always be problems. That is the circumstance we find ourselves in. But for any problem there are solutions out there not yet found. So how do we find them?
Only with the most curious minds working on as many problems as possible all at once. For any solution in science, there is a scientist who was never interested in exactly that particular problem who made a crucial discovery without which the problem would never have been solved. This is to say pure science is always the key.
In the case of global warming this means the eventual solution - the means by which we cool the globe - will depend, in part at least, on what people not working on that problem will discover. Knowledge is a web of interconnected and interdependent ideas. A discovery in one discipline is never completely isolated from another. But this does not mean we should not also fund applied science - the narrower application of what we already know to problems we have already discovered (in the case of global warming this would mean applying the knowledge we already have to formulate the better engineering solutions we have already imagined). Of course we should put as much money into that as we can too. Not at any cost, of course - but at the minimum cost that will be effective: solving the problem for less than it would cost us if it went unsolved. We cannot, after all, ignore all the other pressing problems. We must always do more than one thing at a time. One problem cannot be pursued at the expense of all others. The most important reason for this is that the biggest dangers come from problems we have not even encountered yet and so have no possible knowledge about. Take any huge problem facing civilisation today: global warming, genetically engineered bioweapons, the possibility of war or terrorism utilizing nuclear weapons, how to avoid an asteroid impact. None of these "problems" were dreamed of just a century ago. Again - this is why we need pure science: to uncover those problems in time to discover the means of averting them. The problems that lurk in our complete ignorance are the ones we need to be most urgently searching for.
But to return to the pressing problem at hand: global warming. Applied science here could mean working on how to put mirrors in space to reflect some of the Sun's light (an engineering effort that might cost trillions - or maybe substantially less, if some industrialist or scientist stumbles upon a much cheaper technology while trying to build a better computer/car/aircraft/power station). It might mean looking at genetic engineering solutions to encourage algae in the sea to consume more carbon dioxide. It might mean looking at geological solutions like geosequestration.
But my guess is that it will most likely mean a solution we simply cannot foresee. Because that is how science works. That is what the history of science teaches us: that science is crucial in solving problems and the solutions come from unexpected places. In science this idea comes under the rather twee heading of “serendipity”.
But to fund all this important research we need money. As individuals, corporations and society. We need profits - that is to say additional money over and above that needed to survive. More money across all sectors and this requires growth. And lots of it. We need as much money as we can get to fund these scientists, their laboratories, their technicians and students. We need to encourage kids to enter a lucrative career in the sciences and not always rely just upon their charitable natures.
Increasing taxation on people and industries typically does the opposite. Extra taxation takes money away from the economy - it discourages investment and spending (because it simply leaves less money for investment and spending by companies and shareholders). It reduces productivity, because less wealth is available to create more. In general, politicians do not have the expertise to decide how best to spend money, and they seem more and more unwilling to rely upon those who might have better ideas to advise them. Those who desire higher taxes - which take money from industry and individuals - strangely have far too much confidence in politicians. It is a strange thing in politics that those on the right and left often claim a distrust of politicians, and yet the left is willing to hand them more money to make bad decisions with and the right is willing to hand them more power to make more bad laws with.
Taxation is sometimes seen as a means by which we can punish so-called “evil” industries like coal and oil companies. But this is short sighted. Of course these industries create big problems. But all industries create problems. The size of the problem you create scales with the size of the problem you solve. In the case of the fossil fuel industry the big problem created is air pollution - but the problem solved is that of cheap energy: bringing electricity to those who otherwise would not be able to afford it. The oil and coal industries are solutions, even if they are seen by some as nothing more than a source of problems. There are problems with solar too. And wind. And nuclear. It may be that with fossil fuels the problems are well known and large, but the solutions they bring are sometimes ignored or downplayed. They, and not other energy sources, are the solution to bringing cheap energy most quickly to the people who need electricity most now - namely, the people without it, who have never had it: the people in Africa and Asia and South America, the developing world. In those places we can light up (literally and figuratively) the lives of children who might be able to plug a cheap computer into a cheap power source and be provided with cheap internet, because the coal is cheap and the power station is cheap. And yes, it’s putting out carbon dioxide. But that kid just might learn enough physics and chemistry and biology to find a solution that has eluded everyone else. They might discover the key to reversing climate change.
We hope some person, now being delivered the internet in a warm home for the first time by cheap energy, might discover a truly cheap, extra-efficient solar cell (because at the moment they are neither) - and then we can move away from fossil fuels. (This is modulo taxation. Of course, if you tax fossil fuels much more, then their cost will artificially rise above that of renewables. But I have already argued why I think that is a mistake.)
If it truly were the case that there were cheap, efficient and effective options for the developing world to power their factories, industries, cities and societies in a way which could bring them up to the standard of living that people in places like the USA and Australia expect, then those countries would already be using those sources. But they use what their people want and need most. They use the best sources. And by best we mean: the sources of energy people can most afford, which are most reliable and which, on a cost-benefit analysis, will allow them to have the kind of cities and technologies and societies we all demand.
The solutions come faster when we can pay people to find those solutions (or when people have sufficient free time to work on solutions, as both Google and Apple grant their employees). And that happens when there is more money in the economy - where development and growth happen faster.
So next time you hear someone argue that global warming is not a serious problem, you can argue they are being unscientific - because the science tells us it is. But if someone says that the sea levels will rise if we don't start taxing the big polluters, they are making a prophecy, not a prediction. They are assuming that no new discovery will be made that makes that extra taxation an unnecessary impost. And those who think we need to slow growth and development through things like ceasing the use of fossil fuels altogether are being pessimistic. So be an optimist: take the stance that problems are always soluble. And people can find those solutions. Perhaps people like you or me. But also perhaps people who don’t yet have access to all the knowledge that needs improving if the big problems of our time are to be solved. Optimism entails understanding that the only thing which stands in the way of finding solutions is lack of knowledge. Not lack of resources. Because the universe is infinite, resources are infinite too. But that is the topic of another post.
This post has been largely informed and inspired by the work of physicist and philosopher David Deutsch - in particular his book "The Beginning of Infinity". You will notice I have used some of the ideas he articulated here, in this TED talk.
We can consider certain attributes of organisms a “niche” - a way life responds to given environmental circumstances. So, for example, grazing is a niche that is filled in Africa by zebra, gazelle and countless other large animals; the same niche is filled in Australia by kangaroos and in North America by buffalo. Grazing really is something that seems to crop up again and again. Or consider burrowing - that too appears to be something life does again and again: meerkats in Africa, Tasmanian devils in Australia and prairie dogs in North America. Independently, in different places, separated widely by geography and time, grazing and burrowing, flight and eyesight evolved again and again. So what about intelligence? Is it a niche?
50 million years ago, the continents looked roughly as they do now. We know they were separated from one another by the same oceans we are familiar with on the planet today. If you wanted to see how life would evolve in slightly different environments, then 50 million years ago you effectively had separate continents, isolated from one another, on which to conduct your experiments in biology. Let's say you wanted to see whether human-like intelligence is inevitable in the way that, say, flight seems to be (flight evolved in insects, fish, birds and mammals, to name just some).
So here's what you do: in Africa, set the experiment running for some 5 million years after the common ancestor of humans and gorillas split into those two families, and observe that Homo sapiens eventually evolves. Left alone for well over 50 million years (it can be argued that the continents were essentially separate for up to 200 million years), we find that, unlike in Africa where you get human-level intelligence, on an otherwise similar continent - Australia - the most complex organisms that evolve are koalas and kangaroos; on Madagascar we have lemurs; in South America and India we have monkeys; and in the oceans we have dolphins. A person watching this grand experiment unfold from a god-like perspective might well ask: “how much longer should we wait for human-like intelligence to evolve in Australia and South America? We've waited 200 million years. Those humans in Africa are going to start migrating to all the other continents soon now, too.”
To put it more plainly: if human beings had not evolved in Africa, how long would one need to leave all the life on Australia alone for any of the species currently inhabiting that continent to evolve something akin to human-like intelligence? What is the likelihood that kangaroos would, eventually, evolve the capacity to do what humans can do and build a radio telescope? The question is not meant to be rhetorical. We know that out of the millions of species extant and extinct, only one of them ever developed human-like intelligence. Namely - humans. Perhaps there were people besides Homo sapiens (like the Neanderthals) who evolved intelligence as well - but they too descended from a very closely related - likely intelligent - common ancestor in Africa. This surely is an indication that human-like intelligence is anything but a convergent feature of evolution. Given the experiments of the separate continents here on Earth, we find that life seems not to find intelligence as useful an attribute for survival as, say, wings or bipedalism, which again and again evolved independently. Features that genuinely are convergent.
Philosopher Peter Slezak from UNSW makes an important point about the number of independent, but necessary, mutations that were selected for in the chain of evolution that led from simple bacteria to human-like intelligence. If there were many evolutionary paths to human-like intelligence (as there are with wings) then the odds against it would be lower - but as far as we can tell, there seems to be a single (or at least a very small number of) evolutionary path(s) that works: namely, the path that has led to us. Such a path contains millions, maybe billions, of steps. But let us be ridiculously conservative and assume that only 100 such steps are required. Now further, each of those independent, but necessary, steps had a certain probability of occurring. Possibly the probability of each step was far, far less than 1/100. Let us be conservative once more and assume each step had a 1/2 chance of occurring (Slezak actually uses the figure of 1/10 - I am being even more conservative than he). Now if each step has a 1/2 chance of actually happening and there are 100 such steps, the probability of replicating the chain of events required to reach an organism with our intelligence is (1/2)^100 = 7.9 x 10^-31. This number is difficult to appreciate - far more difficult to appreciate than the number of stars in the universe - and it essentially suggests that even if all the Earth-like planets in the universe were covered in bacteria, the chance of finding another sample of intelligent life would still be essentially zero. In other words, no matter how astronomical the number of stars and planets in the universe may be, the biological odds against recreating some evolutionary pathway to intelligence are so long as to blow the astronomical numbers out of the water.
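The arithmetic, and what it implies, can be checked in a few lines. The planet count below (10^22 - roughly one planet per star in the observable universe) is my own generous round figure for illustration; nothing in the argument hangs on it:

```python
# 100 necessary steps, each with a (very conservative) 1/2 chance.
p_intelligence = 0.5 ** 100
print(f"(1/2)^100 = {p_intelligence:.2e}")        # -> 7.89e-31

# Generous assumption: 10^22 planets, every one covered in bacteria.
planets = 1e22
expected = planets * p_intelligence
print(f"Expected second samples of intelligence: {expected:.2e}")  # ~8e-09
```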
Carl Sagan, however, believed that the number of independent pathways towards intelligence is likely to be not simply more than one, but numerous. It would seem, however, that unless the number of pathways to intelligence is absolutely staggering, it is unlikely to make much of a dent in the number calculated above - which, it must be emphasized, is ridiculously conservative and, given the evidence we have from looking at life on Earth and in the fossil record, the best model we have for the probability of intelligence arising. In other words, as far as we can tell, given the best evidence on hand, intelligence has had only one way of evolving: namely the way leading to Homo sapiens. If the number of pathways really were large, as Sagan and others have thought, what reasons are there for believing this? No evidence on Earth suggests that there is anything other than a very small number of pathways to intelligence.
Astrobiologist Charley Lineweaver recalls iconoclast Frank Drake, of Drake Equation fame, stating that the reason he - like many people - believes the Search for Extra-Terrestrial Intelligence is a worthwhile venture likely to yield results is that if you look at the fossil record, what you find is evidence of increasing complexity. Importantly, what you find is that the relative size of the brain increases over time.
But Lineweaver has what I would consider a complete knockdown argument against that idea, and he has gone to some length to explain why arguing that there is a trend towards increasing “complexity” or “intelligence” in the fossil record is flawed. He points out that starting from any sufficiently extreme feature and working back will always seem to indicate a trend towards that feature. A commonly asserted phenomenon here is what is referred to as “increasing encephalisation quotient, or EQ”. EQ is a measurement of the size of the brain cavity compared to body mass. Looking back through the fossil record, EQ appears to increase. So it seems that evolution is selecting for greater and greater relative brain sizes and therefore intelligence. However, Lineweaver asks us to consider what would happen if we were not humans but elephants - in which case we would be more interested in what he calls the “nasalation quotient” (NQ), which the fossil record also shows gradually increasing. All of the ancestors of modern elephants had shorter trunks, apparently showing a trend in evolution towards long trunks. The point is that the trend towards longer noses is not a trend at all. Increasing NQ is an illusion. We can see that easily: our own noses are no longer than those of our ancestors, and there is no reason why the trunks of modern elephants should continue to get longer, rather than get shorter or stay the same. That is the point of random mutation and natural selection. The “trend” towards long noses in elephants is not a general trend that occurs across species and certainly cannot be regarded as a convergent feature of evolution. Similarly, human-like intelligence cannot be declared convergent merely because there seems, in retrospect, to be a trend.
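Lineweaver's point - that working back from any extreme endpoint manufactures an apparent trend - is easy to see in a toy simulation (my own sketch, not his analysis). Let a trait drift at random, with no bias whatsoever, in many independent lineages; then inspect the ancestry of whichever lineage ends up most extreme:

```python
import random

random.seed(1)
GENERATIONS, LINEAGES = 1000, 10_000

def lineage():
    """One lineage's trait history under pure, unbiased drift."""
    trait, history = 0.0, []
    for _ in range(GENERATIONS):
        trait += random.gauss(0, 1)   # no step favours bigger values
        history.append(trait)
    return history

histories = [lineage() for _ in range(LINEAGES)]

# Drake's move: start from the most extreme "modern" lineage
# (read: us, or the longest-trunked elephant) and look backwards.
extreme = max(histories, key=lambda h: h[-1])
for gen in range(0, GENERATIONS, 200):
    print(f"generation {gen:4d}: trait = {extreme[gen]:7.1f}")
print(f"generation {GENERATIONS - 1:4d}: trait = {extreme[-1]:7.1f}")
```

The printed record typically climbs steadily - a convincing-looking "trend" - even though, by construction, evolution here selected for nothing at all.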
So although there might be a lot of planets out there - and habitable ones - the question of whether there is intelligent life out there isn't so much a question of planets and astronomy. It's a question of evolution and here the biological numbers just swamp the astronomical ones.
This post is just an edited part of a much larger article on life elsewhere in the universe you can find here.
Nutrition and exercise struggle to be sciences. This is not to say that rationality and experiment cannot be, or are not, applied to these fields - but rather that the conclusions so far reached do little to inform a person interested in the “best” way to eat healthily and exercise efficiently. That is to say: sports science has so far been unable to make testable predictions of the sort that are repeatable or informative for the population generally. But this is not to say sports science is not worth pursuing. It is. There always exists an infinite frontier of ignorance before us in all subjects - so more investigation is always called for. The problem with proclamations in the sciences of nutrition and exercise is that they are so often full of confidence and so devoid of carefully checked content. These fields are complex. Huge numbers - literally incalculable numbers - of variables require computation in order for us to approach anything deserving of the word precision. So we must be skeptical of anyone claiming to be in possession of very specific recommendations for what foods you should or should not eat, or how you should or should not exercise. Think of stuff like: the top 10 list of most healthy foods. Yes, even when it's from a medical website. That link in particular is a catalog of poorly controlled studies. Modulo allergies, more or less: eat what you like.
So first, nutrition. The chemicals we refer to as “nutrients” number in the dozens. And we simply do not know what all the nutrients a human requires happen to be, let alone precisely how much of each is required. It is true we have some very general guesses - but that's all they can ever be, because we know that different people will require different amounts, and we simply haven't quantified much at all - even with questions apparently as simple as that. For example, even with the nutrients we are most familiar with, multiple studies disagree about how much we "need". Indeed it seems quite major changes are made periodically to what amount of (say) sugar we need (if any). What, for example, constitutes a "deficiency" in something as familiar as Vitamin C? This is not a trivial question. Is merely not having scurvy enough to say one has had enough? Apparently not.
Anyone who takes even a passing interest in what is "healthy" versus "unhealthy" food will (or perhaps should) appreciate how misunderstood a nutrient like fat is. Some chemistry is needed to appreciate nutrition - fat, like the other macronutrients, is not a single chemical but a word describing an entire class of compounds. Some people might be less interested in the question of what fat actually is than in asking: how much do we actually need? What is a "healthy" amount? This question has no answer, for different people will have different requirements - one input to which is what they want. If the greater part of someone's day is devoted to the sport of sumo wrestling, they might require more, precisely because of what they want. And this is what takes nutrition away from being an entirely objective science like chemistry to something with a very personal, subjective, moral element. Desire becomes a factor, and nutrition ceases to be a pure science in the way chemistry is, because what is “nutritious” for you is different to what is "nutritious" for me.
Is 100g of fat a day unhealthy? The question itself is meaningless and admits of no rational answer. Is the 100g for me or you or someone else? Is the fat mono-unsaturated, poly-unsaturated or saturated? If a person is literally starving - and 100g of fat is all that is available - that 100g is more than merely 'healthy': it could be the difference between life and death. If a person is a marathon runner, they will burn up the equivalent of the energy contained within that fat before the race is over. Is the fat eaten with sugar, or not? Does the person already have clogged arteries? Is the person about to die from fatty acid deficiency?
So whether fat is “good for you” or not is not a question like “is cobalt a metal?” - and indeed it never will be that sort of question. Because it is a question not simply of science (what is the case) but also of morality - it is about what should be the case.
Of course this analysis does not apply only to fat. It applies to all nutrients - and every food more broadly. What amount is healthy? The answer will always depend upon a multitude of factors. The range of possible dosages of any nutrient is vast, and currently there is no known general-purpose answer - no mathematical formula into which you can enter your height, weight, caloric intake per day, metabolic rate, etc., that will enable you to prescribe for yourself what amount of whatever nutrient you will need to achieve your goal, or what amount - for you - might be harmful. Until that day comes, what amount is "healthy" versus "unhealthy" will reduce to a very rough guess. And those scare quotes around the words "healthy" and "unhealthy" are no accident. There is no good definition that sharpens the meaning of those terms into something more useful than "not sick" and "sick" (and even there we have circularity). So, it seems, a food is "healthy" if it does not make you sick. But, poison and bacteria aside, we do not know which foods, or how much of them, are required to make an otherwise healthy person ill. Or if indeed the same foods make the same people sick. Those with apple allergies can rightly say apples are extremely unhealthy.
Is alcohol a nutrient? Some studies at least seem to suggest it's better to drink heavily than not at all (while moderate daily drinking seems to be best). Yet, perversely, in the state of New South Wales in Australia, alcohol is taxed more highly than other commodities because of its supposed detriments to health, and venues that serve alcohol must suffer severe restrictions on opening and closing times. Bad ideas about nutrition can lead to bad policies, and jumping to one conclusion over another only ever leads to irrationality. At every turn, it seems, someone with not enough knowledge and too little skepticism has more than enough confidence to tell you what to do when it comes to what you should eat or drink. And they might even punish you with fines and prison if you deviate from their ideas about nutrition. This is no small thing.
Next: exercise. Among health gurus there is a mantra that often goes something like this: exercise is more important than diet. Or: diet is more important than exercise. Or: diet and exercise are equally important.
But important for what? For getting good at exercise? For getting as massive as a sumo wrestler? For the nebulous concept of "health"? Typically, it is about weight loss. And often about getting "fit".
Let's take weight loss, because that seems to be the most common reason people become atypically obsessed with how much activity they are doing and what they are consuming. But even when it comes to a well-defined goal like "I want to lose 10 kg of my body mass", no answer can be given as to what is more important - diet or exercise. A person who eats little but high-fat hamburgers can still be very thin. Think of a person who runs regular marathons. Anyone who runs 3 or 4 marathons (or more) a week will not be fat. There has never been a person who runs regular marathons who is also fat. You simply cannot consume enough kilojoules per day to put on weight if you are expending an extra ten thousand or so in every run. It just will not add up mathematically. This is proof positive that diet is not always more important than exercise for keeping fat off your hips. Moreover, we all know of the glutton who seems to be able to eat hamburgers and chocolates and never get fat despite never exercising. From what we know, these people are burning a lot of energy just by resting - and why this is, we just do not seem to know. We hand-wave the mystery away by saying "they have a fast metabolism". (In sports science and medicine it's called the "Basal Metabolic Rate", or BMR, and there are altogether unreliable formulae to help predict what one's energy expenditure might be given certain constraints - one is sketched below. But giving a mystery a label, and coming up with an empirical formula, doesn't actually provide any answers.) Indeed all this simply raises the question: how can two otherwise extremely similar people respond so differently to identical food intake and exercise? This is a good scientific question we do not yet have good answers to. No doubt the answer in large part lurks within the genes - but where exactly, we do not know. These questions are the "dark matter" and "dark energy" of sports science: merely statements of our collective and scientific ignorance.
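For the curious, this is roughly what one of those empirical formulae looks like. The Mifflin-St Jeor equation is a real published regression for estimating BMR; the sketch below just shows how tidy the formula is - and how silent it is about how far any individual will deviate from its output:

```python
def mifflin_st_jeor_bmr(weight_kg: float, height_cm: float, age_years: float, male: bool) -> float:
    """Estimate Basal Metabolic Rate (kcal/day) via the Mifflin-St Jeor equation.

    This is a population-level regression: individual BMRs routinely
    deviate from it by hundreds of kcal/day - the 'error bar' problem.
    """
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + 5 if male else base - 161

# Two people 'identical on paper' get the same tidy estimate...
print(mifflin_st_jeor_bmr(90, 180, 25, male=True))  # 1905.0 kcal/day
# ...but their measured resting burns could easily differ by 10-20%.
```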
One place where people have come close to something like a scientific approach to nutrition and exercise is the sport of bodybuilding. Here we can actually see, before our eyes, the effect of different theories. We can test the effect that eating 1000g of protein per day, 500g of carbohydrates and 200g of fat has upon the body of a 25-year-old, 90kg male who follows a very strict, particular exercise regime and who, within the best measurements available, burns some known number of calories per day.
But even here the error bars are beyond anything any other quantitative scientist would regard as acceptable for establishing trends. Some people will simply put on more muscle and burn more fat, even though they do the same amount of weight lifting and eat the same meals - and this means general advice is typically not going to apply to - well, anyone. Because there is no "general person". When at rest, it is difficult to know how many calories John burns compared to Joe. And even if we do get an estimate, we do not know whether, ten minutes after the test is done, John's caloric burn will increase or decrease due to an almost infinite number of factors. We simply do not know enough about human exercise physiology to make such judgements. And even if we had a perfect experimental specimen - say John, whom we have studied for some years, minimising those error bars in the process - we know it is a very unreliable thing to extrapolate from this single data point to others - like Joe - let alone to someone who is not 25...and a male...and a bodybuilder.
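To make the error-bar point concrete, here is a toy calculation. The ±15% and ±20% uncertainty figures are invented for illustration (though plausible in scale); the point is how the resulting spread swamps the quantity a dieter actually cares about:

```python
# Toy illustration (invented but plausible numbers): measurement
# uncertainty swamps the effect a dieter is trying to engineer.

bmr_estimate = 1900          # kcal/day, from some empirical formula
bmr_uncertainty = 0.15       # assumed +/-15% individual deviation from the formula
activity_factor = 1.6        # a 'moderately active' multiplier
activity_uncertainty = 0.20  # assumed +/-20% day-to-day variation in activity

low = bmr_estimate * (1 - bmr_uncertainty) * activity_factor * (1 - activity_uncertainty)
high = bmr_estimate * (1 + bmr_uncertainty) * activity_factor * (1 + activity_uncertainty)

print(round(low), round(high))  # 2067 4195 (kcal/day)
print(round(high - low))        # 2128 - over four times a typical 500 kcal/day 'deficit'
```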
So for all we know about nutrition, using this very general knowledge to make specific predictions (like what should *I* eat to maximise *my* weight loss/muscle gain?) is not yet possible without further experimentation.
Of course what we do know is worth knowing. We know water is essential. We know protein is needed to build muscles. But whether broccoli or blueberries really do have anti-cancer properties simply is not known. And anyone who asserts they do is being dishonest. Worse than that - they are actually hurting people by giving false hope. Cancer, for example, we know is not a single disease but many. And potential treatments, or preventatives, are going to be as broad as the disease itself. If someone tells you "Wheatgrass is going to prevent cancer", ask: which cancer? If they say "All of them", ask the best question of all: "How do you know?" If the best they have to offer is one of the thousands of "natural living" websites, you know they don't know. No one does. If something really did prevent cancer, we would know about it - in the same way we all know aspirin works for mild headaches. We all know what works - and we use it. That's a good rule of thumb. Common knowledge, when it comes to "health", is often superior to that from some purported expert. This is because "health" is so woolly and nebulous a term that broad general knowledge actually works better than narrow specific claims.
All this is to say: be extremely skeptical. Almost all the nutrition and exercise advice you will be given is false. Almost all of the posts on Facebook, most of the tweets on Twitter - most of the stuff on supposed "health" websites - are replete with misconceptions. It would typically take very high-level expertise - like degree-level knowledge in nutrition, or a PhD in biochemistry - to sift the "discovered through the application of good scientific research, carefully checked and repeated by multiple peers" stuff from the "just a rumour" stuff. Most of the time what already passes for general knowledge is pretty robust. It's best to get a bit of protein, fat and carbohydrate each day. Drink when you're thirsty. Have some veggies to keep your vitamins up. It doesn't get much harder than this - and even those very general rules might be too strict.
Most of the time, in a rich country, you can eat most of what you want and do as much or as little exercise as you like. You will be able to tell just by looking in the mirror whether you like what you see. In a rich country you would actually have to do extra work to suffer some deficiency, or to risk cancer from some food you eat, or to risk any bad disease through the amount of exercise you do not do. Of course, if you weigh 200kg and are thinking about buying one of those scooters to get you to McDonald's, it should be obvious to you that you're at higher risk of disease. It's common sense. You can see it in the mirror.
We do not know to what extent this or that chemical in food causes diseases like the many cancers that plague the population (remember, when I use the word "chemical" I use it in the scientific sense - so I include things like "water" and "oxygen" and "protein" as well as "potassium benzoate"). Likely, none of them on their own really does cause cancer. Almost all "this thing causes cancer" scare articles are complete lies. One of the most persistent lies is the one about artificial sweeteners. The fact is: artificial sweeteners are great. They are not unhealthy - not in the slightest, under any definition of "unhealthy". There is nothing wrong with them in moderation.
If it's what a reasonable person would say is "edible", you're not allergic to it, and it fits the definition of "food" as a starving person might understand the word, you can relax about it. Food is going to be considerably less of a cancer risk than, say, the granite and basalt rock beneath your feet at this very instant, which is mildly - and entirely naturally - radioactive. And which is impossible to escape (and if you think living at higher altitudes, away from the bedrock, might help - think again: there's even more radiation coming from the sky, also entirely naturally). Radiation, like light, just is. Right now, the walls of your house, the sky above your head and the ground beneath your feet contain radioactive materials - and they are there naturally. And we live with that. Are they harmful? Not really. Are they harmless? Well, no - not really. But they are a fact of life, and we have no means to guard against this relatively harmless but nonetheless damaging background. And it doesn't matter in the long run if you live above slightly more granite than your neighbour, because they just might get a little more cosmic radiation. The point is: if you are concerned about stuff in your food potentially causing disease, you should be far more concerned about the rocks on which your house is built. The latter, we know, can - according to our best physical theories - damage DNA. The former, we are still guessing at the relevant mechanisms.
So eat what you like and just remember you cannot get around the laws of physics - in this case, the first law of thermodynamics: energy cannot be created or destroyed, only changed from one form into another.
When applied to nutrition this simply means: if you want to lose weight, the amount of energy you take in must be less than the amount of energy you expend. If you want to gain weight, the amount you take in needs to be more.
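As a toy piece of that bookkeeping - using the commonly quoted figure of roughly 37,000 kJ per kilogram of pure fat (real body fat stores somewhat less per kilogram, so treat this as a rough upper-bound sketch):

```python
# First-law bookkeeping, rule-of-thumb figures only - not a prediction.
# Assumes ~37,000 kJ per kg of pure fat; adipose tissue stores somewhat less.

KJ_PER_KG_FAT = 37_000

def weeks_to_lose(kg_to_lose: float, daily_deficit_kj: float) -> float:
    """Very rough estimate: how long a steady daily energy deficit takes."""
    return (kg_to_lose * KJ_PER_KG_FAT) / (daily_deficit_kj * 7)

# e.g. a steady 2,000 kJ/day deficit aimed at losing 10 kg:
print(round(weeks_to_lose(10, 2_000)))  # ~26 weeks - about half a year
```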
So you can eat all the hamburgers and chocolates you like. But if you want to lose weight, you had better burn more calories throughout the day so as not to store that stuff as fat. Because - in a healthy person at least - most excess calories will get stored as fat. That really is all there is to it, until - years from now - we have a precise, predictive science of nutrition and exercise. We just aren't there...yet.