Bayes’ theorem is a mathematical formula that allows us to determine the subjective probability of an event occurring given some information we have about it. Further, it allows us to (repeatedly) recalibrate that probability as new information comes to light. The theorem itself is uncontroversial. Let’s see how it works. Don’t be turned off by the formula - we’ll explain it in plain English and give some examples.
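In symbols it reads:

P(A|B) = P(B|A) × P(A) / P(B)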
This says that the Probability of thing A happening (given that thing B has happened) is equal to the Probability that thing B happened (given that thing A happened), multiplied by the Probability of A, all divided by the Probability of B. Phew. Ok, a common-sense example:
Imagine you’re inside, in bed, the blinds drawn. Your significant other has already left for work. You turn on the television, the weather report is on, and they say “There’s a 60% chance of rain” (this is P(A)). Is it actually raining? What are the chances? 60%, presumably. But now imagine you notice that the umbrella - normally kept by the door - is gone (this is event B). And you know your partner doesn’t watch the news. You should recalibrate. The chance of rain (A) has now been affected in your mind by some new information (B). If you’d never turned on the news but had noticed the umbrella was gone, you might have guessed there’s a 50% chance of rain (of course most people never assess chances like this - it’s too hard to get reliable numbers of this sort). All pretty uncontroversial. But let’s just consider (and move on quickly from) the other common-sense, true idea: you can be totally, utterly wrong about your assumptions. For example: when you switched on the television, it was actually a recording of last week’s news. And that umbrella? It’s sitting downstairs. Outside right now it’s really sunny. If you’re a creatively critical thinker you might actually consider all this and test your ideas before taking any action or making any decisions. Or assigning probabilities.
This is all hand waving: let’s use a better example and plug in some figures.
One common way of appreciating this is to consider something like a medical test. This is the kind of example that is always used to explain the usefulness of Bayes' theorem. And indeed it shows, quickly, how useful the theorem is - and simultaneously how limited it can be. Medical tests in real life can never be perfect. There are always errors. Things go wrong with instruments, or with the ability of chemicals to react as expected with things in your blood. Doctors make mistakes. And so on. But if you run enough trials of a particular medical test you can begin to build up enough data to say something like "9 times out of 10 when this test says you have (for example) hepatitis, you have hepatitis."
So if there's a 9/10 chance it's correct, there’s a 1/10 chance that the test gave you a “false positive” (said you had it when you didn’t). But there’s also such a thing as a “false negative” - if the test tells you you don’t have the disease, there’s some chance you actually do. And this number could be anything at all, not necessarily related to the rate of false positives (so just because the false positive rate is 1/10 it doesn’t mean the false negative rate is 1/10 or 9/10 or whatever - the two are generally independent).
So let’s go down this road and see what Bayes’ theorem says about this sort of thing.
Let’s say you’ve been to Brazil for the Olympics in 2016 and you come home and don’t feel so good. Suddenly you’re really worried: “Have I got the Zika virus? It's been all over the news.” You rush off to the doctor to get tested.
“Look,” says the doctor, “don’t worry. Zika is actually really rare. There's lots of media hype. Of all the people who come back from Brazil, only 1 in 10,000 will actually have the virus. But you’re in luck - to put your mind at ease we’ve got a test that is 99% accurate and it gives instant results. So let’s get this over and done with right now and you can go home confident in the knowledge of whether you are infected or not.”
This sounds promising. You of course immediately have the test done. Blood is taken. It’s run through a computer…but the screen flashes red. It’s a positive test! Oh no, you’re infected!
Woah tiger. Stop right there.
Are you really? The test is not perfect, remember? In fact it’s wrong 1% of the time. The doctor says “99% accurate” but what they mean is: if you *are* infected the test will say so 99% of the time, and if you are *not* infected it will correctly come back negative 99% of the time. In other words: the doctor is implicitly giving you both the false positive rate and the false negative rate (1% each).
(Just to labour that point as an aside: in this case the 99% accuracy means the test is also wrong 1% of the time for people who are actually infected - it misses 1% of genuine infections. But the false negative and false positive rates do not need to be the same, and sometimes we need to take that into account. On false negatives: if you’re unlikely to have the disease AND the test tells you that you don’t have it, well…you probably don’t have it, right? Well yes - but there’s still a chance. Can we quantify that chance? Absolutely. In our case I'm just assuming that for this Zika test the false negative rate is also 1% (quite high). So even if you do have the virus, there’s a 1% chance the test won’t pick it up and will tell you that you don’t.)
Again: the false positive and false negative numbers do not have to be the same and usually they are not. A test that is exactly 99% accurate for both positive and negative results isn’t realistic - but we’re using that number here because it’s easier to work with. Just appreciate that you could have a 1% false positive rate and, say, a 5% false negative rate, or whatever. So anyway, moving on:
But happily a 99% accurate test is highly accurate, right?
No! 99% is actually not a high probability at all. What counts as high is relative to the situation. If there's a 99% chance you will win the million dollar lottery: yes - buy a ticket! But when the downside is serious, where the stakes are high - forget it. For something where life and death is involved, 99% is not high enough. Consider: would you get on an aircraft with a safety record of 99/100 safe landings? I wouldn’t! Would you eat at a restaurant where every 100th customer got really severe food poisoning (but everyone else just had a wonderful time, strangely)? Surely not. Compared to the probabilities we actually demand in those situations, 99% isn't high at all.
You need to recalibrate your sense of what counts as a high probability in this context to appreciate why Bayes' theorem sometimes (rather often, actually) throws up counter-intuitive predictions. For aircraft I’d really need the number to be 9999/10,000 or something like that to feel even marginally better. As for crashes where everyone onboard dies: I want the number to be better than 999,999/1,000,000 (and in reality it’s more like only 1 out of every 24 million flights ends in a fatal crash, or something like that).
So let’s look at Bayes’ theorem in plain English first:
Given that the chance of being infected with Zika is 1/10,000 if you’ve been to Brazil (and you have) - *now* we want to know: what’s the chance you’re infected, given you’ve been to Brazil AND you have tested positive on a test that’s 99% accurate? It’s not just 99%. Wait, what? Why?
Put it this way: suppose you’ve never been to Brazil. You’ve been living in isolation in Antarctica, say, for the last 2 years with little outside contact, and someone tests you for Zika and you test positive on a test that’s 99% accurate - what’s the best explanation? Say an expert tropical diseases doctor tells you the chance of such a person having Zika is actually 1 in 1 trillion. Then you need to think: is it more likely that you actually have Zika, or that the test has given a false positive? By which I mean: what's the better explanation? The better explanation is: the test has made a mistake. Repeat it. (Which, by the way, is exactly what is often done, of course!)
These matters are sometimes hard to think clearly about. The chances are you don’t have Zika if you're the person in Antarctica. The 1/1,000,000,000,000 number simply swamps the 99% accurate one. Yes, you might have Zika - but based on the mathematics of the situation and nothing else, if you want to make a prediction: you’d want to wager you don’t. (But typically more than the mathematics matters in these situations. Namely: what are our reasons for thinking you had Zika at all? It can't just be the positive test, right? Surely you'd be feeling sick if you're getting tested, no?)
Back to the original situation of you being in the doctor’s office and you have actually been to Brazil. Remember our numbers of 99% and 1/10,000.
What’s the probability you’re infected with Zika given that 1/10,000 people like you are actually infected and you’ve just tested positive on a test that’s 99% accurate?
Our formula once more:
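P(A|B) = P(B|A) × P(A) / P(B)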
We need to say what A and B mean here. A = you are infected with the Zika virus. B = you test positive.
The P(A|B) bit on the left hand side is, in this case, “the probability that you have the disease given you tested positive”. It’s what we do not know. The P(B|A) bit is, in this case, “the probability that you test positive if you actually have the disease”. We know this. It’s 99% or 0.99. The P(A) bit is: the probability that you have the disease at all, without the information contained in B. In other words: 1/10,000 or 0.0001.
The P(B) bit is: the probability that you test positive at all, making no other assumptions whatsoever. To figure this number out, just understand that if you’re the 1 person in 10,000 who has Zika, there’s a 99% chance the test correctly reports it (in other words 0.0001 × 0.99) but also (and this is really, really important!) you could be one of the 9,999/10,000 who *don’t* have the disease and yet are unlucky enough to be in the 1% who test positive anyway!
So that’s how we get P(B) in this case: we need to add the positive tests that are correct (99% of the 1/10000 who have it) to the positive tests that aren’t (1% of the 9999/10000 who don’t).
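To put the same thing in whole numbers: imagine 10,000 returning travellers. About 1 of them actually has Zika and the test (almost certainly) flags that person. Of the other 9,999 who don’t, about 1% - roughly 100 people - test positive anyway. Those roughly 101 positive results are what P(B) is counting.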
So P(B) becomes 0.99 × 0.0001 + 0.01 × 0.9999
Now let’s put that all into our formula:
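P(A|B) = (0.99 × 0.0001) / (0.99 × 0.0001 + 0.01 × 0.9999) = 0.000099 / 0.010098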
And my calculator spits out the answer of 0.0098 or 0.98%. In other words there’s still less than a 1% chance of actually having the disease. Isn’t that very strange? We went to get a test for a disease, the test is 99% accurate, and we actually test positive - yet our chance of having the disease is less than 1%. What?! Well, that's just how it is, because the chance of having the disease at all was only 0.01% in the first place. Getting a positive test raises that chance not to 99% but only about 100-fold, to around 1%. It's counter-intuitive and weird but true! And that kind of counter-intuitive stuff is what some people just love. And having learned it, they want to apply it to almost everything. I mean...almost. Everything. Here's where the famous philosopher Nick Bostrom applies it to concerns about robots who might take over the world (and who will reason like perfect Bayesians), and how we should try to emulate that way of thinking (and be better Bayesians), and my thoughts on why this is all very wrong: Superintelligence critique.
Back to the example: in a sense this isn’t really how medicine works at all. Doctors don't do much in the way of calculations like that - they never have reason to. They simply aren't testing people for diseases unless there are other reasons to. Here’s what I mean: the 1/10,000 number only applies if the person in front of the doctor is truly some random person. But they're not. They're a sick person with symptoms. Again: if you are feeling sick, then already you’re not a random person. Your probability is going to be higher. Much higher. And tests are often far more accurate than 99% (well, sometimes).
I’m not interested in the accuracy of the test for now. What I am interested in is how Bayes’ theorem then allows us to change this probability. So let’s say you have new information: something like the doctor learns that you also have a rash consistent with the virus. That changes everything. People with a rash, who feel sick, who’ve just come back from Brazil - they’re not 1/10000. They’re 7000/10000 or 70% (or more!) likely to have Zika (or something like that, I don’t know). So we can change the numbers. And we can keep changing the numbers as we learn new stuff. And often the test is the last resort. The doctor knows, for all intents and purposes, that you've got Zika. They might have seen it before. The test might be merely to satisfy some law. When it comes back positive for a person who, in the doctor's mind, already definitely has the disease, they just go "Yep, as I fully expected. Consistent with my prediction."
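If you want to see how that sort of updating plays out numerically, here’s a minimal sketch (in Python - not something from the original example) using the illustrative figures above: the 1/10,000 prior for a random returning traveller, and the rough 70% guess for someone who is sick, has the rash and has just come back from Brazil:

```python
# Minimal sketch: the same 99% accurate test applied with two very different priors.
# The numbers are the illustrative ones from the text, not real Zika statistics.

def posterior(prior, sensitivity=0.99, false_positive_rate=0.01):
    """P(infected | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

print(posterior(1 / 10_000))  # random returning traveller: ~0.0098 (under 1%)
print(posterior(0.70))        # sick, rash, just back from Brazil: ~0.996
```

Nothing about the test changes between the two lines; only the prior does - and the prior does nearly all the work.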
That's more how doctors work (but forget the numbers). They’re not doing Bayes’ theorem and performing calculations. They’re looking at symptoms and using rules of thumb (far more useful than Bayes’) in making the diagnosis. A fever? A rash? Been to Brazil? A positive test? Ok: I’m sad to say YOU have Zika. If it was just some random person off the street: ok, I’d assume false positive. But not you. Look at you: you’re at death’s door and the test just confirms it. In fact if the test came back negative, I’d assume it was a false negative. (And so on).
Bayes’ theorem can’t actually tell us much at all here that we didn't already know. Not when, for example, we have to arrive at a creative explanation for some phenomenon. Imagine a similar situation but where we've got no reason to presume Zika right away. As a doctor, we might have 10 possible explanations all logically consistent with, say, a fever and a rash. Each time we ask a question or perform some new test, we might *rule out* certain of these theories. This is the way it works. This is what epistemology says. Which is to say: this is how the Popperian/Deutsch conception of science and the rational criticism of ideas says it works. But that is not how a person who calls themselves a "Bayesian" sees it. A Bayesian takes Bayes' theorem (which is uncontroversial) and attempts to apply it in a serious way to a situation like: the creation of a new theory to explain some phenomenon.
The Bayesian may say something more like: we give a higher probability to some of the 10 possible theories being actually true.
The problem with this idea is that, if we take that concept of truth seriously, it makes a mockery of the very concept. Because although truth exists, it's rarely (if ever) fully contained within a single theory. No theory can be finally (or "actually") true. It can be conditionally true, sure. But an 80% likely-to-be-true theory is what? Well, it's going to be proven false just as surely as a 10% likely-to-be-true theory is at some point. Before Newton's theory of gravity was proven false, what probability would a Bayesian have assigned to it being true? Presumably something very high. If we're a doctor with our 10 theories, all consistent with the data, then one moment we’ve got a theory that is 80% likely to be true and another that’s only 30%, and then some new information comes along and the first theory drops to 0% and the second jumps to 95%. But in fact, all along, both could be completely false.
And this is the most crucial point to consider. Bayes' theorem cannot possibly assign a probability to the truth of a theory we do not yet have. And theories we do not yet have are the very business of scientists to create. That’s the function of science: to explain things (and the business of scientists is to find explanations for things not already explained). To solve problems. And this requires creativity. The notion that science is only about weighing up existing ideas and assigning them probabilities (to what end?) is flat out false. It’s never done, and nor does it need to be done. Not ever, in reality.
Even in medicine, where such a thing could conceivably be useful: doctors are not performing calculations using Bayes’ theorem in order to decide on a treatment regime. No. Instead what they actually do (and this mirrors what happens elsewhere in science from oceanography to ornithology) is that experimental tests or observations (evidence!) of some kind rule out theories until a best one is left standing.
You might have 10 possible diseases. We do a test. We rule out 9. We treat what’s left. Does that mean you as the patient are certainly suffering from “what’s left”? No! Mistakes can be made, and you just might (in a highly unlikely case) have a disease we never thought of in the first place, or one we’ve never encountered before. That is: something not on the original list. This, of course, is not that uncommon.
A doctor is not there putting probabilities next to the 10 and then deciding that we should treat the highest probability one as the most true one. Instead, as I said, some are actually ruled out decisively by experiment. If they are not then the following rare situation can happen:
We have 3 diseases a patient might possibly have, and we cannot distinguish between them given the symptoms. We treat the most dangerous first. If the patient doesn’t respond, we try the second worst. Then the third. If no treatment works, we’re in a bind. We need to think creatively. But this is what happens. Probability need not enter into it. What matters is not just what the patient most likely has, but what the patient is most threatened by.
Again, imagine we have 3 diseases that are possible diagnoses. And Bayes’ theorem somehow said “It’s a 70% chance you have X, 50% you have Y and 30% you have Z”. Then what? Well surely it depends on exactly what X, Y and Z are. If Z is something that will kill you in 2 days flat unless you get the one medicine that will cure it, then it needs to be the priority over X, which is relatively harmless. As a way of moving people to action, Bayes’ is also rather useless much of the time. Not all of the time - sometimes there are important places where it can play a role. But it can never play a role in discovering that the actual answer, for our patient who might have been suffering from X, Y or Z, is that they had condition W the whole time. Bayes' is actually a distraction from even looking in that direction.
What Bayes’ theorem cannot do is actually perform the function that scientists and philosophers who call themselves Bayesians say it can: to be a philosophy of inferring the best explanation. It cannot possibly create new explanations (which is, and should be, the focus of science as much as gathering new evidence) and nor can it tell us what we should do. If we have a problem and we have no actual solution to it, Bayes’ theorem cannot possibly help. All it can do is assign probabilities to existing ideas (none of which are regarded as actual solutions). But why would one want to assign probabilities to possible solutions, none of which are known to work? There can be no reason, other than if one wanted to, say, wager on which idea is likely to be falsified first. But we must know - following Faraday and Popper, and Feynman and Deutsch - that all of them will be falsified eventually. Your theories should be held on the tips of your fingers, so said Faraday, so that the merest breath of fact can blow them away. So no amount of assigning 99% probabilities to the truth of them makes them any more "certain" or "likely to be true". We need a pragmatic approach: take the best theory seriously as an explanation of reality and use it to solve problems and create solutions and technology - but don't pretend that its content is "certain" to any degree. It's just useful, and contains more truth than those other theories that have gone before and fallen to the sword of criticism and testing.
When we have actual solutions in science they go by a generic honorific title. We call them “The scientific theory of…”. So for example we have “The scientific theory of gravity” (its given name is General Relativity). We don’t need to assign a probability to it being true. We regard it as provisionally true, knowing it is superior to all other rivals (insofar as there are any, and there are not!), and we use it as if it’s true (this is pragmatic). But actually we expect that one day we will find it false. Just as we did with Newton's theory. But this philosophy - that our best theories are likely misconceptions in some way - has no practical effect on what we do with them. We take them seriously as conditional truths about the world. As David Deutsch has said: it would have been preferable if long ago we'd all just decided to call scientific theories "scientific misconceptions" instead. It would save much in the way of so many of these debates. We'd all know that our best explanations, though better and closer to true than others that went before, are nonetheless able to be superseded by better ideas eventually.
Again: We have a solution to the problem of how diversity in nature arises. It’s called “The scientific theory of evolution by natural selection”. We don’t assign a probability to it being true. We don't need to. There are no scientific or philosophic alternatives. But yes: we believe it’s possible to improve that theory and our understanding of it. Always. There is no need to assign numbers to it.
Sometimes we have unanswered questions. Problems. For example: is the force that keeps electrons in place around nuclei in atoms constant in time and space? (The precise way of asking this is: is the fine structure constant changing?) All of the observations we have made so far seem to point to “No” - it’s not. What is the probability that it’s changing? Well, for now, we’ve no good reason to think it is. There’s simply no reason to say “It’s 3%”. There is always the possibility we are wrong. It might be changing. In which case the chance it’s changing is 100%. But actually I’d rather not even say that: probabilities are the wrong language here. 100% would imply some degree of “certainty”, and that’s what I’m arguing against: that there even are degrees of certainty. There are not. There are better and worse theories. The worse theories have already been falsified, or can be criticised as worse according to objective criteria. The better theories have so far survived this rigorous critical process in science (or elsewhere). But at no point do we need to say "We're certain *this* theory is true. Or 90% true. Or 90% likely to be true. Or 1.9 times more true than its next nearest rival." None of that is required. Just the binary distinction between being "The explanation of..." and "Not an explanation of...". And if we have an open question, then open it remains until we have "The explanation of...".
So it is better to take the stance that a theory is either conditionally true or it is not. When we have an unanswered, open question it’s fine to say “I don’t know” without needing to assign some number (itself liable to be proven completely false) to the theory.
So is the explanation for a fine-tuned universe the fact that there exists a megaverse of other universes with different physical laws? I don’t know. Well, can we assign a number? We could, but to what end? Whether the number is 10% or 90%, I’ve no more reason to believe it’s certainly the case that such a megaverse exists than that it doesn’t. What we need to wait for is evidence of a kind that would best be explained by such a theory. Absent that evidence, we have no way to know. Should we believe in a megaverse? Should we believe a megaverse is 70% likely to be true? No. There’s no reason to assert this. We just say “We’ve no reason to prefer that theory over others, all else being equal.”
Probabilities alone cannot tell you what course of action to take. Sometimes the best explanation is one no one has even considered yet.
And that’s something Bayesians don't seem to consider.