
Philosophy of science


a guide to
david deutsch's 2016 paper 

"The Logic of Experimental Tests, Particularly of Everettian Quantum Theory" 
(for non-physicists)
Here I want to make some remarks on "The Logic of Experimental Tests, Particularly of Everettian Quantum Theory" by David Deutsch. This paper should be required reading for any student interested in the philosophy of science. Yet it is hard going, and those without a physics background may stumble in places. That said, it can be read, more or less, by ignoring the few pieces of mathematical notation and quantum physics nomenclature found in it. So what I want to do here is denude the paper of some of its jargon and so attempt, however poorly, to reconstruct some of the arguments (later with extensive, lengthy quoting) for an audience that might not persevere with the original. I think the original paper is a seminal work, and for further reading of course consult the original paper found here. Some of the paper contains material that can be found elsewhere - particularly in Deutsch's previous books "The Fabric of Reality" (1997) and "The Beginning of Infinity" (2011). However, there is much new material in this paper that supplements (and is supplemented by) those books, along with some excellent clarifications of key points in the philosophy of science (including, as one may guess from the title, the actual role of the experiment in the sciences).

Although the central concern of the paper is a defence of the role of explanation in science - and so an explanation both of explanation itself and of the purpose of experimental tests - another crucial point emphasised throughout the paper (as Deutsch has also stressed in his books and comments on the topic) is that quantum theory is fully deterministic. Despite what passes for high school and undergraduate teaching on this subject, and what one finds in popular books, documentaries and even textbooks, quantum theory is not a theory of a world governed by probabilistic laws, nor does it bring true, objective randomness into the world: there are no truly random processes. There may be subjective randomness, but this is explained by purely deterministic laws. Everything is determined by the (quantum mechanical) laws of motion. And those laws of motion specify that what is observed to occur happens because of everything else that happens in physical reality. That is to say: the laws of quantum theory predict that prior to an observation everything (physically) possible actually occurs, and all those occurrences come to bear upon the outcome observed. Indeed, not merely prior to the observation: during and after the observation, whatever is physically able to happen at those times happens. Necessarily an observer finds themselves in only one universe and therefore observing only one thing - not many. This should be no more mysterious than the fact that observers only ever experience a particular instant in time: they never observe many times simultaneously. Although they know that the past must have happened and that the future will come - and that the past and future are just as real as the present - the observer can only, at any given moment, experience the present.

Another example, taken from an interview found here, can even more starkly illuminate the lack of a problem where some people are confused. Imagine a statue around which a person can walk and view from any angle. At any instant the person might be found north of the statue, or south, or west, or south-west, or indeed at any position in the 360-degree space around it. What they cannot do is experience more than one perspective on the statue simultaneously (they cannot, without technology, view it from the north and the south at once, say). This is hardly a deep philosophical problem. We must stand somewhere with respect to the statue if we wish to view it. Even from above we cannot see it all. Unless and until we move, some places on the statue are hidden from us - yet we know those places, which remain hidden, are just as real as the places we are observing at any given instant. Without those parts the statue would perhaps topple or collapse upon itself, depending upon how large and massive a structure it was. What you are able to see depends entirely on what you cannot see (assuming it is a real, complete statue and not a hollowed-out cast, say, and we are not otherwise deceived - which is easy to check if we just continue to change our perspective).

So it is with quantum theory: what happens in the two-slit experiment with single particles is that all possible paths through the apparatus are taken each time a particle is fired at it. But we necessarily only ever see one. And we know that all the other possible paths really do exist because if you repeat the experiment often enough you will eventually approximate all of them - like slowly walking around the statue to gain a different perspective. (Deutsch uses the statue analogy to resolve some of the mystery about the nature of time: the subjective consciousness of an observer must experience only the present moment, and not the present, past and future simultaneously, for much the same reason that the statue viewer sees only one angle on the statue at any time. But I use it here in an attempt to convey that, if you take seriously what Schrödinger's wave equation predicts about reality, then all physically possible events actually occur even if you only ever experience one tiny slice of that reality at any given moment.) The statue analogy shows how deep this idea of the observer having a particular perspective on reality runs, and it must have something to do with how quantum theory (which explains how an observer finds themselves only ever in one universe) is to be united with general relativity (which explains how time is related to space, and that times are just special cases of universes). The single perspective on the greater whole that an observer necessarily has is at once a very simple, common-sense and true way in which we understand the space and events around us. It should also be the way in which we come to better understand the nature of time and the nature of "parallel" universes.
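The way repeated single-particle detections map out the full pattern can be sketched numerically. This is only an illustrative simulation - the wavelength, slit separation, screen distance and the simple cos² fringe formula are hypothetical textbook-style choices, not anything taken from Deutsch's paper. Each simulated particle lands at exactly one position, yet many repetitions approximate the whole interference pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative parameters (not from the paper)
wavelength = 500e-9   # 500 nm light
slit_sep = 50e-6      # 50 micron slit separation
screen_dist = 1.0     # 1 m from slits to screen; fringe spacing works out to 1 cm

# Quantum theory's predicted intensity across the screen:
# I(x) proportional to cos^2(pi * d * x / (lambda * L))
x = np.linspace(-0.03, 0.03, 1201)
intensity = np.cos(np.pi * slit_sep * x / (wavelength * screen_dist)) ** 2
prob = intensity / intensity.sum()

# Each particle is detected at ONE position only - but repeat the
# experiment 20,000 times and the whole pattern is approximated,
# like walking all the way around the statue.
hits = rng.choice(x, size=20000, p=prob)
counts, _ = np.histogram(hits, bins=60, range=(-0.03, 0.03))

print(counts[30], counts[35])  # bright fringe near x=0 vs dark fringe near x=5 mm
```

Bin 30 sits on a bright fringe and bin 35 on a dark one, so the first count dwarfs the second - even though no single run ever "sees" more than one detection point.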


Let me now turn to some lengthy quoting of the original article and make some further remarks (emphasis in bold is mine). Quotations taken from David Deutsch's original paper appear in quotation marks below, usually introduced with "Deutsch:".

Deutsch:
"In this paper I shall be concerned with the part of scientific methodology that deals with experimental testing. But note that experimental testing is not the primary method of finding fault with theories. The overwhelming majority of theories, or modifications to theories, that are consistent with existing evidence, are never tested by experiment: they are rejected as bad explanations. Experimental tests themselves are primarily about explanation too: they are precisely attempts to locate flaws in a theory by creating new explicanda of which the theory may turn out to be a bad explanation. "

This is a key theme in "The Beginning of Infinity". The idea that science is all about experiments is a misconception (handed to culture by the education system). Experiments are indeed necessary in science - but they are far from sufficient and, although crucial, not central to the whole project. The purpose of science is explanation - not experiments. Deutsch is about to come to the two types of experiments that are performed and the purpose of those experiments. But it is vital here to notice the point: bad theories or silly ideas that purport to be about the physical world do not need to be tested to be shown worthless: they can be dismissed outright as bad explanations without ever being tested. This point was made in "The Fabric of Reality" with the "grass cure" thought experiment. If a herbalist comes to you suggesting that eating 1.0 kg of grass is a cure for the common cold - what is the reasonable response? Of course it is to reject the suggestion, but on what basis? Surely not that it is "untestable" - because of course it is testable. But who will ever bother? What is truly missing from the grass-cure theory is any explanation: how on earth is the cure supposed to work? Unless the herbalist can give an answer that is a good explanation of how the 1.0 kg of grass actually interacts with viruses and destroys them, or otherwise alleviates symptoms, then we know we have an explanationless theory. And if they do offer an explanation (and this is key), it must be able to account for why it is 1.0 kg and not 0.9 kg or 1.1 kg or any of an infinite number of other amounts. Of course, I just note here that many herbalists and the like do suggest something akin to "grass cures" and they do attempt explanations - but they are never good explanations.
They are often nonsense: they conflict with some other actual piece of science. Just think of homeopathy: the idea that water remembers the medicine that was once diluted in it (although it apparently "forgets" the sewage that was also once in it). This conflicts with the idea that chemicals are the agents that actually do pharmacological work - not "vibrations" or some other nonsense like that. Back to Deutsch:

"Scientific methodology, in turn, does not (nor could it validly) provide criteria for accepting a theory. Conjecture, and the correction of apparent errors and deficiencies, are the only processes at work. And just as the objective of science isn’t to find evidence that justifies theories as true or probable, so the objective of the methodology of science isn’t to find rules which, if followed, are guaranteed, or likely, to identify true theories as true. There can be no such rules. A methodology is itself merely a (philosophical) theory – a convention, as Popper (1959) put it, actual or proposed – that has been conjectured to solve philosophical problems, and is subject to criticism for how well or badly it seems to do that. There cannot be an argument that certifies it as true or probable, any more than there can for scientific theories."

Here is just about the most contentious piece of philosophy that Popper and Deutsch (or any Popperian/critical rationalist) propose about how science works. It is poorly understood, and the opposing world view is still the dominant philosophy of science even though it is false. The false idea - subscribed to almost universally by scientists, philosophers and laymen alike - is that science somehow provides a way of demonstrating that certain theories are true, or close to true, or probably true. And moreover that the more one gathers evidence for some theory T, the more likely it is that T is true. What Deutsch, following Karl Popper, is saying here is that there is no such process. There is no method in science, no set of rules to follow, that can demonstrate theories to be either true or probably true. The purpose of science is not to "support" theories with evidence. That is a complete misconception. The truth is that science is about correcting errors in our explanations. This is a completely different view of science from the one most people have. Now some, admittedly, have read some Popper or Deutsch - but are afraid to, or perhaps just confused about, fully taking the step of actually appreciating the significance of this. I say "afraid" because there seems to be some concern that if one too strongly endorses even a true theory like this, one might seem dogmatic. I have a person in mind here: Sam Harris. Sam is an otherwise brilliant philosopher on many matters, but this is one of his missteps. He at times endorses Popper, at other times Kuhn, and at still other times induction. I won't go down this rabbit hole here, but I just observe that even smart people struggle to really grapple with what science is centrally about.
Now many scientists today do not want to call themselves "Popperians" or "critical rationalists" (which is to say they do not want to endorse the idea that science is not about "supporting theories with evidence"), and so they may call themselves "empiricists" or - many, these days - "Bayesians". For a detailed critique of Bayesianism as a philosophy of science or an epistemology see my other page here (opens in new tab): http://www.bretthall.org/bayesian-epistemology.html In brief, however: a Bayesian is essentially someone who thinks that repeatedly observing phenomena allows them to build up a probability that a particular theory is true. They can assign a number between 0% and 100% to a given theory being true, or something like this. So if the result of an experiment keeps coming out the same way, the number climbs closer and closer to 100% - though perhaps it can never quite reach 100%. But that is supposed to be okay, because on this view science does not need to generate "certainly true" theories - just "probably true" ones. So perhaps 90% is okay. Or 95%. Or maybe 99.99999%, at the 5-sigma confidence level (if you understand statistics). But one need merely consider the question: what probability would a Bayesian have assigned to Newton's theory of gravity being true at any time prior to finding it false? If a scientist actually was a Bayesian in the year 1900, then it would seem that every experiment ever devised to test Newton's theory of gravity had always corroborated it. Newton's theory correctly predicted the outcome of every well-designed and well-executed test of it prior to and including the year 1900 (and a little later). A Bayesian could do statistics on any prediction you like and generate some number, and that number would be pushing the ceiling of the magic 100%. Newton's theory of gravity - according to that philosophy of science - would be very, very close to certainly true.
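The point can be made concrete with a toy calculation (entirely hypothetical numbers - no one actually computed this in 1900): suppose each corroborating experiment is judged ten times likelier if Newton's theory is true than if it is false, and update credence by Bayes' rule in odds form.

```python
# Toy Bayesian updating in odds form: posterior odds = prior odds * likelihood ratio.
# The 0.5 prior and the factor of 10 per experiment are made-up illustrative numbers.
credence = 0.5           # prior credence that Newton's theory is true
likelihood_ratio = 10.0  # each passed test: 10x likelier if the theory is true

for _ in range(10):      # ten corroborating experiments
    odds = credence / (1 - credence)
    odds *= likelihood_ratio
    credence = odds / (1 + odds)

print(credence)  # ~0.9999999999: within 1e-10 of certainty - for a false theory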

​And yet, ultimately, it was shown to be false. It was shown false by a crucial experiment: on May 29, 1919, the great astronomer Arthur Eddington measured the amount by which starlight was bent as it passed by the Sun during a solar eclipse. Newton's theory predicted one number, Einstein's another. The amount of bending was in agreement with Einstein's general relativity but not with Newton. Newton's theory was thereby refuted. So far from being very, very close to true because of all the experiments whose outcomes it had accurately predicted up until then, it was shown false by a crucial test that pitted it against a rival. Now general relativity is in the same position that Newton's theory was in prior to around 1900. It is not "probably true" or "true" or anything like that. It contains some truth - and more truth than Newton's theory (which itself was closer to true than any random guess would be). But in neither case can we say the theory is true - only that it contains some truth (we don't know which parts, and it doesn't matter anyway - the theories can be used to help us control reality around us by making predictions and creating technologies to solve our problems). At any time, to paraphrase Thomas Huxley, the beautiful theory could be slain by some ugly fact. Indeed we have to expect that it will be at some point. General relativity is at odds with quantum theory. They are mutually incompatible for reasons beyond the scope of this piece (in brief: the dispute may come down to a disagreement about whether the most fundamental parts of reality consist of discrete or continuous quantities). Deutsch has said in other places, and I agree: it would be far better had we all decided to call scientific theories "scientific misconceptions", to remind ourselves of how tentative they are and that they will one day be superseded by some better misconception. Back now to Deutsch, who elaborates on this:


"...expectations...apply only to (some) physical events, not to the truth or falsity of propositions in general – and particularly not to scientific theories: if we have any expectation about those, it should be that even our best and most fundamental theories are false. For instance, since quantum theory and general relativity are inconsistent with each other, we know that at least one of them is false, presumably both, and since they are required to be testable explanations, one or both must be inadequate for some phenomena. Yet since there is currently no single rival theory with a good explanation for all the explicanda of either of them, we rightly expect their predictions to be borne out in any currently proposed experiment. "

In other words, although we know that at least one (but presumably both) of our best, deepest theories of physics is false, there is no rival out there ready to replace them that can do the job of both just as well. And we must recall that when we refute a theory we do not discard every single part of it. As a rule, very much is preserved. A short example from astronomy will suffice: Ptolemy explained the universe as a geocentric arrangement with the Earth at the centre, orbited by other spheres moving in circles. Copernicus did away with part of this, replacing the Earth with the Sun but keeping circular orbits. Kepler in turn replaced the circles with ellipses, and Galileo used observation to show how the Sun-centred model was superior and that there were objects orbiting Jupiter. Newton then provided a universal physical law in mathematical form allowing orbits to be precisely predicted, and finally Einstein showed how Newton's law was a good approximation to a better theory of the behaviour of spacetime, which explained why the paths around the Sun were as they were. Each new improvement preserved much of the past (crucially, the idea that orbits were actually occurring, even as what was orbiting what, and why, changed as things improved). So refutation of a previously good theory - whether experimental or not - does not do away wholesale with everything that was valuable in the theory. It preserves much, while ultimately demonstrating how the theory is fatally flawed and therefore false (with the proviso, as Deutsch mentions below, that theories are never entirely "logically contradicted" by some experimental observation - but this is a technical point we can return to later). Deutsch:

"A test of a theory is an experiment whose result could make the theory problematic. A crucial test – the centrepiece of scientific experimentation – can, on this view, take place only when there are at least two good explanations of the same explicandum (good, that is, apart from the fact of each other’s existence). Ideally it is an experiment such that every possible result will make all but one of those theories problematic, in which case the others will have been (tentatively) refuted. "

Now this is an amazingly important and clear articulation of what experiments are. Experiments test theories. But what can the results do? Interestingly, if the result of an experiment conflicts with a theory it does not necessarily rule the theory out. Take for example the media hype that surrounds certain high-energy physics observations reported as "Einstein proved false!". Perhaps one of the more famous examples (detailed here) was the OPERA experiment, in which neutrinos sent from CERN to a detector in Italy apparently exceeded the speed of light, in violation of special relativity (it turned out a cable was incorrectly connected). Now the results were actually false. But even if the results had been true and neutrinos had exceeded the speed of light, this would not "prove" Einstein false or cause us to reject relativity theory. What it would do is make relativity theory "problematic". Relativity theory would still be the best theory about how fast things can move and what happens to things as they move relative to one another. So a test of a theory - an experiment - even if it disagrees with the best theory going, is not a reason to reject that theory. After all, if you reject that theory, then what theory should you use? The second-best theory? There is almost never a second-best theory. But even if there were: that second-best theory is second best for some good set of reasons. And if those reasons include things like "it cannot explain phenomena a, b, c, d, e and f while the best theory can", then there still won't be a reason to turn to it in place of the best.

There is only one way an experimental test of a theory can result in us rejecting our best theory. And that is when we actually have an equally good rival theory that explains everything our best theory does PLUS the outcome of the new experimental test. This kind of experiment is called a "crucial test". It is that rare type of test - like Eddington's observation of the bending of light - that allows us to decide between two theories that make incompatible predictions about the outcome of the test but that otherwise are (until that moment) equally able to account for all other phenomena. As it is now, of course, general relativity is able to account for far more than the mere bending of starlight during eclipses. Newton's universal gravitation, brilliant as it is (it was able to get man to the Moon), is left in the distant dust by Einstein's general relativity, which could not only get us to the Moon if we so wanted but can give us GPS and explain neutron stars and black holes and much more besides - none of which Newton's theory comes close to accomplishing. Deutsch:


"the existence of a problem with a theory has little import besides, as I said, informing research programmes – unless both the new and the old explicanda are well explained by a rival theory. In that case the problem becomes grounds for considering the problematic theory tentatively refuted "

Again: this deemphasises the supposed centrality of the experiment to the whole project of science. Science is a knowledge-creating machine, the knowledge taking the form of bold explanations. The genuinely difficult part is positing grand explanations for what is actually going on in the world. Of course those explanations need to be testable - but if an explanation accounts for the phenomena and survives the tests, the explanation is the central concern, and civilisation can then go about actually making practical use of the science (to, for example, create technology, treat disease and solve other problems). An experiment that disagrees with some great theory just makes the theory problematic. If we did find some experiment that, for example, could not be explained by quantum theory - or seemed to refute quantum theory - that would be a problem for quantum theory, but not grounds for rejecting it. The (now problematic) quantum theory would still be used to create technologies and solve problems and, essentially, everyone would carry on more or less as before with respect to the theory and regard it as a genuine description of reality. But there would be an unsolved problem. And, as observed below and above, the problem just might be with the apparatus. If it is not a problem with the apparatus, it could be a problem with us not understanding some subtlety of the theory. Or it could be that the theory is genuinely not the best theory, because someone, somewhere, has just created something better but is yet to publish it. And when they do, it will do all that quantum theory ever did and also explain the problematic result that quantum theory couldn't. In that case, the test that created the problem in the first place becomes a crucial test. Deutsch:

"In contrast, the traditional (inductivist) account of what happens when experiments raise a problem is in summary: that from an apparent unexplained regularity, we are supposed to ‘induce’ that the regularity is universal (or, according to ‘Bayesian’ inductivism, to increase our credence for theories predicting that); while from an apparent irregularity, we are supposed to drop the theory that had predicted regularity (or to reduce our credence for it). Such procedures would neither necessitate nor yield any explanation.  "

This is crucial. Under the prevailing view of how science works, if an experiment critically wounds a theory such that it is once-and-for-all falsified and so liable to be rejected, then what can we jump to? If we reject our best theory and there is no rival, the process of rejection does not provide any new explanation for us. The negation of a theory is not a new theory. Deutsch:

"In any experiment designed to test a scientific theory T, the prediction of the result expected under T also depends on other theories: background knowledge, including explanations of what the preparation of the experiment achieves, how the apparatus works, and the sources of error. Nothing about the unmet expectation dictates ​whether T or any of those background-knowledge assumptions was at fault. Therefore there is no such thing as an experimental result logically contradicting T, nor logically entailing a different ‘credence’ for T."  As Deutsch says in a footnote:
"That is known as the Duhem–Quine thesis (Quine 1960). It is true, and must be distinguished from the Duhem–Quine problem, which is the misconception that scientific progress is therefore impossible or problematic. "

Now this is something new I have learned about how to respond neatly to the Duhem-Quine objections that are often raised whenever anyone gets into discussions about Popper and falsification. The thesis itself is correct - but what many people assume follows from it is not! The Duhem-Quine thesis, in my words, is: when an experiment is conducted and the result disagrees with some theory T, it is not logically the case that T must be false. Logically it can always be the case that the experiment was conducted badly (the method wasn't followed, the apparatus was faulty or operated incorrectly, or some other "background assumptions" were false). So some object, using this thesis, that there is no such thing as a "crucial test", because it might not be the theory T that is false but rather the experimenter (or equipment) that is incompetent. Sure, so far so good. But Deutsch's point here is that Popper's philosophy of science is the true epistemology of how science generates knowledge, and although it can always logically be the case that experimental error is at root the reason for an apparently problematic observation, this has no lasting effect on how science makes progress. Scientific progress actually happens in spite of it. As with the faster-than-light neutrinos: it might have been the case that the observation was a problem for relativity, or it might have been the "background assumptions" in the form of a badly connected cable. That it turned out to be the latter (as Duhem-Quine warn it always could be) has no bearing whatever on the methodology of science. Indeed it is the methodology of good science that uncovers such problems and allows things to keep moving in the right direction. Deutsch:

"But as I have said, an apparent failure of T’s prediction is merely a problem, so seeking an alternative to T is merely one possible approach to solving it. And although there are always countless logically consistent options for which theory to reject, the number of good explanations known for an explicandum is always small. Things are going very well when there are as many as two, with perhaps the opportunity for a crucial test; more typically it is one or zero. For instance, when neutrinos recently appeared to violate a prediction of general relativity by exceeding the speed of light, no good explanation involving new laws of physics was, in the event, created, and the only good explanation turned out to be that a particular optical cable had been poorly attached (Adam et al. 2012). "

Forming theories that explain things adequately is very hard. It is a highly creative process. It takes understanding what seems to be happening in the world and knowing how to communicate the idea clearly in a language that others will understand. Sometimes (though this is not necessary) it can require appreciating some of the current theories and the problems with them. In short: it requires background knowledge and then lots of imagination. Because of these factors, there is a poverty of good explanations in the world but a proliferation of false and bad ones. Now here I want to turn to a section of the paper that I won't quote but will instead put into my own words, about the special case of quantum theory. Readers not so interested in quantum theory can skip over this part. Here we are concerned with whether or not a theory that says "everything happens" is itself testable (quantum theory, in the form of the multiverse explanation, is an "everything possible happens" theory). Is this theory testable, and is it worse than a theory that says "just this one thing happens"? Well...

If you have two explanations, E (which says that everything possible happens: a happens and b happens and c, d, e, f, ... all actually happen) and D (which predicts that one particular thing, say x, happens), then if the thing that D predicts happens over and over again when you perform the experiment, E is not refuted by experiment (because it also predicts that thing should happen - along with everything else), but what E cannot explain is why only that thing should happen (which D does). So although experiment doesn't refute E, the fact that it is a bad explanation does. Of course if something else happens, say b rather than x, then D is refuted but E is not. The strange thing here is that even if x is observed every single time (making D apparently more accurate), E might still be true, or closer to true: it could just be a coincidence that x happens every time. But that is a poor explanation. And this is why poor explanations might still be truer than good explanations, as Deutsch says at the very beginning of his paper. E could also be augmented with G, where G explains why x is found to occur every time. Back to Deutsch:

"Thus it is possible for an explanatory theory to be refuted by experimental results that are consistent with its predictions. In particular, the everything-possible-happens interpretation of quantum theory, to which it has been claimed that Everettian quantum theory is equivalent, could be refuted in this way (provided, as always, that a suitable rival theory existed), and hence it is testable after all. Therefore the argument that Everettian quantum theory itself is untestable fails at its first step. But I shall show in Section 8 that it is in fact much more testable than any mere everything-possible-happens theory."

As Deutsch explains below, but let me emphasise: the multiverse theory is testable because it predicts that, for example, all possible paths are taken by particles through the double-slit experiment, and that is exactly what we observe when we repeat the experiment again and again with lots of particles. If the particles were instead striking the screen in one place over and again (x, x, x, x, x, and x again) then we could refute the "everything possible happens" multiverse theory, because strings like that are not expected. Deutsch:
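To get a feel for why a run of identical strikes would be so damning, here is a rough back-of-the-envelope sketch (my own illustration, not from the paper; the 100-position screen and the uniform spread are simplifying assumptions). If strikes are spread across many screen positions, the chance that n particles all land in the same narrow position collapses exponentially with n.

```python
# If strikes are spread uniformly over n_bins screen positions, the
# probability that n particles all hit the same bin is (1/n_bins)^(n-1):
# the first strike picks the bin, each later strike must match it.

n_bins = 100              # positions on the detection screen (assumed)
p_one_bin = 1 / n_bins

for n_particles in (1, 5, 20):
    p_all_same = p_one_bin ** (n_particles - 1)
    print(n_particles, p_all_same)
```

Even with only 20 particles the all-in-one-place string has probability 10^-38 under a spread-out pattern, which is why observing it would count heavily against the theory.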

The reason is that a string of repeated observations like x, x, x, ... is not expected to happen under E, even though E asserts that everything (including that string) actually does happen. "This is no contradiction. Being expected is a methodological attribute of a possible result (depending, for instance, on whether a good explanation for it exists) while happening is a factual one. What is at issue in this paper is not whether the properties ‘expected not to happen’ and ‘will happen’ are consistent but whether they can both follow from the same deterministic explanatory theory, in this case E, under a reasonable scientific methodology. And I have just shown that they can."

So that, it would seem, settles that: the multiverse theory is testable. I will present just a couple more paragraphs and make some concluding remarks:

"...Explanation itself cannot be defined unambiguously, because, for instance, new modes of explanation can always be invented (e.g. Darwin’s new mode of explanation did not involve predicting future species from past ones). Disagreeing about what is problematic or what counts as an explanation will in general cause scientists to embark on different research projects, of which one or both may, if they seek it (there are no guarantees), provide evidence by both their standards that one or both of their theories are problematic. There is no methodology that can validly guarantee (or promise with some probability etc.) that following it will lead to truer theories – as demonstrated by countless arguments of which Quine’s (loc. cit.) is one. But if one adopts this methodology for trying to eliminate flaws and deficiencies, then despite the opportunities for good-faith disagreements that criteria such as (i)-(iii) still allow, one may succeed in doing so. "

This is important. I have read criticism of Deutsch's criterion for what constitutes a "good explanation" (namely, that it is "hard to vary") on the grounds that the criterion is not well defined. Here we find a good response: new kinds of explanation, and not merely new explanations, can always be created, and so any unambiguous definition could rule out legitimate modes of explanation yet to come.

"We have become accustomed to the idea of physical quantities taking ‘random’ values with each possible value having a ‘probability’. But the use of that idea in fundamental explanations in physics is inherently flawed, because statements assigning probabilities to events, or asserting that the events are random, form a deductively closed system from which no factual statement (statement about what happens physically) about those events follows (Papineau 2002, 2010). For instance, one cannot identify probabilities with the actual frequencies in repeated experiments, because they do not equal them in any finite number of repeats, and infinitely repeated experiments do not occur. And in any case, no statement about frequencies in an infinite set implies anything about a finite subset – unless it is a ‘typical’ subset, but ‘typical’ is just another probabilistic concept, not a factual one, so that would be circular. Hence, notwithstanding that they are called ‘probabilities’, the pi in a stochastic theory would be purely decorative (and hence the theory would remain a mere something-possible-happens theory) were it not for a special methodological rule that is usually assumed implicitly.  "
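One of Deutsch's points in that passage, that probabilities cannot be identified with actual frequencies because the two never coincide in any finite number of repeats, is easy to illustrate numerically. This is my own toy demonstration with a simulated fair coin; I have deliberately chosen odd run lengths, where the observed frequency can never equal 1/2 exactly.

```python
# A fair coin has p = 0.5, but in any finite run the observed frequency
# of heads generally differs from 0.5, and with an odd number of flips
# it cannot possibly equal 0.5.
import random
random.seed(0)  # fixed seed so the run is reproducible

p = 0.5
frequencies = []
for n in (11, 101, 1001):          # odd run lengths
    heads = sum(random.random() < p for _ in range(n))
    frequencies.append(heads / n)
    print(n, heads / n)
```

However long the (finite) run, the frequency is a fact about that run, not the probability itself, which is the gap Deutsch is pointing at.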

I have no comments on that passage but wanted to keep it here for reference. I now want to think out loud about some of the material on errors that comes later in the paper. The argument seems to have the following form:
Where there is a measurement error (say, a limit-of-reading error on a scale) of some size e, then if some quantity known to have a simple real-number value, X, is measured and found to have the value A, the best we can say is that the quantity lies in the range A ± e. Now suppose repeated measurements of X are made and they have the values A1, A2, ..., An. Even if none of these differs from the others by more than some amount s which is less than e, we learn nothing new beyond what a single measurement told us, because all the measurements lie within the limit of reading. So we cannot say, for example, that the quantity X "probably" lies in the narrower range A ± s. Our measuring device fixes the limit of reading. But also, if s is too small, we have a problem with the experiment: why should the differences between all our A's be so small? Shouldn't they approach e?
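The shape of that argument can be made concrete with some made-up numbers (my own illustration; the readings and the resolution e are invented). However tightly the repeated readings cluster inside the instrument's resolution, the honest reportable interval is still A ± e.

```python
# Limit-of-reading sketch: readings that cluster within a spread s < e
# do not license reporting a narrower interval than the instrument allows.

e = 0.5                                  # instrument limit of reading (assumed)
readings = [12.1, 12.2, 12.15, 12.18]    # invented repeated measurements

A = readings[0]
s = max(readings) - min(readings)        # spread of the cluster
assert s < e                             # the cluster is tighter than the resolution

# The report after one reading and after many is the same interval:
print(f"{A} +/- {e}")
```

The puzzle in the last sentence above then becomes: if s comes out much smaller than e, why are the readings clustering so tightly when the instrument cannot distinguish values that close?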

I think this will do for now. :)