Fine tuned fine structure
As our knowledge becomes greater, it becomes both broader and deeper. It therefore surprises some to learn that our best physical theories become not more numerous but fewer, as each encapsulates more. That is, the more we learn about the world, the fewer theories we need to explain what we do know. Where once two separate explanations were required for two hitherto seemingly separate phenomena, now one theory does the job. The new theory not only does all that the preceding theories did, but does it to a greater degree of precision and with more elegance, meshing more neatly with our other explanatory theories while relying upon fewer unsupported assumptions.
In physics, our theories which explain the world are, whatever else can be said about them, explanations of the forces that govern the interactions of particles. Particles may interact because they each possess mass and so exert the force of gravity upon each other (or, more accurately, they experience the curvature of space around each other), while particles which each carry a charge experience each other’s electrostatic attraction (or repulsion, as the case may be).
Why particular forces have the strengths that they do is a deep and fascinating mystery. Although physics can in large part be considered the collection of explanatory theories of the forces of nature, an explanation of why those forces have the strengths they do is not yet a part of mainstream physics. Why do two electrons repel each other to this extent and not that? Of course, it is trivial to say that the strength of the force results from the charge on the electron. But why does the electron possess the particular charge that it does? Have electrons always possessed the same charge, and is that charge specified to an infinite degree of precision? There are, as yet, no satisfactory answers to these questions, and whether they can be answered by science or will forever remain in the domain of metaphysics is difficult to say. Recent progress, however, which will in part be the topic of this project, hints that science is now coming to grapple with them. Consider, for example, the strength of gravity: whether one regards it as the force that Newton first described or as the curvature of space that Einstein explained, its strength is determined by a constant of proportionality, given the symbol G. G has the value 6.67 × 10⁻¹¹ N m² kg⁻². This number can be determined by experiment, and it always turns out to be the same whether the experiment involves two one-kilogram masses, the Moon and the Earth, or a binary star system. But why this number has the value it does has as yet no answer. What we do know is that if it were much different, we would not be here to discuss the question. Similar things can be said about the other so-called “coupling constants”, and the topic of the present discussion is the constant which determines the strength of the electromagnetic interaction: the fine structure constant, α.
Constant Parameters?
The charge on the electron, e, the fine structure constant, α, and the universal gravitational constant, G, are generally known as “fundamental constants of nature”. They are called “constants” because their values are assumed not to change, or, more precisely, because every experiment yields the same value. Further, they are called fundamental because their values cannot be calculated, only measured (Murphy, 2002). Recently, however, groundbreaking research has suggested that these constants may not have such fixed values after all, and referring to them as parameters is now common in the literature. No parameter which is part of a theory has ever been predicted by the theory that contains it, so why the constants have the values they do is a very interesting field of study, and books have recently been written which touch upon the subject (see, for example, Davies, “The Goldilocks Enigma”, 2007, or Barrow, “From alpha to omega: The Constants of Nature”, 2002). According to these authors, the standard model of physics contains around 26 such constants, and changing most of them by even the smallest amount would result in changes to chemistry, nuclear physics and space itself that would make life impossible. There appears to be, to use Davies’ oft-used phrase, “some fine tuning” going on. So if the constants are changing, or have changed, why did they change to values that today make life possible?
Dimensionless is more.
Measuring changes in the value of these constants is problematic. Let us take the example of another famous constant: the speed of light. The speed of light is defined to have the value of 299 792 458 m s⁻¹. But has it ever been lesser or greater in magnitude than it is now? Does it vary from place to place? In short, is the speed of light really constant in time and space?
This question is not a scientific one, as surprising as that may seem. This is because there is no experiment which can be done to determine if the speed of light has changed or not. Imagine we measure how long it takes for a photon of light to traverse a single meter – a meter being the distance between two points A and B. If we get some value today and then repeat the experiment tomorrow, but get a different number – what do we conclude? Do we conclude that it is the speed of light which has changed, or do we conclude that our timing device has changed, or that the length of a meter has changed? Actually, there is no way to distinguish between these various possibilities. And this is true of the gravitational constant and any other constant which contains dimensions. If such a constant varies, then there is no way to rule out the possibility that it is our measuring devices or the duration of our second or the length of our meters which are changing – and not the constant we are attempting to measure.
For this reason, what is required are constants without dimensions: that is, parameters which are pure numbers. The fine structure constant is one such parameter: it has a value of approximately 1/137 and is defined (in Gaussian units) to be:
α = e²/ħc
This number, which has become ubiquitous in physics, remains mysterious. One of the pioneers of quantum theory, Wolfgang Pauli, said of it, “When I die, my first question to the devil will be: What is the meaning of the fine structure constant?”. Michael Murphy writes that “All ‘everyday’ phenomena are gravitational and electromagnetic. Thus G and α are the most important constants for ‘everyday’ physics” (Murphy, 2007). As has been said, the fine structure constant is a measure of the strength of the electromagnetic force, but it can also be regarded as a measure of how “relativistic” electrons in atoms are.
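To make the number concrete, here is a short numerical check (a sketch assuming Python with scipy.constants available; it uses the SI form α = e²/4πε₀ħc, which is equivalent to the Gaussian form given above):

# Quick numerical check of alpha = e^2 / (4*pi*eps0*hbar*c) using CODATA values (SI units).
from scipy.constants import e, hbar, c, epsilon_0, pi

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(f"alpha   = {alpha:.9f}")      # about 0.007297353
print(f"1/alpha = {1 / alpha:.3f}")  # about 137.036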
α was first introduced by Sommerfeld as a way of explaining the splitting of spectral lines. What is found if one looks closely at spectral lines with a spectroscope of very high resolution is that almost all spectral lines are actually multiplets. That is, they are not (as they first appear to be) single lines, but rather two or more finer lines very close together. This is because electrons which travel around the nucleus of an atom may move in circular or elliptical orbits, and may possess spin up or spin down. That is, electrons which move between one energy level and another can possess slightly different energies even if they occupy the same orbital. Relativistically, the energy of a circular orbit is slightly different from the energy of an elliptical orbit (unlike in Newtonian mechanics, where elliptical orbits with the same major axis possess the same energy), resulting not in a single spectral line but in two with almost identical wavelengths. The multiplet structure of spectral lines is also called “fine structure” and was for many years a mystery to physicists. But if spectral lines had fine structure, it meant that electron energy levels had fine structure too. In the hydrogen atom, for example, most of the energy levels are closely spaced pairs of levels. So an electron excited from the ground state s orbital may end up in the p orbital with spin up, or the p orbital moving in a circle, or the p orbital moving in an ellipse. Such electrons have very slightly different energies and, upon falling back to the ground state, emit photons at slightly different wavelengths, giving the emission line a fine structure. It is the fine structure constant α that is key to quantifying this difference in energy, and it has been assumed that, like other constants of nature, its value does not vary in time or space.
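To see how α quantifies the splitting, the following minimal sketch evaluates the standard leading-order fine-structure formula for hydrogen, E(n, j) = −(13.6 eV/n²)[1 + (α²/n²)(n/(j+½) − ¾)]; the 2p splitting it yields, a few times 10⁻⁵ eV, is tiny compared with the eV-scale level spacings, which is precisely why the structure is called “fine”:

# Hydrogen fine structure from the leading-order Dirac formula: the splitting scales as alpha^2.
alpha = 1 / 137.036   # fine structure constant
RY = 13.6057          # hydrogen ground-state binding energy in eV

def energy(n, j):
    # Energy (eV) of level (n, j) including the leading fine-structure correction.
    return -(RY / n**2) * (1 + (alpha**2 / n**2) * (n / (j + 0.5) - 0.75))

# Splitting of the n=2 p level into its j=1/2 and j=3/2 components:
split_eV = energy(2, 1.5) - energy(2, 0.5)
print(f"2p fine-structure splitting ~ {split_eV:.2e} eV")   # about 4.5e-05 eV (roughly 10.9 GHz)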
But what if α has changed? If α has changed then this means that the strength with which photons and electrons interact has also changed. This would mean that what is supposedly a constant, is not and that would require explanation. Now because the theory that utilizes α assumes that it is constant, any explanation of a change in α must come from outside the theory: that is, in this case, from outside of Quantum Theory. A deeper, more explanatory theory that encompasses all that Quantum Theory does and which is able to explain changes in α would then have to be developed. Not only that, but changes in α, if found to be true, would open the way for the hypothesis that other physical constants may also have changed over time.
Constant changes
Here I present constraints from the two most significant studies on changes in the fine structure constant. In section 1 below, constraints from a natural fission reactor are discussed, in section 2 the constraints from quasar absorption spectroscopy are presented.
1. Constraints from the Oklo Reactor
In Gabon in Africa, some 1.8 billion years ago, uranium-235 trapped in ore deposits was able to dissolve in oxygenated water from a lake. Streams carried these uranium ions to a filter made of algae, which concentrated the uranium to the point where it reached critical mass and fission began. The fission process heated the water beyond boiling, and the neutrons released were then able to escape, causing the reactor to cool until the water which had boiled away was replaced. This process repeated itself for several million years, so that today the uranium-235 at this site is found to be particularly depleted. To constrain changes in α using this natural reactor, we study the abundances of the fission products of uranium-235. The best recent study, by Fujii et al. (2000), suggests that
Δα/α = (−0.04 ± 0.15) × 10⁻⁷
2. Constraints from Quasar Absorption Methods
Two techniques utilize the light from quasars to study changes in the fine structure constant: the alkali doublet (AD) method and the many-multiplet (MM) method. Both methods involve intercepting light which was emitted by a high-redshift quasar and has then passed through the halo of a galaxy, so that an absorption spectrum is produced. Such measurements obviously require not only a quasar but also a galaxy along the line of sight between the Earth and the quasar. These spectral lines, as described previously, have fine structure, and it is the change in this fine structure, specifically the separations between the lines when compared with laboratory spectra, that allows changes in α to be calculated.
The Alkali Doublet Method (AD)
It was this method of determining changes in the fine structure constant that first received media attention, as far back as 1998, with the publication by Webb et al. (1998) of papers such as “Limits on the Variability of Physical Constants” (in Structure and Evolution of the IGM from QSO Absorption Line Systems, IAP Colloquium). This led to articles in Scientific American, and to a “Physics News” bulletin of January 13, 1999 by Phillip F. Schewe and Ben Stein which carried a story titled “IS THE FINE STRUCTURE CONSTANT CHANGING?”
Murphy (2002) explains that “the relative wavelength separation between the two transitions of an alkali doublet is proportional to α²”. The reason absorption lines from gas in galaxy halos along the line of sight to the quasar are used, rather than the emission lines of the quasar itself, is that the absorption lines are far narrower and so provide a more precise probe. The quasars involved in this kind of study are so distant (a mean z of 2.6) that their visual magnitudes are faint (m_v < 19), requiring exposure times of between 1 and 2 hours on one of the world’s largest telescopes: the Keck I 10 m on Mauna Kea in Hawaii. One such study using this method (Murphy et al., 2002) found
Δα/α = (−0.5 ± 1.3) × 10⁻⁵
The physics of this method rests upon “doublets” of spectral lines where an electron moves from (for example) an s to a p orbital. Take, for example, an electron in the Si IV ion: for a particular transition there are two possible energies that an electron in the excited state might possess, one corresponding to a photon of wavelength 1393.8 Angstroms and the other to a photon of wavelength 1402.8 Angstroms. Without a high-resolution spectrometer this doublet appears as a single line, of course. It is the small difference in the wavelengths that puts the word “fine” into “fine structure”.
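The way Δα/α is extracted from such a doublet can be sketched as follows. Since the fractional separation Δλ/λ of the doublet is proportional to α², comparing the separation measured in an absorption system (after the redshift has been divided out) with the laboratory separation gives Δα/α ≈ ½[(Δλ/λ)obs/(Δλ/λ)lab − 1]. The “observed” numbers below are invented purely for illustration, not real measurements:

# Alkali doublet (AD) sketch: the fractional doublet separation is proportional to alpha^2,
# so  delta(alpha)/alpha ~ 0.5 * (sep_obs / sep_lab - 1).
lab = (1393.76, 1402.77)   # Si IV laboratory wavelengths (Angstroms)
obs = (1393.76, 1402.80)   # invented rest-frame values for illustration, not real data

def frac_sep(pair):
    lo, hi = pair
    return (hi - lo) / ((hi + lo) / 2)

da_over_a = 0.5 * (frac_sep(obs) / frac_sep(lab) - 1)
print(f"delta alpha / alpha ~ {da_over_a:.1e}")   # ~ 1.7e-03 for these made-up numbers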
The Many Multiplet (MM) Method
This technique is preferred for probing changes in the fine structure constant as it offers an increase in precision of around one order of magnitude. Webb et al. (1998) used this technique to publish groundbreaking results that echoed around the world, capturing the imagination of the scientific community and receiving a large amount of attention in the popular media. Their study involved spectra from 128 absorption systems. The physics of the MM method involves looking at absorption spectra from a heavy ion such as Fe II, say, where transitions can be made from the s orbital to any of 5 other energy levels, corresponding to 5 different spectral lines rather than just 2. These transitions are then compared with those of, say, Mg II. The advantages of MM over AD include a roughly tenfold increase in precision, the in-principle use of all QSO absorption lines rather than just a single alkali doublet, and the minimization of systematic effects. Murphy (2007) explains further: “Using the many multiplet method, If α varies, the Fe II lines shift 10 times more than the Mg II (or Si IV) transitions: Mg II acts as an anchor against which we measure the large shifts in Fe II.”
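The bookkeeping behind the MM method can be sketched in the same spirit. Each transition’s rest-frame wavenumber is parameterised as ω = ω₀ + q[(α_z/α₀)² − 1], where the sensitivity coefficient q is large for Fe II and small for the Mg II anchor, so a small measured shift relative to the anchor translates directly into Δα/α. The q value and shift below are assumed, order-of-magnitude numbers, not real data:

# Many multiplet (MM) sketch: omega = omega0 + q * ((alpha_z/alpha_0)**2 - 1), so a small
# measured shift d_omega relative to an anchor line gives d(alpha)/alpha ~ d_omega / (2*q).
def delta_alpha_over_alpha(d_omega, q):
    return d_omega / (2 * q)

q_fe = 1500.0       # cm^-1: assumed order of magnitude for an Fe II sensitivity coefficient
d_omega = -0.017    # cm^-1: hypothetical measured shift, not a real measurement
print(f"delta alpha / alpha ~ {delta_alpha_over_alpha(d_omega, q_fe):.1e}")   # ~ -5.7e-06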
This MM method led in 2002 to the publication of
Δα/α = (−0.574 ± 0.102) × 10⁻⁵.
Is it e, c or h that varies?
As has already been said, it is important to keep in mind that because e, c and h are constants with dimensions, there is no experiment that could ever conclude that it is they which vary rather than the measuring device.
Barrow discusses some of the implications of a varying α for cosmology (Barrow, 2007). One of the most interesting may be that a varying α entails a violation of the Weak Equivalence Principle (WEP). The WEP can be stated as: “All bodies in the same gravitational field at the same point of spacetime will undergo the same acceleration.” Barrow points out that if α varies then the field which carries the variation will couple differently to different nuclei, because they contain different numbers of charged particles (protons, for example). Barrow says that his theory would lead “…to a relative acceleration of the order of 10⁻¹³”. Although there may be no way to demonstrate experimentally that a constant with dimensions has changed, arguments can nonetheless be mounted which lend weight to the position that it is c rather than (say) e which varies. It is to one such argument that we now turn.
Consequences of Black Hole Thermodynamics
A paper by Davies, Lineweaver and Davis (2002) attempts to constrain variations in α using thermodynamic arguments. The second law of thermodynamics states that the entropy of a closed system can never decrease. Now, as we know from the 1998 data, α was smaller in the past, so an increase in α would, according to its definition, mean either that e has increased (making the numerator, e², larger) or that ħc (the denominator) has decreased. The insight provided by Davies et al. was that the entropy of a charged, non-rotating black hole is well known and given by:
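S = (πkG/ħc)[M + √(M² − Q²/G)]²
Here M is the black hole’s mass, Q its charge (a whole-number multiple of e) and k Boltzmann’s constant; this is the standard expression for a charged, non-rotating black hole, quoted here in essentially the form used by Davies et al. Because Q appears with a negative sign under the square root, increasing e (and hence Q) at fixed mass decreases S, whereas decreasing ħc increases it.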
Now, a decrease in entropy is forbidden by the second law. A decrease in entropy is tantamount to time running in reverse: indeed, it is the second law which is often said to give time its arrow. It would seem from the above equation, therefore, that according to black hole thermodynamics the only way α can have increased, if we do not wish to do away with other cherished areas of physics, is for ħc to have decreased, making α greater as time goes on while leaving the second law intact. This is significant because some interpretations of a changing fine structure constant treat it as a change in the charge on the electron. If Davies et al. are correct, then we have a way of ruling out that possibility with a reasonable degree of confidence.
Fine Line Fine Structure
The media attention received by the teams involved in this research is well deserved, but it is interesting to look at how some of the popular press have reported what is quite a subtle and nuanced area of physics. The fine lines that need to be carefully measured in a quasar absorption spectrum can lead to the media walking a fine line between sober science on the one hand and sensational speculation on the other. One interpretation of a changing fine structure constant is that the speed of light may be changing, and it is this possibility that has captured the imagination of many in the popular science press. Such excitement has also been seized upon by the researchers themselves: and why not? If science can make it closer to the front of the newspaper, all the better. But extraordinary claims require extraordinary evidence and, as previously discussed, claims about a changing speed of light cannot be falsified by experiment and so, in a very real sense, constitute a hypothesis which lies beyond the scope of science to investigate. In other words, no amount of evidence, no matter how extraordinary, could ever conceivably justify the claim that the speed of light, like any parameter with dimensions, is changing. This has not prevented much of the popular publicity on this issue from making precisely that claim, however. A large list of such publicity can be found on Michael Murphy’s web pages (see the bibliography). Discover Magazine led with a story titled “At the speed of light: what if Einstein was wrong?”, while The Age in Melbourne claimed “Einstein's relativity theory hits a speed bump”. Even one of the leaders of this research, John Webb, titled his piece for Physics World “Are the laws of nature changing with time?”, while USA Today exclaimed “Speed of light may not have been constant after all!”
It can safely be said that Einstein’s theory is in no way threatened by any measured change in the fine structure constant. Relativity still works just as well, and its explanatory power is not diminished. Time dilation, length contraction and the curvature of space would all still be physical phenomena forming part of our scientific world view. Of course, the BBC is more likely to lead with a story titled “Laws of Physics 'may change'”, as it did in May 2002, than with one titled “The fine structure constant has varied by a few parts per million over the last billion years”. Sometimes the truth is a little too prosaic. And the fact that the truth was thought to be too prosaic is no doubt the reason why this kind of publicity was generated in the way that it was.
Putting aside the question of whether the publicity this research has received was sensationalist, we turn now to an issue which is perhaps of greater significance than the somewhat spurious claim that relativity has been overturned: the significance of a changing fine structure constant for the evolution of life.
The Anthropic Principle.
Before we engage with the fine structure constant specifically, we should look more generally at the idea that the laws of physics seem peculiarly well suited to life. The Anthropic Principle has a number of forms, but in broad terms it is the idea that the laws of physics, or more precisely the constants of nature like the charge on an electron, G or α, are consistent with the existence of life. It might also be called “the statement of the very obvious”. In some sense, there is a triviality to the observation that the conditions of the universe, including the laws that govern it, must be consistent with the appearance of observers like ourselves. If things were otherwise, this debate could not be had, as we would not be here to have it. The anthropic principle may be quite anthropocentric itself at times: much is made of the fact that the laws of physics are consistent with the emergence of life, and of intelligent observers such as ourselves in particular, while remaining rather coy about the fact that the laws of physics and the constants of nature are necessarily consistent with everything that exists. The stronger form of the anthropic principle, the idea that the laws of nature are biofriendly, will be discussed later; but it is worth mentioning now that the laws of physics seem equally fine tuned for the existence of tables and chairs, of clouds and nebulae and synchrotron radiation, of hurricanes on gas planets and collisions between asteroids. In short, the anthropic principle is somewhat of a misnomer once one realizes that the laws and constants are fine tuned for everything that exists and every process that occurs. Just how strong a version of the anthropic principle we might need to embrace, given the apparent fine-tuned biofriendliness of the universe, is one of the major aims of the discussion in this project. As an aside, some debate exists around the various “flavors” of the Anthropic Principle: they range from the weak (the laws of physics happen to be consistent with the existence of observers) to the strong (the laws of physics must be as they are for observers to exist), and there are others still, such as the final (once intelligent life exists it will never die out, having learned, eventually, to exploit the total resources of the universe itself to ensure its own immortal survival). The existence or otherwise of intelligent life in the universe has generally been explained, or explained away, by appeal to one of two ideas: that of the intelligent designer, and that of there being many worlds (we just happen to find ourselves in one of the universes which is just right for life; in an infinite number of other worlds, we do not exist). This latter argument against intelligent design was made as far back as the 18th century by the Scottish philosopher David Hume, who asked (recast in modern parlance), “If the universe were due entirely to chance, how would you expect it to look?” You would expect it to look any way at all, of course, including the way that it actually does.
Recently Paul Davies has attempted to explain the existence of the universe we observe as a closed loop of sorts, a theory itself connected in some way to the present study of varying constants. Davies’ idea is that “life explains the universe even as the universe explains life” (Davies, 2007, Cosmos Magazine), by which he means that the universe itself seems to have purpose: the constants of nature seem to evolve towards values that permit life and intelligence to emerge, so that life and intelligence can in turn explain why the constants have those values. Although Davies sees this as a significant step (somewhat tongue in cheek referring to it as “Davies’ third way”), the author believes that this begs the question. When this complaint was put to Davies himself, he admitted that it is indeed the case, but argued that there really are no other alternatives, given that the God hypothesis and the many-universes idea are equally flawed, untestable and, to him, unsatisfactory. He would rather find an explanation for the existence of the universe from within the universe itself than appeal to something outside the universe that can never be known, be that an intelligent designer or a plenitude of other universes. At this point we might instead embrace the words of the 20th century Austrian philosopher Ludwig Wittgenstein and admit, “Whereof one cannot speak, thereof one must remain silent”. This aside, I later look in some detail at “Davies’ third way” and how it may connect to a changing fine structure constant: something that, as yet, does not appear explicitly in the literature.
Biofriendly Fine Tuning?
The anthropic principle, as has been said, can in some sense be dismissed as a statement of the obvious: the conditions of the universe must be consistent with our existence. The fact that much is made of how the conditions are favorable for “our” existence in particular also tends to turn the anthropic principle into an anthropocentric principle. A universe that seems peculiarly “fine tuned” for life is said to be a “biofriendly” one. Much has also been written about how those same laws seem to permit not only life, but intelligent life able to explain it all, to arise. Of course, a similar argument could be made about the significance (or otherwise) of the fact that those same laws seem peculiarly fine tuned for any physical process one cares to name. If the laws of physics were much different, tornadoes might never be able to form. Could we not play with the parameters and see how tornadoes would be affected if, say, the charge on an electron were much different? Is the universe peculiarly fine tuned for the existence of tornadoes? After all, tornadoes and complex wind systems of all varieties seem to be very common indeed when compared with places where life can supposedly flourish. Storm systems are relatively common throughout the solar system, and it is likely that any gas planet orbiting any star will have high winds. Further, it would seem from evolutionary biology that intelligence is far from being a convergent feature of evolution: the universe does not seem to particularly favor its coming into existence once life has appeared. Consider a feature like wings: birds, insects, bats and pterosaurs (and, in a gliding fashion, even some fish) all evolved wings independently, as happened with eyes and many other features common to many species. Yet only one species out of the millions that have existed on this planet, as far as we know, ever evolved the ability to explain the universe it found itself in: Homo sapiens. The fact that intelligence is such a rare feature might imply something about its quirkiness; on the other hand, this does not necessarily diminish its significance. David Deutsch argues for the significance of intelligence on computational grounds: the way the universe works (namely, explanations in the form of the laws of physics) is reflected in the brains of human beings. There is a deep and fundamental connection between the large, complex behavior of the universe as a whole and the brains of human beings, which contain, embedded in grey matter, theories that can predict how that universe is going to evolve. Yet is intelligence, however significant, common in the universe? Is life common? If the universe is truly biofriendly, surely it should be?
“We could hardly find ourselves in a bio-hostile universe, could we?” asks Paul Davies (2007: podcast). I wish to venture that perhaps we could, and that the universe is, maybe, not so friendly after all. An argument ventured by Neil deGrasse Tyson at the “Beyond Belief 2006” conference asked exactly that question: how biofriendly is the universe, really? Whilst it may be said that, yes, the form of the laws of physics seems special, those laws nonetheless lead to conditions that are, for the most part, completely hostile to life. Indeed, so far as we know, only in a very narrow band upon one very small planet do the laws of physics permit life to arise. If this is the only place in such a vast universe where life can flourish, how justified is the assertion that the laws are “biofriendly” at all? Considering that almost all of the universe is completely devoid of life, could we not just as well argue about how hostile the laws are to life? The laws could have been such that life arose in far more places than it has. The laws could have been such that DNA survived across a greater range of temperatures than it does, and that more than a narrow band of solar energy was utilized. Physicists have fun playing with the idea that if we tinker with this or that parameter then life would be extinguished in an instant; but how many combinations of parameters might instead lead to a universe where life is more robust, evolves more quickly and is able to survive in a far greater variety of conditions than it currently does? Such studies would be speculative and have not been comprehensively done to date. It is certainly the case that “tinkering with the parameters” more often than not seems to lead to a universe which is either featureless or in some other way hostile to life (if the proton were slightly heavier than the neutron, rather than the other way around, all protons would decay into neutrons; without protons you cannot have atoms, without atoms you cannot have chemistry of any sort, and so no life, to give just one example). However, just because we find ourselves in a universe that happens to permit life does not mean that this is the only kind of universe that would, or even that this one is particularly friendly to life. Certainly there are degrees of friendliness or hostility to life, and that quantification has not yet been done. It might turn out that we are right on the border between hostility and friendliness, and that other combinations of the parameters would lead to far more life and intelligence than we see in this, our universe.
If we accept the hypothesis that we are in a biofriendly universe, without further defining what we really mean by “biofriendly”, we encounter the same two explanations as before for why this is the case: the intelligent designer, or the multiverse hypothesis. The idea that ours is one universe of many goes back at least to Leibniz in the 17th century, who spoke of other possible worlds, with ours being “the best”. The 20th century philosopher David Lewis, in “On the Plurality of Worlds”, took seriously the idea that all possible worlds actually exist in some sense. If this is the case, then we simply find ourselves in the (or a) universe where the conditions are just right, and we should not be surprised that things “seem” fine tuned. The way things are (that is, the form that the laws of physics take) is then completely random, as all possible sets of laws are out there somewhere. It is no surprise, therefore, that we find ourselves in a universe that does indeed seem peculiarly fine tuned for life. There are various meanings of the word “multiverse” in the scientific vernacular now: one describes the set of all physically possible worlds as permitted by quantum theory (in all of these universes the laws of physics are the same, but the conditions are different); another is the class of all logically possible worlds (of which the former “multiverse” is but a small subset), in which all possible sets of physical laws and values for the parameters are somewhere realized.
“I am the alpha and the omega” (Revelation 22:13)
The biofriendliness of the fine structure constant in particular
According to Davies (2007), although the calculation has not been done, it may be the case that the value of the fine structure constant determines, in part, exactly how much carbon is produced in stars throughout the universe. Thus, if α were much different from what it is, the whole carbon-chemistry enterprise might not have been possible, and without it the ability of life to get going and remain sustained would be threatened. Perhaps the value that α takes determines things like the likelihood of metabolic processes occurring: the ability of oxidative phosphorylation to proceed, or indeed any reaction in which electrons are transferred from one chemical species to another, is likely regulated by the strength of the fine structure constant. However, it is worth noting that although the detailed calculations about the sensitivity of carbon chemistry to the value of the fine structure constant may not yet have been done, one calculation that has been done is the sensitivity of big bang nucleosynthesis to α. The fine structure constant determines in part the ratio of hydrogen to helium produced during big bang nucleosynthesis. If α were smaller, the electrostatic repulsion between protons would have been weaker and diprotons might have been bound (Murphy, powerpoint, 2007); given the current value of α, at least one neutron is required to allow the strong nuclear force to overcome the repulsion between protons. So if α were much different in the very early universe, a much greater proportion of helium would have been produced: the ratio of hydrogen to helium depends exponentially on quantities, such as the neutron-proton mass difference, that are sensitive to α. This means the first stars to form would have survived for a much briefer period, and the nuclear fuel in the universe would quickly have been used up. Billions of years ago all stars would have died; life would not have been possible, as we would now live in a universe which has long since suffered heat death.
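As a rough illustration of that exponential sensitivity, the toy calculation below estimates the primordial helium mass fraction from the neutron-proton mass difference Q (part of which is electromagnetic and hence depends on α). It is a back-of-the-envelope sketch, not a real nucleosynthesis calculation: the freeze-out temperature is an assumed round number and neutron decay before nucleosynthesis is ignored:

# Toy estimate of the primordial helium mass fraction: the n/p ratio freezes out at roughly
# exp(-Q / kT), and (here) all surviving neutrons are assumed to end up in helium-4.
import math

def helium_mass_fraction(Q_MeV, kT_freeze_MeV=0.8):
    n_over_p = math.exp(-Q_MeV / kT_freeze_MeV)
    return 2 * n_over_p / (1 + n_over_p)

print(f"Q = 1.293 MeV: Y ~ {helium_mass_fraction(1.293):.2f}")  # ~0.33 here; ~0.25 once neutron decay is included
print(f"Q = 0.8 MeV:   Y ~ {helium_mass_fraction(0.8):.2f}")    # a smaller mass difference gives far more helium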
If the fine structure constant were to increase substantially, carbon and most other atoms would become unstable and disintegrate, while if it decreased, water and most other molecules would become unstable. This is because an increase in α corresponds to a strengthening of the electromagnetic force, whilst a decrease corresponds to a weakening of it. To be more explicit: if α goes up, the repulsion between protons in the nucleus increases to the point where the nucleus becomes unstable, as the proton-proton repulsion becomes too great for the strong force to hold it together; if, on the other hand, α goes down, the electrostatic attraction between ions forming chemical bonds becomes too weak to hold molecules together. Indeed, if what has so far been observed is correct, namely that the fine structure constant was smaller in the past and has been increasing as time goes on, then extrapolation suggests that if the universe continues to expand the fine structure constant may eventually become too large for carbon atoms to remain stable, as the electrostatic repulsion between their protons becomes too great. If this occurs, then arguments about just how biofriendly the universe is will be moot: ours really will have been nothing but a fortuitous time to live in. Billions or trillions of years from now, life will be quite impossible.
Davies’ Third Way and a changing fine structure constant
Let us grant the concession that the laws and parameters are (at least for now) peculiarly biofriendly, to the extent that they seem exquisitely fine tuned, and treat claims that the universe itself is actually rather biohostile as nothing but a quibble. Paul Davies has argued that we need not necessarily stand in the metaphysical camp of “God did it” or of “all possible sets of physical laws exist somewhere”. Instead, Davies proposes what he sees as a somewhat more scientific theory that he has called, tongue in cheek, “Davies’ Third Way”. It entails the hypothesis that the laws of physics have “focused in” on those which are biofriendly. Davies applies the laws of quantum physics, specifically a form of Heisenberg’s Uncertainty Principle, to the laws themselves. In the same way that there is an uncertainty or “fuzziness” in the position, momentum, energy or indeed any other physical quantity that a particle may possess, this kind of fuzziness could in principle be applied to the laws themselves, and so by extension to the constants. He uses the example of quantum entanglement to describe what he means: spatially separated particles are linked in such a way that a measurement made upon one determines the outcome of a measurement made on the other. Similarly, Davies argues, quantum theory permits a form of “backward causation”: if a later state of a system is captured, then we can infer possible histories of the particle(s) involved. Extending this idea to the universe itself, the laws of physics may have started out fuzzy, but as time went on they picked out values for the constants, and forms for the laws, that are delicately fine tuned for life. Once upon a time, α may have been substantially different from what it is today. If it were very different, would life be possible today? Let us consider a concrete example: the charge on the electron, one of the constants which makes up the fine structure constant. Is the charge on the electron an infinitely precise quantity? According to Davies, it cannot be, because the computational resources of the universe are finite, and so there can be no infinitely precise quantities. As we go further back towards the big bang, the computational resources of the universe become smaller as the size of the universe, and the amount of matter within it, decreases. Seth Lloyd of MIT famously calculated the computational capacity of the universe (around 10¹²⁰ bits; see Lloyd, 2006), and while the number itself is not important, the fact that a limit even exists is. The charge on an electron cannot be infinitely precise for this reason, and so in the past, when the computational resources of the universe were smaller, the charge on an electron was even less precise; by extension, all parameters were less precisely specified than they are now, and so the laws of physics themselves possessed some degree of “fuzziness”. Why the fuzziness has reduced in such a way as to permit the existence of life may yet prove to be an interesting question to explore. Davies’ third way suggests that the universe may have some “purpose”, and that the purpose is, in part, the emergence of awareness. We are that part of the universe that is aware of itself.
If the laws and parameters started out fuzzy and then sharpened to the values that they now possess, it might be hypothesized that the universe in some way “knew” about us coming into existence. This seems to be wildly speculative metaphysics; however, some weight may be lent to the argument by the fact that at least one constant does appear to have sharpened, or shifted, over time. We now know with a reasonably high degree of confidence that the fine structure constant was smaller in the past. It is plausible that at even earlier epochs it was smaller still, and perhaps soon after, or even during, inflation its value was very different. We can take such claims a little more seriously now that there is evidence for a changing constant. The fine structure constant now possesses a value which permits life to exist and flourish in at least one place in the universe. Did the universe itself in some way know this? Did it choose, from the ensemble of all possible parameter values, just the right mix of laws and constants to enable itself to become aware of itself? Given that backward causation is not explicitly prohibited by quantum theory, and given that at least one constant appears to have changed over time, this kind of speculation is beginning to receive scientific rather than simply philosophical treatment.
Moving back to Davies’ original claim: if we simply apply Heisenberg’s Uncertainty Principle to the constants themselves, then perhaps the measurement we make of the fine structure constant today must have the value it does because of the anthropic principle, but the further back in time we go, the less the number needs to be what it is. Yet because we make the measurement now, we constrain the value that this parameter takes now.
Of course, there may yet prove to be no connection at all between the idea that the parameters of the universe cannot be specified to infinite precision, because of the finite computational resources of the universe, and the fact that we have actually measured a change in at least one of those parameters. This is the author’s own speculation; however, perhaps part, at least, of the explanation for changing parameters is that the laws of physics are software running on the matter hardware of the universe, and the resources of that computer, its memory and its processing speed, are changing as time goes on.
Extra Dimensions?
Superunification theories, such as the various M-theories of which string theory is a well-known example, postulate the existence of extra spatial dimensions. If these other dimensions exist they are very small, but their evolution over time, driven by the Hubble expansion of space, would have as a consequence a variation in fundamental constants such as α. Indeed, a variation in the fundamental parameters might be cited as experimental evidence in favour of these theories as descriptions of reality. Confirmation of the findings of the 1999 and 2002 studies can be found in Tzanavaris et al. (2005). Flambaum writes that “in superstring theories – which have additional dimensions compactified on tiny scales – any variation of the size of the extra dimensions results in changes in the 3-dimensional coupling constants” (Flambaum, 2006).
Conclusions
The studies by Webb et al. using the many multiplet method are the first to suggest that a hitherto assumed-to-be constant of nature has varied in time. The technique is an extremely precise one in which systematic errors are minimized. The best constraint upon the variation in the fine structure constant is Δα/α = (−0.574 ± 0.102) × 10⁻⁵: that is, α was smaller in the past. Given the definition of α, it is reasonable to ask whether it is e, ħ or c that has varied. Although no experiment can ever decide this question, some theoretical considerations, such as those from black hole thermodynamics, seem to restrict the change to a decrease in ħc rather than an increase in e. Whatever the case, a changing fine structure constant has ramifications for many areas of physics. An explanation for a varying constant such as α will not come from quantum theory, but necessarily from some deeper, more fundamental theory. String theory is one such possibility: if the extra dimensions of string theory are changing in size, then the fine structure constant can change in just the way we observe. Further, the fact that the fine structure constant has changed, and has changed to a value that many consider “biofriendly”, is an issue worth exploring in itself. Although the magnitude of the change so far observed is not large enough to cause any observable differences to chemistry or to the possibility of life, the mere fact that a change is possible at all leaves the door open for the change to continue into the future, perhaps to a point where the laws become biohostile. Only time will tell just how fine tuned the fine structure constant remains.
References
Barrow, J. “From alpha to omega: The Constants of Nature” Jonathan Cape (2002)
Barrow, J. “Varying Constants” arXiv: astro-ph/0511440v1. October 2007
Davies, P. “The Goldilocks Enigma” Penguin Books (2007)
Davies, P. “Life, The Universe and Everything” Cosmos Magazine (Issue 14, 2007)
Davies, P. “Science and the city” Podcast, April 2007.
Davies, P. C. W., Davis, T. M. & Lineweaver, C. H. “Black holes constrain varying constants” Nature 418, 602 (8 August 2002)
Drinkwater, M., Webb, J. K., Barrow, J. D. & Flambaum, V. V. “Limits on the Variability of Physical Constants” in Structure and Evolution of the IGM from QSO Absorption Line Systems, IAP Colloquium; available at http://xxx.lanl.gov/PS_cache/astro-ph/pdf/9709/9709227v1.pdf
Flambaum, V. “Variation of fundamental constants” (2006), http://xxx.lanl.gov/PS_cache/physics/pdf/0608/0608261v1.pdf
Lloyd, S. “Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos” Knopf (2006).
Murphy, M. “Probing variations in the fundamental constants with quasar absorption lines.” Thesis submitted for the award of PhD to UNSW, (2002) available at http://astronomy.swin.edu.au/~mmurphy/pub.html#thesis
Murphy, M. “Variable Fundamental Constant?” (2007) http://astronomy.swin.edu.au/~mmurphy/res.html
Murphy, M. “Publicity” http://astronomy.swin.edu.au/~mmurphy/pub.html#publicity
Murphy, M. “Variable Constants” Swinburne Astronomy Online PowerPoint presentation (2007)
Schewe, P. and Stein, B. “Is the Fine Structure Constant Changing?” The American Institute of Physics News Bulletin, Number 410 (Story #1), January 13, 1999, http://www.aip.org/enews/physnews/1999/split/pnu410-1.htm
Tzanavaris, P., Webb, J. K., Murphy, M. T., Flambaum, V. V. & Curran, S. J. “Limits on Variations in Fundamental Constants from 21-cm and Ultraviolet Quasar Absorption Lines” Physical Review Letters, vol. 95, issue 4 (2005)
Webb, J. K., Murphy, M. T., Flambaum, V. V., Dzuba, V. A., Churchill, C. W., Prochaska, J. X., Barrow, J. D. & Wolfe, A. M. “Possible evidence for a variable fine structure constant from QSO absorption lines: motivations, analysis and results” Mon. Not. Roy. Astron. Soc. 327, 1223 (2001)
In physics, our theories which explain the world are, for whatever else can be said about them, explanations of the forces that govern the interactions of particles. Particles may interact with each other because they each possess mass and so they exert the force of gravity upon each other (or more accurately, they experience the curvature of space around each other) while particles each with a charge can experience each other’s electrostatic force of attraction (or repulsion, as the case may be).
Why it is that particular forces have the strengths that they do is a deep and fascinating mystery. Although physics can in large part be considered that collection of explanatory theories of the forces of nature, an explanation of why those forces have the strengths they do is not yet a part of mainstream physics. Why do two electrons repel each other to this extent and not that? Of course, it is trivial to say that the strength of the force results from the charge on the electron. But why does the electron possess the particular charge that it does? Have electrons always possessed the same charge and is that charge specified to an infinite degree of precision? There are, as yet, no satisfactory answers to these questions and whether such questions can be answered by science or will forever remain in the domain of metaphysics is difficult to say. Recent progress however, which will in part be the topic of this project, hints that science is now is coming to grapple with these questions. We may for example consider the strength of gravity, whether one considers it as the force that Newton first described, or the curvature of space as Einstein explained is determined by a constant of proportionality, given the symbol G. G has the value of 6.67 x 10-11 N m2kg-2. What this number is can be determined by experiment: and the number always turns out to be the same whether the experiment is between two one kilogram masses, or between the moon and the Earth or between a binary star system. But why this number has the value it does has as yet no answer. What we do know is that if this number were much different, then we would not be here to discuss this question. Similar things can be said about other so-called “coupling constants” and the topic of the present discussion will be that constant which determines the strength of the electromagnetic interaction: the fine structure constant, α.
Constant Parameters?
The charge on the electron, e, the fine structure constant α and the universal gravitational constant G are generally known as “fundamental constants of nature”. These constants are called “constants” because their values are assumed not to change….or more precisely, because every experiment yields the same value. Further, they are called fundamental because their values cannot be calculated, only measured (Murphy, 2002). Recently however, groundbreaking research shows that these constants may not have quite such fixed values at all and referring to them as parameters is now common in the literature. No parameter which is part of a theory has been predicted by the theory that contains it so why the constants have the values that they do is a very interesting field of study and recently books have been written which touch upon this subject (See for example: Davies “The Goldilocks Enigma” 2007 or Barrow “From alpha to omega: The Constants of Nature” 2002). According to these authors, it appears that the standard model of physics contains around 26 of these constants and further it appears that changing most of these by even the smallest amount can result in changes to chemistry, nuclear physics and space itself that would cause life to be impossible. There appears, to use Davies’ often used phrase “some fine tuning” going on. So if the constants are changing, or have changed – why did they change to a value that today makes life possible?
Dimensionless is more.
Measuring changes in the value of these constants is problematic. Let us take the example of another famous constant: the speed of light. The speed of light is defined to have the value of 299 792 458 ms-1. But has it ever been lesser or greater in magnitude than it is now? Does it vary from place to place? In short, is the speed of light really constant in time and space?
This question is not a scientific one, as surprising as that may seem. This is because there is no experiment which can be done to determine if the speed of light has changed or not. Imagine we measure how long it takes for a photon of light to traverse a single meter – a meter being the distance between two points A and B. If we get some value today and then repeat the experiment tomorrow, but get a different number – what do we conclude? Do we conclude that it is the speed of light which has changed, or do we conclude that our timing device has changed, or that the length of a meter has changed? Actually, there is no way to distinguish between these various possibilities. And this is true of the gravitational constant and any other constant which contains dimensions. If such a constant varies, then there is no way to rule out the possibility that it is our measuring devices or the duration of our second or the length of our meters which are changing – and not the constant we are attempting to measure.
For this reason, what is required are constants without dimensions: that is, parameters which are just numbers. The fine structure constant is one such parameter: it has a value of approximately 1/137 and is defined to be:
α = e2/hc
This number, which has become ubiquitous in physics, remains mysterious. One of the pioneers of quantum theory, Wolfgang Pauli said of it, “When I die, my first question to the devil will be: What is the meaning of the fine structure constant?”. Michael Murphy writes that “All ‘everyday’ phenomena are gravitational and electromagnetic. Thus G and α are the most important constants for ‘everyday’ physics” (Murphy, 2007). As has been said, the fine structure constant is a measure of the strength of the electromagnetic force but it can also be regarded as a measure of how “relativistic” electrons in atoms are.
α was first introduced by Sommerfield as a way of explaining the splitting of spectral lines. What is found if one looks closely at spectral lines with a spectroscope of very high resolution is that almost all spectral lines are actually multiplets. That is, they are not (as they first appear to be) single lines, but rather two or more finer lines very close together. This is due to the fact that electrons which travel around the nucleus of an atom may move in circular or elliptical orbits, and possess spin up or spin down. That is, the electrons which move between one energy level and another can possess slightly different energies even if they occupy the same orbital. Relativistically, the energy of a circular orbit is slightly different to the energy of an elliptical orbit (unlike in Newtonian mechanics where elliptical orbits with the same major axis possess the same energy) resulting in not a single spectral line, but two which have almost identical wavelengths. The multiplet structure of spectral lines is also called “fine structure” and was for many years a mystery to physicists. But if spectral lines had fine structure it meant that electron energy levels had fine structure also. In the hydrogen atom, for example, most of the energy levels are closely spaced pairs of levels. So an electron excited from the ground state s orbital may end up in the p orbital with spin up or the p orbital moving in a circle or the p orbital moving in an ellipse. Such electrons have very slightly different energy and upon falling back to the ground state will emit photons and slightly different wavelengths, giving the emission line a fine structure. It is the fine structure constant α that is key to the quantification of this difference in energy and it has been assumed that, like other constants of nature, its value did not vary in time or space.
But what if α has changed? If α has changed then this means that the strength with which photons and electrons interact has also changed. This would mean that what is supposedly a constant, is not and that would require explanation. Now because the theory that utilizes α assumes that it is constant, any explanation of a change in α must come from outside the theory: that is, in this case, from outside of Quantum Theory. A deeper, more explanatory theory that encompasses all that Quantum Theory does and which is able to explain changes in α would then have to be developed. Not only that, but changes in α, if found to be true, would open the way for the hypothesis that other physical constants may also have changed over time.
Constant changes
Here I present constraints from the two most significant studies on changes in the fine structure constant. In section 1 below, constraints from a natural fission reactor are discussed, in section 2 the constraints from quasar absorption spectroscopy are presented.
1. Constraints from the Oklo Reactor
In Gabon in Africa, some 1.8 billion years ago, some uranium 235, trapped in yellowcake was able to dissolve in some oxygenated water from a lake. Streams were able to carry these uranium ions to a filter, made of algae, which concentrated the uranium to a point where it reached critical mass and fission began. The fission process caused the water to heat beyond boiling and neutrons released were then able to escape causing the reactor to cool to a point where the water which had boiled away was replaced. This process then repeated itself for several million years so that today we find the uranium 235 is particularly depleted in this place. In order to constrain changes in α using this process we study the abundances of decay products from uranium 235 fission. The latest best study by Fujii et. al (2000) suggests that
Dα/α=(-0.04±0.15)×10-7
2. Constraints from Quasar Absorption Methods
Two techniques are used which utilize the light from quasars to study changes in the fine structure constant: the alkali doublet method and the many-multiplet (MM) method. Both methods involve intercepting light which was emitted by a high redshift quasar and has then past through the halo of a galaxy so that an absorption spectra is produced. Such measurements obviously require not only a quasar but a galaxy along the line of sight between Earth and the quasar. These spectral lines, as has been previously described, have fine structure and it is the change in this fine structure – specifically the distances between the lines when compared with laboratory spectra – that allows changes in α to be calculated.
The Alkali Doublet Method (AD)
It was this method of determining changes in the fine structure constant that first received media attention as far back as 1998 with the publication by Webb et al (1998) of papers such as “Limits on the Variability of Physical Constants” in Structure and Evolution of the IGM from QSO Absorption Line Systems, IAP Colloquium leading to articles in the Scientific American and “Physics News” on January 13, 1999 by Phillip F. Schewe and Ben Stein which carried a story titled “IS THE FINE STRUCTURE CONSTANT CHANGING?”
Murphy (2002) explains that “the relative wavelength separation between the two transitions of an alkali doublet is proportional to α².” The reason why absorption lines from gas in galaxy halos along the line of sight to the quasar are used, rather than the emission lines of the quasar itself, is that the absorption lines are far narrower and so provide a more precise probe. The quasars involved in this kind of study are so distant (a mean z of 2.6) that they are visually faint (mv < 19) and require an exposure time of between 1 and 2 hours on one of the world’s largest telescopes: the Keck I 10 m on Mauna Kea in Hawaii. One such study (Murphy et al., 2002) using this method found
Δα/α = (−0.5 ± 1.3) × 10⁻⁵
The physics of this method rests upon “doublets” of spectral lines, where an electron moves from (for example) an s to a p orbital. Take for example an electron in the Si IV ion: for a particular transition there are two possible energies the electron in the excited state might possess, one corresponding to a photon of wavelength 1393.8 Angstroms and another corresponding to a photon of wavelength 1402.8 Angstroms. Without a high resolution spectrometer this doublet appears as a single line, of course. It is the small difference in the wavelengths that puts the word “fine” into “fine structure”.
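The arithmetic of the alkali doublet method can be sketched as follows. Since the fractional separation of the doublet scales as α², comparing the separation measured in an absorption system with the laboratory value gives Δα/α to first order. The Si IV laboratory wavelengths are those quoted above; the "observed" rest-frame wavelengths below are invented purely to show how a tiny change in the separation maps onto Δα/α.

# A hedged sketch of the alkali-doublet logic: the fractional doublet separation
# scales as alpha^2, so d_alpha/alpha ~ (1/2) * (R_obs/R_lab - 1).
lab_1, lab_2 = 1393.8, 1402.8            # Si IV doublet, Angstroms (from the text)
obs_1, obs_2 = 1393.8000, 1402.8001      # hypothetical rest-frame values for an absorber

R_lab = (lab_2 - lab_1) / ((lab_1 + lab_2) / 2)   # fractional separation, laboratory
R_obs = (obs_2 - obs_1) / ((obs_1 + obs_2) / 2)   # fractional separation, absorber
print("d_alpha/alpha ~", 0.5 * (R_obs / R_lab - 1))   # ~ +6e-6 with these made-up numbers

In practice the redshift of the absorber must also be removed, and the precision is limited by how well the two line centroids can be measured.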
The Many Multiplet (MM) Method
This technique is preferred for probing changes in the fine structure constant as it offers an increase in precision of around one order of magnitude. Webb and collaborators used this technique to publish groundbreaking results that echoed around the world, capturing the imagination of the scientific community and receiving a large amount of attention in the popular media; their sample eventually grew to spectra from 128 absorption systems. The physics of the MM method involves looking at absorption lines from a heavy ion such as Fe II, where transitions can be made from the s orbital to any of five other energy levels, corresponding to five different spectral lines rather than just two. These transitions are then compared with those of, say, Mg II. The advantages of MM over AD include a roughly ten-fold increase in precision, the in-principle use of all QSO absorption lines rather than a single alkali doublet, and the minimization of systematic effects. Murphy (2007) explains further that, using the many multiplet method, “If α varies, the Fe II lines shift 10 times more than the Mg II (or Si IV) transitions: Mg II acts as an anchor against which we measure the large shifts in Fe II.”
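A schematic of how the MM method extracts Δα/α might look like the following. It assumes the standard parametrisation in which each observed rest-frame wavenumber is shifted from its laboratory value by an amount proportional to a sensitivity coefficient q, with the common factor x = (α_z/α_0)² − 1 ≈ 2Δα/α. The wavenumbers and q coefficients below are invented placeholders, not the published Fe II and Mg II values, and the redshift of the absorber is taken to be known exactly, which in real analyses it is not.

# A minimal sketch of the many-multiplet idea: omega_obs = omega_lab + q * x,
# where x = (alpha_z/alpha_0)^2 - 1 ~ 2 * d_alpha/alpha for small changes.
transitions = [
    # (lab wavenumber cm^-1, q coefficient cm^-1, "observed" rest-frame wavenumber cm^-1)
    (35669.0,   120.0, 35669.0012),   # stand-in for a small-q "anchor" line (Mg II-like)
    (38458.0,  1500.0, 38458.0150),   # stand-in for a large positive-q line (Fe II-like)
    (41968.0, -1300.0, 41967.9870),   # stand-in for a negative-q line
]

# Least squares for the single parameter x reduces to a simple ratio.
num = sum(q * (obs - lab) for lab, q, obs in transitions)
den = sum(q * q for _, q, _ in transitions)
x = num / den
print("d_alpha/alpha ~", x / 2)       # ~5e-6 with these invented numbers

Because the anchor line has a small q it ties down the overall wavelength scale, while the large-q lines carry the signal; that is the sense of Murphy's "anchor" remark quoted above.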
This MM method led in 2002 to the publication of
Δα/α = (−0.574 ± 0.102) × 10⁻⁵.
Is it e, c or h that varies?
As has already been said, it is important to keep in mind that because e, c and h are constants with dimensions, no experiment could ever establish that it is they which vary rather than the units and measuring devices used to express them.
Barrow discusses some of the implications of a varying α for cosmology (Barrow, 2007). One of the most interesting may be that a varying α entails a violation of the Weak Equivalence Principle (WEP). The WEP can be stated as: “All bodies in the same gravitational field at the same point of spacetime will undergo the same acceleration.” Barrow points out that if α varies then the field which carries the variation will couple differently to different nuclei, because they contain different numbers of charged particles (protons, for example). Barrow says that his theory would lead “…to a relative acceleration of the order of 10⁻¹³.” Although there may be no way to experimentally demonstrate that a constant with dimensions has changed, arguments can nonetheless be mounted which lend weight to the position that it is hc rather than (say) e which varies. It is to one such argument that we now turn.
Consequences of Black Hole Thermodynamics
A paper by Davies, Lineweaver and Davis (2002) attempts to constrain variations in α using thermodynamic arguments. The second law of thermodynamics requires that the entropy of any closed system can never decrease. Now, as we know from the quasar data, α was smaller in the past, so an increase in α would, according to its definition, mean that either e has increased (making the numerator e² larger) or hc (the denominator) has decreased. The insight provided by Davies et al. was that the entropy of a non-spinning, charged black hole is well known.
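The expression the argument requires is the Bekenstein–Hawking entropy written for a charged, non-rotating (Reissner–Nordström) black hole, so that the dependence on electric charge is explicit. A standard form, which I take to be essentially the one Davies et al. use (quoted here in Gaussian-style units, with ħ = h/2π), is:

$$ S = \frac{\pi k_{B} G}{\hbar c}\left[\,M + \sqrt{M^{2} - \frac{Q^{2}}{G}}\,\right]^{2} $$

Here M is the mass of the hole and Q its charge, an integer multiple of e. At fixed mass and fixed number of elementary charges, increasing e reduces the bracket and hence S, while decreasing hc (through the prefactor) increases S.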
Now, a decrease in entropy is forbidden by the laws of physics. A decrease in entropy is tantamount to time running in reverse: indeed, it is the second law which is often said to give time its arrow. It would seem from the above expression, therefore, that according to black hole thermodynamics the only way α can have increased, if we do not wish to do away with other cherished areas of physics, is for hc to have decreased: making α greater as time goes on while ensuring that the second law is not violated. This is significant because some interpretations of a changing fine structure constant treat it as a change in the charge on the electron. If Davies et al. are correct, then we have a way of ruling out that possibility with a somewhat higher degree of confidence.
Fine Line Fine Structure
The media attention received by the teams involved in this research is well deserved, but it is interesting to look at how some of the popular press have reported what is quite a subtle and nuanced area of physics. The fine lines that need to be carefully measured in a quasar absorption spectrum can leave the media walking a fine line between sober science on the one hand and sensational speculation on the other. One interpretation of a changing fine structure constant is that the speed of light may be changing, and it is this possibility that has captured the imagination of many in the popular science press. Such excitement has been seized upon by the researchers themselves: and why not? If science can make it closer to the front of the newspaper, all the better. But extraordinary claims require extraordinary evidence and, as previously discussed, claims about a changing speed of light cannot be falsified by experiment and so, in a very real sense, constitute a hypothesis which lies beyond the scope of science to investigate. In other words, no amount of evidence, no matter how extraordinary, could ever conceivably justify the claim that the speed of light, like any parameter with dimensions, is changing. This has not prevented much of the popular publicity on this issue from making precisely that claim, however. A large list of such publicity can be found at Michael Murphy’s web pages (see reference in bibliography). Discover Magazine led with a story titled “At the speed of light: what if Einstein was wrong?”, while The Age in Melbourne claimed “Einstein's relativity theory hits a speed bump”. Even one of the leaders of this research, John Webb, titled his piece for Physics World “Are the laws of nature changing with time?”, while USA Today exclaimed “Speed of light may not have been constant after all!”
It can safely be said that Einstein’s theory is in no way threatened by any measured change in the fine structure constant. Relativity will still work just as well, and its explanatory power would not be diminished. Time dilation, length contraction and the curvature of space would all still be physical phenomena forming part of our scientific world view. Of course, the BBC is more likely to lead with a story titled “Laws of Physics 'may change'”, as it did in May of 2002, than with a story “The fine structure constant has varied by about one part in 10⁵ over billions of years”. Sometimes the truth is a little too prosaic. And the fact that the truth was thought to be too prosaic is no doubt the reason why this kind of publicity was generated in the way that it was.
Putting aside questions about whether the coverage this research has received was sensationalist, we turn now to an issue which is perhaps of greater significance than the somewhat spurious claim that relativity has been overturned. We look now at the significance of a changing fine structure constant for the evolution of life.
The Anthropic Principle.
Before we engage with the fine structure constant specifically, we should look more generally at the idea that the laws of physics seem peculiarly well suited to life. The Anthropic Principle has a number of forms, but in broad terms it is the idea that the laws of physics – or more precisely the constants of nature, like the charge on an electron, G or α – are consistent with the existence of life. It might also be called “the statement of the very obvious”. In some sense, there is a triviality to the observation that the conditions of the universe, including the laws that govern it, must be consistent with the appearance of observers like ourselves. If things were otherwise, then this debate could not be had, as we would not be here to have it. The anthropic principle may be quite anthropocentric itself at times: much is made of the fact that the laws of physics are consistent with the emergence of life, and of intelligent observers such as ourselves in particular, while remaining rather coy about the fact that the laws of physics and the constants of nature are necessarily consistent with everything that exists. The stronger form of the anthropic principle, the idea that the laws of nature are biofriendly, will be discussed later; but it is worth mentioning now, in this context, that the laws of physics seem just as finely tuned for the existence of tables and chairs, of clouds of nebulae and synchrotron radiation, of hurricanes on gas planets and collisions between asteroids. In short, the anthropic principle is somewhat of a misnomer when one realizes that the laws and constants are fine tuned for everything that exists and every process that occurs. Just how strong a version of the anthropic principle we might need to embrace, given the apparent fine tuned biofriendliness of the universe, is one of the major aims of the discussion in this present project.
As an aside, some debate exists around various “flavors” of the Anthropic Principle: they range from the weak (the laws of physics happen to be consistent with the existence of observers) to the strong (the laws of physics must be as they are for observers to exist), and there are others still, such as the final anthropic principle (once intelligent life exists it will never die out, having learned, eventually, to exploit the total resources of the universe itself to ensure its own immortal survival). The existence of a universe suited to intelligent life has generally been explained, or explained away, by appeal to one of two theories: that of the intelligent designer, and that of there being many worlds (we just happen to find ourselves in one of the universes which is just right for life; in an infinite number of other worlds, we do not exist). This latter argument against intelligent design was made as far back as the 18th century by the Scottish philosopher David Hume, who asked (recast in modern terms) “If the universe were due entirely to chance, how would you expect it to look?” Well, you would expect it to look any way at all, of course: including the way that it actually does.
Recently Paul Davies has attempted to explain the existence of the universe we observe as a closed loop of sorts, a theory itself connected in some way to the present study of varying constants. Davies’ idea is that “life explains the universe even as the universe explains life” (Davies, 2007 (Cosmos Magazine)), by which he means that the universe itself seems to have purpose: the constants of nature change towards values that permit life and intelligence to emerge and evolve, so that life and intelligence can in turn explain why the constants have those values. Although Davies sees this as a significant step (somewhat tongue in cheek referring to it as “Davies’ third way”), the author believes that this begs the question. Having put this complaint to Davies himself, the author found that Davies admits this is indeed the case, but argues that there really are no other alternatives, given that the God hypothesis and the many-universes idea are equally flawed, untestable and, to him, unsatisfactory. He would rather find an explanation for the existence of the universe from within the universe itself than appeal to something outside the universe that can never be known, be that an intelligent designer or a plenitude of multiple universes. However, it might also be at this point that we embrace the words of the 20th century Austrian philosopher Ludwig Wittgenstein and admit “Whereof one cannot speak, thereof one must remain silent”. This aside, I later look in some detail at “Davies’ third way” and how it may connect to a changing fine structure constant: something that, as yet, does not appear explicitly in the literature.
Biofriendly Fine Tuning?
The anthropic principle, as has been said, can in some sense be dismissed as a statement of the obvious: the conditions of the universe must be consistent with our existence. The fact that much is made of how the conditions are favorable for “our” existence in some sense also turns the anthropic principle into an anthropocentric principle. A universe that seems peculiarly “fine tuned” for life is said to be a “biofriendly” one. Much has also been written about how those same laws seem to permit not only life but intelligent life, able to explain it all, to arise. Of course, a similar argument could be made about the significance (or otherwise) of the fact that those same laws seem peculiarly fine tuned for any physical process that one cares to name in the universe. If the laws of physics were much different, tornadoes might never be able to form. Could we not play with the parameters and see ways in which tornadoes might be affected if, say, the charge on an electron were much different? Is the universe peculiarly fine tuned for the existence of tornadoes? After all, tornadoes and complex wind systems of all varieties seem to be very common indeed when compared with places where life can supposedly flourish. We know that storm systems are relatively common throughout the solar system and it is likely that around any star with a gas planet, high winds will exist on that planet. Further, it would seem from evolutionary biology that intelligence is far from being a convergent feature of evolution: the universe does not seem to especially favor its coming into existence once life has appeared. Consider a feature like wings: birds, insects, mammals and even fish and reptiles apparently all independently evolved wings or gliding, as they did eyes and many other features common to many species. Yet only one species out of the millions that have existed on this planet, as far as we know, ever evolved the ability to explain the universe it found itself in: Homo sapiens. The fact that intelligence is such a rare feature might imply something about its quirkiness; on the other hand, this does not necessarily diminish its significance. David Deutsch argues for the significance of intelligence on computational grounds: the way the universe works (namely, explanations in the form of the laws of physics) is reflected in the brains of human beings. There is a deep and fundamental connection between the large, complex behavior of the universe as a whole and the brains of human beings, which contain, embedded in grey matter, theories that can predict how that universe is going to evolve. Yet is intelligence, however significant, common in the universe? Is life common? If the universe is truly biofriendly, surely it should be?
“We could hardly find ourselves in a bio-hostile universe, could we?” asks Paul Davies (2007: podcast). I wish to venture that perhaps we could, and that the universe is maybe not so friendly after all. An argument ventured by Neil deGrasse Tyson at the “Beyond Belief 2006” conference asked exactly that question about how biofriendly the universe really is. Whilst it may be said that, yes, the form of the laws of physics seems special, nonetheless those laws lead to conditions that are, in fact, for the most part completely hostile to life. Indeed, so far as we know, it is only in a very narrow band upon one very small planet that the laws of physics permit life to arise. If this is the only place in such a vast universe where life can flourish, how justified is the assertion that the laws are “biofriendly” at all? Considering that almost all of the universe is completely devoid of life, could we not just as well argue about how hostile the laws are to life? The laws could have been such that life arose in far more places than it has. The laws could have been such that DNA could survive in a greater range of temperatures than it does, and such that more than a narrow band of solar energy is utilized. Physicists have fun playing with the idea that if we tinker with this or that parameter then life would be extinguished in an instant; yet how many combinations of parameters might instead lead to a universe where life is more robust, evolves more quickly and is able to survive in a far greater variety of conditions than it currently does? Such studies would be speculative and have not been comprehensively done to date. It is certainly the case that “tinkering with the parameters” more often than not seems to lead to a universe which is either featureless or in some other way hostile to life (if the proton were slightly heavier than the neutron, rather than the other way around, all protons would decay into neutrons; without protons you cannot have atoms, without atoms you cannot have chemistry of any sort, and so no life, to provide just one example). However, just because we find ourselves in a universe that happens to permit life does not mean that this is the only kind of universe that would, or even that this one is particularly friendly to life. Certainly there are degrees of friendliness or hostility to life, and that quantification has not yet been done. It might turn out that we are right on the border between hostility and friendliness, and that other combinations of the parameters would lead to far more life and intelligence than we see in this, our universe.
If we accept the hypothesis that we are in a biofriendly universe, without further defining what we really mean by “biofriendly”, we encounter the same two explanations as before for why this is the case: the intelligent designer, or the multiverse hypothesis. The idea that ours is one universe of many goes back at least to Leibniz in the 17th century, who spoke of other possible worlds and ours being “the best” of them. The 20th century philosopher David Lewis, in “On the Plurality of Worlds”, took seriously the idea that all possible worlds actually exist in some sense. If this is the case, then we simply find ourselves in the (or a) universe where the conditions are just right, and we should not be surprised that things “seem” fine tuned. This means that the way things are (that is, the form that the laws of physics take) is completely random, as all possible sets of laws are realized out there somewhere. It is no surprise, therefore, that we find ourselves in a universe that does indeed seem peculiarly fine-tuned for life. There are various meanings of the word “multiverse” in the scientific vernacular now: that which describes the set of all physically possible worlds as permitted by quantum theory (in all of these universes the laws of physics are the same, but the conditions are different), and that class of all logically possible worlds (of which the former “multiverse” is but a small subset) in which all possible sets of physical laws and values for the parameters are somewhere realized.
“I am the alpha and the omega” (Revelation 22:13)
The biofriendliness of the fine structure constant in particular
According to Davies (2007), although the calculation has not been done, it may be the case that the value of the fine-structure constant determines, in part, exactly how much carbon is produced in stars throughout the universe. Thus, if α were much different from what it is, the whole carbon-chemistry enterprise may not have been possible, and without it the ability of life to get going and remain sustained would be threatened. Perhaps the value that α takes determines things like the likelihood of metabolic processes occurring: the ability of oxidative phosphorylation, or of any reaction in which electrons are transferred from one chemical species to another, is likely regulated by the strength of the electromagnetic interaction and hence by α. However, it is worth noting that although the detailed calculations about the sensitivity of carbon chemistry to the value of the fine structure constant might not yet have been done, one calculation that has been done is the sensitivity of big bang nucleosynthesis to α. The fine structure constant determines in part the ratio of hydrogen to helium produced during big bang nucleosynthesis. If α were smaller, the electrostatic repulsion between protons would have been weaker and diprotons could have been bound (Murphy, powerpoint, 2007); given the current value of α, at least one neutron is required to mediate the strong nuclear force and overcome the repulsion between the protons. So if α were much different in the very early universe, a much greater proportion of helium would have been produced. More precisely, the ratio of hydrogen to helium depends exponentially on quantities to which α contributes. This would mean the first stars that formed would have survived for a far briefer period and the nuclear fuel in the universe would quickly have been used up. Billions of years ago all stars would have died: life would not be possible, as we would now live in a universe which has long since suffered heat death.
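A toy calculation can show where this exponential sensitivity comes from. The neutron-to-proton ratio freezes out at roughly exp(−Q/kT), where Q is the neutron-proton mass difference (to which electromagnetism, and hence α, contributes) and T is the freeze-out temperature; nearly all surviving neutrons then end up in helium. The freeze-out temperature and the variations tried below are crude illustrative assumptions, not values from the literature.

import math

# A back-of-envelope sketch of why the primordial helium fraction is exponentially
# sensitive to the neutron-proton mass difference Q, and through it to alpha.
def helium_mass_fraction(Q_MeV, T_freeze_MeV=0.8):
    n_over_p = math.exp(-Q_MeV / T_freeze_MeV)   # equilibrium n/p ratio at freeze-out
    return 2 * n_over_p / (1 + n_over_p)         # helium mass fraction if all neutrons end up in 4He

print(helium_mass_fraction(1.293))   # ~0.33 with these crude numbers
print(helium_mass_fraction(1.0))     # a modestly smaller Q gives noticeably more helium (~0.45)

The observed helium fraction is closer to 0.25, because some neutrons decay before nucleosynthesis begins, but the exponential dependence on Q, and through it on α, is the point of the sketch.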
If the fine structure constant increased substantially, carbon and most other atoms would become unstable and disintegrate, while if it decreased substantially, water and most other molecules would become unstable. This is because an increase in α corresponds to an increase in the strength of the electromagnetic force, whilst a decrease corresponds to a weakening of it. To be more explicit: if α goes up, the repulsion between protons in the nucleus increases to such an extent that the nucleus becomes unstable, the proton-proton repulsion becoming too great for the strong force to hold it together; if α goes down, the electrostatic attraction that forms chemical bonds becomes too weak to keep molecules together. Indeed, if what has so far been observed is correct, that the fine structure constant was smaller in the past and has been increasing as time goes on, then extrapolation suggests that if the universe continues to expand, the fine structure constant may eventually become too large for carbon atoms to remain stable, as the electrostatic repulsion between the protons becomes too great. If this occurs, then arguments about just how biofriendly the universe is will be moot: it really will just be a fortuitous time that we live in. Billions or trillions of years from now, life will be quite impossible.
Davies’ Third Way and a changing fine structure constant
Let us grant the concession that the laws and parameters are (at least for now) peculiarly biofriendly, to the extent that they seem exquisitely fine tuned, and treat claims that the universe itself is actually rather biohostile as nothing but a quibble. Paul Davies has argued that we need not necessarily stand in the metaphysical camp of “God did it” or of “all possible sets of physical laws exist somewhere”. Instead, Davies proposes what he sees as a somewhat more scientific theory that he has called, tongue in cheek, “Davies’ Third Way”. It entails the hypothesis that the laws of physics seem to have “focused in” on those which are biofriendly. Davies applies the laws of quantum physics, specifically a form of Heisenberg’s Uncertainty Principle, to the laws themselves. In the same way that there is an uncertainty or “fuzziness” in the position, momentum, energy or indeed any other physical quantity that a particle may possess, this kind of fuzziness could in principle be applied to the laws themselves, and by extension to the constants. He uses the example of quantum entanglement to describe what he means: spatially separated particles can be linked in such a way that a measurement made upon one determines the outcome of a measurement made on the other. Similarly, quantum theory permits a form of “backward causation”, at least in the sense that if a later state of a system is captured, then we can infer possible histories of the particle(s) involved. Extending this idea to the universe itself, the laws of physics may have started out fuzzy, but as time went on they picked out values for the constants, and forms for the laws, that are delicately fine tuned for life. Once upon a time, α may have been substantially different from what it is today. If it were very different, would life be possible today? Let us consider a concrete example: the charge on the electron, one of the constants which makes up the fine structure constant. Is the charge on the electron an infinitely precise quantity? According to Davies, it cannot be, because the computational resources of the universe are finite and so there can be no infinitely precise quantities. As we go further back towards the big bang, the computational resources of the universe become smaller as the size of the universe, and the amount of matter within it, decreases. Seth Lloyd from MIT famously calculated the computational capacity of the universe (around 10¹²⁰ elementary operations on roughly 10⁹⁰ bits; see Lloyd, 2006) and, while the number itself is not important, the fact that a limit exists at all is. The charge on an electron cannot be infinitely precise for this reason, and so in the past, when the computational resources of the universe were smaller, the charge on an electron was specified even less precisely; by extension all the parameters were less precisely specified than they are now, and so we can say that the laws of physics themselves possessed some degree of “fuzziness”. Why that fuzziness has reduced in such a way as to permit the existence of life may yet prove to be an interesting question to explore. Davies’ third way suggests that the universe may have some “purpose” and that the purpose is, in part, the emergence of awareness. We are that part of the universe that is aware of itself.
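The arithmetic behind the finite-precision point is worth making concrete. A register of N bits can distinguish only about 2^N values, so it can pin down a dimensionless constant to roughly N·log₁₀2 decimal digits and no more; a younger, smaller universe with a smaller bit budget could specify its constants correspondingly less sharply. The bit budgets below are arbitrary stand-ins, not Lloyd's figures for any particular epoch.

import math

# A toy illustration of "finite computational resources => finite precision".
def max_decimal_digits(bits):
    return bits * math.log10(2)   # a register of N bits distinguishes ~2^N values

for bits in (1e10, 1e60, 1e90):
    print(f"{bits:.0e} bits -> at most ~{max_decimal_digits(bits):.1e} decimal digits")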
If the laws and parameters started out fuzzy and then sharpened to the values that they now possess, it might be hypothesized that the universe in some way “knew” about us coming into existence. This seems wildly speculative metaphysics; however, some weight is lent to the argument by the fact that at least one constant does appear to have changed over time. We now know with a reasonably high degree of confidence that the fine structure constant was smaller in the past. It is plausible that at even earlier epochs it was smaller still, and that perhaps soon after, or even during, inflation its value was very different. We can take these claims more seriously now that there is evidence for a changing constant. The fine structure constant currently possesses a value which permits life to exist and flourish in at least one place in the universe. Did the universe itself in some way know this? Did it choose, from the ensemble of all possible parameter values, just the right mix of laws and constants to enable itself to become aware of itself? Given that backward causation is not explicitly prohibited by quantum theory, and given that at least one constant appears to have changed over time, this kind of speculation is coming in for scientific rather than simply philosophical treatment.
Moving back to Davies’ original claim: if we apply Heisenberg’s Uncertainty Principle to the constants themselves, then perhaps the measurement that we make of the fine structure constant today must yield the value it does because of the anthropic principle, while the further back in time we go, the less tightly the number needs to be fixed. Yet because we make the measurement now, we constrain the value that this parameter takes now.
Of course, there may yet prove to be no connection at all between the idea that the parameters of the universe cannot be specified to infinite precision, owing to the universe’s finite computational resources, and the fact that we have actually measured a change in at least one of those parameters. This is the author’s own speculation; however, perhaps part, at least, of the explanation for changing parameters is that the laws of physics are software running on the matter hardware of the universe, and the resources of that computer, its memory and its processing speed, are changing as time goes on.
Extra Dimensions?
Superunification theories, such as the various M-theories of which string theory is a well-known example, postulate the existence of extra spatial dimensions. If these other dimensions exist they are very small, but their evolution over time, driven by the Hubble expansion of space, would have as a consequence a variation in fundamental constants such as α. Indeed, a variation in the fundamental parameters may be cited as experimental evidence that these theories are true descriptions of reality. Support for the findings of the earlier quasar studies can be found in Tzanavaris et al. (2005). Flambaum writes that “in superstring theories – which have additional dimensions compactified on tiny scales – any variation of the size of the extra dimensions results in changes in the 3-dimensional coupling constants” (Flambaum, 2006).
Conclusions
The studies by Webb et al. using the many multiplet method are the first to show that a hitherto assumed-to-be constant of nature has varied in time. The technique is an extremely precise one in which systematic errors are minimized. The best constraint upon the variation in the fine structure constant is Δα/α = (−0.574 ± 0.102) × 10⁻⁵; that is, α was smaller in the past. Given the definition of α, it is reasonable to ask whether it is e, h or c that has varied. Although no experiment can ever decide this question, some theoretical considerations, such as those from black hole thermodynamics, seem to restrict the change to a decrease in hc rather than an increase in e. Whatever the case, a changing fine structure constant has ramifications for many areas of physics. An explanation for a varying constant such as α will not come from quantum theory, but from some deeper, more fundamental theory. String theory is one such possibility: if the extra dimensions of string theory are changing in size, then the fine structure constant would be expected to change in the way we observe. Further, the fact that the fine structure constant has changed, and has changed to a value that many consider “biofriendly”, is an issue worth exploring in itself. Although the magnitude of the change so far observed is not large enough to cause any observable differences to chemistry or to the possibility of life, the mere fact that a change is possible at all leaves the door open for the change to continue into the future, perhaps to a point where the laws become biohostile. Only time will tell just how fine tuned the fine structure constant remains.
References
Barrow, J. “From alpha to omega: The Constants of Nature” Jonathan Cape (2002)
Barrow, J. “Varying Constants” arXiv: astro-ph/0511440v1. October 2007
Davies, P. “The Goldilocks Enigma” Penguin Books (2007)
Davies, P. “Life, The Universe and Everything” Cosmos Magazine (Issue 14, 2007)
Davies, P. “Science and the city” Podcast, April 2007.
Davies, P., Davis, T., & Lineweaver, C. “Black holes constrain varying constants” Nature, 418, 602 (8 August 2002).
Drinkwater, M., J. K. Webb, J. D. Barrow , V. V. Flambaum “Limits on the Variability of Physical Constants.” Published in: Structure and Evolution of the IGM from QSO Absorption Line Systems, IAP Colloquium and available at http://xxx.lanl.gov/PS_cache/astro-ph/pdf/9709/9709227v1.pdf
Flambaum, V. “Variation of fundamental constants” (2006) http://xxx.lanl.gov/PS_cache/physics/pdf/0608/0608261v1.pdf
Lloyd, S. “Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos” Knopf (2006).
Murphy, M. “Probing variations in the fundamental constants with quasar absorption lines.” Thesis submitted for the award of PhD to UNSW, (2002) available at http://astronomy.swin.edu.au/~mmurphy/pub.html#thesis
Murphy, M. “Variable Fundamental Constant?” (2007) http://astronomy.swin.edu.au/~mmurphy/res.html
Murphy, M. “Publicity” http://astronomy.swin.edu.au/~mmurphy/pub.html#publicity
Murphy, M. “Variable Constants” Swinburne Astronomy Online PowerPoint presentation (2007)
Schewe, P. and Stein, B. “Is the fine structure constant changing?” Physics News Update, Number 410 (Story #1), January 13, 1999, The American Institute of Physics News Bulletin. http://www.aip.org/enews/physnews/1999/split/pnu410-1.htm
Tzanavaris, P., Webb, J. K., Murphy, M. T., Flambaum, V. V., & Curran, S. J. “Limits on Variations in Fundamental Constants from 21-cm and Ultraviolet Quasar Absorption Lines” Physical Review Letters, vol. 95, Issue 4 (2005).
Webb, J., Murphy, M. T., Flambaum, V. V., Dzuba, V. A., Churchill, C. W., Prochaska, J. X., Barrow, J. D., & Wolfe, A. M. “Possible evidence for a variable fine structure constant from QSO absorption lines: motivations, analysis and results” Mon. Not. Roy. Astron. Soc. 327 (2001) 1223.