When Albert Einstein constructed his general theory of relativity, he resorted to a bit of reverse engineering and introduced a ‘pressure’ term into his equations. The value of this pressure was chosen such that it kept the general relativistic description of the universe stable against the gravitational attraction of the matter filling it. Einstein never really liked this fudge factor, but it was the only way to get the equations of general relativity to describe a universe that is static in size.
More than ten years later, Edwin Hubble’s observations showed that the universe is in fact not static, but expanding. With this, the need for the pressure term disappeared. Einstein must have felt floored: if only he had stuck to the bare equations without the fudge factor, he could have predicted the universe to be non-static. Throughout his later life, Einstein kept referring to the introduction of the pressure term as his ‘biggest blunder’.
Had Einstein lived to the very end of the 20th century, he would certainly have revised this verdict. Sure, our universe is expanding, but since the end of the 1990s we have known that this expansion is accelerating. Today the universe is expanding faster than it did yesterday, and tomorrow it will be expanding faster still. Without Einstein’s fudge factor a decelerating expansion is to be expected, and the pressure term is needed to switch from a description yielding a decelerating universe to one yielding an accelerating universe.
What is causing this pressure that pushes space apart at an ever accelerating rate? Cosmologists refer to a ‘dark energy’ permeating space as what propels this cosmic acceleration. To explain the observed accelerated expansion of the universe, this dark energy must comprise the vast majority of the total energy content of the universe. Recent observations point to a dark energy density corresponding to roughly one Planck energy (or equivalently: one Planck mass of about 20 micrograms) per cube measuring 1000 km on a side. The fact that such a tiny density constitutes the dominant component of our universe just demonstrates the vast emptiness of space.
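A quick back-of-the-envelope check of this figure is easy to do. The sketch below uses round numbers that are my own assumptions rather than values from any particular survey: a Planck energy of about 2×10⁹ J, and an observed dark energy density of roughly 6×10⁻¹⁰ J/m³ (about 70% of the critical density).

```python
# Sanity check: one Planck energy per cube of 1000 km on a side,
# compared with the (assumed) observed dark energy density.
PLANCK_ENERGY = 1.96e9   # J: Planck mass (~2.2e-8 kg, i.e. ~22 microgram) times c^2
OBSERVED_RHO = 6e-10     # J/m^3: rough observed dark energy density (assumed)

volume = (1000e3) ** 3   # (1000 km)^3 in m^3, i.e. 1e18 m^3
rho = PLANCK_ENERGY / volume

print(f"one Planck energy per (1000 km)^3: {rho:.1e} J/m^3")   # ~2e-9 J/m^3
print(f"observed dark energy density:      {OBSERVED_RHO:.1e} J/m^3")
```

The two agree to within a factor of a few, which is all the ‘roughly’ above promises.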
But what is this dark energy? No one knows. The most likely explanation is that dark energy is quantum mechanical in origin. In fact, most physicists would probably agree that dark energy results from quantum fluctuations, if only this led to predictions of the right magnitude for the dark energy effect. However, the standard quantum field-theoretical (QFT) approach leads to an overestimate of the dark energy density. How much of an overestimate? Well, any statement one can make on this will be an understatement. Applying standard quantum field theory considerations, vacuum fluctuations can be estimated to lead to an energy density of one Planck energy per Planck length cubed. That is a Planck energy per cube with sides of 0.000 000 000 000 000 000 000 000 000 000 000 016 m. A volume a wee bit different from a cube 1000 km on a side.
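Extending the sketch above (same assumed round numbers, plus a Planck length of 1.6×10⁻³⁵ m) makes the size of the overshoot explicit:

```python
# The naive QFT vacuum energy density: one Planck energy per Planck volume.
PLANCK_ENERGY = 1.96e9    # J
PLANCK_LENGTH = 1.6e-35   # m
OBSERVED_RHO = 6e-10      # J/m^3: rough observed value (assumed, as above)

rho_qft = PLANCK_ENERGY / PLANCK_LENGTH ** 3
print(f"QFT vacuum energy density: {rho_qft:.1e} J/m^3")             # ~5e113 J/m^3
print(f"overshoot factor:          {rho_qft / OBSERVED_RHO:.1e}")    # ~1e123
```

An overestimate by some 120-odd orders of magnitude: often called the worst prediction in the history of physics.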
Where have we gone wrong?
Some simple dimensional analysis hints at a potential solution. Two key length scales enter the problem: the Planck length ℓ and the cosmic scale L (read: the diameter of the observable universe). The contrast between the two is vast: 61 orders of magnitude. Wouldn’t it be a huge surprise if these two extreme length scales could be combined into a volume of the right size to describe the dark energy density? Well – surprise, surprise – this is easy to achieve. The experimental value of the dark energy density happens to coincide with one Planck quantum per volume of size L²ℓ. Yet, as we saw above, standard quantum field theory predicts a zero-point energy density of one Planck quantum per ℓ³. Can we change two of the ℓ’s in this equation into L’s?
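This coincidence is worth checking numerically. In the sketch below the diameter of the observable universe is taken to be roughly 8.8×10²⁶ m (about 93 billion light-years); that value, like the others, is an assumed round figure.

```python
# Dimensional-analysis guess: one Planck energy per volume L^2 * l.
PLANCK_ENERGY = 1.96e9    # J
PLANCK_LENGTH = 1.6e-35   # m: the Planck length l
L = 8.8e26                # m: diameter of the observable universe (assumed)

print(f"L / l: {L / PLANCK_LENGTH:.1e}")         # ~5e61: the 61 orders of magnitude
rho_guess = PLANCK_ENERGY / (L ** 2 * PLANCK_LENGTH)
print(f"E_P / (L^2 l): {rho_guess:.1e} J/m^3")   # ~1.6e-10 J/m^3
# versus an observed value of roughly 6e-10 J/m^3: the same order of magnitude
```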
Yes we can. The key is to realize that the ℓ³ volume enters the theoretical description because standard QFT assumes one degree of freedom per Planck cube. So according to QFT our universe has a total of (L/ℓ)³ degrees of freedom. This, however, ignores the holographic nature of our universe that was postulated by Gerard ’t Hooft in 1993. The holographic principle states that standard QFT vastly overestimates the number of degrees of freedom available. More precisely, the holographic principle forbids a system of linear size L to have more than (L/ℓ)² degrees of freedom. So this in itself already changes one ℓ in the equation for the dark energy density into an L. But there is more. QFT associates a zero-point energy of one Planck unit with each degree of freedom. This does not necessarily carry over into a holographic description. The degrees of freedom in the holographic description are non-local, and the wavelengths corresponding to the zero-point motion can probably be linked to the macroscopic length L, rather than to the microscopic length ℓ. This effect (embodied in the so-called ‘UV/IR connection’) gives us another swap between ℓ and L in the equation for the dark energy density, so that with all holographic effects incorporated we arrive at ℓ/L Planck energies per volume of size ℓ²L, or equivalently, one Planck energy per volume of size L²ℓ.
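Written out schematically (an order-of-magnitude bookkeeping with all numerical factors of order unity dropped), the two swaps look like this:

```latex
\begin{align*}
\text{naive QFT:} \quad
  \rho &\sim \frac{(L/\ell)^3 \, E_P}{L^3} = \frac{E_P}{\ell^3}
  && \text{(one d.o.f. with energy } E_P \text{ per Planck cube)}\\
\text{holographic d.o.f. count:} \quad
  \rho &\sim \frac{(L/\ell)^2 \, E_P}{L^3} = \frac{E_P}{\ell^2 L}
  && \text{(first } \ell \to L \text{ swap)}\\
\text{plus UV/IR:} \quad
  \rho &\sim \frac{(L/\ell)^2 \, (\ell/L) \, E_P}{L^3} = \frac{E_P}{L^2 \ell}
  && \text{(second } \ell \to L \text{ swap)}
\end{align*}
```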
Is all this the correct way to look at the expansion of our universe? Or is the Planck energy per volume L²ℓ some coincidence? I don’t know the answer. What I do know is that if the above is in essence correct, holographic considerations will be an integral element of the still elusive theory of quantum gravity. It is also clear that the strict holographic cut-offs on the number of degrees of freedom and on the allowed energies per degree of freedom will be of immense help in regularizing this theory of quantum gravity. History tells us that experimentally demonstrated discrepancies in our understanding of the fundamental laws of physics never last more than a few decades. So I dare to make the prediction that in the first half of this century we will witness a revolution in our thinking about the universe, in the form of a fully consistent theory of quantum gravity. These are exciting times!