Triple or Bust Paradox

Expectation values failing to predict long-term gains


Today I have a decision problem for you.

Alice offers Bob participation in a simple coin toss game. It’s called triple or bust. Alice starts the game by writing an IOU to Bob for an amount of $ 1.00. Alice then makes at least six subsequent tosses with a fair coin. On each ‘heads’ Alice triples the IOU amount. On ‘tails’ she sets the IOU to zero. How much should Bob be prepared to pay Alice to participate in this game, knowing that he can repeat this game as often as he likes?

Okay, let’s see: on each coin toss there is a 50:50 chance of tripling and of voiding the IOU. So on average, after a coin toss the IOU increases to 3/2 times the amount before the toss. That means that after the first coin toss, the expectation value for the IOU is $ 1.50. After two tosses the expectation is 1.50 times $ 1.50, or $ 2.25. This exponential growth continues, and after six tosses the expected IOU amount has risen to 1.50^6 or $ 11.39. Any coin tosses after the sixth will obviously continue the exponential growth of the expected IOU. In the long run, the game will yield returns closing in on the expectation value. So paying any amount less than $ 11.39 per game will make it advantageous to participate.
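This line of reasoning is easy to check numerically. Below is a minimal simulation sketch of the fixed-length game (six tosses, as in the text; the seed is arbitrary). The sample mean indeed closes in on 1.50^6 ≈ $ 11.39, even though the vast majority of individual games pay nothing:

```python
import random

def play(n_tosses=6):
    """One game: the IOU starts at $1, triples on heads, is voided on tails."""
    iou = 1.0
    for _ in range(n_tosses):
        if random.random() < 0.5:   # heads: triple the IOU
            iou *= 3.0
        else:                       # tails: the IOU is void
            return 0.0
    return iou

random.seed(1)
n_games = 1_000_000
payouts = [play() for _ in range(n_games)]

mean_payout = sum(payouts) / n_games
zero_fraction = sum(p == 0.0 for p in payouts) / n_games
print(mean_payout)     # close to 1.5**6 = 11.39...
print(zero_fraction)   # roughly 63/64 of games pay nothing
```

Only a fraction (1/2)^6 = 1/64 of games pays the full $ 729; the expectation 729/64 ≈ 11.39 is carried entirely by those rare wins.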

Bob has worked out the same logic and decides to offer Alice $ 10.00 per game.

Alice immediately accepts.

Bob pays Alice $ 10.00, Alice writes an IOU of $ 1.00, and starts tossing. Heads shows. Alice changes the IOU into $ 3.00. Again heads. The IOU is now $ 9.00. Then tails appears. “No need for any further coin tosses, okay?” Alice looks at Bob. Bob nods. Alice rips the IOU in pieces.

Bob decides to go for another game. Alice pockets another $ 10.00. Now tails shows in the first round. Once more an IOU gets shredded.

Bob is in it for the long haul, chasing a very profitable expectation value. He keeps playing.

After 37 games Bob has lost $ 370.00. Bob pays another $ 10.00. This time he is luckier. After five heads in a row the IOU reads $ 243. Alice makes a sixth coin toss. Again a head. “Yes!! That’s 729 dollars!” Bob blurts out.

Alice writes down $ 729 on the IOU and prepares for a seventh coin toss.

“Wait a second” Bob intervenes. “Don’t throw another coin, just give me the 729 dollars.”

“I will give you another coin toss for free”, Alice replies. “As agreed upfront, I am entitled to give you additional coin tosses. I am sure you have incorporated this game feature into your decision to offer me $ 10. Haven’t you?”

Bob nods silently and stares at Alice’s hand containing the coin. She makes a seventh coin toss. Again heads. The IOU now reads $ 2187. An eighth toss follows. Tails. Alice rips the IOU in pieces.

Bob shakes his head and quits the game.


What went wrong?  We have not made an error in our math, and neither has Bob. Something must be wrong in the logic.

It is correct that the expectation value for this game increases exponentially with the number of coin tosses. And for a fixed number of tosses per game, this expectation value does describe the returns that Bob will make in the long run. It is also true that, having agreed on at least six tosses, in the long run it is disadvantageous for Alice to add a seventh coin toss. And, again in the long run, it is even more disadvantageous for her to add an eighth toss. Yet, giving Alice full liberty in adding any number of additional tosses gives her the power to make a killing in this game. Bob is guaranteed to lose every penny he puts into this game.
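Alice’s guaranteed win can be made vivid with a few lines of code. In this sketch Alice simply keeps granting ‘free’ tosses until tails shows, at which point the IOU is void; every single game then pays Bob exactly nothing:

```python
import random

def play_until_alice_stops():
    """Alice keeps adding free tosses until tails appears, voiding the IOU."""
    iou = 1.0
    while True:
        if random.random() < 0.5:   # heads: triple the IOU and toss again
            iou *= 3.0
        else:                       # tails: the IOU is void, game over
            return 0.0

random.seed(1)
payouts = [play_until_alice_stops() for _ in range(100_000)]
print(max(payouts))   # 0.0 -- tails eventually shows in every game
```

With probability one, tails eventually shows, so the IOU Bob walks away with is always zero; no finite expectation-value argument changes that.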

What is going on here?

The challenge is to understand the role of the expectation value for this game. Centuries of statistics research is based on applying expectation values as predictors for long-term gains. Putting your brain to sleep by ignoring the expectation value is not going to eliminate the paradox.

Zee’s Nutshell Trilogy

The books you wished you had when you were a student

Today I took delivery of my copy of Tony Zee’s third contribution to the Princeton University Press In a Nutshell series: “Group Theory in a Nutshell for Physicists”. With this book Tony has delivered a trilogy on fundamental physics. The earlier two books cover quantum field theory (“Quantum Field Theory in a Nutshell“) and general relativity (“Einstein Gravity in a Nutshell“).

Having had the book in my possession for just a few hours and having thumbed through it, I am happy to report that this book lives up to my high expectations and can stand shoulder to shoulder with Zee’s other two Nutshell books. I am not going to review the book here, but I do feel triggered to say a few words on Zee’s Nutshell trilogy in general. Hopefully, for those of you not familiar with Zee’s writing but eager to self-study fundamental physics, this will allow you to decide if these books are right for you.

For a start, if not clear already, I plead guilty to being a Zee fanboy. Not only is Zee’s Nutshell trilogy present on my bookshelves, I also own Zee’s popular science writings “An Old Man’s Toy“, and “Fearful Symmetry“. Stretching things a bit, these latter two books can be seen as the popular science versions of book 2 (Einstein Gravity) and book 3 (Group Theory) from the Nutshell trilogy. Stretching a bit more, one could classify Feynman’s “QED“, which contains an introduction by Zee, as the popular science version of Zee’s first Nutshell book (Quantum Field Theory). You can find Zee’s introduction to “QED” at his homepage. This short text provides you with an appetizer for Zee’s writing style: informal, peppered with anecdotes, and rich in lighthearted remarks.

Throughout the Nutshell trilogy, Zee focuses on building physics intuition. If you are looking for strict mathematical rigor and formal derivations: forget about Zee’s Nutshell and look elsewhere. If you are looking for practical advice and eye-opening perspectives that help you build physics intuition on rather abstract subjects, the Nutshell trilogy is probably for you. Having said this: I don’t want to leave you with the impression that anywhere in his Nutshell books Zee is sloppy in his math. Rather, I would judge these books as striking a healthy balance between handwaving and mathematical rigor.

The level addressed by Zee is that of an advanced undergraduate to graduate fundamental physics course. In self-study terms I’d say that Zee’s trilogy assumes a solid understanding of The Feynman Lectures on Physics, as well as a firm basis in linear algebra and analysis. From there, the three tomes bring you pretty close to research level. Quite a tall order, and it should not surprise you that Zee’s trilogy counts a total of 2000 pages. I suspect that the majority audience for these books is not those whose self-study brings them into first-time contact with the areas studied. Rather it is an audience of readers who studied physics or a related discipline a while ago, but who feel the wish to revisit ‘some of that stuff’ to build a deeper and more up-to-date (and likely better) understanding. This group of readers is well served, as Zee set himself the standard to write the books “I wished I had when I was a student”.

Verlinde’s Dark Universe

Verlinde’s stab at the dark universe remains a stab in the dark

Lots of people have asked me for my views on Erik Verlinde’s latest paper “Emergent Gravity and the Dark Universe“. This fifty-one-page preprint has attracted a fair bit of media attention. Particularly in the Netherlands, Verlinde’s name being attached to the draft paper has caused a true hype. Un-Dutch roaring headlines in the Dutch national newspapers include: “Breakthrough Theory: Dark Matter Is Utter Illusion – Dutch Professor Rivals Einstein“, “We Are at the Brink of a Revolution that could be larger than Quantum Physics and Relativity Combined“, and “Breakthrough Article on Gravity Renders Verlinde the Most Celebrated Scientist of 2016“.

Last week I found myself standing in the back of a room somewhere in the south of the Netherlands. Erik Verlinde kicking off the session on dark matter at physics@veldhoven was the probable cause for the room being packed.

Erik Verlinde facing a packed room at physics@veldhoven
Just like other physicists, I am eagerly awaiting results from the various dark matter detection experiments. I certainly do not consider myself to belong to the group described by one of the subsequent speakers as ‘dark matter deniers’. At the same time, I do feel the standard model of cosmology contains too many coincidences to convince me dark matter is real. On balance, I remain sympathetic towards papers that provide an alternative to the somewhat baroque ‘gravity + dark matter + dark energy’ description of our universe. Verlinde’s paper states that this trinity can be reduced to the duo ‘gravity + dark energy’. In other words, Verlinde claims that in a universe with dark energy, the long-range effects of gravity get modified such that dark matter appears to be present.

With a lot of hand waving I can dumb-down Verlinde’s position as follows:

1) Spacetime (and gravity, its curvature) is emergent from the information captured in quantum correlations. This in itself is by no means a new concept. Emergent spacetime is best understood for a model universe containing nothing other than ‘dark anti-energy’ (so-called anti-de Sitter space) and goes under the cryptic label ‘ER=EPR’.

2) In a more realistic spacetime containing solely dark energy (so-called de Sitter space), the ER=EPR correspondence still applies, albeit with a significant complication: non-local quantum correlations come into play. This claim is new. If correct, it implies the breakdown of the much-celebrated holographic principle first proposed by Erik’s MSc thesis adviser, Nobel laureate Gerard ‘t Hooft.

3) Due to competition between local and non-local quantum correlations, emergent DeSitter spacetime does not thermalize over large length scales, thereby causing a ‘glassy behavior’ and ‘elastic dynamics’ which lead to long-range deviations in the gravitational behavior commonly attributed to dark matter.

So, to eliminate dark matter, Verlinde requires fundamental degrees of freedom that are non-holographic in nature and that also feature non-equilibrium behavior. Particularly at point 3) the paper is rather impenetrable (at least for me) and it is unclear to me how exactly the ‘glassy dynamics’ emerges. In his talk Verlinde didn’t address this point.

For the time being, we may step over any issues in the derivation, as in the end what matters is how successful Verlinde is in quantifying the apparent dark matter. The formula he proposes (equation 7.40 on page 38 in Verlinde’s preprint) adds to the gravitational acceleration a ‘dark acceleration’. The equation he provides applies to static mass distributions with spherical symmetry only, and can be condensed into:

⟨a²⟩ = c H₀ g / 2

Here, g denotes the (constant) gravitational acceleration over a spherical surface centered around a spherically symmetric mass distribution, the angular brackets denote averaging over the whole sphere, a represents the apparent ‘dark acceleration’, c is the speed of light and H₀ the current value of the Hubble constant. This represents a MOND-type modified gravity. Just like the phenomenological MOND description, Verlinde’s equation can be expected to struggle in describing dark-matter phenomena such as the acoustic oscillations in the cosmic microwave background (CMB).
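To get a feel for the scale of this dark acceleration, one can plug in rough numbers. The sketch below uses an assumed Hubble constant of about 68 km/s/Mpc and an illustrative Newtonian acceleration g for the outskirts of a Milky-Way-like galaxy; neither value is taken from Verlinde’s paper:

```python
import math

c  = 2.998e8     # speed of light, m/s
H0 = 2.2e-18     # Hubble constant, ~68 km/s/Mpc expressed in 1/s (assumed)

# Illustrative Newtonian acceleration at the outskirts of a galaxy,
# roughly GM/r^2 for M ~ 1e11 solar masses at r ~ 30 kpc (assumed)
g = 1.6e-11      # m/s^2

# Dropping the spherical averaging, <a^2> = c*H0*g/2 gives:
a_dark = math.sqrt(c * H0 * g / 2)

print(f"c*H0   = {c * H0:.1e} m/s^2")   # the MOND-like acceleration scale
print(f"a_dark = {a_dark:.1e} m/s^2")
```

At these radii the ‘dark acceleration’ comes out comparable to, and even somewhat larger than, the Newtonian g itself, which is exactly the regime where galactic rotation curves call for dark matter.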

My final verdict? I had hoped Verlinde’s lengthy paper would culminate in an equation with wider applicability. Cosmology has evolved into a high-precision scientific discipline thanks to a wealth of quantitative information on the CMB. Verlinde’s paper doesn’t address dark matter effects in the CMB. It is unlikely that Verlinde’s approach will attract a professional following anywhere near what the Dutch newspaper headlines suggest, unless Verlinde manages to apply his approach successfully to the acoustic oscillations in the CMB or any other area where MOND fails.

Until that happens I am most happy with my tax money going to dark matter detection experiments.


Holographic Dark Universe

Dark energy in a holographic universe.

When Albert Einstein constructed his general theory of relativity he decided to resort to some reverse engineering and introduced a ‘pressure’ term in his equations. The value of this pressure was chosen such that it kept the general relativistic description of the universe stable against the gravitational attraction of the matter filling the universe. Einstein never really liked this fudge factor, but it was the only way to get the equations of general relativity to describe a universe that is static in size.

More than 10 years later, Edwin Hubble’s observations showed that the universe is in fact not static, but rather expanding. With this, the need for the pressure term disappeared. Einstein must have felt floored: had he only stuck to the bare equations without the fudge factor, he could have predicted the universe to be non-static. Throughout his later life, Einstein kept referring to the introduction of the pressure term as his ‘biggest blunder’.

Einstein and Hubble

Had Einstein lived until the very end of the 20th century, he would certainly have revised this verdict. Sure, our universe is expanding, but since the end of the ’90s we know that this expansion is accelerating. Today the universe is expanding faster than yesterday, and tomorrow it will be expanding faster still than today. Without Einstein’s fudge factor, a decelerating expansion is to be expected, and the pressure term is needed to switch from a description yielding a decelerating universe to one that yields an accelerating universe.

What is causing this pressure that is pushing space apart at ever accelerating rates? Cosmologists refer to ‘dark energy’ permeating space as what propels this cosmic acceleration. In order to explain the observed accelerated expansion of the universe, this dark energy should comprise the vast majority of the total energy content in the universe. Recent observations lead to a dark energy density in the universe corresponding roughly to one Planck energy (or equivalently: one Planck mass of about 20 microgram) per 1000 km cubed. The fact that this tiny density constitutes the dominating component of our universe just demonstrates the vast emptiness of space.

But what is this dark energy? No one knows. The most likely explanation is that dark energy is quantum mechanical in origin. In fact, most physicists would probably agree that dark energy results from quantum fluctuations, if only this would lead to predictions of the right magnitude of the dark energy effect. However, the standard quantum field-theoretical (QFT) approach leads to an overestimate of the dark energy density. How much of an overestimate? Well, any statement one can make on this will be an understatement. Applying standard quantum field theory considerations, vacuum fluctuations can be estimated to lead to an energy density of one Planck energy per Planck length cubed. That is a Planck energy per cube with sides of 0.000 000 000 000 000 000 000 000 000 000 000 016 m. A volume a wee bit different from 1000 km cubed.
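The size of the mismatch is easy to quantify with a back-of-the-envelope check, comparing one Planck energy per Planck volume to one Planck energy per (1000 km)³ as quoted above:

```python
import math

l_planck = 1.6e-35          # Planck length, m
v_qft    = l_planck ** 3    # QFT estimate: one Planck energy per Planck volume
v_obs    = (1.0e6) ** 3     # observed: one Planck energy per (1000 km)^3, in m^3

# How badly QFT overestimates the dark energy density
ratio = v_obs / v_qft
print(f"overestimate: ~10^{math.log10(ratio):.0f}")   # ~10^122
```

This is the notorious cosmological constant problem: an overestimate by some 122 orders of magnitude, often called the worst prediction in the history of physics.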

Where have we gone wrong?

Some simple dimensional analysis hints at a potential solution. There are two key length scales entering the problem: the Planck length ℓ and the cosmic scale L (read: the diameter of the observable universe). The contrast between the two is vast: 61 orders of magnitude. Wouldn’t it be a huge surprise if these two extreme length scales could be combined into a volume of the right size to describe the dark energy density? Well – surprise, surprise – this is easy to achieve. The experimental value of the dark energy density happens to coincide with one Planck quantum per volume of size L²ℓ. Yet, as we saw above, standard quantum field theory predicts a zero-point energy density of one Planck quantum per ℓ³. Can we change two of the ℓ’s in this equation into L’s?

Yes we can. Key is to realize that the ℓ³ volume enters into the theoretical description because standard QFT assumes one degree of freedom per Planck cube. So according to QFT our universe has a total of (L/ℓ)³ degrees of freedom. This however ignores the holographic nature of our universe that was postulated by Gerard ‘t Hooft in 1993. The holographic principle states that standard QFT vastly overestimates the number of degrees of freedom available. More precisely, the holographic principle forbids a system of linear size L to have more than (L/ℓ)² degrees of freedom. So this in itself already changes one ℓ in the equation for the dark energy density into an L. But there is more. QFT associates a zero-point energy of one Planck unit with each degree of freedom. This does not necessarily carry over into a holographic description. The degrees of freedom in the holographic description are non-local, and the wavelengths corresponding to the zero-point motion can probably be linked to the macroscopic length L, rather than to the microscopic length ℓ. This effect (embodied in the so-called ‘UV/IR connection’) gives us another swap between ℓ and L in the equation for the dark energy density, so that with all holographic effects incorporated we arrive at ℓ/L Planck energies per volume of size ℓ²L, or equivalently, one Planck energy per volume of size L²ℓ.
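As a sanity check on the arithmetic, one can put in numbers. The Planck energy and the diameter of the observable universe used below are rounded assumed values, so agreement to within an order of magnitude is all one can ask of such a dimensional-analysis estimate:

```python
E_planck = 1.96e9   # Planck energy, J
l = 1.6e-35         # Planck length, m
L = 8.8e26          # diameter of the observable universe, m (assumed)

rho_holo = E_planck / (L**2 * l)    # one Planck energy per volume L^2 * l
rho_obs  = E_planck / (1.0e6)**3    # one Planck energy per (1000 km)^3

print(f"holographic estimate: {rho_holo:.1e} J/m^3")
print(f"value quoted in text: {rho_obs:.1e} J/m^3")
```

The two densities indeed land within roughly an order of magnitude of each other, which is as good as it gets for an estimate that ignores all factors of order unity (and of 2π).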

Is this all the correct way to look at the expansion of our universe? Or is the Planck energy per volume L²ℓ some coincidence? I don’t know the answer. What I do know is that if the above is in essence correct, holographic considerations will be an integral element of the still elusive theory of quantum gravity. It is also clear that the strict holographic cut-offs to the number of degrees of freedom and the allowed energies per degree of freedom will be of immense help to regularize this theory of quantum gravity. History tells us that experimentally demonstrated discrepancies in our understanding of the fundamental laws of physics never last for more than a few decades. So I dare to make the prediction that in the first half of this century we will witness a revolution in our thinking about the universe in the form of a fully consistent theory of quantum gravity. These are exciting times!