Why SpaceX won’t turn us into a multi-planetary species 

Fighting the laws of physics yields at best logarithmic progress, and rocket propulsion technology is no exception

Anyone announcing the successful sale of tourist trips around the moon would attract ridicule and laughter. Unless your name is Elon Musk. In that case the announcement amounts to nothing more than a logical and rather modest step towards Musk’s promise of getting a million people to live on Mars.

One might question why humanity would be interested in colonizing an inhospitable planet. Sure, rising CO2 levels in Earth’s atmosphere do pose a challenge, and this issue might make a few of us long for a second planet. But when given a choice between a planet with a CO2 level a notch above 0.04% and a planet with an atmosphere consisting of 96% CO2, the choice seems pretty clear to me. Risks other than rising CO2 levels are no different: trading Earth for Mars means going from bad to worse.

But let’s set aside the question of the need to migrate humans to Mars. I want to focus instead on the plausibility that Elon’s company, SpaceX, can indeed pull off its promise of human emigration to Mars. I am going to cut Elon some slack: I will not insist on the aggressive timeline he put forward (Mars colonization starting in 2024). So the question is: can the rocket technology utilized by SpaceX ultimately be expected to deliver Mars colonization?

Have a look at the diagram above. It shows historical records for the distance humans have moved away from Earth’s surface. This distance, the altitude, is measured in Earth diameters. Note that the vertical axis covers a huge range in altitudes, with each tick mark representing a factor 1000 increase. The right-hand side indicates which altitudes correspond to LEO (Low Earth Orbit), to Moon travel, to Interplanetary Travel, and to Interstellar Travel.

Two technologies are shown: 1) lighter-than-air balloon technology, and 2) chemical propulsion rocket technology. You don’t need to be a rocket scientist to spot that both technologies are characterized by a short initial period of rapid progress, followed by a long period of painstakingly slow progress. The transition between both regimes occurs when engineers hit upon fundamental limitations to the technology. When past the transition, progress is still feasible but this progress is logarithmically slow and typically realized by brute-force attacks.

The human altitude record for balloon technology starts in 1783, when the Montgolfier brothers launch a balloon on a tether. In it is Jean-François Pilâtre de Rozier, a chemistry and physics teacher. De Rozier stays aloft for almost four minutes at a height of 24 m, and makes it safely back to Earth. The altitude reached is modest, but the first airborne human is a fact. From that point on, record after record gets broken. In less than a year the Montgolfier brothers 10-fold and again 10-fold the altitudes reached by humans. In the process De Rozier, the first airborne human, also becomes the first fatality in an air crash. But this doesn’t stop progress, and altitudes of a few kilometers are reached. Then, just two years after the first airborne human, progress slows down considerably. It takes more than two centuries for the next 10-folding of altitudes to take place.

From a physics perspective it is clear that, as higher altitudes are reached, balloon builders start combating thinner and thinner atmospheres. Every next step in increasing altitude requires an exponential increase in the volume-to-mass ratio of the manned balloon. It is this challenge that causes progress to slow down.

The human altitude record established with rockets follows a similar curve. This curve starts off with Yuri Gagarin’s 1961 space flight reaching a record altitude of 325 km. This makes Gagarin the first human to officially leave Earth’s atmosphere and reach empty space. A few years later Gagarin, the first man in space, dies in a test flight. But progress in reaching higher altitudes is fast, and less than a year after Gagarin’s death the first humans orbit the Moon, thereby establishing an altitude record exceeding Gagarin’s by as much as three orders of magnitude.

But then progress in reaching farther from Earth stalls. Two years later, in 1970 the human altitude record gets improved, but only marginally. The record of 401,056 km still holds today, almost half a century later. If we are optimistic and assume SpaceX is successful next year (!) in improving upon this altitude record, we get the rocket altitude curve as shown above.
Here too, from a physics perspective it is clear what is happening. With chemical propulsion technology the exhaust velocity for rockets is limited to 4.4 km/s. Given this limitation, reaching farther and attaining higher speeds (in rocket scientist speak: obtaining a higher delta-v) requires an exponential increase in the fueled-to-empty mass ratio of rockets. It is this challenge that makes progress stall.
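The exponential penalty can be made concrete with the Tsiolkovsky rocket equation. A minimal sketch in Python; the delta-v figures below are rough, commonly quoted ballpark values for illustration, not SpaceX numbers:

```python
import math

def mass_ratio(delta_v_km_s, exhaust_velocity_km_s):
    """Fueled-to-empty mass ratio required by the Tsiolkovsky rocket
    equation: delta-v = v_e * ln(m_full / m_empty)."""
    return math.exp(delta_v_km_s / exhaust_velocity_km_s)

V_E = 4.4  # km/s, roughly the best exhaust velocity achievable with chemical propulsion

# Illustrative (assumed) delta-v budgets: reaching LEO, and LEO plus a
# one-way trans-Mars injection.
for label, dv in [("LEO (~9.4 km/s)", 9.4), ("LEO + Mars transfer (~13 km/s)", 13.0)]:
    print(f"{label}: required mass ratio ~ {mass_ratio(dv, V_E):.1f}")
```

Every extra km/s of delta-v multiplies the required mass ratio by another factor e^(1/4.4), which is the exponential wall the curve runs into.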

Setting aside technical details, just eyeballing the rocket technology progress curve drives home the conclusion that deep space exploration by humans is well past the era of rapid progress. This immediately raises the question “what technical miracle is SpaceX counting on?” It is clear that what SpaceX needs is a novel propulsion technology generating exhaust velocities well above the 4.4 km/s mark. This would create a novel curve in the above plot, one that flattens out at much higher altitude values. Yet the sobering news is that SpaceX’s ITS (Interplanetary Transport System) is not based on any novel propulsion technology. ITS is based on the same chemical rocket propulsion technology that is responsible for the absence of progress in the above plot. It is the propulsion technology that proved extremely useful in bringing humans and payload into LEO. The technology can even be stretched to bring humans to the moon and back. But counting on it to colonize Mars carries the flavor of counting on balloons to bring humans into space. Won’t happen.

Andromeda: No Escape

Gravitationally you are more strongly bound to Andromeda than you are to Earth

If one of these days you find yourself under a dark night sky, have a look at the constellation Andromeda. With bare eyes you should just be able to spot a faint smudge in this constellation. You need sharp eyes that are well-adapted to the dark. It definitely helps if you happen to carry with you a pair of binoculars. And the dark should be real dark. That means a spot far away from city lights. Also the moon, with its overwhelming brightness, needs to be out of sight.

Once you have spotted it, look more closely at that faint smudge. It is the furthest object you can see with bare eyes. You are looking at a galaxy comparable to, but somewhat larger than, our Milky Way galaxy. It is the enormous distance between you and Andromeda that reduces it to a faint smudge in the night sky. The light from this galaxy has been traveling an amazing 2.5 million years to reach you. In comparison, no human has ever reached a spot from which light would need to travel more than 1.3 seconds to reach Earth. The distance traveled by light in two-and-a-half million years is way beyond human comprehension. Yet you are more strongly bound to Andromeda than to Earth.


You read that correctly. You are gravitationally more strongly bound to Andromeda than you are to Earth.

Andromeda: a faint smudge in the night sky


Let me make that more precise. Gravitation makes you stick to Earth, and this gravitational binding is pretty strong. To escape Earth’s gravitational pull from your present position, you would need to jump up at a speed of about 11 km per second (7 mi/s). No small task. And that is ignoring any drag due to Earth’s atmosphere. However, to escape that faint smudge in the sky, you need to jump much more fiercely. In fact, you need to jump such that you achieve a speed of 88 km/s (55 mi/s) relative to that same smudge. And no, I am not cheating, it’s a like-for-like comparison. It is you again jumping from your same present position, and again ignoring atmospheric drag.
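A back-of-the-envelope check, treating Andromeda as a point mass (justified at this distance). The Andromeda mass of roughly 7 × 10^11 solar masses, including its dark matter halo, is an assumed figure chosen for illustration:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # m

def escape_velocity(mass_kg, distance_m):
    """Escape velocity from a spherical mass: v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / distance_m)

# Earth, jumping from its surface
v_earth = escape_velocity(5.972e24, 6.371e6)

# Andromeda: assumed total mass ~7e11 solar masses, at 2.5 million light years
v_andromeda = escape_velocity(7e11 * M_SUN, 2.5e6 * LIGHT_YEAR)

print(f"Earth:     {v_earth / 1000:.1f} km/s")
print(f"Andromeda: {v_andromeda / 1000:.0f} km/s")
```

With these assumed inputs the Andromeda figure comes out close to the 88 km/s quoted above, roughly eight times Earth's escape velocity.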

Few people realize the amazing reach of gravity. Gravity adds up. Andromeda, with its trillion stars, is vastly heavier than Earth, and the overwhelming gravitational attraction that comes with it easily compensates for the enormous distance. The fact that you are gravitationally bound to Andromeda makes everything around you – Earth, the solar system and the whole Milky Way – bound to Andromeda. It should therefore not surprise you that the Milky Way is on a head-on collision course with Andromeda. Both galaxies are falling into each other. Don’t worry, this is a long fall: you and I won’t witness its final stage, and neither will your children, your grandchildren, your great-grandchildren, … , and so on up to your grand-to-the-power-100,000,000-children. And when the galaxy merger finally takes place, it will perhaps be a most welcome event, as around that time we – if indeed we still exist – will need some forceful intervention to pull us away from the sun, which soon thereafter will blow up and turn into a red giant.

Rational Suckers

Braess’ paradox, a multiplayer Prisoner’s Dilemma, leading to avoidable suffering

Why do people skip queues, cause traffic jams, and create delays for everyone? Who are these misbehaving creatures lacking basic cooperation skills? Are they really all that different from you? Are you perhaps one of them?

Various situations involving social interaction drag you into a negative sum game, and make you part of a misbehaving gang. Welcome to Braess’ paradox.


Each morning at rush hour a total of 600 commuters drive their cars from point A to point B. All drivers are rational individuals eager to minimize their own travel time. The road sections AD, CB and CD are so capacious that the travel time on them is independent of the number of cars. The sections AD and CB always take 10 minutes, and the short stretch CD takes no more than 3 minutes. The bridges, however, cause real bottlenecks, and the time taken to traverse AC or DB varies in proportion to the number of cars taking that route. If N is the number of cars passing a bridge at rush hour, then the time to cross the section with this bridge is N/100 minutes.
Given all these figures, each morning each individual driver decides which route to take from A to B. Despite the freedom of choice for each commuter, and despite all traffic flow information being available to each and every commuter, the outcome of all individual deliberations creates a repetitive treadmill. Each morning all 600 commuters crowd the route ACDB and patiently wait for the traffic jams at both bridges to resolve. The net result is a total travel time of 600/100 + 3 + 600/100 = 15 minutes for each of them.

Does this make sense?

At this stage you may want to pause and consider the route options. If you would be one of the 600 commuters, would you join the 599 others in following route ACDB?

Of course you would. There is no faster route. Alternative routes like ACB or ADB would take you 600/100 + 10 = 16 minutes, a full minute longer than the preferred route ACDB. So each morning you and 599 other commuters travel along route ACDB and patiently queue up at both bridges.

One day it is announced that the next morning the road stretch CD will be closed for maintenance work. This announcement is the talk of the day. Everyone agrees that this planned closure will create havoc. Were section AD or CB to be closed, it would have no impact, as these are idle roads. But section CD is used by each and every commuter. What poorly planned maintenance: a closure of such a busy section should never be scheduled for rush hour!

The next morning all 600 commuters enter their cars expecting the worst. Each of them selects between the equivalent routes ACB and ADB. The result is that the 600 cars split roughly 50:50 over both routes, and that both bridges carry some 300 cars. Much to everyone’s surprise all cars reach point B in no more than 300/100 + 10 = 13 minutes. Two minutes faster than the route ACDB preferred by all drivers.

How can this be? If a group of rational individuals each optimize their own results, how can they all be better off when their individual choices are being restricted? How can it be that people knowingly make choices that can be predicted to lead to outcomes that are bad for everyone?

Asking these questions is admitting to the wishful thinking that competitive optimization should lead to an optimum outcome. Such is not the case: when multiple individuals compete for an optimal outcome, the overall result is an equilibrium, not an optimum. We saw this in the game Prisoner’s Dilemma, and we see it here in a situation referred to as Braess’ paradox.

A question to test your understanding of the situation: what do you think will happen the next day when section CD is open again? Would all players avoid the section CD and stick to the 50:50 split over routes ACB and ADB, a choice better for all of them?

If all others would do that, that would be great news for you. It would give you the opportunity to follow route ACDB and arrive at B in a record time of about 9 minutes (300/100 + 3 + 301/100 minutes to be precise). But of course all other commuters will reason the same. So you will find yourself with 599 others again spending 15 minutes on the route ACDB. And even with the benefit of hindsight none of you will regret the choice you made: any other route would have taken you longer. Yet all of you surely hope for that damn shortcut between C and D to be closed again.
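The three travel times discussed above can be verified in a few lines. The route encoding and helper function below are my own illustration, not part of the original setup:

```python
def travel_time(route, n_ac, n_db):
    """Travel time in minutes for one car, given the bridge loads n_ac and n_db."""
    times = {"AC": n_ac / 100, "DB": n_db / 100, "AD": 10, "CB": 10, "CD": 3}
    return sum(times[leg] for leg in route)

# Everyone on ACDB: both bridges carry all 600 cars.
print(travel_time(["AC", "CD", "DB"], 600, 600))  # 15 minutes

# CD closed, 300/300 split over ACB and ADB.
print(travel_time(["AC", "CB"], 300, 300))        # 13 minutes

# CD reopened; the others keep the 300/300 split, you alone defect to ACDB.
print(travel_time(["AC", "CD", "DB"], 300, 301))  # about 9 minutes
```

The last line is what makes the 13-minute split unstable: a lone defector gains four minutes, so everyone defects, and the 15-minute equilibrium reasserts itself.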

And don’t assume this phenomenon doesn’t occur in real life.

Triple or Bust Paradox (part 2)

Beware of expectation values based on vanishing probabilities

A week ago I discussed the coin toss game ‘triple or bust‘. The game is between Alice and Bob. Alice starts the game by writing a $ 1.00 IOU to Bob. Alice then makes at least six subsequent tosses with a fair coin. On each ‘heads’ Alice triples the IOU amount. On ‘tails’ she sets the IOU to zero.

The question is: how much should Bob be prepared to pay Alice to participate in this game?

As Bob can repeat this game as often as he likes, he focuses on the gains to be obtained in the long run. These are given by the expectation value for this game, which is easy to calculate. The game starts with an IOU dollar value of 1.00. On each coin toss the average IOU increases to 3/2 times the amount before the toss. That means that after the nth coin toss, the expectation value for the IOU is (3/2)^n. Alice can prevent this value from growing out of control by stopping at n = 6 (completing six tosses). This sets the expectation value for the game to $ 11.39.

However, one might also reason as follows: in each game Alice can continue the tossing until tails shows. This voids the IOU and Bob will walk away empty handed. The game is worthless to Bob.

How to reconcile both reasonings?

The key issue is: what do we mean by the phrase “in the long run”? How many repeat games are required to achieve a gain per game that is close to the expectation value? The chance for Bob to win a game of n tosses is 1 out of 2^n. To reach the expectation value, the number of repeats of the game, N, needs to be large enough to yield a number of wins much larger than 1. This means N >> 2^n.

Let’s tell Alice to make exactly n tosses per game, and see how Bob fares for a given number of repeats of the game. Suppose Bob has enough time to spare to reach N = 10000. For this N value, at around n = 13.3 we have 2^n equal in magnitude to N. So for n, the number of tosses per game, significantly smaller than 13 (N >> 2^n), Bob can expect to walk away with close to 1.5^n dollars per game. For n reaching values close to 13 or 14, it is completely uncertain whether Bob will win a game. Effectively, after 10000 repeat games, Bob’s total return takes the shape of a lottery ticket. For n much larger than 14 (N << 2^n) Bob has vanishing odds of securing a win.

The above plot shows the earnings for Bob obtained in three independent runs of 10000 games each. The horizontal axis shows n the number of coin tosses per game. The vertical axis shows Bob’s earnings per game divided by the expectation value 1.5^n. In line with the reasoning above, Bob’s earnings follow the expectation value. Around n = 14 the fluctuations in his earnings become large, and a transition happens. Above n = 14 Bob becomes progressively unlikely to make any earnings. If Alice has full freedom in increasing n, Bob is guaranteed to walk away empty handed.
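A simulation along these lines reproduces the transition. This is a minimal sketch of the experiment, not the code behind the plot:

```python
import random

def play_game(n_tosses, rng):
    """One game: the IOU starts at $1, triples on heads, is voided on tails."""
    iou = 1.0
    for _ in range(n_tosses):
        if rng.random() < 0.5:   # heads
            iou *= 3.0
        else:                    # tails: IOU set to zero, game over
            return 0.0
    return iou

def mean_earnings(n_tosses, n_games, rng):
    """Average payout per game over n_games repeats."""
    return sum(play_game(n_tosses, rng) for _ in range(n_games)) / n_games

rng = random.Random(42)  # fixed seed for reproducibility
N = 10_000
for n in (3, 8, 14, 20):
    ratio = mean_earnings(n, N, rng) / 1.5 ** n
    print(f"n = {n:2d}: earnings / expectation = {ratio:.3f}")
```

For small n the ratio hovers near 1; around n = 13 or 14 it fluctuates wildly between runs; well above that, Bob almost always earns nothing at all.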

In essence, the cause of the expectation value failing to represent the earnings of a participant in this game is the skewness of the payout distribution increasing without bound. Mathematically, the expected earnings for this game become ill-defined because taking the limit of N going to infinity (calculating the expectation value) followed by taking the limit of n going to infinity (allowing Alice an unlimited number of tosses) gives a different evaluation from the one based on reversing these two limits.

The same phenomenon of unbounded skewness in the payout distribution causes the expectation values of the well-known Saint Petersburg paradox to misrepresent the likely earnings.

Homo Retaliens

Why the urge to retaliate?

Game theory models human decisions based on just two characteristics: rationality and selfishness. This minimalistic approach teaches us a lot about the character of economic behavior and the emergence of strategies built on cooperation, competition, retaliation, etc.

By far the most well-known game-theoretical scenario concerns the prisoner’s dilemma (PD). This game is rather boring from a game theory perspective, yet over the years it has attracted an impressive amount of attention, particularly so in the pop science media. The reason for all the attention is that the predicted outcome for this simple game surprises most people. Game theory tells us that in PD rational players focused on optimising their own gains will knowingly avoid the win-win situation for this game and settle for a smaller return.

How can this be?

The simple answer is that in any game rational players will end up in an equilibrium of individual choices, and not necessarily in an optimum. PD is a game designed to render manifest the difference between the equilibrium outcome and the optimal outcome.

An example PD scenario runs as follows: you and your co-player are facing a task. Both of you have the choice to contribute towards accomplishing the task or to free ride. You both have to decide simultaneously and without any opportunity for communication. If both of you decide to free ride, nothing gets accomplished and you both walk out with no gain. Each player putting in an effort increases the total gain by 4 units. This gain will be split equally over both participants. Putting in an effort comes at an individual cost of 3 units.

It should be clear that this game forces a rational selfish individual to free ride. Regardless of what choice the other makes, avoiding any effort from your side makes you avoid an investment of 3 units that would deliver you only 2 units of gain. As this applies to both players, the outcome will be both players walking away empty handed, which is an inferior outcome compared to the 1 unit gain each could have received by contributing to accomplish the goal.
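The payoff structure of this example, and the dominance argument, can be written out explicitly. The dictionary encoding below is just my illustration of the scenario described above:

```python
# Payoffs (you, other). Each effort adds 4 units split equally (2 each)
# and costs the contributor 3 units. C = contribute, F = free ride.
payoff = {
    ("C", "C"): (1, 1),    # each: 2 + 2 - 3 = 1
    ("C", "F"): (-1, 2),   # contributor: 2 - 3; free rider: 2
    ("F", "C"): (2, -1),
    ("F", "F"): (0, 0),    # nothing accomplished
}

# Free riding strictly dominates contributing for the row player:
for other in ("C", "F"):
    assert payoff[("F", other)][0] > payoff[("C", other)][0]

print("Free riding dominates, so both players free ride and earn", payoff[("F", "F")])
```

Note that (F, F) with payoff (0, 0) is the equilibrium even though (C, C) with (1, 1) would leave both better off, which is exactly the PD structure.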

Although there is no paradox in this result, many people remain skeptical towards PD’s sub-optimal outcome. Some people reason they would play the game differently. They motivate these alternative strategic choices by introducing emotional rewards (“I prefer the other being happy”) or punishments (“if the other free rides, fine: it won’t make him happy”). However, we should not lose sight of the fact that the payoffs in PD are assumed to include all consequences – monetary or otherwise – of the strategic choices made. In other words, one should consider the payoffs to quantify the changes in happiness attributable to the various game outcomes.

However, considering non-monetary gains or losses does pose a genuine challenge to PD. One might ask “is a PD-type game even feasible when incorporating real-life non-monetary consequences?”.

I am quite convinced that in human social settings PD-type encounters are the exception rather than the norm. This is because the severity of the consequences resulting from social regulation (retaliation and punishment) is not bounded. If such drivers are present, PD games can be rendered unfeasible. For instance, in a society with strong retaliation morals a PD payoff structure is not achievable.

To see why, let’s consider again the above PD example. We change the strategic choice ‘contribute’ into ‘contribute and retaliate’. Under this strategic choice a player contributes, but if the other turns out to free ride, a fight is started that will cost both parties and that will continue until the free rider’s net gain has dropped to what he would have earned if both had opted for a free ride. Such a change in payoff structure changes the game from PD (prisoner’s dilemma) into the coordination game SH (stag hunt). This change is irreversible: in the presence of retaliative behavior, it is not possible to recover a PD payoff structure no matter how one chooses to tune the monetary payoffs.

Compared to PD, the SH game still carries the same win-win outcome (mutual contribution) with the same return per player, and also still carries the same lose-lose outcome (mutual free ride) with the same zero gain per player. However, in contrast to the PD situation, in SH the win-win situation does form a rational choice (a Nash equilibrium). This is the case due to the fact that, given the choice of the other player, in an SH win-win neither player regrets their own choice. Effectively what happened is that under the win-win scenario the cost that would be incurred by retaliation eliminates any regrets after the fact.
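A sketch of the resulting stag hunt payoffs, continuing the example. The fight cost of 1 unit borne by the retaliator is an assumed, illustrative number; any positive cost gives the same structure:

```python
# R = contribute-and-retaliate, F = free ride. If the other free rides, the
# fight drags the free rider's net gain down to 0 (the mutual free-ride
# payoff) and costs the retaliator a further 1 unit (assumed fight cost).
payoff = {
    ("R", "R"): (1, 1),    # both contribute, nobody fights
    ("R", "F"): (-2, 0),   # contributor at -1 pays 1 more to fight
    ("F", "R"): (0, -2),
    ("F", "F"): (0, 0),
}

def is_nash(a, b):
    """True if neither player gains by unilaterally switching their choice."""
    other_a = "F" if a == "R" else "R"
    other_b = "F" if b == "R" else "R"
    return (payoff[(a, b)][0] >= payoff[(other_a, b)][0]
            and payoff[(a, b)][1] >= payoff[(a, other_b)][1])

print([(a, b) for a in "RF" for b in "RF" if is_nash(a, b)])
```

The check finds two equilibria, (R, R) and (F, F), which is the stag hunt signature: unlike in PD, the win-win outcome is now a rational, regret-free choice.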

We conclude that PD games cease to be feasible under the threat of retaliation. When including the costs associated with retaliation, what might seem to be a PD game, turns out to be an SH game. By eliminating PD games, retaliation also eliminates the inevitability of suboptimal (mutual free ride) outcomes. The same effect is seen when introducing retaliation options by repeating PD between players. In repeat PD games ‘contribute and retaliate’ strategies (tit-for-tat approaches) dominate over ‘free ride’ strategies.

Retaliation is a disruptive phenomenon. Amongst selfish individuals it eliminates PD-type games and helps them avoid getting stuck in suboptimal outcomes. One might therefore speculate that groups with retaliative behaviors carry an evolutionary advantage over groups without such a trait. Whether this is correct or not, retaliation is certainly widespread amongst human societies.

In any case: if you do not feel at ease with the two-player PD game outcome, your intuition probably is correct. But don’t fall into the trap of translating this discomfort into futile challenges towards the outcome of the game. Instead, challenge the PD scenario itself. I have yet to see a two-person decision scenario that is best described as PD (rather than as a coordination game).

Now this doesn’t carry over into multi-player games. Many situations of humans failing to cooperate towards a greater good can be modeled as multi-player extensions of PD. But that is a subject deserving its own blog post.

How to Stomach a Black Hole

The enigmatic spacetime nature of black holes

A black hole is not some cosmic vacuum cleaner. A black hole hardly even classifies as a tangible object. The inside of any tangible object can, at least in principle, be inspected from the outside. You can scan your vacuum cleaner with X-rays and thereby reveal its internal workings. But a black hole’s ‘inner workings’ cannot be inspected from the outside. The only way to peer inside a black hole is by ‘being there’. Just like you can peer into the future only by ‘being there’.

So let’s stop thinking about black holes as tangible objects. It is more insightful to think of a black hole as localized future that has separated from, and lost contact with, the future developing outside the black hole. The defining characteristic for black holes is that anyone or anything inside a black hole, can not influence the universe outside the black hole. No signal (no light, no X-ray, no gravitational wave, nothing that carries information) can be transmitted from inside the black hole to observers outside the black hole. And obviously, this also implies that any object inside can not leave the black hole. 

The scenario usually considered that leads to ‘finding yourself inside a black hole’ is the scenario of a static black hole. A black hole in eternal existence. The only way to find yourself inside such a black hole is by falling into it. Yet there is an alternative scenario leading to the situation of ‘being inside a black hole’. It’s the scenario of a black hole growing from inside you. And this alternative scenario gives a better insight into the global spacetime nature of black holes. 

We are going to perform a gedanken experiment. This will tell you how to grow a black hole starting from inside your belly, and quickly expanding far beyond our solar system. And… the ‘you’ in this gedanken experiment will survive all of this. Heck, you even won’t notice you’re inside a black hole. 

You need some astronauts to build a cosmic wall for you. A huge spherical shell of bricks centred around you. Regular red 4 lb bricks will do fine. But you will need loads of them. A tredecillion bricks to be precise. That will give you a thin spherical wall as heavy as our whole galaxy, positioned far beyond the orbits of the planets in our solar system. In preparation you start reading up on Schwarzschild radii, and conclude that in order to prevent the shell of bricks from forming a black hole, the spherical wall should be large enough for a pulse of light starting at the center to travel more than 100 days before reaching the wall.
So you instruct an army of astronauts to build a spherical brick wall with a radius somewhat larger than 100 light days. For the given total mass of bricks, this radius has the particular advantage that the astronaut bricklayers building the wall experience a gravitational acceleration due to the wall comparable to the gravitational acceleration at Earth’s surface.
Once the sphere is complete, Earth is shielded from any starlight not coming from the sun. Yet nothing changes for us in terms of gravitational phenomena. Earth continues its orbit around the sun, and people continue to walk on Earth, fly in airplanes, and launch rockets. All of this is the case thanks to Newton’s shell theorem (or, if you prefer, thanks to its general relativistic cousin, Birkhoff’s theorem). In simple terms, this theorem states that a uniform spherical shell exerts no gravitational force on objects (Newtonian view) or does not bend spacetime (Einsteinian view) anywhere inside, while on objects anywhere outside it exerts a gravitational force (bends spacetime) equal to the force (bending) that would result if the full mass of the shell were concentrated at its center.
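A quick sanity check of these numbers, taking a tredecillion to be 10^42, a brick to be 4 lb ≈ 1.8 kg, and an assumed wall radius of 130 light days as the 'somewhat larger' value:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
LIGHT_DAY = C * 86400   # meters light travels in one day
BRICK_KG = 1.814        # 4 lb in kg

mass = 1e42 * BRICK_KG  # a tredecillion bricks, comparable to a galaxy's mass

# Schwarzschild radius: the wall must stay larger than this to avoid collapse.
r_s = 2 * G * mass / C ** 2
print(f"Schwarzschild radius: {r_s / LIGHT_DAY:.0f} light days")

# Newtonian gravitational acceleration just outside the wall (shell theorem:
# the shell acts as a point mass at its center for anyone outside it).
r_wall = 130 * LIGHT_DAY  # assumed radius, somewhat above the critical value
g = G * mass / r_wall ** 2
print(f"Acceleration at the wall: {g:.1f} m/s^2  (Earth surface: 9.8 m/s^2)")
```

The Schwarzschild radius indeed comes out at roughly 100 light days, and the wall's surface gravity lands close to what bricklayers feel on Earth.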
As long as your spherical cosmic wall retains a radius larger than 100 light days, no black hole will ever form and all is safe. But guess what. Under its own gravitational force the sphere of bricks starts shrinking. First slowly, but then at ever greater speed. As soon as the sphere shrinks to less than 100 light days radius…

… nothing happens. People continue walking on earth, and the planets continue their orbits around the sun. Nothing whatsoever changes to the gravitational phenomena inside the cosmic wall. This cosmic wall is getting smaller but stays spherical in shape, and the shell theorem still applies. 

So the daily life of people continues. But something did change. In fact, something did change 100 days earlier. Something more abstract. Earth’s future changed. For 100 days already, a grim destiny has awaited mankind.

Spacetime diagram of event horizon (pink) formation due to infalling shell of matter (blue). Time runs upward, space is reduced to two dimensions. Two outgoing laser flashes (green) are shown, one before the horizon forms, and one after.

One hundred days before the cosmic wall passed the critical radius of 100 light days, an event horizon got initialized. A boundary between an inside and everything else outside that is causally disconnected from that inside. This horizon started small somewhere. It could have started in your belly or in mine. Either way, none of us would have noticed. The horizon expands at the speed of light, within a fraction of a second encompassing all human beings, and reaching the shrinking spherical wall exactly at the moment its radius passes the threshold of 100 light days. Once the spherical wall has fallen through the horizon, the horizon stays constant in size, measuring 100 light days in radius.
This horizon seems a boundary entirely abstract in nature. Yet it is a boundary that acts as a watershed in spacetime, delineating markedly different futures, different destinies, on either side. Your destiny becomes apparent much later, well after the spherical wall passes through the critical size of 100 light days. In fact, the earliest you would be able to know for sure you are heading towards a grim future disconnected from the future outside, is 100 days after the spherical wall passes through the critical radius of 100 light days. Even then you would still experience the same gravitational phenomena, and Earth would still orbit the sun as if nothing had changed.

But what then is the destiny awaiting you? Would you be torn apart? Would spaghettification happen at some point?

No, none of this is going to happen. No matter how closely the wall of bricks approaches you, the shell theorem will keep you safe from any deadly gravitational effects. So then what disaster will happen? By now that should be clear. The shell theorem doesn’t protect you from being hit by a brick. And at some moment a true giga-tsunami of bricks will crush you out of existence.

Does the above puzzle you? Does it give you an uneasy feeling of retro-causality creeping in? Isn’t it strange that a horizon would start to grow 100 days before the wall of bricks passes through the critical radius? What if, seconds before the critical radius is reached, small rockets attached to the bricks were to fire, reversing the collapse of the wall and preventing the critical radius from being reached? Would the horizon that started growing 100 days earlier somehow be eradicated?

Such reasoning is based on the wrong intuition of a horizon being some tangible boundary. Black hole horizons are not tangible objects. They represent boundaries between distinct futures. If, at the last minute, the brick wall collapse gets reversed, we have to conclude that no horizon started to grow 100 days earlier. And no, this does not introduce any retro-causality as all that changes is in the future.
Twitter: @HammockPhysics 

Triple or Bust Paradox

Expectation values failing to predict long-term gains

Today I have a decision problem for you.

Alice offers Bob participation in a simple coin toss game. It’s called triple or bust. Alice starts the game by writing an IOU to Bob for an amount of $ 1.00. Alice then makes at least six subsequent tosses with a fair coin. On each ‘heads’ Alice triples the IOU amount. On ‘tails’ she sets the IOU to zero. How much should Bob be prepared to pay Alice to participate in this game, knowing that he can repeat this game as often as he likes?

Okay, let’s see: on each coin toss there is a 50:50 chance of tripling and of voiding the IOU. So on average, a coin toss increases the IOU to 3/2 times the amount before the toss. That means that after the first coin toss, the expectation value for the IOU is $ 1.50. After two tosses the expectation is 1.50 times $ 1.50, or $ 2.25. This exponential growth continues, and after six tosses the expected IOU dollar amount has risen to 1.50^6, or $ 11.39. Any coin tosses after the sixth will obviously continue the exponential growth of the expected IOU. In the long run, the game will yield returns closing in on the expectation value. So paying an amount less than $ 11.39 per game will make it advantageous to participate.
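The arithmetic in a few lines, for anyone who wants to follow along:

```python
# Expectation value of the IOU after each toss: heads triples the amount,
# tails voids it, so each toss multiplies the expectation by (3 + 0) / 2 = 1.5.
expected_iou = 1.0
for toss in range(1, 7):
    expected_iou *= 1.5
    print(f"after toss {toss}: $ {expected_iou:.2f}")
```

The loop ends at $ 11.39, the figure quoted above.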

Bob has worked out the same logic and decides to offer Alice $ 10.00 per game.

Alice immediately accepts.

Bob pays Alice $ 10.00, Alice writes an IOU of $ 1.00, and starts tossing. Heads shows. Alice changes the IOU into $ 3.00. Again heads. The IOU is now $ 9.00. Then tails appears. “No need for any further coin tosses, okay?” Alice looks at Bob. Bob nods. Alice rips the IOU in pieces.

Bob decides to go for another game. Alice pockets another $ 10.00. Now tails shows in the first round. Once more an IOU gets shredded.

Bob is in it for the long haul, chasing a very profitable expectation value. He keeps playing.

After 37 games Bob has lost $ 370.00. Bob pays another $ 10.00. This time he is luckier. After 5 heads in a row the IOU reads $ 243. Alice makes a sixth coin toss. Again heads. “Yes!! That’s 729 dollars!” Bob blurts out.

Alice writes down $ 729 on the IOU and prepares for a seventh coin toss.

“Wait a second” Bob intervenes. “Don’t throw another coin, just give me the 729 dollars.”

“I will give you another coin toss for free”, Alice replies. “As agreed upfront, I am entitled to give you additional coin tosses. I am sure you have incorporated this game feature into your decision to offer me $ 10. Haven’t you?”

Bob nods silently and stares at Alice’s hand containing the coin. She makes a seventh coin toss. Again heads. The IOU now reads $ 2187. An eighth toss follows. Tails. Alice rips the IOU in pieces.

Bob shakes his head and quits the game.


What went wrong?  We have not made an error in our math, and neither has Bob. Something must be wrong in the logic.

It is correct that the expectation value for this game increases exponentially with the number of coin tosses. And for a fixed number of tosses per game, this expectation value does describe the returns that Bob will make in the long run. It is also true that, having agreed on at least six tosses, in the long run it is disadvantageous for Alice to add a seventh coin toss. And, again in the long run, it is even more disadvantageous for her to add an eighth toss. Yet giving Alice full liberty in adding any number of additional tosses gives her the power to make a killing in this game. Bob is guaranteed to lose every penny he puts into this game.

What is going on here?

The challenge is to understand the role of the expectation value for this game. Centuries of statistics research is based on applying expectation values as predictors for long-term gains. Putting your brain to sleep by ignoring the expectation value is not going to eliminate the paradox.