Be A Winner: Ignore The Facts!

Game theory sheds light on the question “how come people get away with ignoring the facts?”

Facts are our friends when it comes to making the right decisions. Facts can reveal, ahead of the decision, that a certain choice will inevitably lead to sub-optimal results: results that will be regretted. Yet we see such ill-fated choices getting pushed ahead on a daily basis. Facts don’t seem to matter anymore.

How do people get away with fact-ignoring decisions? Surely in competitive situations these people will be crushed by those who do consider the facts and make well-informed decisions.

Well… you might think so. But think again. A move guaranteed to be sub-optimal can be a winning move.

Huh?

Let me make this more precise by introducing a simple game. 

Investment Dilemma 
You are part of a group of 500 players who face an investment decision. Each of you has two choices: invest a dollar, or don’t invest. Each dollar invested yields two dollars of return to each and every participant. (Sounds like an amazing yield? Just think of each dollar getting invested in much-needed infrastructure that benefits every participant.) All of you make your individual decisions at the same time, without any opportunity for prior discussion. You are all tasked to maximize your own return.

What do you do? Do you invest one dollar, or will you defect?

The choice should be clear. You can’t influence what the others do; you simply have to live with the return that results from their choices. But you are in control of the return resulting from your own decision. If you don’t invest, it costs you nothing, but it also contributes nothing to the payout coming your way. The net gain resulting from a decision to defect is zero. If you do invest, it costs you one dollar, while your payout increases by two dollars. The net gain resulting from a decision to invest is one dollar.
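
To make the arithmetic explicit, here is a minimal sketch in Python (the function name and encoding are mine, not part of the game’s statement):

```python
def net_gain(you_invest, other_investors):
    """Your net gain, given your own choice and how many of the
    499 other players invest (each invested dollar pays you $2)."""
    payout = 2 * (other_investors + int(you_invest))
    cost = 1 if you_invest else 0
    return payout - cost

# Investing beats defecting by exactly $1, whatever the others do:
for k in range(500):
    assert net_gain(True, k) == net_gain(False, k) + 1

print(net_gain(True, 499))   # all 500 invest: everyone walks out $999 richer
```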

That was easy. You decide to invest.

And so do the 499 other players. You will walk out of the game 999 dollars richer. And so does everyone else.

Except… when some players choose to defect. Each player who defects will decrease your return by two dollars. And… each of them will walk out of the game with a net profit that is one dollar higher than yours. In the most extreme case you would be facing the situation where you, and none of the 499 others, invest. You gain one dollar, and all others walk away with double that gain.

Yet it was you who reasoned based on the relevant facts. You chose to invest. Your decision was optimal: you would not have improved your gain had you opted for the alternative. On the contrary, defecting would have made you leave the game empty-handed. All the others, on the other hand, made sub-optimal individual decisions. Any single one of them who had opted for ‘invest’ rather than ‘defect’ would have increased their own gain by one dollar. And yet, despite their fact-ignoring moves, each of them is walking away with a gain larger than yours.

What is happening is that each player opting for the fact-ignoring choice ‘defect’ destroys one dollar of their own return and two dollars of every other player’s return. Obviously, each and every player who made the fact-ignoring move ‘defect’ will happily ignore this factual analysis. At best they will walk out of the game with a big smile; at worst they will ridicule you and all others who made the move ‘invest’, and declare all of you a bunch of suckers.

Now you might object that this investment dilemma is a rather artificial game, deliberately constructed for this effect (ignorant moves scoring better than well-informed moves) to emerge. Surely in a more competitive zero-sum game, where your loss is my gain and your gain is my loss, moves destined to lead to sub-optimal results cannot be winners!

Hmmm, are you sure? Let’s consider the following multi-player zero-sum game.

Majority Prediction

You are part of a group of four players. Each of you faces the same two choices: choose “2” or choose “3”. All four of you make your choices simultaneously, without any opportunity for communication. You win if you have correctly predicted the number of players ending up under the majority choice. Three sets of outcomes need to be considered (a code sketch of the payoff rules follows the list):

  • If two players choose “2” and two players choose “3”, the majority size is 2. So the choice “2” wins, and two dollars in total move from the players who chose “3” to the players who chose “2”. The losers contribute equally and the money is spread evenly over the winners.
  • If three players choose “2” and one player chooses “3”, or if three players choose “3” and one player chooses “2”, the choice “3” wins. In these cases six dollars move from the losers (those who chose “2”) to the winners (those who chose “3”), again with the losers contributing equally and the winners benefitting equally.
  • If all four players choose “2”, or all four players choose “3”, there is no winner, and no money gets transferred.
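
To make these rules concrete, here is a minimal Python sketch of the payoff per player. The encoding (function name, the counter k3) is mine, not part of the game’s statement:

```python
def payoff(choice, k3):
    """Dollar gain for one player; k3 = number of players (out of 4,
    the player's own choice included) who chose "3"."""
    if k3 in (0, 4):                  # unanimous: no winner, no transfer
        return 0
    if k3 == 2:                       # 2-2 split: "2" wins $2 in total
        return 1 if choice == "2" else -1
    winners, losers = k3, 4 - k3      # 3-1 split: "3" wins $6 in total
    return 6 // winners if choice == "3" else -(6 // losers)

# "3" dominates "2": whatever the three opponents do, switching your
# own choice from "2" to "3" always increases your gain.
for k in range(4):                    # k = number of opponents choosing "3"
    assert payoff("3", k + 1) > payoff("2", k)
```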

The various outcomes with their payouts are summarized in the figure below.


Each of the players is tasked to maximize their individual gains. How would you play? Do you choose “2” or do you choose “3”?

If you consider the facts and inspect the payoffs in the figure above, the decision is easy. Regardless of which combination of choices is made by the other three players, the choice “3” always leads to a higher gain than the choice “2”. For instance, if two opponents choose “3” and one opponent chooses “2”, you choosing “2” will lead to a gain of $1, while you choosing “3” will lead to a gain of $2. You can check for yourself that choosing “3” beats choosing “2” in every case.

Considering these facts, you choose “3” without the slightest reservation. And you expect all opponents to arrive at the same choice. That would lead to each of you leaving the game empty-handed, an outcome to be expected for a symmetric zero-sum game. If everyone plays optimally no one will win.

Now consider what happens when, amongst your three opponents, two players ignore the above facts and make the ill-informed choice “2”, while the third opponent chooses “3”. You stick to the well-informed choice “3”. The result is that the two fact-ignorers each walk away with a gain of $1, at the expense of you and the other player who made the rational choice. And the thing is: you don’t regret your choice. Had you made the choice “2”, then given the choices of the others your loss would have been $2 instead of $1. Also in hindsight you have made the optimal play. Each of the players who chose “2”, on the other hand, has to live with the fact that they could have won $2 instead of $1.

Of course they don’t care. They won, you lost.

Let’s modify the game somewhat in order to make this effect (fact-ignorers defeating rational players) more pronounced. You are in the same game, but now the choices are not made simultaneously but in turn. The first player chooses “2”, and the second player also chooses “2”. You are in third position and it is your turn. What do you do?

The key observation to make here is that the game hasn’t really changed. The fact that decisions are made in turn is basically a distraction. The bare facts (the rules for the payouts) still dictate the choice “3” for all four participating players. This is easy to see for the fourth player, and it then also follows for the third player, and so on.

The first player ignored the above facts when choosing “2”, and so did the second player by also opting for “2”. Now it is your turn. How do you extract maximum benefit from the errors made by the first two players? By now you know the answer: you have to select “3”, and so should the fourth player. But let’s spell out the detailed reasoning. If you choose “2”, the last player will opt for “3” and pocket a gain of $6. You lose $2. If instead you choose “3”, the last player will also select “3” to limit his loss to $1. As a result you also lose $1.

So you and the fourth player, who both carefully consider the relevant facts (including the fact that the first two players chose “2”), will both choose “3”. As a result you both lose money to the two players who ignored all facts and played sub-optimally. You have found yourself in a zero-sum situation where you can’t benefit from the errors that others have made.

Let’s analyze the situation from the start. By selecting “2” the first player makes a blunder. If all subsequent players play optimally by selecting “3”, the first player walks away with a loss of $6, and all other players gain $2. Now, by selecting “2”, the second player also makes a blunder. Instead of walking away with a gain of $2, he or she will leave the game with a gain of at most $1. Interestingly, while this blunder comes with a self-harm of a $1 drop in gains, the harm to each subsequent player is thrice as large. As a result the first player receives a gift in the form of a gain increase of $7. Such a multiplayer gain transfer is known as Morton’s theorem, named after Andy Morton, who first described this phenomenon in the game of poker.
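
These numbers are easy to verify with the payoff function from the earlier sketch (repeated here so the snippet stands alone):

```python
def payoff(choice, k3):               # k3 = number of "3" choices among all four
    if k3 in (0, 4):
        return 0
    if k3 == 2:
        return 1 if choice == "2" else -1
    return 6 // k3 if choice == "3" else -(6 // (4 - k3))

print([payoff(c, 3) for c in ("2", "3", "3", "3")])   # [-6, 2, 2, 2]
print([payoff(c, 2) for c in ("2", "2", "3", "3")])   # [1, 1, -1, -1]
# Player 2's blunder: own gain drops $2 -> $1 (self-harm of $1).
# Players 3 and 4: gains drop $2 -> -$1 (harm of $3 each, thrice the self-harm).
# Player 1: gain jumps -$6 -> +$1 (a gift of $7).
```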

The conclusion of all of this? Decision situations in which multiple parties interact and compete are tricky business. You can’t assume that parties who consistently disrespect the facts will necessarily accumulate a self-harm that puts them at a disadvantage. Even in simple and clear zero-sum encounters, parties ignoring the facts can thrive over those who carefully weigh all the facts. Ignorance does inflict self-harm, but the collateral harm to rational competitors can far exceed that self-harm. The effect becomes most pronounced when the fraction of fact-ignorers reaches a critical mass.

George Carlin’s warning “never underestimate the power of stupid people in large groups” holds true, certainly so in this day and age.

Why SpaceX won’t turn us into a multi-planetary species 

Fighting the laws of physics yields at best logarithmic progress, and rocket propulsion technology is no exception

Anyone announcing the successful sale of tourist trips around the moon would attract ridicule and laughter. Unless your name is Elon Musk. In that case the announcement amounts to nothing more than a logical and rather modest step towards Musk’s promise of getting a million people to live on Mars.

One might question why humanity would be interested in colonizing an inhospitable planet. Sure, rising CO2 levels in Earth’s atmosphere do pose a challenge, and this issue might make a few of us long for a second planet. But when given a choice between a planet with a CO2 level a notch above 0.04% and a planet with an atmosphere consisting of 96% CO2, the choice seems pretty clear to me. Risks other than rising CO2 levels are no different: trading Earth for Mars means going from bad to worse.

But let’s set aside the question of the need to migrate humans to Mars. I want to focus instead on the plausibility that Elon’s company, SpaceX, can indeed pull off its promise of human emigration to Mars. I am going to cut Elon some slack: I will not insist on the aggressive timeline he put forward (Mars colonization starting in 2024). So the question is: can the rocket technology utilized by SpaceX ultimately be expected to deliver Mars colonization?


Have a look at the diagram above. It shows historical records for the distance humans have moved away from Earth’s surface. This distance, the altitude, is measured in Earth diameters. Note that the vertical axis covers a huge range of altitudes, with each tick mark representing a factor-1000 increase. On the right-hand side the altitudes corresponding to LEO (Low Earth Orbit), Moon travel, Interplanetary Travel, and Interstellar Travel are indicated.

Two technologies are shown: 1) lighter-than-air balloon technology, and 2) chemical-propulsion rocket technology. You don’t need to be a rocket scientist to spot that both technologies are characterized by a short initial period of rapid progress, followed by a long period of painstakingly slow progress. The transition between the two regimes occurs when engineers hit fundamental limitations of the technology. Past the transition, progress is still feasible, but it is logarithmically slow and typically realized by brute-force attacks.

The human altitude record for balloon technology starts in 1783, when the Montgolfier brothers launch a balloon on a tether. In it is Jean-François Pilâtre de Rozier, a chemistry and physics teacher. De Rozier stays aloft for almost four minutes at a height of 24 m, and makes it safely back to Earth. The altitude reached is modest, but the first airborne human is a fact. From that point on, record after record gets broken. In less than a year the Montgolfier brothers 10-fold, and again 10-fold, the altitudes reached by humans. In the process De Rozier, the first airborne human, also becomes the first fatality in an air crash. But this doesn’t stop progress, and altitudes of a few kilometers are reached. Then, just two years after the first airborne human, progress slows down considerably. It takes more than two centuries for the next 10-folding of altitudes to take place.

From a physics perspective it is clear that once higher altitudes get reached, balloon builders start combating thinner and thinner atmospheres. Every next step in increasing altitude requires an exponential increase in volume-to-mass ratio of the manned balloon. It is this challenge that causes progress to slow down.
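
To put a number on this (a standard barometric estimate, added here for context): air density falls off roughly exponentially with altitude, rho(h) ≈ rho_0 × e^(-h/H), with scale height H ≈ 8 km. A balloon floats only while its average density stays below the surrounding air density, so the required volume-to-mass ratio grows like e^(h/H): every additional 8 km of altitude costs another factor e ≈ 2.7.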

The human altitude record established with rockets follows a similar curve. This curve starts with Yuri Gagarin’s 1961 space flight, which reaches a record altitude of 325 km. This makes Gagarin the first human to officially leave Earth’s atmosphere and reach empty space. A few years later Gagarin, the first man in space, dies in a test flight, echoing De Rozier’s fate. But progress in reaching higher altitudes is fast, and less than a year after Gagarin’s death the first humans orbit the Moon, establishing an altitude record that exceeds Gagarin’s by three orders of magnitude.

But then progress in reaching farther from Earth stalls. Two years later, in 1970, the human altitude record gets improved, but only marginally. That record of 401,056 km still holds today, almost half a century later. If we are optimistic and assume SpaceX is successful next year (!) in improving upon this altitude record, we get the rocket altitude curve shown above.

Also here, from a physics perspective it is clear what is happening. With chemical propulsion technology the exhaust velocity for rockets is limited to 4.4 km/s. Given this limitation, reaching farther and attaining higher speeds (in rocket-scientist speak: obtaining a higher delta-v) requires an exponential increase in the fueled-to-empty mass ratio of rockets. It is this challenge that makes progress stall.
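
The formula behind this statement is Tsiolkovsky’s rocket equation: delta-v = v_e × ln(m_fueled/m_empty), or equivalently m_fueled/m_empty = e^(delta-v / v_e). With v_e capped at 4.4 km/s, every additional 4.4 km/s of delta-v multiplies the required mass ratio by another factor e ≈ 2.7: linear gains in speed demand exponential growth in fueled mass.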

Setting aside technical details, just eyeballing the rocket-technology progress curve drives home the conclusion that deep-space exploration by humans is well past the era of rapid progress. This immediately raises the question “what technical miracle is SpaceX counting on?” It is clear that what SpaceX needs is a novel propulsion technology generating exhaust velocities well above the 4.4 km/s mark. This would create a new curve in the above plot, one that flattens out at much higher altitude values. Yet the sobering news is that SpaceX’s ITS (Interplanetary Transport System) is not based on any novel propulsion technology. ITS is based on the same chemical rocket propulsion that is responsible for the absence of progress in the above plot. It is the propulsion technology that proved extremely useful in bringing humans and payload into LEO. The technology can even be stretched to bring humans to the moon and back. But counting on it to colonize Mars carries the flavor of counting on balloons to bring humans into space. Won’t happen.

Andromeda: No Escape

Gravitationally, you are more strongly bound to Andromeda than you are to Earth

If one of these days you find yourself under a dark night sky, have a look at the constellation Andromeda. With the naked eye you should just be able to spot a faint smudge in this constellation. You need sharp eyes that are well adapted to the dark. It definitely helps if you happen to carry a pair of binoculars. And the dark should be really dark. That means a spot far away from city lights. Also the moon, with its overwhelming brightness, needs to be out of sight.

Once you have spotted it, look more closely at that faint smudge. It is the farthest object you can see with the naked eye. You are looking at a galaxy comparable to, but somewhat larger than, our Milky Way galaxy. It is the enormous distance between you and Andromeda that reduces it to a faint smudge in the night sky. The light from this galaxy has been traveling an amazing 2.5 million years to reach you. In comparison, no human has ever reached a spot from which light would need to travel more than 1.3 seconds to reach Earth. The distance light travels in two-and-a-half million years is way beyond human comprehension. Yet you are more strongly bound to Andromeda than to Earth.

Huh?

You read that correctly. You are gravitationally more strongly bound to Andromeda than you are to Earth.

Andromeda: a faint smudge in the night sky

Let me make that more precise. Gravitation makes you stick to earth. And this gravitational binding to earth is pretty strong. To escape earth’s gravitational pull from your present position, you would need to jump up at a speed of about 11 km per second (7 mi/s). No small task. And that is ignoring any drag due to Earth’s atmosphere. However, to escape that faint smudge in the sky, you need to jump much more fiercely. In fact, you need to jump such that you achieve a speed of 88 km/s (55 mi/s) relative to the same smudge. And no, I am not cheating, it’s a like-for-like comparison. It is you again jumping from your same present position, and that is again ignoring atmospheric drag.
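
A rough cross-check in Python. The escape velocity from a point mass M at distance r is sqrt(2GM/r); treating Andromeda as a point mass of about 7 × 10^11 solar masses (mass estimates for Andromeda vary widely, so that figure is an assumption picked for this estimate) reproduces the numbers quoted above:

```python
import math

G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30      # solar mass, kg
LY    = 9.461e15      # light year, m

def v_escape(mass_kg, r_m):
    """Escape velocity (m/s) from a point mass at distance r."""
    return math.sqrt(2 * G * mass_kg / r_m)

print(v_escape(5.972e24, 6.371e6))         # from Earth's surface: ~11.2 km/s
print(v_escape(7e11 * M_SUN, 2.5e6 * LY))  # ~8.9e4 m/s, in line with ~88 km/s above
```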

Few people realize the amazing reach of gravity. Gravity adds up. Andromeda, with its trillion stars, is incomparably heavier than Earth, and with that mass comes an overwhelming gravitational attraction that easily compensates for the enormous distance. The fact that you are gravitationally bound to Andromeda makes everything around you – Earth, the solar system and the whole Milky Way – bound to Andromeda. It should therefore not surprise you that the Milky Way is on a head-on collision course with Andromeda. Both galaxies are falling into each other. Don’t be worried: this is a long fall, and you and I won’t witness its final stage, and neither will your children, your grand-children, your grand-grand-children, … , and so on, up to and including your grand-to-the-power-100,000,000-children. And when the galaxy merger finally takes place, it will perhaps be a most welcome event, as around that time we – if indeed we still exist – will need some forceful intervention to pull us away from the sun, which soon thereafter will blow up and turn into a red giant.

Rational Suckers

Braess’ paradox, a multiplayer Prisoner’s Dilemma, leading to avoidable suffering

Why do people skip queues, cause traffic jams, and create delays for everyone? Who are these misbehaving creatures lacking basic cooperation skills? Are they really all that different from you? Are you perhaps one of them?

Various situations involving social interaction drag you into a negative-sum game and make you part of a misbehaving gang. Welcome to Braess’ paradox.

[Figure: road network from A to B. Bridges AC and DB take N/100 minutes when N cars cross; roads AD and CB take a fixed 10 minutes each; the shortcut CD takes 3 minutes.]

Each morning at rush hour a total of 600 commuters drive their cars from point A to point B. All drivers are rational individuals eager to minimize their own travel time. The road sections AD, CB and CD are so capacious that the travel time on them is independent of the number of cars. The sections AD and CB always take 10 minutes, and the short stretch CD takes no more than 3 minutes. The bridges, however, are real bottlenecks, and the time taken to traverse AC or DB grows in proportion to the number of cars taking that route. If N is the number of cars passing a bridge at rush hour, the time to cross the section with that bridge is N/100 minutes.

Given all these figures, each morning each individual driver decides which route to take from A to B. Despite the freedom of choice for each commuter, and despite all traffic-flow information being available to each and every commuter, the outcome of all individual deliberations creates a repetitive treadmill. Each morning all 600 commuters crowd the route ACDB and patiently wait for the traffic jams at both bridges to resolve. The net result is a total travel time of 600/100 + 3 + 600/100 = 15 minutes for each of them.

Does this make sense?

At this stage you may want to pause and consider the route options. If you would be one of the 600 commuters, would you join the 599 others in following route ACDB?

Of course you would. There is no faster route. Alternative routes like ACB or ADB would take you 600/100 + 10 = 16 minutes, a full minute longer than the preferred route ACDB. So each morning you and 599 other commuters travel along route ACDB and patiently queue up at both bridges.
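
The bookkeeping is simple enough to sketch in a few lines of Python (route labels as above; the function name is mine):

```python
def travel_times(n_acdb, n_acb, n_adb):
    """Minutes per route, given how many cars take each route."""
    n_ac = n_acdb + n_acb          # cars crossing bridge AC
    n_db = n_acdb + n_adb          # cars crossing bridge DB
    return {
        "ACDB": n_ac / 100 + 3 + n_db / 100,
        "ACB":  n_ac / 100 + 10,
        "ADB":  10 + n_db / 100,
    }

print(travel_times(600, 0, 0))     # everyone on ACDB: 15 min (alternatives: 16 min)
print(travel_times(0, 300, 300))   # CD closed, 50:50 split: 13 min on ACB and ADB
```

Note that in the second printout the (closed) route ACDB would clock in at about 9 minutes; that temptation returns below, once the shortcut reopens.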

One day it is announced that the next morning the road stretch CD will be closed for maintenance work. This announcement is the talk of the day. Everyone agrees that the planned closure will create havoc. Were section AD or CB to be closed, it would have no impact, as these are idle roads. But section CD is used by each and every commuter. What poorly planned maintenance; a closure of such a busy section should never be scheduled for rush hour!

The next morning all 600 commuters enter their cars expecting the worst. Each of them selects between the equivalent routes ACB and ADB. The result is that the 600 cars split roughly 50:50 over both routes, and that both bridges carry some 300 cars. Much to everyone’s surprise all cars reach point B in no more than 300/100 + 10 = 13 minutes. Two minutes faster than the route ACDB preferred by all drivers.

How can this be? If a group of rational individuals each optimize their own results, how can they all be better off when their individual choices are being restricted? How can it be that people knowingly make choices that can be predicted to lead to outcomes that are bad for everyone?

Asking these questions is admitting to the wishful thinking that competitive optimization should lead to an optimal outcome. Such is not the case: when multiple individuals compete for an optimal outcome, the overall result is an equilibrium, not an optimum. We saw this in the game Prisoner’s Dilemma, and we see it here in a situation referred to as Braess’ paradox.

A question to test your understanding of the situation: what do you think will happen the next day when section CD is open again? Would all players avoid the section CD and stick to the 50:50 split over routes ACB and ADB, a choice better for all of them?

If all others did that, it would be great news for you. It would give you the opportunity to follow route ACDB and arrive at B in a record time of about 9 minutes (300/100 + 3 + 301/100 minutes, to be precise). But of course all other commuters will reason the same way. So you will find yourself with 599 others, again spending 15 minutes on the route ACDB. And even with the benefit of hindsight, none of you will regret the choice you made: any other route would have taken you longer. Yet all of you surely hope for that damn shortcut between C and D to get closed again.

And don’t assume this phenomenon doesn’t occur in real life.

Triple or Bust Paradox (part 2)

Beware of expectation values based on vanishing probabilities

A week ago I discussed the coin-toss game ‘triple or bust‘. The game is between Alice and Bob. Alice starts the game by writing a $1.00 IOU to Bob. Alice then makes at least six subsequent tosses with a fair coin. On each ‘heads’ Alice triples the IOU amount. On ‘tails’ she sets the IOU to zero.

The question is: how much should Bob be prepared to pay Alice to participate in this game?

As Bob can repeat this game as often as he likes, he focuses on the gains to be obtained in the long run. These are given by the expectation value for this game, which is easy to calculate. The game starts with an IOU value of $1.00. On each coin toss the average IOU increases to 3/2 times the amount before the toss. That means that after the nth coin toss, the expectation value for the IOU is (3/2)^n. Alice can prevent this value from growing out of control by stopping at n = 6 (completing six tosses). This sets the expectation value for the game to $11.39.
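
The per-toss factor is easy to verify: a toss turns an IOU of x dollars into 3x with probability 1/2 and into 0 with probability 1/2, for an average of (1/2) × 3x + (1/2) × 0 = (3/2)x. After six tosses the expectation is therefore (3/2)^6 = 729/64 ≈ $11.39.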

However, one might also reason as follows: in each game Alice can continue tossing until tails shows. This voids the IOU, and Bob walks away empty-handed. The game is worthless to Bob.

How do we reconcile these two lines of reasoning?

The key issue is: what do we mean by the phrase “in the long run”? How many repeat games are required to achieve a gain per game that is close to the expectation value? The chance for Bob to win a game of n tosses is 1 in 2^n. To approach the expectation value, the number of repeats of the game, N, needs to be large enough to yield a number of wins much larger than one. This means N >> 2^n.

Let’s tell Alice to make exactly n tosses per game, and see how Bob fares for a given number of repeats of the game. Suppose Bob has enough time to spare to reach N = 10000. For this value of N, at around n = 13.3 we have 2^n equal in magnitude to N. So for n, the number of tosses per game, significantly smaller than 13 (N >> 2^n), Bob can expect to walk away with close to 1.5^n dollars per game. For n close to 13 or 14, it is completely uncertain whether Bob will win a single game. Effectively, after 10000 repeat games, Bob’s total return takes the shape of a lottery ticket. For n much larger than 14 (N << 2^n), Bob has vanishing odds of reaching a win.
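
A minimal Monte Carlo sketch of this argument (the simulation code is mine; parameters as in the text):

```python
import random

def play(n):
    """One game of exactly n tosses: a $1 IOU, tripled on heads, voided on tails."""
    iou = 1.0
    for _ in range(n):
        if random.random() < 0.5:
            return 0.0     # tails: the IOU is voided for good
        iou *= 3           # heads: the IOU triples
    return iou

N = 10000                  # number of repeat games
for n in (5, 10, 14, 20):
    mean = sum(play(n) for _ in range(N)) / N
    print(n, mean / 1.5**n)   # ~1 for n well below 13, noisy near 14, ~0 beyond
```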


The above plot shows the earnings for Bob obtained in three independent runs of 10000 games each. The horizontal axis shows n, the number of coin tosses per game. The vertical axis shows Bob’s earnings per game divided by the expectation value 1.5^n. In line with the reasoning above, Bob’s earnings follow the expectation value for small n. Around n = 14 the fluctuations in his earnings become large, and a transition happens. Above n = 14 Bob becomes progressively unlikely to make any earnings at all. If Alice has full freedom in increasing n, Bob is guaranteed to walk away empty-handed.

In essence, the cause of the expectation value failing to represent the earnings of a participant in this game is the skewness of the payout distribution increasing without bound. Mathematically, the expected earnings for this game become ill-defined because taking the limit of N going to infinity (calculating the expectation value) followed by taking the limit of n going to infinity (allowing Alice an unlimited number of tosses) gives a different evaluation from the one based on reversing these two limits.

The same phenomenon of unbounded skewness in the payout distribution causes the expectation values of the well-known Saint Petersburg paradox to misrepresent the likely earnings.

Homo Retaliens

Why the urge to retaliate?

Game theory models human decisions based on just two characteristics: rationality and selfishness. This minimalistic approach teaches us a lot about the character of economic behavior and the emergence of strategies built on cooperation, competition, retaliation, etc.

By far the most well-known game-theoretical scenario concerns the prisoner’s dilemma (PD). This game is rather boring from a game-theory perspective, yet over the years it has attracted an impressive amount of attention, particularly in the pop-science media. The reason for all the attention is that the predicted outcome for this simple game surprises most people. Game theory tells us that in PD rational players focused on optimizing their own gains will knowingly avoid the win-win situation for this game and settle for a smaller return.

How can this be?

The simple answer is that in any game rational players will end up in an equilibrium of individual choices, and not necessarily in an optimum. PD is a game designed to render manifest the difference between the equilibrium outcome and the optimal outcome.

An example PD scenario runs as follows: you and your co-player are facing a task. Both of you have the choice to contribute towards accomplishing the task or to free ride. You both have to decide simultaneously and without any opportunity for communication. If both of you decide to free ride, nothing gets accomplished and you both walk out with no gain. Each player putting in an effort increases the total gain by 4 units. This gain will be split equally over both participants. Putting in an effort comes at an individual cost of 3 units.


It should be clear that this game forces a rational, selfish individual to free ride. Regardless of what choice the other makes, avoiding any effort from your side means avoiding an investment of 3 units that would deliver you only 2 units of gain. As this applies to both players, the outcome will be both players walking away empty-handed, an inferior outcome compared to the 1 unit of gain each could have received by contributing to accomplish the goal.
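
In code, using the numbers from the example above (a minimal sketch; the function name is mine):

```python
def pd_payoff(me, other):
    """My net gain; me/other are 1 for 'contribute', 0 for 'free ride'."""
    my_share = 2 * (me + other)   # each effort adds 4 units, split equally
    return my_share - 3 * me      # contributing costs me 3 units

# Free riding dominates: it pays exactly 1 unit more, whatever the other does.
for other in (0, 1):
    assert pd_payoff(0, other) == pd_payoff(1, other) + 1

print(pd_payoff(1, 1), pd_payoff(0, 0))   # win-win yields 1 each; equilibrium yields 0
```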

Although there is no paradox in this result, many people remain skeptical towards PD’s sub-optimal outcome. Some people reason they would play the game differently. They motivate these alternative strategic choices by introducing emotional rewards (“I prefer the other being happy”) or punishments (“if the other free rides, fine: it won’t make him happy”). However, we should not lose sight of the fact that the payoffs in PD are assumed to include all consequences – monetary or otherwise – of the strategic choices made. In other words, one should consider the payoffs to quantify the changes in happiness attributable to the various game outcomes.

However, considering non-monetary gains or losses does translate into a challenge to PD. One might ask: “is a PD-type game even feasible when incorporating real-life non-monetary consequences?”

I am quite convinced that in human social settings PD-type encounters are the exception rather than the norm. This is because the severity of the consequences resulting from social regulation (retaliation and punishment) is not bounded. Where such drivers are present, PD games can be rendered unfeasible. For instance, in a society with strong retaliation morals a PD payoff structure is not achievable.

To see why, let’s consider again the above PD example. We change the strategic choice ‘contribute’ into ‘contribute and retaliate’. Under this strategic choice a player contributes, but if the other turns out to free ride, a fight is started that will cost both parties and that will continue until the free rider’s net gain has dropped to what he would have earned if both had opted for a free ride. Such a change in payoff structure changes the game from PD (prisoner’s dilemma) into the coordination game SH (stag hunt). This change is irreversible: in the presence of retaliative behavior, it is not possible to recover a PD payoff structure no matter how one chooses to tune the monetary payoffs.

Compared to PD, the SH game still carries the same win-win outcome (mutual contribution) with the same return per player, and also still carries the same lose-lose outcome (mutual free riding) with the same zero gain per player. However, in contrast to the PD situation, in SH the win-win outcome does form a rational choice (a Nash equilibrium). This is because, given the choice of the other player, in the SH win-win neither player regrets their own choice. Effectively, under the win-win scenario the cost that retaliation would incur eliminates any regrets after the fact.
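
A sketch of the modified payoff structure. The rules above pin the free rider’s gain at 0 but leave the retaliator’s own fight cost unspecified, so the cost C below is an assumed, illustrative parameter:

```python
C = 1   # assumed extra fight cost borne by the retaliating contributor

# (my move, other's move) -> my payoff; CR = contribute-and-retaliate, F = free ride
sh_payoff = {
    ("CR", "CR"): 1,       # mutual contribution, no fight
    ("CR", "F"): -1 - C,   # I contribute, then fight the free rider down to 0
    ("F", "CR"): 0,        # I free ride, get dragged down to the mutual-free-ride gain
    ("F", "F"): 0,         # mutual free ride
}

# Both symmetric outcomes are now Nash equilibria:
assert sh_payoff[("CR", "CR")] > sh_payoff[("F", "CR")]   # deviating from win-win hurts
assert sh_payoff[("F", "F")] > sh_payoff[("CR", "F")]     # deviating from lose-lose hurts
```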

We conclude that PD games cease to be feasible under the threat of retaliation. When the costs associated with retaliation are included, what might seem to be a PD game turns out to be an SH game. By eliminating PD games, retaliation also eliminates the inevitability of sub-optimal (mutual free ride) outcomes. The same effect is seen when retaliation options are introduced by repeating PD between players. In repeated PD games, ‘contribute and retaliate’ strategies (tit-for-tat approaches) dominate over ‘free ride’ strategies.

Retaliation is a disruptive phenomenon. Amongst selfish individuals it eliminates PD-type games and helps them avoid getting stuck in sub-optimal outcomes. One might therefore philosophize that groups with retaliative behaviors carry an evolutionary advantage over groups without such a trait. Whether this is correct or not, retaliation is certainly widespread amongst human societies.

In any case: if you feel ill at ease with the two-player PD game outcome, your intuition probably is correct. But don’t fall into the trap of translating this discomfort into futile challenges to the outcome of the game. Instead, challenge the PD scenario itself. I have yet to see a two-person decision scenario that is best described as PD (rather than as a coordination game).

Now this doesn’t carry over into multi-player games. Many situations of humans failing to cooperate towards a greater good can be modeled as multi-player extensions of PD. But that is a subject deserving its own blog post.

How to Stomach a Black Hole

The enigmatic spacetime nature of black holes

A black hole is not some cosmic vacuum cleaner. A black hole hardly even qualifies as a tangible object. The inside of any tangible object can, at least in principle, be inspected from the outside. You can scan your vacuum cleaner with X-rays and thereby reveal its internal workings. But a black hole’s ‘inner workings’ cannot be inspected from the outside. The only way to peer inside a black hole is by ‘being there’. Just like you can peer into the future only by ‘being there’.

So let’s stop thinking about black holes as tangible objects. It is more insightful to think of a black hole as a localized future that has separated from, and lost contact with, the future developing outside the black hole. The defining characteristic of black holes is that anyone or anything inside a black hole cannot influence the universe outside it. No signal (no light, no X-ray, no gravitational wave, nothing that carries information) can be transmitted from inside the black hole to observers outside. And obviously, this also implies that no object inside can leave the black hole.

The scenario usually considered for ‘finding yourself inside a black hole’ is that of a static black hole: a black hole in eternal existence. The only way to find yourself inside such a black hole is by falling into it. Yet there is an alternative scenario leading to the situation of ‘being inside a black hole’: the scenario of a black hole growing from inside you. This alternative scenario gives better insight into the global spacetime nature of black holes.

We are going to perform a gedanken experiment. This will tell you how to grow a black hole starting from inside your belly and quickly expanding far beyond our solar system. And… the ‘you’ in this gedanken experiment will survive all of this. Heck, you won’t even notice you’re inside a black hole.

You need some astronauts to build a cosmic wall for you. A huge spherical shell of bricks centered around you. Regular red 4 lb bricks will do fine. But you will need loads of them. A tredecillion bricks, to be precise. That’s going to give you a thin spherical wall as heavy as our whole galaxy, positioned far beyond the orbits of the planets in our solar system. In preparation you start reading up on Schwarzschild radii, and conclude that in order to prevent the shell of bricks from forming a black hole, the spherical wall should be large enough for a pulse of light starting at the center to travel more than 100 days before reaching the wall.

So you instruct an army of astronauts to build a spherical brick wall with a radius somewhat larger than 100 light days. For the given total mass of bricks, this radius has the particular advantage that the astronaut bricklayers building the wall experience a gravitational acceleration due to the wall comparable to the gravitational acceleration at Earth’s surface.
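
The numbers can be sanity-checked in a few lines, taking a tredecillion as 10^42 (short scale) and picking 135 light days as an illustrative reading of “somewhat larger than 100 light days”:

```python
G, c = 6.674e-11, 2.998e8       # SI units
LIGHT_DAY = c * 86400           # ~2.59e13 m

M = 1e42 * 4 * 0.4536           # 1e42 bricks of 4 lb each: ~1.8e42 kg
r_s = 2 * G * M / c**2          # Schwarzschild radius of the brick pile
print(r_s / LIGHT_DAY)          # ~104 light days

r = 135 * LIGHT_DAY             # a wall radius somewhat larger than 100 light days
print(G * M / r**2)             # ~9.9 m/s^2, close to 1 g at the wall
```
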
Once the sphere is complete, Earth is shielded from any starlight not coming from the sun. Yet nothing changes for us in terms of gravitational phenomena. Earth continues its orbit around the sun, and people continue to walk on Earth, fly in airplanes, and launch rockets. All of this is the case thanks to Newton’s shell theorem (or, if you prefer, thanks to its general-relativistic cousin, Birkhoff’s theorem). In simple terms, this theorem states that a uniform spherical shell exerts no gravitational force on objects (Newtonian view) or doesn’t bend spacetime (Einsteinian view) anywhere inside, while on objects anywhere outside it exerts a gravitational force (bends spacetime) equal to the force (bending) that would result if the full mass of the shell had been concentrated at its center.

As long as your spherical cosmic wall retains a radius larger than 100 light days, no black hole will ever form and all is safe. But guess what. Under its own gravity the sphere of bricks starts shrinking. Slowly at first, but then ever faster. As soon as the sphere shrinks to a radius of less than 100 light days…

… nothing happens. People continue walking on Earth, and the planets continue their orbits around the sun. Nothing whatsoever changes in the gravitational phenomena inside the cosmic wall. The cosmic wall is getting smaller, but it stays spherical in shape, and the shell theorem still applies.

So the daily life of people continues. But something did change. In fact, something changed 100 days earlier. Something more abstract: Earth’s future changed. For the past 100 days, a narrow destiny has awaited mankind.

Spacetime diagram of event horizon (pink) formation due to infalling shell of matter (blue). Time runs upward, space is reduced to two dimensions. Two outgoing laser flashes (green) are shown, one before the horizon forms, and one after.

One hundred days before the cosmic wall passed the critical radius of 100 light days, an event horizon got initialized: a boundary between an inside and an outside, with the outside causally disconnected from the inside. This horizon started small somewhere. It could have started in your belly or in mine. Either way, none of us would have noticed. The horizon expands at the speed of light, within a fraction of a second encompassing all human beings, and reaching the shrinking spherical wall at exactly the moment its radius passes the threshold of 100 light days. Once the spherical wall has fallen through the horizon, the horizon stays constant in size, measuring 100 light days in radius.

This horizon seems a boundary entirely abstract in nature. Yet it is a boundary that acts as a watershed in spacetime, delineating markedly different futures, different destinies, on either side. Your destiny becomes apparent much later, well after the spherical wall passes through the critical size of 100 light days. In fact, the earliest you would be able to know for sure that you are heading towards a grim future disconnected from the future outside is 100 days after the spherical wall passes through the critical radius of 100 light days. Even then you would still experience the same gravitational phenomena, and Earth would still orbit the sun as if nothing had changed.

But what, then, is the destiny awaiting you? Would you be torn apart? Would spaghettification happen at some point?

No, none of this is going to happen. No matter how closely the wall of bricks approaches you, the shell theorem keeps you safe from any deadly gravitational effects. So what disaster will happen? By now that should be clear. The shell theorem doesn’t protect you from being hit by a brick. And at some moment a true giga-tsunami of bricks will crush you out of existence.

Does the above puzzle you? Does it give you an uneasy feeling of retro-causality creeping in? Isn’t it strange that a horizon would start to grow 100 days before the wall of bricks passes through the critical radius? What if, seconds before the critical radius is reached, small rockets attached to the bricks were to fire, reversing the collapse of the wall and preventing the critical radius from being reached? Would the horizon that started growing 100 days earlier somehow be eradicated?

Such reasoning is based on the wrong intuition of a horizon being some tangible boundary. Black hole horizons are not tangible objects. They represent boundaries between distinct futures. If, at the last minute, the brick-wall collapse gets reversed, we have to conclude that no horizon ever started to grow 100 days earlier. And no, this does not introduce any retro-causality, as all that changes lies in the future.
Twitter: @HammockPhysics