Rationalizing the Traveler’s Dilemma

Another recent article in Scientific American is about the Traveler’s Dilemma problem in game theory.

For those too lazy to click on any of those links, here’s the basic problem (taken from the Wikipedia article):

An airline loses two suitcases belonging to two different travelers. Both suitcases happen to be identical and contain identical antiques. An airline manager tasked to settle the claims of both travelers explains that the airline is liable for a maximum of $100 per suitcase, and in order to determine an honest appraised value of the antiques the manager separates both travelers so they can’t confer, and asks them to write down the amount of their value at no less than $2 and no larger than $100. He also tells them that if both write down the same number, he will treat that number as the true dollar value of both suitcases and reimburse both travelers that amount. However, if one writes down a smaller number than the other, this smaller number will be taken as the true dollar value, and both travelers will receive that amount along with a bonus/malus: $2 extra will be paid to the traveler who wrote down the lower value and a $2 deduction will be taken from the person who wrote down the higher amount. The challenge is: what strategy should both travelers follow to decide the value they should write down?
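If it helps to see the rules in code, here’s a quick sketch in Python of what each traveler walks away with. The payoff function and its name are mine, not part of the problem statement:

```python
# A quick encoding of the reimbursement rules, just to make them concrete.
def payoff(mine, theirs, bonus=2):
    """What I receive when I write down `mine` and the other traveler writes `theirs`."""
    if mine == theirs:
        return mine                # matching bids are paid at face value
    low = min(mine, theirs)        # the lower bid is treated as the true value
    return low + bonus if mine < theirs else low - bonus

print(payoff(100, 100))  # 100 -- both wrote the same number
print(payoff(48, 60))    # 50  -- the low bidder gets $48 + $2
print(payoff(60, 48))    # 46  -- the high bidder gets $48 - $2
```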

Traveler’s Dilemma can be seen as a generalization of the infamous Prisoner’s Dilemma problem (and in fact reduces to it if you replace $100 with $3 in the problem statement), and it suffers from the same perverse property: rational players make the choices that result in the worst outcomes for them.

Why’s that? Suppose you and the other player both pick $100, which results in both of you getting $100. However, if you pick $99 instead, you get $99 + $2 = $101, which is better, so you’d choose $99 instead of $100. (In fact, choosing $99 is always at least as good as, and sometimes better than, choosing $100.) But since the other player is also rational, he’ll also choose $99 instead of $100, and as a result both of you would get $99. Similarly, if you reduce your choice to $98, you would wind up with $98 + $2 = $100, which is better than $99. Of course, the other player also knows this, and will choose $98 instead of $99 as well. According to this analysis, the race-to-the-bottom continues until both you and your opponent choose $2, resulting in each of you only getting $2.
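You can check this race to the bottom mechanically. Here’s a rough sketch (re-encoding the payoff rule from above) that repeatedly throws away any bid that is weakly dominated by bidding $1 less; only $2 survives:

```python
# A sketch of the race-to-the-bottom argument: repeatedly discard any bid that
# is weakly dominated by bidding $1 less, and see which bids survive.
def payoff(mine, theirs, bonus=2):
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

bids = list(range(2, 101))  # the allowed bids, $2 through $100

def dominated_by_one_less(bid, remaining):
    """True if bidding $1 less is never worse, and sometimes better, than `bid`."""
    if bid - 1 not in remaining:
        return False
    never_worse = all(payoff(bid - 1, o) >= payoff(bid, o) for o in remaining)
    sometimes_better = any(payoff(bid - 1, o) > payoff(bid, o) for o in remaining)
    return never_worse and sometimes_better

while True:
    losers = [b for b in bids if dominated_by_one_less(b, bids)]
    if not losers:
        break
    bids = [b for b in bids if b not in losers]

print(bids)  # [2] -- only the $2 bid survives the elimination
```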

Another way to look at it is, whatever the other player picks, your best strategy is to underbid him by $1 (or to match him if he’s already at the $2 floor). Since he’s also playing rationally, he’ll try to do the same to you. As a result, you both pick the lowest choice possible, because any other choice fares strictly worse against that same rational strategy: both players picking $2 is the Nash equilibrium.
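Here’s a quick sketch of that framing (the helper names are mine): compute the best response to each possible bid, and check which bid is a best response to itself.

```python
# A sketch of the best-response view: against each bid the other traveler might
# make, which of my bids pays the most, and which bid is a best response to itself?
def payoff(mine, theirs, bonus=2):
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

bids = range(2, 101)

def best_responses(theirs):
    best = max(payoff(mine, theirs) for mine in bids)
    return [mine for mine in bids if payoff(mine, theirs) == best]

print(best_responses(100))  # [99] -- underbid by $1
print(best_responses(50))   # [49] -- underbid by $1
print(best_responses(2))    # [2]  -- can't go lower, so match

# The only bid that is a best response to itself -- the Nash equilibrium:
print([b for b in bids if b in best_responses(b)])  # [2]
```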

However, this analysis is empirically wrong. When you run the Traveler’s Dilemma with actual people, most don’t play the Nash equilibrium as we would expect from the above analysis. Instead, most of them choose a high dollar amount, typically somewhere between $90 and $100. According to the analysis, this is an irrational choice, but since most players play this way, the result is that they have a far better outcome than if they had picked $2, so the irrational choice is better.

So, is playing irrationally the truly rational way to play? Or is there something wrong with our original analysis?

My hypothesis to explain what’s going on holds that the players are in fact playing rationally; it’s just that our game theoretic model isn’t properly considering the valuations the players are actually using.

I believe the game-theoretic model for Traveler’s Dilemma makes a critical error in the assumptions it makes about the players: it assumes they are completely risk-averse, unwilling to risk choosing anything other than the option with the highest guaranteed payoff. Note that choosing $2 is the only choice that guarantees the player will make at least $2 in the end; all other choices risk making $0, depending on how the other player chooses.
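A quick sketch makes the guaranteed-payoff point explicit: compute the worst case for every possible bid, and $2 is the only one with a floor above $0.

```python
# A sketch of the guaranteed-payoff claim: the worst case for each bid, over
# everything the other traveler could possibly write down.
def payoff(mine, theirs, bonus=2):
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

bids = range(2, 101)
worst_case = {b: min(payoff(b, theirs) for theirs in bids) for b in bids}

print(worst_case[2])    # 2 -- the only bid with a guaranteed floor above $0
print(worst_case[3])    # 0 -- every other bid can leave you with nothing
print(worst_case[100])  # 0
print(max(worst_case, key=worst_case.get))  # 2 -- the maximally risk-averse choice
```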

In reality, people are going to have some non-zero degree of risk tolerance; they’ll be willing to accept the risk of a non-optimal payoff for the chance of a larger one. Just how much risk a person is willing to accept depends on their own risk tolerance profile, and the dollar amounts at stake.

Here’s why I think rational real-world players tend to choose something in the $90s instead of the Nash equilibrium of $2. They understand that choosing $2 guarantees them a payoff of $2, but $2 is of little real-world value to most people. After all, what can you buy with just $2 anyway? However, the maximum possible payoff is $101, and they’d prefer a payoff closer to that than $2. But to get a payoff in the $90s, both players need to choose something in the $90s. A player doesn’t know with certainty what the other player will choose, but it’s likely the other player will see little value in $2 but a lot of value in the $90s. So, the player must choose whether or not to risk a guaranteed $2 for the chance to get $90 or so. Since $2 is almost worthless but $90 certainly isn’t, most players choose to take the risk.
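Here’s a rough sketch of that trade-off. The belief about the other player is purely an assumption I made up for illustration (90% chance he bids $95, 10% chance he bids $2), but it captures the “he probably values the $90s too” reasoning:

```python
# A sketch of the risk/reward trade-off under an assumed belief about the other
# traveler; the probabilities below are illustrative, not part of the game.
def payoff(mine, theirs, bonus=2):
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

belief = {95: 0.9, 2: 0.1}  # assumed probabilities of the other traveler's bid

def expected_payoff(mine):
    return sum(p * payoff(mine, theirs) for theirs, p in belief.items())

print(expected_payoff(2))    # 3.8  -- the "safe" bid
print(expected_payoff(94))   # 86.4 -- undercutting the likely high bid
print(expected_payoff(100))  # 83.7 -- even the maximum bid does far better than $2
print(max(range(2, 101), key=expected_payoff))  # 94 -- the risk pays off in expectation
```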

This line of thinking doesn’t provide any obvious dollar amount a player is expected to choose, but does provide an escape from the race-to-the-bottom scenario in the traditional analysis. (In fact, if individual dollars aren’t very valuable to a player, he probably isn’t too concerned with getting $93 versus $95 anyway.) Since most people are willing to take the risk, chances are the risk does pay off, and both players come out much better than they would have if they had played the Nash equilibrium.

Like all good hypotheses, there is a way to test mine against the traditional model (and against the null hypothesis of “people are dumb but like picking big numbers”). My hypothesis depends on the absolute dollar amounts involved in the game: it assumes that $2 is of little value to a player but $90+ is. The traditional model, however, does not. I predict that if the game were changed so that the allowed range ran from $2,000,000 to $100,000,000 and the bonus/malus was increased to $2,000,000 (in other words, multiply all dollar values by 1,000,000), we would see most players play the Nash equilibrium instead of taking the risk.

Why? Because to most people, $2,000,000 is a lot of money, and risking losing that for the chance to gain $100,000,000 is a lot harder to swallow. Plus, when the decreasing marginal utility of money is considered, that $100,000,000 isn’t actually worth 50 times more than $2,000,000, whereas $100 would be seen as worth 50 times (or possibly more!) as much as $2. On top of that, it’s hard to wrap one’s mind around just how much $100,000,000 is; I could find a way to spend $2,000,000 without too much difficulty, but $100,000,000? I’d have to really work to spend half of that before I die.
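Here’s a minimal sketch of that diminishing-marginal-utility point, assuming a logarithmic utility of money (one common modeling choice, not the only one): the top payoff looks relatively less attractive as the stakes scale up.

```python
# A minimal sketch of diminishing marginal utility, assuming log utility of money.
import math

def utility(dollars):
    return math.log(dollars + 1)  # +1 keeps utility(0) finite

# In the original game the top payoff is worth several times the bottom one...
print(utility(100) / utility(2))                  # ~4.2
# ...but in the x 1,000,000 version the ratio nearly vanishes.
print(utility(100_000_000) / utility(2_000_000))  # ~1.27
```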

So in Traveler’s Dilemma × 10⁶, a player is expected to be a lot more risk averse than before. And since both players must take the risk to see the payoff, and few, if any, players are actually going to take it, the risk is much less likely to actually pay off. As a result, most players will stick with the Nash equilibrium of $2,000,000 as the rational choice.

(By the way, if anyone wants to run this experiment with real money, I volunteer.)

So, Traveler’s Dilemma doesn’t show that people don’t act rationally, just that the game-theoretic model doesn’t properly consider the players’ risk tolerance. Sure, people act irrationally all the time, but this situation isn’t one of them.