(Originally posted by Pat on 3/20/10)
Would you pay $1 for a 1 in 1 million chance of $100,000?
I think I would. I think most people would.
Yet if John Rawls is right that we should be maximally risk-averse (at least in matters of distributive justice), then you should never gamble anything ever. And conversely, if the utilitarians are right that we should maximize our expected utility, we should only play when the odds are good.

Now, if you think the utilitarians are right (they do seem more plausible in this case), consider now this alternative setup.
You are given a choice between two rooms. In the left room you will roll a die, and if you roll an even number, you will receive $10 million. If you roll an odd number you will receive nothing. In the right room you will not have to roll a die; you will automatically receive $4 million.
I would choose the right room, as I think most people would.
But the expected utility of the right room is only $4 million, while the expected utility of the left room is $5 million.
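The arithmetic behind those two numbers is easy to check. A few lines of Python (my own illustration, not part of the original post):

```python
# Expected value of each room in the die-roll thought experiment.
# Left room: roll a die; even -> $10M, odd -> $0, each with probability 1/2.
# Right room: a guaranteed $4M.

p_even = 3 / 6  # three even faces out of six
left_ev = p_even * 10_000_000 + (1 - p_even) * 0
right_ev = 4_000_000

print(left_ev)   # 5000000.0
print(right_ev)  # 4000000
```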
So perhaps we should maximize the minimum after all? But then, what if you are given the choice of these two rooms?
In the left room you will roll a die, and if you roll an even number you will receive $10 million. If you roll an odd number you will receive nothing. In the right room you will not have to roll a die; you will automatically receive $100.
I would now choose the left room, despite the risk of getting nothing, for the 50% chance of $10 million is too much to pass up. Yet on the Rawlsian view I should choose the right room, because a guaranteed $100 is better than risking getting nothing.
Intriguingly, the proverb "a bird in the hand is worth two in the bush", taken literally, would in fact provide a more plausible account than either. It suggests we should weight risk such that a guaranteed 1 unit is equivalent to a 50% chance not of 2 units (that would just be utilitarianism again), but of 2 × 2 = 4 units. Under this model, the following should be a completely ambivalent choice:
In the left room you will roll a die, and if you roll an even number you will receive $8 million. If you roll an odd number you will receive nothing. In the right room, you will receive a guaranteed $2 million.
I do, in fact, feel pretty ambivalent about that choice! Drop the $8 million to $4 million and I would choose the right room, though utilitarians say I should be completely ambivalent. Conversely, raise the $8 million to $20 million and I think I would go for the left instead. Raise it to $1 billion and I definitely would. $2 million would make my family's life a lot easier, but with $1 billion I could shift the course of the world.
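The proverb's rule can be stated as a certainty equivalent: a 50% chance of X is worth a guaranteed X/4. Here is a sketch of that model (my own formalization of the post's idea, with the risk factor as a parameter):

```python
def certainty_equivalent(amount, p=0.5, risk_factor=2):
    """Bird-in-the-hand rule: a p-chance of `amount` is worth
    amount * p / risk_factor guaranteed. With p = 0.5 and
    risk_factor = 2, that comes to amount / 4 -- one in the
    hand is worth two in the bush."""
    return amount * p / risk_factor

# The supposedly ambivalent choice: 50% of $8M vs. a guaranteed $2M.
print(certainty_equivalent(8_000_000))  # 2000000.0 -- exactly the $2M
# A utilitarian (risk_factor = 1) would value 50% of $8M at $4M instead.
print(certainty_equivalent(8_000_000, risk_factor=1))  # 4000000.0
```

The `risk_factor` parameter makes the later worry explicit: nothing in the model forces it to be exactly 2 rather than 1.5 or 4.9.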
Does this mean that a silly old proverb just beat the two leading formal models of economic rationality? Yes, I think it does!
On the other hand, I can't think of any really compelling reason to make the risk factor exactly 2, and not, say, 1.5 or 2.8 or perhaps even 4.9.
If you're allowed to do the task multiple times, the utilitarian answer is clearly right. But if you can only do it once, I think risk is relevant. How relevant, I am not sure.
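The repeated-play point can be made exact: over n independent plays of the 50/50 $10 million gamble, you fall short of the guaranteed total of n × $4 million only if you win fewer than 0.4n times, and that probability shrinks fast as n grows. A quick exact calculation (my own illustration):

```python
from math import comb

def prob_gamble_loses(n):
    """Probability that n plays of the 50/50 $10M gamble total less
    than the guaranteed n * $4M, i.e. fewer than 0.4 * n wins out of
    n fair coin flips (exact binomial sum)."""
    threshold = 0.4 * n
    return sum(comb(n, k) for k in range(n + 1) if k < threshold) / 2 ** n

print(prob_gamble_loses(1))    # 0.5 -- one shot is a coin flip
print(prob_gamble_loses(10))   # ~0.17
print(prob_gamble_loses(100))  # ~0.018
```

At one play the gamble loses half the time; at a hundred plays it almost never does, which is why repetition makes the expected-value answer compelling.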
Moreover, one must consider (as I did in reflecting on these problems) what one would do with the wealth. Between a guaranteed $1 billion and a 50% chance of $4 billion, I think I'd be happy with $1 billion, because I can still do a great deal with that. But between a guaranteed $10 and a 50% chance of $40, I think I'd take the chance. (On the other hand, all these choices leave me feeling conflicted and uncertain.)
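One standard way to formalize that pair of intuitions (not something the post itself invokes) is diminishing marginal utility over total wealth, e.g. log utility. For stakes small relative to existing wealth the curve is nearly flat, so expected value wins; for enormous stakes it bends hard toward the sure thing. A sketch, assuming a hypothetical background wealth of $50,000:

```python
from math import log

WEALTH = 50_000  # assumed existing wealth; the exact figure is illustrative

def eu_guaranteed(x):
    """Log utility of taking a sure payment x."""
    return log(WEALTH + x)

def eu_gamble(x, p=0.5):
    """Expected log utility of a p-chance of payment x."""
    return p * log(WEALTH + x) + (1 - p) * log(WEALTH)

# Small stakes: 50% of $40 beats a guaranteed $10 (nearly risk-neutral).
print(eu_gamble(40) > eu_guaranteed(10))                         # True
# Huge stakes: a guaranteed $1B beats 50% of $4B (strongly risk-averse).
print(eu_guaranteed(1_000_000_000) > eu_gamble(4_000_000_000))   # True
```

The same utility function takes the gamble at $40 and refuses it at $4 billion, matching both intuitions at once.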
One could probably conduct a survey of people's responses... but what if people aren't being rational after all? What if there is a better way of making these decisions than the one most people use?
These are not idle speculations; the model of economic rationality we use has many real-world applications. How much should you pay for a better car air bag, or for a more secure airplane ticket? How much should you spend on health insurance? When is it rational to gamble? Even if we accept Rawls' veil of ignorance as the basis of distributive justice, how do we make decisions based on this?