People do not seek simply to maximize their earnings. They seek to maximize their utility, and the two need not be the same. Someone who is risk neutral would never purchase fire insurance, because premiums are priced above the expected loss: the steady drain of a small premium each month would outweigh the mitigation of a rare, very large loss. Yet many people are risk averse, and do purchase insurance. An old explanation is that your utility as a function of wealth is concave: it looks like a square root function. A move to the left along the x-axis (losing wealth) lowers utility (the y-axis) by more than an equal move to the right raises it.
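To make the concavity story concrete, here is a minimal sketch (the wealth, loss probability, and premium figures are all made up for illustration) showing that a square-root utility maximizer will pay a premium above the expected loss:

```python
import math

def u(wealth):
    # Concave "square-root" utility: each extra dollar adds less utility.
    return math.sqrt(wealth)

wealth = 100_000
loss = 50_000   # damage if the house burns down
p_fire = 0.01   # chance of a fire
premium = 550   # priced above the expected loss of $500

# By Jensen's inequality, the expected utility of facing the risk is
# lower than the utility of simply holding the expected wealth.
eu_uninsured = (1 - p_fire) * u(wealth) + p_fire * u(wealth - loss)
eu_insured = u(wealth - premium)

print(eu_insured > eu_uninsured)  # True: insurance is worth the loading
```

A risk-neutral agent, with linear utility, would refuse the same policy, since $550 exceeds the $500 expected loss.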
It’s appealingly simple, with an intuitive evolutionary explanation: far better to think there’s a lion in the grass and be wrong than to be eaten. The trouble is that it quickly leads to nonsense. People buy lottery tickets and insurance at the same time. Friedman and Savage (1948) allow the utility function to have both concave and convex sections, which can explain the anomalous behavior. Chetty and Szeidl (2006) give an intuitive foundation for this through “consumption commitments”. Many items, like a house or a car, cannot be bought in pieces. You commit to maintaining the income necessary to keep those items, and so you are risk averse and willing to buy insurance. At the same time, you are also willing to buy lottery tickets, because a win would shift you out of your current house into a really nice, new one.
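A minimal sketch of the Friedman-Savage idea, with made-up numbers: a utility function that is concave below current wealth and convex above it makes both a loaded insurance policy and a negative-expected-value lottery ticket attractive at the same time.

```python
import math

def u(w):
    # Concave below current wealth (100), convex above it; the two
    # pieces join continuously at w = 100. Numbers are illustrative.
    if w <= 100:
        return math.sqrt(w)
    return 10 + 0.05 * (w - 100) ** 2

# Insurance: a 10% chance of losing 50. The premium of 5.5 exceeds the
# expected loss of 5, yet insuring raises expected utility.
eu_uninsured = 0.9 * u(100) + 0.1 * u(100 - 50)
eu_insured = u(100 - 5.5)

# Lottery: a ticket costs 1 and pays 200 with probability 0.001, a
# clearly negative expected value, yet the convex region makes the
# jump to "a really nice, new house" worth chasing.
eu_no_ticket = u(100)
eu_ticket = 0.999 * u(100 - 1) + 0.001 * u(100 - 1 + 200)

print(eu_insured > eu_uninsured, eu_ticket > eu_no_ticket)  # True True
```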
Friedman and Savage’s explanation isn’t adequate either, though, because it isn’t robust to scaling up the size of the gambles. If we assume that your utility curve does not vary from moment to moment (a necessary assumption, in truth, or else it would not be possible to make any claims about the future at all), then someone who, at any level of wealth, would not take a 50/50 shot at losing $100 to gain $105 would also refuse a 50/50 gamble between losing $20,000 and gaining literally any amount of money. Or, in an example due to Samuelson, one might reject one hundred separate iterations of a 50/50 shot at losing $100 or gaining $200, yet accept the hundred offered as a package deal. Empirically plausible estimates of risk aversion over small stakes do not scale up to large stakes. (The set-ups here are largely borrowed from Matthew Rabin’s “Risk Aversion and Expected-Utility Theory” from 2000. It is short, and I recommend reading it. I have lost the citation for Samuelson, however.)
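The calibration point can be sketched numerically. Here I assume constant-absolute-risk-aversion (CARA) utility purely for concreteness (Rabin’s actual theorem needs no functional form): fitting the curvature to the small gamble already forces rejection of the enormous one.

```python
import math

def eu(a, stakes):
    # CARA utility u(w) = -exp(-a*w); expected utility of a 50/50
    # gamble over the two wealth changes in `stakes`, from wealth 0.
    return sum(0.5 * -math.exp(-a * s) for s in stakes)

# Calibrate the risk-aversion coefficient `a` by bisection so the agent
# is exactly indifferent about 50/50 lose-$100 / gain-$105 (u(0) = -1).
lo, hi = 1e-6, 1e-2
for _ in range(200):
    mid = (lo + hi) / 2
    if eu(mid, (-100, 105)) < -1:
        hi = mid
    else:
        lo = mid
a = (lo + hi) / 2

# The same agent now faces 50/50: lose $20,000 or gain $1 billion.
eu_big = eu(a, (-20_000, 1_000_000_000))
print(eu_big < -1)  # True: rejected, despite the astronomical upside
```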
Our analysis of decision making under uncertainty improved once we started systematically testing how people actually behave when presented with different choices. Kahneman and Tversky (1979) were the first to document the fourfold pattern of risk attitudes that shows up experimentally. People overweight small probabilities, of losses and of gains alike. People are also loss averse, so they regard the same situation as worse when it is framed as a loss. If I have a coffee mug, there should be no difference between what you are willing to pay to buy the mug and what you would pay to keep it from being taken away after I had given it to you; in practice, people endowed with the mug value it considerably more.
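A sketch of two cells of the fourfold pattern, using the functional forms from Tversky and Kahneman’s later (1992) cumulative prospect theory with their median parameter estimates (treat the exact numbers as illustrative):

```python
def w(p, gamma=0.61):
    # Probability weighting: small probabilities are overweighted,
    # large ones underweighted.
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

def v(x, alpha=0.88, lam=2.25):
    # Value function: concave for gains, steeper (loss-averse) for losses.
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

# Low-probability gain: a 1% shot at $100 versus its expected value, $1 sure.
longshot = w(0.01) * v(100)   # about 3.2: the longshot looks attractive
sure_1 = v(1)                 # 1.0

# High-probability gain: a 95% shot at $100 versus $95 sure.
near_sure = w(0.95) * v(100)  # about 45.6
sure_95 = v(95)               # about 55.0: the sure thing wins
```

So the same agent chases the lottery-like longshot while shunning risk on the near-certain gain, which is exactly the pattern that buys both lottery tickets and insurance.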
Another complication that shows up empirically is that people apply different discount rates to the present than to the future. Suppose that I offer you a choice between $100 today, or $105 tomorrow. It doesn’t necessarily matter what you pick, but if your behavior is consistent, then you should make the same choice if I offer you $100 one year from now, or $105 a year and a day from now. People with this time-inconsistent discounting can be described as hyperbolic discounters, after Laibson (1997). Our discount rates themselves decline over time, rather than us valuing future events at an exponentially declining, but smooth, rate. Re-reading Laibson, I am struck by his suggestion that America’s declining savings rate could simply be financial innovation getting better at “parting fools from their money”. I have not explored it in any depth, but it does make me question whether we can draw a line from working in finance to an increase in welfare.
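The reversal can be sketched with the quasi-hyperbolic (beta-delta) discounting Laibson works with; the beta and delta values below are made up for illustration:

```python
def present_value(amount, days_away, beta=0.9, delta=0.999):
    # Quasi-hyperbolic (beta-delta) discounting in the style of
    # Laibson (1997): anything in the future takes a one-off extra
    # hit of `beta`, on top of smooth exponential discounting by `delta`.
    if days_away == 0:
        return amount
    return beta * delta**days_away * amount

# Today vs tomorrow: the immediate $100 wins.
take_now = present_value(100, 0) > present_value(105, 1)

# The same one-day gap pushed a year out: now the $105 wins.
wait_later = present_value(105, 366) > present_value(100, 365)

print(take_now, wait_later)  # True True: a preference reversal
```

Under plain exponential discounting (beta = 1) the two comparisons could never come out differently, since the one-day gap gets the same relative discount wherever it sits.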
It is possible that the story is much simpler. Ryan Oprea recently published an article in the AER entitled “Decisions Under Risk Are Decisions Under Complexity”. He argues that the cognitive biases which seem to be about risk are in fact just reactions to complexity. The experiment consisted of giving people options which are difficult to work through, but are not lotteries at all: they were purely deterministic payoffs of different amounts, with an optimal choice that should be the same for everyone. Instead of a 10% chance of $25, they might offer a certain payment of 10% of $25. Both of these have the same expected value, but the second is deterministic. If people behave exactly the same toward both, it suggests that risk aversion is about simplifying choice when the world is too hard to think about.
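The design can be sketched in a couple of lines (the payoff numbers are from the example above, not necessarily Oprea’s):

```python
# A lottery: $25 with probability 0.10, otherwise nothing.
lottery_ev = 0.10 * 25 + 0.90 * 0

# Its deterministic "mirror": a sure payment of 10% of $25. Same expected
# value, zero risk, so any difference in behavior between the two
# cannot be about risk itself.
mirror = 0.10 * 25

print(lottery_ev, mirror)  # 2.5 2.5
```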
This was challenged by Banki, Simonsohn, Walatka, and Wu. It is not so much that they deny this happens, but that the effect only shows up among dumber people (well, people who performed poorly on the comprehension questions). That was 75% of subjects, but among the remaining 25%, the usual pattern of risk aversion applied. These studies don’t seem to be in profound disagreement with each other, though! When things are too hard to think about, people behave in seemingly non-optimal ways.
Risk-aversion is extremely common and important. Unfortunately, we lack a general law of why it exists as it does, and humans behave in persistently irrational ways. I think these can be best explained as humans simplifying the world, and that there is room for government intervention to increase welfare simply by making the world simpler.
I think that this can be explained by realizing that the expected value doesn't actually carry much information for most distributions.
Suppose I have an idea that I am 90% sure will make me $100. It's tempting to just say that the expected value is $90, and that's what's important. Any other venture with a $90 expected value I should treat the same, if I'm risk-neutral.
But where did the 90% and the $100 come from? Well, they're just expected values of some other distributions, aren't they? And those distributions could look like *anything*, they don't even have to be normal! So the "variance" around my $90 expected value could not only be large, but could (and generally will) have bizarre and non-intuitive dependencies on any of its inputs.
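A toy illustration of the point, with made-up ventures: three gambles share the same $90 expected value while having wildly different shapes.

```python
# Three ventures, each with an expected value of $90 (illustrative numbers).
ventures = {
    "sure thing": {90: 1.00},
    "likely win": {100: 0.90, 0: 0.10},
    "moonshot":   {1_000: 0.09, 0: 0.91},
}

for name, dist in ventures.items():
    ev = sum(x * p for x, p in dist.items())
    var = sum(p * (x - ev) ** 2 for x, p in dist.items())
    print(f"{name}: EV = {ev:.0f}, variance = {var:.0f}")
```

An expected-value-only decision maker is indifferent among all three, even though their variances run from 0 through 900 to 81,900, and real distributions need not even be this well-behaved.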
If "risk-neutral" means only caring about the expected value, that's really saying that you don't care at all about any aspect of the actual shape of any of the distributions involved! That sounds pretty dramatically irrational to me.
Really, I think that this behavior is explained by people (heuristically) taking into account parameters other than just expected value, and that being "risk-neutral" really means insensitivity to only some of those parameters.
For lottery tickets, people get significant utility out of the act of scratching and playing itself, which is also shown by the fact that people play free online gambling games. It can be hypothesized that the inherent risk of loss in gambling gives a higher thrill, increasing the utility over that of a free gambling simulator.