I tend to be rather harshly against paternalistic interventions. I believe that preferences are revealed through action, and so, as conditions approach those required for the first welfare theorem to hold, modifying people’s behavior is more and more likely to lead to social loss. I think most paternalistic interventions simply impose one group’s preferences on another. We take a view of the right way to live a life, and punish deviance from it, even if the only person affected is the one doing it. It is condescending and disgusting to me. It reeks of presumption and arrogance.
So now, with that out of the way, I want to discuss reasons for paternalism.
The knapsack problem is a style of problem which revolves around choosing the optimal combination of things. Imagine we have a knapsack, in which we can pack items which have different sizes and values. With very few items, the problem is trivial: we can simply test all combinations, and find the one which maximizes value. Unfortunately, the number of combinations grows exponentially with the number of items; with n items there are 2^n subsets to check. Large numbers of items simply cannot be computed in time for the answer to be relevant to us. Instead, we have to apply heuristic algorithms, which simplify the problem so that we can solve it in a reasonable amount of time. You may not believe that the growth could truly be so explosive, but with just 50 items there are already over a quadrillion subsets.
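To make the brute-force approach concrete, here is a minimal sketch. The function name and the (value, size) pair representation are my own choices for illustration, not anything from the text:

```python
from itertools import combinations

def brute_force_knapsack(capacity, items):
    """Try every subset of items and return the best total value.

    items is a list of (value, size) pairs. With n items there are
    2**n subsets to test, which is why this only works for small n.
    """
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(size for _, size in subset) <= capacity:
                best = max(best, sum(value for value, _ in subset))
    return best

# A bag of size 5 and items (value, size): the best pack is 3 + 4 = 7.
print(brute_force_knapsack(5, [(3, 2), (4, 3), (5, 4)]))  # prints 7
```

The doubly nested loop is exactly the exponential blow-up described above: each of the 2^n subsets is checked once, so the approach dies quickly as n grows.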
Let’s give an example, taken from “Investment Incentives in Near-Optimal Mechanisms”. Imagine we have a knapsack of size 20, and three items, with values 11, 11, and 12, and sizes 10, 10, and 11. A “greedy” algorithm divides each item’s value by its size, and adds the highest relative-value items to the bag until no more can be added. In this case, we add the first two items, since 11 over 10 is greater than 12 over 11, and stop there. The total value of the bag is 22, and the bag is perfectly full. The greedy algorithm can go awry, though. Imagine the last item’s value was actually 13, with its size remaining 11. 13 over 11 is greater than 11 over 10, so the algorithm picks that item and can put no further items in the bag. Instead of 22, the value of the items is only 13. Thankfully, the loss from a greedy algorithm relative to optimal is bounded at ½, which you can see if you squint and imagine the amounts getting larger and larger. (Strictly, the ½ guarantee holds for the standard variant that also considers taking just the single most valuable item; ratio-greedy alone can do arbitrarily badly.) All such heuristics are like this. While there may be sets of choices for which applying the algorithm leads to optimal results, there are no guarantees, only bounds on worst-case performance, of which some may be better or worse than others.
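The greedy procedure just described can be sketched in a few lines. This is an illustrative implementation of the textbook heuristic, with names of my own choosing:

```python
def greedy_knapsack(capacity, items):
    """Pack items in descending value-to-size ratio until no more fit.

    items is a list of (value, size) pairs.
    """
    remaining, total = capacity, 0
    for value, size in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if size <= remaining:
            remaining -= size
            total += value
    return total

# The example from the text: greedy matches the optimum here...
print(greedy_knapsack(20, [(11, 10), (11, 10), (12, 11)]))  # prints 22
# ...but goes awry when the last item's value rises to 13:
print(greedy_knapsack(20, [(11, 10), (11, 10), (13, 11)]))  # prints 13
```

Raising one value from 12 to 13 flips the ratio ordering, the 13-valued item is packed first, and nothing else fits: the same heuristic that was optimal a moment ago now leaves 9 units of the bag wasted.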
In life, we face uncountably many choices. Everyone has the goal of maximizing their utility, and searches for the combination of actions which achieves this. We don’t think about this most of the time, because we have lots and lots of little algorithms to make decisions for us. We don’t reconsider whether we want to go into work every day, or what route to take, or which entrance to use; whether we should do our job, or stand up and jump about; whether we should eat or not; whether we should take narcotics; or any of the millions and millions of other micro-actions we could take. We cannot perfectly optimize.
People have different levels of intelligence, which can be thought of as computational power. As our computational power decreases, we must apply simpler and simpler algorithms to decide. Obviously, these algorithms must be worse on some axis than our normal algorithms, or else we would have simply adopted them all along. Take the greedy algorithm. It depends upon finding the value-to-size ratio for all the items, and only then deciding which items to include. If someone is unable to enumerate all of the options, they must randomly sample and decide among only some of the possible choices. In this event, the potential loss is unbounded. As people sample more and more of the choice set, the expected inefficiency declines asymptotically. Smart people, who can include more possibilities in the values they compare, can impose their solutions for a gain in welfare, even on those whose algorithms do not include imitating the successful.
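The sampling failure is easy to simulate. The model below is my own hypothetical illustration, not anything from the text: an agent who can only inspect k randomly drawn options runs greedy over that sample, and with small k will usually miss the one item worth having.

```python
import random

def sampled_greedy(capacity, items, k, seed=0):
    """Greedy over a random sample of k items, standing in for an agent
    who cannot enumerate every option. items are (value, size) pairs."""
    rng = random.Random(seed)
    sample = rng.sample(items, min(k, len(items)))
    remaining, total = capacity, 0
    for value, size in sorted(sample, key=lambda it: it[0] / it[1], reverse=True):
        if size <= remaining:
            remaining -= size
            total += value
    return total

# One item worth 1000 hidden among 99 near-worthless ones; bag fits one item.
items = [(1000, 10)] + [(1, 10)] * 99
print(sampled_greedy(10, items, k=len(items)))  # sees everything: prints 1000
small = sampled_greedy(10, items, k=3)  # a 3-item sample usually misses the 1000
```

With full information the agent captures 1000; with a small sample the outcome is usually 1, a 1000-fold loss, and nothing stops the gap from being made as large as you like. This is the sense in which the loss from sampling is unbounded.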
Thus, we can rationalize banning drugs or gambling. There are people with defective algorithms for maximizing happiness. They are stuck in a local maximum. You can imagine how someone whose algorithm is “do what I want, whenever” will find themselves far worse off than someone who does the things which are unpleasant now but pay off later. They simply do not grasp the loss from short-sightedness. This is not a question of people having different discount rates or time preferences; such people really are simply worse off.
This is the case for paternalism. The other cases are either polite rephrasings of this one, or do not care about actually making people better off in the slightest. Imposing your preferences upon others, when it makes them worse off without a gain to yourself exceeding that loss, is really bad. I think this should be easy for anyone of ordinary moral sensibilities to see. When someone imposes their preference to have intercourse onto another, we call that “rape”; when someone imposes their preference that you live with them against your will, we call that “kidnapping” and “false imprisonment”. If you’re going to stop people from doing what they want, you had better have a damn good reason why. Perhaps we should be more cautious about assuming that smoking cigarettes really makes people worse off, or killing themselves, or working a job for a wage we deem too low, or living in a house we deem too small. Government intervention should be thought of as harmful by default. It must be used carefully.
Nick, see this related paper I'm working on: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4715593.
Do you think paternalism is more justified (as suggested by the word itself) when it comes to the interactions between parents and children? It’s tough to fit children into a rational-agent framework; it’s not like a six-month-old can strike a Coasian bargain with their parents to stop smoking around them now in return for some portion of their future earnings.
You could posit that in some cases, the principal-agent problems between the state/society and a child are lesser than those same problems between parent(s) and their child. I think that the vast majority of the time, you would trust the parents to be better guarantors of a child’s well-being than the state, but in extreme cases, there is a definite argument.
And unlike arguments about intelligence, addiction, or old age, it is highly likely that a young child will be a capable agent in the future, and it intuitively feels to me like that future capable agent's wishes about how it would like to be treated now are best approximated by a default to parental preference, but with some societal backstop to guard against parental malice.