I tend to be rather harshly against paternalistic interventions. I believe that preferences are revealed through action, and so, as conditions approach those required for the first welfare theorem to hold, modifying people’s behavior is more and more likely to lead to social loss. I think most paternalistic interventions simply impose one group’s preferences on another. We take a view of the right way to live a life and punish deviance from it, even when the only person affected is the one doing it. It is condescending and disgusting to me. It reeks of presumption and arrogance.
So now, with that out of the way, I want to discuss reasons for paternalism.
The knapsack problem is a style of problem which revolves around choosing the optimal combination of things. Imagine we have a knapsack, in which we can pack items which have different sizes and values. With very few items, the problem is trivial — we simply test all combinations and find the one which maximizes value. Unfortunately, the number of combinations grows exponentially with the number of items: with n items there are 2^n subsets to check. Large numbers of items simply cannot be computed in time for it to be relevant to us. Instead, we have to apply algorithms which simplify the problem so that we can solve it in a reasonable amount of time. You may not believe that there could truly be such an increase, but note that doubling the number of items squares the number of combinations.
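The brute-force approach can be sketched in a few lines of Python (the item sizes and values here are the ones used in the example below):

```python
from itertools import combinations

def best_packing(items, capacity):
    """Exhaustive search: try every subset of items (2^n of them)
    and keep the most valuable subset that fits in the knapsack."""
    best_value = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            size = sum(s for s, v in subset)
            value = sum(v for s, v in subset)
            if size <= capacity and value > best_value:
                best_value = value
    return best_value

# Each item is a (size, value) pair.
items = [(10, 11), (10, 11), (11, 12)]
print(best_packing(items, 20))  # -> 22
```

With three items this loop checks only eight subsets; with forty items it would check over a trillion, which is why shortcuts become necessary.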
Let’s give an example. (Taken from “Investment Incentives in Near-Optimal Mechanisms”.) Imagine we have a knapsack of size 20, and three items, with values 11, 11, and 12, and sizes 10, 10, and 11. A “greedy” algorithm divides each item’s value by its size, and adds the highest relative-value items to the bag until no more can be added. In this case, we add the first two items, since 11 over 10 is greater than 12 over 11, and stop there. The total value of the bag is 22, and the bag is perfectly filled. The greedy algorithm can go awry, though. Imagine the last item’s value was actually 13, with its size remaining 11. 13 over 11 is greater than 11 over 10, so the algorithm picks that item and can put no further items in the bag. Instead of 22, the value of the items is only 13. Thankfully, the loss from greedy relative to optimal is bounded at ½ — provided you take the better of the greedy packing and the single most valuable item — which you can see if you squint and imagine the amounts getting larger and larger. All algorithms are like this. While there may be sets of choices for which applying the algorithm leads to optimal results, there are no guarantees – only bounds on worst-case performance, of which some may be better or worse than others.
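The two runs above can be checked directly. Here is a minimal sketch of the density-greedy rule, where items are (size, value) pairs as in the example:

```python
def greedy_value(items, capacity):
    """Pack items in decreasing value-per-size order,
    skipping anything that no longer fits."""
    total_value, remaining = 0, capacity
    for size, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if size <= remaining:
            remaining -= size
            total_value += value
    return total_value

print(greedy_value([(10, 11), (10, 11), (11, 12)], 20))  # -> 22 (optimal)
print(greedy_value([(10, 11), (10, 11), (11, 13)], 20))  # -> 13 (optimal is 22)
```

Changing one item’s value from 12 to 13 flips the sort order, and the same rule goes from a perfect packing to barely half the optimum.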
In life, we face uncountably many choices. Everyone has the goal of maximizing their utility, and searches for the combination of actions which achieves it. We don’t think about this most of the time, because we have lots and lots of little algorithms to make decisions for us. We don’t reconsider whether we want to go into work every day, or which route to take, or which entrance to use; whether we should do our job, or stand up and jump about; whether we should eat or not; whether we should take narcotics; or any of the millions and millions of other micro-actions we could take. We cannot perfectly optimize.
People have different levels of intelligence, which can be thought of as computational power. As our computational power decreases, we must apply simpler and simpler algorithms to decide. Obviously, these algorithms must be worse on some axis than our normal ones, or else we would have adopted them all along. Take the greedy algorithm. It depends upon finding the value-to-size ratio for all the items, and only then deciding which to include. If someone is unable to survey all of the options, they must randomly sample and decide among only some of the possible choices. In that event, the potential loss is unbounded. As people sample more and more of the option set, the inefficiency declines asymptotically. Smart people, who can include more possibilities in the values they compare, can impose their solutions for a gain in welfare, even on those whose algorithms do not include imitating the successful.
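To illustrate why the loss from sampling is unbounded, here is a minimal sketch (the option set is hypothetical, not from the essay): an agent who runs greedy over only the options she happens to see can miss the one option that matters, and scaling that option’s value makes the loss arbitrarily large.

```python
def greedy_value(visible_items, capacity):
    """Greedy packing, but only over the options the agent can see."""
    total, remaining = 0, capacity
    for size, value in sorted(visible_items, key=lambda it: it[1] / it[0], reverse=True):
        if size <= remaining:
            remaining -= size
            total += value
    return total

# Hypothetical option set: nine small items and one big, dominant one.
items = [(1, 1)] * 9 + [(10, 1000)]

print(greedy_value(items, 10))      # sees everything -> packs the big item, value 1000
print(greedy_value(items[:9], 10))  # sample missed the big item -> value 9
```

Replace 1000 with any larger number and the ratio between the two outcomes grows without limit — no worst-case bound survives once the algorithm cannot see the whole choice set.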
Thus, we can rationalize banning drugs or gambling. There are people with defective algorithms for maximizing happiness. They are stuck in a local maximum. You can imagine how someone whose algorithm is “do what I want, whenever” will end up far worse off than someone who does the things which are unpleasant now but pay off later. They simply do not grasp the loss from short-sightedness. This is not a question of people having different discount rates or time preferences – people really are simply worse off.
This is the case for paternalism. The others are either polite rephrasings of this, or do not care about actually making people better off in the slightest. Imposing your preferences upon others, when it makes them worse off without a gain to yourself exceeding that loss, is really bad. This should be easy for anyone of ordinary moral sensibilities to see. When someone imposes their preference to have intercourse onto another, we call that “rape”; when someone imposes their preference that you live with them against your will, we call that “kidnapping” and “false imprisonment”. If you’re going to stop people from doing what they want, you had better have a damn good reason why. Perhaps we should be more cautious about assuming that people smoking cigarettes really makes them worse off, or people killing themselves, or working a job for a wage we deem too low, or living in a house we deem too small. Government intervention should be thought of as harmful by default. It must be used carefully.
Nick, see this related paper I'm working on: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4715593.
The average anti-libertarian paternalist (me) would say you are correct, but our disagreement stems from the fact that I think every human brain operates on an algorithm with some obvious inefficiencies. For example, I think the modal person optimizes short-term enjoyment over long-term benefits, even when doing so is plainly irrational. This is why people smoke. The only relevant question is whether most people are like this. If you think they are (which I do), then some level of paternalism for everyone is justified.
Our second disagreement is moral. As you note, intelligence is variable, as are traits like self-control and impulsivity. If I am a high intelligence person with great self control, I can sell a lot of heroin to low intelligence impulsive people, and make a lot of money by killing them. Should I be allowed to do this? Well, if you think every human being is made in the image of God, it’s plainly wrong.
Our third disagreement stems from the fact that utility is not objective. For example, does psychological enjoyment count as utility? Let’s say it does, and I’m a sadistic serial killer who derives psychic enjoyment from murder. But I’m also a utility maximizer, so I only murder elderly women who are fighting cancer and have a very low quality of life. From a strictly utilitarian perspective, if we are counting psychic enjoyment as utility, this is totally acceptable.