I recently picked up a copy of Milton Friedman’s 1953 book, “Essays in Positive Economics”, from a used book store in Berkeley. It is best known for its lead essay, “The Methodology of Positive Economics”, which remains interesting, and somewhat right, even if it is ultimately wrong.
He starts by differentiating positive from normative economics. Normative economics is the study of what should be; positive economics is the study of what is. According to Friedman, a model exists to predict, and it can be judged only on its ability to predict; the realism of its assumptions doesn't matter. For example, the leaves on a tree grow thicker on the side where the sun shines. To Friedman, it is perfectly reasonable to describe the tree as maximizing the sunlight it receives given the cost of investing in leaves. Likewise, a businessman can be said to maximize expected profits, and to behave like homo economicus, even if he never explicitly calculates how to optimize his use of resources. A hypothesis is never proven, but a long track record of accurate predictions gives us confidence that the model will continue to predict out of sample, and we can use it again and again.
The essay falters on a few main points. First, as Friedman himself notes, there is an almost infinite array of possible hypotheses, so we must have some sense of what is plausible and implausible to guide which hypotheses we test at all. Since all realized events are noisy, we may not be able to quickly dismiss false ideas, nor will we accumulate enough empirical evidence to converge on any one hypothesis; we would need to run more tests than we have time for. It is perhaps not surprising, then, that the movement to find more plausible assumptions for our models (to give them microfoundations) has been particularly strong in macroeconomics, where we only have a few observations at a time.
Second, Friedman is far too cavalier about the accuracy of idealized models. He posits that market behavior is close to perfect competition, when this is wildly untrue. The assumptions of perfect competition (perfect information, identical firms, no fixed costs, no transaction costs) cannot simply be abstracted away. We spend nowhere near the socially optimal amount on research and development, for the simple reason that the inventor cannot appropriate the gains. This isn't a small deviation; it's an enormous one. Friedman is willing to grant that it is not always accurate to treat firms as perfectly competitive, yet he is dismissive of the models of imperfect competition from Chamberlin and Robinson. Here there may be some allowance for the progression of science: Dixit and Stiglitz (1977) allowed us to explain the patterns of international trade (Krugman 1979), why geographic concentration exists (Krugman 1991), and much else. But even then, he should know, as he indeed allows later, that the deviations from perfect competition are substantial.
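To make the appropriability point concrete, here is a minimal sketch, with hypothetical notation not drawn from Friedman. Suppose an invention would create social value $V$ at cost $c$, but the inventor can capture only a fraction $\alpha < 1$ of that value. The inventor invests only if

$$\alpha V > c,$$

so every project with $\alpha V < c < V$ is socially worthwhile but privately unprofitable, and the shortfall in research spending grows as appropriability $\alpha$ shrinks.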
Yet, there is something to the essay. Economics is ultimately about making better decisions in the future. If something helps us do that, it is good. It needn’t matter if we don’t entirely understand why it does that, in much the same way that it doesn’t matter if we don’t entirely understand why an LLM gives the output it does, so long as it is useful.
There is a greater hunger for economics to be wrong than for most sciences. It often hands us conclusions that challenge our preconceptions, or tells us that our mental models of the world are mistaken. To this end, people are willing to indulge in criticizing the unrealism of its assumptions, even when doing so does nothing to improve predictive accuracy. Anyone who has ever read the Guardian, or been identified as an economist at a party, knows what I am talking about. Behavioral economics has expanded in the public eye far beyond its real contributions (prospect theory) to become a catch-all reason for rejecting the knowledge of economists. This is obviously silly and unserious.
It is striking, though, that good economic reasoning, of a kind Milton Friedman would doubtless approve of, can often take the opposite form of questioning the assumptions. Take the case of minimum wage laws. A basic theory of price controls in a competitive market is that they create distortions. Firms hire up to the point where hiring more is no longer profitable, and raising wages above this point means that unemployment necessarily results. Measuring the effects of changes in minimum wage laws is very hard, however. Changes might be “endogenous”, in that they are caused by forces within the model. A state legislature might correctly expect economic growth and raise the minimum wage ahead of it, leading us to conclude that the minimum wage law increased both employment and wages. One solution is a “difference-in-differences” approach: study workers on either side of a county or state border, on the assumption that, since they share an economic area, they are alike in everything except the regulation that applies on one side but not the other. Card and Krueger (1994) took this approach to compare fast food workers in New Jersey and Pennsylvania, and found that, if anything, the minimum wage increased employment.
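The difference-in-differences logic reduces to a simple calculation; what follows is a minimal sketch with hypothetical notation, not Card and Krueger's exact specification. With average employment per store $\bar{E}$ measured before and after New Jersey's increase,

$$\hat{\tau} = \left(\bar{E}_{NJ}^{\text{after}} - \bar{E}_{NJ}^{\text{before}}\right) - \left(\bar{E}_{PA}^{\text{after}} - \bar{E}_{PA}^{\text{before}}\right),$$

the Pennsylvania change absorbs whatever shocks hit the shared economic area, and what remains, $\hat{\tau}$, is attributed to the minimum wage, provided the two states would otherwise have trended in parallel.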
It is noteworthy that there is a model of the world in which a wage floor increases employment and output. If employers possess market power over their workers, they can hold hiring below the efficient quantity in order to drive down wages. A wage floor makes the marginal cost of labor flat at the mandated wage, so the firm gains nothing from underhiring and hires the efficient number of people. Those who favor minimum wages for reasons of redistribution tend to gravitate toward this model, out of both wishful thinking and a desire to explain some of the empirical results.
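The monopsony logic can be sketched as follows, with hypothetical notation and assuming a single firm facing an upward-sloping labor supply curve $w(L)$ and revenue $R(L)$:

$$\max_L \; R(L) - w(L)L \quad \Rightarrow \quad R'(L) = w(L) + w'(L)L.$$

Because $w'(L) > 0$, the marginal cost of the last hire exceeds the wage, so the firm stops short of the competitive level where $R'(L) = w(L)$. A binding floor $\bar{w}$ set between the monopsony wage and the competitive wage makes the marginal cost of labor constant at $\bar{w}$, so the firm hires up to where $R'(L) = \bar{w}$ and employment rises. Note that the employment-maximizing floor depends on each firm's particular $R$ and $w(L)$, which is exactly the fragility the next paragraph presses on.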
And yet, the assumptions are too fuzzy. In order for wage floors to increase employment and output, the floors would have to be tailored to every company. If firms face different wages (and indeed they do), a wage floor optimal for one industry would cause disemployment in a second and have no effect on a third. The businesses most affected, like fast food and retail, are also notoriously cutthroat, low-margin businesses. It is implausible that a McDonald's in Philadelphia is exercising much in the way of market power over its employees within the fast food sector alone, let alone across all the other occupations that low-skilled workers might take.
And so later empirical work has largely, though not entirely, supported the prediction that minimum wage laws cause unemployment. Neumark and Wascher (2000) reassessed Card and Krueger and found the results to be an artifact of faulty data. (Neumark and Wascher used payroll records rather than phone surveys, which proved more accurate.) Since then we have had 25 years of back and forth over how best to measure the effects. We cannot test the conclusions of the theory in any reasonable amount of time, so we must rely on whatever assumptions seem most realistic.
I like the essay for its practical-mindedness. Economics should be a science that makes predictions and improves the world. I simply don't think we can entirely ignore the realism of assumptions, and in practice, Friedman doesn't entirely ignore them either.
For those who wish to learn more, I highly suggest reading Kevin Bryan’s review, which can be found here.