Predictions Are Hard, Especially About the Future
What have we really learned from endogenous growth models?
One of the main tasks of macroeconomists is to forecast the future economy. This has taken the form either of DSGE models, which attempt to forecast the business cycle, or of growth models, which abstract away from cyclical variations to predict growth in the long run. It is the second that we will discuss.
The first serious growth model is the Solow growth model, which remains a staple of introductory macro classes. In it, output (Y) is a function of capital (K) and labor (L). The share of output paid to each factor is determined by the exponent multiplying it: if the production function is Cobb-Douglas with constant returns, the exponents add up to one, and each factor's share of output equals its exponent. Labor is taken to be exogenously determined, and only capital can accumulate. Changes in technology are represented as changes in a scalar A, which multiplies labor alone. If an economy has too little capital, output grows as the capital stock converges to the level implied by its labor force; if it has too much, depreciation outpaces investment and the capital stock shrinks back. This can be extended, as in Mankiw-Romer-Weil (1992), to include human capital, or anything else. (More detail can be found here).
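The convergence mechanics can be sketched in a few lines of code; the parameter values below are illustrative placeholders, not a calibration of any actual economy:

```python
# Solow model sketch: Y = K^alpha * (A*L)^(1-alpha), with A and L held fixed
# and capital accumulating out of a constant saving rate.
# All parameter values are illustrative, not calibrated.
alpha = 0.3   # capital's exponent (its share of output under Cobb-Douglas)
s = 0.2       # saving rate
delta = 0.05  # depreciation rate
A, L = 1.0, 1.0

def steady_state_capital():
    # The steady state solves s * k^alpha = delta * k.
    return (s / delta) ** (1 / (1 - alpha))

def simulate(k0, periods=500):
    """Iterate capital accumulation forward from an initial stock k0."""
    k = k0
    for _ in range(periods):
        y = k ** alpha * (A * L) ** (1 - alpha)
        k += s * y - delta * k  # investment minus depreciation
    return k

k_star = steady_state_capital()
# Whether we start far below or far above the steady state,
# the economy converges to the same capital stock.
print(simulate(0.1 * k_star), simulate(5.0 * k_star), k_star)
```

Starting from a tenth of, or five times, the steady-state capital stock, both paths end up at the same k* — which is the convergence logic described above.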
Later work, called “endogenous growth”, would extend this to incorporate the things determining A. It was trying to solve two puzzles, as recounted by Romer (1994). The first was that Solow’s model strongly implied that the marginal return to capital in poor countries should be much higher than it actually was, and that capital should flow from rich to poor countries. It doesn’t, and in fact rich countries were (at the time) growing much faster than poor countries. That has since changed; it may change again. The other was that Solow’s model lives in the world of perfect competition, where everything must be paid its marginal product. Ideas have high fixed costs to discover and low costs to disseminate, so they obviously are not. The first models, the AK models (as in Romer (1986)), endogenized A by tying it to research effort, with research a function of savings. (AK models fold all the sources of increasing returns into that one term.) Later work, like Romer (1990), focuses more explicitly on research by firms. The most notable prediction is that if the return to research falls below the rate of return on capital, growth stops, because nobody creates new ideas.
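The logic of the simplest AK variant can be written in two lines (a textbook reduction, not the notation of any particular paper):

```latex
Y = AK, \qquad \dot{K} = sY - \delta K
\quad\Longrightarrow\quad
g \equiv \frac{\dot{K}}{K} = sA - \delta .
```

Because the marginal product of capital never diminishes, anything that permanently raises s or A permanently raises the growth rate — which is exactly the kind of prediction that can be tested against the data.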
The endogenous growth work has been well-cited, and certainly inspired a lot of thinking. It is good to have a clear framework for thinking about growth. What I want to emphasize is that these models are extremely fragile to a bunch of assumptions about how technological progress takes place, the nature of the production function, and the stability of parameters over time. Many of these assumptions are driven not by verisimilitude, but by analytical convenience.
There are two Chad Jones papers relevant to this, both from 1995: “Time Series Tests of Endogenous Growth Models” and “R&D-Based Models of Economic Growth”. The first argues that, in these models, anything that affects the rate of discovery of new ideas ought to change the growth rate of the economy. That hasn’t happened. Instead, output per capita just keeps moving with remarkable consistency along a log-linear line.
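The “log-linear line” claim is about constant-rate growth: in logs, a series growing at a fixed rate is exactly a straight line. A toy illustration (the 2% rate and the series are invented for the example, not estimates of anything):

```python
import math

# A series growing at a constant rate g is a straight line in logs:
# log y_t = log y_0 + t * log(1 + g).
g, y0 = 0.02, 1.0  # placeholder growth rate and starting level
T = 100
log_y = [math.log(y0 * (1 + g) ** t) for t in range(T)]

# The ordinary least-squares slope of log y on t recovers log(1 + g).
t_bar = (T - 1) / 2
y_bar = sum(log_y) / T
slope = (sum((t - t_bar) * (ly - y_bar) for t, ly in enumerate(log_y))
         / sum((t - t_bar) ** 2 for t in range(T)))
print(slope, math.log(1 + g))
```

Jones's observation is that a century of actual output-per-capita data hugs such a line, despite large changes in the inputs the models say should bend it.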
The other points out that endogenous growth models predict that an increase in the number of researchers should raise the rate of growth. Since it doesn’t, there must be decreasing returns to new researchers, perhaps because new ideas are getting harder to find, and we will converge back to the Solow model in the long run. I’ll note that decreasing returns to research activity is itself an assumption, and need not always hold. In particular, these models assume that ideas never depreciate, which doesn’t seem true; and in any event, the rate at which ideas are getting harder to find need not hold into the future.
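Jones's semi-endogenous alternative is easiest to see in the ideas production function (the notation is the standard one, with λ capturing duplication of effort and φ capturing intertemporal spillovers):

```latex
\dot{A} = \bar{\delta}\, L_A^{\lambda} A^{\phi}, \qquad \phi < 1
\quad\Longrightarrow\quad
g_A = \frac{\lambda n}{1-\phi} \text{ along the balanced growth path,}
```

so long-run growth is pinned down by population growth n, not by the share of people doing research. Note also that the “ideas never depreciate” assumption is baked into the absence of any depreciation term on A.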
Going back to the production function: these models largely rule out any non-scalar changes in productivity (a scalar shift is called a Hicks-neutral shock). Things which change the optimal ratio of capital to labor are just not in the framework at all. When we try to extend it into the future, we are simply speculating about what future technology will be like. Even work like Acemoglu (2002) on directed technological change, where the direction of innovation is dictated by which factors are relatively more abundant, still has to assume that the elasticity governing which technologies we find will stay the same over time. For decades, the labor share of income was stable – so stable that Nicholas Kaldor counted it among his stylized facts of economic growth. It is stable no longer. There is considerable argument about the magnitude of the decline in the labor share of income, but it probably has declined, and could decline even more in the future. Nothing is written.
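To see why Cobb-Douglas rules this out, compare it with a CES production function, where the labor share moves with the capital-labor ratio unless the elasticity of substitution σ is exactly one:

```latex
Y = \left[\alpha K^{\frac{\sigma-1}{\sigma}} + (1-\alpha)(AL)^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
\qquad
\frac{F_L L}{Y} = (1-\alpha)\left(\frac{AL}{Y}\right)^{\frac{\sigma-1}{\sigma}} .
```

At σ = 1 the CES form collapses to Cobb-Douglas and the labor share is the constant 1 − α, whatever happens to K, L, or A. Away from σ = 1, technological change can move factor shares — which is precisely the behavior the standard framework assumes away.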
In short, much of the growth literature has boiled down to explaining why growth has been log-linear (a constant growth rate), and coming up with a mathematically plausible framework for that straight line to continue indefinitely, in spite of all the plausible reasons we have for why it shouldn’t. I am reminded of Scott Alexander’s post “Heuristics That Almost Always Work”. It is often the right prediction to say that things will continue in much the same way that they always have. The trouble is that telling us when they won’t is exactly what would make these models useful!
This is why I am intensely skeptical of forecasts of the future of AI. Any such forecast has to assume stable parameters, and I don’t think we have any reason to believe that is true. If AI is a big deal, it will shake up everything we know about the economy. It cannot simply be boiled down to an increase in labor, or an increase in capital. What we produce and how we produce it will change. Conditional on the technology taking a particular form, we might be able to say things about what the economy will be like in a year – but certainly not five!
Just because something has a model does not mean it is certain. It is important to be precise in what you are saying, absolutely, and all serious work should be able to write down a model incorporating what its authors believe to be the case. But we must not confuse the precision of a model with accuracy.
I think economics is at its best when we are analyzing things which will recur. Every year there are new students – I can absolutely believe that the things we learn from an RCT this year will improve the education of children four years from now. But I simply cannot believe that we can say anything about how the economy will be in 20 years with any confidence whatsoever, and we economists do the public a disservice when we do not communicate how uncertain we are.

"One of the main tasks of macroeconomists is to forecast the future economy."
No. That's the job of every stock buyer/seller, businessman and consumer.
The macro or micro _economist's_ job is to make forecasts conditional on policy.
Another thing about endogenous growth models – particularly Romer’s – that always seemed lacking to me was their ability to account for some of the key scientific and technological developments that have been foundational to long-run economic growth.
On the Romer model, technology (or “ideas”) is modelled as having similar characteristics to that of a public good: non-rivalry and non-excludability. Following Micro 101 reasoning, the positive externalities associated with public goods lead to under-provision; and thus, in the absence of market intervention, we should expect a suboptimal amount of technological innovation to take place. The intuition behind this result is straightforward: in perfectly competitive markets, factor inputs are paid their marginal products, which exhaust total output, thereby leaving no rents to reward technological innovation. Under perfect competition firms must price goods at marginal cost, and because technological development is usually a fixed cost, there is no economic incentive to innovate: there is no prospect that a firm which does so will be compensated for the fixed cost that it's sunk into R&D. This leaves a role for government intervention to ensure that firms that engage in technological development can secure rents on their investment – typically, through the assignment of property rights or the provision of subsidies / tax breaks to firms engaged in R&D.
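The marginal-product exhaustion result invoked above is just Euler's theorem applied to a constant-returns production function:

```latex
F(\lambda K, \lambda L) = \lambda F(K, L) \ \ \forall \lambda > 0
\quad\Longrightarrow\quad
F_K K + F_L L = F(K, L),
```

so when each factor is paid its marginal product, the entire output is used up, and nothing is left over to cover the fixed cost of R&D.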
Increasing returns to scale and the non-excludability of technological innovation are, however, not the only sources of market failure in a setting of perfect competition. Another, and it would seem to me more salient, source of market failure is radically incomplete information. Assigning property rights to technological developments will raise innovation only if prospective innovators perceive the rents those rights would allow them to capture. However, in the case of groundbreaking technological developments such as the discovery of calculus or the development of the internet, this would require anticipating all the markets and surplus that those innovations created. Ex ante, this is impossible. Therefore, assigning property rights (or trying to internalise rents generally) will not be sufficient to increase the amount of technological innovation. Innovators need not only the possibility of rents but also the ability to discern them. If this is the case, then I don’t see how Romer’s model – and endogenous growth models more generally – can suffice to explain the long run economic growth we’ve observed over recent centuries.
My basic point is that, in the case of groundbreaking technological developments, prospective innovators cannot discern the potential rents of their discoveries; therefore, incentive schemes based on increasing the size of these rents will not have a significant impact on their R&D decisions.
Curious to hear your thoughts!