One of the more striking and important papers of the last ten years is “Are Ideas Getting Harder to Find?”, by Bloom, Jones, Van Reenen, and Webb. The main finding is in the title, for the answer is yes. It is not so much that the rate of discovering ideas is falling – it is, in fact, continuing at a largely steady pace – but that finding those new ideas requires more and more workers and more and more inputs. For example, Moore’s law, the empirical regularity that the number of transistors that can be packed into a computer chip has doubled every two years for almost 50 years, has required ever more researchers to sustain. Compared to the 1970s, doubling the density of a computer chip now requires 18 times as many researchers.
This has major implications. In endogenous growth models, if there are declining returns to idea discovery, then long-run growth is ruled out: the economy converges to a steady-state level of output, determined by the size of the labor force. Now, the long run is far away (my understanding is that calibrations put the approach to the steady state hundreds of years out), but we would expect the rate of growth to slow substantially in the future.
BJVRW have a big case to prove, and they work hard to show it. First, they show it holds in the big-picture view. The federal government maintains accounts of the amount of money spent on “intellectual property products”. Since the marginal researchers being added should fall in quality over time, they deflate this spending by the average salary of college-educated workers; the argument is that, roughly, researchers should be paid their expected contribution. They find a massive increase in effective research effort without a corresponding change in growth.
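The deflation step can be sketched as a back-of-envelope computation. Everything below is illustrative – the function names and all the numbers are made up; the actual series comes from BEA accounts and wage data:

```python
# Sketch of the "effective researchers" deflation, with made-up numbers.
# Nominal R&D spending is divided by the average wage of college-educated
# workers, converting dollars into researcher-equivalents.

def effective_researchers(nominal_rd_spending, avg_college_wage):
    """Researcher-equivalents implied by a given R&D budget."""
    return nominal_rd_spending / avg_college_wage

def research_productivity(tfp_growth, researchers):
    """Ideas produced (growth) per effective researcher."""
    return tfp_growth / researchers

# Hypothetical decade comparison: spending rises 5x, wages 2x, growth flat.
r_1970s = effective_researchers(nominal_rd_spending=100e9, avg_college_wage=50_000)
r_2010s = effective_researchers(nominal_rd_spending=500e9, avg_college_wage=100_000)

p_1970s = research_productivity(tfp_growth=0.02, researchers=r_1970s)
p_2010s = research_productivity(tfp_growth=0.02, researchers=r_2010s)

# Constant growth but 2.5x the effective researchers: measured research
# productivity falls by 60%.
print(p_2010s / p_1970s)  # 0.4
```

The point of deflating by wages rather than, say, headcounts is that wages roughly price in researcher quality, so the measure is robust to the marginal researcher getting worse.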
They then check that this story is borne out in studies of particular industries, since it is otherwise conceivable that the sluggishness of total factor productivity growth is due to increasing misallocation. With examples from computers, agricultural crop yields, and pharmaceuticals, they show that the decline in research productivity is broad-based. They also check whether the decline is simply an artifact of research being spread across more firms and industries, and show that it is not. Their findings were explicitly replicated by Boeing and Huenermund (2020) for Germany and China – this decline is global.
They don’t take a strong position on why these declining returns are occurring, although others make good cases – Ben Jones, for example, argues that as the frontier moves further from the basics, the time needed to become an expert in any given subject increases. Accordingly, invention comes later in life, with fewer spillovers across subjects and more teamwork required. The “Renaissance Man” is dead.
Like any provocative thesis, Bloom, Jones, Van Reenen, and Webb’s has not been free from criticism. One line of attack is that researchers aren’t what they once were: as we accumulate more and more researchers, it’s unlikely that the marginal entrants are as smart as those who went into research before. We can see this in studies of the intellectual ability of college graduates. For example, while one might naively think that the college wage premium has fallen, a superb working paper from Tomas Guanziroli, Ariadna Jou, and Beatriz Rache shows that, after adjusting for the quality of students, it has actually increased considerably. They study Brazil, where newer universities are less prestigious than older ones and attract students who perform worse on standardized tests. In the aggregate, college graduates earned 16% less in 2018 relative to their old premium in 2003, but controlling for student quality flips this to an increase of 24% relative to the old premium.
However, BJVRW take this into account by deflating the number of workers by the wage per worker. Ekerdt and Wu have a revisionist take on this. Their argument is that wages can substantially diverge from marginal product, because companies have less information about workers’ productivities than the workers themselves. Thus, if there is an increase in demand for research workers, the new hires will be overpaid relative to the old workers. Measuring this is tough, but they compare the mean log earnings of workers who move between research and production to those of workers who do not. People who switch from research to production earn considerably lower wages, while those who go the other way earn higher ones.
I’m not entirely convinced by their method here, largely because I wonder whether we can really be so confident that wages relative to marginal product are well measured – skills in the two sectors might not entirely match up. Nevertheless, the evidence that ability is mismeasured to some degree is compelling. One interesting point from Ekerdt and Wu is that there are two margins of selection: less productive college-educated graduates are actually selecting out of research, but this is swamped by the massive increase in college attendance.
BJVRW measure research productivity as growth in a year divided by that year’s inputs, but this can’t account for the possibility that an invention in one year raises productivity over many years. Julian Alston and Philip Pardey, in “Are Ideas Really Getting Harder to Find?”, model R&D as adding to a stock of knowledge, which then raises the rate of discovery in the future. They come from a background in agriculture, where research has an extremely long lifecycle of 35 to 50 years, which no doubt prompted their article. Technologies also become obsolete as conditions change, so a rise in the obsolescence rate of technology would reduce measured TFP growth. (We’ll expand on that later.) They also point out that BJVRW’s measure of spending on research, the BEA’s “gross domestic investment in intellectual property products”, is actually more of a stock than a flow, and so can’t be interpreted as one year’s worth of investment.
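The stock view can be illustrated with a textbook perpetual-inventory sketch. This is a simplification of my own, not Alston and Pardey’s actual specification (their agricultural lag structure is far richer), and the depreciation rates and flows are arbitrary:

```python
# Perpetual-inventory sketch of a knowledge stock: each year's R&D flow adds
# to the stock, while a fraction delta of the existing stock becomes obsolete.
def knowledge_stock(rd_flows, delta, k0=0.0):
    """Path of the stock K_t = (1 - delta) * K_{t-1} + R_t."""
    k = k0
    path = []
    for r in rd_flows:
        k = (1 - delta) * k + r
        path.append(k)
    return path

flows = [10.0] * 50  # constant R&D flow, arbitrary units
slow_decay = knowledge_stock(flows, delta=0.02)
fast_decay = knowledge_stock(flows, delta=0.10)

# Identical research effort, but faster obsolescence leaves a much smaller
# stock (in the long run it converges to flow/delta: 500 vs 100 here).
print(round(slow_decay[-1]), round(fast_decay[-1]))
```

The sketch makes the key point concrete: measured productivity can stagnate even with undiminished research effort, if obsolescence eats the stock faster.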
Yoshiki Ando, James Bessen, and Xiupeng Wang argue that the rate of obsolescence of R&D must have risen. If the marginal productivity of researchers has truly been falling, then the behavior of firms is puzzling. Why are they hiring more and more researchers? Elementary reasoning tells us that if the expected profits from something fall, we should get less investment in it (all else being equal). You would also expect managers to notice if their company’s research productivity were falling 8–10% per year – I think I’d notice if my research division got half as productive every seven years or so!
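The back-of-envelope arithmetic behind that last remark: a steady decline of 8–10% per year implies a halving time of roughly seven to eight years.

```python
import math

def half_life(annual_decline):
    """Years for a quantity falling by `annual_decline` per year to halve."""
    return math.log(0.5) / math.log(1 - annual_decline)

print(round(half_life(0.10), 1))  # ~6.6 years at a 10% annual decline
print(round(half_life(0.08), 1))  # ~8.3 years at an 8% annual decline
```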
We should think about theory first. There are theoretical reasons for both too much and too little investment in research and development. On the too-little side, the reasons are obvious: inventors can capture only a small part of the value they create. On the too-much side, we need to take into account business stealing: if multiple companies are trying to gain an advantage over one another, they may well end up spending a socially excessive amount on research. In practice, the positive spillovers dominate business stealing, but a change in business stealing can change the decision of whether to invest.
Ando, Bessen, and Wang look at how much a firm’s research changes its productivity, and find that the returns to research expenditures have actually increased: doubling R&D in one period raises TFP by one percent in the next period. (This is not unreasonable, because R&D tends to be a small percentage of expenditures.) They eliminate other possibilities, like monopsony power and adjustment costs, and are left with obsolescence. We saw a similar argument in Alston and Pardey: if technologies become obsolete faster, it’s possible for research productivity to be the same as it ever was, but for observed total factor productivity growth to stagnate or even decline.
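Under a constant-elasticity reading of that estimate (my simplification, not their actual specification), the implied elasticity of TFP with respect to R&D is tiny, which is consistent with R&D being a small share of spending:

```python
import math

# If doubling R&D raises next-period TFP by 1%, and we assume the simple
# constant-elasticity form TFP_next ∝ R&D**eps, then 2**eps = 1.01, so:
eps = math.log(1.01) / math.log(2)
print(round(eps, 4))  # ~0.0144
```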
I still fundamentally believe the direction of Bloom, Jones, Van Reenen and Webb. There is considerable evidence that ideas really are getting harder to find. It may, however, not be as bad as we thought; and with the coming of AI, there is no reason to think that it must hold forever.
My economics knowledge is not sophisticated enough to critique the argument here, but I can say a few things as a scientist.
My degree is in theoretical physics, and there I can tell you that there is no shortage of readily attackable problems, many of which are likely to lead to significant advances. Few people work on them, though, because incentives punish researchers so harshly that (famously) many big-name superstars and Nobel prize winners, even fairly recent ones, would be denied tenure in today’s environment. Many would not be accepted into even middling grad schools!
That disturbs me a lot, since in very difficult, high-impact fields like physics, the output of high-performers totally dominates everything.
Unfortunately, despite claims to the contrary, there is no non-applied physics research to speak of in industry, and certainly no theoretical physics research. So I see a definite slowdown in discoveries here, with a cause that's fairly obvious to me.