We want to know if AGI is coming. Chow, Halperin, and Mazlish have a paper called “Transformative AI, Existential Risk, and Real Interest Rates” arguing that, if we believe the markets, it is not coming for some time. The reasoning is simple. If we expect to consume much more in the future, and people smooth their consumption over time, then people will want to borrow more now, and the real interest rate must rise. The reasoning also works if AI is unaligned and has a chance of destroying us all: people would want to spend what they have now, they would be disinclined to save, and real interest rates would have to rise in order to induce people to lend.
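The consumption-smoothing logic can be made precise with the standard consumption Euler equation. What follows is a textbook sketch, not the paper's exact model:

```latex
% The consumer equates marginal utility today with the discounted,
% interest-compensated marginal utility tomorrow:
\[
  u'(c_t) \;=\; \beta\,(1+r)\,\mathbb{E}\!\left[u'(c_{t+1})\right]
\]
% With CRRA utility, $u'(c) = c^{-\gamma}$, and expected consumption
% growth $g$, this yields the familiar Ramsey-style approximation
\[
  r \;\approx\; \rho \;+\; \gamma g \;+\; \delta,
\]
% where $\rho$ is pure time preference and $\delta$ is an annual
% probability of doom. Either faster expected growth ($g$) or higher
% existential risk ($\delta$) pushes the real rate $r$ up, which is
% the paper's mechanism.
```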
The trouble is that “economic growth” is not really one thing. It consists both in producing more units of the same goods from a given amount of resources and in expanding what we are capable of consuming at all. Take the television – it has simultaneously become cheaper and greatly improved in quality. One can easily imagine a world in which goods stay the same price but greatly improve in quality. The marginal utility gained from a dollar would then be higher in the future, and we would want to save more, not less. The coming of AGI could be heralded by falling interest rates and high levels of saving.
This is not a new idea. Trammell (2024) explores this, particularly in section 4. Earlier in the paper, he uses an amusing example of a society where horses are what people consume: “If the economists of the Golden Horde had estimated the contemporaneous relationship between utility and consumption in their society, and used the estimates to make claims about the welfare implications of consumption growth, they would have gone badly wrong”.
I would argue that most economic growth in the past has taken the form of expanding capabilities, rather than reducing the cost of things which always existed. We can get a sense of what people consume from the consumption baskets which the Bureau of Labor Statistics uses to calculate inflation. The present ones can be found here, beginning on page 10. Large swathes of what we buy would not have been available, at any price, in the past. You may have come across articles arguing that we are richer today than Rockefeller was, such as this by Don Boudreaux. This is substantially true. 6% of our budget is spent on medical care – was anything comparable available a hundred years ago? 3% is spent on recreation services, largely on video streaming – did anything like that exist? Moreover, many of the goods which did exist in the past have vastly improved in quality. Even where goods are conceptually similar, they are different goods now.
To make things very plain, imagine that I can buy a good today for $100. If, in one year, a good twice as good will sell for the same price, I will save my money and buy next year. If instead the price will fall by half in one year, so that my $100 buys two units, I will buy the good now: with diminishing marginal utility, the second unit is worth less than the first, and waiting would cost me a year of enjoyment for little gain. Most economic growth is more like the first case than the second.
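The asymmetry can be seen in a stylized calculation. Assume, purely for illustration, that utility is linear in quality and square-root (diminishing) in quantity; these functional forms are my own, not from the paper:

```python
import math

def utility(quality: float, units: float) -> float:
    """Stylized utility: linear in quality, diminishing in quantity."""
    return quality * math.sqrt(units)

budget = 100.0
price_today = 100.0

# Buying today: one unit of the quality-1 good.
u_today = utility(1.0, budget / price_today)

# Case 1: next year the good is twice as good at the same price.
u_quality_growth = utility(2.0, budget / price_today)

# Case 2: next year the price halves, so the budget buys two units.
u_price_fall = utility(1.0, budget / (price_today / 2))

print(u_today)           # -> 1.0
print(u_quality_growth)  # -> 2.0   a dollar buys twice the utility: wait and save
print(u_price_fall)      # -> ~1.41 diminishing returns blunt the gain: buy now
```

Under quality growth, the utility a dollar buys next year doubles; under a price fall, diminishing returns mean it rises only by a factor of √2. Saving is therefore more attractive in the first case, which is the essay's point.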
And what will AI be like? If we grant that AI is roughly “linearly improving” on the capabilities it has now, then it is in part a substitute for capabilities we already have. An LLM is essentially equivalent to a dedicated personal assistant; a self-driving car is equivalent to hiring a chauffeur. Of course, as these capabilities improve, humans will no longer be able to substitute for them. Maybe AI, abstracted sufficiently, is just linear regressions which a billion people could do by hand; but in any event no collection of humans could ever do it as fast. Likewise, a chauffeur can drive, but cannot follow behind another automobile as closely as one train car follows another. Already an LLM exceeds humans in its speed of answering, and often in quality. There are many ways in which AGI leads to capabilities which are not available at any price today. We expect AGI to discover medical cures which will extend our life spans. People’s projections of what AGI could do already assume that it will have capabilities beyond what we can buy at any price. Does anyone seriously think that “infinite growth in finite time” style takeoffs would mean we have exactly the same goods as today, only in unlimited quantities? Once we allow for AI to create entirely new goods, the logic of the paper breaks.
The reasoning from real interest rates is also at odds with the stock market performance of the companies which stand to benefit most from AI. Does Nvidia’s explosion to a market capitalization of over 3 trillion dollars indicate that the market thinks AI won’t be a big deal? Of course this measure is imperfect; I do not pretend to have on hand an index of the value of every company, nor can I separate out the AI component of the major tech companies’ stock prices. It is, however, a better indicator of the future of AI than the real interest rate, which need not have any connection to AGI at all!