Ludwig Straub, Winner of the John Bates Clark Medal
New results from new methods
Ludwig Straub has just won the John Bates Clark medal, the annual award for the best economist under the age of 40. It is a well-deserved award, and indeed the only reasonable choice that could have been made this year. It is not often that novel methods open up whole new fields of inquiry, as Straub – with frequent collaborators Matthew Rognlie and Adrien Auclert – has done.
That innovation is heterogeneity in macroeconomics. The workhorse model beforehand assumes a representative agent, who is the average of all the people in the economy. This makes things very simple to compute: we solve for what the agent would do, and then multiply by the number of people in the economy. People are different from one another, of course, and as we shall see, this matters. However, heterogeneity makes things very hard to compute, because the optimal actions of the agents depend on the distribution of wealth, which must be tracked at every point in time. Auclert, Bence Bardoczy, Rognlie, and Straub (2021) introduced a tractable method for solving models with heterogeneity, and much of Straub's career since has been spent exploiting its implications.
But first, we should set the stage. The basic core of New Keynesian macroeconomics is three equations. You have a "demand curve": the consumer maximizes lifetime utility subject to a budget constraint, and since they face declining marginal utility, they would like to smooth their consumption evenly. A rise in real interest rates causes people to shift consumption into the future, because the return to doing so is now higher. You have a "supply curve" of firms producing a continuum of goods varieties, each charging (ideally) a constant markup over marginal cost. In each period, only a fraction of firms can change their prices, which is what makes monetary policy matter. Higher-than-expected demand leads to an increase in output, because it eats away at the markup, while lower-than-expected demand means firms will be unable to sell what they produce. This "supply curve" is the Phillips curve, which relates unexpected inflation and deflation to output.
Finally, you need a rule governing how the central bank responds to deviations from target. Without one, the correspondence between interest rates and inflation is indeterminate. (Suppose the central bank set the interest rate at 0%. The rate of inflation would be whatever we expect it to be, and any arbitrary inflation rate would be an equilibrium. This is Sargent and Wallace (1975).) The Taylor rule says that the central bank will move interest rates more than one-for-one with inflation, and this pins down an equilibrium.
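For concreteness, here is the standard textbook statement of those three equations in log-linearized form (a generic rendering, not anything specific to Straub's work):

$$
\begin{aligned}
x_t &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\,(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t) &&\text{(demand: the Euler equation)}\\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa x_t &&\text{(supply: the Phillips curve)}\\
i_t &= \phi_\pi \pi_t + \phi_x x_t, \quad \phi_\pi > 1 &&\text{(the Taylor rule)}
\end{aligned}
$$

Here $x_t$ is the output gap, $\pi_t$ is inflation, $i_t$ is the nominal interest rate, $r^n_t$ is the natural rate of interest, $1/\sigma$ is the elasticity of intertemporal substitution, and $\kappa$ falls as prices get stickier. The $\phi_\pi > 1$ condition is the more-than-one-for-one response just described.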
Lastly, we say that long-run supply is a vertical line. What we are measuring in this model, ultimately, is the deviation of output from its long-run trend.
This is the core of the Representative Agent New Keynesian model (or RANK). It is, in essence, the real business cycle model with a price-change friction added in. You can build it up further by adding more and more frictions to better match the data, as in Smets and Wouters (2007) and its many descendants: wage-setting frictions, habit formation, a financial sector, different levels of price stickiness in different sectors, rational inattention. It's really quite a flexible model.
One of the consistent sources of inspiration for macroeconomists has been providing "micro-foundations". The goal of the economist is not merely to predict what will happen, but to show how it would change under different conditions, and which particular variables would have to be altered to get the desired results. Accordingly, we do not predict inflation by just drawing lines on the time series going forward – the outcomes we predict are built up from the minimum number of facts we need to know about the behavior of maximizing consumers. If we want to know the risk aversion of consumers, for instance, we should get it from well-identified experiments and plug it into the model, rather than let the number move around until it matches the data. The idea is to identify things which are invariant to the particular circumstances of today, and will be useful in the future.
RANK doesn't really do this well. It can match the aggregate data, but only by putting in numbers which are completely incompatible with the micro-data. For instance, how much someone spends out of a windfall in the current period is governed by the same parameter which determines how people shift their consumption over time when interest rates change. In order to match the macro data, we require that people have a low elasticity of intertemporal substitution – consumption does not fall off a cliff or spike immediately when interest rates are raised or lowered, so clearly people must have a strong preference for even consumption over time. But this also predicts that they would have a low marginal propensity to consume out of that windfall.
However, we are extremely confident from experiments that people have an extremely high marginal propensity to consume, or MPC. Boehm, Fize, and Jaravel (2025) find that when people in France are randomly assigned to receive money with no conditions, they spend 23% of the newly received money within a single month. This is starkly in contrast with what RANK implies – that the only thing affecting consumption is lifetime income, which is divided up evenly over someone's life cycle.
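To see how stark the gap is, here is a back-of-the-envelope calculation with my own toy numbers (the interest rate is an illustrative assumption, not an estimate from any of these papers). A permanent-income consumer spends only the annuity value of a windfall each period:

```python
# Toy comparison of the permanent-income MPC with the measured one.
r_monthly = 0.04 / 12  # an assumed 4% annual real rate, expressed monthly

# An infinitely-lived permanent-income consumer spends the annuity value
# of a one-time windfall: r/(1+r) of it per period.
mpc_pih = r_monthly / (1 + r_monthly)

print(f"PIH one-month MPC:      {mpc_pih:.4f}")  # ~0.0033, i.e. a third of a percent
print(f"Measured one-month MPC: 0.23")           # Boehm, Fize, and Jaravel (2025)
```

The representative-agent benchmark is off by nearly two orders of magnitude.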
This means that RANK will have a very hard time explaining why fiscal transfers seem to work, and indeed create multipliers of additional spending. Since the government has to eventually balance the budget, whether through taxes or inflation, people should simply save enough to pay their future taxes and exactly counterbalance a fiscal transfer. Robert Barro (1974) showed that this constraint, called "Ricardian Equivalence" after a chance remark by David Ricardo in the early 1800s, holds with remarkable generality – just so long as people care about future generations, even if the amount they care is very small, the timing of taxes shouldn't have any effect on output.
Then there is the forward guidance puzzle. The central bank communicating what it will do to interest rates in the future naturally does have an impact on saving and consumption behavior today – what is strange is that, in the NK model, it doesn't actually matter when the announced change will take place. An announced rate hike 10 years in the future has exactly the same impact today as a rate hike right now. Obviously, nobody believes this is true, but it is what the model implies.
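You can see where the puzzle comes from by iterating the demand curve above forward (a textbook manipulation; assume the output gap eventually returns to zero):

$$
x_t = -\frac{1}{\sigma} \sum_{s=0}^{\infty} \mathbb{E}_t\left(i_{t+s} - \pi_{t+s+1} - r^n_{t+s}\right)
$$

Only the undiscounted sum of expected future real rates matters, so a one-period rate change enters identically whether it comes next quarter or in ten years.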
Adding in heterogeneity can answer all of this, and the resulting literature is wonderfully summarized in Auclert, Rognlie, and Straub's essay in the Annual Review. To get the MPCs to match, you allow people with little liquid wealth to immediately spend much more, while people with plenty of liquid wealth have a much lower marginal propensity to consume; there will be a continuum of MPCs as a function of wealth. To explain fiscal transfers working, note that people at the bottom of the wealth distribution have a higher MPC because they cannot borrow as much as they would like to smooth out their consumption. Thus, not only are fiscal transfers effective (see my article on the unreasonable effectiveness of cash transfers from last week), but who they go to has implications for the economic effects. And finally, heterogeneity solves the forward guidance puzzle: people who cannot borrow freely cannot act today on income gains they anticipate 20 years hence.
Adding in heterogeneity also has striking implications for how monetary policy affects the economy. In RANK, all of the action happens through the channel of intertemporal substitution. In HANK, though, most of the impact of monetary policy comes through its indirect effect on incomes: an interest rate cut leads to higher employment and wages, which increases demand and thus leads to still more employment and wages. Most of the action is in the latter channel, not the former.
This means that the effects of shocks will play out over time rather than happening all at once. "The Intertemporal Keynesian Cross" by Auclert, Rognlie and Straub (2025) indicates that we should see a persistent inflationary effect from fiscal transfers. Now, does that not explain what happened in the aftermath of the pandemic? We had massive fiscal transfers, and then inflation which proved much more persistent than our models predicted. HANK is the organizing framework which can explain that.
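The central object of that paper is the matrix of intertemporal MPCs, $\mathbf{M}$, whose entry $M_{t,s}$ is how much consumption at date $t$ responds to an extra dollar of income at date $s$. Stacking whole time paths into vectors, and suppressing taxes and monetary policy for simplicity (this is a stripped-down version of the paper's equation, not its full statement), the Keynesian cross becomes

$$
dY = dG + \mathbf{M}\,dY \quad\Longrightarrow\quad dY = (I - \mathbf{M})^{-1}\,dG.
$$

Because the measured rows of $\mathbf{M}$ die out slowly – people keep spending a windfall for years – the inverse spreads a one-time transfer into a long-lasting boost to demand.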
Heterogeneity in the effects of policy actually matters much more for fiscal policy than for monetary policy. Two papers, "HANKSSON" and Christina Patterson (2023), have evaluated the correlation between the incidence of monetary policy shocks and the marginal propensity to consume. If they are correlated, then any given shock will be amplified – e.g. if the first people laid off in a recession are those who spend relatively more of their income, then the fall in effective aggregate demand will be larger than if the rich are laid off first. Those papers found, admittedly, that while the poor are laid off first, they also adjust their savings in ways which counterbalance the effect. However, monetary policy is not the ideal test case – if you look at the Annual Review article, there are cases where heterogeneity doesn't matter for monetary policy (see Werning, 2015), but deficit-financed fiscal shocks look completely different once you add heterogeneity back in. Macroeconomics does not end at monetary policy.
So why didn't we just do this earlier? It took a while to come up with a framework for computing each person's optimal response when that response depends on everyone else's. We can rule out agent-based models: you can give people heuristic rules and see what they converge to, but doing so basically rules out people actually being rational and optimizing. (I would suggest, for background context, my article earlier today, which is largely on dynamic games.)
Instead, we have a continuum of wealth, and a mapping from wealth to the marginal propensity to consume. When you hit the economy with a shock, it changes the distribution of wealth, which changes how much people spend, which changes the distribution of wealth again, and so on. This will converge, but calculating it takes a long time, because you must carry the wealth distribution along as a state variable the whole way – and a distribution, being continuous, has infinite dimensions. You can deal with the dimensionality by binning. The crudest version is to say there are just two agents, a fraction who are hand-to-mouth with an MPC of 1 and the rest who optimize over time, which smushes the dimensions down to two. Unfortunately, this does not solve your problems: you still won't be able to match the data (Kaplan, Moll, and Violante, 2018).
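A toy calculation, with illustrative numbers of my own rather than anything from the papers, shows why two agents aren't enough. Trace out spending from a one-dollar windfall at date 0: the two-agent model can be calibrated to match the impact MPC, but it implies spending collapses to a trickle immediately afterwards, whereas measured intertemporal MPCs decay gradually over several years.

```python
import numpy as np

# Illustrative parameters, not estimates.
r = 0.05    # annual real interest rate
lam = 0.47  # hand-to-mouth share, picked so the impact MPC matches the data
T = 5       # years after the windfall
t = np.arange(T)

# Two-agent (TANK) spending out of a $1 windfall at date 0:
# hand-to-mouth agents spend it all immediately; permanent-income agents
# spend only the annuity value r/(1+r) in every year forever.
tank = (1 - lam) * r / (1 + r) * np.ones(T)
tank[0] += lam

# A stylized version of the measured path: large on impact, then decaying
# gradually (shape only, not actual data).
data_like = 0.5 * 0.45 ** t

print("year      :", t)
print("TANK      :", tank.round(3))       # ~[0.495, 0.025, 0.025, 0.025, 0.025]
print("data-like :", data_like.round(3))  # ~[0.500, 0.225, 0.101, 0.046, 0.021]
```

The impact matches, but every year after that is wrong; matching the whole path requires the full continuum of wealth positions.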
I am simplifying the following greatly. Boppart, Krusell, and Mitman (2018) first implemented a solution which works in sequence space. We start at a steady state, and we know the economy will eventually return to it; we need to figure out the sequence of events along the way. The path, divided by the size of the shock, gives us a linearized impulse response. Auclert, Bardoczy, Rognlie, and Straub (2021) use a Jacobian, which is a matrix of derivatives: basically, it tells you how much every variable at every date would change if you wiggled any one of them a little bit. The advantage for speed is that you compute it once, ahead of time, and every solve after that is cheap.
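Here is a minimal sketch of the idea in code. Everything below is a toy of my own construction: a made-up nonlinear "household block" stands in for the real heterogeneous-agent problem, and the Jacobian is computed by brute-force finite differences, where the actual Auclert, Bardoczy, Rognlie, and Straub algorithm is far faster. What carries over is the structure of the solution: compute the Jacobian once around the steady state, then get the whole general-equilibrium impulse response from a single linear solve.

```python
import numpy as np

T = 100  # truncation horizon: assume the economy is back at steady state by T

def consumption_path(y):
    """Toy stand-in for the household block: maps an income path to a
    consumption path. In a real HANK model this would come from solving
    and aggregating individual policy rules over the wealth distribution."""
    beta, gamma = 0.95, 0.9
    c = np.empty(T)
    for t in range(T):
        # consume a concave function of the discounted present value of income
        pdv = (1 - beta) * np.sum(beta ** np.arange(T - t) * y[t:])
        c[t] = 0.8 * pdv ** gamma
    return c

# Steady state: output of 1 and government spending of 0.2.
y_ss = np.ones(T)
c_ss = consumption_path(y_ss)

# The Jacobian: how consumption at every date moves if you wiggle income
# at any one date. Computed once, ahead of time.
eps = 1e-5
J = np.empty((T, T))
for s in range(T):
    y_pert = y_ss.copy()
    y_pert[s] += eps
    J[:, s] = (consumption_path(y_pert) - c_ss) / eps

# Goods market clearing, Y = C(Y) + G, linearized: dY = J dY + dG.
# The entire impulse response to a fiscal shock is one linear solve.
dg = 0.01 * 0.8 ** np.arange(T)  # a decaying government spending shock
dy = np.linalg.solve(np.eye(T) - J, dg)

print("output response, first five periods:", dy[:5].round(4))
```

A different shock, or a different shock path, reuses the same `J`; that is where the speed comes from.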
Their approach is not without tradeoffs, of course. It is only accurate for small shocks: with small changes, there isn't enough movement for the second derivatives – the curvature – to matter, but very large shocks may produce different dynamics that the linearized model cannot see.
Still, the method is useful. Take Auclert, Rigato, Rognlie and Straub (2024), "New Pricing Models, Same Old Phillips Curves?". In the New Keynesian model described earlier in this post, we assumed that the friction on price changes was that only a randomly selected fraction of firms could change their prices each period. (They are visited by the "Calvo Fairy".) You could instead model the constraint as firms having to pay a menu cost to change prices. This has the advantage of being more empirically realistic – there is extensive evidence, going back to Bils and Klenow (2004) and Nakamura and Steinsson (2008), that price changes are more frequent during times of higher inflation – but it is much less convenient to work with analytically. What ARRS show is that, for shocks that are not too large, you can represent an arbitrary menu cost model as a suitably calibrated Calvo model.
Moving forward, Straub has a few exciting things in the pipeline. With Thomas Winberry, Auclert, Rognlie and Straub (2025) point out how the same logic applied to consumption by households can be brought to bear on investment by firms, although at the moment we lack the parameter estimates from well-identified studies. The only source that comes to mind is the micro-finance experiments in the developing world, which might let us estimate the effects of foreign aid better than we could before. In "The Macroeconomics of Tariff Shocks", Auclert, Rognlie, and Straub trace out the impact of the Trump tariffs from April 2025. (They're bad.) And just published in the QJE, Straub (with Andersen, Huber, Johannesen, and Vestergaard) has a paper building disaggregated economic accounts for Denmark. It's an extraordinary data collection effort, one which is needed for assessing the impact of shocks and policies when heterogeneity matters.
That is not all Ludwig Straub has done, of course, although it is the most important part of his work. With Ivan Werning, he has a paper reassessing the optimal capital taxation results of Chamley (1986) and Judd (1985). These results have been simplified in the public mind into "no taxation on capital", but they never really said just that; frankly, that is a distortion of what they imply. Chamley and Judd found that in the long run the optimal tax on capital converges to zero. The basic intuition is that capital accumulates and labor does not, so distortions from taxes on capital compound over time. Straub and Werning (2018) point out, first, that there are many cases where this convergence to zero does not happen: if the intertemporal elasticity of substitution is below one, the losses from people reducing their savings are outweighed by what we are able to collect in taxes. And even if the intertemporal elasticity of substitution is above one, the transition to the long-run state of no capital taxes would take hundreds of years under reasonable parameter values. The "no taxes on capital" result which is common in the public mind is likely to be only partially correct in practice, because we definitely don't want to tax investment before its returns are realized.
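To see the compounding, consider a constant tax $\tau$ on capital income in a simple deterministic setting (my stylized rendering of the standard intuition, not the papers' general statement). The wedge between saving at the pre-tax and the after-tax return grows exponentially with the horizon:

$$
\frac{(1+r)^T}{\bigl(1+(1-\tau)r\bigr)^T} \to \infty \quad \text{as } T \to \infty,
$$

so a constant capital income tax acts like an ever-growing tax on consumption in the far future, which is why the textbook analysis pushes the long-run rate to zero.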
There is a funny counterpart paper, Auclert, Michael Cai, Rognlie and Straub (2024), which argues that, in many cases, the optimal tax rate on labor should slowly increase to almost, but not quite, 100%. The basic intuition is that because capital markets are imperfect, people want to save as a precaution. An increase in expected future taxes therefore leads people to work more now in order to save for the future, and since market power means people are working too little to begin with, this actually raises total welfare. This result is, I should emphasize, merely extremely funny, and not a very reasonable guide to policy; it did, though, get me thinking.
Macroeconomics is not my area of expertise. Its language, methods, and ways of thought remain mysterious to me after all this time. Until someone comes along to write a better review of Auclertist-Rognliest-Straubist Thought, though, I hope reading this one has profited you.


