Arnaud Costinot:
Costinot is best known for his 2012 paper “New Trade Models, Same Old Gains?”, written with Costas Arkolakis and Andres Rodriguez-Clare, which has proven enormously influential in trade theory.[4] The increasing availability of firm-level microdata has allowed economists to be much more specific in their predictions about trade. Firms are heterogeneous and vary widely in their production functions. The Melitz model of trade, and its later extension with Gianmarco Ottaviano, predicts that when the cost of exporting falls, the more efficient firms are the ones that take advantage of it: they benefit more from the expansion in market size than they are hurt by the increase in competition.[5]
Arkolakis, Costinot, and Rodriguez-Clare show that, for a large class of trade models including Eaton and Kortum (2002),[6] Krugman (1980),[7] and many variations of Melitz (2003),[8] the gains from trade depend on only two statistics: the share of expenditure on domestic goods, and the elasticity of imports with respect to trade costs. Plugging in estimates of those two parameters gives a range for the gains from trade of between 0.7 and 1.4 percent. In the Melitz model, these results hold so long as firm productivity follows a Pareto distribution.[9]
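The sufficient-statistic formula is simple enough to evaluate directly. As a minimal sketch (the domestic share of 0.93 and elasticities of 5 and 10 are illustrative values consistent with the 0.7–1.4 percent range quoted above, not numbers taken from the paper's tables):

```python
# ACR-style sufficient-statistic calculation: relative to autarky, the
# gains from trade equal 1 - lambda**(1/epsilon), where lambda is the
# share of expenditure on domestic goods and epsilon is the (positive)
# trade elasticity.

def gains_from_trade(domestic_share: float, trade_elasticity: float) -> float:
    """Welfare gain from trade relative to autarky, as a fraction."""
    return 1 - domestic_share ** (1 / trade_elasticity)

# Illustrative inputs: imports around 7% of spending (domestic share 0.93)
# and trade elasticities of 10 and 5, which bracket the quoted range.
for eps in (10, 5):
    print(f"elasticity {eps}: gains = {100 * gains_from_trade(0.93, eps):.2f}%")
```

With these inputs the formula returns roughly 0.72% and 1.44%, matching the 0.7–1.4 percent range reported in the text; note that a *lower* trade elasticity implies *larger* gains, since expenditure shares then respond less to trade costs.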
Arkolakis, Costinot, and Rodriguez-Clare, now joined by Dave Donaldson, extend this work to analyze the gains from trade when markups are variable, in "The Elusive Pro-Competitive Effects of Trade". Opening a country's markets to trade may increase competition and reduce the markups of domestic firms; on the other hand, foreign firms may be able to increase their markups. When the assumption of CES utility (which implies constant markups) is dropped, the welfare impact can be summarized by a constant multiplied by the welfare impact of trade under CES utility. Plugging in parameter estimates shows that opening trade need not improve the allocation of resources: the favorable change in domestic markups is offset by the unfavorable change in foreign markups.[10][11]
Costinot, with Donaldson, looked at the effects of the economic integration of the United States on agriculture more generally, using a dataset of the potential productivity of every section of land in the United States for every crop. This allows them to estimate the optimal combination of crops if there were no trade barriers, and to calculate how far from that optimum trade barriers push the economy. They find that up to 80% of the economic growth of agriculture between 1880 and 1997 is due to trade.[12] Costinot and Donaldson, with Cory Smith, scale this approach up to the entire globe and apply it to climate change: allowing production patterns to adjust would substantially mitigate damages to crops, with international trade playing only a limited role.[13][14][15] Costinot and Donaldson also use this dataset to test whether Ricardian comparative advantage explains the pattern of agricultural trade around the world, finding strong evidence that it does.[16][17]
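The logic of the optimal-allocation exercise can be illustrated with a toy example. This is only a sketch: the yield numbers are hypothetical, and the one-crop-per-parcel constraint is a simplification of the actual land-use problem in the paper.

```python
from itertools import permutations

# Hypothetical yields of each parcel of land for each crop, in the
# spirit of the parcel-by-crop potential-productivity data.
# Rows: parcels; columns: crops (say wheat, corn, cotton).
yields = [
    [4, 6, 1],   # parcel 0
    [5, 9, 2],   # parcel 1
    [3, 2, 8],   # parcel 2
]

def total_output(assignment):
    """Total output when parcel i grows crop assignment[i]."""
    return sum(yields[i][crop] for i, crop in enumerate(assignment))

# A trade-barrier-distorted status quo: every parcel grows crop 0 ...
status_quo = (0, 0, 0)
# ... versus the frictionless optimum, found here by brute force over
# assignments that grow each crop on exactly one parcel.
best = max(permutations(range(3)), key=total_output)

print("status quo output:", total_output(status_quo))
print("optimal assignment:", best, "output:", total_output(best))
```

Comparing the two totals is the spirit of their counterfactual: the gap between observed and optimal allocations measures how much output trade barriers forgo.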
Matthew Gentzkow:
Gentzkow studies the transmission of information, in theory and in practice. His work has identified why persuasion can occur, the causes of media bias, and the effect of the media on real outcomes.
His most important paper is the seminal "Bayesian Persuasion", co-authored with Emir Kamenica in 2011.[6][7] The problem they consider is that of a sender who wants to influence a receiver, but who is bound to truthfully report the outcome of whatever experiment they run. Both sender and receiver share a common prior, so revealing true information cannot change the receiver's posterior on average. The sender does, however, have discretion over which experiment to run. How and when can the sender increase their utility? Kamenica and Gentzkow show that the sender benefits from persuasion whenever their payoff, viewed as a function of the receiver's beliefs, is not concave at the prior: in that case the sender can design a signal inducing a distribution of posteriors, averaging to the prior, that changes the receiver's action to the sender's benefit.
To give a concrete example, consider a prosecutor who wishes to convince a judge that a defendant is guilty. The prosecutor does not care about guilt or innocence, but simply wants to maximize the number of convictions. Both the prosecutor and judge know that the true probability a defendant is guilty is 0.3, and a full investigation would perfectly establish who is and isn't guilty. The judge convicts whenever their posterior probability of guilt exceeds 0.5. If the prosecutor conducts no investigation, no one is convicted; if they conduct a full investigation, 30% of defendants are convicted. The prosecutor can do better, however. Suppose they test the blood at the crime scene and find it is type A. 42% of people in the US have type A blood, so a match pushes the probability of guilt above 0.5, and the prosecutor stops there. For the rest, the prosecutor investigates fully. Up to 60% of defendants in this example can be convicted, despite everyone knowing the true probabilities involved.[8][9]
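The 60% ceiling can be verified with a few lines of Bayes' rule. This sketch computes the optimal binary signal directly, rather than the blood-type construction: the signal says "convict" always when the defendant is guilty, and with probability p when innocent, with p chosen as large as possible while keeping the posterior at the judge's threshold.

```python
# Kamenica-Gentzkow prosecutor example: prior P(guilty) = 0.3, and the
# judge convicts when the posterior probability of guilt reaches 0.5.

prior = 0.3

# Posterior after a "convict" signal, by Bayes' rule:
#   P(guilty | convict) = prior / (prior + (1 - prior) * p)
# Setting this equal to 0.5 and solving for p gives:
p = prior / (1 - prior)          # = 3/7, about 0.43

posterior = prior / (prior + (1 - prior) * p)
conviction_rate = prior + (1 - prior) * p   # P("convict" signal is sent)

print(f"p = {p:.3f}, posterior after 'convict' = {posterior:.2f}")
print(f"conviction rate = {conviction_rate:.0%}")
```

The posterior lands exactly at 0.5, so the judge convicts whenever the signal says "convict", which happens 60% of the time: double the 30% achieved by full investigation.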
This has had a considerable influence on economic theory, and created the subfield of information design. It explains, to give a few examples, why everyone can benefit from schools providing only coarse information about grades,[10] why police should imperfectly randomize where they patrol, and why Google might reduce congestion by sharing only imperfect traffic information.[11][12][13]
Gentzkow has also studied the extent and sources of media bias in America, often with Jesse Shapiro. In “Media Bias and Reputation”, they describe biased media as arising rationally from consumer uncertainty. People do not initially know which news sources are accurate, but they do have prior beliefs about how the world is. When they see a news source agree with those beliefs, they update their assessment of the source and regard it as more accurate. In contrast with prior work, their model predicts that increased competition should reduce bias.[14] Gentzkow, Shapiro, and Sinkinson find that, historically, competition increased the ideological diversity of the newspapers available.[15]
They empirically explore how consumers drive media bias in “What Drives Media Slant?”. To estimate this, they first construct an index of slant with meaningful cardinality, which they build from differences in language use between Republicans and Democrats in the 2005 Congressional Record. Having measured how slanted different newspapers are, they estimate the demand for slant from differences in circulation between neighborhoods with more Republicans or more Democrats. They then take this estimate of demand and ask how much slant newspapers would choose if they were profit-maximizing. Since the actual amount of slant closely matches the profit-maximizing amount, they conclude that media bias is driven by what readers demand, not by owners: two newspapers with the same owner are no more similar to each other than two owned by different people.[16]
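The first step, scoring newspapers by partisan language, can be sketched in miniature. The phrase counts below are hypothetical (though "death tax" versus "estate tax" is a well-known example of partisan phrasing), and this toy scoring is far simpler than their actual estimator:

```python
# Toy slant index in the spirit of "What Drives Media Slant?".
# Step 1: score each phrase by how disproportionately Republicans use it
# in congressional speech. Step 2: score a newspaper by the count-weighted
# average partisanship of the phrases it prints.

# Hypothetical counts of each phrase in Republican vs Democratic speeches.
congress = {
    "death tax":       {"rep": 90, "dem": 10},
    "estate tax":      {"rep": 15, "dem": 85},
    "war on terror":   {"rep": 70, "dem": 30},
    "civil liberties": {"rep": 20, "dem": 80},
}

def phrase_score(rep: int, dem: int) -> float:
    """Share of a phrase's congressional uses coming from Republicans:
    0.5 is neutral, 1.0 is exclusively Republican."""
    return rep / (rep + dem)

def slant(paper_counts: dict) -> float:
    """Count-weighted average partisanship of the phrases a paper uses."""
    total = sum(paper_counts.values())
    return sum(
        n * phrase_score(**congress[ph]) for ph, n in paper_counts.items()
    ) / total

# A paper leaning on Republican phrasing scores above 0.5 ...
print(slant({"death tax": 8, "war on terror": 4, "estate tax": 1}))
# ... while one leaning on Democratic phrasing scores below it.
print(slant({"estate tax": 7, "civil liberties": 5, "death tax": 1}))
```

Once each newspaper has such a score, it can be compared against local readership and against the slant a profit-maximizer would pick, which is the comparison the paper runs.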