We quantify the barriers that impede the integration of immigrants into foreign labor markets and investigate their aggregate implications. We develop a model of occupational choice with natives and immigrants of multiple types whose decisions are subject to wedges which distort their allocation across occupations. We estimate the model to match salient features of U.S. and cross-country individual-level data. We find that there are sizable GDP gains from removing the wedges faced by immigrants in U.S. labor markets, accounting for approximately one-fifth of the overall economic contribution of immigrants to the U.S. economy. These effects arise both from increased flows from non-participation to predominantly manual jobs and from reallocation within the market sector that raises productivity in non-routine cognitive jobs. We contrast our findings for the U.S. with estimates for 11 high-income countries and document substantial differences in the magnitude of immigrant wedges across countries. Importantly, we find that differences in the distribution of immigrant wedges across occupations lead to substantial variation in the gains from removing immigrant misallocation, even among countries with similar average degrees of distortions.
Based on patterns of employment transitions, we identify three different types of workers in the U.S. labor market: α's, β's, and γ's. Workers of type α make up over half of all workers, are most likely to remain on the same job for more than 2 years and, when they become unemployed, typically find a new job within 1 quarter. Workers of type γ comprise less than one-fifth of workers, have a low probability of staying on the same job for more than 2 years and, when they become unemployed, face a high probability of remaining jobless for more than 1 year. Workers of type β are in between α's and γ's. The earnings losses caused by displacement are relatively small and transitory for α-workers, while they are large and persistent for γ-workers. During the Great Recession, excess unemployment for α-workers rose by little and was reabsorbed quickly; unemployment for γ-workers rose by 20 percentage points and had not been reabsorbed 4 years after its peak. We use a search-theoretic model of the labor market to rationalize the different patterns of employment transitions across types. The model naturally explains both the variation in the consequences of job displacement across types and the variation in the dynamics of unemployment during the Great Recession. Our view is that several puzzling micro and macro phenomena about the labor market are driven by the behavior of the small group of γ-workers.
We adopt an analytically tractable Aiyagari-type model to study the distinctive roles of unconditional lump-sum transfers and public debt in reducing consumption inequality due to uninsurable income risk. We show that in the absence of wealth inequality, using lump-sum transfers is not an optimal policy for reducing consumption inequality: the Ramsey planner opts to rely solely on public debt to mitigate income risk, without the need for lump-sum transfers. This result is surprising in light of the popularity of universal basic income advocated by many politicians and scholars.
Quantitative macroeconomics is often portrayed as a science—because of its intensive use of high-powered mathematics—with the possible limitation of being unable to conduct controlled experiments. To qualify as a science, however, theories in the discipline must meet a minimum set of criteria: (i) explanatory power to account for observed phenomena; (ii) predictive power to yield quantifiable and falsifiable statements about new phenomena; and (iii) operational power to change the world.
A scientific theory consists of axioms and working hypotheses that facilitate the derivation of contestable statements from the axioms. Hence, simply laying out a list of contradictions between a theory's implications and the data is often insufficient to disqualify a theory as science; the contradictions may challenge only its working hypotheses, not its axioms. Challenging a theory's working hypotheses is nonetheless a crucial step toward improving or falsifying a theory. This is why Isaac Newton spent so much effort in his Principia Mathematica dealing with the laws of motion under air friction.
This article discusses one of the working hypotheses of the Arrow-Debreu paradigm and its dynamic stochastic general equilibrium reincarnation in quantitative macroeconomics—the supply curve and its embodiment in the neoclassical production function. The supply curve is a much stronger pillar than the demand curve in holding up the Arrow-Debreu paradigm, but we argue in this article that the neoclassical production function embodying the supply curve is full of cracks.
More specifically, we show that the neoclassical production function is not quantifiable as a working hypothesis to support the Arrow-Debreu DSGE model, unlike the chemical reaction equations based on Lavoisier's oxygen theory of combustion. The neoclassical production function relies on the unobservable and unmeasurable Solow residual to explain the quantity of output produced at the firm, industry, or national level, and the hypothetical factors of production (capital and labor) are much like "fire, air, water, and earth" in the ancient Greek theory of the universe. Because the working hypotheses of quantitative macroeconomics are not themselves quantifiable, the neoclassical theory is not yet a science. This explains the failure of DSGE models to predict the 2008 Financial Crisis and the inability of economic theory to change the world by engineering or recreating economic prosperity in developing countries.
The effect of economic shocks on business cycle fluctuations may vary across industries. For example, shocks that originate in a single industry may propagate elsewhere, either upstream or downstream in the production chain. Thus, industries that are more connected may be more vulnerable to industry-specific economic shocks. However, any model of industrial connectedness must account for the fact that much of the inter-industry correlation will be driven by national shocks. In light of this, we develop a panel Markov-switching model for industry-level data that incorporates a number of features relevant for sub-national analysis. First, we model industry-level trends to differentiate between cyclical downturns and secular decline in an industry. Second, we incorporate a national-level business cycle that industries may or may not attach to. Third, we model comovement beyond the national-level cycle through factors that affect clusters of industries. We find that there are industry groupings that comove because their production networks are intra-sectoral, and industry groupings that lack inter- or intra-sectoral classification, but most industries move together.
Cohen, Diether, and Malloy (Journal of Finance, 2007) find that shifts in the demand curve predict negative stock returns. We use their approach to examine changes in supply and demand at the time of FOMC announcements. We show that shifts in the demand for borrowing Treasuries and agencies predict quantitative easing. A reduction in the quantity demanded at all points along the demand curve predicts expansionary quantitative easing announcements.
We investigate a test of conditional predictive ability described in Giacomini and White (2006, Econometrica). Our main goal is simply to demonstrate the existence of the null hypothesis and, in doing so, clarify just how unlikely it is for this hypothesis to hold. We do so using a simple example of point forecasting under quadratic loss. We then provide simulation evidence on the size and power of the test. While the test can be accurately sized, we find that power is typically low.
Free college proposals have become increasingly popular in many countries around the world. To evaluate their potential effects, we develop and estimate a dynamic model of college enrollment, performance, and graduation. A central piece of the model, student effort, has a direct effect on class completion and an indirect effect in mitigating the risk of not completing a class or not remaining in college. We estimate the model using rich, student-level administrative data from Colombia, and use the estimates to simulate free college programs that differ in eligibility requirements. Among these, universal free college expands enrollment the most, but it does not affect graduation rates and has the highest per-graduate cost. Performance-based free college, in contrast, delivers a slightly lower enrollment expansion yet a greater graduation rate at a lower per-graduate cost. Relative to universal free college, performance-based free college places greater risk on students, but precisely for this reason leads them to better outcomes. Nonetheless, even performance-based free college fails to deliver a large increase in graduation rates, suggesting that additional, complementary policies might be required to elicit the large effort increase needed to raise graduation rates.
This paper studies the impact of a new class of investors on the dynamics of U.S. housing affordability after the Financial Crisis. Using a novel instrumental variable and processing 85 million housing transactions, we find that investors' purchases increase the price-to-income ratio, especially in the bottom price-tier, the entry point for first-time buyers. Investors cause a short-run reduction in the vacancy rate of owner-occupied units and a medium-run positive response of construction. These equilibrium responses mitigate the effect on affordability. The effects on price-to-income and price-to-rent ratios depend on the housing supply elasticity. In highly elastic areas investors affect rents more than prices, whereas in areas that are highly inelastic investors have the opposite effect.
We study a dynamic macro model to capture the trade-off between policies that simultaneously decrease output and the rate of transmission of an epidemic. We find that optimal policies initially restrict employment but partial loosening occurs before the peak of the epidemic. The arrival of a vaccine (even if only a small fraction can be vaccinated in the short run) implies a relaxation of stay-at-home policies and, in some cases, results in an increase in the speed of infection. The monetary value of producing a vaccine decreases rapidly as time passes. The value that society assigns to averting deaths is a major determinant of the optimal policy.
This paper uses a dynamic competitive spatial equilibrium framework to evaluate the contribution of rural-urban migration induced by structural transformation to the behavior of Chinese housing markets. In the model, technological progress drives workers facing heterogeneous mobility costs to migrate from the rural agricultural sector to the higher-paying urban manufacturing sector. Upon arrival in the city, workers purchase housing using long-term mortgages. Quantitatively, the model fits cross-sectional house price behavior across a representative sample of Chinese cities between 2003 and 2015. The model is then used to evaluate how changes to city migration policies and land supply regulations affect the speed of urbanization and house price appreciation. The analysis indicates that making migration policy more egalitarian or land policy more uniform would promote urbanization but would also contribute to larger house price dispersion.
We estimate the effects that the different financial deregulations in the U.S. have had on the country's income distribution. We find that the different reforms have moved inequality in drastically different directions. On the one hand, during the late 1970s and early 1980s, the removal of intra- and inter-state branching restrictions and the elimination of state-varying rate ceilings decreased inequality, as they mostly enhanced the incomes of workers in the lower tail of the income distribution. On the other hand, the repeal of the Glass-Steagall Act in 1999 substantially increased inequality, as it mostly, and by large amounts, increased the incomes of workers in the upper tail of the distribution.
To explore the mechanisms underlying the different effects, we also examine the responses within and across individuals in different age groups and compare finance vs. non-finance workers. Our findings indicate that models based solely on capital-skill complementarities (CSC) are insufficient because they would imply similar responses to all reforms. We construct a model that emphasizes the endogenous changes in the heterogeneous access (and choices) of households with respect to financial products. The model naturally explains how the different deregulations impacted the opposite tails of the income distribution by capturing the changes in the financial markets available to households of different incomes and characteristics.
This article extends the work of Fawley and Neely (2013) to describe how major central banks have evolved unconventional monetary policies to encourage real activity and maintain stable inflation rates from 2013 through 2019. By 2013, central banks were moving from lump-sum asset purchase programs to continuing asset purchase programs conditioned on economic conditions, careful communication strategies, bank lending programs with incentives, and negative interest rates. This article reviews how central banks tailored their unconventional monetary methods to their various challenges and the structures of their respective economies.
This paper quantifies the positive and normative effects of capital controls on international economic activity under the Bretton Woods international financial system. We develop a three-region world economic model consisting of the U.S., Western Europe, and the Rest of the World. The model allows us to quantify the impact of these controls through an open economy general equilibrium capital flows accounting framework. We find that these controls had large effects. Counterfactuals show that world output would have been 6% larger had the controls not been implemented. We show that the controls led to much higher welfare for the Rest of the World, moderately higher welfare for Europe, but much lower welfare for the U.S. We interpret the large U.S. welfare loss as an estimate of the implicit value to the U.S. of preventing capital flight from other countries and thus promoting economic and political stability in ally and developing countries.
When setting initial compensation, some firms set a fixed, non-negotiable wage while others bargain. In this paper we propose a parsimonious search and matching model with two-sided heterogeneity, where search intensity and the degree of randomness in matching are endogenous, and firms decide whether to bargain or post wages. We study the implications of heterogeneous search costs and market tightness for the choice of the wage-setting mechanism, as well as the relationship between bargaining prevalence and the wage level, residual wage dispersion, and labor market tightness. We find that bargaining prevalence is positively correlated with wages, residual wage dispersion, and labor market tightness, both in the model and in the data.
Who prevails when fiscal and monetary authorities disagree about the value of public expenditure and how much to discount the future? When the fiscal authority sets debt as its main policy instrument, it achieves fiscal dominance, rendering the preferences of the central bank, and thus its independence, irrelevant. When the central bank sets the nominal interest rate, it renders fiscal impatience (the fiscal authority's debt bias) irrelevant, but it still faces the fiscal authority's expenditure bias. I find that the expenditure bias is about an order of magnitude more severe than the debt bias and has a major impact on welfare through higher public spending, while the effect on other policies is relatively minor. I also find that the central bank can do little to overcome the negative impact of the fiscal authority's expenditure bias, though there are still gains from properly designing the central bank.
We develop a quantitative theory of mortality trends and population dynamics. In our theory, individuals incur time and/or goods costs over their life cycle, to adopt a better health technology that increases their age-specific survival probability. Technology adoption is a source of a dynamic externality: As more individuals adopt the better technology, the marginal benefit of future adoption increases. The allocation of time and/or goods also depends on total factor productivity (TFP): As TFP grows, more resources are allocated to technology adoption. Both channels---the dynamic externality and TFP---result in lower mortality. Our theory is consistent with three key facts: (i) The cross-country correlation between mortality and income is negative, (ii) mortality in poor countries has converged to that of rich countries although the income of poor countries has not, and (iii) mortality decline precedes economic take-off. We calibrate the model to match mortality in France from 1816 to 2010. Quantitatively, the model accounts for 54% of the closing of the mortality gap between France and low-income countries over the past 50 years.
We study the role of financial development in the aggregate and welfare implications of reducing trade barriers on imports of physical capital and intermediate inputs. We document that financially underdeveloped economies feature a slower response of real GDP, consumption, and investment following trade liberalization episodes that improve access to imported production inputs. To quantify the role of financial development, we set up a quantitative general equilibrium model with heterogeneous firms subject to financial constraints and estimate it to match salient features of Colombian plant-level data. We find that the adjustment to a decline in import tariffs on physical capital and intermediate inputs is significantly slower in financially underdeveloped economies, in line with the empirical evidence. These effects reduce the welfare gains from trade liberalization and make them more unequal across agents.
A key property of Aiyagari-type heterogeneous-agent models is that the equilibrium interest rate on public debt lies below the time discount rate. This fundamental property, however, implies that the Ramsey planner's fiscal policy may be time-inconsistent because the forward-looking planner would have a dominant incentive to issue plenty of debt, such that all households are fully self-insured against idiosyncratic risk whenever the interest rate on government borrowing is lower than the household time discount rate. But such a full self-insurance allocation may be paradoxical because, to achieve it, the optimal labor tax rate may approach 100% and aggregate consumption may approach zero. This is puzzling from an intuitive perspective because, near the point of full self-insurance, the marginal gains of increasing debt should be less than the marginal costs of financing the debt under distortionary taxes. We show that this puzzling behavior originates from the assumption that the planner must commit to future plans at time zero. Under such full commitment, the Ramsey planner opts to exploit the low interest cost of borrowing to front-load consumption by sacrificing future consumption in the long run, because future utilities are heavily discounted compared to the inverse of the interest rate on government bonds. We demonstrate our points analytically using a tractable heterogeneous-agent model featuring non-linear preferences and a well-defined distribution of household wealth.
This paper demonstrates that heterogeneity in firms' promotion of human capital accumulation is an important determinant of life-cycle earnings inequality. I use administrative micro data from Germany to show that different establishments offer systematically different earnings growth rates for their workers. This observation suggests that the increase in inequality over the life cycle reflects not only inherent worker variation, but also differences in the firms that workers happen to match with over their lifetimes. To quantify this channel, I develop a life-cycle search model with heterogeneous workers and firms. In the model, a worker's earnings can grow through both human capital accumulation and labor market competition channels. Human capital growth depends on both the worker's ability and the firm's learning environment. I find that heterogeneity in firm learning environments accounts for 40% of the increase in cross-sectional earnings variance over the life cycle, and that this mechanism is especially important for young workers. I then show that differences in labor market histories partially shape the worker-specific income profiles estimated by reduced-form statistical earnings processes. Finally, because young workers do not fully internalize the benefits of matching to high-growth firms, changes to the structure of unemployment insurance policies can incentivize these workers to search for better matches.
We compare the evolution of corporate credit spreads during two large crises: the Great Financial Crisis (GFC) and the COVID-19 pandemic. These crises initially featured spread increases of similar magnitudes, but the pandemic was much more short-lived. The microdata reveal that firm leverage was a more important predictor of credit spreads during the GFC, but that firm liquidity was more important during the pandemic. In a model of the firm capital structure that is calibrated to match the joint distribution of leverage, liquidity, and credit spreads, we show that the GFC resembled a combination of real TFP and credit market shocks, while the pandemic was more akin to a short-lived cash flow shock. We study the effectiveness of credit market interventions in response to these shocks: policies such as QE or credit guarantees are ineffective against real shocks, but can greatly mitigate the effects of financial and cash flow shocks. Transfers and grants (similar to the PPP) are effective if the policymaker’s objective is to prevent corporate bankruptcies.
We study efficient risk sharing in a model where agents operate linear production technologies with private information about idiosyncratic productivity. Capital, the sole factor of production, is accumulable. We establish a time-invariant, one-to-one mapping between the capital allocated to an agent and his lifetime utility entitlement. The mapping implies properties that are distinct from those in models with private information about endowments. In contrast to the latter, the value of the risk-sharing arrangement in our model always remains above the autarky value. There is no need for long-term commitment. Further, in our model, there are no net expected transfers across individuals in any period. This allows us to decentralize the efficient allocation into one-period insurance contracts that do not require long-term commitment on the part of the principal or agent. Furthermore, while the efficient allocation implies an increasing dispersion of lifetime utility entitlements and consumption, this need not lead to declines in individual consumption as in the endowment model. When technology is sufficiently productive, all individuals experience consumption growth.
We design an infinite-horizon heterogeneous-agents and incomplete-markets model to demonstrate analytically that in the absence of any redistributional effects of government policies, the optimal capital tax is zero despite capital overaccumulation under precautionary savings and borrowing constraints. Our result indicates that public debt is a better tool than capital taxation for restoring aggregate productive efficiency.
Digital currencies store balances in anonymous electronic addresses. We analyze the trade-offs between the safety and convenience of aggregating balances in addresses, electronic wallets, and banks. In our model, agents balance the risk of theft from a large account against the cost of safeguarding the many passwords of numerous small accounts. Account custodians (banks, wallets, and other payment service providers) have different objectives and trade-offs on these dimensions; we analyze the welfare effects of differing industry structures and interdependencies, and in particular the consequences of "password aggregation" programs, which in effect consolidate risks across accounts.
We construct a real-time dataset (FRED-SD) with vintage data for the U.S. states that can be used to forecast both state-level and national-level variables. Our dataset includes approximately 28 variables per state, including labor market, production, and housing variables. We conduct two sets of real-time forecasting exercises. The first forecasts state-level labor-market variables using five different models and different levels of industrially disaggregated data. The second forecasts a national-level variable exploiting the cross-section of state data. The state-forecasting experiments suggest that large models with industrially disaggregated data tend to have higher predictive ability for industrially diversified states. For national-level data, we find that forecasting and aggregating state-level data can outperform a random walk but not an autoregression.
This paper studies the impact of collaboration on research output. First, we build a micro-founded model of scientific knowledge production, where collaboration between researchers is represented by a bipartite network. The equilibrium of the game incorporates both the complementarity effect between collaborating researchers and the substitutability effect between concurrent projects of the same researcher. Next, we develop a Bayesian MCMC procedure to estimate the structural parameters, taking into account the endogenous matching of researchers and projects. Finally, we illustrate the empirical relevance of the model by analyzing the coauthorship network of economists registered in the RePEc Author Service.
Both large establishments and large cities are known to offer workers an earnings premium. In this paper, we show that these two premia are closely linked by documenting a new fact: when workers move to a large city, they also move to larger establishments. We then ask how much of the city-size earnings premium can be attributed to transitions to larger and better-paying establishments. Using administrative data from Spain, we find that 38 percent of the city-size earnings premium can be explained by establishment-size composition. Most of the gains from the transition to larger establishments are realized in the short term, upon moving to the large city. Establishment size explains 29 percent of the short-term gains, but only 5 percent of the medium-term gains that accrue as workers gain experience in the large city. The small contribution to the medium-term gains is due to two facts: first, within large cities workers transition to large establishments only slightly faster than in smaller cities; second, the relationship between earnings and establishment size is weaker in large cities.
High-frequency financial and economic activity indicators are usually time-aggregated before forecasts of low-frequency macroeconomic events, such as recessions, are computed. We propose a mixed-frequency modeling alternative that delivers high-frequency probability forecasts (including their confidence bands) for these low-frequency events. The new approach is compared with single-frequency alternatives using loss functions adequate for rare-event forecasting. We provide evidence that: (i) the weekly-sampled spread improves over the monthly-sampled spread in predicting NBER recessions, (ii) the predictive content of the spread and the Chicago Fed National Financial Conditions Index (NFCI) is supplementary to economic activity for one-year-ahead forecasts of contractions, and (iii) a weekly activity index can date the 2020 business cycle peak two months in advance using mixed-frequency filtering.
This paper illustrates a challenge in analyzing learning algorithms that result in second-order difference equations. We show in a simple monetary model that the learning dynamics do not converge to the rational expectations monetary steady state. We then show that to guarantee convergence, the gain parameter used in the learning rule has to be restricted based on economic fundamentals in the monetary model.
We review the macroeconomic performance over the period since the Global Financial Crisis and the challenges in the pursuit of the Federal Reserve’s dual mandate. We characterize the use of forward guidance and balance sheet policies after the federal funds rate reached the effective lower bound. We also review the evidence on the efficacy of these tools and consider whether policymakers might have used them more forcefully. Finally, we examine the post-crisis experience of other major central banks with these policy tools.