Forecasts are a central component of policy making; the Federal Reserve's forecasts are published in a document called the Greenbook. Previous studies of the Greenbook's inflation forecasts have found them to be rationalizable but asymmetric when particular sub-periods are considered, e.g., before and after the Volcker appointment. In these papers, forecasts are analyzed in isolation, assuming policymakers value them independently. We analyze the Greenbook forecasts in a framework in which the forecast errors are allowed to interact. We find that allowing the losses to interact makes the unemployment forecasts virtually symmetric, the output forecasts symmetric prior to the Volcker appointment, and the inflation forecasts symmetric after the onset of the Great Moderation.
This paper surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by Vector Autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multi-step forecasts and those parts that are applicable to iterated multi-step forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The paper then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show the tests to behave as should be expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
We investigate the pairwise correlations of 11 U.S. fixed income yield spreads over a sample that includes the Great Financial Crisis of 2007-2009. Using cross-sectional methods and nonparametric bootstrap breakpoint tests, we characterize the crisis as a period in which pairwise correlations between yield spreads were systematically and significantly altered in the sense that spreads comoved with one another much more than in normal times. We find evidence that, for almost half of the 55 pairs under investigation, the crisis has left spreads much more correlated than they were previously. This evidence is particularly strong for liquidity- and default-risk-related spreads, long-term spreads, and the spreads that were most likely directly affected by policy interventions.
This paper develops the theory of multi-step ahead forecasting for vector time series that exhibit
temporal nonstationarity and co-integration. We treat the case of a semi-infinite past by developing
the forecast filters and the forecast error filters explicitly. We also provide formulas for forecasting from a finite data sample. This latter application can be accomplished by using large matrices, which remains practicable when the total sample size is moderate. Expressions for the
mean square error of forecasts are also derived and can be implemented readily. The flexibility and generality of these formulas are illustrated by four diverse applications: forecasting euro area macroeconomic aggregates; backcasting fertility rates by racial category; forecasting long memory inflation data; and forecasting regional housing starts using a seasonally co-integrated model.
In this paper we provide analytical, simulation, and empirical evidence on a test of equal economic value from competing predictive models of asset returns. We define economic value using the concept of a performance fee - the amount an investor would be willing to pay to have access to an alternative predictive model that is used to make investment decisions. We establish that this fee can be asymptotically normal under modest assumptions. Monte Carlo evidence shows that our test can be accurately sized in reasonably large samples. We apply the proposed test to predictions of the US equity premium.
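To fix ideas, the performance-fee calculation described above can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's implementation: it assumes a quadratic-utility investor in the spirit of Fleming, Kirby, and Ostdiek (2001) and solves by bisection for the constant fee that equates average realized utility across two return streams; the function names and the risk-aversion parameter are hypothetical.

```python
# Illustrative sketch (hypothetical names): the performance fee is the constant
# amount an investor would pay per period to switch from the benchmark model's
# returns to the alternative model's returns, under quadratic utility.

def avg_utility(returns, gamma=3.0):
    # per-period quadratic utility r - a*r^2, averaged over the sample
    a = gamma / (2.0 * (1.0 + gamma))
    return sum(r - a * r * r for r in returns) / len(returns)

def performance_fee(r_alt, r_bench, gamma=3.0, tol=1e-10):
    # bisect for the fee phi such that average utility of (r_alt - phi)
    # equals average utility of r_bench; utility is decreasing in phi
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if avg_utility([r - mid for r in r_alt], gamma) > avg_utility(r_bench, gamma):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

An alternative model whose returns uniformly exceed the benchmark's by one percentage point commands a fee of roughly 0.01 per period, as expected.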
The Malthusian theory of evolution disregards a pervasive fact about human
societies: they expand through conflict. When this is taken into account, the long run
favors not a large population at the level of subsistence, nor yet institutions that
maximize welfare or per capita output, but rather institutions that maximize free
resources. These free resources are the output available to society after deducting
the payments necessary for subsistence and for the incentives needed to induce
production, and the other claims to production such as transfer payments and resources
absorbed by elites. We develop the evolutionary underpinnings of this model, and
examine the implications of free resource maximization for the evolution of societies
in several applications. Since free resources are increasing both in per capita
income and population, evolution will favor large rich societies. We will show how
technological improvement is likely to increase per capita output as well as increase
population, and how economically inefficient institutions such as bureaucracy arise.
In this paper we provide analytical and Monte Carlo evidence that Chow and Predictive tests can be consistent against alternatives that allow structural change to occur at either end of the sample. Attention is restricted to linear regression models that may have a break in the intercept. The results are based on a novel reparameterization of the actual and potential break point locations. Standard methods parameterize both of these locations as fixed fractions of the sample size. We parameterize these locations as more general integer-valued functions. Power at the ends of the sample is evaluated by letting both locations, as a percentage of the sample size, converge to zero or one. We find that, for a given potential break point function, the tests are consistent against alternatives that converge to zero or one at sufficiently slow rates and are inconsistent against alternatives that converge sufficiently quickly. Monte Carlo evidence supports the theory, though large samples are sometimes needed for reasonable power.
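For readers unfamiliar with the test, the object under study can be illustrated with a minimal sketch of a Chow-type F statistic for a single intercept break at a candidate location tau. This is a generic textbook version, not the paper's code; the names are illustrative.

```python
def chow_intercept_stat(y, tau):
    # Chow-type F statistic for a break in the intercept after observation tau:
    # restricted model fits one common mean; unrestricted model fits separate
    # means before and after tau; the statistic compares residual sums of squares
    n = len(y)
    mu = sum(y) / n
    rss_r = sum((v - mu) ** 2 for v in y)
    pre, post = y[:tau], y[tau:]
    m1, m2 = sum(pre) / len(pre), sum(post) / len(post)
    rss_u = sum((v - m1) ** 2 for v in pre) + sum((v - m2) ** 2 for v in post)
    return (rss_r - rss_u) / (rss_u / (n - 2))
```

With a break placed near the end of the sample, as in the alternatives studied above, the statistic is large; with no break it is near zero.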
Previous research has established that the Federal Reserve’s large scale asset purchases
(LSAPs) significantly influenced international bond yields. We use dynamic term structure
models to uncover to what extent signaling and portfolio balance channels caused
these declines. For the U.S. and Canada, the evidence supports the view that LSAPs
had substantial signaling effects. For Australian and German yields, signaling effects
were present but likely more moderate, and portfolio balance effects appear to have
played a relatively larger role than in the U.S. and Canada. Portfolio balance effects
were small for Japanese yields and signaling effects basically nonexistent. These findings
about LSAP channels are consistent with predictions based on interest rate dynamics
during normal times: Signaling effects tend to be large for countries with strong yield
responses to conventional U.S. monetary policy surprises, and portfolio balance effects
are consistent with the degree of substitutability across international bonds, as measured
by the covariance between foreign and U.S. bond returns.
Factor models have become useful tools for studying international business cycles.
Block factor models can be especially useful as the zero restrictions on the loadings of
some factors may provide some economic interpretation of the factors. These models,
however, require the econometrician to predefine the blocks, leading to potential
misspecification. In Monte Carlo experiments, we show that even small misspecification can lead
to substantial declines in fit. We propose an alternative model in which the blocks are
chosen endogenously. The model is estimated in a Bayesian framework using a hierarchical
prior, which allows us to incorporate series-level covariates that may influence and explain
how the series are grouped. Using international business cycle data, we find our country
clusters differ in important ways from those identified by geography alone. In particular,
we find that similarities in institutions (e.g., legal systems, language diversity) may be
just as important as physical proximity for analyzing business cycle comovements.
A large literature studies the information contained in national-level economic
indicators, such as financial and aggregate economic activity variables, for forecasting and
nowcasting U.S. business cycle phases (expansions and recessions). In this paper, we investigate whether there is additional information useful for identifying business cycle phases
contained in subnational measures of economic activity. Using a probit model to forecast the
NBER expansion and recession classification, we assess the incremental information content
of state-level employment growth over a commonly used set of national-level predictors. As
state-level data adds a large number of predictors to the model, we employ a Bayesian model
averaging procedure to construct forecasts. Based on a variety of forecast evaluation metrics,
we find that including state-level employment growth substantially improves nowcasts and
very short-horizon forecasts of the business cycle phase. The gains in forecast accuracy are
concentrated during months of national recession.
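A common, simple device for the kind of model averaging described above is to approximate posterior model probabilities with BIC weights, w_i proportional to exp(-BIC_i/2). The sketch below illustrates that generic device; it is not necessarily the exact procedure used in the paper, and the function names are hypothetical.

```python
import math

def bic_model_weights(bics):
    # approximate posterior model probabilities via exp(-BIC/2), normalized;
    # subtracting the minimum BIC first avoids underflow in the exponentials
    b0 = min(bics)
    raw = [math.exp(-(b - b0) / 2.0) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_forecast(forecasts, bics):
    # combine the individual model forecasts with the BIC-based weights
    w = bic_model_weights(bics)
    return sum(wi * fi for wi, fi in zip(w, forecasts))
```

Models with equal BICs receive equal weight, and the combined forecast is a convex combination of the individual forecasts.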
We use a regression discontinuity approach and present new institutional evidence to investigate whether affordable housing policies influenced the market for securitized subprime mortgages.
We use merged loan-level data on non-prime mortgages with individual- and neighborhood-level
data for California and Florida. We find no evidence that lenders increased subprime originations or altered loan pricing around the discrete eligibility cutoffs for the Government-Sponsored Enterprises' (GSEs) affordable housing goals or the Community Reinvestment Act. Although we find evidence that the GSEs bought significant quantities of subprime securities, our results indicate that these purchases were not directly related to affordable housing mandates.
In this paper we provide estimates of the coefficient of relative risk aversion using information on self-reports of subjective personal well-being from multiple datasets, including three cross-sectional surveys and two panel surveys, namely the Gallup World Poll, the European Social Survey, the World Values Survey, the British Household Panel Survey for the United Kingdom, and the General Social Survey for the United States. We additionally consider the implications of allowing for health-state dependence in the utility function on the estimates of risk aversion and examine how the marginal utility of income changes in poor health states. Our estimates of relative risk aversion with cross-section data vary closely around 1, which corresponds to logarithmic utility, while the estimates with panel data are slightly larger. We find that controlling for health dependence generally reduces these estimates. In contrast with other studies in the literature, our results also suggest that the marginal utility of income increases when satisfaction with health deteriorates, and this effect is robust across the various datasets analyzed.
We study the contraction of foreign direct investment (FDI) flows in the United States during the recent financial crisis and show their unusual non-resiliency, which depends in part on the global nature of the economic recession, but also on the increases in the cost of financing FDI in the economies in which the flows originate. To formally study the effects of external financial conditions on FDI in the United States, we exploit the three dimensions of a panel of U.S. inward FDI flows organized by recipient U.S. industries, source countries, and years for the recorded flows. Changes in the cost of finance in the source countries have little or no effect on total inward flows (the sum of equity, debt, and reinvested earnings) over the 2006-2010 period. However, U.S. industries characterized by more financial vulnerability experience statistically significant variations in the debt and equity components of inward FDI flows in response to the changes in the cost of capital that occurred in the source countries during the crisis.
In this paper we analyze how spillovers in mortgage adoption affect mortgage product choice across neighborhoods and across borrowers of different racial or ethnic groups. We use loan-level data on subprime mortgages for metropolitan areas in California and Florida during 2004 and 2005, the peak years of the subprime mortgage boom. We identify an important and statistically significant effect of spillovers, both within and across groups, on the consumers' choice of hybrid mortgage products that were popular during this period. In particular, we find that the group-specific spillover effects are strengthened by the group affiliation (race and ethnicity) of the borrower. The effects are particularly important among Hispanic and white borrowers, but not among black borrowers.
Characterizing asset price volatility is an important goal for financial economists. The literature has shown that variables that proxy for the information arrival process can help explain and/or forecast volatility. Unfortunately, however, obtaining good measures of volume and/or order flow is expensive or difficult in decentralized markets such as foreign exchange. We investigate the extent to which Japanese capital flows—which are released weekly—reflect information arrival that improves foreign exchange and equity volatility forecasts. We find that capital flows can help explain transitory shocks to GARCH volatility.
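The GARCH machinery referenced above can be augmented with an exogenous regressor in the conditional variance recursion, which is one simple way a proxy for information arrival (such as capital flows) can enter a volatility forecast. A minimal sketch of a GARCH(1,1)-X variance filter, with illustrative (not estimated) parameter values:

```python
def garchx_variances(returns, omega, alpha, beta, gamma, x):
    # GARCH(1,1)-X conditional variance recursion:
    #   h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1} + gamma * x_{t-1}
    # where x is an exogenous information-arrival proxy; the recursion is
    # started at the plain GARCH unconditional variance (gamma term omitted)
    h = [omega / (1.0 - alpha - beta)]
    for t in range(1, len(returns)):
        h.append(omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1] + gamma * x[t - 1])
    return h
```

With gamma set to zero the filter collapses to plain GARCH(1,1); a positive gamma together with positive flows shifts every subsequent conditional variance upward.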
We investigate whether race and ethnicity influenced subprime loan pricing during
2005, the peak of the subprime mortgage expansion. We combine loan-level data on the
performance of non-prime securitized mortgages with individual- and neighborhood-
level data on racial and ethnic characteristics for metropolitan areas in California and
Florida. Using a model of rate determination that accounts for predicted loan performance,
we evaluate the differences in subprime mortgage rates in terms of racial and
ethnic groups and neighborhood characteristics. We find evidence of adverse pricing
for blacks and Hispanics. The evidence of adverse pricing is strongest for purchase
mortgages and mortgages originated by non-depository institutions.
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two-step procedure is conservative while the one-step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting U.S. real GDP growth.
Do parents alter their investment in their child’s human capital in response to changes in school inputs? If they do, then ignoring this effect will bias the estimates of school and parental inputs in educational production functions. This paper tries to answer this question by studying out-of-school suspensions and their effect on parental involvement in children’s education. The use of out-of-school suspensions is the novelty of this paper. Out-of-school suspensions are chosen by the teacher or the principal of the school and not by parents, but they are a consequence of student misbehavior. To account for the nature of these out-of-school suspensions, they are instrumented with measures of “principal’s preference toward discipline.” The estimates show that, without controlling for selection, the level of parental involvement is negatively correlated with the number of out-of-school suspensions. Once selection is accounted for, the effect disappears; that is, out-of-school suspensions do not affect parental involvement in children’s education.
Much of the literature examining the effects of oil shocks asks the question “What is an oil shock?” and has concluded that oil-price increases are asymmetric in their effects on the US economy. That is, sharp increases in oil prices affect economic activity adversely, but sharp decreases in oil prices have no effect. We reconsider the directional symmetry of oil-price shocks by addressing the question “Where is an oil shock?”, the answer to which reveals a great deal of spatial/directional asymmetry across states. Although most states have typical responses to oil-price shocks—they are affected by positive shocks only—the rest experience either negative shocks only (5 states), both positive and negative shocks (5 states), or neither shock (5 states).
We investigate the importance of trend inflation and the real-activity gap for explaining observed inflation variation in G7 countries since 1960. Our results are based on a bivariate unobserved-components model of inflation and unemployment in which inflation is decomposed into a stochastic trend and transitory component. As in recent implementations of the New Keynesian Phillips Curve, it is the transitory component of inflation, or “inflation gap”, that is driven by the real-activity gap, which we measure as the deviation of unemployment from its natural rate. Even when allowing for changes in the contributions of trend inflation and the inflation gap, we find that both are important determinants of inflation variation at business cycle horizons for all G7 countries throughout much of the past 50 years. Also, the real-activity gap explains a large fraction of the variation in the inflation gap for each country, both historically and in recent years. Taken together, the results suggest the New Keynesian Phillips Curve, once augmented to include trend inflation, is an empirically relevant model for the G7 countries. We also provide new estimates of trend inflation for the G7 that incorporate information in the real-activity gap for identification and, through formal model comparisons, new statistical evidence regarding structural breaks in the variability of trend inflation and the inflation gap.
With rare exception, studies of monetary policy tend to neglect the timing of innovations to monetary policy instruments. Models that take timing seriously are often difficult to compare to standard monetary VARs because each uses a different data frequency. We propose using MIDAS regressions that nest both ideas: accurate (daily) timing of innovations to policy is embedded in a monthly-frequency VAR to determine the macroeconomic effects of high-frequency policy shocks. We find that policy shocks have the greatest effects on variables thought of as heavily expectations-oriented and that, contrary to some VAR studies, the effects of policy shocks on real variables are small.
This paper analyzes the empirical performance of two alternative ways in which multi-factor models with time-varying risk exposures and premia may be estimated. The first method echoes the seminal two-pass approach advocated by Fama and MacBeth (1973). The second approach extends previous work by Ouysse and Kohn (2010) and is based on a Bayesian approach to modelling the latent process followed by risk exposures and idiosyncratic volatility. Our application to monthly, 1979-2008 U.S. data for stock, bond, and publicly traded real estate returns shows that the classical, two-stage approach that relies on a nonparametric, rolling-window modelling of time-varying betas yields results that are unreasonable: there is evidence that all the portfolios of stocks, bonds, and REITs have been grossly over-priced. On the contrary, the Bayesian approach yields sensible results, as most portfolios do not appear to have been mispriced and a few risk premia are precisely estimated with a plausible sign. Real consumption growth risk turns out to be the only factor that is persistently priced throughout the sample.
Despite the remarkable improvement of female labor market characteristics, a sizeable gender wage gap exists in Colombia. We employ quantile regression techniques to examine the degree to which current small differences in the distribution of observable characteristics can explain the gender gap. We find that the gap is largely explained by gender differences in the rewards to labor market characteristics and not by differences in the distribution of characteristics. We claim that Colombian women experience both a “glass ceiling effect” and also (what we call) a “quicksand floor effect,” because gender differences in returns to characteristics primarily affect women at the top and the bottom of the distribution. Also, self-selection into the labor force is crucial for gender gaps: if all women participated in the labor force, the observed gap would be roughly 50% larger at all quantiles.
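Quantile regression, used above, generalizes the fact that a sample quantile minimizes the check (pinball) loss. A minimal illustration of that objective for a constant forecast; this is a generic textbook device with hypothetical names, not the paper's estimator.

```python
def pinball_loss(q, data, tau):
    # average check (pinball) loss of the constant forecast q at quantile
    # level tau: observations below q are penalized with weight (1 - tau),
    # observations at or above q with weight tau
    return sum((tau - (v < q)) * (v - q) for v in data) / len(data)
```

For data 1 through 10, the loss at the median (tau = 0.5) is minimized near 5, while at tau = 0.9 the minimizing constant shifts toward the upper tail; fitting this loss with covariates yields a different coefficient vector at each quantile, which is what allows gaps at the top and bottom of the distribution to differ.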
What explains differences in pre-market factors? Three types of inputs are believed to determine the skills agents take to the labor market: ability, family inputs, and school inputs. Therefore to answer the previous question it is crucial to understand first the relative importance of each of those inputs. The literature on the production of achievement has not been able to provide an estimation that can take the three factors into account simultaneously at the student level. This paper intends to fill this gap by providing an estimation of the production function of achievement where both types of investments (families and schools) are considered in a framework where the inputs are allowed to be correlated with the unobserved term, ability to learn. I do that by using parents’ saving for their child’s postsecondary education to control for the unobserved component (i.e., ability to learn) in the production of skills. The estimates for the role of family inputs are in line with previous findings. Additionally, the estimates of school inputs show that they are also important for the formation of students’ skills even after controlling for ability to learn.
Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity portfolio diversification that substantial differences exist between bull and bear regime-specific frontiers, both in statistical and in economic terms. Using Morgan Stanley Capital International (MSCI) investable indices for five countries/macro-regions, it is possible to characterize the mean-variance frontiers and optimal portfolio strategies in bull periods, in bear periods, and in periods where high uncertainty exists on the nature of the current regime. A recursive back-testing exercise shows that between 1998 and 2010, adopting a switching mean-variance strategy may have yielded considerable risk-adjusted payoffs, which are the largest in correspondence to the 2007-2009 financial crisis.
We perform a comprehensive examination of the recursive, comparative predictive performance of a number of linear and non-linear models for U.K. stock and bond returns. We estimate Markov switching, threshold autoregressive (TAR), and smooth transition autoregressive (STR) regime switching models, and a range of linear specifications, in addition to univariate models in which conditional heteroskedasticity is captured by GARCH-type specifications and in which predicted volatilities appear in the conditional mean. The results demonstrate that U.K. asset returns require that non-linear dynamics be modeled. In particular, the evidence in favor of adopting a Markov switching framework is strong. Our results appear robust to the choice of sample period, changes in the adopted loss function, and the methodology employed to test the null hypothesis of equal predictive accuracy across competing models.
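Markov switching estimation of the kind applied above is built on the Hamilton (1989) filter, which recursively updates the probabilities of being in each regime as observations arrive. One generic updating step, as an illustrative sketch with hypothetical names and numbers:

```python
def hamilton_filter_step(prob_prev, P, dens):
    # one step of the Hamilton filter for a discrete-regime model:
    # 1) predict regime probabilities with the transition matrix P,
    #    where P[i][j] is the probability of moving from regime i to j;
    # 2) reweight by dens[j], the likelihood of the new observation in
    #    regime j, and normalize to obtain filtered probabilities
    k = len(prob_prev)
    pred = [sum(P[i][j] * prob_prev[i] for i in range(k)) for j in range(k)]
    joint = [pred[j] * dens[j] for j in range(k)]
    s = sum(joint)
    return [v / s for v in joint]
```

An observation that is more likely under regime 0 pushes the filtered probability of regime 0 up, which is how the filter tracks bull and bear phases in return data.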
This paper presents empirical evidence on the efficacy of forecast averaging using the ALFRED real-time database. We consider averages taken over a variety of different bivariate VAR models that are distinguished from one another based upon at least one of the following: which variables are used as predictors, the number of lags, using all available data or data after the Great Moderation, the observation window used to estimate the model parameters and construct averaging weights, and for forecast horizons greater than one, whether or not iterated- or direct-multistep methods are used. A variety of averaging methods are considered. Our results indicate that the benefits to model averaging relative to BIC-based model selection are highly dependent upon the class of models being averaged over. We provide a novel decomposition of the forecast improvements that allows us to determine which types of averaging methods and models were most (and least) useful in the averaging process.
This paper develops a novel and effective bootstrap method for simulating asymptotic critical values for tests of equal forecast accuracy and encompassing among many nested models. The bootstrap, which combines elements of fixed regressor and wild bootstrap methods, is simple to use. We first derive the asymptotic distributions of tests of equal forecast accuracy and encompassing applied to forecasts from multiple models that nest the benchmark model – that is, reality check tests applied to nested models. We then prove the validity of the bootstrap for these tests. Monte Carlo experiments indicate that our proposed bootstrap has better finite-sample size and power than other methods designed for comparison of non-nested models. We conclude with empirical applications to multiple-model forecasts of commodity prices and GDP growth.
This chapter provides an overview of pseudo-out-of-sample tests of unconditional predictive ability. We begin by providing an overview of the literature, including both empirical applications and theoretical contributions. We then delineate two distinct methodologies for conducting inference: one based on the analytics in West (1996) and the other based on those in Giacomini and White (2006). These two approaches are then carefully described in the context of pairwise tests of equal forecast accuracy between two models. We consider both non-nested and nested comparisons. Monte Carlo evidence provides some guidance as to when the two forms of analytics are most appropriate, in a nested model context.
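Pairwise tests of equal forecast accuracy of the sort surveyed here are typically built on a t statistic for the mean loss differential, as in Diebold and Mariano (1995). A minimal sketch under squared-error loss, ignoring the HAC variance corrections used in practice; names are illustrative.

```python
import math

def dm_stat(e1, e2):
    # Diebold-Mariano-type t statistic on squared-error loss differentials
    # d_t = e1_t^2 - e2_t^2; negative values favor model 1, positive favor
    # model 2 (a simple iid-variance version, no HAC adjustment)
    d = [a * a - b * b for a, b in zip(e1, e2)]
    n = len(d)
    dbar = sum(d) / n
    var = sum((x - dbar) ** 2 for x in d) / (n - 1)
    return dbar / math.sqrt(var / n)
```

Swapping the two error series flips the sign of the statistic, and the West (1996) and Giacomini-White (2006) frameworks discussed above differ mainly in how the variance in the denominator accounts for parameter estimation.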
Smooth-transition autoregressive (STAR) models have proven to be worthy competitors of Markov-switching models of regime shifts, but the assumption of a time-invariant threshold level does not seem realistic and it holds back this class of models from reaching their potential usefulness. Indeed, an estimate of a time-varying threshold level of unemployment, for example, might serve as a meaningful estimate of the natural rate of unemployment. More precisely, within a STAR framework, one might call the time-varying threshold the “tipping level” rate of unemployment, at which the mean and dynamics of the unemployment rate shift. In addition, once the threshold level is allowed to be time-varying, one can add an error-correction term—between the lagged level of unemployment and the lagged threshold level—to the autoregressive terms in the STAR model. In this way, the time-varying latent threshold level serves dual roles: as a demarcation between regimes and as part of an error-correction term.
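The logistic transition function at the heart of a STAR model, and the error-correction term between the lagged unemployment rate and the lagged threshold described above, can be written compactly. A minimal illustration with hypothetical parameter values, not estimates from the paper:

```python
import math

def star_weight(u_lag, threshold, gamma):
    # logistic transition function: the weight on the high regime rises
    # smoothly from 0 to 1 as u_lag moves above the threshold; gamma
    # controls how abrupt the transition is
    return 1.0 / (1.0 + math.exp(-gamma * (u_lag - threshold)))

def star_one_step(u_lag, threshold, gamma, low, high, kappa=0.0):
    # one-step STAR prediction: a weighted mixture of two AR(1) regimes,
    # plus an error-correction pull (kappa) between the lagged level and
    # the lagged (possibly time-varying) threshold
    c_lo, phi_lo = low
    c_hi, phi_hi = high
    g = star_weight(u_lag, threshold, gamma)
    return ((1 - g) * (c_lo + phi_lo * u_lag)
            + g * (c_hi + phi_hi * u_lag)
            - kappa * (u_lag - threshold))
```

At the threshold the two regimes receive equal weight; letting the threshold itself evolve over time turns it into the "tipping level" estimate discussed above, and the kappa term is the error-correction channel.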