What explains differences in pre-market factors? Three types of inputs are believed to determine the skills agents take to the labor market: ability, family inputs, and school inputs. Therefore, to answer this question it is crucial to first understand the relative importance of each of these inputs. The literature on the production of achievement has not been able to provide an estimation that takes all three factors into account simultaneously at the student level. This paper intends to fill this gap by providing an estimation of the production function of achievement in which both types of investments (families and schools) are considered, in a framework where the inputs are allowed to be correlated with the unobserved term, ability to learn. I do so by using parents' savings for their child's postsecondary education to control for the unobserved component (i.e., ability to learn) in the production of skills. The estimates of the role of family inputs are in line with previous findings. Additionally, the estimates of school inputs show that they are also important for the formation of students' skills, even after controlling for ability to learn.
Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity portfolio diversification that substantial differences exist between bull and bear regime-specific frontiers, both in statistical and in economic terms. Using Morgan Stanley Capital International (MSCI) investable indices for five countries/macro-regions, it is possible to characterize the mean-variance frontiers and optimal portfolio strategies in bull periods, in bear periods, and in periods where high uncertainty exists on the nature of the current regime. A recursive back-testing exercise shows that between 1998 and 2010, adopting a switching mean-variance strategy may have yielded considerable risk-adjusted payoffs, which are largest during the 2007-2009 financial crisis.
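The regime-specific frontiers discussed above can be illustrated with a minimal sketch: given bull- and bear-regime means and covariances, the closed-form Markowitz minimum-variance weights for a target mean return are computed regime by regime. The moments below are made-up placeholders, not the MSCI estimates from the paper.

```python
import numpy as np

def frontier_weights(mu, sigma, target):
    """Closed-form Markowitz minimum-variance weights for a target mean
    return, with short sales allowed: min w'Sw s.t. w'mu = target, w'1 = 1."""
    ones = np.ones(len(mu))
    inv = np.linalg.inv(sigma)
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b ** 2
    lam = (c - b * target) / d
    gam = (a * target - b) / d
    return inv @ (lam * ones + gam * mu)

# Illustrative (made-up) regime-specific moments for three assets
mu_bull = np.array([0.010, 0.012, 0.008])
mu_bear = np.array([-0.005, -0.008, 0.002])
cov_bull = np.array([[0.0020, 0.0006, 0.0004],
                     [0.0006, 0.0030, 0.0005],
                     [0.0004, 0.0005, 0.0015]])
cov_bear = np.array([[0.0040, 0.0022, 0.0018],   # higher variances and
                     [0.0022, 0.0060, 0.0026],   # correlations in the
                     [0.0018, 0.0026, 0.0030]])  # bear regime

w_bull = frontier_weights(mu_bull, cov_bull, 0.010)
w_bear = frontier_weights(mu_bear, cov_bear, 0.000)
```

Tracing each regime's frontier amounts to repeating the computation over a grid of target returns; the statistical and economic comparison in the paper is between the resulting bull- and bear-specific curves.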
We perform a comprehensive examination of the recursive, comparative predictive performance of a number of linear and non-linear models for UK stock and bond returns. We estimate Markov switching, threshold autoregressive (TAR), and smooth transition autoregressive (STR) regime switching models, and a range of linear specifications, in addition to univariate models in which conditional heteroskedasticity is captured by GARCH-type specifications and in which predicted volatilities appear in the conditional mean. The results demonstrate that UK asset returns require that non-linear dynamics be modeled. In particular, the evidence in favor of adopting a Markov switching framework is strong. Our results appear robust to the choice of sample period, to changes in the adopted loss function, and to the methodology employed to test the null hypothesis of equal predictive accuracy across competing models.
This paper presents empirical evidence on the efficacy of forecast averaging using the ALFRED real-time database. We consider averages taken over a variety of different bivariate VAR models that are distinguished from one another based upon at least one of the following: which variables are used as predictors, the number of lags, using all available data or data after the Great Moderation, the observation window used to estimate the model parameters and construct averaging weights, and for forecast horizons greater than one, whether or not iterated- or direct-multistep methods are used. A variety of averaging methods are considered. Our results indicate that the benefits to model averaging relative to BIC-based model selection are highly dependent upon the class of models being averaged over. We provide a novel decomposition of the forecast improvements that allows us to determine which types of averaging methods and models were most (and least) useful in the averaging process.
This paper develops a novel and effective bootstrap method for simulating asymptotic critical values for tests of equal forecast accuracy and encompassing among many nested models. The bootstrap, which combines elements of fixed regressor and wild bootstrap methods, is simple to use. We first derive the asymptotic distributions of tests of equal forecast accuracy and encompassing applied to forecasts from multiple models that nest the benchmark model – that is, reality check tests applied to nested models. We then prove the validity of the bootstrap for these tests. Monte Carlo experiments indicate that our proposed bootstrap has better finite-sample size and power than other methods designed for comparison of non-nested models. We conclude with empirical applications to multiple-model forecasts of commodity prices and GDP growth.
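The flavor of a fixed-regressor wild bootstrap can be conveyed with a stylized sketch (not the authors' exact algorithm): regressors are held fixed across draws, residuals from the largest model are rescaled by i.i.d. standard normal multipliers, and artificial samples are built around the benchmark model's fitted values so that the null of no extra predictive content holds. All data, dimensions, and coefficients below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_regressor_wild_bootstrap(y, X_bench, X_big, n_boot=199):
    """Stylized fixed-regressor wild bootstrap for nested-model comparisons:
    regressors stay fixed across draws; residuals from the largest model are
    perturbed by i.i.d. N(0,1) multipliers; artificial samples are centered
    on the benchmark model's fitted values, imposing the null."""
    beta_bench = np.linalg.lstsq(X_bench, y, rcond=None)[0]
    fitted = X_bench @ beta_bench                 # null-model conditional mean
    beta_big = np.linalg.lstsq(X_big, y, rcond=None)[0]
    resid = y - X_big @ beta_big                  # residuals from largest model
    draws = np.empty((n_boot, len(y)))
    for b in range(n_boot):
        eta = rng.standard_normal(len(y))         # wild multipliers
        draws[b] = fitted + resid * eta           # regressors held fixed
    return draws

# Made-up data: benchmark = intercept only; larger model adds two predictors
T = 120
X_extra = rng.standard_normal((T, 2))
X_bench = np.ones((T, 1))
X_big = np.column_stack([X_bench, X_extra])
y = 0.5 + rng.standard_normal(T)                  # null: extras are irrelevant
draws = fixed_regressor_wild_bootstrap(y, X_bench, X_big)
```

In an actual test, each artificial sample would be run through the out-of-sample forecast comparison to build the bootstrap distribution of the test statistic; that step is omitted here.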
This chapter provides an overview of pseudo-out-of-sample tests of unconditional predictive ability. We begin by providing an overview of the literature, including both empirical applications and theoretical contributions. We then delineate two distinct methodologies for conducting inference: one based on the analytics in West (1996) and the other based on those in Giacomini and White (2006). These two approaches are then carefully described in the context of pairwise tests of equal forecast accuracy between two models. We consider both non-nested and nested comparisons. Monte Carlo evidence provides some guidance as to when the two forms of analytics are most appropriate, in a nested model context.
Smooth-transition autoregressive (STAR) models have proven to be worthy competitors of Markov-switching models of regime shifts, but the assumption of a time-invariant threshold level does not seem realistic, and it holds this class of models back from reaching its full potential. Indeed, an estimate of a time-varying threshold level of unemployment, for example, might serve as a meaningful estimate of the natural rate of unemployment. More precisely, within a STAR framework, one might call the time-varying threshold the “tipping level” rate of unemployment, at which the mean and dynamics of the unemployment rate shift. In addition, once the threshold level is allowed to be time-varying, one can add an error-correction term—between the lagged level of unemployment and the lagged threshold level—to the autoregressive terms in the STAR model. In this way, the time-varying latent threshold level serves dual roles: as a demarcation between regimes and as part of an error-correction term.
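The conditional mean described above can be sketched minimally, assuming a logistic transition function; the function name, lag order, and parameter values are illustrative, not the paper's specification.

```python
import numpy as np

def star_step(u_lags, c_prev, gamma, phi_low, phi_high, alpha):
    """Conditional mean of a stylized two-regime logistic STAR(2) with a
    time-varying threshold c_prev; the last term is the error-correction
    component between lagged unemployment and the lagged threshold."""
    u_lags = np.asarray(u_lags, dtype=float)
    # logistic transition weight: near 0 well below the threshold, near 1 above
    g = 1.0 / (1.0 + np.exp(-gamma * (u_lags[0] - c_prev)))
    ar = (1.0 - g) * np.dot(phi_low, u_lags) + g * np.dot(phi_high, u_lags)
    return ar + alpha * (u_lags[0] - c_prev), g

# At the threshold (u = c), the two regimes receive equal weight (g = 0.5)
mean, g = star_step([6.0, 5.8], 6.0, gamma=5.0,
                    phi_low=[0.9, 0.05], phi_high=[0.7, 0.10], alpha=-0.2)
```

In the full model, c_prev would itself evolve as a latent state to be filtered, which is what lets the threshold act both as the regime demarcation and as the attractor in the error-correction term.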
We use a simple partial adjustment econometric framework to investigate the effects of the crisis on the dynamic properties of a number of yield spreads. We find that the crisis caused substantial disruptions, revealed by changes in the persistence of shocks to spreads as much as by changes in their unconditional mean levels. Formal breakpoint tests confirm that the financial crisis has been over since approximately the spring of 2009. The financial crisis can be conservatively dated as an August 2007 – June 2009 phenomenon, although some yield spread series seem to point to an end of the most serious disruptions as early as December 2008. We uncover evidence that the LSAP program implemented by the Fed in the US residential mortgage market has been effective, in the sense that risk premia in this market have been uniquely shielded from the disruptive effects of the crisis.
Since Galí, long-run restricted VARs have become the standard for identifying the effects of technology shocks. In a recent paper, Francis et al. proposed an alternative to identify technology as the shock that maximizes the forecast-error variance share of labor productivity at long horizons. In this paper, we propose a variant of the Max Share identification, which focuses on maximizing the variance share of labor productivity in the frequency domain. We consider the responses to technology shocks identified from various frequency bands. Two distinct technology shocks emerge. An expansionary shock increases productivity, output, and hours at business-cycle frequencies. The technology shock that maximizes productivity in the medium and long runs instead has clear contractionary effects on hours, while increasing output and productivity.
Using formal statistical tests, we detect (i) significant volatility increases for various types of capital flows for a period of changes in business cycle comovement among the G7 countries, and (ii) mixed evidence of changes in covariances and correlations with a set of macroeconomic variables.
Despite its role in monetary policy and finance, the expectations hypothesis (EH) of the term structure of interest rates has received virtually no empirical support. The empirical failure of the EH has been attributed to a variety of econometric biases associated with the single-equation models most often used to test it; however, none of these explanations appears to account for the massive failure reported in the literature. We note that traditional tests of the EH are based on two assumptions—the EH per se and an assumption about the expectations generating process (EGP) for the short-term rate. Arguing that conventional tests of the EH could reject it because the EGP embedded in these tests is significantly at odds with the true EGP, we investigate this possibility by analyzing the out-of-sample predictive performance of several models for predicting interest rates and a model that assumes the EH holds. Using standard methods that take into account parameter uncertainty, the null hypothesis of equal predictive accuracy of each model relative to the random walk alternative is never rejected.
Oil prices rose sharply prior to the onset of the 2007-2009 recession. Hamilton (2005) noted that nine of the last ten recessions in the United States were preceded by a substantial increase in the price of oil. In this paper, we consider whether oil price shocks significantly increase the probability of recessions in a number of countries. Because business cycle turning points generally are not available for other countries, we estimate the turning points together with oil’s effect in a Markov-switching model with time-varying transition probabilities. We find that, for most countries, oil shocks do affect the likelihood of entering a recession. In particular, for a constant, zero term spread, an average-sized shock to WTI oil prices increases the probability of recession in the U.S. by nearly 50 percentage points after one year and nearly 90 percentage points after two years.
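One step of a Hamilton filter with time-varying transition probabilities can be sketched as follows, assuming a logistic link from a covariate (such as an oil shock or the term spread) to the probability of staying in each regime; all parameter values are made up for illustration.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def filter_step(p_prev, y_t, z_t, mu, sd, a, b):
    """One Hamilton-filter update with time-varying transition probabilities:
    p_prev is P(recession | data through t-1); the covariate z_t shifts the
    probabilities of staying in each regime through a logistic link."""
    p_stay_rec = logistic(a[0] + b[0] * z_t)   # P(recession -> recession)
    p_stay_exp = logistic(a[1] + b[1] * z_t)   # P(expansion -> expansion)
    prior_rec = p_stay_rec * p_prev + (1.0 - p_stay_exp) * (1.0 - p_prev)
    like_rec = normal_pdf(y_t, mu[0], sd[0])   # growth density in recession
    like_exp = normal_pdf(y_t, mu[1], sd[1])   # growth density in expansion
    return (like_rec * prior_rec
            / (like_rec * prior_rec + like_exp * (1.0 - prior_rec)))

# Made-up parameters: a positive oil shock (z_t = 1.5) raises the odds of
# entering and remaining in recession; observed growth y_t = -1.0 is weak
post = filter_step(p_prev=0.1, y_t=-1.0, z_t=1.5,
                   mu=(-0.5, 0.8), sd=(1.0, 0.8), a=(1.0, 2.0), b=(0.5, -0.8))
```

Iterating this update over the sample yields the filtered recession probabilities from which turning points and marginal effects of oil shocks can be read off.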
Recent research [e.g., DeMiguel, Garlappi, and Uppal (2009), Review of Financial Studies] has cast doubt on the out-of-sample performance of optimizing portfolio strategies relative to naive, equally weighted ones. However, existing results concern the simple case in which an investor has a one-month horizon and mean-variance preferences. In this paper, we examine whether their result holds for longer investment horizons, when the asset menu includes bonds and real estate beyond stocks and cash, and when the investor is characterized by constant relative risk aversion preferences, which are not locally mean-variance for long horizons. Our experiments indicate that power utility investors with horizons of one year and longer would have on average benefited, ex post, from an optimizing strategy that exploits simple linear predictability in asset returns over the period January 1995 - December 2007. This result is insensitive to the degree of risk aversion, to the number of predictors included in the forecasting model, and to the deduction of transaction costs from measured portfolio performance.
We examine whether simple VARs can produce empirical portfolio rules similar to those obtained under a range of multivariate Markov switching models by studying the effects of expanding both the order of the VAR and the number/selection of predictor variables included. In a typical stock-bond strategic asset allocation problem on US data, we compute the out-of-sample certainty equivalent returns for a wide range of VARs, compare these measures of performance with those typical of non-linear models that account for bull-bear dynamics, and characterize the differences in the implied hedging demands for a long-horizon investor with constant relative risk aversion preferences. In a horse race in which models are considered not in their individuality but as an overall class, we find that a power utility investor with a constant coefficient of relative risk aversion of 5 and a 5-year horizon would be ready to pay as much as 8.1% in real terms to be allowed to select models from the MS class, while an analogous calculation for the whole class of expanding-window VARs leads to a disappointing 0.3% per annum. We conclude that most (if not all) VARs cannot produce portfolio rules, hedging demands, or out-of-sample performances that approximate those obtained from equally simple non-linear frameworks.
This paper presents empirical evidence on the disagreement among Federal Open Market Committee (FOMC) forecasts. In contrast to earlier studies that analyze the range of FOMC forecasts available in the Monetary Policy Report to the Congress, we analyze the forecasts made by each individual member of the FOMC from 1992 to 1998. This newly available dataset, while rich in detail, is short in duration. Even so, we are able to identify a handful of patterns in the forecasts related to i) forecast horizon; ii) whether the individual is a Federal Reserve Bank president, governor, and/or Vice Chairman; and iii) whether the individual is a voting member of the FOMC. Additional comparisons are made between forecasts made by the FOMC and the Survey of Professional Forecasters.
We use a dynamic latent factor model to analyze comovements in OECD budget surpluses. The world factor underlying common fluctuations in budget surpluses across countries explains an average of 28 to 44 percent of the variation in individual country surpluses. The world factor, which can be interpreted as a global budget surplus index, declines substantially in the 1980s and rises throughout much of the 1990s to a peak in 2000, before declining again after the financial crisis of 2008. We then estimate similar world factors in national output gaps, dividend-price ratios, and military spending that significantly explain variation in the world budget surplus factor. Idiosyncratic components of national budget surpluses correlate with well-known “unusual” country circumstances, such as the Swedish banking crisis of the early 1990s.
The number of commercial banks in the United States has fallen by more than 50 percent since 1984. This consolidation of the U.S. banking industry and the accompanying large increase in average (and median) bank size have prompted concerns about the effects of consolidation and increasing bank size on market competition and on the number of banks that regulators deem “too-big-to-fail.” Agency problems and perverse incentives created by government policies are often cited as reasons why many banks have pursued acquisitions and growth, though bankers often point to economies of scale. This paper presents new estimates of ray-scale and expansion-path scale economies for U.S. banks based on non-parametric local-linear estimation of a model of bank costs. Unlike prior studies that use models with restrictive parametric assumptions or limited samples, our methodology is fully non-parametric and we estimate returns to scale for all U.S. banks over the period 1984–2006. Our estimates indicate that as recently as 2006, most U.S. banks faced increasing returns to scale, suggesting that scale economies are a plausible (but not necessarily only) reason for the growth in average bank size and that the tendency toward increasing scale is likely to continue unless checked by government intervention.
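Local-linear estimation of a conditional mean, the building block of the non-parametric cost model mentioned above, can be sketched with one regressor; the Gaussian kernel, bandwidth, and data below are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local-linear (kernel-weighted least squares) estimate of E[y | x = x0].
    Fits an intercept and slope locally; the intercept is the fit at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w                                  # weight each observation
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]                                 # local intercept

x = np.linspace(0.0, 2.0, 41)
y = 2.0 + 3.0 * x            # exactly linear "cost" data for illustration
est = local_linear(1.0, x, y, h=0.3)
```

Because no global functional form is imposed, local slopes (and hence scale economies) can vary freely with bank size, which is what lets the study avoid the restrictive parametric assumptions of earlier work.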
This paper presents analytical, Monte Carlo, and empirical evidence linking in-sample tests of predictive content and out-of-sample forecast accuracy. Our approach focuses on the negative effect that finite-sample estimation error has on forecast accuracy despite the presence of significant population-level predictive content. Specifically, we derive simple-to-use in-sample tests that assess not only whether a particular variable has predictive content but also whether this content is estimated precisely enough to improve forecast accuracy. Our tests are asymptotically non-central chi-square or non-central normal. We provide a convenient bootstrap method for computing the relevant critical values. In the Monte Carlo and empirical analysis, we compare the effectiveness of our testing procedure with more common testing procedures.
This paper develops bootstrap methods for testing whether, in a finite sample, competing out-of-sample forecasts from nested models are equally accurate. Most prior work on forecast tests for nested models has focused on a null hypothesis of equal accuracy in population — basically, whether coefficients on the extra variables in the larger, nesting model are zero. We instead use an asymptotic approximation that treats the coefficients as non-zero but small, such that, in a finite sample, forecasts from the small model are expected to be as accurate as forecasts from the large model. Under that approximation, we derive the limiting distributions of pairwise tests of equal mean square error, and develop bootstrap methods for estimating critical values. Monte Carlo experiments show that our proposed procedures have good size and power properties for the null of equal finite-sample forecast accuracy. We illustrate the use of the procedures with applications to forecasting stock returns and inflation.
Many studies have documented disparities in the regional responses to monetary policy shocks. However, because of computational issues, the literature has often neglected the richest level of disaggregation: the city. In this paper, we estimate the city-level responses to monetary policy shocks in a Bayesian VAR. The Bayesian VAR allows us to model the entire panel of metropolitan areas through the imposition of a shrinkage prior. We then seek the origin of the city-level asymmetric responses. We find strong evidence that population density and the size of the local government sector mitigate the effects of monetary policy on local employment. The roles of the traditional interest rate, equity, and credit channels are marginalized relative to the previous findings based on less-granular definitions of regions. However, the relevance of the interest rate and credit channels appears to be more robust to business cycle uncertainty.
It has become common practice to estimate the response of asset prices to monetary policy actions using market-based measures such as the unexpected change in the federal funds futures rate as proxies for monetary policy shocks. I show that because interest rates and market-based measures of monetary policy shocks respond simultaneously to all news rather than simply news about monetary policy actions, estimates of the response of interest rates to monetary policy using only monetary policy news measures are biased. I propose a methodology that corrects for this “joint-response bias.” The results indicate that when the bias is accounted for the response of Treasury yields to monetary policy actions is considerably smaller than previously estimated.
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
Nearly all journal rankings in economics use some weighted average of citations to calculate a journal’s impact. These rankings are often used, formally or informally, to help assess the publication success of individual economists or institutions. Although ranking methods and opinions are legion, scant attention has been paid to the usefulness of any ranking as representative of the many articles published in a journal. First, because the distributions of citations across articles within a journal are seriously skewed, and the skewness differs across journals, the appropriate measure of central tendency is the median rather than the mean. Second, large shares of articles in the highest-ranked journals are cited less frequently than typical articles in much-lower-ranked journals. Finally, a ranking that uses the h-index is very similar to one that uses total citations, making it less than ideal for assessing the typical impact of articles within a journal.
This paper develops a framework for inferring common Markov-switching components in a panel data set with large cross-section and time-series dimensions. We apply the framework to studying similarities and differences across U.S. states in the timing of business cycles. We hypothesize that there exists a small number of cluster designations, with individual states in a given cluster sharing certain business cycle characteristics. We find that although oil-producing and agricultural states can sometimes experience a separate recession from the rest of the United States, for the most part, differences across states appear to be a matter of timing, with some states entering recession or recovering before others.
Advances in information-processing technology have significantly eroded the advantages of small scale and proximity to customers that traditionally enabled community banks and other small-scale lenders to thrive. Nonetheless, U.S. credit unions have experienced increasing membership and market share, though consolidation has reduced the number of credit unions and increased their average size. We investigate the evolution of the efficiency and productivity of U.S. credit unions between 1989 and 2006 using a new methodology that benchmarks the performance of individual firms against an estimated order-α quantile lying “near” the efficient frontier. We construct a cost analog of the widely-used Malmquist productivity index, and decompose the index to estimate changes in cost and scale efficiency, and changes in technology, that explain changes in cost-productivity. We find that cost-productivity fell on average across all credit unions but especially among smaller credit unions. Smaller credit unions confronted an unfavorable shift in technology that increased the minimum cost required to produce given amounts of output. In addition, all but the largest credit unions became less scale efficient over time.
We analyze the relationship between housing and the business cycle in a set of 51 U.S. cities. Most surprisingly, we find that declines in house prices are often not followed by declines in employment. We also find that national permits are a better leading indicator for a city’s employment than a city’s own permits.
We simultaneously identify two government spending shocks: military spending shocks as defined by Ramey (2011) and federal spending shocks as defined by Perotti (2008). We analyze the effect of these shocks on state-level personal income and employment. We find regional patterns in the manner in which both shocks affect state-level variables. Moreover, we find differences in the propagation mechanisms for military versus non-military spending shocks. The former benefit economies with larger manufacturing and retail sectors and states that receive military contracts. While non-military shocks also benefit states with the proper industrial mix, they appear to stimulate economic activity in lower-income states.
Welfare gains to long-horizon investors may derive from time diversification that exploits non-zero intertemporal return correlations associated with predictable returns. Real estate may thus become more desirable if its returns are negatively serially correlated. While it could be important for long-horizon investors, time diversification has been mostly investigated in asset menus without real estate and with a focus on in-sample experiments. This paper evaluates ex post, out-of-sample gains from diversification when E-REITs belong to the investment opportunity set. We find that diversification into REITs increases both the Sharpe ratio and the certainty equivalent of wealth for all investment horizons and for both Classical and Bayesian (who account for parameter uncertainty) investors. The increases in Sharpe ratios are often statistically significant. However, the out-of-sample average Sharpe ratio and realized expected utility of long-horizon portfolios are frequently lower than those of a one-period portfolio, which casts doubt on the value of time diversification.
Using a self-exciting threshold autoregressive model, we confirm the presence of nonlinearities in sectoral real exchange rate (SRER) dynamics across Mexico, Canada, and the US in the pre-NAFTA and post-NAFTA periods. Measuring transaction costs using the estimated threshold bands, we find evidence that Mexico still faces higher transaction costs than its developed counterparts. Trade liberalization is associated with reduced transaction costs and lower relative price differentials among countries. Other determinants of transaction costs are distance and nominal exchange rate volatility. Our results show that the half-lives of SRER shocks, calculated by Monte Carlo integration, imply much faster adjustment in the post-NAFTA period.
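Half-lives by Monte Carlo integration can be sketched for a stylized SETAR(1): a shocked and an unshocked path share the same innovations, and the half-life is the first horizon at which the average difference between them falls to half the initial shock. The coefficients, band, and shock size below are illustrative, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def setar_step(y, phi_in, phi_out, band, eps):
    """One step of a SETAR(1): coefficient phi_in inside the transaction-cost
    band, phi_out (typically faster-reverting) outside it."""
    return (phi_in if abs(y) <= band else phi_out) * y + eps

def half_life_mc(phi_in, phi_out, band, shock, n_sims=500, horizon=400, sd=0.01):
    """Monte Carlo half-life of a shock: average the difference between a
    shocked and an unshocked path driven by identical innovations, and return
    the first horizon at which that response drops to half the shock."""
    diffs = np.zeros(horizon)
    for _ in range(n_sims):
        eps = rng.standard_normal(horizon) * sd
        y_shocked, y_base = shock, 0.0
        for h in range(horizon):
            y_shocked = setar_step(y_shocked, phi_in, phi_out, band, eps[h])
            y_base = setar_step(y_base, phi_in, phi_out, band, eps[h])
            diffs[h] += y_shocked - y_base
    avg = diffs / n_sims
    below = np.nonzero(np.abs(avg) <= abs(shock) / 2.0)[0]
    return int(below[0]) + 1 if below.size else None

# Sanity check: with identical regimes the model is a linear AR(1),
# so the half-life matches the textbook value for phi = 0.9
hl_linear = half_life_mc(0.9, 0.9, 1e9, 1.0, n_sims=20)
```

With distinct regimes, reversion outside the band dominates the response to large shocks, which is how narrower post-NAFTA bands translate into the faster adjustment reported above.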
We study the cross-section correlations of net, total, and disaggregated capital flows for the major source and recipient European Union countries. We seek evidence of changes in these correlations since the introduction of the euro to understand whether the European Union can be considered a unique entity with regard to its international capital flows. We make use of Ng's (2006) “uniform spacing” methodology to rank cross-section correlations and shed light on potential common factors driving international capital flows. We find that a common factor structure is suitable for equity flows disaggregated by sign but not for net and total flows. We only find mixed evidence that correlations between types of flows have changed since the introduction of the euro.