This article examines how the real economy, inflation, and inflation expectations evolved in response to the six tightening episodes enacted by the FOMC since 1983. The findings indicate that the sixth episode (2015-2018) differed in several key dimensions from the previous five episodes. In the first five episodes, the data show the FOMC was generally tightening into a strengthening economy with building price pressures. In contrast, in the final episode the FOMC began its tightening regime during a deceleration in economic activity and with headline and core inflation remaining well below the FOMC’s 2 percent inflation target. Moreover, both short- and long-term inflation expectations were drifting lower. These developments helped explain why there was a one-year gap between the first and second increases in the federal funds target rate in the final episode. Another key difference is that in three of the first five episodes, the FOMC continued to tighten after the yield curve inverted; a recession then followed shortly thereafter. However, in the final episode, the FOMC ended its tightening policy about eight months before the yield curve inverted.
Deciding to undertake a series of tightening actions presents unique challenges for Federal Reserve policymakers. These challenges are both political and economic. Using a variety of economic and financial market metrics, this article examines how the economy and financial markets evolved in response to the five tightening episodes enacted by the FOMC since 1983. The primary aim is to compare the most recent episode, from December 2015 to December 2018, with the previous four episodes. The findings in this article indicate that the current episode bears some resemblance to previous Fed tightening episodes, but also differs in several key dimensions. For example, in the first four episodes, the data show the FOMC was generally tightening into a strengthening economy with building price pressures. In contrast, in the fifth episode the FOMC began its tightening regime during a deceleration in economic activity and with headline and core inflation remaining well below the FOMC’s 2 percent inflation target. Moreover, both short- and long-term inflation expectations were drifting lower. These developments helped explain why there was a one-year gap between the first and second increases in the federal funds target rate in the most recent episode. Another key difference is that in three of the first four episodes, the FOMC continued to tighten after the yield curve inverted; a recession then followed shortly thereafter. However, in the final episode, the FOMC ended its tightening policy about eight months before the yield curve inverted. It remains to be seen if a recession follows this inversion.
Based on a switching-cost model, we examine empirically the hypotheses that bank loan mark-ups are countercyclical and asymmetric in their responsiveness to recessionary and expansionary impulses. The first econometric model treats changes in the mark-up as a continuous variable. The second treats them as an ordered categorical variable due to the discrete nature of prime rate changes. By allowing the variance to switch over time as a Markov process, we present the first conditionally heteroscedastic discrete choice (ordered probit) model for time-series applications. This feature yields a remarkable improvement in the likelihood function. Specifications that do not account for conditional heteroscedasticity find evidence of both countercyclical and asymmetric mark-up behavior. In contrast, the heteroscedastic ordered probit finds the mark-up to be countercyclical but not significantly asymmetric. We explain why controlling for conditional heteroscedasticity may be important when testing for downward stickiness in loan rates.
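The heteroscedastic ordered probit described above can be illustrated with a minimal sketch: category probabilities come from a standard ordered probit, except that the error scale differs by regime (in the paper the regime follows a Markov process). The function names, cutpoints, and the two regime volatilities below are illustrative, not the paper's estimates.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordered_probit_probs(xb, cuts, sigma):
    """Category probabilities for an ordered probit with scale sigma.

    xb    : linear index x'beta for one observation
    cuts  : increasing cutpoints c_1 < ... < c_{K-1}
    sigma : regime-dependent error standard deviation
            (the conditionally heteroscedastic part)
    """
    edges = [-math.inf] + list(cuts) + [math.inf]
    return [norm_cdf((edges[k + 1] - xb) / sigma) -
            norm_cdf((edges[k] - xb) / sigma)
            for k in range(len(edges) - 1)]

# Two illustrative regimes (calm vs. volatile) show how the same linear
# index spreads probability mass across the ordered categories very
# differently once the variance is allowed to switch.
p_calm = ordered_probit_probs(0.2, [-0.5, 0.5], sigma=0.5)
p_vol  = ordered_probit_probs(0.2, [-0.5, 0.5], sigma=2.0)
```

In the volatile regime the middle category receives far less mass, which is why ignoring the variance switch can masquerade as asymmetric mark-up behavior.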
The Federal Reserve’s unconventional monetary policy announcements in 2008-2009 substantially reduced international long-term bond yields and the spot value of the dollar. These changes closely followed announcements and were very unlikely to have occurred by chance. A simple portfolio choice model can produce quantitatively plausible changes in U.S. and foreign excess bond yields. The jump depreciations of the USD are fairly consistent with estimates of the impacts of previous equivalent monetary policy shocks. The policy announcements do not appear to have reduced yields by reducing expectations of real growth. Unconventional policy can reduce international long-term yields and the value of the dollar even at the zero bound.
Quantitative macroeconomics is often portrayed as a science—because of its intensive use of high-powered mathematics—with the possible limitation of being unable to conduct controlled experiments. To qualify as a science, however, theories in that discipline must meet a minimum number of criteria: (i) it has explanatory power to explain phenomena; (ii) it has predictive power to yield quantifiable and falsifiable statements about new phenomena; and (iii) it has operational power to change the world.

A scientific theory consists of axioms and working hypotheses that facilitate the derivation of contestable statements from the axioms. Hence, simply laying out a list of contradictions between a theory’s implications and the data is often insufficient to disqualify a theory as science; it may have just challenged its working hypotheses, not its axioms. But challenging a theory’s working hypotheses is a crucial step to improve or falsify a theory. This is why Isaac Newton spent so much effort in his Principia Mathematica to deal with the law of motion under air friction.

This article discusses one of the working hypotheses of the Arrow-Debreu paradigm and its dynamic stochastic general equilibrium (DSGE) reincarnation in quantitative macroeconomics—the supply curve and its embodiment in the neoclassical production function. The supply curve is a much stronger pillar than the demand curve in holding up the Arrow-Debreu paradigm, but we argue in this article that the neoclassical production function embodying the supply curve is full of cracks.

More specifically, we show that the neoclassical production function is not quantifiable as a working hypothesis to support the Arrow-Debreu DSGE model, unlike the chemical reaction equations based on Lavoisier’s oxygen theory of combustion.
The neoclassical production function relies on the unobservable and unmeasurable Solow residual to explain the quantity of output produced at the firm, industry, or national level, and the hypothetical factors of production (capital and labor) are much like “fire, air, water, and earth” in the ancient Greek theory of the universe. Because the working hypotheses of quantitative macroeconomics are not themselves quantifiable, the neoclassical theory is not yet a science. This explains why DSGE models lacked the power to predict the 2008 financial crisis and why economic theory has been unable to change the world by engineering or recreating economic prosperity in developing countries.
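The point about the residual's unobservability can be made concrete: under a Cobb-Douglas specification the "technology" term is simply backed out as whatever output the measured inputs cannot explain. A minimal sketch, with an assumed capital share and made-up input levels (both are illustrative, not measurements):

```python
import math

def solow_residual(y, k, l, alpha=0.33):
    """Log TFP (the 'Solow residual') implied by a Cobb-Douglas
    production function Y = A * K**alpha * L**(1 - alpha).

    alpha is a conventional capital share, assumed rather than
    measured; the residual absorbs everything the inputs miss.
    """
    return math.log(y) - alpha * math.log(k) - (1.0 - alpha) * math.log(l)

# The residual is computed, never observed: it is the gap between
# measured output and what measured capital and labor would predict.
a = solow_residual(y=100.0, k=300.0, l=50.0)
```

By construction, plugging the residual back into the production function reproduces output exactly, which is precisely why it carries no independent, testable content.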
Despite its role in monetary policy and finance, the expectations hypothesis (EH) of the term structure of interest rates has received virtually no empirical support. The empirical failure of the EH has been attributed to a variety of econometric biases associated with the single-equation models most often used to test it; however, none of these explanations appears to account for the massive failure reported in the literature. We note that traditional tests of the EH are based on two assumptions—the EH per se and an assumption about the expectations generating process (EGP) for the short-term rate. Arguing that conventional tests of the EH could reject it because the EGP embedded in these tests is significantly at odds with the true EGP, we investigate this possibility by analyzing the out-of-sample predictive performance of several models for predicting interest rates and a model that assumes the EH holds. Using standard methods that take into account parameter uncertainty, the null hypothesis of equal predictive accuracy of each model relative to the random walk alternative is never rejected.
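The raw input to an equal-predictive-accuracy test is a comparison of out-of-sample forecast errors between a candidate model and the random-walk benchmark, whose forecast of next period's rate is simply the current rate. A minimal sketch with toy numbers; the mean-reverting model and all figures below are illustrative, not the paper's specifications:

```python
def rmse(forecasts, actuals):
    """Root mean squared out-of-sample forecast error."""
    n = len(actuals)
    return (sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / n) ** 0.5

# Toy short-rate series (illustrative numbers, not data from the paper).
rates = [5.0, 5.1, 5.3, 5.2, 5.0, 4.8, 4.9]

# Random-walk benchmark: tomorrow's rate is forecast to equal today's.
rw_forecasts = rates[:-1]
actuals = rates[1:]
rw_rmse = rmse(rw_forecasts, actuals)

# A hypothetical mean-reverting model pulls forecasts toward 5.0;
# comparing its RMSE against rw_rmse is the comparison that formal
# tests (with parameter-uncertainty corrections) then make rigorous.
mr_forecasts = [r + 0.3 * (5.0 - r) for r in rates[:-1]]
mr_rmse = rmse(mr_forecasts, actuals)
```

The paper's finding is that, once parameter uncertainty is accounted for, no model's error is statistically distinguishable from the random walk's.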
This article presents a new Qual VAR model for incorporating information from qualitative and/or discrete variables in vector autoregressions. With a Qual VAR, it is possible to create dynamic forecasts of the qualitative variable using standard VAR projections. Previous forecasting methods for qualitative variables, in contrast, only produce static forecasts. I apply the Qual VAR to forecasting the 2001 business recession out of sample and to analyzing the Romer and Romer (1989) narrative measure of monetary policy contractions as an endogenous variable in a VAR. Out of sample, the model predicts the timing of the 2001 recession quite well relative to the recession probabilities put forth at the time by professional forecasters. Qual VARs -- which include information about the qualitative variable -- can also enhance the quality of density forecasts of the other variables in the system.
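The difference between static and dynamic forecasts is easy to see in a sketch: once the qualitative variable's latent continuous counterpart sits inside the VAR, multi-step forecasts come from iterating the system forward and reading the indicator off the latent variable's sign. The bivariate VAR(1), its coefficients, and the starting latent value below are all illustrative; in the actual Qual VAR the latent series is sampled by MCMC consistent with the observed 0/1 outcomes.

```python
def qualvar_dynamic_forecast(latent0, other0, A, horizon):
    """Iterate a bivariate VAR(1) in (latent y*, observed x) forward and
    map the latent variable back to the qualitative outcome via its sign.

    A       : 2x2 coefficient matrix as nested lists (illustrative)
    horizon : number of steps ahead
    Returns a list of (recession_flag, latent, other) tuples.
    """
    y, x = latent0, other0
    path = []
    for _ in range(horizon):
        y, x = (A[0][0] * y + A[0][1] * x,
                A[1][0] * y + A[1][1] * x)
        path.append((y > 0.0, y, x))
    return path

# Illustrative dynamics: the latent 'recession propensity' decays on its
# own but is pushed around by the observed variable, so the qualitative
# forecast evolves with the whole system rather than staying static.
path = qualvar_dynamic_forecast(0.5, 1.0, [[0.8, 0.1], [0.0, 0.9]], 3)
```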
We estimate a number of multivariate regime-switching VAR models on a long monthly data set for eight variables that include excess stock and bond returns, the real T-bill yield, predictors used in the finance literature (default spread and the dividend yield), and three macroeconomic variables (inflation, real industrial production growth, and a measure of real money growth). Heteroskedasticity may be accounted for by making the covariance matrix a function of the regime. We find evidence of four regimes and of time-varying covariances. The best in-sample fit is provided by a four-state model in which the VAR(1) component fails to be regime-dependent. We interpret this as evidence that the dynamic linkages between financial markets and the macroeconomy have been stable over time. We show that the four-state model can be helpful in forecasting applications and in providing one-step-ahead predicted Sharpe ratios.
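The one-step-ahead predicted Sharpe ratio in a regime-switching model is a mixture calculation: filtered regime probabilities weight each regime's mean and variance, and the mixture variance picks up an extra term from dispersion in the regime means. A two-regime scalar sketch with made-up numbers (the paper's model has four regimes and a full covariance matrix):

```python
def mixture_sharpe(probs, means, variances):
    """One-step-ahead Sharpe ratio of an excess return whose mean and
    variance depend on a latent regime with given probabilities."""
    mu = sum(p * m for p, m in zip(probs, means))
    # E[r^2] mixes each regime's second moment v + m^2 ...
    second = sum(p * (v + m * m)
                 for p, m, v in zip(probs, means, variances))
    # ... so the mixture variance exceeds the average within-regime
    # variance whenever the regime means differ.
    var = second - mu * mu
    return mu / var ** 0.5

# Calm regime (likely): small positive mean, low variance;
# crash regime (unlikely): negative mean, high variance.
# All numbers are illustrative.
s = mixture_sharpe([0.7, 0.3], [0.01, -0.02], [0.001, 0.004])
```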
This paper analyzes the empirical performance of two alternative ways in which multi-factor models with time-varying risk exposures and premia may be estimated. The first method echoes the seminal two-pass approach advocated by Fama and MacBeth (1973). The second approach extends previous work by Ouysse and Kohn (2010) and is based on a Bayesian approach to modelling the latent process followed by risk exposures and idiosyncratic volatility. Our application to monthly, 1979-2008 U.S. data for stock, bond, and publicly traded real estate returns shows that the classical, two-stage approach that relies on a nonparametric, rolling-window modelling of time-varying betas yields results that are unreasonable. There is evidence that all the portfolios of stocks, bonds, and REITs have been grossly overpriced. In contrast, the Bayesian approach yields sensible results, as most portfolios do not appear to have been mispriced and a few risk premia are precisely estimated with a plausible sign. Real consumption growth risk turns out to be the only factor that is persistently priced throughout the sample.
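The classical two-pass machinery the paper starts from is compact: first-pass time-series regressions give each portfolio a beta, and the second pass runs a cross-sectional regression of returns on those betas period by period, averaging the slopes to estimate the factor premium. A single-factor sketch of the second pass; the betas and returns are made up:

```python
def cross_sectional_slope(betas, returns):
    """OLS slope (with intercept) of one period's returns on betas:
    the period-t estimate of the factor risk premium."""
    n = len(betas)
    mb = sum(betas) / n
    mr = sum(returns) / n
    cov = sum((b - mb) * (r - mr) for b, r in zip(betas, returns))
    var = sum((b - mb) ** 2 for b in betas)
    return cov / var

def fama_macbeth_premium(betas, returns_by_period):
    """Second-pass Fama-MacBeth estimate: average the period-by-period
    cross-sectional slopes over time."""
    slopes = [cross_sectional_slope(betas, r) for r in returns_by_period]
    return sum(slopes) / len(slopes)

# Three portfolios whose returns line up with their betas; the two
# period slopes (0.8% and 1.2%) average to a 1% premium per period.
premium = fama_macbeth_premium([0.5, 1.0, 1.5],
                               [[0.006, 0.010, 0.014],
                                [0.004, 0.010, 0.016]])
```

The paper's rolling-window variant replaces the fixed betas above with window-by-window estimates, which is where the implausible pricing results arise.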
Smooth-transition autoregressive (STAR) models have proven to be worthy competitors of Markov-switching models of regime shifts, but the assumption of a time-invariant threshold level does not seem realistic and it holds back this class of models from reaching their potential usefulness. Indeed, an estimate of a time-varying threshold level of unemployment, for example, might serve as a meaningful estimate of the natural rate of unemployment. More precisely, within a STAR framework, one might call the time-varying threshold the “tipping level” rate of unemployment, at which the mean and dynamics of the unemployment rate shift. In addition, once the threshold level is allowed to be time-varying, one can add an error-correction term—between the lagged level of unemployment and the lagged threshold level—to the autoregressive terms in the STAR model. In this way, the time-varying latent threshold level serves dual roles: as a demarcation between regimes and as part of an error-correction term.
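The mechanics can be sketched compactly: a logistic weight moves the process between regimes as lagged unemployment crosses the lagged threshold, and the same unemployment-threshold gap enters as an error-correction term. Parameter names and values below are illustrative, not the paper's notation or estimates.

```python
import math

def lstar_step(u_lag, c_lag, phi, gamma, mu_low, mu_high, lam):
    """One step of a logistic STAR with a time-varying threshold c_t.

    G runs from 0 to 1 as lagged unemployment u_{t-1} crosses the
    lagged 'tipping level' c_{t-1} (gamma controls how abruptly);
    the same gap (u_{t-1} - c_{t-1}) also enters as an
    error-correction term with loading lam.
    """
    G = 1.0 / (1.0 + math.exp(-gamma * (u_lag - c_lag)))
    intercept = (1.0 - G) * mu_low + G * mu_high  # regime-weighted mean
    return intercept + phi * u_lag + lam * (u_lag - c_lag)

# Well above the threshold the high regime dominates (G near 1) ...
u_high = lstar_step(6.0, 4.0, phi=0.9, gamma=50.0,
                    mu_low=0.1, mu_high=0.5, lam=-0.05)
# ... well below it the low regime does (G near 0).
u_low = lstar_step(2.0, 4.0, phi=0.9, gamma=50.0,
                   mu_low=0.1, mu_high=0.5, lam=-0.05)
```

In the full model the threshold c_t is itself a latent process estimated alongside the other parameters, which is what lets it play its dual role.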