Intro to DSGE + State Space Models

Intro

Background

A DSGE Model

Small-Scale DSGE Model

Final Goods Producers

Intermediate Goods Producers

Households

Monetary and Fiscal Policy

Exogenous Processes

Equilibrium Conditions

Equilibrium Conditions – Continued

Steady State

Solving DSGE Models

What is a Local Approximation?

What is a Log-Linear Approximation?

Loglinearization of New Keynesian Model

Canonical Linear Rational Expectations System

How Can One Solve Linear Rational Expectations Systems? A Simple Example

A Simple Example

Solving a More General System

Proposition

If there exists a solution to Eq. (\ref{eq_stabcond}) that expresses the forecast errors as a function of the fundamental shocks \(\epsilon_t\) and sunspot shocks \(\zeta_t\), it is of the form

\begin{eqnarray} \eta_t &=& \eta_1 \epsilon_t + \eta_2 \zeta_t \label{eq_etasol} \\ &=& ( - V_{.1}D_{11}^{-1} U_{.1}^{\prime}\Psi_x^J + V_{.2} \widetilde{M}) \epsilon_t + V_{.2} M_\zeta \zeta_t, \notag \end{eqnarray}

where \(\widetilde{M}\) is a \((k-r) \times l\) matrix, \(M_\zeta\) is a \((k-r) \times p\) matrix, and the dimension of \(V_{.2}\) is \(k\times (k-r)\). The solution is unique if \(k = r\) and \(V_{.2}\) is zero.

At the End of the Day…

Measurement Equation

State Space Models and The Kalman Filter

State Space Models


A state space model consists of a measurement equation, which relates the observables to the latent state variables, and a transition equation, which describes the evolution of the states.

Measurement Equation

The measurement equation is of the form

\begin{eqnarray} y_t = Z_{t|t-1} s_t + d_{t|t-1} + u_t , \quad t=1,\ldots,T \end{eqnarray}

where \(y_t\) is an \(n \times 1\) vector of observables, \(s_t\) is an \(m \times 1\) vector of state variables, \(Z_{t|t-1}\) is an \(n \times m\) matrix, \(d_{t|t-1}\) is an \(n\times 1\) vector, and \(u_t\) are innovations (or often ``measurement errors'') with mean zero and \(\mathbb{E}_{t-1}[ u_t u_t'] = H_{t|t-1}\).

Transition Equation

The transition equation is of the form

\begin{eqnarray} s_t = T_{t|t-1} s_{t-1} + c_{t|t-1} + R_{t|t-1} \eta_t \end{eqnarray}

where \(R_{t|t-1}\) is \(m \times g\), and \(\eta_t\) is a \(g \times 1\) vector of innovations with mean zero and variance \(\mathbb{E}_{t-1}[ \eta_t \eta_t'] = Q_{t|t-1}\).
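The two equations can be combined into a short simulation sketch. All names and numbers below are illustrative choices (a one-observable, two-state, one-shock system with time-invariant matrices), not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-invariant system matrices: n=1 observable, m=2 states, g=1 shock
Z = np.array([[1.0, 0.5]])          # n x m measurement loading
d = np.array([0.2])                 # n x 1 measurement constant
H = np.array([[0.01]])              # measurement error variance
T = np.array([[0.9, 0.0],
              [1.0, 0.0]])          # m x m transition matrix
c = np.zeros(2)                     # m x 1 transition constant
R = np.array([[1.0], [0.0]])        # m x g shock loading
Q = np.array([[0.04]])              # structural shock variance

def simulate(T_periods=200, s0=None):
    """Draw y_1,...,y_T from the linear Gaussian state space model."""
    m, g, n = T.shape[0], Q.shape[0], Z.shape[0]
    s = np.zeros(m) if s0 is None else s0
    ys = np.empty((T_periods, n))
    for t in range(T_periods):
        eta = rng.multivariate_normal(np.zeros(g), Q)
        s = T @ s + c + R @ eta                  # transition equation
        u = rng.multivariate_normal(np.zeros(n), H)
        ys[t] = Z @ s + d + u                    # measurement equation
    return ys
```

A draw from this system is what the Kalman filter below takes as input.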

Adding it all up

If the system matrices \(Z_t, d_t, H_t, T_t, c_t, R_t, Q_t\) are non-stochastic and predetermined, then the system is linear and \(y_t\) can be expressed as a function of present and past \(u_t\)'s and \(\eta_t\)'s. Under these assumptions we can recursively

  1. calculate predictions \(y_t|Y^{t-1}\), where \(Y^{t-1} = [ y_{t-1}, \ldots, y_1]\),
  2. obtain a likelihood function \[ p(Y^T| \{Z_t, d_t, H_t, T_t, c_t, R_t, Q_t \}) \]
  3. back out a sequence \[ \left\{ p(s_t |Y^t, \{Z_\tau, d_\tau, H_\tau, T_\tau, c_\tau, R_\tau, Q_\tau \} ) \right\} \]

The algorithm is called the Kalman Filter and was originally adopted from the engineering literature.

A Useful Lemma

Let \((x',y')'\) be jointly normal with \[ \mu = \left[ \begin{array}{c} \mu_x \\ \mu_y \end{array} \right] \quad \mbox{and} \quad \Sigma = \left[ \begin{array}{cc} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{array} \right] \] Then the distribution of \(x\) conditional on \(y\) is multivariate normal with

\begin{eqnarray} \mu_{x|y} &=& \mu_x + \Sigma_{xy} \Sigma_{yy}^{-1}(y - \mu_y) \\ \Sigma_{xx|y} &=& \Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{yx} \end{eqnarray}

\(\Box\)
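As a numerical check on the lemma, a minimal sketch (the function name and the test numbers are illustrative):

```python
import numpy as np

def condition_on_y(mu_x, mu_y, Sxx, Sxy, Syy, y):
    """Mean and covariance of x | y for jointly normal (x, y),
    following the conditional normal lemma."""
    K = Sxy @ np.linalg.inv(Syy)        # Sigma_xy Sigma_yy^{-1}
    mu_cond = mu_x + K @ (y - mu_y)     # conditional mean
    Sigma_cond = Sxx - K @ Sxy.T        # conditional covariance
    return mu_cond, Sigma_cond
```

In the scalar case with unit variances and covariance 0.5, observing \(y = 2\) shifts the mean of \(x\) to 1 and shrinks its variance to 0.75.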

A Bayesian Interpretation to the Kalman Filter


**Note:** The subsequent analysis is conditional on the system matrices \(Z_t, d_t, H_t, T_t, c_t, R_t, Q_t\). For notational convenience we will, however, drop the system matrices from the conditioning set.

The calculations are based on the following conditional distributions, represented by their densities:

  1. Initialization: \(p(s_{t-1}|Y^{t-1})\)

  2. Forecasting:

    \begin{eqnarray*} p(s_t|Y^{t-1}) &=& \int p(s_t|s_{t-1}, Y^{t-1} ) p(s_{t-1}|Y^{t-1}) ds_{t-1} \\ p(y_t|Y^{t-1}) & = & \int p(y_t | s_t, Y^{t-1} ) p(s_t|Y^{t-1}) d s_t \end{eqnarray*}

  3. Updating: \[ p(s_t|Y^t) = \frac{ p(y_t|s_t, Y^{t-1} ) p(s_t|Y^{t-1}) }{ p(y_t|Y^{t-1} )} \]

Initialization

Forecasting

Since \(s_{t-1}\) and \(\eta_t\) are independent multivariate normal random variables, it follows that

\begin{eqnarray} s_t |Y^{t-1} \sim {\cal N}( \hat{s}_{t|t-1}, P_{t|t-1}) \end{eqnarray}

where

\begin{eqnarray*} \hat{s}_{t|t-1} & = & T_t A_{t-1} + c_t \\ P_{t|t-1} & = & T_t P_{t-1} T_t' + R_t Q_t R_t' \end{eqnarray*}

Forecasting \(y_t\)

The conditional distribution of \(y_t|s_t, Y^{t-1}\) is of the form

\begin{eqnarray} y_t|s_t, Y^{t-1} \sim {\cal N}(Z_t s_t + d_t, H_t) \end{eqnarray}

Since \(s_t|Y^{t-1} \sim {\cal N}( \hat{s}_{t|t-1}, P_{t|t-1})\), we can deduce that the marginal distribution of \(y_t\) conditional on \(Y^{t-1}\) is of the form

\begin{eqnarray} y_t | Y^{t-1} \sim {\cal N} (\hat{y}_{t|t-1}, F_{t|t-1}) \end{eqnarray}

where

\begin{eqnarray*} \hat{y}_{t|t-1} & = & Z_t \hat{s}_{t|t-1} + d_t \\ F_{t|t-1} & = & Z_t P_{t|t-1} Z_t' + H_t \end{eqnarray*}

Updating

To obtain the posterior distribution of \(s_t | y_t, Y^{t-1}\) note that

\begin{eqnarray} s_t & = & \hat{s}_{t|t-1} + (s_t - \hat{s}_{t|t-1}) \\ y_t & = & Z_t \hat{s}_{t|t-1} + d_t + Z_t(s_t - \hat{s}_{t|t-1}) + u_t \end{eqnarray}

and the joint distribution of \(s_t\) and \(y_t\) is given by

\begin{eqnarray} \left[ \begin{array}{c} s_t \\ y_t \end{array} \right] \Big| Y^{t-1} \sim {\cal N} \left( \left[ \begin{array}{c} \hat{s}_{t|t-1} \\ \hat{y}_{t|t-1} \end{array} \right], \left[ \begin{array}{cc} P_{t|t-1} & P_{t|t-1} Z_t' \\ Z_t P_{t|t-1} & F_{t|t-1} \end{array} \right] \right) \end{eqnarray}

An application of the lemma stated above yields the posterior distribution

\begin{eqnarray} s_t | y_t , Y^{t-1} \sim {\cal N}(A_t, P_t) \end{eqnarray}

where

\begin{eqnarray*} A_t & = & \hat{s}_{t|t-1} + P_{t|t-1}Z_t'F_{t|t-1}^{-1}(y_t - Z_t\hat{s}_{t|t-1} - d_t)\\ P_t & = & P_{t|t-1} - P_{t|t-1} Z_t'F_{t|t-1}^{-1}Z_tP_{t|t-1} \end{eqnarray*}

The conditional mean and variance \(\hat{y}_{t|t-1}\) and \(F_{t|t-1}\) were given above. This completes one iteration of the algorithm. The posterior \(s_t|Y^t\) will serve as prior for the next iteration. \(\Box\)
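One iteration of the recursion (forecasting, forecasting \(y_t\), updating) can be sketched as follows; the function name is illustrative and the matrices are passed in explicitly:

```python
import numpy as np

def kalman_step(A_prev, P_prev, y, Z, d, H, T, c, R, Q):
    """One forecasting + updating iteration of the Kalman filter.

    A_prev, P_prev are the posterior mean and variance of s_{t-1} | Y^{t-1}.
    Returns the posterior mean/variance of s_t | Y^t along with the
    one-step ahead forecast moments (y_hat, F)."""
    # Forecasting: s_t | Y^{t-1}
    s_hat = T @ A_prev + c
    P_pred = T @ P_prev @ T.T + R @ Q @ R.T
    # Forecasting y_t | Y^{t-1}
    y_hat = Z @ s_hat + d
    F = Z @ P_pred @ Z.T + H
    # Updating: s_t | y_t, Y^{t-1}, via the conditional normal lemma
    K = P_pred @ Z.T @ np.linalg.inv(F)     # "Kalman gain"
    A = s_hat + K @ (y - y_hat)
    P = P_pred - K @ Z @ P_pred
    return A, P, y_hat, F
```

Iterating this function over \(t = 1, \ldots, T\), with each posterior serving as the next prior, reproduces the full filter.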

Likelihood Function

We can define the one-step ahead forecast error

\begin{eqnarray} \nu_t = y_t - \hat{y}_{t|t-1} = Z_t (s_t - \hat{s}_{t|t-1}) + u_t \end{eqnarray}

The likelihood function is given by

\begin{eqnarray} p(Y^T | \mbox{parameters} ) & = & \prod_{t=1}^T p(y_t|Y^{t-1}, \mbox{parameters}) \nonumber \\ & = & ( 2 \pi)^{-nT/2} \left( \prod_{t=1}^T |F_{t|t-1}| \right)^{-1/2} \nonumber \\ & ~ & \times \exp \left\{ - \frac{1}{2} \sum_{t=1}^T \nu_t' F_{t|t-1}^{-1} \nu_t \right\} \end{eqnarray}

This representation of the likelihood function is often called the prediction error form, because it is based on the recursive one-step ahead prediction errors \(\nu_t\). \(\Box\)
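Given the sequences of forecast errors \(\nu_t\) and covariances \(F_{t|t-1}\) produced by the filter, the log likelihood follows directly from the decomposition above; a small sketch (function name illustrative):

```python
import numpy as np

def pred_error_loglik(nus, Fs):
    """Gaussian log likelihood in prediction error form:
    sum of log N(nu_t; 0, F_{t|t-1}) over t."""
    ll = 0.0
    for nu, F in zip(nus, Fs):
        n = nu.shape[0]
        sign, logdet = np.linalg.slogdet(F)
        # log-increment: -(n/2) log 2*pi - (1/2) log|F| - (1/2) nu' F^{-1} nu
        ll += -0.5 * (n * np.log(2 * np.pi) + logdet
                      + nu @ np.linalg.solve(F, nu))
    return ll
```

Using `slogdet` and `solve` rather than explicit determinants and inverses is the numerically stable way to evaluate the quadratic form.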

Multistep Forecasting

The Kalman Filter can also be used to obtain multi-step ahead forecasts. For simplicity, suppose that the system matrices are constant over time. Since

\begin{eqnarray} s_{t+h-1} = T^h s_{t-1} + \sum_{s=0}^{h-1} T^s c + \sum_{s=0}^{h-1} T^s R \eta_{t+h-1-s} \end{eqnarray}

it follows that

\begin{eqnarray*} \hat{s}_{t+h-1|t-1} &=& \mathbb{E}[s_{t+h-1}|Y^{t-1} ] = T^h A_{t-1} + \sum_{s=0}^{h-1} T^s c \\ P_{t+h-1|t-1} & = & var[s_{t+h-1}|Y^{t-1} ] = T^h P_{t-1} (T^h)' + \sum_{s=0}^{h-1} T^s RQR' (T^s)' \end{eqnarray*}

which leads to

\begin{eqnarray} y_{t+h-1} | Y^{t-1} \sim {\cal N} (\hat{y}_{t+h-1|t-1}, F_{t+h-1|t-1}) \end{eqnarray}

where

\begin{eqnarray*} \hat{y}_{t+h-1|t-1} & = & Z \hat{s}_{t+h-1|t-1} + d \\ F_{t+h-1|t-1} & = & Z P_{t+h-1|t-1} Z' + H \end{eqnarray*}

The multi-step forecast can be computed recursively, simply by omitting the updating step in the algorithm described above. \(\Box\)
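The recursion without the updating step can be sketched as follows, assuming time-invariant system matrices as in the text (the function name is illustrative):

```python
import numpy as np

def multistep_forecast(A, P, h, Z, d, H, T, c, R, Q):
    """h-step ahead forecast of y: iterate the prediction step h times
    from the current posterior (A, P), skipping the updating step."""
    s_hat, P_s = A, P
    for _ in range(h):
        s_hat = T @ s_hat + c                 # mean propagates through T
        P_s = T @ P_s @ T.T + R @ Q @ R.T     # uncertainty accumulates
    y_hat = Z @ s_hat + d
    F = Z @ P_s @ Z.T + H
    return y_hat, F
```

Each pass through the loop widens the state covariance by one more term of the sum \(\sum_s T^s RQR' (T^s)'\), which is why forecast bands fan out with the horizon.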

Example 1: New Keynesian DSGE

Observables

\includegraphics[width=4in]{dsge1_observables}

Impulse Responses

\begin{center} \includegraphics[width=3.5in]{dsge1_all_irf} \end{center}

Filtered Technology Shock (Mean)

\begin{center} \includegraphics[width=3.5in]{filtered_technology_shock} \end{center}

Log Likelihood Increments

\begin{center} \includegraphics[width=3.5in]{log_lik} \end{center}

Forecast of Output Growth

\begin{center} \includegraphics[width=3.5in]{ygr_forecast} \end{center}

Forecast of Inflation

\begin{center} \includegraphics[width=3.5in]{infl_forecast} \end{center}

Forecast of Interest Rate

\begin{center} \includegraphics[width=3.5in]{int_forecast} \end{center}

Example 2 – ARMA models

Consider the ARMA(1,1) model of the form

\begin{eqnarray} y_t = \phi y_{t-1} + \epsilon_t + \theta \epsilon_{t-1}, \quad \epsilon_t \sim \mbox{iid} \, {\cal N}(0,\sigma^2) \end{eqnarray}

The model can be rewritten in state space form

\begin{eqnarray} y_t & = & [ 1 \; \theta] \left[ \begin{array}{c} \epsilon_t \\ \epsilon_{t-1} \end{array} \right] + \phi y_{t-1}\\ \left[ \begin{array}{c} \epsilon_t \\ \epsilon_{t-1} \end{array} \right] & = & \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{c} \epsilon_{t-1} \\ \epsilon_{t-2} \end{array} \right] + \left[ \begin{array}{c} \eta_t \\ 0 \end{array} \right] \end{eqnarray}

where \(\eta_t \sim \mbox{iid} \, {\cal N}(0,\sigma^2)\). Thus, the state vector is \(s_t = [\epsilon_t, \epsilon_{t-1}]'\) and \(d_{t|t-1} = \phi y_{t-1}\). The Kalman filter can be used to compute the likelihood function of the ARMA model conditional on the parameters \(\phi, \theta, \sigma^2\). A numerical optimization routine has to be used to find the maximum of the likelihood function. \(\Box\)
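A self-contained sketch of this likelihood evaluation (function name illustrative; the state covariance is initialized at its unconditional value, an assumption not spelled out in the text):

```python
import numpy as np

def arma11_loglik(y, phi, theta, sigma2):
    """Log likelihood of an ARMA(1,1), conditional on y_0 = 0, via the
    Kalman filter with state vector s_t = [eps_t, eps_{t-1}]'."""
    Z = np.array([[1.0, theta]])
    Tm = np.array([[0.0, 0.0], [1.0, 0.0]])
    RQR = np.array([[sigma2, 0.0], [0.0, 0.0]])   # R Q R'
    A = np.zeros(2)
    P = np.array([[sigma2, 0.0], [0.0, sigma2]])  # unconditional state variance
    ll, y_lag = 0.0, 0.0
    for yt in y:
        # forecasting step
        A = Tm @ A
        P = Tm @ P @ Tm.T + RQR
        nu = yt - (Z @ A)[0] - phi * y_lag        # d_{t|t-1} = phi * y_{t-1}
        F = (Z @ P @ Z.T)[0, 0]
        # likelihood increment in prediction error form
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + nu**2 / F)
        # updating step
        K = (P @ Z.T)[:, 0] / F
        A = A + K * nu
        P = P - np.outer(K, Z @ P)
        y_lag = yt
    return ll
```

Wrapping this function in a numerical optimizer over \((\phi, \theta, \sigma^2)\) then delivers the maximum likelihood estimates.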

A Model with Time Varying Coefficients

Consider the following regression model with time varying coefficients

\begin{eqnarray} y_t & = & x_t’ \beta_t + u_t \\ \beta_t & = & T \beta_{t-1} + c + \eta_t \end{eqnarray}

There are many reasons to believe that macroeconomic relationships are not stable over time. An entire branch of the econometrics literature is devoted to tests for structural breaks, that is, tests for changes in the parameter values. However, to be able to predict future changes in the parameter values, it is important to model the time variation in the parameters. The state variable \(s_t\) corresponds now to the vector of regression parameters \(\beta_t\). It is often assumed that the regression coefficients follow univariate random walks of the form

\begin{eqnarray} \beta_{j,t} = \beta_{j,t-1} + \eta_{j,t} \end{eqnarray}

Hence, the only unknown parameters are \(var[u_t]\) and \(var[\eta_{j,t}]\). The Kalman filter can provide us with a sequence of estimates for the time varying coefficients, \[ \{ \mathbb{E}[\beta_t|Y^t,X^t] \}_{t=1}^T, \quad \{ var[\beta_t|Y^t,X^t] \}_{t=1}^T, \] and the likelihood of the data conditional on \(\mathbb{E}[u_t u_t']\), \(\mathbb{E}[\eta_t \eta_t']\), \(T\) and \(c\). \(\Box\)
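The random walk special case can be sketched directly, since the prediction step leaves the coefficient mean unchanged. The function name and the moderately diffuse prior on \(\beta_0\) are illustrative assumptions:

```python
import numpy as np

def tvp_filter(y, X, sig_u2, sig_eta2):
    """Kalman filter for y_t = x_t' beta_t + u_t with independent
    random walk coefficients beta_t = beta_{t-1} + eta_t."""
    Tn, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0            # prior variance on beta_0 (assumption)
    Q = np.eye(k) * sig_eta2
    means, variances = [], []
    for t in range(Tn):
        x = X[t]
        # prediction: random walk, so only the variance changes
        P = P + Q
        # updating, with Z_t = x_t'
        F = x @ P @ x + sig_u2
        nu = y[t] - x @ beta
        K = P @ x / F
        beta = beta + K * nu
        P = P - np.outer(K, x @ P)
        means.append(beta.copy())
        variances.append(P.copy())
    return means, variances
```

With \(var[\eta_{j,t}] = 0\) the filter collapses to recursive least squares, so the filtered coefficients converge to the constant-parameter estimates.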

Bibliography

An, S., and F. Schorfheide (2007): “Bayesian Analysis of DSGE Models,” Econometric Reviews, 26, 113–172.
Christiano, L. J., M. Eichenbaum, and C. L. Evans (2005): “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy, 113, 1–45.
Fernandez-Villaverde, J., J. Rubio-Ramirez, and F. Schorfheide (2016): “Solution and Estimation Methods for DSGE Models,” Handbook of Macroeconomics, 2, 527–724.
Galí, J. (2008): Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework, 2nd ed. Princeton University Press.
Hamilton, J. (1994): Time Series Analysis, Princeton, New Jersey: Princeton University Press.
Harvey, A. C. (1991): Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge, United Kingdom: Cambridge University Press.
Herbst, E., and F. Schorfheide (2015): Bayesian Estimation of DSGE Models, Princeton: Princeton University Press.
Ireland, P. N. (2004): “A Method for Taking Models to the Data,” Journal of Economic Dynamics and Control, 28, 1205–1226.
Sims, C. A. (2002): “Solving Linear Rational Expectations Models,” Computational Economics, 20, 1–20.
Smets, F., and R. Wouters (2007): “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach,” American Economic Review, 97, 586–608.
Woodford, M. (2003): Interest and Prices, Princeton University Press.