State-space representation:
\begin{align} y_t = \left[\begin{array}{cc} 1 & 1 \end{array}\right] s_t, \quad s_t = \left[ \begin{array}{cc} {\color{blue} \phi_1} & 0 \\ {\color{blue} \phi_3} & {\color{blue} \phi_2} \end{array} \right] s_{t-1} + \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \epsilon_t. \label{eq_exss} \end{align}
The state-space model can be re-written as an ARMA(2,1) process \[ (1- {\color{blue} \phi_1} L)(1-{\color{blue} \phi_2} L) y_t = (1-({\color{blue} \phi_2} - {\color{blue} \phi_3} )L) \epsilon_t. \]
Relationship between state-space parameters \({\color{blue} \phi}\) and structural parameters \({\color{red} \theta}\): \[ {\color{blue} \phi_1} = {\color{red} \theta_1^2}, \quad {\color{blue} \phi_2} = {\color{red} (1-\theta_1^2) }, \quad {\color{blue} \phi_3 - \phi_2} = {\color{red} - \theta_1 \theta_2 }. \]
\begin{beamerboxesrounded}{Model} Reduced form: $ (1- {\color{blue} \phi_1} L)(1-{\color{blue} \phi_2} L) y_t = (1-({\color{blue} \phi_2} - {\color{blue} \phi_3} )L) \epsilon_t. $
\vspace*{0.5cm}
Relationship of ${\color{blue} \phi}$ and ${\color{red} \theta}$: $ {\color{blue} \phi_1} = {\color{red} \theta_1^2}, \quad {\color{blue} \phi_2} = {\color{red} (1-\theta_1^2) }, \quad {\color{blue} \phi_3 - \phi_2} = {\color{red} - \theta_1 \theta_2 }. $ \end{beamerboxesrounded}
Reduced form:
\( (1- {\color{blue} \phi_1} L)(1-{\color{blue} \phi_2} L) y_t
= (1-({\color{blue} \phi_2} - {\color{blue} \phi_3} )L) \epsilon_t. \)
Relationship of \({\color{blue} \phi}\) and \({\color{red} \theta}\):
\( {\color{blue} \phi_1} = {\color{red} \theta_1^2}, \quad
{\color{blue} \phi_2} = {\color{red} (1-\theta_1^2) }, \quad
{\color{blue} \phi_3 - \phi_2} = {\color{red} - \theta_1 \theta_2 }.
\)
\begin{center} \includegraphics[width=2in]{static/ss_weakid} \end{center}
Reduced form:
\( (1- {\color{blue} \phi_1} L)(1-{\color{blue} \phi_2} L) y_t
= (1-({\color{blue} \phi_2} - {\color{blue} \phi_3} )L) \epsilon_t. \)
Relationship of \({\color{blue} \phi}\) and \({\color{red} \theta}\):
\( {\color{blue} \phi_1} = {\color{red} \theta_1^2}, \quad
{\color{blue} \phi_2} = {\color{red} (1-\theta_1^2) }, \quad
{\color{blue} \phi_3 - \phi_2} = {\color{red} - \theta_1 \theta_2 }.
\)
\begin{center} \includegraphics[width=2in]{static/ss_noglobalid5.pdf} \end{center}
Posterior expectations can be approximated by Monte Carlo averages.
If we have draws \(\{ \theta^i\}_{i=1}^N\) from \(p(\theta|Y)\), then (under some regularity conditions) \[ \frac{1}{N} \sum_{i=1}^N h(\theta^i) \stackrel{a.s.}{\longrightarrow} \mathbb{E}[h(\theta)|Y]. \]
``Standard'' approach in the DSGE model literature (Schorfheide, 2000; Otrok, 2001): use Markov chain Monte Carlo (MCMC) methods to generate a sequence of serially correlated draws \(\{ \theta^i\}_{i=1}^N\).
Unfortunately, ``standard'' MCMC can be quite inaccurate, especially in medium- and large-scale DSGE models.
If \(\theta^i\)’s are draws from \(g(\cdot)\) then \[ \mathbb{E}_\pi[h] \approx \frac{ \frac{1}{N} \sum_{i=1}^N h(\theta^i) w(\theta^i)}{ \frac{1}{N} \sum_{i=1}^N w(\theta^i) }, \quad w(\theta) = \frac{f(\theta)}{g(\theta)}. \]
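As an illustration, here is a minimal Python sketch of this self-normalized importance sampling estimator; the callables `log_f`, `log_g`, `h` and the array `draws` are placeholders, not objects from the paper.

```python
import numpy as np

def importance_sampling_estimate(h, log_f, log_g, draws):
    """Self-normalized IS approximation of E_pi[h] from draws theta^i ~ g."""
    # Unnormalized log weights: log w(theta) = log f(theta) - log g(theta).
    log_w = np.array([log_f(th) - log_g(th) for th in draws])
    w = np.exp(log_w - log_w.max())        # stabilize before exponentiating
    h_vals = np.array([h(th) for th in draws])
    # Ratio of Monte Carlo averages; the stabilizing constant cancels.
    return np.sum(w * h_vals) / np.sum(w)
```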
\begin{center} \includegraphics[width=4in]{static/is.pdf} \end{center}
Our state-space model: \[ y_t = [1~ 1]s_t, \quad s_t = \left[\begin{array}{cc}\theta^2_1 & 0 \\ (1-\theta_1^2) - \theta_1 \theta_2 & (1-\theta_1^2)\end{array}\right]s_{t-1} + \left[\begin{array}{c} 1 \\ 0\end{array}\right]\epsilon_t. \]
Innovation: \(\epsilon_t \sim iid N(0,1)\).
Prior: uniform on the square \(0\le \theta_1 \le 1\) and \(0 \le \theta_2 \le 1\).
Simulate \(T = 200\) observations given \(\theta = [0.45, 0.45]'\), which is observationally equivalent to \(\theta = [0.89, 0.22]'\).
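A small simulation sketch of this experiment, assuming the state-space form above; the function name, seeding, and variable names are my own choices.

```python
import numpy as np

def simulate_ss(theta1, theta2, T=200, seed=0):
    """Simulate y_1,...,y_T from the two-state example model."""
    rng = np.random.default_rng(seed)
    # Transition matrix implied by (theta1, theta2).
    Phi = np.array([[theta1**2, 0.0],
                    [(1 - theta1**2) - theta1 * theta2, 1 - theta1**2]])
    R = np.array([1.0, 0.0])
    s = np.zeros(2)
    y = np.empty(T)
    for t in range(T):
        s = Phi @ s + R * rng.standard_normal()   # eps_t ~ iid N(0,1)
        y[t] = s.sum()                            # y_t = [1 1] s_t
    return y

y = simulate_ss(0.45, 0.45)   # same likelihood as theta = [0.89, 0.22]
```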
\includegraphics[width=.8\linewidth]{static/smc_ss_density.pdf} \[ \pi_n(\theta) = \frac{{\color{blue}[p(Y|\theta)]^{\phi_n}} p(\theta)}{\int {\color{blue}[p(Y|\theta)]^{\phi_n}} p(\theta) d\theta} = \frac{f_n(\theta)}{Z_n}, \quad \phi_n = \left( \frac{n}{N_\phi} \right)^\lambda \]
\begin{center} \includegraphics[width=4in]{static/smc_ss_contour.pdf} \end{center}
\begin{center} \includegraphics[width=3in]{static/smc_evolution_of_particles.pdf} \end{center}
\(\pi_n(\theta)\) is represented by a swarm of particles \(\{ \theta_n^i,W_n^i \}_{i=1}^N\):
\[ \bar{h}_{n,N} = \frac{1}{N} \sum_{i=1}^N W_n^i h(\theta_n^i) \stackrel{a.s.}{\longrightarrow} \mathbb{E}_{\pi_n}[h(\theta_n)]. \]
C is Correction; S is Selection; and M is Mutation.
Correction. Reweight the particles from stage \(n-1\) by defining the incremental weights
\begin{equation} \tilde w_{n}^{i} = [p(Y|\theta^{i}_{n-1})]^{\phi_{n} - \phi_{n-1}} \label{eq_smcdeftildew} \end{equation}
and the normalized weights
\begin{equation} \tilde{W}^{i}_{n} = \frac{\tilde w_n^{i} W^{i}_{n-1}}{\frac{1}{N} \sum_{i=1}^N \tilde w_n^{i} W^{i}_{n-1}}, \quad i = 1,\ldots,N. \end{equation}
An approximation of \(\mathbb{E}_{\pi_n}[h(\theta)]\) is given by
\begin{equation} \tilde{h}_{n,N} = \frac{1}{N} \sum_{i=1}^N \tilde W_n^{i} h(\theta_{n-1}^i). \label{eq_deftildeh} \end{equation}
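A minimal sketch of the correction step, assuming `loglik` stores the log-likelihood values \(\ln p(Y|\theta_{n-1}^i)\) from the previous stage (array and function names are mine).

```python
import numpy as np

def correction_step(loglik, W_prev, phi_n, phi_prev):
    """Reweight stage n-1 particles; returns normalized weights (mean one)."""
    # Incremental log weights: (phi_n - phi_{n-1}) * ln p(Y | theta_{n-1}^i).
    log_w_tilde = (phi_n - phi_prev) * loglik
    w_tilde = np.exp(log_w_tilde - log_w_tilde.max())   # stabilized
    W_tilde = w_tilde * W_prev
    return W_tilde / W_tilde.mean()
```

The estimate \(\tilde{h}_{n,N}\) is then the mean of the normalized weights times the function values, e.g. `np.mean(W_tilde * h_vals)`.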
Selection. (Optional Resampling)
Let \(\{ \hat{\theta}_n^i \}_{i=1}^N\) denote \(N\) \(iid\) draws from a multinomial distribution
characterized by support points and weights \(\{\theta_{n-1}^i,\tilde{W}_n^i \}_{i=1}^N\)
and set \(W_n^i=1\).
An approximation of \(\mathbb{E}_{\pi_n}[h(\theta)]\) is given by
\begin{equation} \hat{h}_{n,N} = \frac{1}{N} \sum_{i=1}^N W^i_n h(\hat{\theta}_{n}^i). \label{eq_defhath} \end{equation}
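A sketch of the selection step with multinomial resampling, as described above; systematic or residual resampling would also work but is not shown.

```python
import numpy as np

def selection_step(particles, W_tilde, rng):
    """Draw N indices with probabilities proportional to the weights."""
    N = len(W_tilde)
    idx = rng.choice(N, size=N, replace=True, p=W_tilde / W_tilde.sum())
    theta_hat = particles[idx]      # resampled particle values
    W = np.ones(N)                  # weights reset to one after resampling
    return theta_hat, W
```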
Mutation. Propagate the particles \(\{ \hat{\theta}_n^i,W_n^i \}\) via \(N_{MH}\) steps of an MH algorithm with transition density \(\theta_n^i \sim K_n(\theta_n| \hat{\theta}_n^i; \zeta_n)\) and stationary distribution \(\pi_n(\theta)\). An approximation of \(\mathbb{E}_{\pi_n}[h(\theta)]\) is given by
\begin{equation} \bar{h}_{n,N} = \frac{1}{N} \sum_{i=1}^N h(\theta_{n}^i) W^i_n. \label{eq_defbarh} \end{equation}
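A sketch of the mutation step using a single random-walk MH pass per particle (one block, \(N_{MH}=1\)); `log_post_n` is a placeholder callable for the stage-\(n\) tempered log posterior \(\phi_n \ln p(Y|\theta) + \ln p(\theta)\).

```python
import numpy as np

def mutation_step(theta_hat, log_post_n, c, Sigma, rng):
    """One RWMH step per particle with proposal N(theta, c^2 * Sigma)."""
    N, d = theta_hat.shape
    chol = np.linalg.cholesky(Sigma)
    theta = theta_hat.copy()
    for i in range(N):
        prop = theta[i] + c * chol @ rng.standard_normal(d)
        log_alpha = log_post_n(prop) - log_post_n(theta[i])
        if np.log(rng.uniform()) < log_alpha:   # accept with prob min(1, alpha)
            theta[i] = prop
    return theta
```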
The algorithm thus cycles through three steps at every stage: the Correction Step (reweighting of the particles), the Selection Step (the resampling of the particles), and the Mutation Step (propagation of the particles via MH moves).
Goal: strong law of large numbers (SLLN) and central limit theorem (CLT) as \(N \longrightarrow \infty\) for every iteration \(n=1,\ldots,N_\phi\).
Regularity conditions:
Idea of proof (Chopin, 2004): proceed recursively. Suppose that at stage \(n-1\)
\[ \sqrt{N} \left( \frac{1}{N} \sum_{i=1}^N h(\theta_{n-1}^i) W_{n-1}^i - \mathbb{E}_{\pi_{n-1}}[h(\theta)] \right) \Longrightarrow N\big(0,\Omega_{n-1}(h)\big); \]
then show that at stage \(n\)
\[ \sqrt{N} \left( \frac{1}{N} \sum_{i=1}^N h(\theta_{n}^i) W_{n}^i - \mathbb{E}_{\pi_{n}}[h(\theta)] \right) \Longrightarrow N\big(0,\Omega_{n}(h)\big). \]
Suppose that the \(n-1\) approximation (with normalized weights) yields \[ \sqrt{N} \left( \frac{1}{N} \sum_{i=1}^N h(\theta_{n-1}^i) W_{n-1}^i - \mathbb{E}_{\pi_{n-1}}[h(\theta)] \right) \Longrightarrow N\big(0,\Omega_{n-1}(h)\big) \]
Then
\begin{eqnarray*} \hspace{-0.70in} \sqrt{N} \left( \frac{ \frac{1}{N} \sum_{i=1}^N h(\theta_{n-1}^i) {\color{red} [p(Y|\theta_{n-1}^i)]^{\phi_n - \phi_{n-1}} } W_{n-1}^i}{ \frac{1}{N} \sum_{i=1}^N {\color{red} [p(Y|\theta_{n-1}^i)]^{\phi_n - \phi_{n-1}} } W_{n-1}^i} - \mathbb{E}_{\pi_{n}}[h(\theta)] \right) \Longrightarrow& N\big(0, \tilde{\Omega}_n(h) \big) \end{eqnarray*}
where \[ \hspace{-0.5in} \tilde{\Omega}_n(h) = \Omega_{n-1}\big( {\color{red} v_{n-1}(\theta)} (h- \mathbb{E}_{\pi_n}[h] ) \big) \quad {\color{red} v_{n-1}(\theta) = [p(Y|\theta)]^{\phi_n - \phi_{n-1}} \frac{Z_{n-1}}{Z_n} } \]
This step relies on likelihood evaluations from iteration \(n-1\) that are already stored in memory.
We are using the Markov transition kernel \(K_n(\theta|\hat{\theta})\) to transform draws \(\hat{\theta}_n^i\) into draws \(\theta_n^i\).
To preserve the distribution of the \(\hat{\theta}_n^i\)’s it has to be the case that \[ {\color{blue} \pi_n(\theta)} = \int K_n(\theta|\hat{\theta}) {\color{red} \pi_n(\hat{\theta})} d \hat{\theta}. \]
It can be shown that the overall asymptotic variance after the mutation step is the sum of two terms: the variance of the conditional expectation of \(h\) under the transition kernel \(K_n\) (inherited from the selection step) and the expected conditional variance of \(h\) generated by the MH moves.
This step is embarrassingly parallel and well suited to single-instruction, multiple-data (SIMD) processing.
Transition kernel \(K_n(\theta|\hat{\theta}_{n-1};\zeta_n)\): generated by running \(N_{MH}\) steps of a Metropolis-Hastings algorithm.
Lessons from DSGE model MCMC:
Blocking: Partition the parameter vector \(\theta_n\) into \(N_{blocks}\) equally sized blocks, denoted by \(\theta_{n,b}\), \(b=1,\ldots,N_{blocks}\). (We generate the blocks for \(n=1,\ldots,N_\phi\) randomly prior to running the SMC algorithm.)
Example: random walk proposal density:
\begin{eqnarray*} \vartheta_b | (\theta^i_{n,b,m-1}, \theta^i_{n,-b,m}, \Sigma^*_{n,b}) &\sim& {\color{blue} N \bigg( \theta^i_{n,b,m-1}, c_n^2 \Sigma^*_{n,b} \bigg)}. \end{eqnarray*}
Let \(\Sigma_n^*=\mathbb{V}_{\pi_n}[\theta]\).
Adjust scaling factor according to \[ c_{n} = c_{n-1} f \big( 1-R_{n-1}(\zeta_{n-1}) \big), \] where \(R_{n-1}(\cdot)\) is population rejection rate from iteration \(n-1\) and \[ f(x) = 0.95 + 0.10 \frac{e^{16(x - 0.25)}}{1 + e^{16(x - 0.25)}}. \]
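A sketch of this adaptation rule; `rejection_rate_prev` is the population rejection rate \(R_{n-1}\) from the previous stage.

```python
import numpy as np

def adapt_scale(c_prev, rejection_rate_prev):
    """Update the RWMH scaling factor: c_n = c_{n-1} * f(1 - R_{n-1})."""
    x = 1.0 - rejection_rate_prev          # previous acceptance rate
    f = 0.95 + 0.10 * np.exp(16 * (x - 0.25)) / (1 + np.exp(16 * (x - 0.25)))
    return c_prev * f
```

At an acceptance rate of 0.25 the factor equals one, so \(c_n\) is left unchanged; higher (lower) acceptance rates scale \(c_n\) up (down).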
\begin{center} \includegraphics[width=2in]{static/smc_ss.pdf} \end{center}
Notes: The dashed line in the top panel indicates the target acceptance rate of 0.25.
\begin{center} \includegraphics[width=3in]{static/smc_clt_nphi100.pdf} \end{center}
Notes: The figure shows \(N \mathbb{V}[\bar\theta_j]\) for each parameter as a function of the number of particles \(N\). \(\mathbb{V}[\bar\theta_j]\) is computed based on \(N_{run}=1,000\) runs of the SMC algorithm with \(N_\phi=100\). The width of the bands is \((2\cdot 1.96) \sqrt{3/N_{run}} (N \mathbb{V}[\bar\theta_j])\).
How I do it – distributed memory parallelization in Fortran
Could be better with more programming.
The extent to which HPC can help us is determined by the fraction of the algorithm that can be executed in parallel rather than serially.
Suppose a fraction \(B\in[0,1]\) of a particular algorithm must be executed serially.
Amdahl's Law: with \(N\) processors, the runtime falls to the fraction \[ R(N) = B + \frac{1}{N}(1-B) \] of the single-processor runtime, so the theoretical speedup is \(1/R(N)\).
Question: What is \(B\) for our SMC algorithm?
Answer: about 0.1!
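For example, with \(B = 0.1\) and \(N = 24\) processors, \(R(24) = 0.1 + 0.9/24 \approx 0.14\): runtime drops to roughly 14% of the serial runtime, a speedup of about 7. Even as \(N \to \infty\), the speedup is capped at \(1/B = 10\).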
\includegraphics[width=4.5in]{static/amdahls_law}
\begin{center} \includegraphics[width=3in]{static/smc_lambda.pdf} \end{center}
Notes: The figure depicts hairs of \(\mbox{InEff}_N[\bar{\theta}]\) as a function of \(\lambda\). The inefficiency factors are computed based on \(N_{run}=50\) runs of the SMC algorithm. Each hair corresponds to a DSGE model parameter.
\begin{center} \includegraphics[width=3in]{static/smc_nphi_vs_npart.pdf} \end{center}
Notes: Plot of \(\mathbb{V}[\bar{\theta}] / \mathbb{V}_\pi[\theta]\) for a specific configuration of the SMC algorithm. The inefficiency factors are computed based on \(N_{run}=50\) runs of the SMC algorithm. \(N_{blocks}=1\), \(\lambda=2\), \(N_{MH}=1\).
\begin{center} \includegraphics[width=3in]{static/smc_nblocks_vs_npart.pdf} \end{center}
Notes: Plot of \(\mathbb{V}[\bar{\theta}] / \mathbb{V}_\pi[\theta]\) for a specific configuration of the SMC algorithm. The inefficiency factors are computed based on \(N_{run}=50\) runs of the SMC algorithm. \(N_{\phi}=100\), \(\lambda=2\), \(N_{MH}=1\).
Recall \(\tilde{w}^i_n = [p(Y|\theta_{n-1}^i)]^{\phi_n-\phi_{n-1}}\).
Then
\begin{eqnarray*} \frac{1}{N} \sum_{i=1}^N \tilde{w}^i_n W_{n-1}^i &\approx& \int [p(Y|\theta)]^{\phi_n-\phi_{n-1} } \frac{ [p(Y|\theta)]^{\phi_{n-1}} p(\theta)}{\int [p(Y|\theta)]^{\phi_{n-1}} p(\theta)d\theta} d\theta \\ &=& \frac{ \int [p(Y|\theta)]^{\phi_n} p(\theta) d\theta}{\int [p(Y|\theta)]^{\phi_{n-1}} p(\theta) d\theta } \end{eqnarray*}
Thus, \[ \prod_{n=1}^{N_\phi} \left(\frac{1}{N} \sum_{i=1}^N \tilde{w}^i_n W_{n-1}^i \right) \approx \int p(Y|\theta)p(\theta)d\theta . \]
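A sketch of the corresponding bookkeeping: accumulate the log marginal data density stage by stage from the average incremental weights (array names are mine).

```python
import numpy as np

def update_log_mdd(log_mdd, w_tilde, W_prev):
    """Add ln( (1/N) sum_i w_tilde^i W_{n-1}^i ) to the running total."""
    return log_mdd + np.log(np.mean(w_tilde * W_prev))

# Starting from log_mdd = 0 and calling this once per stage n = 1,...,N_phi
# yields an approximation of ln p(Y).
```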
\begin{center}
\begin{tabular}{l@{\hspace*{0.5cm}}cc@{\hspace*{0.5cm}}cc}
\hline\hline
 & \multicolumn{2}{c}{$N_{\phi}=100$} & \multicolumn{2}{c}{$N_{\phi}=400$} \\
$N$ & Mean($\ln \hat p(Y)$) & SD($\ln \hat p(Y)$) & Mean($\ln \hat p(Y)$) & SD($\ln \hat p(Y)$) \\
\hline
500   & -352.19 & (3.18) & -346.12 & (0.20) \\
1,000 & -349.19 & (1.98) & -346.17 & (0.14) \\
2,000 & -348.57 & (1.65) & -346.16 & (0.12) \\
4,000 & -347.74 & (0.92) & -346.16 & (0.07) \\
\hline
\end{tabular}
\end{center}
Notes: Table shows mean and standard deviation of log marginal data density estimates as a function of the number of particles \(N\) computed over \(N_{run}=50\) runs of the SMC sampler with \(N_{blocks}=4\), \(\lambda=2\), and \(N_{MH}=1\).
\begin{align} \mbox{\color{red}{Likelihood Tempering:} } p_n(Y|\theta) = [p(Y|\theta)]^{\phi_n}, \quad \phi_n \uparrow 1. \label{eq:tempering.lh} \end{align}
Can easily control how “close” consecutive posteriors are to one another.
Need to pick \(\phi_n\) (though we have some experience).
\begin{align} \mbox{\color{blue}{Data Tempering:} } p_n(Y|\theta) = p(y_{1: \lfloor \phi_n T \rfloor} | \theta), \quad \phi_n \uparrow 1. \label{eq:tempering.data} \end{align}
Arguably more natural for time series applications.
Typically produces more inefficient samples of \(\theta\).
Cai et al. (2019) generalize both likelihood and data tempering!
Imagine one has draws from the posterior
\begin{equation} \tilde{\pi}(\theta) \propto \tilde{p}(\tilde{Y}|\theta) p(\theta), \end{equation}
where the posterior \(\tilde{\pi}(\theta)\) differs from the posterior \(\pi(\theta)\) because the sample (additional observations or data revisions) or the likelihood function itself differs.
Define the stage-\(n\) likelihood function:
\begin{equation} p_n(Y|\theta) = [p(Y|\theta)]^{\phi_n}[\tilde{p}(\tilde{Y}|\theta)]^{1-\phi_n}, \quad \phi_n \uparrow 1. \label{eq:tempering.general} \end{equation}
{\color{red} Generalized Data Tempering}: an SMC sampler that uses this likelihood sequence.
\[p_n(Y|\theta) = [p(Y|\theta)]^{\phi_n}[\tilde{p}(\tilde{Y}|\theta)]^{1-\phi_n} \]
With \(\tilde{p}(\cdot)=1\): identical to likelihood tempering.
With \(\tilde{p}(\cdot) = p(\cdot)\), \(Y=y_{1: \lfloor \phi_m T \rfloor}\), and \(\tilde{Y}=y_{1: \lfloor \phi_{m-1} T \rfloor}\):
generalizes data tempering by allowing for a gradual transition between \(y_{1: \lfloor \phi_{m-1} T \rfloor}\) and \(y_{1: \lfloor \phi_m T \rfloor}\).
By allowing \(Y\) to differ from \(\tilde{Y}\): incorporate data revisions between time \(\lfloor \phi_{m-1} T \rfloor\) and \(\lfloor \phi_m T \rfloor\).
\(p(\cdot) \ne \tilde p(\cdot)\): one can transition between the posterior distribution of two models with the same parameters.
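A minimal sketch of the stage-\(n\) log-likelihood used in generalized tempering; `loglik` and `loglik_tilde` are placeholder callables returning \(\ln p(Y|\theta)\) and \(\ln \tilde p(\tilde Y|\theta)\).

```python
def log_lik_stage_n(theta, phi_n, loglik, loglik_tilde):
    """ln p_n(Y|theta) = phi_n * ln p(Y|theta) + (1 - phi_n) * ln p~(Y~|theta)."""
    return phi_n * loglik(theta) + (1.0 - phi_n) * loglik_tilde(theta)
```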
The SMC algorithm has a number of tuning parameters:
This paper: choose \(\phi_n\) adaptively, with no fixed \(N_{\phi}\).
Key idea: choose \(\phi_n\) to target a desired level \(\widehat{ESS}_n^*\).
The closer the desired \(\widehat{ESS}_n^*\) is to the previous \(\widehat{ESS}_{n-1}\), the smaller the increment \(\phi_n - \phi_{n-1}\).
\begin{align*} w^i(\phi) = [p(Y|\theta^i_{n-1})]^{\phi - \phi_{n-1}}, \quad W^i(\phi) = \frac{w^i(\phi) W^i_{n-1}}{\frac{1}{N}\sum\limits_{i=1}^N w^i(\phi) W^i_{n-1}}, \\ \widehat{ESS}(\phi) = N \big/ \left( \frac{1}{N} \sum_{i=1}^N (W^i(\phi))^2\right) \end{align*}
We will choose \(\phi\) to target a desired level of ESS:
\begin{align} f(\phi) = \widehat{ESS}(\phi) - \alpha \widehat{ESS}_{n-1} = 0, \label{eq:adaptive.alpha} \end{align}
where \(\alpha\) (\(\le 1\)) is a tuning constant.
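A sketch of the adaptive choice of \(\phi_n\): solve \(\widehat{ESS}(\phi) = \alpha\, \widehat{ESS}_{n-1}\) numerically on \([\phi_{n-1}, 1]\), setting \(\phi_n = 1\) when no interior solution exists (the bisection root-finder is my own choice).

```python
import numpy as np

def choose_phi(loglik, W_prev, phi_prev, ess_prev, alpha, tol=1e-6):
    """Pick phi_n so that ESS(phi_n) ~= alpha * ESS_{n-1}; loglik = ln p(Y|theta^i)."""
    def ess(phi):
        log_w = (phi - phi_prev) * loglik          # incremental log weights
        w = np.exp(log_w - log_w.max()) * W_prev
        W = w / w.mean()                           # normalized weights
        return len(W) / np.mean(W**2)

    if ess(1.0) >= alpha * ess_prev:   # even phi = 1 keeps the ESS above target
        return 1.0
    lo, hi = phi_prev, 1.0
    while hi - lo > tol:               # bisection: ESS is decreasing in phi
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= alpha * ess_prev else (lo, mid)
    return 0.5 * (lo + hi)
```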
In the time-honored tradition of macroeconometrics, let's estimate the Smets and Wouters (2007) model.
Compare the accuracy (and precision) of the SMC algorithm against its speed:
\(\alpha\in \{0.9, 0.95, 0.97, 0.98\}\).
Fixed tempering schedule, \(N_{\phi} = 500\), from Herbst and Schorfheide (2014).
Measure of accuracy: standard deviation of the log MDD, computed across 50 runs of the SMC algorithm.
Measure of speed: average runtime across these runs.
\begin{center} \includegraphics[width=3in]{figure_1_StdVsTime.pdf} \end{center}
\begin{center} \includegraphics[width=3in]{figure_2_tempering_scheds_all.pdf} \end{center}
Time-accuracy curve is convex:
Fixed schedule is slightly inefficient.
All of the adaptive schedules are convex.
Very little information (relative to the fixed schedule) is added to the likelihood function initially.
Towards the end, a lot of information is added.
Scenario 1:
Let \(Y=y_{1:T}\) and \(\tilde{Y}=\tilde{y}_{1:T_1}\), and define the stage-\(n\) posterior: \[ \pi_{n}(\theta) = \frac{p(y_{1:T} | \theta)^{\phi_{n}} p(\tilde{y}_{1:T_1} | \theta)^{1-\phi_{n}} p(\theta) }{\int p(y_{1:T} | \theta)^{\phi_{n}} p(\tilde{y}_{1:T_1} | \theta)^{1-\phi_{n}}p(\theta) d\theta}. \label{eq:lttpost_2} \] The incremental weights are given by \[ \tilde{w}^i_{n}(\theta) = p(y_{1:T} | \theta)^{\phi_{n}-\phi_{n-1}} p(\tilde{y}_{1:T_1} | \theta)^{\phi_{n-1}-\phi_{n}}. \] Define the Conditional Marginal Data Density (CMDD):
\begin{align} \text{CMDD}_{2|1} =\prod_{n=1}^{N_\phi} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \tilde{w}^i_{n} W^i_{n-1} \right) \label{eq:CMDD1} \end{align}
Thus:
\begin{align} \text{CMDD}_{2|1} \approx \frac{\int p(y_{1:T} | \theta) p(\theta) d\theta}{\int p(\tilde{y}_{1:T_1} | \theta) p(\theta) d\theta} = \frac{ p(y_{1:T})}{p(\tilde{y}_{1:T_1})}. \label{eq:CMDD2} \end{align}
\begin{center} \includegraphics[width=3in]{figure_3_all_StdVsTime.pdf} \end{center}
[Kohn2010] Kohn, Giordani & Strid, Adaptive Hybrid Metropolis-Hastings Samplers for DSGE Models, Working Paper (2010).
[ChibR08] Chib & Ramamurthy, Tailored Randomized Block MCMC Methods with Application to DSGE Models, Journal of Econometrics, 155(1), 19-38 (2010).
[curdia_reis2010] Curdia & Reis, Correlated Disturbances and U.S. Business Cycles, Manuscript, Columbia University and FRB New York (2010).
[Herbst2011a] Herbst, Gradient and Hessian-based MCMC for DSGE Models, Unpublished Manuscript, Federal Reserve Board (2012).
[Chopin2004a] Chopin, A Sequential Particle Filter for Static Models, Biometrika, 89(3), 539-551 (2004).
[DelMoraletal2006] Del Moral, Doucet & Jasra, Sequential Monte Carlo Samplers, Journal of the Royal Statistical Society, Series B, 68(3), 411-436 (2006).
[Creal2007] Creal, Sequential Monte Carlo Samplers for Bayesian DSGE Models, Unpublished Manuscript, Vrije Universiteit (2007).
[DurhamGeweke2011] Durham & Geweke, Massively Parallel Sequential Monte Carlo Bayesian Inference, Unpublished Manuscript (2011).
[Cai_2019] Cai, Del Negro, Herbst, Matlin, Sarfati & Schorfheide, Online Estimation of DSGE Models, SSRN Electronic Journal (2019).
[Smets2007] Smets & Wouters, Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach, American Economic Review, 97, 586-608 (2007).
[Herbst_2014] Herbst & Schorfheide, Sequential Monte Carlo Sampling for DSGE Models, Journal of Applied Econometrics, 29(7), 1073-1098 (2014).