\documentclass[final,notitlepage]{article}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{latexsym}
\setcounter{MaxMatrixCols}{10}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\input{tcilatex}
\textwidth=16.0cm
\oddsidemargin=0cm \evensidemargin=0cm
\topmargin=-20pt
\numberwithin{equation}{section}
\baselineskip=100pt
\textheight=21cm
\def\baselinestretch{1.2}
\begin{document}
\title{Online Supplement to \textquotedblleft Robust Forecast
Comparison\textquotedblright }
\author{Sainan Jin$^{1}$, Valentina Corradi$^{2}$ and Norman R. Swanson$%
^{3}$ \\
$^{1}$Singapore Management University\\
$^{2}$University of Surrey\\
$^{3}$Rutgers University}
\date{September 2016}
\maketitle
\section{Introduction}
The material contained in this supplement includes: (i) tabulated findings
from the Monte Carlo experiments that are omitted from the main paper for
the sake of brevity; and (ii) proofs of all of the lemmas used in the main
paper.
\section{Additional Monte Carlo Findings}
\textbf{Pairwise comparisons: stationary case}
\smallskip
In this part of the supplement, we first re-examine three data
generating processes (DGPs) discussed in the main paper. In the
three DGPs, we allow the two forecast errors to be dependent on each other
with non-independent observations and generate $e_{kt}$ according to
\begin{equation*}
e_{kt}=(1-\lambda )(\sqrt{\rho }\widetilde{e}_{0t}+\sqrt{1-\rho }\widetilde{e%
}_{kt})+\lambda e_{k,t-1},
\end{equation*}%
where $(\widetilde{e}_{0t},\widetilde{e}_{1t},\widetilde{e}_{2t})$ are $%
i.i.d.$ but have different marginals in different DGPs. The parameters $%
\lambda $ and $\rho $ determine the mutual dependence of $e_{1t}$ and $e_{2t}
$ and their autocorrelations. For convenience, the three DGPs are
reproduced here:
\noindent DGP2: $(\widetilde{e}_{0t},\widetilde{e}_{1t},\widetilde{e}_{2t})$
are $i.i.d.$ $N(0,I_{3});$

\noindent DGP4: $\widetilde{e}_{1t}\sim i.i.d.$ $N(0,1.5)$ and $\widetilde{e}%
_{kt}\sim i.i.d.$ $N(0,1)$ for $k=0$ and $2;$

\noindent DGP6: $\widetilde{e}_{0t}\sim i.i.d.$ Beta$(1,1)$, $\widetilde{e}%
_{1t}\sim i.i.d.$ Beta$(1,2)$, and $\widetilde{e}_{2t}\sim i.i.d.$ Beta$%
(2,4)$, where all are recentered around their population means, i.e., $1/2,$
$1/3$ and $1/3,$ respectively.
The results for the case where $\rho =\lambda =0.3$ are shown in Table 1 of
the main paper. Simulation results for the DGPs with different values of
$\rho $ and $\lambda $ are reported in Tables 1-4 of this supplement. The
main entries in the tables are rejection frequencies.
Qualitatively similar results obtain when DGPs are specified using other
moderate values of $\rho $\ and $\lambda $ (e.g., $\rho =\lambda =0.5,$ $%
\rho =0.3,$ $\lambda =0.5,$ and $\rho =0.5$, $\lambda =0.3$). The tests
perform poorly when the forecast errors are highly dependent and strongly
autocorrelated (e.g., $\rho =\lambda =0.8$). In most cases, our tests have
worse power performance than the DM\ test, particularly for small sample
sizes; but our tests have better size properties, especially when both $\rho
$ and $\lambda $\ are large.
We also conduct Monte Carlo simulations for DGPs with parameter estimation
error. In these experiments, the DGPs (DGP PEE1--DGP PEE16, as presented in
the table below) are the same as those examined in Corradi and Swanson
(2007). In this setup, the benchmark model (denoted by DGP PEE1 below) is an
AR(1). The benchmark model is also called the \textquotedblleft
small\textquotedblright\ model. Note that the benchmark or \textquotedblleft
small\textquotedblright\ model in our test statistic calculations is always
estimated as $y_{t}=\alpha +\beta y_{t-1}+\epsilon _{t}$; the
\textquotedblleft big\textquotedblright\ model is the same, but with $%
x_{t-1}$ added as an additional regressor. The null hypothesis is that the
small model outperforms the \textquotedblleft big\textquotedblright\
alternative model. DGPs PEE1--PEE4 satisfy the null hypothesis, whereas
DGPs PEE5--PEE16 satisfy the alternative hypothesis. Note that only in DGP
PEE7 and DGP PEE13 is the alternative model \textquotedblleft
correct\textquotedblright ; all other alternative models are clearly
misspecified, so the power in those cases might be low. We set $P=0.5T$ and
$T=600,$ as in Corradi and Swanson (2007).
Tables 5-6 show that the results from these simulations are qualitatively
similar to those discussed in the paper for the case with no parameter
estimation error when the nulls are least favorable to the alternatives,
while the tests are mostly under-sized when the nulls are not least
favorable to the alternatives. This is consistent with our theory, which
predicts that the stationary bootstrap works well for least favorable
nulls. Our tests in general have power comparable to that of the DM test,
and the DM test is more conservative in terms of empirical size than our
tests in all DGPs.
\begin{center}
\textbf{Data Generating Processes with Parameter Estimation Errors}
\end{center}
\noindent\rule{\textwidth}{0.4pt}

\smallskip

{\footnotesize \noindent $u_{1,t}\sim iidN(0,1),$ $u_{2,t}\sim iidN(0,1),$ $%
u_{3,t}\sim iidN(0,1),$ $x_{t}=1+0.3x_{t-1}+u_{1,t}$

\noindent DGP PEE1: $y_{t}=1+0.3y_{t-1}+u_{2,t}$

\noindent DGP PEE2: $y_{t}=1+0.3y_{t-1}+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE3: $y_{t}=1+0.6y_{t-1}+u_{2,t}$

\noindent DGP PEE4: $y_{t}=1+0.6y_{t-1}+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE5: $y_{t}=1+0.3y_{t-1}+\exp (\tan ^{-1}(x_{t-1}/2))+u_{3,t}$

\noindent DGP PEE6: $y_{t}=1+0.3y_{t-1}+\exp (\tan
^{-1}(x_{t-1}/2))+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE7: $y_{t}=1+0.3y_{t-1}+x_{t-1}+u_{3,t}$

\noindent DGP PEE8: $y_{t}=1+0.3y_{t-1}+x_{t-1}+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE9: $y_{t}=1+0.3y_{t-1}+x_{t-1}1\{x_{t-1}>1/(1-0.3)\}+u_{3,t}$

\noindent DGP PEE10: $y_{t}=1+0.3y_{t-1}+x_{t-1}1\{x_{t-1}>1/(1-0.3)%
\}+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE11: $y_{t}=1+0.6y_{t-1}+\exp (\tan ^{-1}(x_{t-1}/2))+u_{3,t}$

\noindent DGP PEE12: $y_{t}=1+0.6y_{t-1}+\exp (\tan
^{-1}(x_{t-1}/2))+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE13: $y_{t}=1+0.6y_{t-1}+x_{t-1}+u_{3,t}$

\noindent DGP PEE14: $y_{t}=1+0.6y_{t-1}+x_{t-1}+0.3u_{3,t-1}+u_{3,t}$

\noindent DGP PEE15: $y_{t}=1+0.6y_{t-1}+x_{t-1}1\{x_{t-1}>1/(1-0.3)\}+u_{3,t}$

\noindent DGP PEE16: $y_{t}=1+0.6y_{t-1}+x_{t-1}1\{x_{t-1}>1/(1-0.3)%
\}+0.3u_{3,t-1}+u_{3,t}.$}

\smallskip

\noindent\rule{\textwidth}{0.4pt}
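For concreteness, the small-versus-big recursive estimation scheme can be sketched as follows for DGP PEE1 (a hedged illustration; the function names, the use of \texttt{numpy.linalg.lstsq} for OLS, and the seed are our assumptions, not the authors' code):

```python
# Sketch of the parameter-estimation-error experiment: simulate DGP PEE1
# (where x_{t-1} is irrelevant, so the null holds), then produce recursive
# one-step-ahead OLS forecast errors from the "small" model
# y_t = alpha + beta y_{t-1} and the "big" model that adds x_{t-1}.
# Illustration only -- not the code behind the reported tables.
import numpy as np

def gen_pee1(T, seed=0):
    rng = np.random.default_rng(seed)
    u1, u2 = rng.standard_normal(T), rng.standard_normal(T)
    x, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        x[t] = 1 + 0.3 * x[t - 1] + u1[t]
        y[t] = 1 + 0.3 * y[t - 1] + u2[t]  # DGP PEE1
    return y, x

def recursive_forecast_errors(y, x, R):
    """OLS is re-estimated with data through t; errors are for predicting y_{t+1}."""
    e_small, e_big = [], []
    for t in range(R, len(y) - 1):
        target = y[1:t + 1]
        Zs = np.column_stack([np.ones(t), y[:t]])          # small: (1, y_{t-1})
        Zb = np.column_stack([np.ones(t), y[:t], x[:t]])   # big: adds x_{t-1}
        bs = np.linalg.lstsq(Zs, target, rcond=None)[0]
        bb = np.linalg.lstsq(Zb, target, rcond=None)[0]
        e_small.append(y[t + 1] - (bs[0] + bs[1] * y[t]))
        e_big.append(y[t + 1] - (bb[0] + bb[1] * y[t] + bb[2] * x[t]))
    return np.array(e_small), np.array(e_big)

y, x = gen_pee1(600)
e_small, e_big = recursive_forecast_errors(y, x, R=300)  # about P = 0.5T forecasts
mse_small, mse_big = np.mean(e_small**2), np.mean(e_big**2)
```

Under the null (PEE1) both mean squared forecast errors are close to the innovation variance; under an alternative such as PEE7 the big model's errors would shrink while the small model's would not.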
\pagebreak
\section{Proofs of the Lemmas Used in the Paper}
\bigskip
\textbf{Lemma A.1: }\textit{Suppose that Assumptions A.2 and A.4 hold and
let }$\alpha \in \lbrack 0,0.5)$\textit{. Then, for }$k=1,...,l,$

\textit{(a)} $\sup_{t}\left\Vert n^{\alpha }H_{k}(t)\right\Vert \overset{p}{%
\rightarrow }0;$

\textit{(b)} $\sup_{t}\left\Vert n^{\alpha }(\widehat{\beta }_{k,t}-\beta
_{k0})\right\Vert \overset{p}{\rightarrow }0;$

\textit{(c)} $\sup_{t}\left\Vert n^{1/2}H_{k}(t)\right\Vert
=O_{p}(1).$\medskip
\noindent \qquad \textbf{Proof of Lemma A.1: }The results follow from Lemma
A.1 and the proof of Lemma 2.3.2 of McCracken (2000).
The following lemma holds for all $k=1,...,l.$\medskip
\textbf{Lemma A.2: }\textit{(a)} \textit{Suppose that Assumption A.1 holds.
Then, for each }$\varepsilon >0,$\textit{\ there exists }$\delta >0$\textit{%
\ such that for all }$x,$ $\overset{.}{x}\in \mathcal{X}^{-}$ or $x,$ $%
\overset{.}{x}\in \mathcal{X}^{+}$, \textit{\ }%
\begin{equation}
\underset{T\rightarrow \infty }{\overline{\lim }}\left\Vert \underset{\rho
_{g}^{\ast }((x,\beta _{k}),(\overset{.}{x},\overset{.}{\beta _{k}}))<\delta
}{\sup }\left\vert \nu _{k,n}^{g}\left( x,\beta _{k}\right) -\nu
_{k,n}^{g}\left( \overset{.}{x},\overset{.}{\beta _{k}}\right) \right\vert
\right\Vert _{q}<\varepsilon , \label{L11}
\end{equation}%
where
\begin{equation}
\rho _{g}^{\ast }\left( (x,\beta _{k}),\left( \overset{.}{x},\overset{.}{%
\beta _{k}}\right) \right) =\left\{ E\left[ \left( 1(e_{kt}(\beta _{k})\leq
x)-1\left( e_{kt}\left( \overset{.}{\beta _{k}}\right) \leq \overset{.}{x}%
\right) \right) \right] ^{2}\right\} ^{1/2}. \label{L12}
\end{equation}
\textbf{\ }\textit{(b) Suppose that Assumption A.1}$^{\ast }$\textit{\
holds. Then, for each }$\varepsilon >0,$\textit{\ there exists }$\delta >0$%
\textit{\ such that for all }$x,$ $\overset{.}{x}\in \mathcal{X}^{-}$ or $x,$
$\overset{.}{x}\in \mathcal{X}^{+}$, \textit{\ }%
\begin{equation}
\underset{T\rightarrow \infty }{\overline{\lim }}\left\Vert \underset{\rho
_{c}^{\ast }((x,\beta _{k}),(\overset{.}{x},\overset{.}{\beta _{k}}))<\delta
}{\sup }\left\vert \nu _{k,n}^{c}\left( x,\beta _{k}\right) -\nu
_{k,n}^{c}\left( \overset{.}{x},\overset{.}{\beta _{k}}\right) \right\vert
\right\Vert _{q}<\varepsilon , \label{L13}
\end{equation}%
where
\begin{equation*}
\rho _{c}^{\ast }((x,\beta _{k}),(\overset{.}{x},\overset{.}{\beta _{k}}%
))=\left\{ E\left\vert \int_{-\infty }^{x}1(e_{k,t}(\beta _{k})\leq
s)ds-\int_{-\infty }^{\overset{.}{x}}1(e_{k,t}(\overset{.}{\beta _{k}}%
)\leq s)ds\right\vert ^{r}\right\} ^{1/r}1(x<0,\text{ }\overset{.}{x}<0)
\end{equation*}%
\begin{equation}
+\left\{ E\left\vert \int_{x}^{\infty }1(e_{k,t}(\beta _{k})>s)ds-\int_{%
\overset{.}{x}}^{\infty }1(e_{k,t}(\overset{.}{\beta _{k}})>s)ds\right\vert
^{r}\right\} ^{1/r}1(x\geq 0,\text{ }\overset{.}{x}\geq 0). \label{L14}
\end{equation}%
\medskip
\noindent \qquad \textbf{Proof of Lemma A.2:} We first prove part (a).
Without loss of generality (WLOG), we verify that the conditions of Theorem
2.2 in Andrews and Pollard (1994) hold with $Q=q$ and $\gamma =1$ for the case
when $x,$ $\overset{.}{x}$ $\in \mathcal{X}^{+}$, which is bounded on the
real line. The mixing condition is implied by Assumption A.1(i). The
bracketing condition also holds by the following argument. Let
\begin{equation*}
\mathcal{F}_{k}^{g+}=\{1(e_{k,t}(\beta _{k})\leq x):(x,\beta _{k})\in
\mathcal{X}^{+}\times \Theta _{k0}\}.
\end{equation*}%
We now show that $\mathcal{F}_{k}^{g+}$ is a class of uniformly bounded
functions satisfying the $L^{2}$-continuity condition. Let
$\sup_{(\overset{.}{x},\overset{.}{\beta _{k}})}$ denote
$\sup_{\{(\overset{.}{x},\overset{.}{\beta _{k}})\in \mathcal{X}^{+}\times
\Theta _{k0},\ |\overset{.}{x}-x|\leq r_{1},\ ||\overset{.}{\beta _{k}}-\beta
_{k}||\leq r_{2},\ \sqrt{r_{1}^{2}+r_{2}^{2}}\leq \widetilde{r}\}}.$ Then we
have
\begin{eqnarray}
&&\underset{t}{\sup }E\underset{(\overset{.}{x},\overset{.}{\beta _{k}})}{%
\sup }\left\vert 1(e_{k,t+\tau }(\overset{.}{\beta }_{k})\leq \overset{.}{x}%
)-1(e_{k,t+\tau }(\beta _{k})\leq x)\right\vert ^{2} \notag \\
&=&E\underset{(\overset{.}{x},\overset{.}{\beta }_{k})}{\sup }\left\vert
1(e_{k,t+\tau }\leq m_{k}\left( Z_{k,t+\tau },\overset{.}{\beta _{k}}\right)
-m_{k}\left( Z_{k,t+\tau },\beta _{k0}\right) +\overset{.}{x})\right. \notag
\\
&&\left. -1(e_{k,t+\tau }\leq m_{k}(Z_{k,t+\tau },\beta
_{k})-m_{k}(Z_{k,t+\tau },\beta _{k0})+x)\right\vert \notag \\
&\leq &E\underset{(\overset{.}{x},\overset{.}{\beta _{k}})}{\sup }1\left\{
|e_{k,t+\tau }-m_{k}(Z_{k,t+\tau },\beta _{k})+m_{k}(Z_{k,t+\tau },\beta
_{k0})-x|\leq \right. \notag \\
&&\left. |m_{k}(Z_{k,t+\tau },\overset{.}{\beta _{k}})-m_{k}(Z_{k,t+\tau
},\beta _{k})+\overset{.}{x}-x|\right\} \notag \\
&\leq &E\underset{(\overset{.}{x},\overset{.}{\beta _{k}})}{\sup }1\left\{
|e_{k,t+\tau }-m_{k}(Z_{k,t+\tau },\beta _{k})+m_{k}(Z_{k,t+\tau },\beta
_{k0})-x|\leq ||M_{k}(Z_{k,t+\tau },\beta _{k}^{\ast })||r_{2}+r_{1}\right\}
\notag \\
&\leq &C\underset{\beta _{k}\in \Theta _{k,}}{\sup }E||M_{k}(Z_{k,t},\beta
_{k})||r_{2}+r_{1} \notag \\
&\leq &\widetilde{C}\widetilde{r}, \label{SE2}
\end{eqnarray}%
where $\beta _{k}^{\ast }$ lies between $\overset{.}{\beta _{k}}$ and $\beta
_{k}.$ The first inequality is due to the fact that $|1(z\leq
t)-1(z\leq 0)|\leq 1(|z|\leq |t|)$ for any scalars $z$ and $t.$ The second
inequality follows from Assumption A.1(ii), the triangle inequality and the
Cauchy-Schwarz inequality. The third inequality holds by Assumptions
A.1(ii) and (iii), and $\widetilde{C}=\sqrt{2}C(\underset{\beta _{k}\in
\Theta _{k0}}{\sup }E||M_{k}(Z_{k,t},\beta _{k})||\vee 1)$ is finite by
Assumption A.1(ii). The desired bracketing condition holds because the
$L^{2}$-continuity condition implies that the bracketing number satisfies
\begin{equation*}
N(\varepsilon ,\mathcal{F}_{k}^{g+})\leq C(1/\varepsilon )^{L_{k}+1}.
\end{equation*}%
The other cases can be done in the same fashion.
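The elementary indicator inequality $|1(z\leq t)-1(z\leq 0)|\leq 1(|z|\leq |t|)$ invoked above can be spot-checked numerically; the following is an illustration only, not part of the proof:

```python
# Monte Carlo spot-check of |1(z <= t) - 1(z <= 0)| <= 1(|z| <= |t|)
# for scalar z and t; booleans are treated as 0/1 integers.
import random

random.seed(0)
violations = 0
for _ in range(100000):
    z = random.uniform(-2.0, 2.0)
    t = random.uniform(-2.0, 2.0)
    lhs = abs(int(z <= t) - int(z <= 0))   # 1 iff exactly one indicator fires
    rhs = int(abs(z) <= abs(t))
    if lhs > rhs:
        violations += 1
```

The two indicators differ only when $z$ lies between $0$ and $t$, in which case $|z|\leq |t|$, so no violations occur.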
\noindent To prove part (b), WLOG, we only verify the case where $x\geq 0$
and $\overset{.}{x}\geq 0.$ We show that the result follows from Theorem 3 of Hansen
(1996) with $a=L_{\max }+1,$ $\lambda =1.$ Let%
\begin{equation*}
\mathcal{F}_{k}^{c+}=\{\int_{x}^{\infty }1(e_{k,t}(\beta _{k})>s)ds:(x,\beta
_{k})\in \mathcal{X}^{+}\times \Theta _{k0}\}.
\end{equation*}%
Then the functions in $\mathcal{F}_{k}^{c+}$ satisfy the Lipschitz condition:%
\begin{eqnarray*}
&&\left\vert \int_{\overset{.}{x}}^{\infty }1(e_{k,t+\tau }(\overset{.}{%
\beta _{k}})>s)ds-\int_{x}^{\infty }1(e_{k,t+\tau }(\beta
_{k})>s)ds\right\vert \\
&=&\left\vert \max \left\{ e_{k,t+\tau }+m_{k}\left( Z_{k,t+\tau },\beta
_{k0}\right) -m_{k}\left( Z_{k,t+\tau },\overset{.}{\beta _{k}}\right) -%
\overset{.}{x},0\right\} \right. \\
&&\left. -\max \left\{ e_{k,t+\tau }+m_{k}(Z_{k,t+\tau },\beta
_{k0})-m_{k}(Z_{k,t+\tau },\beta _{k})-x,0\right\} \right\vert \\
&\leq &\left\vert m_{k}(Z_{k,t+\tau },\overset{.}{\beta _{k}}%
)-m_{k}(Z_{k,t+\tau },\beta _{k})\right\vert +\left\vert \overset{.}{x}%
-x\right\vert \\
&\leq &\sqrt{2}(\sup_{\beta _{k}\in \Theta _{k0}}||M_{k}(Z_{k,t+\tau },\beta
_{k})||\vee 1)(||\overset{.}{\beta _{k}}-\beta _{k}||^{2}+(\overset{.}{x}%
-x)^{2})^{1/2}
\end{eqnarray*}%
where the first inequality follows from the fact that $|\max
\{z_{1},0\}-\max \{z_{2},0\}|\leq |z_{1}-z_{2}|$ and the triangle
inequality, and the second inequality holds by Assumption A.1$^{\ast }$(ii)
and the Cauchy-Schwarz inequality. We have $\max_{k}\sup_{\beta _{k}\in
\Theta _{k0}}||M_{k}(Z_{k,t+\tau },\beta _{k})||_{r}<\infty $ by Assumption
A.1$^{\ast }$(ii) which yields the conditions (12) and (13) of Hansen
(1996). Finally, the mixing condition (11) in Hansen (1996) holds by
Assumption A.1$^{\ast }$(i).
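Similarly, the Lipschitz fact $|\max \{z_{1},0\}-\max \{z_{2},0\}|\leq |z_{1}-z_{2}|$ underlying the first inequality in the last display, i.e., that $z\mapsto \max \{z,0\}$ is 1-Lipschitz, is easy to check numerically (illustration only):

```python
# Monte Carlo spot-check that z -> max(z, 0) is 1-Lipschitz:
# |max(z1, 0) - max(z2, 0)| <= |z1 - z2| for all scalars z1, z2.
import random

random.seed(0)
ok = all(
    abs(max(z1, 0.0) - max(z2, 0.0)) <= abs(z1 - z2)
    for z1, z2 in (
        (random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
        for _ in range(100000)
    )
)
```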
\medskip
\textbf{Lemma A.3: }\textit{Suppose that Assumptions A.1, A.1}$^{\ast },$%
\textit{\ and A.4 hold. Denote }$\zeta _{k,t+\tau }^{i}(x,\beta
)=f_{k,t+\tau }^{i}(x,\beta )-Ef_{k,t+\tau }^{i}(x,\beta )-f_{k,t+\tau
}^{i}(x,\beta _{0})+Ef_{k,t+\tau }^{i}(x,\beta _{0}),$ $i=g,c.$ Then, for $%
k=2,...,l,$
\textit{(a) }%
\begin{eqnarray*}
\underset{t}{\text{sup}}E\sup_{\{\beta \}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}[\zeta _{k,t+\tau }^{i}(x,\beta )]^{2} &\leq
&Cn^{-\alpha }\varepsilon , \\
\underset{t}{\text{sup}}E\sup_{\{\beta \}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{-}}[\zeta _{k,t+\tau }^{i}(x,\beta )]^{2} &\leq
&Cn^{-\alpha }\varepsilon ,\text{ }i=g,c
\end{eqnarray*}
\textit{(b) }%
\begin{eqnarray*}
\underset{t}{\text{sup}}|E\sup_{\{\beta ,\text{ }\overset{.}{\beta }\}\in
N(n^{-\alpha }\varepsilon )}\sup_{x\in \mathcal{X}^{+}}\zeta _{k,t+\tau
}^{i}(x,\beta )\zeta _{k,t+\tau +j}^{i}(x,\overset{.}{\beta })| &\leq &%
\widetilde{C}\alpha (j)^{d}(n^{-\alpha }\varepsilon )^{2}, \\
\underset{t}{\text{sup}}|E\sup_{\{\beta ,\text{ }\overset{.}{\beta }\}\in
N(n^{-\alpha }\varepsilon )}\sup_{x\in \mathcal{X}^{-}}\zeta _{k,t+\tau
}^{i}(x,\beta )\zeta _{k,t+\tau +j}^{i}(x,\overset{.}{\beta })| &\leq &%
\widetilde{C}\alpha (j)^{d}(n^{-\alpha }\varepsilon )^{2},
\end{eqnarray*}
\textit{\ }
where $d=1$ and $\delta /(2+\delta )$ for $i=g$ and $c,$ respectively.%
\medskip
\noindent \qquad \textbf{Proof of Lemma A.3: }Part (a) holds directly from
the proof of Lemma A.2 by taking $\overset{.}{x}=x$ and $q=2$ and applying
the Cauchy-Schwartz inequality$.$
\noindent For part (b), WLOG, we consider the case $x\geq 0$.
Define $\{x^{\ast },\gamma _{1}^{\ast },\gamma _{2}^{\ast }\}=\mathrm{argsup}%
_{\{x\in \mathcal{X}^{+},\text{ }\{\gamma _{1},\text{ }\gamma _{2}\}\in
N(n^{-\alpha }\varepsilon )\}}\zeta _{k,t+\tau }^{i}(x,\gamma _{1})\zeta
_{k,t+\tau +j}^{i}(x,\gamma _{2}),$ where we suppress the dependence of $%
\left( x^{\ast },\gamma _{1}^{\ast },\gamma _{2}^{\ast }\right) $ on $i=g$
or $c.$ By the proof of Lemma A.2, it is easy to verify $\left\Vert \zeta
_{k,t+\tau }^{i}(x^{\ast },\gamma _{1}^{\ast })\right\Vert _{2+\delta }\leq
\left\Vert \sup_{\beta \in N(n^{-\alpha }\varepsilon )}\sup_{x\in \mathcal{X}%
^{+}}\zeta _{k,t+\tau }^{i}(x,\beta )\right\Vert _{2+\delta }=Cn^{-\alpha
}\varepsilon $. By Assumptions A.1, A.1$^{\ast }$ and Corollary 1.1 of Bosq
(1996),
\begin{eqnarray*}
&&|cov(\zeta _{k,t+\tau }^{g}(x^{\ast },\gamma _{1}^{\ast }),\text{ }\zeta
_{k,t+\tau +j}^{g}(x^{\ast },\gamma _{2}^{\ast }))| \\
&\leq &4\alpha (j)\left\Vert \sup_{\beta \in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}\zeta _{k,t+\tau }^{g}(x,\beta )\right\Vert
_{\infty }\left\Vert \sup_{\beta \in N(n^{-\alpha }\varepsilon )}\sup_{x\in
\mathcal{X}^{+}}\zeta _{k,t+\tau +j}^{g}(x,\beta )\right\Vert _{\infty } \\
&\leq &C\alpha (j)(n^{-\alpha }\varepsilon )^{2},\text{ }
\end{eqnarray*}%
and%
\begin{eqnarray*}
&&|cov(\zeta _{k,t+\tau }^{c}(x^{\ast },\gamma _{1}^{\ast }),\text{ }\zeta
_{k,t+\tau +j}^{c}(x^{\ast },\gamma _{2}^{\ast }))| \\
&\leq &2(1+2/\delta )(2\alpha (j))^{\delta /(2+\delta )}\left\Vert
\sup_{\beta \in N(n^{-\alpha }\varepsilon )}\sup_{x\in \mathcal{X}^{+}}\zeta
_{k,t+\tau }^{c}(x,\beta )\right\Vert _{2+\delta }\left\Vert \sup_{\beta \in
N(n^{-\alpha }\varepsilon )}\sup_{x\in \mathcal{X}^{+}}\zeta _{k,t+\tau
+j}^{c}(x,\beta )\right\Vert _{2+\delta } \\
&\leq &C\alpha (j)^{\delta /(2+\delta )}(n^{-\alpha }\varepsilon )^{2}.
\end{eqnarray*}%
This completes the proof.
\medskip
\textbf{Lemma A.4:} \textit{(a) Suppose that Assumptions A.1-A.4 hold. Then,
we have for }$k=1,...,l,$%
\begin{eqnarray}
&&\underset{x\in \mathcal{X}^{+}}{\sup }|\xi _{k1}^{g}(x)-\nu
_{k,n}^{g}(x,\beta _{k0})+\nu _{1,n}^{g}(x,\beta _{1,0})|\overset{p}{%
\rightarrow }0, \label{L21} \\
&&\underset{x\in \mathcal{X}^{-}}{\sup }|\xi _{k1}^{g}(x)-\nu
_{k,n}^{g}(x,\beta _{k0})+\nu _{1,n}^{g}(x,\beta _{1,0})|\overset{p}{%
\rightarrow }0 \notag
\end{eqnarray}
\textit{(b) Suppose that Assumptions A.1}$^{\ast }$, \textit{A.2, A.3}$%
^{\ast }$\textit{\ and A.4 hold. Then, we have for }$k=1,...,l,$%
\begin{eqnarray}
&&\underset{x\in \mathcal{X}^{+}}{\sup }|\xi _{k1}^{c}(x)-\nu
_{k,n}^{c}(x,\beta _{k0})+\nu _{1,n}^{c}(x,\beta _{10})|\overset{p}{%
\rightarrow }0, \label{L22} \\
&&\underset{x\in \mathcal{X}^{-}}{\sup }|\xi _{k1}^{c}(x)-\nu
_{k,n}^{c}(x,\beta _{k0})+\nu _{1,n}^{c}(x,\beta _{10})|\overset{p}{%
\rightarrow }0. \notag
\end{eqnarray}
\noindent \qquad \textbf{Proof of Lemma A.4:} WLOG, we consider the case $%
x\geq 0.$ Denote $\zeta _{k,t+\tau }^{i}\left( x,\widehat{\beta }_{t}\right)
=f_{k,t+\tau }^{i}\left( x,\widehat{\beta }_{t}\right) -Ef_{k,t+\tau
}^{i}(x,\beta )|_{\beta =\widehat{\beta }_{t}}-f_{k,t+\tau }^{i}(x,\beta
_{0})+Ef_{k,t+\tau }^{i}(x,\beta _{0}),$ $i=g,$ $c,$ then
\begin{equation*}
\xi _{k1}^{i}(x)-\nu _{1,n}^{i}(x,\beta _{10})+\nu _{k,n}^{i}(x,\beta
_{k0})=n^{-1/2}\sum_{t}\zeta _{k,t+\tau }^{i}\left( x,\widehat{\beta }%
_{t}\right) .
\end{equation*}
Fix $\varepsilon _{0},$ $\delta >0.$ By Lemma A.1 (b), for all $\varepsilon
>0,$ there exists $T_{0}$ such that for all $T>T_{0},$ $P\left(
\sup_{k}\sup_{t}n^{\alpha }\left\Vert \widehat{\beta }_{k,t}-\beta
_{k0}\right\Vert >\varepsilon \right) <\delta /2.$ It is useful then to note
that for all $T>T_{0}$ and $\varepsilon _{0}>0,$%
\begin{eqnarray}
&&P\left( \sup_{x\in \mathcal{X}^{+}}n^{-1/2}\left\vert \sum_{t}\zeta
_{k,t+\tau }^{i}\left( x,\widehat{\beta }_{t}\right) \right\vert
>\varepsilon _{0}\right) \notag \\
&\leq &P\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}n^{-1/2}\left\vert \sum_{t}\zeta _{k,t+\tau
}^{i}(x,\beta _{t})\right\vert >\varepsilon _{0}\right) +P\left(
\sup_{k}\sup_{t}n^{\alpha }\left\Vert \widehat{\beta }_{k,t}-\beta
_{k0}\right\Vert >\varepsilon \right) \notag \\
&\leq &P\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}n^{-1/2}\left\vert \sum_{t}\zeta _{k,t+\tau
}^{i}(x,\beta _{t})\right\vert >\varepsilon _{0}\right) +\delta /2
\label{L24}
\end{eqnarray}%
where $\{\beta _{t}\}\equiv \{\beta _{t}\}_{t=R}^{T}$ is a nonrandom
sequence. Now we show that there exists $T_{1}>T_{0}$ such that for all $%
T>T_{1},$ the first term on the right hand side (r.h.s.) of (\ref{L24}) is
less than $\delta /2.$ For the remainder of this proof only, let $\sum_{j}$
denote the summation $\sum_{-n+1\leq j\neq 0\leq n-1}.$ Applying
Chebyshev's inequality, we have%
\begin{eqnarray}
&&\varepsilon _{0}^{2}P\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha
}\varepsilon )}\sup_{x\in \mathcal{X}^{+}}n^{-1/2}\left\vert \sum_{t}\zeta
_{k,t+\tau }^{i}(x,\beta _{t})\right\vert >\varepsilon _{0}\right) \notag \\
&\leq &E\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}n^{-1/2}\left\vert \sum_{t}\zeta _{k,t+\tau
}^{i}(x,\beta _{t})\right\vert \right) ^{2} \notag \\
&=&E\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha }\varepsilon )}\sup_{x\in
\mathcal{X}^{+}}n^{-1}\sum_{t}\left[ \zeta _{k,t+\tau }^{i}(x,\beta _{t})%
\right] ^{2}\right) \notag \\
&&+E\left( \sup_{\{\beta _{t},\beta _{t+j}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}\sum_{j}\left[ n^{-1}\sum_{t=R}^{T-\left\vert
j\right\vert }\zeta _{k,t+\tau }^{i}(x,\beta _{t})\zeta _{k,t+\tau
+j}^{i}\left( x,\beta _{t+j}\right) \right] \right) \notag \\
&\leq &E\left( \sup_{\{\beta _{t}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}n^{-1}\sum_{t}\left[ \zeta _{k,t+\tau
}^{i}(x,\beta _{t})\right] ^{2}\right) \notag \\
&&+\sum_{j}\left\{ n^{-1}\sum_{t=R}^{T-\left\vert j\right\vert }\left\vert E
\left[ \sup_{\{\beta _{t},\beta _{t+j}\}\in N(n^{-\alpha }\varepsilon
)}\sup_{x\in \mathcal{X}^{+}}\zeta _{k,t+\tau }^{i}(x,\beta _{t})\zeta
_{k,t+\tau +j}^{i}(x,\beta _{t+j})\right] \right\vert \right\} . \label{P1}
\end{eqnarray}
For part (a), substituting the results of Lemma A.3 into (\ref{P1}), the
r.h.s. of (\ref{P1}) is less than or equal to
\begin{eqnarray}
&&\widetilde{C}(n^{-\alpha }\varepsilon )+\sum_{j}(1-|j|/n)\widetilde{C}%
\alpha (j)(n^{-\alpha }\varepsilon )^{2} \notag \\
&\leq &\widetilde{C}n^{-\alpha }\varepsilon \left\{
1+2\sum_{j=1}^{n-1}\alpha (j)\right\} \notag \\
&\leq &Cn^{-\alpha }\varepsilon ,\text{ say,} \label{L44}
\end{eqnarray}%
provided $\sum_{j=1}^{\infty }\alpha (j)<\infty .$ Hence, for all $%
T>T_{1}>T_{0}$ and $\varepsilon <\delta \varepsilon _{0}^{2}n^{\alpha
}/(2C),$ the first term on the r.h.s. of (\ref{L24}) is less than $\delta
/2,$ which establishes part (a). The proof of part (b) is analogous, with $%
\alpha (j)$ replaced by $\alpha (j)^{\delta /(2+\delta )}.$\medskip

\noindent \qquad \textbf{Proof of Lemma A.5: }WLOG, we consider the case $%
x\geq 0.$ By Lemma A.1(b), for all $\varepsilon >0$ and $\delta >0,$ $%
P(\sup_{t}\sup_{x\in \mathcal{X}^{+}}n^{\alpha }||\beta _{k,t}^{\ast
}(x)-\beta _{k0}||>\varepsilon )<\delta /2$ for sufficiently large $n,$
where $\beta _{k,t}^{\ast }(x)$ lies between $\widehat{\beta }_{k,t}$ and $%
\beta _{k0}.$ Let
\begin{equation*}
A_{1n}=\text{ }\underset{x\in \mathcal{X}^{+}}{\sup }\underset{\text{ }%
\{\beta _{k}\}\in N_{k}(n^{-\alpha }\varepsilon )}{\text{ }\sup }\left\Vert
\frac{\partial F_{k}(x,\beta _{k})}{\partial \beta _{k}}-\frac{\partial
F_{k}(x,\beta _{k0})}{\partial \beta _{k}}\right\Vert .
\end{equation*}%
Then $A_{1n}=O(n^{-\eta \alpha })$ by Assumption A.3(ii). Moreover,
\begin{eqnarray*}
A_{2n} &\equiv &\text{ }\underset{x\in \mathcal{X}^{+}}{\sup }\left\Vert
n^{-1}\sum_{t}\frac{\partial F_{k}(x,\beta _{k,t}^{\ast }(x))}{\partial
\beta _{k}}-\frac{\partial F_{k}(x,\beta _{k0})}{\partial \beta _{k}}%
\right\Vert \\
&\leq &\underset{x\in \mathcal{X}^{+}}{\text{ }\sup }\underset{t}{\sup }%
\left\Vert \frac{\partial F_{k}(x,\beta _{k,t}^{\ast }(x))}{\partial \beta
_{k}}-\frac{\partial F_{k}(x,\beta _{k0})}{\partial \beta _{k}}\right\Vert
=O_{p}(n^{-\eta \alpha }),
\end{eqnarray*}%
where the last equality holds because $P(A_{2n}\leq A_{1n})\rightarrow 1$ as
$n\rightarrow \infty $ by construction. Now we have the desired result
\begin{eqnarray*}
&&\underset{x\in \mathcal{X}^{+}}{\sup }\left\vert n^{-1/2}\sum_{t}\left(
F_{k}(x,\widehat{\beta }_{k,t})-F_{k}(x,\beta _{k0})\right) -n^{1/2}\left(
\frac{\partial F_{k}(x,\beta _{k0})}{\partial \beta ^{\prime }}\right) B_{k}%
\overline{H}_{k,n}\right\vert \\
&=&\underset{x\in \mathcal{X}^{+}}{\sup }\left\vert n^{-1/2}\sum_{t}\left(
\frac{\partial F_{k}(x,\beta _{k,t}^{\ast }(x))}{\partial \beta _{k}^{\prime
}}\right) \left( \widehat{\beta }_{k,t}-\beta _{k0}\right) -n^{1/2}\left(
\frac{\partial F_{k}(x,\beta _{k0})}{\partial \beta ^{\prime }}\right) B_{k}%
\overline{H}_{k,n}\right\vert \\
&\leq &\underset{x\in \mathcal{X}^{+}}{\sup }\left\vert
n^{-1/2}\sum_{t}\left( \frac{\partial F_{k}(x,\beta _{k,t}^{\ast }(x))}{%
\partial \beta _{k}^{\prime }}-\frac{\partial F_{k}(x,\beta _{k0})}{\partial
\beta ^{\prime }}\right) \left( \widehat{\beta }_{k,t}-\beta _{k0}\right)
\right\vert \\
&&+\sqrt{n}\underset{x\in \mathcal{X}^{+}}{\sup }\left\vert \left( \frac{%
\partial F_{k}(x,\beta _{k0})}{\partial \beta ^{\prime }}\right)
n^{-1}\sum_{t}\left( \widehat{\beta }_{k,t}-\beta _{k0}\right) -\left( \frac{%
\partial F_{k}(x,\beta _{k0})}{\partial \beta ^{\prime }}\right) B_{k}%
\overline{H}_{k,n}\right\vert \\
&\leq &A_{2n}\underset{t}{\sup }\left\Vert \sqrt{n}\left( \widehat{\beta }%
_{k,t}-\beta _{k0}\right) \right\Vert +\underset{x\in \mathcal{X}^{+}}{\sup }%
\left\Vert \frac{\partial F_{k}(x,\beta _{k0})}{\partial \beta ^{\prime }}%
\right\Vert \left\Vert n^{-1/2}\sum_{t}\left( \widehat{\beta }_{k,t}-\beta
_{k0}\right) -B_{k}\sqrt{n}\overline{H}_{k,n}\right\Vert \\
&=&o_{p}(1)+o_{p}(1)=o_{p}(1),
\end{eqnarray*}%
where the first $o_{p}(1)$ follows from the fact that\ $A_{2n}\underset{%
t=R,...,T}{\sup }\left\Vert \sqrt{n}\left( \widehat{\beta }_{k,t}-\beta
_{k0}\right) \right\Vert =O_{p}(n^{-\alpha (1+\eta )+1/2})=o_{p}(1)$ for all
$\alpha \in (1/2(1+\eta ),1/2)$ by Lemma A.1(b), and the second $o_{p}(1)$
holds by Assumption A.3(iii), Lemma A.1(c) and the following argument%
\begin{eqnarray*}
\left\Vert n^{-1/2}\sum_{t=R}^{T}\left( \widehat{\beta }_{k,t}-\beta
_{k0}\right) -B_{k}\sqrt{n}\overline{H}_{k,n}\right\Vert &=&\left\Vert
n^{-1/2}\sum_{t=R}^{T}B_{k}(t)H_{k}(t)-B_{k}n^{-1/2}\sum_{t=R}^{T}H_{k}(t)%
\right\Vert \\
&=&\left\Vert n^{-1/2}\sum_{t=R}^{T}(B_{k}(t)-B_{k})H_{k}(t)\right\Vert \\
&\leq &\sup_{t}\left\Vert B_{k}(t)-B_{k}\right\Vert
\sup_{t}n^{1/2}\left\Vert H_{k}(t)\right\Vert \\
&=&o_{p}(1)O_{p}(1)=o_{p}(1).
\end{eqnarray*}
The proof of part (b) is similar and thus omitted.\medskip
\textbf{Lemma A.6:} \textit{(a) Suppose that Assumptions A.1-A.4 hold. Then,
we have for }$k=2,...,l,$
\begin{equation*}
\left(
\begin{array}{c}
v_{k,n}^{g}(\cdot ,\beta _{k0})-v_{1,n}^{g}(\cdot ,\beta _{10}) \\
\sqrt{n}\overline{H}_{k,n} \\
\sqrt{n}\overline{H}_{1,n}%
\end{array}%
\right) \mathit{\ }\Rightarrow \left(
\begin{array}{c}
\widetilde{g}_{k}(\cdot ) \\
v_{k0} \\
v_{10}%
\end{array}%
\right)
\end{equation*}%
and except at zero, the sample paths of $\widetilde{g}_{k}(\cdot )$ are
uniformly continuous with respect to a pseudometric $\rho _{g}$ on $\mathcal{%
X}$ with probability one, where for $x_{1},x_{2}\in \mathcal{X}^{+}$ or $%
x_{1},x_{2}\in \mathcal{X}^{-},$%
\begin{equation*}
\rho _{g}(x_{1},x_{2})=\left\{ E[(1(e_{1,t}\leq x_{1})-1(e_{k,t}\leq
x_{1}))-(1(e_{1,t}\leq x_{2})-1(e_{k,t}\leq x_{2}))]^{2}\right\} ^{1/2}.
\end{equation*}
\textit{(b) Suppose Assumptions A.1}$^{\ast }$, \textit{A.2, A.3}$^{\ast }$%
\textit{\ and A.4 hold. Then, we have for }$k=2,...,l,$\textit{\ }%
\begin{equation*}
\left(
\begin{array}{c}
v_{k,n}^{c}(\cdot ,\beta _{k0})-v_{1,n}^{c}(\cdot ,\beta _{10}) \\
\sqrt{n}\overline{H}_{k,n} \\
\sqrt{n}\overline{H}_{1,n}%
\end{array}%
\right) \mathit{\ }\Rightarrow \left(
\begin{array}{c}
\widetilde{c}_{k}(\cdot ) \\
v_{k0} \\
v_{10}%
\end{array}%
\right)
\end{equation*}%
and except at zero, the sample paths of $\widetilde{c}_{k}(\cdot )$ are
uniformly continuous with respect to a pseudometric $\rho _{c}$ on $\mathcal{%
X}$ with probability one, where for $x_{1},x_{2}\in \mathcal{X}^{+}$ or $%
x_{1},x_{2}\in \mathcal{X}^{-},$%
\begin{equation*}
\rho _{c}(x_{1},x_{2})=\left\{ E\left\vert \int_{-\infty
}^{x_{1}}(1(e_{1,t}\leq s)-1(e_{k,t}\leq s))ds-\int_{-\infty
}^{x_{2}}(1(e_{1,t}\leq s)-1(e_{k,t}\leq s))ds\right\vert ^{r}\right\}
^{1/r}1(x_{1}<0,x_{2}<0)
\end{equation*}%
\begin{equation*}
+\left\{ E\left\vert \int_{x_{1}}^{\infty
}(1(e_{1,t}>s)-1(e_{k,t}>s))ds-\int_{x_{2}}^{\infty
}(1(e_{1,t}>s)-1(e_{k,t}>s))ds\right\vert ^{r}\right\} ^{1/r}1(x_{1}\geq
0,x_{2}\geq 0).
\end{equation*}
\noindent \qquad \textbf{Proof of Lemma A.6:} We first prove (a). By Theorem
10.2 of Pollard (1990), the results hold if we have (i) total boundedness of
the pseudometric space $\left( \mathcal{X},\rho _{g}\right) ,$ (ii)
stochastic equicontinuity of $\left\{ v_{k,n}^{g}(\cdot ,\beta
_{k0})-v_{1,n}^{g}(\cdot ,\beta _{10}):n\geq 1\right\} $ and (iii) finite
dimensional (fidi) convergence. The first two conditions follow from Lemma
A.2. We now verify condition (iii), i.e., we need to show that $%
(v_{k,n}^{g}(x_{1},\beta _{k0})-v_{1,n}^{g}(x_{1},\beta
_{10}),...,v_{k,n}^{g}(x_{J},\beta _{k0})-v_{1,n}^{g}(x_{J},\beta _{10}),$ $%
\sqrt{n}\overline{H}_{k,n}^{\prime },\sqrt{n}\overline{H}_{1,n}^{\prime
})^{\prime }$ converges in distribution to $(\widetilde{g}_{k}(x_{1}),...,%
\widetilde{g}_{k}(x_{J}),$ $v_{k0}^{\prime },v_{10}^{\prime })^{\prime }$
$\forall x_{1},...,x_{J}\in \mathcal{X}^{+}$ or $%
})^{\prime }$ $\ \forall x_{1},...,x_{J}$ $\in \mathcal{X}^{+}$ or $%
x_{1},...,x_{J}$ $\in \mathcal{X}^{-},$ and $\forall J\geq 1.$ The central
limit theorem (CLT) holds for $\sqrt{n}\overline{H}_{k,n}$ by Lemma 4.1 in
West (1996). A CLT for bounded random variables under $\alpha $-mixing
conditions (see Hall and Heyde, 1980) holds for $v_{k,n}^{g}(x_{j},\beta
_{k0})-v_{1,n}^{g}(x_{j},\beta _{10}),$ $j=1,...,J.$ Then one obtains the
above weak convergence result by the Cram\'{e}r-Wold device. This establishes
part (a).
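\noindent (For completeness, we recall the standard form of the Cramer-Wold
device used in the last step: writing $Z_{n}$ for the random vector above
and $Z$ for its limit,
\begin{equation*}
Z_{n}\Rightarrow Z\quad \text{if and only if}\quad \lambda ^{\prime
}Z_{n}\Rightarrow \lambda ^{\prime }Z\text{ for every conformable constant
vector }\lambda ,
\end{equation*}%
and each linear combination $\lambda ^{\prime }Z_{n}$ is covered by the
scalar CLTs cited above.)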
\noindent For part (b), we need to verify the fidi convergence again. Note
that the moment condition of Hall and Heyde (1980, Corollary 5.1) holds
since, without loss of generality taking $x>0,$%
\begin{equation*}
E\left\vert \int_{x}^{\infty }(1(e_{1,t}>s)-1(e_{k,t}>s))ds\right\vert
^{2+\delta }\leq E\left\vert e_{1,t}-e_{k,t}\right\vert ^{2+\delta }<\infty .
\end{equation*}%
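(This bound can be checked directly from the elementary identity $%
\int_{x}^{\infty }1(e>s)ds=(e-x)^{+},$ where $(a)^{+}=\max (a,0):$ since $%
a\mapsto (a)^{+}$ is Lipschitz with constant one,
\begin{equation*}
\left\vert \int_{x}^{\infty }(1(e_{1,t}>s)-1(e_{k,t}>s))ds\right\vert
=\left\vert (e_{1,t}-x)^{+}-(e_{k,t}-x)^{+}\right\vert \leq \left\vert
e_{1,t}-e_{k,t}\right\vert .)
\end{equation*}%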
The mixing condition also holds since $\sum \alpha (j)^{\delta /(2+\delta
)}\leq C\sum j^{-M\delta /(2+\delta )}<\infty $ by Assumption A.1$^{\ast },$
which guarantees $M\delta /(2+\delta )>1.$\medskip
\textbf{Lemma HA.1: }\textit{(a) Suppose that Assumption HA.1 holds. Then,
for each }$\varepsilon >0,$\textit{\ there exists }$\delta >0$\textit{\ such
that for all }$x,$ $\overset{.}{x}$ $\in \mathcal{X}^{+}$ or $x,$ $\overset{.%
}{x}$ $\in \mathcal{X}^{-},$%
\begin{equation}
\underset{T\rightarrow \infty }{\overline{\lim }}\left\Vert \underset{\rho
_{hg}^{\ast }(x,\overset{.}{x})<\delta }{\sup }|\nu _{k,n}^{hg}(x)-\nu
_{k,n}^{hg}(\overset{.}{x})|\right\Vert _{q}<\varepsilon ,
\end{equation}%
where
\begin{equation}
\rho _{hg}^{\ast }(x,\overset{.}{x})=\{E[1(e_{k,t+\tau }\leq
x)-1(e_{k,t+\tau }\leq \overset{.}{x})]^{2}\}^{1/2}.
\end{equation}
\textbf{\ }\textit{(b) Suppose that Assumption HA.1}$^{\ast }$\textit{\
holds. Then, for each }$\varepsilon >0,$\textit{\ there exists }$\delta >0$%
\textit{\ such that for all }$x,$ $\overset{.}{x}$ $\in \mathcal{X}^{+}$ or $%
x,$ $\overset{.}{x}$ $\in \mathcal{X}^{-},$ \textit{\ }%
\begin{equation}
\underset{T\rightarrow \infty }{\overline{\lim }}\left\Vert \underset{\rho
_{hc}^{\ast }(x,\overset{.}{x})<\delta }{\sup }|\nu _{k,n}^{hc}(x)-\nu
_{k,n}^{hc}(\overset{.}{x})|\right\Vert _{q}<\varepsilon , \label{L13b}
\end{equation}%
where
\begin{equation*}
\rho _{hc}^{\ast }(x,\overset{.}{x})=\left\{ E\left\vert \int_{-\infty
}^{x}1(e_{k,t+\tau }\leq s)ds-\int_{-\infty }^{\overset{.}{x}}1(e_{k,t+\tau
}\leq s)ds\right\vert ^{r}\right\} ^{1/r}1(x<0,\text{ }\overset{.}{x}<0)
\end{equation*}%
\begin{equation}
+\left\{ E\left\vert \int_{x}^{\infty }1(e_{k,t+\tau }>s)ds-\int_{\overset{.%
}{x}}^{\infty }1(e_{k,t+\tau }>s)ds\right\vert ^{r}\right\} ^{1/r}1(x\geq 0,%
\text{ }\overset{.}{x}\geq 0). \label{L14b}
\end{equation}
\noindent \qquad \textbf{Proof of Lemma HA.1:} Assumptions HA.4 and HA.1 (i)
(resp. HA.1$^{\ast }$(i)) imply that $\{e_{k,t+\tau }:t\geq R\}$ is an $%
\alpha $-mixing sequence with mixing coefficients $\alpha
(l)=O(l^{-C_{0}}), $ where $C_{0}$ is as defined in HA.1 (resp. HA.1$^{\ast
}$). Since Theorem 2.2 in Andrews and Pollard (1994) and Theorem 3 in Hansen
(1996) do not require any stationarity assumption, the proof is analogous to
that of Lemma A.2. For example, for part (b), the mixing-rate condition of
Hansen (1996) is satisfied with our mixing coefficient $C_{0}=1/q-1/r,$ his
Eq. (12) holds by Assumption HA.1 (ii), and his Eq. (13) is satisfied with
the dominating function $b=1.$ Theorem 3 in Hansen (1996) then applies upon
taking $a=1$ and $\lambda =1.$\medskip
\textbf{Lemma HA.2:} \textit{(a) Suppose Assumptions HA.1* and HA.4 hold.
Then, we have for }$k=2,...,l,$
\begin{equation*}
v_{k,n}^{hg}(\cdot )-v_{1,n}^{hg}(\cdot )\Rightarrow \widetilde{hg}%
_{k}(\cdot )
\end{equation*}%
and, except at zero, the sample paths of $\widetilde{hg}_{k}(\cdot )$ are
uniformly continuous with respect to a pseudometric $\rho _{hg}$ on $%
\mathcal{X}$ with probability one, where for $x_{1},x_{2}\in \mathcal{X}^{+}$
or $x_{1},x_{2}$ $\in \mathcal{X}^{-},$%
\begin{equation*}
\rho _{hg}(x_{1},x_{2})=\left\{ E[(1(e_{1,t+\tau }\leq x_{1})-1(e_{k,t+\tau
}\leq x_{1}))-(1(e_{1,t+\tau }\leq x_{2})-1(e_{k,t+\tau }\leq
x_{2}))]^{2}\right\} ^{1/2}.
\end{equation*}
\textit{(b) Suppose that Assumptions A.1}$^{\ast }$, \textit{A.2, A.3}$%
^{\ast }$\textit{\ and A.4 hold. Then, we have for }$k=2,...,l,$\textit{\ }%
\begin{equation*}
v_{k,n}^{hc}(\cdot )-v_{1,n}^{hc}(\cdot )\Rightarrow \widetilde{hc}%
_{k}(\cdot )
\end{equation*}%
and, except at zero, the sample paths of $\widetilde{hc}_{k}(\cdot )$ are
uniformly continuous with respect to a pseudometric $\rho _{hc}$ on $%
\mathcal{X}$ with probability one, where for $x_{1},x_{2}\in \mathcal{X}^{+}$
or $x_{1},x_{2}$ $\in \mathcal{X}^{-},$%
\begin{eqnarray*}
&&\rho _{hc}(x_{1},x_{2}) \\
&=&\left\{ E\left\vert \int_{-\infty }^{x_{1}}(1(e_{1,t+\tau }\leq
s)-1(e_{k,t+\tau }\leq s))ds-\int_{-\infty }^{x_{2}}(1(e_{1,t+\tau }\leq
s)-1(e_{k,t+\tau }\leq s))ds\right\vert ^{r}\right\} ^{1/r}1(x_{1}<0,x_{2}<0)
\end{eqnarray*}%
\begin{equation*}
+\left\{ E\left\vert \int_{x_{1}}^{\infty }(1(e_{1,t+\tau }>s)-1(e_{k,t+\tau
}>s))ds-\int_{x_{2}}^{\infty }(1(e_{1,t+\tau }>s)-1(e_{k,t+\tau
}>s))ds\right\vert ^{r}\right\} ^{1/r}1(x_{1}\geq 0,x_{2}\geq 0).
\end{equation*}%
\smallskip
\noindent \qquad \textbf{Proof of Lemma HA.2:} The proof is analogous to
that of Lemma A.6. The total boundedness of the pseudometric spaces $\left(
\mathcal{X},\rho _{i}\right) ,$ $i=hg$ and $hc,$ and the stochastic
equicontinuity of $\left\{ v_{k,n}^{i}(\cdot )-v_{1,n}^{i}(\cdot ):n\geq
1\right\} ,$ $i=hg$ and $hc,$ follow from Lemma HA.1. The finite dimensional
convergence follows from Hall and Heyde (1980). Then the result follows from
Theorem 10.2 of Pollard (1990).
\pagebreak
\begin{thebibliography}{9}
\bibitem{References} Andrews, D. W. K. and D. Pollard (1994), An
Introduction to Functional Central Limit Theorems for Dependent Stochastic
Processes, \textit{International Statistical Review} 62, 119-132.
\bibitem{} Corradi, V. and N. R. Swanson (2007), Nonparametric Bootstrap
Procedures For Predictive Inference Based On Recursive Estimation Schemes,
\textit{International Economic Review} 48, 67--109.
\bibitem{} Davidson, J. (1994), \textit{Stochastic Limit Theory}, Oxford
University Press, New York.
\bibitem{} Hall, P. and C. C. Heyde (1980), \textit{Martingale Limit Theory
and Its Application}, Academic Press, New York.
\bibitem{} Hansen, B. E. (1996), Stochastic Equicontinuity for Unbounded
Dependent Heterogeneous Arrays, \textit{Econometric Theory} 12, 347-359.
\bibitem{} McCracken, M. W. (2000), Robust Out-of-Sample Inference, \textit{%
Journal of Econometrics} 99, 195-223.
\bibitem{} Pollard, D. (1990), \textit{Empirical Processes: Theory and
Applications}, CBMS Conference Series in Probability and Statistics, Vol. 2,
Institute of Mathematical Statistics, Hayward.
\bibitem{} West, K. D. (1996), Asymptotic Inference about Predictive
Ability, \textit{Econometrica} 64, 1067-1084.
\end{thebibliography}
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs 2, 4 and 6; $\rho
=\lambda =0.5$)\label{tab:glcl1}}%
\begin{equation*}
\begin{tabular}{cccc|ccc}
\hline\hline
${\small S}_{n}$ & \multicolumn{3}{c|}{\small GL forecast superiority} &
\multicolumn{3}{|c}{\small CL forecast superiority} \\
& {\small DGP2} & {\small DGP4} & {\small DGP6} & {\small DGP2} & {\small %
DGP4} & {\small DGP6} \\
\multicolumn{4}{c|}{\small n=100} & \multicolumn{3}{|c}{\small n=100} \\
{\small 0.63} & {\small 0.136} & {\small 0.582} & {\small 0.277} & {\small %
0.170} & {\small 0.710} & {\small 0.372} \\
{\small 0.54} & {\small 0.130} & {\small 0.605} & {\small 0.259} & {\small %
0.169} & {\small 0.726} & {\small 0.361} \\
{\small 0.44} & {\small 0.115} & {\small 0.558} & {\small 0.247} & {\small %
0.170} & {\small 0.712} & {\small 0.345} \\
{\small 0.35} & {\small 0.129} & {\small 0.559} & {\small 0.239} & {\small %
0.131} & {\small 0.684} & {\small 0.348} \\
{\small 0.25} & {\small 0.134} & {\small 0.555} & {\small 0.290} & {\small %
0.151} & {\small 0.696} & {\small 0.380} \\
{\small 0.16} & {\small 0.124} & {\small 0.541} & {\small 0.270} & {\small %
0.146} & {\small 0.693} & {\small 0.363} \\
{\small DM} & {\small 0.165} & {\small 0.845} & {\small 0.506} & & & \\
\multicolumn{4}{c|}{\small n=500} & \multicolumn{3}{|c}{\small n=500} \\
{\small 0.54} & {\small 0.140} & {\small 0.978} & {\small 0.516} & {\small %
0.161} & {\small 0.996} & {\small 0.718} \\
{\small 0.45} & {\small 0.127} & {\small 0.974} & {\small 0.517} & {\small %
0.118} & {\small 0.997} & {\small 0.715} \\
{\small 0.36} & {\small 0.126} & {\small 0.981} & {\small 0.491} & {\small %
0.142} & {\small 0.998} & {\small 0.678} \\
{\small 0.27} & {\small 0.106} & {\small 0.975} & {\small 0.482} & {\small %
0.130} & {\small 0.999} & {\small 0.679} \\
{\small 0.17} & {\small 0.105} & {\small 0.971} & {\small 0.462} & {\small %
0.125} & {\small 0.995} & {\small 0.672} \\
{\small 0.08} & {\small 0.109} & {\small 0.976} & {\small 0.478} & {\small %
0.109} & {\small 0.995} & {\small 0.661} \\
{\small DM} & {\small 0.146} & {\small 1.000} & {\small 0.906} & & & \\
\multicolumn{4}{c|}{\small n=1000} & \multicolumn{3}{|c}{\small n=1000} \\
{\small 0.50} & {\small 0.129} & {\small 1.000} & {\small 0.717} & {\small %
0.147} & {\small 1.000} & {\small 0.898} \\
{\small 0.41} & {\small 0.130} & {\small 1.000} & {\small 0.698} & {\small %
0.121} & {\small 1.000} & {\small 0.891} \\
{\small 0.33} & {\small 0.130} & {\small 1.000} & {\small 0.697} & {\small %
0.128} & {\small 1.000} & {\small 0.877} \\
{\small 0.24} & {\small 0.090} & {\small 1.000} & {\small 0.687} & {\small %
0.108} & {\small 1.000} & {\small 0.876} \\
{\small 0.15} & {\small 0.098} & {\small 1.000} & {\small 0.699} & {\small %
0.088} & {\small 1.000} & {\small 0.879} \\
{\small 0.06} & {\small 0.095} & {\small 1.000} & {\small 0.696} & {\small %
0.119} & {\small 1.000} & {\small 0.859} \\
{\small DM} & {\small 0.165} & {\small 1.000} & {\small 0.986} & & & \\
\hline\hline
\multicolumn{7}{c}{{\small Notes: See Notes to Table 1 in the paper. }$%
{\small \rho =\lambda =0.5.}$}%
\end{tabular}%
\
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}} }%
%BeginExpansion
\linespread{1.25}
%EndExpansion
\
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs 2, 4 and 6; $\rho =0.5$%
, $\lambda =0.3$)\label{tab:glcl2}}%
\begin{equation*}
\begin{tabular}{cccc|ccc}
\hline\hline
${\small S}_{n}$ & \multicolumn{3}{c|}{\small GL forecast superiority} &
\multicolumn{3}{|c}{\small CL forecast superiority} \\
& {\small DGP2} & {\small DGP4} & {\small DGP6} & {\small DGP2} & {\small %
DGP4} & {\small DGP6} \\
\multicolumn{4}{c|}{\small n=100} & \multicolumn{3}{|c}{\small n=100} \\
{\small 0.63} & {\small 0.116} & {\small 0.580} & {\small 0.222} & {\small %
0.120} & {\small 0.751} & {\small 0.352} \\
{\small 0.54} & {\small 0.125} & {\small 0.579} & {\small 0.219} & {\small %
0.146} & {\small 0.750} & {\small 0.341} \\
{\small 0.44} & {\small 0.109} & {\small 0.579} & {\small 0.208} & {\small %
0.130} & {\small 0.762} & {\small 0.355} \\
{\small 0.35} & {\small 0.112} & {\small 0.589} & {\small 0.232} & {\small %
0.123} & {\small 0.735} & {\small 0.328} \\
{\small 0.25} & {\small 0.120} & {\small 0.580} & {\small 0.259} & {\small %
0.118} & {\small 0.720} & {\small 0.348} \\
{\small 0.16} & {\small 0.127} & {\small 0.615} & {\small 0.241} & {\small %
0.136} & {\small 0.770} & {\small 0.361} \\
{\small DM} & {\small 0.127} & {\small 0.887} & {\small 0.494} & & & \\
\multicolumn{4}{c|}{\small n=500} & \multicolumn{3}{|c}{\small n=500} \\
{\small 0.54} & {\small 0.112} & {\small 0.990} & {\small 0.506} & {\small %
0.129} & {\small 1.000} & {\small 0.709} \\
{\small 0.45} & {\small 0.112} & {\small 0.988} & {\small 0.484} & {\small %
0.088} & {\small 1.000} & {\small 0.709} \\
{\small 0.36} & {\small 0.105} & {\small 0.989} & {\small 0.479} & {\small %
0.121} & {\small 1.000} & {\small 0.709} \\
{\small 0.27} & {\small 0.098} & {\small 0.983} & {\small 0.495} & {\small %
0.107} & {\small 1.000} & {\small 0.698} \\
{\small 0.17} & {\small 0.101} & {\small 0.984} & {\small 0.447} & {\small %
0.121} & {\small 1.000} & {\small 0.727} \\
{\small 0.08} & {\small 0.124} & {\small 0.987} & {\small 0.472} & {\small %
0.108} & {\small 1.000} & {\small 0.713} \\
{\small DM} & {\small 0.130} & {\small 1.000} & {\small 0.927} & & & \\
\multicolumn{4}{c|}{\small n=1000} & \multicolumn{3}{|c}{\small n=1000} \\
{\small 0.50} & {\small 0.115} & {\small 1.000} & {\small 0.767} & {\small %
0.107} & {\small 1.000} & {\small 0.918} \\
{\small 0.41} & {\small 0.119} & {\small 1.000} & {\small 0.718} & {\small %
0.106} & {\small 1.000} & {\small 0.909} \\
{\small 0.33} & {\small 0.120} & {\small 1.000} & {\small 0.715} & {\small %
0.107} & {\small 1.000} & {\small 0.914} \\
{\small 0.24} & {\small 0.089} & {\small 1.000} & {\small 0.709} & {\small %
0.094} & {\small 1.000} & {\small 0.912} \\
{\small 0.15} & {\small 0.104} & {\small 1.000} & {\small 0.749} & {\small %
0.090} & {\small 1.000} & {\small 0.912} \\
{\small 0.06} & {\small 0.092} & {\small 1.000} & {\small 0.726} & {\small %
0.116} & {\small 1.000} & {\small 0.899} \\
{\small DM} & {\small 0.122} & {\small 1.000} & {\small 0.995} & & & \\
\hline\hline
\multicolumn{7}{c}{{\small Notes: See Notes to Table 1 in the paper.} $%
{\small \rho =0.5}$, ${\small \lambda =0.3.}$}%
\end{tabular}%
\
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}} }%
%BeginExpansion
\linespread{1.25}
%EndExpansion
\
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs 2, 4 and 6; $\rho =0.3$%
, $\lambda =0.5$)\label{tab:glcl3}}%
\begin{equation*}
\begin{tabular}{cccc|ccc}
\hline\hline
${\small S}_{n}$ & \multicolumn{3}{c|}{\small GL forecast superiority} &
\multicolumn{3}{|c}{\small CL forecast superiority} \\
& {\small DGP2} & {\small DGP4} & {\small DGP6} & {\small DGP2} & {\small %
DGP4} & {\small DGP6} \\
\multicolumn{4}{c|}{\small n=100} & \multicolumn{3}{|c}{\small n=100} \\
{\small 0.63} & {\small 0.146} & {\small 0.580} & {\small 0.222} & {\small %
0.120} & {\small 0.751} & {\small 0.352} \\
{\small 0.54} & {\small 0.125} & {\small 0.579} & {\small 0.219} & {\small %
0.146} & {\small 0.750} & {\small 0.341} \\
{\small 0.44} & {\small 0.109} & {\small 0.579} & {\small 0.208} & {\small %
0.130} & {\small 0.762} & {\small 0.355} \\
{\small 0.35} & {\small 0.112} & {\small 0.589} & {\small 0.232} & {\small %
0.123} & {\small 0.735} & {\small 0.328} \\
{\small 0.25} & {\small 0.120} & {\small 0.580} & {\small 0.259} & {\small %
0.118} & {\small 0.720} & {\small 0.348} \\
{\small 0.16} & {\small 0.127} & {\small 0.615} & {\small 0.241} & {\small %
0.136} & {\small 0.770} & {\small 0.361} \\
{\small DM} & {\small 0.127} & {\small 0.887} & {\small 0.494} & & & \\
\multicolumn{4}{c|}{\small n=500} & \multicolumn{3}{|c}{\small n=500} \\
{\small 0.54} & {\small 0.112} & {\small 0.990} & {\small 0.506} & {\small %
0.129} & {\small 1.000} & {\small 0.709} \\
{\small 0.45} & {\small 0.112} & {\small 0.988} & {\small 0.484} & {\small %
0.088} & {\small 1.000} & {\small 0.709} \\
{\small 0.36} & {\small 0.105} & {\small 0.989} & {\small 0.479} & {\small %
0.121} & {\small 1.000} & {\small 0.709} \\
{\small 0.27} & {\small 0.098} & {\small 0.983} & {\small 0.495} & {\small %
0.107} & {\small 1.000} & {\small 0.698} \\
{\small 0.17} & {\small 0.101} & {\small 0.984} & {\small 0.447} & {\small %
0.121} & {\small 1.000} & {\small 0.727} \\
{\small 0.08} & {\small 0.124} & {\small 0.987} & {\small 0.472} & {\small %
0.108} & {\small 1.000} & {\small 0.713} \\
{\small DM} & {\small 0.130} & {\small 1.000} & {\small 0.927} & & & \\
\multicolumn{4}{c|}{\small n=1000} & \multicolumn{3}{|c}{\small n=1000} \\
{\small 0.50} & {\small 0.115} & {\small 1.000} & {\small 0.767} & {\small %
0.107} & {\small 1.000} & {\small 0.918} \\
{\small 0.41} & {\small 0.119} & {\small 1.000} & {\small 0.718} & {\small %
0.106} & {\small 1.000} & {\small 0.909} \\
{\small 0.33} & {\small 0.120} & {\small 1.000} & {\small 0.715} & {\small %
0.107} & {\small 1.000} & {\small 0.914} \\
{\small 0.24} & {\small 0.089} & {\small 1.000} & {\small 0.709} & {\small %
0.094} & {\small 1.000} & {\small 0.912} \\
{\small 0.15} & {\small 0.104} & {\small 1.000} & {\small 0.749} & {\small %
0.090} & {\small 1.000} & {\small 0.912} \\
{\small 0.06} & {\small 0.092} & {\small 1.000} & {\small 0.726} & {\small %
0.116} & {\small 1.000} & {\small 0.899} \\
{\small DM} & {\small 0.122} & {\small 1.000} & {\small 0.995} & & & \\
\hline\hline
\multicolumn{7}{c}{{\small Notes: See Notes to Table 1 in the paper. }$%
{\small \rho =0.3,\lambda =0.5.}$}%
\end{tabular}%
\
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}} }%
%BeginExpansion
\linespread{1.25}
%EndExpansion
\
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs 2, 4 and 6; $\rho
=\lambda =0.8$)\label{tab:glcl4}}%
\begin{equation*}
\begin{tabular}{cccc|ccc}
\hline\hline
${\small S}_{n}$ & \multicolumn{3}{c|}{\small GL forecast superiority} &
\multicolumn{3}{|c}{\small CL forecast superiority} \\
& {\small DGP2} & {\small DGP4} & {\small DGP6} & {\small DGP2} & {\small %
DGP4} & {\small DGP6} \\
\multicolumn{4}{c|}{\small n=100} & \multicolumn{3}{|c}{\small n=100} \\
{\small 0.63} & {\small 0.213} & {\small 0.399} & {\small 0.216} & {\small %
0.280} & {\small 0.522} & {\small 0.327} \\
{\small 0.54} & {\small 0.185} & {\small 0.389} & {\small 0.198} & {\small %
0.286} & {\small 0.450} & {\small 0.338} \\
{\small 0.44} & {\small 0.155} & {\small 0.378} & {\small 0.183} & {\small %
0.220} & {\small 0.480} & {\small 0.299} \\
{\small 0.35} & {\small 0.181} & {\small 0.368} & {\small 0.180} & {\small %
0.223} & {\small 0.435} & {\small 0.298} \\
{\small 0.25} & {\small 0.170} & {\small 0.320} & {\small 0.209} & {\small %
0.205} & {\small 0.432} & {\small 0.292} \\
{\small 0.16} & {\small 0.162} & {\small 0.315} & {\small 0.195} & {\small %
0.206} & {\small 0.389} & {\small 0.281} \\
{\small DM} & {\small 0.283} & {\small 0.566} & {\small 0.404} & & & \\
\multicolumn{4}{c|}{\small n=500} & \multicolumn{3}{|c}{\small n=500} \\
{\small 0.54} & {\small 0.197} & {\small 0.660} & {\small 0.276} & {\small %
0.258} & {\small 0.763} & {\small 0.439} \\
{\small 0.45} & {\small 0.184} & {\small 0.632} & {\small 0.274} & {\small %
0.210} & {\small 0.753} & {\small 0.391} \\
{\small 0.36} & {\small 0.166} & {\small 0.589} & {\small 0.259} & {\small %
0.181} & {\small 0.733} & {\small 0.369} \\
{\small 0.27} & {\small 0.164} & {\small 0.563} & {\small 0.248} & {\small %
0.170} & {\small 0.685} & {\small 0.358} \\
{\small 0.17} & {\small 0.131} & {\small 0.514} & {\small 0.217} & {\small %
0.141} & {\small 0.663} & {\small 0.317} \\
{\small 0.08} & {\small 0.128} & {\small 0.507} & {\small 0.192} & {\small %
0.144} & {\small 0.653} & {\small 0.300} \\
{\small DM} & {\small 0.276} & {\small 0.860} & {\small 0.546} & & & \\
\multicolumn{4}{c|}{\small n=1000} & \multicolumn{3}{|c}{\small n=1000} \\
{\small 0.50} & {\small 0.181} & {\small 0.817} & {\small 0.347} & {\small %
0.221} & {\small 0.908} & {\small 0.509} \\
{\small 0.41} & {\small 0.178} & {\small 0.776} & {\small 0.309} & {\small %
0.185} & {\small 0.909} & {\small 0.489} \\
{\small 0.33} & {\small 0.160} & {\small 0.763} & {\small 0.294} & {\small %
0.186} & {\small 0.889} & {\small 0.466} \\
{\small 0.24} & {\small 0.144} & {\small 0.753} & {\small 0.289} & {\small %
0.158} & {\small 0.854} & {\small 0.428} \\
{\small 0.15} & {\small 0.151} & {\small 0.706} & {\small 0.269} & {\small %
0.128} & {\small 0.844} & {\small 0.396} \\
{\small 0.06} & {\small 0.118} & {\small 0.663} & {\small 0.286} & {\small %
0.137} & {\small 0.799} & {\small 0.369} \\
{\small DM} & {\small 0.280} & {\small 0.953} & {\small 0.650} & & & \\
\hline\hline
\multicolumn{7}{c}{{\small Notes: See Notes to Table 1 in the paper. }$%
{\small \rho =\lambda =0.8.}$}%
\end{tabular}%
\
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}} }%
%BeginExpansion
\linespread{1.25}
%EndExpansion
\
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs PEE1-8)\label{tab:pee1}}%
\begin{equation*}
\begin{tabular}{ccccccccc}
\hline\hline
& {\small DGP PEE1} & {\small DGP PEE2} & {\small DGP PEE3} & {\small DGP
PEE4} & {\small DGP PEE5} & {\small DGP PEE6} & {\small DGP PEE7} & {\small %
DGP PEE8} \\
${\small S}_{n}$ & \multicolumn{8}{c}{\small GL forecast superiority} \\
{\small 0.53} & {\small 0.045} & {\small 0.042} & {\small 0.051} & {\small %
0.057} & {\small 0.777} & {\small 0.675} & {\small 0.999} & {\small 0.999}
\\
{\small 0.44} & {\small 0.045} & {\small 0.055} & {\small 0.052} & {\small %
0.043} & {\small 0.795} & {\small 0.679} & {\small 1.000} & {\small 0.999}
\\
{\small 0.35} & {\small 0.056} & {\small 0.047} & {\small 0.044} & {\small %
0.047} & {\small 0.760} & {\small 0.689} & {\small 1.000} & {\small 0.999}
\\
{\small 0.26} & {\small 0.052} & {\small 0.052} & {\small 0.049} & {\small %
0.048} & {\small 0.810} & {\small 0.683} & {\small 1.000} & {\small 1.000}
\\
{\small 0.17} & {\small 0.053} & {\small 0.050} & {\small 0.055} & {\small %
0.059} & {\small 0.766} & {\small 0.721} & {\small 1.000} & {\small 1.000}
\\
{\small 0.08} & {\small 0.063} & {\small 0.052} & {\small 0.064} & {\small %
0.056} & {\small 0.813} & {\small 0.704} & {\small 1.000} & {\small 0.999}
\\
& \multicolumn{8}{c}{\small CL forecast superiority} \\
{\small 0.53} & {\small 0.057} & {\small 0.055} & {\small 0.065} & {\small %
0.048} & {\small 0.990} & {\small 0.976} & {\small 1.000} & {\small 1.000}
\\
{\small 0.44} & {\small 0.059} & {\small 0.053} & {\small 0.047} & {\small %
0.063} & {\small 0.992} & {\small 0.973} & {\small 1.000} & {\small 1.000}
\\
{\small 0.35} & {\small 0.055} & {\small 0.050} & {\small 0.065} & {\small %
0.061} & {\small 0.992} & {\small 0.973} & {\small 1.000} & {\small 1.000}
\\
{\small 0.26} & {\small 0.059} & {\small 0.054} & {\small 0.055} & {\small %
0.067} & {\small 0.991} & {\small 0.969} & {\small 1.000} & {\small 1.000}
\\
{\small 0.17} & {\small 0.065} & {\small 0.067} & {\small 0.062} & {\small %
0.087} & {\small 0.981} & {\small 0.978} & {\small 1.000} & {\small 1.000}
\\
{\small 0.08} & {\small 0.059} & {\small 0.082} & {\small 0.061} & {\small %
0.075} & {\small 0.990} & {\small 0.984} & {\small 1.000} & {\small 1.000}
\\
& & & & & & & & \\
{\small DM} & {\small 0.011} & {\small 0.003} & {\small 0.008} & {\small %
0.019} & {\small 0.999} & {\small 0.997} & {\small 1.000} & {\small 1.000}
\\ \hline\hline
\multicolumn{9}{c}{\small Notes: DGPs PEE1 - PEE4 satisfy the null
hypothesis, whereas the other DGPs satisfy the alternative hypothesis. $%
n=600 $.} \\
\multicolumn{9}{c}{\small Entries are rejection frequencies over 1000
repetitions; the number of bootstrap resamples is 300.} \\
\multicolumn{9}{c}{$S_{n}${\small \ is the bootstrap smoothing parameter.
The nominal test size is 10\%.}}%
\end{tabular}%
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}} }%
%BeginExpansion
\linespread{1.25}
%EndExpansion
\
\pagebreak
%TCIMACRO{\TeXButton{linespread}{\linespread{1.0}}}%
%BeginExpansion
\linespread{1.0}%
%EndExpansion
%TCIMACRO{\TeXButton{B}{\begin{table}{\centering}}}%
%BeginExpansion
\begin{table}{\centering}%
%EndExpansion
\caption{GL and CL Forecast Superiority Tests (DGPs PEE9-16)\label{tab:pee2}}%
\begin{equation*}
\begin{tabular}{ccccccccc}
\hline\hline
& {\small DGP PEE9} & {\small DGP PEE10} & {\small DGP PEE11} & {\small DGP
PEE12} & {\small DGP PEE13} & {\small DGP PEE14} & {\small DGP PEE15} &
{\small DGP PEE16} \\
${\small S}_{n}$ & \multicolumn{8}{c}{\small GL forecast superiority} \\
{\small 0.53} & {\small 1.000} & {\small 1.000} & {\small 0.776} & {\small %
0.711} & {\small 1.000} & {\small 1.000} & {\small 0.839} & {\small 0.799}
\\
{\small 0.44} & {\small 1.000} & {\small 1.000} & {\small 0.780} & {\small %
0.672} & {\small 1.000} & {\small 1.000} & {\small 0.827} & {\small 0.762}
\\
{\small 0.35} & {\small 1.000} & {\small 1.000} & {\small 0.775} & {\small %
0.692} & {\small 1.000} & {\small 0.999} & {\small 0.837} & {\small 0.761}
\\
{\small 0.26} & {\small 1.000} & {\small 1.000} & {\small 0.778} & {\small %
0.706} & {\small 1.000} & {\small 1.000} & {\small 0.848} & {\small 0.759}
\\
{\small 0.17} & {\small 1.000} & {\small 1.000} & {\small 0.793} & {\small %
0.719} & {\small 1.000} & {\small 1.000} & {\small 0.838} & {\small 0.788}
\\
{\small 0.08} & {\small 1.000} & {\small 1.000} & {\small 0.813} & {\small %
0.736} & {\small 1.000} & {\small 1.000} & {\small 0.859} & {\small 0.778}
\\
& \multicolumn{8}{c}{\small CL forecast superiority} \\
{\small 0.53} & {\small 1.000} & {\small 1.000} & {\small 0.990} & {\small %
0.978} & {\small 1.000} & {\small 1.000} & {\small 0.989} & {\small 0.979}
\\
{\small 0.44} & {\small 1.000} & {\small 1.000} & {\small 0.986} & {\small %
0.974} & {\small 1.000} & {\small 1.000} & {\small 0.981} & {\small 0.985}
\\
{\small 0.35} & {\small 1.000} & {\small 1.000} & {\small 0.995} & {\small %
0.979} & {\small 1.000} & {\small 1.000} & {\small 0.992} & {\small 0.968}
\\
{\small 0.26} & {\small 1.000} & {\small 1.000} & {\small 0.991} & {\small %
0.983} & {\small 1.000} & {\small 1.000} & {\small 0.985} & {\small 0.978}
\\
{\small 0.17} & {\small 1.000} & {\small 1.000} & {\small 0.994} & {\small %
0.985} & {\small 1.000} & {\small 1.000} & {\small 0.988} & {\small 0.981}
\\
{\small 0.08} & {\small 1.000} & {\small 1.000} & {\small 0.991} & {\small %
0.987} & {\small 1.000} & {\small 1.000} & {\small 0.986} & {\small 0.977}
\\
& & & & & & & & \\
{\small DM} & {\small 1.000} & {\small 1.000} & {\small 0.998} & {\small %
0.997} & {\small 1.000} & {\small 1.000} & {\small 0.988} & {\small 0.984}
\\ \hline\hline
\multicolumn{9}{c}{\small Notes: See Notes to Table 5. DGPs PEE9 - PEE16
satisfy the alternative hypothesis.}%
\end{tabular}%
\end{equation*}
%TCIMACRO{\TeXButton{E}{\end{table}}}%
%BeginExpansion
\end{table}%
%EndExpansion
%TCIMACRO{\TeXButton{linespread}{\linespread{1.25}}}%
%BeginExpansion
\linespread{1.25}%
%EndExpansion
\end{document}