\documentclass[letter,12pt]{article}
\usepackage{natbib}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage[T1]{fontenc}
\usepackage{numprint}
\npdecimalsign{.}
\npthousandsep{}
\npthousandthpartsep{}
\usepackage{graphicx}
\usepackage{txfonts}
\usepackage[scaled=.90]{helvet}
\usepackage{courier}
\usepackage[isolatin]{inputenc}
\usepackage{savesym}
\savesymbol{iint}
\savesymbol{iiint}
\savesymbol{iiiint}
\savesymbol{idotsint}
\usepackage[reqno]{amsmath}
\usepackage{bm}
\usepackage{pa}
\usepackage{rotating}
\usepackage[nolists]{endfloat}
\usepackage{endnotes}
\let\footnote=\endnote
\newcommand{\vm}[1]{% Vector or matrix
\bm{\mathrm{#1}}}
\providecommand{\abs}[1]{\lvert#1\rvert}
\usepackage{scalefnt}
\usepackage{setspace}
\usepackage{url}
\doublespacing
\usepackage{xr}
\externaldocument{surveybias}
\begin{document}\sloppy
\section{Additional Figures}
\label{sec:additional-figures}
\begin{figure}
\includegraphics[width=\textwidth]{bw-series}
\caption{Development of $B_{w}$ over time for eight French pollsters}
\label{fig:timeseries}
\end{figure}
\section{Derivation of Standard Errors}
\label{sec:deriv-stand-errors}
In Equations \eqref{eq:4} and \eqref{eq:5} we demonstrate how $A'_{i}$ is related to the parameters $\beta_{2}, \ldots, \beta_{k}$ of the familiar MNL. Since these calculations are based on survey data, the $\beta$s and $A'_{i}$s should be treated as random variables whose variability is of interest.
Because the underlying MNL does not involve any explanatory variables, $\vm{\beta}$ could in principle be obtained by substituting the observed probabilities into \eqref{eq:4} and solving for the $k-1$ unique $\beta$s (one $\beta$ is always set to 0 as an identifying restriction). However, the most convenient way to obtain $\vm{\beta}$ and the associated variance-covariance matrix $\vm{V}$ is to rely on off-the-shelf maximum likelihood procedures for estimating the parameters of the MNL, which are available in all major statistical packages.
For large samples, ${\beta}_{2}, \ldots, {\beta}_{k}$ follow a multivariate normal distribution. Their variance-covariance matrix $\vm{V}$ can be estimated by the inverse of the information matrix \citep[193]{agresti-2002}, i.e.\ the negative expectation of the matrix of second-order partial derivatives of the log-likelihood function with respect to the parameters (the Hessian matrix). In that sense, maximum likelihood estimation generates the complete variance-covariance matrix as a byproduct.
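Because the MNL considered here contains only intercepts, both $\vm{\beta}$ and $\vm{V}$ have simple closed forms that can serve as a cross-check on the maximum likelihood output: $\beta_{j} = \ln(p_{j}/p_{1})$, and $\vm{V}$ is $\frac{1}{n}$ times the matrix with $1/p_{j}$ added to the diagonal and $1/p_{1}$ in every cell. The following Python sketch (our illustration, not part of the \texttt{surveybias} package; function names are ours) reproduces scenario (1) of Table \ref{tab:scenarios}:

```python
import numpy as np

def mnl_betas(p):
    """MNL intercepts with category 1 as the reference: beta_j = ln(p_j / p_1)."""
    p = np.asarray(p, dtype=float)
    return np.log(p[1:] / p[0])

def mnl_covariance(p, n):
    """Large-sample covariance of (beta_2, ..., beta_k) for the intercept-only
    MNL: the inverse information matrix (1/n) * (diag(1/p_2..p_k) + J/p_1)."""
    p = np.asarray(p, dtype=float)
    return (np.diag(1.0 / p[1:]) + 1.0 / p[0]) / n

# Scenario (1): p = v = (2/5, 2/5, 1/5), n = 1000
p = [0.4, 0.4, 0.2]
print(mnl_betas(p))            # beta_2 = 0, beta_3 = ln(1/2) ~ -0.693
print(mnl_covariance(p, 1000)) # diagonal 0.005, 0.0075; covariance 0.0025
```

Rounded to three decimals, these values match the first row of Table \ref{tab:scenarios}.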
Since $A'_{1}, A'_{2}, \ldots, A'_{k}$ are defined as non-linear combinations of the $\beta$s (see Equation \eqref{eq:5}), the derivation of their standard errors is by no means straightforward, although the $A'_{i}$s should be approximately normal as well. To see why, consider (without loss of generality) the calculation of the $A'_{i}$s for a three-party system. Taking the first party as the reference category (for which $\beta_{1}$ is set to 0 so that $\exp(\beta_{1})=1$), $A'_{1}, A'_{2}$, and $A'_{3}$ are defined as
\begin{align}
\label{eq:12}
A'_{1} = & \ln \left(
\frac{\frac{1}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}{1-\frac{1}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}
\times \frac{1-v_{1}}{v_{1}}\right) \\
A'_{2} = & \ln \left(
\frac{\frac{\exp(\beta_{2})}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}{1-\frac{\exp(\beta_{2})}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}
\times \frac{1-v_{2}}{v_{2}}\right) \\
A'_{3} = & \ln \left(
\frac{\frac{\exp(\beta_{3})}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}{1-\frac{\exp(\beta_{3})}{1 + \exp(\beta_{2}) + \exp(\beta_{3})}}
\times \frac{1-v_{3}}{v_{3}}\right) .
\end{align}
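These expressions are straightforward to evaluate numerically. The following Python sketch (our illustration; the helper name is ours) computes $A'_{1}, A'_{2}$, and $A'_{3}$ from $\vm{\beta}$ and $\vm{v}$; when the predicted shares coincide with the actual vote shares, all three quantities are zero, as they should be:

```python
import numpy as np

def a_prime(beta, v):
    """Evaluate Equation (12): A'_i = ln( odds(p_i) * (1 - v_i) / v_i ),
    where p is the vector of predicted shares implied by the MNL
    intercepts beta (with the restriction beta_1 = 0 already included)."""
    expb = np.exp(np.asarray(beta, dtype=float))
    p = expb / expb.sum()                      # inverse logit (softmax)
    v = np.asarray(v, dtype=float)
    return np.log(p / (1.0 - p) * (1.0 - v) / v)

# Unbiased poll: predicted shares equal true vote shares, so every A'_i = 0
v = np.array([0.4, 0.4, 0.2])
beta = np.concatenate(([0.0], np.log(v[1:] / v[0])))
print(a_prime(beta, v))   # all (numerically) zero
```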
Since both $\beta_2$ and $\beta_3$ are normally distributed, $\exp(\beta_2)$ and $\exp(\beta_3)$ have log-normal distributions, whose location and shape are governed by the mean and variance of the respective underlying normal distribution. There is no closed-form expression for the distribution of a sum of log-normally distributed random variables (here $\exp(\beta_{2}) + \exp(\beta_{3})$), though it can often be approximated by yet another log-normal distribution. If this approximation holds, the inner fraction also follows a log-normal distribution, because the ratio of two log-normal random variables, as well as the inverse of a single log-normal random variable, is again log-normal.
Things are further complicated by the addition of 1 in the denominator of the inner fractions, the covariation between $\beta_{2}$ and $\beta_{3}$, the difference in the denominator of the outer fraction, and the multiplication by the inverse odds of the vote. In short, while there is reason to believe that the expression within the parentheses is distributed approximately log-normal, rendering the distribution of $A'_{i}$ itself approximately normal, one cannot be sure.
We therefore carried out a number of numerical simulations based on the fictitious party systems (2) and (3) (see Note \ref{fn:conditions}). For each party system, we considered nine different scenarios: no bias, as well as under-/overrepresentation of support for the biggest/smallest party by factors of $\frac{1}{10}$ and $\frac{1}{4}$, respectively. The missing/superfluous support was taken from/given to the other parties in proportion to their true support in the population. Put differently, we simulated the more interesting case of \emph{concentrated} bias, since diffuse bias comes close to no bias at all.
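The construction of the biased vectors $\vm{p}$ is mechanical: one party's share is inflated or deflated, and the remaining shares are rescaled proportionally so that everything still sums to one. A short Python sketch (our illustration; the function name is ours) reproduces, e.g., scenario (2) of Table \ref{tab:scenarios}:

```python
from fractions import Fraction

def concentrated_bias(v, party, factor):
    """Shift one party's true share by (1 + factor) and spread the
    resulting surplus/deficit over the other parties in proportion to
    their true support, so that the shares still sum to one."""
    p = list(v)
    p[party] = v[party] * (1 + factor)
    scale = (1 - p[party]) / (1 - v[party])
    for j in range(len(v)):
        if j != party:
            p[j] = v[j] * scale
    return p

# Scenario (2): v = (2/5, 2/5, 1/5), biggest party overstated by 1/10
v = [Fraction(2, 5), Fraction(2, 5), Fraction(1, 5)]
print(concentrated_bias(v, 0, Fraction(1, 10)))
# [Fraction(11, 25), Fraction(28, 75), Fraction(14, 75)]
```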
\begin{table}\small
$\begin{array}{lllrrl}\hline
\text{No} & \multicolumn{1}{c}{\vm{v}} & \multicolumn{1}{c}{\vm{p}} & \multicolumn{1}{c}{\beta_{2}} & \multicolumn{1}{c}{\beta_{3}} & \multicolumn{1}{c}{\vm{V}}\\\hline
% \input{se-simulation/joint-panels}
(1) & \left(\frac{2}{5},\,\frac{2}{5},\,\frac{1}{5}\right) & \left(\frac{2}{5},\,\frac{2}{5},\,\frac{1}{5}\right) & -0.000 & -0.693 & \begin{smallmatrix}0.005 & \\0.003 & 0.008 \end{smallmatrix} \\
(2) & & \left(\frac{11}{25},\,\frac{28}{75},\,\frac{14}{75}\right) & -0.165 & -0.856 & \begin{smallmatrix}0.005 & \\0.002 & 0.008 \end{smallmatrix} \\
(3) & & \left(\frac{1}{2},\,\frac{1}{3},\,\frac{1}{6}\right) & -0.406 & -1.097 & \begin{smallmatrix}0.005 & \\0.002 & 0.008 \end{smallmatrix} \\
(4) & & \left(\frac{9}{25},\,\frac{32}{75},\,\frac{16}{75}\right) & 0.171 & -0.525 & \begin{smallmatrix}0.005 & \\0.003 & 0.007 \end{smallmatrix} \\
(5) & & \left(\frac{3}{10},\,\frac{7}{15},\,\frac{7}{30}\right) & 0.443 & -0.253 & \begin{smallmatrix}0.005 & \\0.003 & 0.008 \end{smallmatrix} \\
(6) & & \left(\frac{39}{100},\,\frac{39}{100},\,\frac{11}{50}\right) & 0.000 & -0.573 & \begin{smallmatrix}0.005 & \\0.003 & 0.007 \end{smallmatrix} \\
(7) & & \left(\frac{3}{8},\,\frac{3}{8},\,\frac{1}{4}\right) & 0.000 & -0.405 & \begin{smallmatrix}0.005 & \\0.003 & 0.007 \end{smallmatrix} \\
(8) & & \left(\frac{41}{100},\,\frac{41}{100},\,\frac{9}{50}\right) & 0.000 & -0.823 & \begin{smallmatrix}0.005 & \\0.002 & 0.008 \end{smallmatrix} \\
(9) & & \left(\frac{17}{40},\,\frac{17}{40},\,\frac{3}{20}\right) & -0.000 & -1.041 & \begin{smallmatrix}0.005 & \\0.002 & 0.009 \end{smallmatrix} \\
(10) & \left(\frac{1}{2},\,\frac{1}{4},\,\frac{1}{4}\right) & \left(\frac{1}{2},\,\frac{1}{4},\,\frac{1}{4}\right) & -0.693 & -0.693 & \begin{smallmatrix}0.006 & \\0.002 & 0.006 \end{smallmatrix} \\
(11) & & \left(\frac{11}{20},\,\frac{9}{40},\,\frac{9}{40}\right) & -0.894 & -0.894 & \begin{smallmatrix}0.006 & \\0.002 & 0.006 \end{smallmatrix} \\
(12) & & \left(\frac{5}{8},\,\frac{3}{16},\,\frac{3}{16}\right) & -1.201 & -1.207 & \begin{smallmatrix}0.007 & \\0.002 & 0.007 \end{smallmatrix} \\
(13) & & \left(\frac{9}{20},\,\frac{11}{40},\,\frac{11}{40}\right) & -0.492 & -0.492 & \begin{smallmatrix}0.006 & \\0.002 & 0.006 \end{smallmatrix} \\
(14) & & \left(\frac{3}{8},\,\frac{5}{16},\,\frac{5}{16}\right) & -0.181 & -0.184 & \begin{smallmatrix}0.006 & \\0.003 & 0.006 \end{smallmatrix} \\
(15) & & \left(\frac{29}{60},\,\frac{29}{120},\,\frac{11}{40}\right) & -0.691 & -0.563 & \begin{smallmatrix}0.006 & \\0.002 & 0.006 \end{smallmatrix} \\
(16) & & \left(\frac{11}{24},\,\frac{11}{48},\,\frac{5}{16}\right) & -0.693 & -0.381 & \begin{smallmatrix}0.007 & \\0.002 & 0.005 \end{smallmatrix} \\
(17) & & \left(\frac{31}{60},\,\frac{31}{120},\,\frac{9}{40}\right) & -0.695 & -0.832 & \begin{smallmatrix}0.006 & \\0.002 & 0.006 \end{smallmatrix} \\
(18) & & \left(\frac{13}{24},\,\frac{13}{48},\,\frac{3}{16}\right) & -0.693 & -1.064 & \begin{smallmatrix}0.006 & \\0.002 & 0.007 \end{smallmatrix} \\
\end{array}$
\caption{$\beta_{2},\, \beta_{3},$ and $\vm{V}$ under $2\times 9$ different types of bias}
\label{tab:scenarios}
\end{table}
Table \ref{tab:scenarios} lists the $2\times 9$ different scenarios we investigated and the values of $\beta_{2}$ and $\beta_{3}$ implied by $\vm{p}$, along with their variance-covariance matrix $\vm{V}$ for $n=1000$. Under each scenario, we drew \numprint{50000} values from the joint distribution of $\beta_{2}$ and $\beta_{3}$ (given $\vm{V}$) and calculated $A'_{1},\, A'_{2}$, and $A'_{3}$. This resulted in 54 simulated sampling distributions for our measure. Given the very large number of simulated observations, we applied the skewness and kurtosis test to detect deviations from normality \citep{dagostino-belanger-dagostino-1990}.\footnote{Stata's \texttt{sktest} implements an adjustment of the original test suggested by \citet{royston-1991}.}
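This simulation is easy to replicate. The sketch below (our Python illustration; SciPy's \texttt{normaltest} implements the same D'Agostino--Pearson omnibus test, albeit without Royston's adjustment) reruns scenario (1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2014)

# Scenario (1): p = v = (2/5, 2/5, 1/5), n = 1000, so the true A'_i are 0
v = np.array([0.4, 0.4, 0.2])
beta = np.array([0.0, np.log(0.5)])                  # (beta_2, beta_3)
V = (np.diag(1.0 / v[1:]) + 1.0 / v[0]) / 1000       # their covariance matrix

# Draw from the joint (approximately normal) sampling distribution ...
draws = rng.multivariate_normal(beta, V, size=50_000)

# ... and push every draw through the definition of A'_i
expb = np.exp(np.hstack([np.zeros((len(draws), 1)), draws]))  # prepend beta_1 = 0
p = expb / expb.sum(axis=1, keepdims=True)
a = np.log(p / (1.0 - p) * (1.0 - v) / v)            # 50,000 x 3 matrix of A'_i

print(a.mean(axis=0))                 # centred on the true value of 0
print(stats.normaltest(a).pvalue)     # omnibus test, one p-value per A'_i
```

Large p-values indicate no detectable departure from normality, mirroring the results reported below.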
Notwithstanding the very large sample size, the test failed to reject the null hypothesis of normality (see Table \ref{tab:se-simulations-3parties}) in each instance. The lowest p-value for the omnibus test was \numprint{0.74}. P-values for the component kurtosis test are somewhat smaller but never fall below \numprint{0.48}. These results are further confirmed by visual inspection of the quantile plots.
Moreover, the simulated sampling distributions are neatly centred on their theoretically expected values, i.e.\ there is no bias (see the leftmost column in each panel of Table \ref{tab:se-simulations-3parties}). As an additional safeguard, we ran another set of simulations using the same bias factors on the two fictitious six-party systems (5) and (6), resulting in a further 108 simulated sampling distributions of $A'_{i}$ (not shown as a table). Again, none of the omnibus tests for normality has a p-value lower than \numprint{0.05}, although some of the component kurtosis tests do, hinting at tails that are slightly heavier than normal. In each case, however, the deviation is tiny, i.e.\ less than \numprint{0.05}.
We are therefore confident that $A'_{i}$ is approximately normally distributed under a wide range of conditions, and that classical hypothesis tests and confidence intervals are valid.
This leaves the problem of finding a computationally efficient way to estimate the sampling variances of $A'_{1}, A'_{2}, \ldots, A'_{k}$ for a given data set. Fortunately, the delta method \citep[73-74, ch. 14.1]{agresti-2002} yields good approximations of these quantities.
\begin{sidewaystable}\small{}
$ \begin{array}{lrrrrrrrrrrrrrrr}\hline
\text{No}& \multicolumn{5}{c}{A'_{1} } & \multicolumn{5}{c}{A'_{2} } & \multicolumn{5}{c}{A'_{3}}\\
& \mu - A'_{1} & p_{1} &p_{2} & p_{3} & \sigma / \hat{\sigma} & \mu - A'_{2} & p_{1} &p_{2} & p_{3} & \sigma / \hat{\sigma} & \mu - A'_{3}& p_{1} &p_{2} & p_{3} & \sigma / \hat{\sigma}\\ \hline
% \input{se-simulation/results}
(1) & -0.001& 0.99& 0.92& 0.99& 1.000& -0.001& 0.63& 0.63& 0.79& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(2) & -0.001& 0.98& 0.90& 0.99& 1.000& -0.001& 0.67& 0.64& 0.82& 1.000& -0.001& 0.94& 0.49& 0.79& 1.000\\
(3) & -0.001& 0.97& 0.87& 0.99& 1.000& -0.001& 0.72& 0.67& 0.86& 1.000& -0.001& 0.94& 0.49& 0.79& 1.000\\
(4) & -0.001& 1.00& 0.94& 1.00& 1.000& -0.001& 0.59& 0.63& 0.77& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(5) & -0.001& 0.99& 0.97& 1.00& 1.000& -0.001& 0.53& 0.65& 0.74& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(6) & -0.001& 0.98& 0.90& 0.99& 1.000& -0.001& 0.61& 0.63& 0.78& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(7) & -0.001& 0.97& 0.88& 0.99& 1.000& -0.001& 0.59& 0.63& 0.77& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(8) & -0.001& 1.00& 0.93& 1.00& 1.000& -0.001& 0.64& 0.63& 0.80& 1.000& -0.001& 0.94& 0.49& 0.79& 1.000\\
(9) & -0.001& 0.99& 0.96& 1.00& 1.000& -0.001& 0.67& 0.65& 0.82& 1.000& -0.001& 0.94& 0.49& 0.79& 1.000\\
(10) & -0.001& 0.89& 0.71& 0.92& 1.000& -0.001& 0.71& 0.66& 0.85& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(11) & -0.001& 0.88& 0.68& 0.91& 1.000& -0.001& 0.75& 0.69& 0.88& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(12) & 0.001& 0.87& 0.66& 0.89& 0.999& 0.001& 0.82& 0.74& 0.92& 1.000& -0.006& 0.94& 0.49& 0.79& 0.998\\
(13) & -0.001& 0.91& 0.74& 0.94& 1.000& -0.001& 0.66& 0.64& 0.81& 1.000& -0.001& 0.95& 0.48& 0.78& 1.000\\
(14) & 0.001& 0.93& 0.79& 0.96& 1.000& 0.001& 0.59& 0.63& 0.77& 1.000& -0.004& 0.95& 0.48& 0.78& 0.999\\
(15) & -0.001& 0.88& 0.69& 0.91& 1.000& -0.001& 0.69& 0.65& 0.83& 1.000& -0.001& 0.95& 0.48& 0.78& 1.000\\
(16) & -0.001& 0.86& 0.66& 0.89& 1.000& -0.001& 0.68& 0.64& 0.82& 1.000& -0.001& 0.95& 0.48& 0.78& 1.000\\
(17) & -0.001& 0.91& 0.73& 0.94& 1.000& -0.001& 0.72& 0.67& 0.86& 1.000& -0.001& 0.94& 0.49& 0.78& 1.000\\
(18) & 0.001& 0.93& 0.78& 0.96& 0.999& 0.001& 0.75& 0.69& 0.88& 1.000& -0.006& 0.94& 0.49& 0.79& 0.998\\
\end{array}$
\footnotesize{$p_{1}$: test based on skewness; $p_{2}$: test based on kurtosis; $p_{3}$: $\chi^{2}$ omnibus test}
\caption{Simulation results for the three-party systems: bias, normality tests, and accuracy of the approximated standard errors of $A'_{1}, A'_{2}$, and $A'_{3}$}
\label{tab:se-simulations-3parties}
\end{sidewaystable}
The delta method, whose foundations were laid in the 1940s by Cram\'{e}r \citep{oehlert-1992}, approximates the expectation (or higher moments) of some function $g(\cdot)$ of a random variable $X$ by relying on a (truncated) Taylor series expansion.
More specifically, \citet[578]{agresti-2002} shows that (under weak conditions) for some parameter $\theta$ that has an approximately normal sampling distribution with variance $\sigma^{2}/n$, the sampling distribution of $g(\theta)$ is also approximately normal with variance $[g'(\theta)]^{2}\sigma^2/n$, since $g(\cdot)$ is approximately linear in the neighbourhood of $\theta$. The delta method can be generalised to the case of a multivariate normal random vector \citep[579]{agresti-2002} such as the joint sampling distribution of some set of parameter estimates.
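In its multivariate form, the method amounts to the approximation $\mathrm{var}[g(\vm{\theta})] \approx \nabla g(\vm{\theta})^{\top} \vm{V}\, \nabla g(\vm{\theta})$. A generic numerical version is easy to sketch in Python (our illustration, using a central-difference gradient; the internals of Stata's implementation may differ):

```python
import numpy as np

def delta_se(g, theta, V, eps=1e-6):
    """Delta-method standard error of g(theta): sqrt(grad' V grad),
    with the gradient taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    grad = np.empty_like(theta)
    for i in range(len(theta)):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (g(theta + step) - g(theta - step)) / (2.0 * eps)
    return float(np.sqrt(grad @ V @ grad))

# Standard error of A'_2 in scenario (1): v = (2/5, 2/5, 1/5), theta = (beta_2, beta_3)
v = np.array([0.4, 0.4, 0.2])

def a2(theta):
    expb = np.exp(np.concatenate(([0.0], theta)))     # prepend beta_1 = 0
    p2 = expb[1] / expb.sum()
    return np.log(p2 / (1.0 - p2) * (1.0 - v[1]) / v[1])

V = (np.diag(1.0 / v[1:]) + 1.0 / v[0]) / 1000
print(delta_se(a2, [0.0, np.log(0.5)], V))   # approx. 0.0645
```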
Stata's procedure \texttt{nlcom} is a particularly versatile and powerful implementation of the delta method. As a post-estimation command, \texttt{nlcom} accepts symbolic references to model parameters and computes sampling variances for their linear and non-linear combinations and transformations. Our add-on \texttt{surveybias} internally makes the required calls to \texttt{nlcom} in order to calculate approximate standard errors for the $A'_{i}$s.
The approximation works very well, as can be gleaned from the rightmost column in each panel of Table \ref{tab:se-simulations-3parties}: in 54 experiments, the approximated standard error is never off by more than \numprint{0.2} per cent.
Variances and covariances for $B$ and $B_{w}$ are also approximated and posted by our add-on, although the respective distributions of these quantities are folded normal and therefore skewed, as explained in section \ref{sec:sign-tests-multi}. Additionally, the full (approximate) matrix of parameter variances and covariances is stored for further post-estimation analyses.
\section{Statistical Power}
\label{sec:statistical-power}
The statistical power of the $\chi^{2}$ test is a function of the sample size, the magnitude of bias, and the distribution of bias amongst parties. To assess how badly a poll must be biased in practice to be flagged up by the test, we ran another series of simulations. For party systems (2), (3), (5), and (6), we simulated polls in which support for the biggest/smallest party was deflated/inflated by 10, 15, and 25 per cent. Under each of these 48 scenarios, we sampled from the appropriate biased multinomial distribution. The sample size was set to \numprint{1000}, which seems to be the lower limit for commercial polls. We then tested the null hypothesis of no bias using both Pearson's $\chi^{2}$ and the likelihood ratio statistic $G^{2}$, applying the conventional criterion of $p \le 0.05$. For each scenario, this procedure was repeated \numprint{10000} times.
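The core of this procedure can be sketched in a few lines of Python (our illustration, using SciPy's Pearson goodness-of-fit test; the likelihood-ratio variant is analogous):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

def power(v, p, n=1000, reps=2000, alpha=0.05):
    """Share of simulated polls of size n, drawn from the biased shares p,
    for which Pearson's chi-square rejects H0: shares = v."""
    v = np.asarray(v, dtype=float)
    rejections = 0
    for _ in range(reps):
        counts = rng.multinomial(n, p)
        _, pval = stats.chisquare(counts, f_exp=n * v)
        rejections += pval <= alpha
    return rejections / reps

# System (2): v = (2/5, 2/5, 1/5), biggest party understated by 25 per cent
v = [0.4, 0.4, 0.2]
p = [0.3, 7 / 15, 7 / 30]   # concentrated bias of -0.25 on the biggest party
print(power(v, p))          # essentially 1.0, as in the first row of the power table
```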
\begin{table}
\begin{tabular}{lp{12ex}rrrr}\hline
system & most affected party & bias & $B_{w}$ & \multicolumn{1}{c}{$\chi^{2} \geq \chi^{2}_{crit} $ }&\multicolumn{1}{c}{ $G^{2} \geq G^{2}_{crit}$} \\\hline
% \input{power/power-tab-1}
(2)&biggest&-0.25&0.33&1.00&1.00\\
(2)&biggest&-0.15&0.19&0.95&0.95\\
(2)&biggest&-0.10&0.13&0.64&0.64\\
(2)&biggest&0.10&0.13&0.63&0.63\\
(2)&biggest&0.15&0.19&0.94&0.94\\
(2)&biggest&0.25&0.32&1.00&1.00\\
(2)&smallest&-0.25&0.15&0.97&0.97\\
(2)&smallest&-0.15&0.09&0.57&0.56\\
(2)&smallest&-0.10&0.06&0.29&0.28\\
(2)&smallest&0.10&0.06&0.27&0.28\\
(2)&smallest&0.15&0.09&0.54&0.55\\
(2)&smallest&0.25&0.14&0.93&0.94\\
(3)&biggest&-0.25&0.41&1.00&1.00\\
(3)&biggest&-0.15&0.25&0.99&0.99\\
(3)&biggest&-0.10&0.16&0.82&0.82\\
(3)&biggest&0.10&0.17&0.82&0.81\\
(3)&biggest&0.15&0.25&0.99&0.99\\
(3)&biggest&0.25&0.44&1.00&1.00\\
(3)&smallest&-0.25&0.20&0.99&0.99\\
(3)&smallest&-0.15&0.12&0.72&0.70\\
(3)&smallest&-0.10&0.08&0.36&0.35\\
(3)&smallest&0.10&0.08&0.35&0.35\\
(3)&smallest&0.15&0.12&0.68&0.68\\
(3)&smallest&0.25&0.19&0.99&0.99\\
\end{tabular}
\caption{Statistical Power: Three-Party Systems}
\label{tab:power-1}
\end{table}
Table \ref{tab:power-1} shows the results for the three-party systems. First, note that the results are virtually identical for Pearson's $\chi^{2}$ and the likelihood ratio $G^{2}$, and that, as it should be, the direction of the bias (upward/downward) does not matter. Second, for moderately large values of $B_{w}$ in the range of \numprint{0.12} to \numprint{0.15}, the null hypothesis is rejected in the majority of cases, and power comfortably exceeds the conventional threshold of \numprint{0.8} once $B_{w}$ reaches about \numprint{0.14}. If the total bias $B_{w}$ is smaller than \numprint{0.1}, however, the power of the test is considerably lower. If, for instance, support for the smallest party in system (2) is systematically biased from 20 to 22 per cent in the polls, the test will miss this bias in roughly seven out of ten applications.
\begin{table}
\begin{tabular}{lp{12ex}rrrr}\hline
system & most affected party & bias & $B_{w}$ & \multicolumn{1}{c}{$\chi^{2} \geq \chi^{2}_{crit} $ }&\multicolumn{1}{c}{ $G^{2} \geq G^{2}_{crit}$} \\\hline
% \input{power/power-tab-2}
(5)&biggest&-0.25&0.38&1.00&1.00\\
(5)&biggest&-0.15&0.23&0.97&0.98\\
(5)&biggest&-0.10&0.15&0.68&0.69\\
(5)&biggest&0.10&0.16&0.67&0.66\\
(5)&biggest&0.15&0.24&0.98&0.97\\
(5)&biggest&0.25&0.41&1.00&1.00\\
(5)&smallest&-0.25&0.07&0.54&0.50\\
(5)&smallest&-0.15&0.04&0.19&0.18\\
(5)&smallest&-0.10&0.03&0.11&0.10\\
(5)&smallest&0.10&0.03&0.10&0.11\\
(5)&smallest&0.15&0.04&0.18&0.19\\
(5)&smallest&0.25&0.07&0.46&0.49\\
(6)&biggest&-0.25&0.21&1.00&0.99\\
(6)&biggest&-0.15&0.12&0.69&0.68\\
(6)&biggest&-0.10&0.08&0.32&0.32\\
(6)&biggest&0.10&0.08&0.30&0.30\\
(6)&biggest&0.15&0.12&0.64&0.64\\
(6)&biggest&0.25&0.20&0.99&0.99\\
(6)&smallest&-0.25&0.03&0.25&0.22\\
(6)&smallest&-0.15&0.02&0.11&0.09\\
(6)&smallest&-0.10&0.01&0.07&0.07\\
(6)&smallest&0.10&0.01&0.08&0.08\\
(6)&smallest&0.15&0.02&0.10&0.11\\
(6)&smallest&0.25&0.03&0.23&0.26\\
\end{tabular}
\caption{Statistical Power: Six-Party Systems}
\label{tab:power-2}
\end{table}
Our simulations for the six-party systems by and large confirm these findings (see Table \ref{tab:power-2}). The rejection rates for moderate bias are somewhat lower yet still acceptable, while strong bias ($B_{w}$ exceeding \numprint{0.2}) is detected almost with certainty. Note, however, that even substantial bias in the measurement of support for small parties hardly affects $B_{w}$ (under the somewhat unrealistic assumption that the resulting bias in the measurement of the other parties is distributed proportionally) and is rarely picked up by the test (see e.\,g.\ the last line in Table \ref{tab:power-2}). One should remember, though, that even this comparatively strong effect is equivalent to a measurement that is biased from 5 to \numprint{6.25} per cent. While the detection of bias in highly fragmented party systems clearly requires larger samples, we are reasonably confident that our test has sufficient power to detect substantively relevant bias under a wide range of conditions.
\def\enotesize{\normalsize}
\theendnotes{}
\singlespacing{}
\bibliographystyle{apsr}
\bibliography{surveybias.bib}
\end{document}