In the wake of the New Hampshire primary, much press coverage has focused on the pre-election polls, in particular those for the Democratic presidential primary. Headlines indicate that the pre-election polls were misleading or wrong. Yes, all of the final pre-election polls showed Senator Obama ahead, and on this count they all failed to reflect the eventual outcome. But the polls were also surprisingly accurate in measuring support for candidates other than Senator Clinton, with estimates of around 36% for Senator Obama, 19% for Senator Edwards, and 6% for Governor Richardson (compared to actual vote shares of 36%, 17%, and 5%, respectively). They went astray only in estimating Senator Clinton's final vote.
The final pre-election poll estimates reinforce several points:
• Polling is a scientific process that attempts to capture information about individual attitudes and behaviors, both of which are subject to variation over time. Events that occur after a survey or poll is conducted can change opinions and behavior.
• Polls and surveys are subject to multiple sources of error, including, to name just two, sampling error (no poll interviews every voter) and social desirability bias. A sketch of how sampling error is commonly quantified appears after this list.
• The role of undecideds in a close election is difficult to gauge in advance. As late as Monday, January 7, polls indicated that up to 10% of Democratic voters were still undecided, and the CBS News Poll reported that "28% of Democratic voters say their minds could still change."
• Understanding the methodology behind a poll, including how undecided voters were allocated and which likely voter model was applied, becomes increasingly important when elections are close (a simple illustration of undecided allocation follows this list).
• All polls are subject to nonresponse effects: some randomly selected respondents are missed because they are not at home when called, refuse to be interviewed, or are unavailable for other reasons. In most past election polling, this problem has not appeared to affect estimates appreciably, but there is always the possibility that an effect may occur in a particular election.
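To make the point about sampling error concrete, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion. The sample size of 600 and the candidate share of 36% are illustrative assumptions, not figures from any particular New Hampshire poll, and the simple-random-sampling formula understates the total error of a real poll's more complex design:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a sample proportion,
        assuming simple random sampling. Real polls use weighting and
        clustered designs, so total error is larger than this figure."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical poll: 600 likely voters, candidate at 36% support.
    # Prints ~3.8, i.e., a sampling margin of about +/-3.8 points.
    print(round(100 * margin_of_error(0.36, 600), 1))

Even this idealized calculation shows that a several-point gap between two candidates can fall within sampling error alone, before any of the other error sources listed above come into play.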
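The bullet on undecided allocation can likewise be illustrated with a sketch. Proportional allocation, shown below, is only one of several rules a pollster might use (others split undecideds evenly or lean them toward a challenger), and the candidate shares here are hypothetical, not taken from any actual New Hampshire poll:

    def allocate_undecided_proportionally(shares: dict[str, float],
                                          undecided: float) -> dict[str, float]:
        """Distribute the undecided share among candidates in proportion
        to their decided support. One of several possible allocation rules."""
        decided_total = sum(shares.values())
        return {name: s + undecided * (s / decided_total)
                for name, s in shares.items()}

    # Hypothetical decided shares summing to 90%, with 10% undecided.
    poll = {"Clinton": 0.30, "Obama": 0.33, "Edwards": 0.18,
            "Richardson": 0.05, "Other": 0.04}
    print(allocate_undecided_proportionally(poll, 0.10))

Because different allocation rules can shift final published estimates by a point or more in a close race, knowing which rule a pollster used is part of the methodological disclosure discussed below.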
The forces shaping the discrepancies between the pre-election polls and the actual outcome in New Hampshire deserve immediate and thorough examination and analysis if we are to understand what happened there and apply that understanding to the state primaries that follow. The American Association for Public Opinion Research (AAPOR) supports the disclosure of polling methodology, as advocated by www.pollster.com, including the questions used in the poll, the sample size, the response rate, and the likely voter models and undecided allocations used by the pollster. Only when the data are fully available to scholars of pre-election polls will we understand the effects of alternative models and designs, as well as the potential impact of any bias, on pre-election poll estimates. AAPOR strongly supports the recommendation of Gary Langer of ABC News (an AAPOR member; http://blogs.abcnews.com/thenumbers/2008/01/new-hampshires.html), who urged the producers of the New Hampshire pre-election polls "to look at the data, and to look at it closely, and to do it without prejudging." Clearly, the NH pre-election polls warrant more analysis and research before we attempt to draw even tentative conclusions.
[Note: *Public Opinion Quarterly*, the journal of the American Association for Public Opinion Research, has published three articles that address some causes of error in self-reports of voting or vote intention in races involving minorities and women. They do not "explain" what happened in NH, but they offer some background for considering poll results in that primary and in those to come. The articles are available on the AAPOR web site (http://www.aapor.org/).]
--
Pat Lewis
Communications Director
American Association for Public Opinion Research (AAPOR)
1405 North George Mason Drive
Arlington, Virginia
www.aapor.org
AAPOR -- the leading association of public opinion and survey research professionals.