WASHINGTON -- Don't like the latest poll numbers on the presidential race? Just wait. More are coming -- probably before you can say "margin of sampling error."
Regardless of what any given poll says, it's likely that someone -- a campaign, an interest group, a political party or a rival polling operation -- will argue that the survey is invalid at the same time someone else argues that it's significant.
Polls have become political tools and have taken on outsized importance as they've become easier and cheaper to conduct.
Once the part of campaigns most grounded in science, polling has lost some of its standing as a neutral record of voters' attitudes and opinions and, at times, has become just another talking point. TV pundits and election analysts hang on every one-point shift, to heck with the margin of error.
Another twist this year: A pollster's already difficult task of reaching a representative pool of likely voters has become far harder in the aftermath of Superstorm Sandy's march up the East Coast just days before the election.
130 POLLS IN 57 DAYS
As long as there's been polling, campaigns have released polls favorable to their cause, and independent pollsters have tried to capture the state of the race minus the internal campaign baggage. But now, with an array of options to conduct polling on the cheap, polls are far more frequent, more visible and more often used as campaign tools. In the 57 days since Labor Day, The Huffington Post's Pollster blog shows 130 national polls have been conducted on the presidential horserace.
Not all of them are created equal.
As the minimum cost of conducting a poll has fallen from something like the sticker price of a Lexus to the cost of a 1998 Honda Civic, the simple dividing line of partisan-or-not has been complicated by questions of basic methodology. But as in car shopping, you get what you pay for.
A poll using more rigorous methodology -- in which live callers dial a random sample of landline telephones and cellphones over the course of several nights, making multiple attempts at different times to reach a respondent before giving up and attempting to choose a random respondent within each household -- can cost well over $50,000. Automated polls -- using recorded voice interviewers, only dialing landlines, taking whoever answers the phone and only on one night -- can cost as little as $2,000.
Those more expensive polls cost so much more because they adhere to survey research rule No. 1: Every person should have a known, non-zero chance of being included in a survey's sample. If you only dial landlines, the cellphone-only voters have zero chance of being included. Likewise, if you only dial on Wednesday night, anyone who isn't home that night is automatically excluded from the sample and the pollster has no way of knowing who those people are.
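The coverage problem can be illustrated with a toy calculation. The figures below are hypothetical, chosen only to show the mechanics; they are not drawn from any actual survey:

```python
# Hypothetical electorate: 70% reachable by landline, 30% cellphone-only.
landline_share, cell_share = 0.70, 0.30

# Suppose candidate A's support differs across the two groups (made-up numbers).
support_landline, support_cell = 0.48, 0.56

# True support, weighting each group by its share of the electorate:
true_support = landline_share * support_landline + cell_share * support_cell

# A landline-only poll can only ever see the first group,
# so its estimate is just that group's support level:
landline_only_estimate = support_landline

print(f"true: {true_support:.1%}, landline-only poll: {landline_only_estimate:.1%}")
```

Under these made-up numbers, the landline-only poll understates the candidate by about 2.4 percentage points, and no amount of extra dialing fixes it, because the excluded voters had zero chance of being sampled.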
Much of the proliferation of polling in the last few years has come from the cheaper end of the scale, with automated pollsters dominating popular poll aggregations. Polls conducted using automated methodology do not meet the standards for coverage of The Associated Press, nor do partisan polls.
MIND THE GAP
For the public, this has meant new polling almost daily, some of it conflicting. And as pundits and prognosticators seek to explain the differences between polls, many focus on accusations that pollsters have a finger on the scale rather than on the random variation that pollsters expect from one survey to the next.
This campaign season has produced a string of polling-related controversies: whether to weight surveys by party identification, whether to sample likely or registered voters, how pollsters measure race and ethnicity, and whether public pollsters were using their polls to change the narrative of a particular race.
As these controversies rage, the standard caveats about a poll's margin of sampling error seem to have gone out the window. Polling analysts and aggregators put every 1- or 2-point shift under a microscope, even though almost no poll is accurate enough to detect changes that small with any certainty.
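The arithmetic behind that caveat is simple. The standard 95 percent margin of sampling error for a proportion near 50 percent can be sketched in a few lines of Python (this is the textbook formula for a simple random sample, not any particular pollster's method, which may also account for design effects and weighting):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents:
print(round(margin_of_error(1000), 1))  # about +/- 3.1 points
```

At that sample size, a 1- or 2-point shift between two polls sits comfortably inside the noise, which is why reading meaning into it is usually a mistake.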
Whether valid or not, criticisms that suggest pollsters are fixing the data to get a desired result have done a number on polling's reputation. By the end of the 2008 contest, almost 4 in 10 voters told Fox News pollsters they thought media pollsters were fixing their numbers for Obama, and the public's take on polls became as partisan as any other part of the campaign.
At the end of recent presidential campaigns, the Pew Research Center has asked voters to give pollsters a letter grade for their performance. In 2008, 64 percent of Democrats gave pollsters an A or B for their performance in the campaign, while just 37 percent of Republicans felt the same, a 27-point gap. Four years earlier, the partisan gap was 11 points, and in 2000, a slim 7 points separated partisans.
READING THE FINAL POLLS
In the final days of the campaign, the polls will continue, with most major pollsters releasing their final estimate of the presidential horserace just before Election Day and prognosticators massaging them into their final projections. Typically, these last-chance estimates have converged as voters solidified their choices, but the massive storm that swept through the East Coast could complicate the picture for pollsters.
With millions of East Coast voters without power, voters in hard-hit states will be difficult to reach. And many "likely voters" could wind up not voting at all as they deal with the aftermath of the storm. Pollsters aiming to produce a final portrait of the race will have a hard time knowing for sure how big an impact the storm has had on the election forecast.
Before the storm, however, this year's crop of polls had largely been in agreement that the race between Mitt Romney and Barack Obama was very close. Take the final round of public polling with a grain of salt and a pound of margin of error, and remember that polls are really just a snapshot in time.