In the days before the 2016 US presidential election, nearly every national poll put Hillary Clinton ahead of Donald Trump—up by 3%, on average. FiveThirtyEight’s predictive statistical model—based on data from state and national voter polls—gave Clinton a 71.4% chance of victory. The New York Times’ model put the odds at 85%.
Trump’s subsequent win shocked the nation. Pundits and pollsters wondered: How could the polls have been so wrong?
Trump-Clinton isn’t the only example of a recent electoral surprise. Around the world, including in the 2015 United Kingdom election, the 2016 Brexit referendum, the 2015 Israeli election, and the 2019 Australian election, results have clashed with preelection polls.
But experts contend that these misses don’t mean we should stop using or trusting polls. For example, postelection analyses of the 2016 US election suggest that national election polling was about as accurate as it has always been. (State polls, however, were a different story.) Clinton, after all, won the popular vote by 2%, not far from the 3% average that the polls found, and within the range of errors seen in previous elections. Polls failed to anticipate a Trump victory not because of any fundamental flaws, but because of unusual circumstances that magnified typically small errors.
“Everyone sort of walked away with the impression that polling was broken—that was not accurate,” says Courtney Kennedy, director of survey research at the Pew Research Center.
The issue may be one of expectations. Polls aren’t clairvoyant—especially if an election is close, which was the case in many of the recent surprises. Even with the most sophisticated polling techniques, errors are inevitable. Like any statistical measure, a poll contains nuances and uncertainties, which pundits and the public often overlook. It’s hard to gauge the sentiment of an entire nation—and harder still to predict weeks or even days ahead how people will think and act on Election Day.
“As much as I think polls are valuable in society, they’re really not built to tell you who’s going to be the winner of a close election,” Kennedy says. “They’re simply not precise enough to do that.”
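That imprecision can be put in rough numbers. A back-of-the-envelope calculation (a simplified sketch that ignores weighting and design effects, which make real-world error larger) shows why a 2- or 3-point lead in a single poll sits within ordinary sampling error:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of sampling error, in percentage points, for a
    simple random sample of size n where a candidate polls at p."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A typical national poll of 1,000 respondents, candidate near 50%:
print(f"±{margin_of_error(0.5, 1000):.1f} points")  # about ±3.1 points
```

In other words, a lead of a few points in a 1,000-person poll can vanish into the noise, and because the error shrinks only with the square root of the sample size, quadrupling the sample merely halves it.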
Would you like to take a survey?
Pollsters do their best to be accurate, and they have several survey methods at their disposal. These days, polling is in the midst of a transition. While phone, mail-in and even (rarely) door-to-door surveys are still done, more and more polls are happening online. Pollsters recruit respondents with online ads offering reward points, coupons, even cash. This type of polling is relatively cheap and easy. The problem, however, is that it doesn’t sample the population in a random way. An example of what’s called a non-probability approach, or convenience sampling, Internet survey panels include only people who are online and willing to click on survey ads (or who really love coupons). That makes it hard to collect a sample that represents the whole population.
“It’s not that convenience Internet panels can’t be accurate,” says David Dutwin, executive vice president and chief methodologist of SSRS, a research survey firm that has worked on polls for outlets such as CNN and CBS News. “It’s just generally thought—and most of the research finds this—there’s certainly a higher risk with non-probability Internet panels to get inaccurate results.”
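A toy simulation (with made-up numbers) illustrates the risk: if the people who click on survey ads differ from the population at large, collecting more of them does not remove the bias, whereas a random sample homes in on the true value as it grows.

```python
import random

random.seed(0)

def respond(p: float) -> int:
    """One respondent who supports the candidate with probability p."""
    return 1 if random.random() < p else 0

# Hypothetical setup: 52% of the full population supports candidate A,
# but the subgroup reachable through a convenience panel supports A at 60%.
prob_sample = [respond(0.52) for _ in range(10_000)]  # random sample
conv_sample = [respond(0.60) for _ in range(10_000)]  # panel members only

print(sum(prob_sample) / len(prob_sample))  # near 0.52 — sampling error only
print(sum(conv_sample) / len(conv_sample))  # near 0.60 — biased at any size
```

The convenience estimate stays roughly 8 points off no matter how many respondents are added, which is why pollsters treat sample size and sample randomness as separate problems.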
With more traditional methods, pollsters can sample from every demographic by, for instance, calling telephone numbers at random, helping ensure that their results represent the broader population. Many, if not most, of the major polls rely on live telephone interviews. But with caller ID and the growing scourge of marketing robocalls, many people no longer answer calls from unknown numbers, and response rates to phone surveys have plummeted from 36% in 1997 to 6% in 2018. Despite that worrisome trend, phone polls still offer the “highest quality for a given price point,” Dutwin says.
In fact, most efforts to improve the accuracy of polling set their sights on relatively small tweaks: building better likely voter models, getting a deeper understanding of the electorate (so pollsters can better account for unrepresentative samples), and coming up with new statistical techniques to improve the results of online polls.
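One standard way to account for an unrepresentative sample is weighting: each group’s responses are scaled to its known share of the population before averaging. A minimal sketch, using entirely hypothetical numbers for a poll that over-represents college graduates:

```python
# Assumed population shares (e.g., from census data) — hypothetical figures.
population_share = {"college": 0.35, "no_college": 0.65}

# Hypothetical raw poll: college graduates are over-represented (600 of 1,000).
sample = {
    "college":    {"n": 600, "support": 0.55},
    "no_college": {"n": 400, "support": 0.45},
}

total_n = sum(group["n"] for group in sample.values())

# Unweighted average: each respondent counts equally, so the skew carries over.
raw = sum(g["n"] * g["support"] for g in sample.values()) / total_n

# Weighted average: each group counts by its population share instead.
weighted = sum(population_share[k] * g["support"] for k, g in sample.items())

print(f"unweighted: {raw:.1%}, weighted: {weighted:.1%}")
# unweighted: 51.0%, weighted: 48.5%
```

In this invented example the adjustment swings the topline by 2.5 points — enough to flip the apparent leader, which is why the choice of weighting variables is one of the most consequential decisions a pollster makes.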
One promising new method is a hybrid approach. For most of its domestic polls, Kennedy says, the Pew Research Center now mails invitations to participate in online polls—thus combining the ease of Internet surveys with random sampling by postal address. So far, she says, it’s working well.