No matter where you situate yourself on the political spectrum, don’t try to deny that the 2016 US presidential election made you go “whaaaaaaat?” This isn’t a judgment; if you believe Michael Wolff’s book, even Donald Trump didn’t think Donald Trump was going to be president. Partly that’s because of polls. Even if you didn’t spend 2016 frantically refreshing FiveThirtyEight and arguing the relative merits of Sam Wang versus Larry Sabato (no judgment), if you just watched the news, you probably thought that Hillary Clinton had anywhere from a 71 percent to a 99 percent chance of becoming president.

And yet.

That outcome, combined with a similarly hinky 2015 election in the United Kingdom, kicked into life an ecosystem of mea maxima culpas from pollsters around the world. (This being statistics, what you really want is a mea maxima culpa, a mea minima culpa, and then mean, average, and standard-deviation culpas.) The American Association for Public Opinion Research published a 50-page “Evaluation of 2016 Election Polls.” The British report on polls in 2015 was 120 pages long. Pollsters were “completely and utterly wrong,” it seemed at the time, because of low response rates to telephone polls, which still lean heavily on landlines that people tend not to answer anymore.

So now I’m going to blow your mind: All those pollsters might have been wrong about being wrong. In fact, if you look at polling from 220 national elections since 1942—that’s 1,339 polls from 32 countries, from the days of face-to-face interviews to today’s online polls—you find that while polls haven’t gotten much better at predicting winners, they haven’t gotten much worse, either. “You look at the final week of polls for all these countries, and essentially look at how those change,” says Will Jennings, a political scientist at the University of Southampton and coauthor of a new paper on polling error in Nature Human Behaviour. “There’s no overall trend of errors increasing.”

Jennings and his coauthor Christopher Wlezien, a political scientist at the University of Texas, essentially examined the difference between how a candidate or party polled and the actual, final share. That absolute value became their dependent variable, the thing that changed over time. Then they did some math.
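To make that measure concrete, here is a minimal sketch in Python; the function names, party labels, and sample numbers are illustrative, not the researchers’ actual code or data. The dependent variable is simply the absolute gap between a party’s polled share and its final vote share, averaged across parties.

```python
# A minimal sketch of the error measure described above, not the authors' code.
# Party names and vote shares below are hypothetical.

def absolute_poll_error(poll_share, vote_share):
    """Absolute error for one party: |polled share - actual vote share|, in points."""
    return abs(poll_share - vote_share)

def mean_absolute_error(poll, result):
    """Average the absolute error across every party present in both the final
    poll and the official result."""
    parties = poll.keys() & result.keys()
    return sum(absolute_poll_error(poll[p], result[p]) for p in parties) / len(parties)

# A poll showing the race 48-44 ahead of a 46-48 result misses by 2 and 4 points,
# for a mean absolute error of 3.
print(mean_absolute_error({"A": 48.0, "B": 44.0}, {"A": 46.0, "B": 48.0}))  # 3.0
```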

First, they looked at an even bigger database of polls that covered entire elections, starting 200 days before Election Day. That far out, they found, the average absolute error was around 4 percent. Fifty days out, it fell to about 3 percent, and the night before the election it was about 2 percent. That was constant across years and countries, and it’s what you’d expect: as more people start thinking about how they’ll vote and more pollsters go into the field, the results become more accurate.
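The pattern they report can be thought of as bucketing polls by how far out they were taken and averaging the error within each bucket. Here’s a rough sketch of that idea, with made-up numbers standing in for the paper’s multi-decade database:

```python
# A rough sketch of the aggregation described above, assuming each record notes
# how many days before the election the poll closed and its absolute error.
# Field names and numbers are illustrative, not the paper's actual data.
from collections import defaultdict

polls = [
    {"days_out": 200, "abs_error": 4.5},
    {"days_out": 180, "abs_error": 3.8},
    {"days_out": 50, "abs_error": 3.1},
    {"days_out": 1, "abs_error": 1.9},
]

def mean_error_by_lead_time(records, bin_size=25):
    """Average absolute error within bins of days-before-election
    (0-24 days out, 25-49 days out, and so on)."""
    bins = defaultdict(list)
    for r in records:
        bins[r["days_out"] // bin_size].append(r["abs_error"])
    return {b * bin_size: sum(errs) / len(errs) for b, errs in sorted(bins.items())}

print(mean_error_by_lead_time(polls))
# {0: 1.9, 50: 3.1, 175: 3.8, 200: 4.5} -- error shrinks as Election Day nears
```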

[Chart: The red line tracks the average error in political polls in the last week of a campaign over 75 years. Credit: Will Jennings]

More importantly, if you look just at last-week polls over time and take the error for each election from 1943 to 2017, the mean stays at 2.1 percent. Actually, that’s not quite true—in this century it dropped to 2.0 percent. Polling remains pretty OK. “It is not what we quite expected when we started,” Jennings says.

In 2016 in the US, Jennings says, “the actual national opinion polls weren’t extraordinarily wrong. They were in line with the sorts of errors we see historically.” It’s just that people kind of expected them to be less wrong. “Historically, technically advanced societies think these methods are perfect,” he says, “when of course they have error built in.”

Sure, some polls are just lousy—go check the archives at the Dewey Presidential Library for more on that. Really though, all surprises tend to stand out. When polls casually and stably barrel toward a foregone conclusion, no one remembers. “There weren’t a lot of complaints in 2008. There weren’t a lot of complaints in 2012,” says Peter Brown, assistant director of the Quinnipiac University Poll. But 2016 was a little different. “There were more polls than in the recent past that did not perform up to their previous results in elections like ‘08 and ‘12.”

Also, according to AAPOR’s review of 2016, national polls actually reflected the outcome of the presidential race pretty well—Hillary Clinton did, after all, win the popular vote. Smaller state polls showed more uncertainty and underestimated Trump support—and had to deal with a lot of people changing their minds in the last week of the campaign. Polls that year also didn’t account for overrepresentation in their samples of college graduates, who were more likely to support Clinton.

In a similarly methodological vein, though, Jennings and Wlezien’s work has its own limitations. In a culture where civilians like you and me watch polls obsessively, their focus on the last week before Election Day might not be the right lens. That’s especially important if it’s true, as some observers hypothesize, that pollsters “herd” in the final days, wanting to make sure their data is in line with their colleagues’ and competitors’.

“It’s a narrow and limited way to look at how good political polls are,” says Jon Cohen, chief research officer at SurveyMonkey. Cohen says he has a lot of respect for the researchers’ work, but that “these authors are telling a story that is in some ways orthogonal to how people experienced the election, not just because of polls that came out a week or 48 hours before Election Day but because of what the polls led them to believe over the entire course of the campaign.”

Generally, pollsters agree that response rates remain a real problem. Online polling or so-called interactive voice response polling, where a bot interviews you over the phone, might not be as good as random-digit-dial phone polls were a half-century ago. At the turn of the century, the paper notes, perhaps a third of people a pollster contacted would actually respond. Now it’s fewer than one in 10. That means surveys are less representative, less random, and more likely to miss trends. “Does the universe of voters with cells differ from the universe of voters who don’t have cells?” asks Brown. “If it was the same universe, you wouldn’t need to call cell phones.”

Internet polling has similar issues. If you preselect a sample to poll via internet, as some pollsters do, that’s by definition not random. That doesn’t mean it can’t be accurate, but as a method it requires some new statistical thinking. “Pollsters are constantly struggling with issues around changing electorates and changing technology,” Jennings says. “Not many of them are complacent. But it’s some reassurance that things aren’t getting worse.”

Meanwhile, it would be nice if pollsters could start working on ways to better express the uncertainty around their numbers, if more of us are going to watch them. (Cohen says that’s why SurveyMonkey issued multiple looks at the special election in Alabama last year, based in part on different turnout scenarios.) “Ultimately it would be nice if we could assess polls on their methodologies and inputs and not just on the output,” Cohen says. “But that’s the long game.”
And it’s worth keeping in mind when you start clicking on those midterm election polling results this spring.
