Donald Trump got up the morning after the first Presidential Debate telling everyone he had won it. What basis did he have for such a claim? Well, he pointed to a bunch of polls by the likes of Time, CNBC and Fortune showing exactly that. Take a look:

If you pay close attention, you might notice two relevant features. One: except for the previously mentioned sources, the rest are either far-right outlets or virtually unknown ones. Two: they are online polls, widgets embedded in a webpage that allow an anonymous user to cast a vote. The results are shown as soon as the user clicks a Vote or OK button.

Another sign that identifies them: they come with little social media Like and Share buttons so people can reproduce them like rabbits.

First things first: online polls are not real polls! They give a glimpse of the moment, but their reliability is very low. They are not based on samples, and online voting does not prevent multiple votes by the same person, or even mass votes by bot apps that cast nonhuman clicks. Their validity is practically zero.

Let’s examine some reliable sources to contrast this type of poll with the real ones. Vox provides a good piece by German Lopez about it:

The difference between unscientific and scientific polls

The polls that Trump is relying on let anyone vote with absolutely zero checks. If you’re online at the time and find the poll, you can vote. You don’t have to live in America or be a US citizen.

And you can vote multiple times — by reopening a browser tab, going behind an internet proxy, or logging on to a different account.

This can lead to some very skewed results. For example, if an active online community — like r/The_Donald, the Reddit community that supports Trump — gets a bunch of people to vote on a poll (as they did), this can lead to Trump supporters overwhelming the results with a higher percent of Trump supporters than would otherwise be present in a typical sample of American voters. With such a skewed sample, it’s impossible to take the results seriously — it turns into a contest over which online community is most enthusiastic about winning unscientific polls, not how US voters feel about who won the debate.

This is how Trump came out ahead in a few online polls, including those at Drudge Report, Time magazine, and CNBC.

Those are the polls Trump supporters are showing as “proof” of a “victory” only they seem to acknowledge. Lopez then turns to the scientific realities of reliable, sampled opinion research:

The polls Clinton is relying on, on the other hand, use statistical controls to make sure the sample isn’t so skewed. They try to contact people that match the voting population — so they’ll try to ensure that a certain percent of respondents in the survey are white, black, Latino, Democrat, Republican, and so on. And if they can’t reach the right amount of people, they’ll sometimes adopt statistical weights to bring up or down a specific group — so if a survey has too many men, they might try to weigh the women’s responses higher.
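The weighting Lopez describes is essentially post-stratification. Here is a minimal sketch in Python with invented numbers — a hypothetical survey that reached 70 men and 30 women, weighted back toward an assumed 48/52 population split:

```python
# Hypothetical post-stratification sketch. Each respondent is weighted
# by (population share / sample share) of their group, so an
# underrepresented group counts for more and an overrepresented one for less.
population_share = {"men": 0.48, "women": 0.52}  # assumed targets

# (group, answered_yes) pairs; invented data: 70 men, 30 women.
sample = [("men", True)] * 35 + [("men", False)] * 35 \
       + [("women", True)] * 20 + [("women", False)] * 10

n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n
                for g in population_share}
weight = {g: population_share[g] / sample_share[g] for g in population_share}

# Unweighted "yes" percentage versus the weighted estimate.
raw = sum(1 for _, yes in sample if yes) / n
weighted = (sum(weight[g] for g, yes in sample if yes)
            / sum(weight[g] for g, _ in sample))
print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")  # raw: 55.0%, weighted: 58.7%
```

Because women were undersampled but more supportive in this made-up data, weighting them up moves the estimate from 55.0 to 58.7 percent — the same correction, in miniature, that a pollster applies after a rushed post-debate survey.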

The CNN/ORC poll on Monday night was a scientific one. CNN acknowledged its sample of Democrats was a bit too high — since this was a poll taken quickly after the debate, the network and its pollster just didn’t have time to do better. Still, the win was overwhelmingly for Clinton — with 62 percent of voters who watched the debate saying that she won versus 27 percent saying the same for Trump.

CNN’s poll wasn’t the only scientific one to reach this conclusion. Public Policy Polling’s post-debate poll found that 51 percent of debate watchers said Clinton won, versus 40 percent who said the same of Trump. So far, CNN and PPP’s polls are the only two scientific polls we have.

Nate Silver, star pollster and statistician at FiveThirtyEight, gives his view in “Live Polls And Online Polls Tell Different Stories About The Election,” where he explains why not all polls are created equal, even though FiveThirtyEight uses all of them, weighted by their varying degrees of reliability.

FiveThirtyEight generally takes an inclusive attitude towards polls. Our forecast models include polls from pollsters who use traditional methods, i.e., live interviewers. And we include surveys conducted with less tested techniques, such as interactive voice response (or “robopolls”) and online panels. We don’t treat all polls equally — our models account for the methodological quality and past accuracy of each pollster — but we’ll take all the data we can get.

This split, however, between live-interview polls and everything else, is something we keep our eye on. When we launched our general election forecasts in late June, there wasn’t a big difference in the results we were getting from polls using traditional methodologies and polls using newer techniques. Now, it’s pretty clear that Hillary Clinton’s lead over Donald Trump is wider in live-telephone surveys than it is in nonlive surveys.


In short, online polls are not real, valid polls. They show a snapshot of enthusiasm during a live TV event, such as a debate, but they cannot be taken seriously as indicators of actual public preference because they:

  • Are not based on scientifically designed samples.
  • Allow the same person to vote several times.
  • Accept votes from outside the US.
  • Can (most of the time) be flooded by bots voting massively for one option.
  • Are susceptible to tampering by the site’s owner.
  • Draw more votes from whichever group the hosting site is popular with.
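The failure modes listed above compound one another. A toy simulation (all numbers invented) makes the point: a population with 45 percent support, measured once with a random sample and once with an open online widget that a hypothetical 5,000-member online community brigades, each member voting twice:

```python
# Toy simulation of an unscientific online poll versus a sampled survey.
# All figures are invented for illustration.
import random
random.seed(1)

POP_SUPPORT = 0.45   # assumed true support in the population
ORGANIC = 10_000     # organic widget visitors, mirroring the population
BRIGADE = 5_000      # coordinated supporters sent by an online community

# Scientific-style survey: 1,000 respondents drawn at random.
sample = [random.random() < POP_SUPPORT for _ in range(1_000)]

# Online widget: organic votes plus a coordinated bloc voting "support",
# each brigader voting twice because nothing stops them.
organic_votes = [random.random() < POP_SUPPORT for _ in range(ORGANIC)]
brigade_votes = [True] * BRIGADE * 2

poll = organic_votes + brigade_votes
print(f"sampled survey: {sum(sample) / len(sample):.0%} support")
print(f"online widget:  {sum(poll) / len(poll):.0%} support")
```

The sampled survey lands near the true 45 percent, while the double-voting bloc alone pushes the widget above 70 percent — a "victory" that says nothing about the population and everything about which community showed up.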

So, let’s put favoritism or fanaticism aside and root for precision instead.

-Fernando Nunez-Noda (text and illustration).