
Monday, October 24, 2016

Polls are wrong, not rigged

Two national polls show a 14-point spread in results despite claimed margins of error of 3.5 points in one case and 3.6 in the other. One poll shows Trump ahead by 2; the other shows Hillary up by 12.

Guess which one is a co-operative effort of ABC News and the Washington Post.

Which led Zero Hedge to conclude the fix is in:
Of course, like many of the recent polls from the likes of Reuters, ABC and The Washington Post, something curious emerges when you look just beneath the surface of the headline 12-point lead. 
"METHODOLOGY – This ABC News poll was conducted by landline and cellular telephone Oct. 20-22, 2016, in English and Spanish, among a random national sample of 874 likely voters. Results have a margin of sampling error of 3.5 points, including the design effect. Partisan divisions are 36-27-31 percent, Democrats -- Republicans --Independents."
As we've pointed out numerous times in the past, in response to Reuters' efforts to "tweak" their polls, per The Pew Research Center, at least since 1992 Democrats have never enjoyed a 9-point registration gap, despite the folks at ABC and The Washington Post somehow convincing themselves it was a reasonable margin.

Proof positive of slanting the statistics.

Except it isn't.

The Investor's Business Daily poll that shows Trump up by 2 noted this: "The poll results include responses from 783 likely voters, with a weighted partisan breakdown of 282 Democrats, 226 Republicans, and 259 Independents. The results reflect the rolling average of six days' worth of polling, with a margin of error of +/- 3.6 percentage points."

That breakdown works out to roughly 36-29-33 percent of the 783 likely voters. A 7-point Democratic advantage.
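As an arithmetic check, the weighted counts IBD reports can be converted to shares. A minimal Python sketch (the counts are IBD's; the percentages and the dictionary names are computed and chosen here):

```python
# Weighted partisan counts reported in the IBD/TIPP poll quoted above
counts = {"Democrats": 282, "Republicans": 226, "Independents": 259}
likely_voters = 783  # total likely voters in the sample

# Note the three counts sum to 767, so ~16 respondents fall outside these groups
shares = {party: round(n / likely_voters * 100, 1) for party, n in counts.items()}
print(shares)  # → {'Democrats': 36.0, 'Republicans': 28.9, 'Independents': 33.1}
```

Either way the Democratic edge over Republicans comes out to about 7 points, versus the 9-point edge in the ABC sample.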

Same model + same method = different result?

Instead of trying to second-guess the polls, readers should accept that polls are not science. If they were, people using the same methodology would not reach different results, especially ones that fall outside the margin of error. It would be like weighing Hillary on the same scale and coming up with answers of 170 pounds and 223 pounds on the same day.

We have known the polls are off for some time.

Four years ago, the polls listed on the Real Clear Politics Average predicted one day before the election that Obama would squeak out a 0.7 percent win.

Obama won by 3.9, which was outside the margin of error for random-digit dial surveys, which are the cottage industry's standard.
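For reference, a poll's stated margin of error follows almost entirely from its sample size. A minimal Python sketch, assuming the textbook simple-random-sample formula with p = 0.5 (the worst case) and no design-effect adjustment; the helper name `moe` is chosen here:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(moe(874), 1))  # ABC/WaPo sample of 874 → 3.3 (they report 3.5 including the design effect)
print(round(moe(783), 1))  # IBD/TIPP sample of 783 → 3.5 (they report 3.6)
```

Even combining the two stated margins gives roughly 5 points of allowable disagreement; a 14-point gap means at least one poll is off by far more than its advertised error.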

The only logical conclusion is that the polls are wrong. All of them, because the methods used are the same and the results vary beyond the margin of error. Some polls may hit the exact outcome on Election Day, but that will be by luck, not by design. Averaging polls is an exercise in futility popularized by Nate Silver, who got skunked in the nomination process so badly that he had to apologize to readers.

Read the polls. In arguments, cite the ones that agree with you. But vote on Election Day and see what actually happens.


Hate Nate? In "Trump the Press," I devoted a chapter to him and his erroneous forecasts. The book is a fun romp through the Republican nomination that uses the deadliest weapon to skewer the media experts: their own words. "Trump the Press" is available as a paperback, and on Kindle.


  1. How about both wrong and rigged? The ones actually trying to see through the fog can't, and the others care more about the results than the truth; they know they can't get to the truth anyway, so they just put out a result that helps their side.

    1. Seems to me that a poll rigged to make "despondent" Trump voters stay at home on Election Day could just as easily make "jubilant" Clinton voters not bother to turn out.

      That's what happened with that Brexit vote. Despite polls showing the Remain vote streets ahead, the Leave guys won because their voters were more cussed and ornery.

    2. The Hillary crowd would seem to have more lazies in it.

  2. Thing is, we have an awful lot of evidence this year they are rigged.

  3. Can't help but think that certain polls have a date with liability.

    If you say that Clinton is ahead by 12 points, you'd better mean it.

  4. Yes, they are wrong. But it's more serious than that. Consider that so-called gold standard of about 4% MOE. The real relationship for the standard error (the standard deviation of the sample mean, estimated from the sample) is SE = s/√n, where s is the sample standard deviation (computed with n−1 in the denominator). So, as the sample size n goes up, real-world science predicts that the standard error goes down. That's why if you don't reach significance in a study, you just increase the n and see what happens. Put another way, the more you sample the population, the closer you'll get to the population mean--makes sense, no? MOE (jargon) is a *confidence interval* within which the population mean is likely to fall. That's how certain you can be before betting your paycheck on it. So you multiply the standard error by 1.96 for a 95% confidence interval where the population mean will be found. This also assumes a normal distribution of numbers, which you never actually get from a yes/no response converted into percentages. (going from a nominal scale to a ratio scale)

    Notice that as the n increases in the polls, the standard error essentially stays the same. Also, they "shoot" for a specific standard error. It doesn't work that way, folks. The only way that works is fraud...they falsify the data; they literally pick and choose the numbers.

    Statisticians can actually use a formula to indict them. Where is that? (crickets)

    1. If it's anything like medical research, study design is everything. How things are set up in the beginning basically determines the outcome. This explains why so much "knowledge" ends up being overturned in the long run, and how findings of association end up being confused with cause and effect. If trained scientists can be so gullible, pollsters and regular people are as well.

    2. In simpler terms, the MOE assumes a truly random sample. If you survey the average weight of apples in an orchard, you'll get a different result if you only sample the side that is irrigated.

  5. Any word on the D:R:I numbers for the LATimes poll?

  6. I dunno...may be in the minority on this, but we don't have a landline in our house. And if I get a call on my cell from a number I don't recognize, I don't answer it. I figure, if it's important, they'll leave a message. Pollsters don't leave messages. I have voted in every primary and general election since 1986. Never once talked to a pollster, except in one exit poll. The polling system isn't rigged...it's just antiquated...

    1. I have caller ID plus screening on my landline (actually a VoIP line). Most polling calls are automatically screened, and if I don't recognize the number or it says "Unavailable," I don't answer. Bottom line - I don't ever get polled, which is fine with me.

  7. Plenty of wikileaks evidence, including your previous post say it is rigged.

  8. This comment has been removed by the author.

  9. In my view it shows that it is not just the D oversample that contributes, but also the demo mix in the poll; i.e., an unrealistic % of minorities will end up with a non-credible crosstab of Clinton leading among men (ABC). As zerohedge mentions, pollsters such as ABC show only their D/R/I breakdown, and not any further demo details of their samples. This links in with the Podesta e-mail - i.e., a rigging method discussed there is adjusting the demo mix (see the 30-page attachment in that leaked e-mail).

  10. Gallup got 2012 wrong -- had Mitt winning it by 2. They stated it was impossible to conduct proper political polling in this day & age due to caller ID & cell phones so they got out of the business.
    Gallup is honest. The rest are doing it for the money.
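On the standard-error arithmetic in comment 4, the 1/√n relationship is easy to check numerically. A hypothetical Python sketch, again assuming the textbook simple-random-sample formula with p = 0.5 (the `moe` helper is a name chosen here):

```python
import math

def moe(n, p=0.5, z=1.96):
    # 95% margin of error, in percentage points, for a simple random sample of size n
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (400, 900, 1600, 3600):
    print(n, round(moe(n), 2))
```

Quadrupling the sample only halves the margin of error, which is one reason polls of 800-1,000 respondents all land near 3-4 points: similar stated margins follow from similar sample sizes.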