Sunday, May 21, 2017

Full Of Themselves, And STILL Haven't Learned!

Excerpt from a very astute analysis by Ted Carroll writing for Rasmussen Reports:
As our friends at this year’s AAPOR [American Association for Public Opinion Research] conference already know, we [Rasmussen] were named the most accurate pollster in predicting the 2016 presidential popular vote – coming within one-tenth of one percent of the actual vote totals with over 136 million votes counted. We believe our automated private interviews, our use of internet panels and other proprietary techniques gave us an advantage over live interview pollsters in identifying the real underlying issues that led to the upset few in the industry saw coming.

Yet during last year’s campaign, any pollster like Rasmussen Reports that dared deviate from the almost absolute certainty that Hillary Clinton was going to be our next president was the target of a firestorm of criticism from the mainstream media and the so-called “polling analyst” community. This particular intimidation racket featured journalist enforcers banging out quantification “critiques” and “rankings” that falsely implied superior predictive precision. These tainted statistics provided cover for media partisans to hammer the heads of any pollsters issuing impure thoughts. ESPN’s Nate Silver, Harry Enten and their fellow travelers rose over time to Walter Winchell-like heights, only to crash - exceptionally hard - along with their many disciples on election night.

As for our own views on how polls and pollsters got it wrong, here they are:

Consider, just for a moment, how far off the mark the polls were for the 2016 election:
First, let's look at the final vote results: Trump won the electoral vote by 74 votes, while Clinton won the popular vote by 2.8 points.
Electoral Vote (270 needed to win): Trump 306, Clinton 232
Popular Vote: Trump 44.4%, Clinton 47.2%

Now, let's look at what the major polls and prognosticators forecast in their final predictions:
Moody's Analytics: Clinton 332, Trump 206  WRONG!
Larry Sabato: Clinton 322, Trump 216  WRONG!
Five Thirty Eight: Clinton 320, Trump 235  WRONG!
Fox News: Clinton 274, Trump 215  WRONG!
Associated Press: Clinton 274, Trump 190  WRONG!
LA Times: Clinton 352, Trump 186  WRONG!
Election Projection: Clinton 279, Trump 249  WRONG!
RCP Average: Clinton 272, Trump 266  WRONG!

And let's look at the popular vote prognostications:
Monmouth University Poll: Clinton +6  WRONG! 
NBC News: Clinton +7  WRONG!
NBC News - Wall Street Journal: Clinton +5  WRONG!
Reuters/Ipsos: Clinton +5  WRONG!

And in some of the key states, polls were wildly wrong! Though the Real Clear Politics (RCP) poll averages showed Clinton winning Pennsylvania, Michigan, and Wisconsin, Trump ended up winning all three, outperforming projections by 3 points, 4.4 points, and a stunning 7.5 points, respectively. That was way, way, WAY off the mark. In Iowa, where poll averages showed Trump up by three points, he actually won by ten points.
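For readers who want the arithmetic spelled out, a polling "miss" here is simply the actual margin minus the predicted margin, in percentage points. Below is a minimal sketch in Python, using only the Iowa figures quoted above (this post doesn't reproduce the underlying averages behind the Pennsylvania, Michigan, and Wisconsin misses, so the function is shown generically):

def polling_miss(predicted_margin, actual_margin):
    # Both margins are in percentage points, positive meaning Trump ahead.
    # A positive result means Trump outperformed the poll average.
    return actual_margin - predicted_margin

# Iowa, per the RCP average cited above: predicted Trump +3, actual Trump +10.
print(polling_miss(3.0, 10.0))  # 7.0 -- Trump beat the average by seven points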

Among all these, few polls seemed more off the mark than the Monmouth University poll, hawked endlessly during the campaign by its ubiquitous director Patrick Murray, aka "Pollster Patrick" (as he bills himself on Twitter).  Murray was everywhere, until the polling data crashed on election night. “The polls were largely bad, including mine,” Murray later admitted. "In key states, the narrative driven by data was wrong," he told his local daily newspaper. "We were telling the wrong story, and that's bad."

But, wait a minute. Polls aren't about "narratives" or "telling stories." Polls are supposed to be more accurate than that. In the end, polls are about hard data, aren't they? Oh, we know that there are real people and real stories behind the data, but isn't that more the business of focus groups and more nuanced chroniclers of public opinion? And, for that matter, what people are thinking and feeling and living actually drives the data and not the other way around, right? So it would seem.

Well, maybe Murray got too caught up in the heady notoriety of the numbers chase and missed something along the way. After all, he seems like a nice enough guy and there would appear to be an explanation for everything, eventually -- right? Huh?

We can't say with certainty how and why the polling was so off base. Maybe it was simply that the pollsters themselves drifted too far from the grassroots -- just like so many elites drifted too far from those "ordinary Americans" that Hillary Clinton said she didn't want to hear about. We do know this, however -- Larry Sabato of the University of Virginia (another prognosticator and well-known talking head) came right out and admitted that "we blew it." And then Sabato added this:
We heard for months from many of you, saying that we were underestimating the size of a potential hidden Trump vote and his ability to win. We didn’t believe it, and we were wrong. The Crystal Ball is shattered. . . .
We have a lot to learn, and we must make sure the Crystal Ball never has another year like this. This team expects more of itself, and we apologize to our readers for our errors.
You have to hand it to Sabato. He faced the facts and wisely headlined his post-election commentary "Mea Culpa, Mea Culpa, Mea Maxima Culpa." Now, that's the proper way to fess up. Classy guy!

A couple of final notes: One stubbornly consistent poll maintained an accurate snapshot of the electorate throughout: the USC Dornsife/Los Angeles Times “Daybreak” poll, which we cited on this blog again and again. It's significant to note that most political pundits and talking heads mocked this poll, dismissed it as an outlier, and refused even to credit its findings. Yet over the campaign's final months, the poll regularly gave Donald Trump a significant chance to win.

Add to the Daybreak Poll two prescient professors: First and foremost, Helmut Norpoth of Stony Brook University, who insisted that Trump had an 87% chance of winning the election based on his ironclad model with a remarkable record of accuracy. Also, presidential historian Allan Lichtman of American University foresaw a Trump win based on his own model, though Lichtman seemed to hedge his bet a bit while Norpoth held firm over many months. Hats off to them!
