But, finally we're ready.
Have you been following those polls that say that President Trump's approval numbers have been slipping? Have you?
Here's how the supposed poll results have been bannered across newspapers and websites:
Polls show Trump with historically low approval ratings . . .
Confidence drops in Trump transition . . .
Trump's poll numbers are slipping . . .
And even before he took office, we had THIS headline:
Trump Supporters Are Turning Against Him . . .
Well, don't believe any of them! Because many of these are the same polls that told you that Hillary Clinton would easily be elected as America's 45th President. It. Was. A. Done. Deal. That's what they told you. They were certain of it.
Consider, just for a moment, how far off the mark the polls were:
First, let's look at the final vote results: Trump won the electoral vote by 74 while Clinton led the popular vote by 2.8%.
Electoral Vote (270 needed to win):
Trump 306
Clinton 232

Popular Vote:
Trump 44.4%
Clinton 47.2%
Now, let's look at what the major polls and prognosticators forecast in their final predictions:
Moody's Analytics: Clinton 332, Trump 206 WRONG!
Larry Sabato: Clinton 322, Trump 216 WRONG!
FiveThirtyEight: Clinton 302, Trump 235 WRONG!
Fox News: Clinton 274, Trump 215 WRONG!
Associated Press: Clinton 274, Trump 190 WRONG!
LA Times: Clinton 352, Trump 186 WRONG!
Election Projection: Clinton 279, Trump 249 WRONG!
RCP Average: Clinton 272, Trump 266 WRONG!
And let's look at the popular vote prognostications:
Monmouth University Poll: Clinton +6 WRONG!
NBC News: Clinton +7 WRONG!
NBC News - Wall Street Journal: Clinton +5 WRONG!
Reuters/Ipsos: Clinton +5 WRONG!
And in some of the key states, polls were wildly wrong! Though the Real Clear Politics (RCP) poll averages showed Clinton winning Pennsylvania, Michigan, and Wisconsin, Trump ended up winning all three, outperforming projections by 3 points, 4.4 points, and a stunning 7.5 points, respectively. That was way, way, WAY off the mark. In Iowa, where poll averages showed Trump up by three points, he actually won by ten points.
Among all these, few polls seemed more off the mark than the Monmouth University poll, hawked endlessly during the campaign by its ubiquitous director Patrick Murray, aka "Pollster Patrick" (as he bills himself on Twitter). Murray was everywhere, until the polling data crashed on election night. “The polls were largely bad, including mine,” Murray later admitted. "In key states, the narrative driven by data was wrong," he told his local daily newspaper. "We were telling the wrong story, and that's bad."
But, wait a minute. Polls aren't about "narratives" or "telling stories." Polls are supposed to be more accurate than that. In the end, polls are about hard data, aren't they? Oh, we know that there are real people and real stories behind the data, but isn't that more the business of focus groups and more nuanced chroniclers of public opinion? And, for that matter, what people are thinking and feeling and living actually drives the data and not the other way around, right? So it would seem.
Well, maybe Murray got too caught up in the heady notoriety of the numbers chase and missed something along the way. After all, he seems like a nice enough guy and there would appear to be an explanation for everything, eventually.
We can't say with certainty how and why the polling was so off base. Maybe it was simply that the pollsters themselves drifted too far from the grassroots -- just like so many elites drifted too far from those "ordinary Americans" that Hillary Clinton said she didn't want to hear about. We do know this, however -- Larry Sabato of the University of Virginia (another prognosticator and well-known talking head) came right out and admitted that "we blew it." And then Sabato added this:
"Mea Culpa, Mea Culpa, Mea Maxima Culpa." Now, that's the proper way to fess up. Classy guy!
But, wait a minute. Polls aren't about "narratives" or "telling stories." Polls are supposed to be more accurate than that. In the end, polls are about hard data, aren't they? Oh, we know that there are real people and real stories behind the data, but isn't that more the business of focus groups and more nuanced chroniclers of public opinion? And, for that matter, what people are thinking and feeling and living actually drives the data and not the other way around, right? So it would seem.
Well, maybe Murray got too caught up in the heady notoriety of the numbers chase and missed something along the way. After all, he seems like a nice enough guy and there would appear to be an explanation for everything, eventually.
We can't say with certainty how and why the polling was so off base. Maybe it was simply that the pollsters themselves drifted too far from the grassroots -- just like so many elites drifted too far from those "ordinary Americans" that Hillary Clinton said she didn't want to hear about. We do know this, however -- Larry Sabato of the University of Virginia (another prognosticator and well-known talking head) came right out and admitted that "we blew it." And then Sabato added this:
"We heard for months from many of you, saying that we were underestimating the size of a potential hidden Trump vote and his ability to win. We didn't believe it, and we were wrong. The Crystal Ball is shattered. . . . We have a lot to learn, and we must make sure the Crystal Ball never has another year like this. This team expects more of itself, and we apologize to our readers for our errors."
You have to hand it to Sabato. He faced the facts and wisely headlined his post-election commentary "Mea Culpa, Mea Culpa, Mea Maxima Culpa." Now, that's the proper way to fess up. Classy guy!
A couple of final notes: One stubbornly consistent poll offered an accurate snapshot of the electorate all along. That was the USC Dornsife/Los Angeles Times "Daybreak" poll, which we cited on this blog again and again. It's significant that this poll was mocked by most political pundits and talking heads. They dismissed it as an outlier, and most refused even to credit its findings. Yet the poll regularly gave Donald Trump a significant chance to win throughout the final months of the campaign.
Add to the Daybreak Poll two prescient professors: First and foremost, Helmut Norpoth of Stony Brook University, who insisted that Trump had an 87% chance of winning the election based on his iron-clad model, which has a remarkable record of accuracy. Also, presidential historian Allan Lichtman of American University foresaw a Trump win based on his own model, though Lichtman seemed to hedge his bet a bit while Norpoth held firm for many months. Hats off to them!