The 'spectacle' of polls: hits, misses and... does it even matter?
The Republican wave in the November 5 election left many pollsters dumbfounded. Some vowed to reassess their methods, while others stood by their predictions. Yet the credibility of the polls has been under scrutiny for years.
There was talk of a close election, even one of the closest in history. It was far from it. Donald Trump beat Kamala Harris by a comfortable margin. The Republicans also won control of the Senate and could do the same with the House of Representatives. That prediction, supported by some polls, was contradicted by others - polls that, in the final hours, claimed credit for the accuracy of their forecasts.
In addition to the bookmakers and Moo Deng (a hippo famous on TikTok who "predicted" the result by choosing a bowl of fruit named after the Republican), one of the few forecasters that came close to the actual result was Atlas Intel, the pollster that performed best in 2020. Other pollsters with accurate predictions included Rasmussen Reports, J.L. Partners and Decision Desk HQ (which relied on its election prediction model). Far off the mark were NPR-Marist, Morning Consult, The Economist, and several others.
National polls had shown Harris gaining ground, with her lead ranging from 0.1% to 2%. However, the current tally indicates a 3.5% lead for Trump. That said, it is anticipated that in the coming hours, days and weeks - particularly in California, where millions of ballots remain to be counted - the gap will likely narrow to a margin similar to those seen in past elections.
The voting trend in the last stretch of the presidential race suggested that the outcome might not be as close as initially anticipated. While some projections still gave Harris the edge, the 538 aggregator showed her down by 1.7 points over the past 15 days. Overall, the trend indicated a decline for Harris, while Trump saw a steady rise in support.
At the state level, the polls largely reflected the tightness of the races. Trump narrowly won Wisconsin by one point, Georgia by two, and North Carolina by three. These results were close to most predictions. Arizona and Nevada are still too close to call.
While opinion poll results have received mixed reviews, with some arguing that they performed much better than in 2020 and 2016, most analysts agree that this is the third time the polls have underestimated Trump.
Battle of pollsters
"My prediction for this presidential election was wrong, I own up to it," history professor Allan Lichtman said in a video he uploaded to social media. His method of 13 true-or-false questions had successfully predicted nine of the last 10 presidential elections. In 2024, he recorded his second miss.
Lichtman, who announced he would take time to reflect on what went wrong, criticized renowned pollster Nate Silver, who, unlike him, refused to acknowledge any misjudgments. "Silver’s last call had Harris very marginally ahead. He certainly was not predicting a Trump Electoral College landslide."
He also criticized Silver for claiming that his "gut" told him Trump would win, since that claim meant he "can’t be wrong no matter what the outcome" - a tactic for playing both sides.
Before the polls closed, Silver had criticized his colleagues, accusing them of adjusting their results to present a closer race than was actually the case. One tactic he pointed to was "herding" - the practice of adjusting poll results based on the trends of other surveys.
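Herding leaves a statistical fingerprint: in a tight race, sampling error alone should make individual poll margins scatter by a few points, so a cluster of polls far tighter than that is suspicious. A minimal sketch of that check in Python, using made-up margins and a hypothetical sample size rather than any real poll data:

```python
import math
import statistics

# Hypothetical final-week margins (Trump minus Harris, in points) from
# seven polls of n = 800 likely voters each - illustrative numbers only.
margins = [0.5, 0.0, 1.0, 0.5, 0.0, 1.0, 0.5]
n = 800

# Under pure sampling error, a single poll's margin has a standard error
# of roughly 2 * sqrt(p * (1 - p) / n), with p near 0.5 in a tight race.
expected_sd = 2 * math.sqrt(0.5 * 0.5 / n) * 100  # in percentage points

observed_sd = statistics.stdev(margins)

# Polls that cluster far more tightly than chance allows suggest herding.
print(f"expected spread ~{expected_sd:.1f} pts, observed ~{observed_sd:.1f} pts")
if observed_sd < 0.5 * expected_sd:
    print("spread is implausibly tight - consistent with herding")
```

With these invented numbers, the expected spread is about 3.5 points while the observed spread is under half a point - exactly the kind of gap Silver's herding critique points to.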
This is not the first time Silver and Lichtman have clashed. Their ongoing feud highlights the internal divisions within the world of polling - ranging from methodological debates to ideological differences and competing interests.
Alongside these internal disputes, external criticisms often arise after each election, as people compare actual results with predictions. In recent hours, Silver has become one of the most targeted figures, largely due to his high public profile. Among the critics is researcher Tim Hwang.
Silver also said Trump had a 20% chance of winning all the swing states, compared to 14% for Harris. However, he gave the vice president a 70% chance of winning the popular vote. These mixed results highlight the uncertainty of the race.
What went wrong?
Here at VOZ, we noted days ago that there is historical precedent for a seemingly close race that ended in a landslide GOP victory. In the 1980 presidential race, Ronald Reagan vastly outperformed then-President Jimmy Carter, despite polls suggesting a tight contest.
Political analyst Craig Keshishian, who once worked for Reagan, pointed out similarities between the two races. In 1980, the apparent parity evaporated once voting began. "The Carter campaign bottomed out and Reagan won by nearly 10 points in the popular vote and by a staggering 489 to 49 in the Electoral College." According to Keshishian, the key factor was the hidden vote.
The "hidden vote" or a "silent majority" refers to a group of voters who do not openly express their preferences or even hide them when asked. It was a key factor in Trump's first victory in 2016, although it is still unclear what role it played in Tuesday's results. Harris' team had also banked on this elusive group, believing it consisted largely of married women with MAGA-supporting husbands. That gamble, however, seems not to have paid off.
Whether or not a hidden vote existed, pollsters themselves acknowledge that they failed to grasp the extent of the Republican candidate’s appeal in communities once seen as firmly Democratic. There were signs, however - this newspaper, for example, had been predicting for months that both the Hispanic vote and the Black vote were shifting toward Trump. The polls ultimately confirmed this trend. Equally true, however, was Harris' struggle to secure the male vote.
Lichtman isn't the only forecaster grappling with the question of what went wrong: "I’ll be reviewing data from multiple sources with hopes of learning why that happened. And, I welcome what that process might teach me," said J. Ann Selzer in The Des Moines Register. Selzer conducts an Iowa poll that the newspaper publishes.
Her latest poll gave 47% to Harris and 44% to Trump, with a margin of error of 3.4 points. However, the current tally reverses that result and widens the gap, showing Trump at 56% and Harris at 43%. What was expected to be a 3-point advantage for the Democrat turned out to be a 13-point lead for the Republican.
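The scale of that miss can be checked with simple arithmetic. A sketch using only the figures above, plus the standard statistical caveat that a reported margin of error applies to each candidate's share separately, so the uncertainty on the gap between them can approach double that figure:

```python
# Selzer's Iowa poll vs. the counted result (figures from the article).
poll_harris, poll_trump = 47, 44
result_harris, result_trump = 43, 56

poll_margin = poll_trump - poll_harris        # -3: Harris ahead by 3
result_margin = result_trump - result_harris  # +13: Trump ahead by 13
swing = result_margin - poll_margin           # 16-point swing overall

# The 3.4-point margin of error covers each candidate's share; the error
# on the difference between two shares can be nearly twice as large.
moe_on_gap = 2 * 3.4

print(f"swing: {swing} pts, vs. roughly +/-{moe_on_gap} pts on the gap")
```

Even granting the doubled uncertainty of about 6.8 points on the gap, a 16-point swing sits far outside what sampling error can explain.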
What are the polls good for?
Entertainment journalist James Hibberd argued that polls have become "an entertainment product." He made this observation, fittingly, in The Hollywood Reporter.
For Hibberd, polls are not only ineffective but also counterproductive, as he believes they undermine the credibility of the media that publish them. However, he acknowledges one widely accepted function: motivating voter turnout.
If a candidate’s lead looks very large, supporters might feel complacent and not turn out to vote; if it looks hopelessly small, they may stay home as well. But when the margin is narrow, voters may feel their vote is more crucial. This is one of the criticisms that has emerged in recent hours against polls showing Harris tied with Trump, with some accusing these polls of being manipulated to energize the Democratic base.
Polls are also crucial for campaign teams for another reason: they help evaluate the effectiveness of their messages and identify which groups to target.
It’s important to note that not all polls were wrong, and some trends predicted in advance by a few proved to be crucial. The Hispanic vote was key for Trump, as was the Black vote. As further analysis unfolds, some polls will be vindicated, while others will likely be discredited.