I am a pollster. My organization, CIRCLE, just released a national survey of young adults that shows Obama ahead of Romney among likely voters under 30: 52% to 35%.
Pollsters much more prominent than I are under fire. Look at the comment thread on any high-traffic blog post or news article that reports a poll and you will see fervent remarks denouncing the survey for deliberate, partisan bias. Typically, the charge is that a poll showing Obama ahead has been conducted to help Obama (even though one might think that a lowball estimate would work better, by alarming his supporters into voting).
One sees blanket denunciations as well as very precise, faux-erudite critiques: the Detroit News poll showed a pro-Romney bias because the paper has a libertarian editorial board; Nate Silver is cooking the books because the Times is liberal; national polls are biased toward Romney because they miss cell-phone users. I was on talk radio yesterday in San Francisco, and a caller argued that Obama’s support cannot really have declined because of the first presidential debate. Instead, his decline in national polls must be a deliberate distortion, laying the groundwork for the fraud that will occur on Election Day, when Romney will use doctored voting machines to steal the vote.
I would like to say: Who you want to win is different from who’s ahead. The former is a value judgment; the latter is an empirical proposition. Empirical propositions are true or false. So disliking a poll does not make it wrong.
This is largely true, but won’t quite suffice.
I do believe that the average of the polls provides an accurate picture of the horse race. Aggregating all available surveys produces a gigantic combined sample, and drawing on many pollsters’ data reduces the error introduced by any one firm’s specific methods. We said that youth favored Obama by 52%-35%. We had interviewed a randomly recruited online sample from Knowledge Networks. The very same day, completely independently, and using a random-digit-dialing survey of land-lines and cell phones, the Pew Research Center pegged the youth vote at 56%-35%, well within the margin of error of our result. I take this as confirmation of our finding. I do not read it as mere coincidence, because the same thing happens every day. Separate pollsters draw modest random samples of Americans, use different questions and modes of contact, and come up with quite similar results. The method works.
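As a rough illustration of why I read the two results as mutually confirming, here is a minimal sketch of the standard margin-of-error comparison. The sample sizes are hypothetical (neither poll’s n is stated above); only the 52% and 56% figures come from the two surveys.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Obama's share of the under-30 vote in the two polls.
circle_share, pew_share = 0.52, 0.56

# Hypothetical sample sizes, assumed purely for illustration.
circle_n, pew_n = 1000, 1000

# Margin of error of the *difference* between two independent estimates:
# the variances of the two proportions add.
diff_moe = 1.96 * math.sqrt(
    circle_share * (1 - circle_share) / circle_n
    + pew_share * (1 - pew_share) / pew_n
)

diff = abs(pew_share - circle_share)
print(f"difference {diff:.3f} vs. MOE of the difference ±{diff_moe:.3f}")
```

With these assumed sample sizes, the 4-point gap between the polls is smaller than the roughly ±4.4-point margin of error on the difference, which is the sense in which two independently conducted surveys can be said to agree.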
On the other hand:
1. There is no truth now about how people will vote next week. If the polls are supposed to be predictive, that’s not a typical empirical truth.
2. Each poll requires a whole set of choices that affect the findings. We hired Knowledge Networks, which randomly recruits a national sample and provides people with free Internet access if they need it. We drew a random sample of their panel with large minority sub-samples that we adjusted to make the sample resemble the Census demographic profile of 18- to 29-year-old citizens. We asked respondents: (1) how likely they were to vote, (2) whether they were certain to vote for Obama, and (3) whether they were certain to vote for Romney. (We randomized the order of the latter two questions.) To calculate Obama’s share of the youth vote, we reported the proportion who said that they were extremely likely to vote, that they preferred Obama, and that they did not prefer Romney.
The alternatives are myriad. If you call people by phone, you are likely to give them a choice of the candidates and code people as undecided only if they refuse to respond. Often, the interviewer pushes back and says, “If you had to choose …?” That reduces the undecided rate, which was fairly high in our poll. The survey can just ask about Obama and Romney, or it can add two minor party candidates (Libertarian Gary Johnson and Green Jill Stein), or it can add even more options. We accepted a generic response about “someone else,” and four percent chose that.
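The counting rule described in point 2 can be sketched in a few lines. The field names and responses here are invented for illustration; only the rule itself (extremely likely to vote, certain for one candidate, not certain for the other) comes from our survey design, and I am assuming the denominator is likely voters, to match “among likely voters” above.

```python
# Hypothetical responses; only the counting rule reflects the survey design.
respondents = [
    {"likely": "extremely", "obama": True,  "romney": False},
    {"likely": "extremely", "obama": False, "romney": True},
    {"likely": "somewhat",  "obama": True,  "romney": False},
    {"likely": "extremely", "obama": False, "romney": False},  # undecided / "someone else"
]

def candidate_share(rows, candidate, opponent):
    """Among respondents extremely likely to vote, the share certain to vote
    for `candidate` and not certain to vote for `opponent`."""
    likely = [r for r in rows if r["likely"] == "extremely"]
    hits = sum(1 for r in likely if r[candidate] and not r[opponent])
    return hits / len(likely)

obama_share = candidate_share(respondents, "obama", "romney")
romney_share = candidate_share(respondents, "romney", "obama")
print(obama_share, romney_share)
```

Note that undecided likely voters stay in the denominator but in neither numerator, so the two shares need not sum to 100% — which is why our poll could report a fairly high undecided rate alongside the 52%-35% split.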
It is typical of social science that truth is somewhat obdurate (not just made up, but stubbornly out there), yet reality is very much colored by our methods and choices.
3. None of us directly calls more than 1,000 Americans to interview them about the presidential election. We trust other people to do that for us. You trust me and my colleagues to survey youth (or I hope you do). We trusted Knowledge Networks to draw a good sample for us. I trusted my colleagues to run the numbers right. Since social knowledge is mediated, it relies on trust of strangers, of institutions, or both. This was the problem for the caller in San Francisco. He took as a premise that Bush had stolen the 2004 election, which would require a very large conspiracy. If that’s afoot, then all the surveys in Nate Silver’s model could be deliberately distorted to show Romney gaining after Oct. 4. Clearly, I do not agree with this, but then again, I would be part of the conspiracy, so why listen to me? More seriously, trust is fundamental, yet excessive or automatic trust is foolish. So the questions for all of us are: whom to trust, how much, and when?