skepticism about surveys

I keep encountering new reasons to be skeptical about survey results. The moral is not to dismiss everything you read that’s based on polling, but to use surveys very carefully.

1. We looked at the percentage of young people who said they were registered at several moments during each of the last four presidential campaigns. The results show a lot of up-and-down movement, including quick declines of as much as 10 points. This doesn’t make sense, since people register during the campaign season and don’t lose or drop their registration in large numbers. Furthermore, self-reported registration rates in any given month do not predict turnout in November–at all. For example, self-reported registration rates were consistently the highest in 1996, the year when we saw the lowest youth turnout ever. September of 2000 looked terrible, but then the registration rate rose to the highest ever recorded in November of 2000–even though actual turnout was poor that year. The registration number seems to move randomly and isn’t meaningful.

Since all election polls use registration questions to screen voters, this finding should make one skeptical of horse-race polls.

2. Some states (e.g., Michigan and Minnesota) collect hard data about voters, such as their ages. In these states, one can compare the demographics of the actual voters against exit poll data. We have found striking discrepancies in past years. Presumably, problems arise because people are not equally likely to participate in exit polls, and many now vote absentee.

3. When pollsters call random phone numbers, in theory they should reach a representative sample of Americans. In fact, as I know from bitter personal experience, they tend to reach samples whose demographics differ greatly from the Census Bureau’s–and not in predictable ways. Therefore, pollsters almost always “weight” their samples. If they reach half as many African American males as they should, then each Black man in the sample counts for two. But there are huge questions about which variables one should weight on, and by how much.
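The weighting step described above can be sketched as a simple post-stratification adjustment. This is a hypothetical illustration, not any pollster’s actual procedure; the group names and shares are invented for the example.

```python
# Hypothetical post-stratification weighting: each respondent's weight is
# the ratio of the group's population share (e.g., from Census figures) to
# its share in the raw sample. Shares below are invented for illustration.

population_share = {"black_men": 0.06, "everyone_else": 0.94}
sample_share     = {"black_men": 0.03, "everyone_else": 0.97}

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# A group reached at half its true rate gets weight 2.0,
# matching the "each Black man counts for two" example above.
print(weights["black_men"])  # 2.0
```

The hard part, as the post says, is not this arithmetic but deciding which variables to weight on and what the target shares should be.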

I put more faith in trends than in snapshots. For example, I’m very skeptical of claims like “Bush has 52% of the vote,” because they depend on calculations about who is registered; but I’m more persuaded that Kerry has lost three points since the last Gallup poll. However, even an apparently identical survey does not give you comparable results if the sample is weighted differently each time.

One can improve the quality of a survey by spending the time and money necessary to reach a high proportion of the people who were on your original, random list of phone numbers; or even better, by supplementing phone calls with home visits. Such efforts will be reflected in a high “response rate,” such as we see in Census polls. But the response rates of other polls are rarely disclosed and vary enormously. Many respectable firms have disturbingly low response rates. I think the lesson is to distinguish between a few solid polls and many dubious ones, and to pay attention only to the former.

2 thoughts on “skepticism about surveys”

  1. Michael Weiksner


    Mathematically speaking, your faith in trends may be misplaced.

    If you are looking at a trend drawn from two poll results, the random error compounds rather than cancels out. For example, 53% Bush favorability +/- 3 points on September 1 and 50% +/- 3 points on September 15 could actually reflect anything from a 9-point negative swing to a 3-point positive swing.
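    The arithmetic behind that range can be checked directly. The worst-case bounds simply add the two margins; if the two samples are independent, the conventional statistical margin on the difference is smaller, roughly the root-sum-of-squares of the individual margins.

```python
import math

p1, moe1 = 53.0, 3.0  # September 1 poll: estimate and margin, in points
p2, moe2 = 50.0, 3.0  # September 15 poll

observed_swing = p2 - p1  # -3 points

# Worst case: both polls off by their full margins in opposite directions.
worst_case = (observed_swing - (moe1 + moe2),
              observed_swing + (moe1 + moe2))  # (-9, +3), as in the comment

# For independent samples, margins combine in quadrature, not by addition.
rss_margin = math.sqrt(moe1**2 + moe2**2)  # about 4.24 points
```

    So the comment’s -9 to +3 range is the worst case; an independence assumption would narrow it to roughly -3 ± 4.24.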

    True, if you have multiple data points to create a trend, then you actually can mathematically reduce the random error.
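    As a rough illustration of that point (assuming independent polls of equal size and equal margins, which real poll series rarely are), the margin on an average of n polls shrinks by a factor of the square root of n:

```python
import math

moe_single = 3.0  # hypothetical margin of error of one poll, in points
n_polls = 9       # number of independent polls being averaged

# For independent samples of equal size, the standard error of the mean
# shrinks by sqrt(n), so the margin on the averaged estimate does too.
moe_average = moe_single / math.sqrt(n_polls)
print(moe_average)  # 1.0
```

    Averaging nine such polls cuts a 3-point margin to 1 point, which is why multi-poll trends can beat any single snapshot on random error.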

    However, trends are even more susceptible to nonrandom error (e.g., bias). The prime example here is Jesse Ventura’s successful run for Governor of Minnesota, when his growing support among young voters went unmeasured and unreported because it was weighted out of the likely-voter polls leading up to the election.

    Finally, there is the unavoidable selection bias. The refusal rate for telephone polls can be as high as 60 to 80 percent. There is no way to weight what are essentially 50 million panelists to represent the actual 250 million US population.
