AP Stylebook on polls and surveys
Stories based on public opinion polls must include the basic information for an intelligent evaluation of the results. Such stories must be carefully worded to avoid exaggerating the meaning of the poll results.
Information that should be in every story based on a poll includes the answers to these questions:
1. Who did the poll and who paid for it? (The place to start is the polling firm, media outlet or other organization that conducted the poll. Be wary of polls paid for by candidates or interest groups. The release of poll results is often a campaign tactic or publicity ploy.
Any reporting of such polls must highlight the poll's sponsor, so that readers can be aware of the potential for bias from such sponsorship.)
2. How many people were interviewed? How were they selected? (Only a poll based on a scientific, random sample of a population in which every member of the population has a known probability of inclusion can be used as a reliable and accurate measure of that population's opinions. Polls based on submissions to Web sites or calls to 900-numbers may be good entertainment but have no validity. They should be avoided because the opinions come from people who select themselves to participate. If such unscientific pseudo-polls are reported for entertainment value, they must never be portrayed as accurately reflecting public opinion and their failings must be highlighted.)
3. Who was interviewed? (A valid poll reflects only the opinions of the population that was sampled. A poll of business executives can only represent the views of business executives, not of all adults. Surveys conducted via the Internet, even if attempted in a random manner rather than based on self-selection, face special sampling difficulties that limit how the results may be generalized, even to the population of Internet users. Many political polls are based on interviews only with registered voters, since registration is usually required for voting. Close to the election, polls may be based only on "likely voters." If "likely voters" are used as the base, ask the pollster how that group was identified.)
4. How was the poll conducted, by telephone or some other way? (Avoid polls in which computers conduct telephone interviews using a recorded voice. Among the problems of these surveys are that they do not randomly select respondents within a household, as reliable polls do, and they cannot exclude children from polls in which adults or registered voters are the population of interest.)
5. When was the poll taken? (Opinion can change quickly, especially in response to events.)
6. What are the sampling error margins for the poll and for subgroups mentioned in the story? (The polling organization should provide sampling error margins, which are expressed as "plus or minus X percentage points," not "percent." The margin varies inversely with sample size: the fewer people interviewed, the larger the sampling error. Although some pollsters state sampling error or even poll results to a tenth of a percentage point, that implies a greater degree of precision than is possible from a sample; sampling error margins should be rounded to the nearest half point and poll results to the nearest full point. If the opinions of a subgroup, such as women, are important to the story, the sampling error for that subgroup should be included. Subgroup error margins are always larger than the margin for the entire poll; see the sketch after this list.)
7. What questions were asked and in what order? (Small differences in question wording can cause big differences in results. The exact text of the questions need not be in every poll story unless the wording is crucial or controversial.)
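As a rough illustration of point 6, the following Python sketch shows how the sampling error margin shrinks as the sample grows and how the rounding rule applies. It assumes the common 95 percent confidence approximation for a simple random sample (1.96 times the square root of 0.25 divided by the sample size); that formula is an illustrative assumption, not part of the AP guidance, and the margin published by the polling organization should always take precedence.

import math

def sampling_error_margin(sample_size):
    # Worst-case 95 percent confidence margin for a simple random sample,
    # in percentage points (assumed approximation, not an AP formula).
    margin = 1.96 * math.sqrt(0.25 / sample_size) * 100
    # Round to the nearest half point, per the rounding guidance above.
    return round(margin * 2) / 2

print(sampling_error_margin(1000))  # 3.0 points for the full sample
print(sampling_error_margin(500))   # 4.5 points for a subgroup half that size

The second call shows why subgroup error margins are always larger than the margin for the entire poll: the subgroup is, by definition, a smaller sample.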
When writing and editing poll stories, pay close attention to these areas:
Do not exaggerate poll results. In particular, with pre-election polls, these are the rules for deciding when to write that the poll finds one candidate is leading another:
If the difference between the candidates is more than twice the sampling error margin, then the poll says one candidate is leading.
If the difference is less than the sampling error margin, the poll says that the race is close, that the candidates are "about even." (Do not use the term "statistical dead heat," which is inaccurate if there is any difference between the candidates; if the poll finds the candidates are tied, say they're tied.)
If the difference is at least equal to the sampling error but no more than twice the sampling error, then one candidate can be said to be "apparently leading" or "slightly ahead" in the race.
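Taken together, these rules form a simple decision procedure. The Python sketch below restates them literally; the function name and the percentage-point inputs are placeholders chosen for illustration, not part of the stylebook.

def characterize_race(candidate_a, candidate_b, margin):
    # candidate_a and candidate_b are poll percentages; margin is the poll's
    # sampling error margin; all values are in percentage points.
    difference = abs(candidate_a - candidate_b)
    if difference == 0:
        return "tied"                # say they're tied, never "statistical dead heat"
    if difference > 2 * margin:
        return "leading"             # one candidate is leading
    if difference < margin:
        return "about even"          # the race is close
    return "apparently leading"      # at least the margin, but no more than twice it

print(characterize_race(48, 44, 3))  # "apparently leading": 4 points is within twice a 3-point margin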
Comparisons with other polls are often newsworthy. Earlier poll results can show changes in public opinion. Be careful when comparing polls from different polling organizations; differences in technique can cause differing results.
Sampling error is not the only source of error in a poll, but it is one that can be quantified. Question wording and order, interviewer skill and refusal to participate by respondents randomly selected for a sample are among potential sources of error in surveys.
No matter how good the poll, no matter how wide the margin, the poll does not say one candidate will win an election. Polls can be wrong and the voters can change their minds before they cast their ballots.
Thanks to John Bolt of the AP for this material.