Poll survey controversy, Part II
The controversy over pre-election surveys continues to rage as many have joined the fray and more information has poured in that sheds light on public opinion polling. I yield a portion of this space today to a second bulletin from former senator Francisco Tatad which I found revealing:
New surveys raise more questions
The release last March 5 by Pulse Asia of the results of its February pre-election survey comes at a time when there is widespread public questioning of the methods and practices of opinion polling firms and the injurious effects of surveys on the election campaign…The media and the public should not be deluded by the positional or percentage changes in the horse race into thinking that the Pulse Asia survey has improved in methodology or that the malpractices have been corrected. They have not.
1. Pulse Asia evades disclosure requirement
In issuing the results of its February survey, Pulse Asia continues to ignore our demand for public disclosure of the names of all those sponsoring its Ulat ng Bayan surveys. It invokes the confidentiality of its contracts and eludes its professional and ethical responsibility.
In the US and other countries, disclosure of the identity of survey sponsors is a must prior to publication. It is demanded by the professional associations to which pollsters belong; it is also demanded by the media organizations that report the results. In the Philippines, this requirement is unconscionably cast aside.
Pulse Asia cannot escape this responsibility by saying that its February survey, and its previous surveys, are “not singularly sponsored”, because in fact, they have multiple sponsors.
Neither should it posture that releasing survey results is “a public service” or an “academic responsibility”, because the fact that the survey is sponsored and undertaken for a fee colors it with the self-interest of both pollster and sponsors.
To reiterate our position: if the sponsors do not want their identities revealed, then Pulse Asia may not disclose to the public the survey results. In the case of multiple sponsorships, the disclosure of all sponsors should be mandatory prior to release. It should not select only those willing to come out in the open.
2. Undecided voters suppressed in surveys
Secondly, poll survey firms are suppressing the real number of undecided voters because of the way they frame the vote-choice question, thereby making the results flawed and inaccurate.
In late January, with the election still three months away and the campaign not even officially begun, SWS reported a mere two percent of voters undecided and Pulse Asia four percent undecided. In May 2009, with the election still a year away, Pulse Asia reported a similar four percent of voters undecided. Ironically, now, when the election date is closer, Pulse Asia is reporting a slightly higher percentage of undecided — six percent.
The number of undecided voters is perennially understated or suppressed because Pulse Asia, SWS and other local pollsters continue to frame the vote-choice question the same way pollsters did in the 1948 US election: “If the elections were held today, who would you vote for among these candidates?”
Former Gallup executive and social scientist David Moore, who has led the way in exposing this Achilles heel of election polling, says that this “forced choice” question does not take into account that statistically and realistically, most voters are undecided up to the final days when they actually have to vote. It forces respondents to choose in a hypothetical way. Consequently, they get answers that do not reflect real voter sentiments.
Moore says that in actual tests conducted in the US, the forced-choice question produces undecided answers of five percent or lower. But when respondents are offered a clear option of saying that they haven’t decided yet, the number of undecided can range from 20 percent to as high as 70 percent, depending on how far away the election is. He says that when the election is still three months away, it’s safe to assume that some 30 percent are still undecided.
Because of the insistence of pollsters since 1948 on using the forced-choice question, Moore says in his recent book, The Opinion Makers, that pollsters “do not measure public opinion, they manufacture it.”
Another view from an American pollster
I received this comment from my friend Pete Brodnitz, partner at Benenson Strategy Group and 2007 Pollster of the Year.
“A wide range of criticisms have been leveled at polling techniques recently in the Philippine press. They have included discussion of whether Philippine polls are up to international standards, and whether any polls at all are accurate.
I have conducted polls around the world in a wide variety of countries, languages and cultures. The fact is that polls work. They are not infallible. Exit polls – polls conducted on the day of an election – are a particularly difficult type of poll to conduct accurately but most polls are not exit polls. So while there are clear examples of polls being off the mark, there are many more examples of polls that are quite accurate. And many of the most widely cited cases of inaccurate polls are dated, exit polls rather than pre-election polls, or polls conducted during highly volatile elections where opinion is in flux.
The quality of polls around the world is generally high even though the methods used to conduct the polls – including differences in how interviews are conducted (phone, door-to-door, Internet) and how questions are structured – vary widely. In every country, there is a mix of high-quality and not-so-high-quality polls. And in every country, questions are routinely and wisely raised about whether a particular poll is accurate or not, biased or not. But the fact is that when a preponderance of polls indicate that certain attitudes or vote preferences are common, the odds are good that this is accurate.
Polls are commonly described as a snapshot in time, but too often people forget to take this into account when reading polls. While some polls are inaccurate, I believe the more common reason why some polls don’t reflect the winner of an election is that voters’ attitudes continue to change up until election day. So while voters are right to be critical poll readers and right to reject the idea that any poll predicts the future, they would be wrong to assume they cannot learn about the attitudes of their fellow citizens by reading the polls.”
Any reply that Pulse Asia, SWS and other pollsters would like to make to Mr. Tatad is welcome and will be similarly published in this column.