A survey and a statistical study are two different things.
Let’s say you were looking to lease a store location to put up a fast food restaurant. There’s plenty of foot traffic around the place. So you ask 100 people passing by whether they would eat at a fast food place where you’re standing. 95 of them say yes. Is that enough data to give you the confidence to put up the business?
No. Just asking 100 people at random, without classification, gives you only a raw indication of preference. Once you sort these people out—by age and means, for example—you’ll find that children below 12 who may want to eat at a fast food place will not do so without the company of an adult who has the purchasing power to pick up the tab.
So next time you’ll still want to ask 100 people at random, but before you ask their preference you’d want to prequalify them for that question by asking if they are gainfully employed and, therefore, have money to spend.
In short, you tighten the parameters of your “sample” to define a demographic. Only people of similar character and circumstances are likely to behave in a similar way. Their preference will most likely reflect that of other people like them—including the many thousands of others you did not ask.
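The effect of prequalifying respondents can be illustrated with a small simulation. Everything here is hypothetical—the ages, employment rate, and preference rate are made-up numbers standing in for the restaurant example above, not real survey data:

```python
import random

random.seed(42)

# Hypothetical passersby. Age and employment are the prequalifying traits
# from the example above; all rates are illustrative, not real data.
passersby = [
    {"age": random.randint(8, 70),
     "employed": random.random() < 0.6,
     "would_eat": random.random() < 0.95}
    for _ in range(100)
]

# Raw, unqualified tally: everyone counts, including children who have
# no purchasing power of their own.
raw_yes = sum(p["would_eat"] for p in passersby)

# Prequalified tally: only gainfully employed adults, i.e. the demographic
# that can actually pick up the tab.
qualified = [p for p in passersby if p["age"] >= 18 and p["employed"]]
qualified_yes = sum(p["would_eat"] for p in qualified)

print(f"raw sample: {raw_yes}/100 said yes")
print(f"qualified sample: {qualified_yes}/{len(qualified)} said yes")
```

The point of the sketch is that the qualified sample is smaller but speaks for a defined demographic, while the raw count mixes people whose “yes” answers mean very different things.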
That’s the idea behind a VALID statistical sample. A lot of scientific considerations go into designing a sample. That’s right, you DESIGN a sample. You don’t just arbitrarily stumble across one.
It’s like when you mix hot water, coffee grounds, sugar and creamer—and mix them well—you produce a drinkable coffee solution. It is a perfectly homogeneous liquid. All the chemical properties of one teaspoonful of it are exactly the same as those of the whole mug. So there is nothing anomalous about drawing conclusions on the chemical makeup of the entire solution from your analysis of just one drop of it.
It’s fairly easy to achieve perfect homogeneity with chemicals. With humans, it’s a lot trickier.
It is not necessarily unintelligent to ask just 1,000 voters and let their preference theoretically reflect the minds of the millions of other Filipino voters. With the right survey preparation and scientific sampling, that is doable. The margin of error could be big or small, depending entirely on how carefully you designed your sample.
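The relationship between sample size and margin of error can be computed directly. A minimal sketch—the 1,000-respondent figure comes from the text above, while the 95% confidence level and the worst-case p = 0.5 are standard polling conventions, not something the author specifies:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n
    at roughly 95% confidence (z = 1.96). p = 0.5 maximizes p*(1-p),
    giving the most conservative estimate."""
    return z * math.sqrt(p * (1 - p) / n)

# With the 1,000-respondent sample mentioned above:
print(f"n=1000: ±{margin_of_error(1000):.1%}")  # roughly ±3.1%
# Quadrupling the sample only halves the error:
print(f"n=4000: ±{margin_of_error(4000):.1%}")
```

This is also why a claimed “zero error margin” is a mathematical impossibility: the error only reaches zero as n approaches the entire population.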
How painstakingly did you work to achieve as HOMOGENEOUS a sample as possible? How carefully did you phrase your question?
It would take too much time (maybe forever), too many factors, and therefore too many prequalifying questions to come up with as homogeneous a survey sample as possible. All respondents may be of voting age, but are they all registered? They may all be registered, but did they vote in the last election?
Once you eliminate all possible contaminants from your sample, the results you get from surveying it will be more accurate. Never 100%—but approximating the claimed margin of error as closely as you are honest in your own effort.
This is why I don’t trust all surveys, but especially any survey that claims a “zero error margin”—that is statistically impossible. Making such a claim shreds your credibility. Falsus in uno, falsus in omnibus—if you lie about one thing, you would lie about everything. Why should I believe your survey work?
On the other hand, there are some really good research institutions that employ real statisticians, people with formal mathematical backgrounds. They also have good psychologists and grammarians on staff. They know how to construct survey instruments—essentially questionnaires—where the language of each question is carefully structured to keep it from influencing the respondent’s opinion, or triggering a distorting predisposition on his part.
If the question were, “Would you vote for a president who will carry out policies to improve the economy?”—that’s a meaningless survey question, because who in his right mind would say no to that?
Yet it’s amazing how many surveys report that Filipinos support this candidate or another BECAUSE of his or her platform. Of course. Why would you support a candidate whose platform you don’t like? Since no platform would ever evoke a negative scenario, as a general rule you mustn’t use motherhood phrases in a survey questionnaire.
Collating survey answers is also radically different from interpreting them. There are trained psychometricians who combine the objective numbers with the known behavioral history of a given demographic group to interpret survey numbers. This highly subjective discretion can be abused.
For example, if the question was “do you prefer a candidate who promotes inclusion, or one who promotes unity?”
Let’s say more respondents preferred unity. You then go out and say BBM is “leading” because he is the standard bearer of the Unity Team. That’s an interpretative aberration because people who preferred unity may have just been responding to a universal positive virtue. They may not have BBM in their minds at all. It was YOU who made that association.
Before you say I’m unfairly bashing BBM, let me say that if more people preferred inclusivity, and you use that to conclude that Leni is leading because that’s her signature philosophy, then you would be committing the same abuse.
The point is, there’s more to a good, accurate and scientific survey than meets the eye. Collecting data is just one aspect of it. It is also an evolving science. Techniques for “blinding” the data, for example, are improving all the time.
What is “blinding?”
You shouldn’t go out interviewing people on the street for a survey wearing a Leni-Kiko T-shirt, or even anything pink. In our polite culture, not all but a substantial number of people will consciously or subconsciously respond in a way they perceive agrees with you—especially if they know you.
In other words, the expression “In a survey I conducted among my friends ...” is a dead giveaway that the result most probably reflects your state of mind more than your friends’.
The same is true if you wear a t-shirt with Ferdinand Marcos’ smiling face on it, even if the dead man is not on the ballot this May. You would still project a reinforcing subliminal stimulus for BBM, ruining the survey.
Gallup Poll, the number 1 pollster in the US, uses a “double-blind” setup, where the interviewer never knows the identity of the person responding, and the latter doesn’t know who’s asking him the questions. That’s why, to this day, Gallup conducts its surveys only by telephone, using a machine that dials numbers randomly.
Its interviewers follow a very carefully flowcharted conversation template that contains enough vetting questions to help the interviewer decide whether to proceed all the way to the main question, or to bail out earlier. Not all calls are successful; a hundred calls might yield only 5 or 6 that are qualified. At that rate, it can take some 20,000 calls to validly collate results from a sample comprising just 1,000.
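The yield arithmetic above works out as a simple back-of-the-envelope calculation. The 5-or-6-per-100 qualification rate comes from the text; the function name and everything else is illustrative:

```python
import math

def calls_needed(target_sample, qualified_per_100):
    """Average number of random calls needed to fill a sample when only
    a handful out of every 100 calls reach a qualified respondent."""
    return math.ceil(target_sample * 100 / qualified_per_100)

# At 5-6 qualified respondents per 100 calls, filling a 1,000-person
# sample takes on the order of twenty thousand calls:
print(calls_needed(1000, 5))  # 20000
print(calls_needed(1000, 6))  # 16667
```

In other words, a roughly 5% yield implies about twenty times as many dials as the final sample size—one reason rigorous random-digit-dial polling is expensive.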
And none of their surveys is commissioned by a candidate or political party. That’s the most important element of a legitimate scientific survey: it must be undertaken by a disinterested party.
After Googling the history and backgrounds of some of the more prominent survey entities in the Philippines, I am not convinced that there is even one that can claim to be disinterested enough to be credible.