
Can you trust poll results? Here’s what to look for… by Craig Griffin

In an increasingly data-driven world, public opinion polls are an important resource for journalists, as long as you are clear on whose opinion they reflect, what was asked and how the data has been interpreted. With much discussion over the best “exit strategy” for Covid-19 and with businesses and public services urgently considering their options, feedback from the general public can be very useful for policy and decision makers. After all, public opinion can be a critical force in shaping and transforming society, particularly in times of crisis, as long as we ask these three questions.

Whose opinion is this? Let’s start with sampling. You don’t need to drink a full bottle of wine to know its taste. Similarly, you don’t need to conduct a full census to understand the mood of a nation. Researchers usually try to interview a representative sample of the relevant population, which is as much about the quality of the sampling approach as the quantity of people interviewed – bigger is not always better. There are two commonly used sampling methods. The first is random sampling – as the name suggests, people are selected randomly from the relevant population of interest. The other is quota sampling, which applies quotas to specific criteria, usually demographic, to ensure the mix of people surveyed is in proportion to the overall population. There are many ways to interview people (face to face, telephone and online) and all are valid ways to conduct opinion polling.
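To make the distinction concrete, here is a minimal Python sketch (purely illustrative – the population frame, age groups and sizes are invented, not taken from any real survey) contrasting the two approaches:

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical population frame of 10,000 people, tagged by age group
# (the groups and sizes are invented for illustration only).
population = [
    {"id": i, "age_group": random.choice(["18-29", "30-44", "45-59", "60+"])}
    for i in range(10_000)
]

# Random sampling: every member of the frame has an equal chance of selection.
random_sample = random.sample(population, 1_000)

# Quota sampling: recruit respondents until each age group's quota
# (proportional to its share of the population) is filled.
target = 1_000
quotas = {
    group: round(count / len(population) * target)
    for group, count in Counter(p["age_group"] for p in population).items()
}
quota_sample, filled = [], Counter()
for person in random.sample(population, len(population)):  # a shuffled stream of would-be respondents
    group = person["age_group"]
    if filled[group] < quotas[group]:
        quota_sample.append(person)
        filled[group] += 1
    if len(quota_sample) >= sum(quotas.values()):
        break

print("Random sample mix:", Counter(p["age_group"] for p in random_sample))
print("Quota sample mix: ", Counter(p["age_group"] for p in quota_sample))
```

Both samples end up with roughly the same demographic mix; the difference lies in how respondents are recruited, not in the final headcount.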

Many surveys aren’t conducted by research professionals and don’t follow scientifically valid approaches. Surveys run through social media, phone-in polls and links on websites should probably be dismissed out of hand, or at best treated with a healthy dose of skepticism. Because there is no control over who responds, they usually produce a sample that doesn’t represent the views of the broader population. A small sample based on a scientific sampling approach is much better than a large self-selecting sample.

What were they asked? The second critical factor is the wording of questions. While there’s certainly an art to writing good questions, there are certain things to look out for. Questions should be simple, clear and precise. They shouldn’t deliberately (or inadvertently) lead people to answer in a particular way – either through the question itself or the range of answers provided. They shouldn’t assume knowledge that the respondent may not have. Surveys conducted in times of crisis should be sensitive to respondents’ concerns and their willingness and ability to answer certain questions. For example, a recent survey asked what the Thai public had done to protect themselves during the crisis and offered a range of possible answers; self-reported behavior is often inaccurate, particularly when the answers provided imply a social norm (such as mask wearing or social distancing).

It’s also good practice to find out who has commissioned a survey to identify potential bias – sometimes implicit in the design of the questions. A good rule of thumb in research design is that you shouldn’t be able to tell the sponsor of the research from the wording of the questions. More commonly, bias can be found in the reporting of the data. The Liberal Democrats in the UK were widely criticized in the 2019 election for selective use of data.

How has the data been interpreted? What should we look for in the analysis, interpretation and reporting of the data? Every survey has a margin of error. For example, a figure of 50% from a random sample of N=1,000 of the Thai population has to be interpreted as 50% +/- 3.1% (at the commonly used 95% confidence level). In simple terms, this means you can be 95% sure that the real result, had you conducted a census, could be up to about 3 percentage points higher or lower than the 50% reported in the survey. There are a couple of implications here. First, it’s ludicrous to report percentages to two or even one decimal place – just round to the nearest whole percentage. Second, when comparing two or more numbers, whether answers to a question or answers from different sub-groups (e.g. opinions of men vs. women), make sure the difference is statistically significant, considering the possible margin of error. In other words, small differences may not be worth highlighting as they could be due to sampling error.
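The arithmetic behind these figures is straightforward. Here is a short Python sketch (illustrative only; the subgroup percentages and sizes in the second example are invented) that reproduces the +/- 3.1% figure and shows a rough check of whether a gap between two percentages exceeds sampling error:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random sample of size n,
    at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# The worked example from the text: 50% reported from a random sample of N=1,000.
print(f"+/- {margin_of_error(0.50, 1000):.1%}")  # -> +/- 3.1%

def difference_is_significant(p1, n1, p2, n2, z=1.96):
    """Rough two-sample check: is the gap between two poll percentages larger
    than its combined sampling error at the 95% level?"""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) > z * se

# e.g. 52% of 500 men vs. 48% of 500 women: within sampling error, not worth a headline.
print(difference_is_significant(0.52, 500, 0.48, 500))  # -> False
```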

You also need to consider whether an individual question has been “filtered” (only asked of certain respondents). This is a common and reasonable practice but can affect the interpretation of the results. Sometimes data is weighted before being reported, usually because too few people from a certain group (for example, 45-54 year olds) ended up in the survey. The more the data is weighted, the greater the margin of error and the less certain the results.
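As a rough illustration of why heavy weighting widens the margin of error, the sketch below weights up an under-represented age group and reports the “effective” sample size using Kish’s approximation (the specific figures, and the choice of Kish’s formula, are assumptions made for the example, not something from the article):

```python
# Illustrative only: 45-54 year olds are assumed to be 20% of the population
# but only 10% of a survey of 1,000 respondents, so they are weighted up.
n = 1_000
sample_share, population_share = 0.10, 0.20

weight_under = population_share / sample_share             # 2.0  for the under-represented group
weight_rest = (1 - population_share) / (1 - sample_share)  # ~0.89 for everyone else

weights = [weight_under] * int(n * sample_share) + [weight_rest] * int(n * (1 - sample_share))

# Kish's effective sample size: the more uneven the weights, the smaller it gets,
# and the wider the real margin of error becomes.
n_effective = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"Nominal n = {n}, effective n ≈ {n_effective:.0f}")  # roughly 900
```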

In summary, check the source: Before evaluating any poll, ask for details of the sampling approach used (together with any weighting applied), a profile of the people included in the survey, the date when the survey was conducted and the detailed results in the form of data tables (including exact details of how the questions were asked, and to whom). In many countries, it’s common practice to make this available.

Here is an example of an article in which the data tables are included at the bottom, providing full transparency into how the research was executed and the possible margin of error, and allowing readers to reach their own conclusions (print media can instead provide the web address where the data tables can be found). A visual is used to support the key findings, which is also good practice.

Here and here are two (randomly selected) recent pieces of reporting of opinion polls in Thailand. While the dates and sample size are shown, there is no information on the sampling approach used, the profile of the people surveyed, or how the questions were asked. The use of decimal places is unnecessary. Polls may not be perfect, but when done correctly, they are an important way of measuring what the public thinks and can play a positive role in a democratic society.

[Craig Griffin is Thailand representative of ESOMAR, a global association for members of the data, research and insights community, and an adjunct lecturer at Thammasat University teaching research methods. He runs Bangkok-based FUEL Research & Consulting. This article was originally written for the Foreign Correspondents’ Club in Thailand; it appeared in their bulletin dated 29th April and you can find it on Craig’s blog here.]
