As the instrument for collecting information from respondents, the questionnaire plays a central role in quant. It must address the research objectives; while this may sound obvious, it requires considerable thought and preparation and is crucial to the effectiveness of the research. Good questionnaire design also minimizes non-sampling bias and makes for an efficient and effective means of acquiring information.
Questionnaires typically open with a screener, followed by the main section pertaining to the research topic, and conclude with a section on demographics. The flow should be logical, moving from broader, wide-ranging questions to more specific, focussed ones. Sensitive questions should be placed towards the end.
The question sequence must be carefully considered to avoid order effects — biases caused by the sequencing of the questions. Order effects occur when a question earlier in the study influences the response to one or more questions that follow. Take, for example, the sequence of questions listed below:
Please rate the colour of this drink.
Please rate the smell of this drink.
Please rate the amount of cocoa in this drink.
Please rate the amount of sugar in this drink.
Please rate the size of chocolate chips in this drink.
What is your overall rating of this drink?
Placing the overall rating question at the end, a common error, biases the response to it. If, for instance, respondents have a positive (or negative) feeling for the attributes colour, smell, amount of cocoa and so on, that feeling will influence their response on overall rating. For this reason, questions on the overall rating of a product, service or experience should precede questions about the various attributes that influence the overall rating.
We also need to be mindful of halo effects. If the respondent has a positive (or negative) feeling for the first one or two attributes (colour, smell), this may cast a halo that affects her responses to some of the subsequent attributes (amount of cocoa, amount of sugar, etc.). The halo effect may be contained by randomizing the sequence of the attribute rating questions.
Order effects can also occur when the text of a question reveals previously unknown facts to respondents. For example, the question “Do you use Timotei shampoo?” makes respondents aware of the brand Timotei. Accordingly, unaided brand awareness questions are placed near the start of the questionnaire, and questions on factual and behavioural information come before questions relating to attitudes and opinions. In advertising tracking, for instance, brand awareness questions precede questions on purchase behaviour and brand image, and these are followed by questions pertaining to advertising awareness and diagnostics.
Besides the sequencing of questions, the sequencing of choices may also generate bias. This is referred to as order bias; it may occur whenever a respondent selects answers from a list in a multiple choice question. Given a choice of flavours, colours, ingredients and so on, respondents tend to select items that appear at the top or the bottom of the list. Order bias may be eliminated by randomizing the sequence of the items so that each item has the same likelihood of appearing in any given position in the list.
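Survey platforms typically handle this rotation automatically. Purely as an illustration of the idea, the sketch below randomizes an item list per respondent so that each item is equally likely to land in any position; the attribute names come from the drink example above, and the function name is hypothetical.

```python
import random

# Attribute list from the drink example in the text.
ATTRIBUTES = ["colour", "smell", "amount of cocoa",
              "amount of sugar", "size of chocolate chips"]

def randomized_order(items, seed=None):
    """Return a fresh random permutation of items.

    random.shuffle implements a Fisher-Yates shuffle, so every item
    has the same likelihood of appearing in any given position.
    An optional seed makes the permutation reproducible for testing.
    """
    rng = random.Random(seed)
    order = list(items)
    rng.shuffle(order)
    return order

# Each respondent sees the attribute questions (or answer choices)
# in an independently randomized sequence.
for respondent_id in (1, 2, 3):
    print(respondent_id, randomized_order(ATTRIBUTES))
```

The same routine serves both cases discussed in the text: rotating attribute rating questions to contain halo effects, and rotating answer choices to eliminate order bias.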
Questionnaires typically comprise mainly close-ended questions and some open-ended ones. Open-ended questions tend to be of the type “Do you have any suggestions on how to improve our service?” They solicit additional information and allow respondents greater freedom to express themselves. They often yield useful insights that help diagnose issues and interpret responses to close-ended questions, e.g. “Please share the reasons why you stopped using brand A.” Open-ended questions may also yield quotable comments (verbatims) that enrich the research findings.
Note, however, that the exploratory nature of open-ended questions makes them more amenable to qual than to quant. Unlike close-ended questions, which offer predetermined lists or categories of possible answers, open-ended questions are not as easy to manage in quant. They are harder to record, code, process, analyse and report, and consequently add to the expense of the study. For these reasons they should be used sparingly in quant studies.
Close-ended questions provide a set of answers from which the respondent must choose. Examples include dichotomous closed questions (for instance, Yes/No: “Did you drink tea today?”) and multiple choice questions. These questions are apt for quant, where we aim to close rather than expand the focus of inquiry. Responses are comparable across respondents, and they are quicker, easier and cheaper to administer in the field and to process in the office.
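Because the answer categories of a close-ended question are fixed in advance, processing reduces to a direct count, with no manual coding of free text. As a minimal sketch (the response data are invented for illustration):

```python
from collections import Counter

# Hypothetical pre-coded responses to the dichotomous question
# "Did you drink tea today?" -- one entry per respondent.
responses = ["Yes", "No", "Yes", "Yes", "No", "Yes"]

# Tabulation is a direct count over the predetermined categories.
tally = Counter(responses)
share_yes = tally["Yes"] / len(responses)

print(tally)        # frequency per category
print(share_yes)    # proportion answering "Yes"
```

An open-ended response, by contrast, would first have to be read, coded into categories and verified before any such tabulation is possible, which is where the extra expense arises.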
Close-ended questions are, however, rigid and may restrict some respondents to a list that does not apply to them. They also prevent respondents from expressing their complete feelings and experiences, which is why a few open-ended questions provide a good balance.
A good understanding of the subject is required to develop good questions and pre-code lists. Where familiarity is lacking, an exploratory qual study may be carried out prior to quant to determine the right questions and the anticipated responses. It also helps to pick up the terminology consumers commonly use for terms pertaining to the category.