Don’t Know? Don’t Know
How does presenting the Don’t Know option affect survey estimates?
In self-administered web surveys, how should researchers present the ‘Don’t Know’ option to promote accurate responses? This question sat at the heart of a methodology seminar held by NatCen and the European Social Survey.
Demands of a survey
There are two demands on a well-designed questionnaire:
- An appropriate answer option should be available for all respondents;
- People answering the survey should be motivated to give the most appropriate response.
These demands can come into conflict when trying to measure a lack of knowledge.
In interviewer-administered surveys, ‘Don’t Know’ is often not given as an explicit choice and must be volunteered spontaneously by the respondent. Sometimes the interviewer then probes this initial Don’t Know response with further questions, to see whether the respondent can give a more substantive answer.
This is much more difficult in self-administered surveys, where there is no interviewer to guide the respondent. Giving a Don’t Know option can lead to satisficing, a pleasant word for a troubling concept in survey research. People who satisfice do not give true answers reflective of their real beliefs: they click through the survey to get it done.
One solution is to use the interactivity of the survey format: withhold the DK option initially, and only display it when the respondent attempts to skip the question. Respondents may not be aware of this functionality, which makes the DK option harder to select.
Another solution is to display the Don’t Know option upfront and emulate the probing interviewer with a subsequent prompt. Respondents may be put off selecting the DK option once the additional burden becomes clear, and prompting could push some people into giving false substantive responses.
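These two designs can be sketched as a small piece of question-flow logic. The function name, option labels, and survey engine below are hypothetical, invented for illustration, not any organisation's implementation:

```python
# Sketch of which answer options a respondent sees under each
# treatment (hypothetical engine; names and labels are assumptions).

SUBSTANTIVE = ["Agree", "Neither agree nor disagree", "Disagree"]

def options_shown(treatment: str, skip_attempted: bool) -> list[str]:
    """Return the answer options displayed for one question."""
    if treatment == "upfront":
        # DK is always visible alongside the substantive options.
        return SUBSTANTIVE + ["Don't know"]
    if treatment == "reactive":
        # DK appears only after the respondent tries to move on
        # without answering.
        return SUBSTANTIVE + (["Don't know"] if skip_attempted else [])
    # Hidden treatment: substantive options only.
    return SUBSTANTIVE

print(options_shown("reactive", skip_attempted=False))
print(options_shown("reactive", skip_attempted=True))
```

In the reactive arm, Don’t Know only enters the option list after a skip attempt, which is why respondents who never try to skip may not discover it exists.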
There are outstanding questions in the literature:
- How does the presentation of DK options affect the quantity and quality of those answers?
- Does this differ between question types?
- How does it affect the distribution of ‘substantive’ responses, especially in bipolar scales with mid-points?
- When DK options are made less visible, do we see more ‘non-attitudes’?
“I really don’t know”
Bernard Steen (NatCen) gave the opening talk. NatCen used their panel to test four treatments:
- DK Hidden (No Explanation): the DK option is initially hidden, and the functionality is not explained to respondents.
- DK Hidden (With Explanation): the DK option is initially hidden, but respondents are told at the start how to make it appear.
- DK Upfront (No Prompt): respondents can explicitly see the Don’t Know option, and there is no prompt following its selection.
- DK Upfront (Polite Prompt): respondents can see the DK option; once it is chosen, a prompt tries to elicit a more substantive response, with ‘I really don’t know’ also offered.
Five agree-disagree questions were used, with a closed probing question following the selection of the DK option. The rate of Don’t Know responses was much higher when the option was offered upfront than when it was hidden.
Even counting only the upfront DK responses that genuinely reflected a lack of knowledge, the difference suggests that hiding the DK option means missing some people who really don’t know. In the upfront treatment, prompting those who initially say they Don’t Know generally seems to lead to better use of that option, though more testing is needed.
Fewer people selected the ‘Neither agree nor disagree’ choice when the Don’t Know option was offered upfront. This is further evidence that respondents use midpoints for reasons other than expressing a neutral attitude, even when a Don’t Know option is available.
Bernard Steen suggests this is because some people define midpoints negatively, as one open response stated:
I do not have enough knowledge to say agree or disagree.
MORI on mobiles
James Thom (Ipsos MORI) presented a similar experiment for mobile surveys. Three treatments were tested: an explicit (upfront) Don’t Know option; no DK option at all; and a reactive treatment where the DK option appeared only if the person tried to skip the question.
Respondents did not know which treatment they had been assigned to. Rates of failing to finish the survey were similar across treatments, perhaps unsurprisingly, as Ipsos MORI pays on completion.
The reactive treatment did not work well: 64% of respondents were unaware that they could have chosen DK or skipped the question if they wanted to.
Ipsos MORI asked: “Is Britain’s electoral system based on proportional representation?” Whilst the intended correct response is ‘No’, the answer depends on what is meant by “electoral system”. Some people selecting DK appear genuinely not to know. In the explicit condition, splitting the DK answers in half and assigning them to ‘Yes’ and ‘No’ yields a ‘No’ share similar to the other treatments.
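The split-half check can be reproduced with a few lines of arithmetic. The percentages below are invented for illustration; they are not Ipsos MORI’s figures:

```python
# Illustrative split-half reallocation of Don't Know responses.
# These percentages are made up for the example.

explicit = {"Yes": 30.0, "No": 50.0, "Don't know": 20.0}

# Assign half the DK share to 'Yes' and half to 'No'.
reallocated = {
    "Yes": explicit["Yes"] + explicit["Don't know"] / 2,
    "No": explicit["No"] + explicit["Don't know"] / 2,
}

print(reallocated)  # {'Yes': 40.0, 'No': 60.0}
```

If the reallocated ‘No’ share resembles the ‘No’ share in the treatments without an explicit DK option, that is consistent with DK selectors in the explicit arm being roughly random guessers elsewhere.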
Adding the explicit Don’t Know and ‘Prefer not to answer’ options reduced the time taken to complete the survey, with more people picking those options. There were some effects on substantive choices, generally stripping out people who were genuinely unsure. It also pushes estimates away from people guessing, satisficing, or using some other heuristic to choose when they do not have enough knowledge to answer (‘non-attitudes’).
Kantar’s Innovation Panel test
Tim Hanson (Kantar) gave the final presentation. Kantar’s test was run on the eleventh wave of the Understanding Society Innovation Panel.
Three treatments were tested: a reactive treatment, where the Don’t Know and ‘Don’t want to answer’ options are hidden initially but appear if the question is skipped; the same functionality, but with respondents told how to make those options appear; and the two options offered explicitly. A follow-up question was asked when there was an apparent contradiction between a respondent’s attitude and knowledge answers.
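The trigger for that follow-up can be thought of as a simple consistency check between two earlier answers. This is a sketch only; the option wordings and function name are assumptions, not Kantar’s implementation:

```python
# Sketch of a contradiction check between a knowledge question and an
# attitude question (illustrative; option labels are assumed).

NON_SUBSTANTIVE = {"Don't know", "Don't want to answer"}

def needs_followup(knowledge_answer: str, attitude_answer: str) -> bool:
    """Flag respondents who say they know nothing about a topic
    but still gave a substantive opinion on it."""
    return (knowledge_answer == "Nothing at all"
            and attitude_answer not in NON_SUBSTANTIVE)

# Claiming no knowledge while picking a substantive option routes
# the respondent to the open follow-up question.
print(needs_followup("Nothing at all", "Benefits outweigh risks"))  # True
print(needs_followup("Nothing at all", "Don't know"))               # False
```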
There were low rates of DK responses across the health questions. The rate was much higher for attitudinal questions with low salience, and for these questions there were large differences between treatments. This affected the middle option in Kantar’s question on the benefits and risks of nuclear energy, with more people selecting ‘About the same’ when the DK option was hidden.
Kantar asked people who said they knew ‘nothing at all’ about nuclear energy why they had nonetheless offered an opinion about its benefits and risks. Where the DK option was hidden, 63% of those saying they knew ‘nothing at all’ offered an opinion anyway, compared with 20% when the DK option was explicit. As in the other experiments, making it harder to say Don’t Know risks capturing non-attitudes in substantive options.
The follow-up question from Kantar, where people can openly type their answers, was:
You say that you know nothing about nuclear energy but earlier gave a view on whether the benefits of nuclear energy outweigh the risks. Please can you say why you did not respond ‘don’t know’ to this question? Any information you can provide will help us improve our questions in future.
The most common response to this question was: Don’t Know. People did not know why they had not selected Don’t Know. Some of the other answers were revealing about how people think:
Because I feel that we should be using greener energy resources. Even though I don’t know anything about nuclear energy, I do know that it’s not good for the planet!
Another person wrote:
I don’t know about nuclear energy but am sure it’s safe.
Varying how the Don’t Know option is presented can affect survey estimates. It is sometimes unclear how best to proceed, though hiding these options risks people expressing non-attitudes.