The devil is in the detail: reflections on the value and application of cognitive interviewing to strengthen quantitative surveys in global health. K Scott, O Ummer, A E LeFevre. Health Policy and Planning, Volume 36, Issue 6, July 2021, Pages 982–995, https://doi.org/10.1093/heapol/czab048
Abstract: Cognitive interviewing is a qualitative research method for improving the validity of quantitative surveys, which has been underused by academic researchers and monitoring and evaluation teams in global health. Draft survey questions are administered to participants drawn from the same population as the respondent group for the survey itself. The interviewer facilitates a detailed discussion with the participant to assess how the participant interpreted each question and how they formulated their response. Draft survey questions are revised and undergo additional rounds of cognitive interviewing until they achieve high comprehension and cognitive match between the research team’s intent and the target population’s interpretation. This methodology is particularly important in global health when surveys involve translation or are developed by researchers who differ from the population being surveyed in terms of socio-demographic characteristics, worldview, or other aspects of identity. Without cognitive interviewing, surveys risk measurement error by including questions that respondents find incomprehensible, that respondents are unable to accurately answer, or that respondents interpret in unintended ways. This methodological musing seeks to encourage a wider uptake of cognitive interviewing in global public health research, provide practical guidance on its application, and prompt discussion on its value and practice. To this end, we define cognitive interviewing, discuss how cognitive interviewing compares to other forms of survey tool development and validation, and present practical steps for its application. These steps cover defining the scope of cognitive interviews, selecting and training researchers to conduct cognitive interviews, sampling participants, collecting data, debriefing, analysing the emerging findings, and ultimately generating revised, validated survey questions. We close by presenting recommendations to ensure quality in cognitive interviewing.
Keywords: Cognitive interviewing, survey research, validity, methodological innovation, qualitative research
Introduction
This methodological musing calls attention to cognitive interviewing, a qualitative research methodology for improving the validity of quantitative surveys that has often been overlooked in global public health. Cognitive interviewing is ‘the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends’ (Beatty and Willis, 2007). This methodology helps researchers see survey questions from the participants’ perspectives rather than their own by exploring how people process information, interpret the words used and access the memories or knowledge required to formulate responses (Drennan, 2003).
Cognitive interviewing methodology emerged in the 1980s out of cognitive psychology and survey research design, gaining prominence in the early 2000s (Beatty and Willis, 2007). Cognitive interviewing is widely employed by government agencies in the preparation of public health surveys in many high-income countries [e.g. the Collaborating Center for Questionnaire Design and Evaluation Research in the Centers for Disease Control and Prevention (CDC)/National Center for Health Statistics (2014) and the Agency for Healthcare Research and Quality in the Department of Health and Human Services (2019) in the USA, and the Care Quality Commission (2019) for the National Health Service Patient Surveys in the UK]. Applications in the global public health space are emerging, including validating measurement tools undergoing primary development in English and for use in English [e.g. to measure family response to childhood chronic illness (Knafl et al., 2007)]; supporting translation of scales between languages [e.g. to validate the London Measure of Unplanned Pregnancy for use in the Chichewa language in Malawi (Hall et al., 2013)]; and assessing consumers’ understanding and interpretation of, and preferences for, how information is displayed [e.g. healthcare report cards in rural Tajikistan (Bauhoff et al., 2017)]. However, this methodology remains on the periphery of survey tool development by university-based academic researchers and monitoring and evaluation teams working in global health; most surveys are developed, translated and adapted without cognitive interviews, and publications of survey findings rarely report that cognitive interviews took place as part of tool development.
Box 1. Context: respectful maternity care in rural central India
We used cognitive interviewing to examine survey questions for rural central India, adapted from validated instruments to measure respectful maternity care used in Ethiopia, Kenya and elsewhere in India. This process illuminated extensive cognitive mismatch between the intent of the original questions and how women interpreted them, which would have compromised the validity of the survey’s findings (Scott et al., 2019). Two examples are provided here.
Cognitive interviews revealed that hypothetical questions were interpreted in unexpected ways
A question asked women whether they would return to the same facility for a hypothetical future delivery. The researchers intended the question to assess satisfaction with services. Some women replied no, and, upon probing, explained that their treatment at the facility was fine but that they had no intention of having another child. Other women said yes, despite experiencing some problematic treatment, and probing revealed that they said this because they were too poor to afford to go anywhere else.
Cognitive interviews revealed that Likert scales were inappropriate
The concept of graduated agreement or disagreement with a statement was unfamiliar and illogical to respondents. Women did not understand how to engage with the Likert scales we tested (5-, 6- and 10-point scales, using numbers, words, colours, stars, and smiley faces). Most respondents avoided engaging with the Likert scales, instead responding in dichotomous terms (yes/no, agree/disagree, happened/did not happen), despite interviewers’ attempts to invite them to convert their replies into Likert responses. For example, when asked to respond on a 6-point Likert scale to the statement ‘medical procedures were explained to me before they were conducted’, a respondent only repeated ‘they didn’t explain’. Other respondents, when shown a smiley face Likert scale, focused on identifying a face that matched how they felt rather than one that depicted their response to the statement in question. For example, when asked to respond to the statement ‘the doctors and nurses did everything they could to help me manage my pain’, a respondent pointed to a sad face, explaining that although the doctors and nurses helped her, since she was in pain her face was ‘like this’ (i.e. sad). Without cognitive interviews, survey enumerators would unknowingly record responses unrelated to the question at hand or would attempt to fit respondents’ dichotomous answers into Likert scales using whatever interpretation the enumerator saw fit.
Cognitive interviewing recognizes that a problem with even one detail of a survey question can compromise the validity of the data gathered, whether it is an improper word, confusing phrasing, an unfamiliar concept, an inappropriate response option, or another issue. Without cognitive interviews, gaps between question intent and respondent interpretation can persist, severely compromising the quality of data generated from surveys (Box 1). Furthermore, cognitive mismatch is often impossible to detect after data collection. Instead, responses recorded in the survey are taken as ‘true’, regardless of whether the respondents understood and answered the question in the intended manner and regardless of the assistance, adjustment, or interpretation provided by enumerators.
In this article, we argue that cognitive interviewing should be an essential step in the development of quantitative survey tools used in global public health and call attention to the detailed steps of applying this method in the field. We start by reviewing what cognitive interviewing is, considering its varied definitions and use cases in survey tool development. We next outline the recommended steps in survey tool development and then provide an overview of how to conduct cognitive interviews. We close by reflecting on the broader implications of cognitive interviewing.