Assessment of Psychopathology: Is Asking Questions Good Enough? Barbara Pavlova. JAMA Psychiatry. 2020;77(6):557-558. Published online March 11, 2020. doi:10.1001/jamapsychiatry.2020.0108
The evaluation and success of our efforts to prevent, detect, and treat mental illness depend on the assessment of psychopathology. Almost all psychiatric assessments consist of asking questions, through questionnaires or interviews, about behaviors and experiences. We either ask the person being assessed or someone who knows them well. Based on the answers, we diagnose, recommend treatment, and monitor outcomes. Regardless of who is reporting, overreporting and underreporting are common. People may overreport or underreport on purpose when they hope for benefits associated with a diagnosis (eg, educational support, time off work, or access to medication) or fear its consequences, including stigma or adverse effects of medication. Beliefs that mental illness is not real, concerns about privacy, health insurance costs, and implications for custody of children are also reasons for underreporting.
Unintentional overreporting and underreporting are even more common. Many diagnoses rely on recalling the duration and frequency of multiple symptoms, which is prone to memory bias. Recent events are more salient, and people are more likely to remember times when their mood was similar to their mood at the time of reporting. Mood-dependent memory impedes the assessment of bipolar disorder, where individuals typically present in the depressive phase and correct diagnosis depends on their recall of manic episodes. What is reported about others is also influenced by the reporter’s mental state. For example, mothers experiencing depression and anxiety report more severe symptoms in their children than the children report themselves.1 These biases have been demonstrated in children thanks to the routine use of multiple informants. It is likely that biases of self-report in adults remain hidden because report by others is underused in adult psychiatric practice and research. Other reporting inaccuracies may stem from reporters’ implicit and explicit biases related to the age, sex, race/ethnicity, appearance, or disability of the person whose behavior they are describing.
The comparison group that the reporter uses also has an effect and is the most likely explanation for why younger children within a classroom are more often diagnosed as having attention-deficit/hyperactivity disorder than their older classmates.2 Similarly, clinicians who mostly see those who are severely ill may underestimate the problems of the relatively less affected. It is likely that the comparison group also affects self-report in adults and may contribute to the apparently strong effects of income inequality and urban environment on psychopathology. Self-report and report by others may have complementary strengths depending on the problem being evaluated. While observable problems, such as attention-deficit/hyperactivity disorder, may be suited to report by others, less visible difficulties, including anxiety, may be more accurately captured by self-report.
Studies that have evaluated the relative predictive value of information from multiple reporters suggest that while every reporter comes with their own biases, each also contributes to assessment and prediction in a meaningful way. Self-rated depression questionnaires and clinician-rated interviews differ, but each contributes uniquely to the prediction of antidepressant treatment outcome.3 Similarly, parents’ ratings of their offspring’s depressive symptoms, as well as those self-reported by the offspring, prospectively predict new-onset mood disorder.4
Problems with self-report and recall bias have been known for decades, but alternative methods have not been adopted in practice. The impracticability of accessing multiple informants and lack of objective unbiased standards may be why we continue to use suboptimal but convenient assessment methods. To improve assessment and prediction accuracy, we need methods that are more objective and less biased.
First, observation of behavior by a person who has no stake in the assessment result improves assessment and prediction of functional outcomes. Independent observation of classroom behavior predicts future attention-deficit/hyperactivity disorder–associated impairment with greater accuracy than parent and teacher reports.5 Independent raters unaware of parental diagnosis observed more inattention, language/thought problems, and oppositional behavior in offspring of parents with mood and psychotic disorders than in offspring of parents without these disorders.6 Ratings of behavior by independent assessors may also contribute to predicting and evaluating treatment outcome.
Second, ecological momentary assessment (ie, repeated assessment of respondents’ experiences in their natural environment in real time) minimizes memory bias. For example, it may help identify early signs of mood and energy deterioration, which could enable clinicians to intervene early to prevent a major mood episode.
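To make the idea concrete, here is a minimal illustrative sketch in Python, assuming daily self-ratings of mood on a 1-to-10 scale and an arbitrary window and threshold rather than validated cutoffs, of how repeated momentary ratings could be screened for early deterioration:

```python
# Minimal illustrative sketch, not a validated clinical rule: flag a drop in
# the rolling average of daily EMA mood ratings. The window length and
# threshold are assumptions chosen for illustration only.
import statistics

def flag_mood_deterioration(daily_mood, window=7, drop_threshold=1.5):
    """Return True if the mean of the most recent `window` ratings has dropped
    by more than `drop_threshold` points relative to the preceding window."""
    if len(daily_mood) < 2 * window:
        return False  # not enough ratings to compare two windows
    recent = statistics.mean(daily_mood[-window:])
    baseline = statistics.mean(daily_mood[-2 * window:-window])
    return (baseline - recent) > drop_threshold

# Example: a stable week followed by a declining week triggers the flag.
ratings = [7, 7, 6, 7, 7, 6, 7, 6, 5, 5, 4, 4, 3, 3]
print(flag_mood_deterioration(ratings))  # True
```

Such a flag would at most prompt closer clinical attention; it is not a diagnosis.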
Third, automated analysis of behavior has the potential to avoid biases associated with human reporters. Like human observers, automated analysis of behavior uses discernible signs of mental state, including speech content and prosody, body movement, and facial expressions. Automated analysis of speech could contribute to diagnosis and to prediction of response to treatment. For example, features of speech, including speed, articulation, and repetitiveness, may aid the diagnosis of depression. Corcoran et al7 showed that automated speech analysis can predict psychosis onset among individuals at clinical high risk with high accuracy. In addition, increased pupillary reactivity to sad words distinguished children and young people with depression from their nondepressed peers.8 Automated analysis of speech and pupillary reactivity may also identify individuals at risk for depression. Finally, actigraphy can contribute to the assessment of mental illness by identifying changes in activity and sleep that precede a relapse of psychosis or depression.9 While automated analysis of speech, pupillary reactivity, and actigraphy contribute predictive information that complements self-report, none of these has been developed and validated as a comprehensive stand-alone assessment method that could replace questionnaires and interviews.10
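As an illustration of the actigraphy point, the sketch below (Python, with hypothetical variable names and window lengths, not a validated relapse predictor) standardizes the most recent week of actigraphy-derived sleep durations against an individual's own preceding baseline, the kind of within-person change the cited work describes:

```python
# Illustrative sketch only: standardized shift in recent nightly sleep
# duration relative to an individual's own baseline, as one might derive
# from actigraphy. Window lengths are assumptions, not validated values.
import statistics

def sleep_shift(nightly_sleep_hours, baseline_nights=28, recent_nights=7):
    """Return (recent mean - baseline mean) / baseline standard deviation,
    or None if there is not enough data to compute it."""
    if len(nightly_sleep_hours) < baseline_nights + recent_nights:
        return None  # not enough nights recorded yet
    baseline = nightly_sleep_hours[-(baseline_nights + recent_nights):-recent_nights]
    recent = nightly_sleep_hours[-recent_nights:]
    spread = statistics.stdev(baseline)
    if spread == 0:
        return None  # baseline too uniform to standardize against
    return (statistics.mean(recent) - statistics.mean(baseline)) / spread

# Example: a month of stable sleep followed by a week of markedly shorter nights.
history = [7.5, 7.0, 7.5, 8.0, 7.0, 7.5, 7.0] * 4 + [5.0, 4.5, 5.0, 4.0, 4.5, 5.0, 4.5]
print(sleep_shift(history))  # prints a large negative value (about -7.6)
```

A large shift in either direction would be a signal worth reviewing alongside self-report, not a stand-alone finding.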
The limits and biases of self-report have been known for decades, and calls for integrating more objective measurement into psychiatric assessment are not new.11 Yet little has changed in psychiatric assessment to date. The last decade has brought evidence that multisource assessment actually improves the prediction of meaningful outcomes.3,4,7 At the same time, the feasibility of objective measurement is rapidly improving with the availability of wearable technology.9 The next steps in implementing objective assessment should include prospective evaluation of the predictive value of objective tests used alone or alongside established interview and questionnaire methods.10 Clinical applicability will be enhanced if these steps are informed by what is known about report biases and multisource assessment. Because each reporter contributes unique predictive information,3-5 new methods should be evaluated against multireporter assessment rather than against a single reporter as the standard. New technology often uses artificial intelligence to learn from existing data that include the biases reviewed here. When calibrating new methods, care must be taken to ensure fairness and avoid perpetuating biases pertaining to race/ethnicity, sex, and education.
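One simplified way to probe a new automated measure for such group-level bias is to compare its sensitivity and specificity across demographic groups; the sketch below (Python, with hypothetical group labels and toy data) illustrates the comparison, not a complete fairness evaluation:

```python
# Illustrative sketch only: per-group sensitivity and specificity of a binary
# predictor, as a first check that a new measure does not systematically miss
# cases in some groups. Group labels and data here are hypothetical.
from collections import defaultdict

def group_performance(records):
    """records: iterable of (group, true_label, predicted_label), labels 0/1.
    Returns {group: {"sensitivity": ..., "specificity": ...}}."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, truth, pred in records:
        if truth == 1:
            counts[group]["tp" if pred == 1 else "fn"] += 1
        else:
            counts[group]["tn" if pred == 0 else "fp"] += 1
    results = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        results[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
        }
    return results

# Hypothetical example: the measure misses more true cases in group B than in group A.
data = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
        ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1)]
print(group_performance(data))
```

Disparities of this kind would need to be addressed before a measure is used to calibrate or replace existing assessments.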
While objective measurement of psychopathology is desirable, the presently available methods are far from universally applicable.10 Although reports by self and others come with various biases and inaccuracies, they will likely remain the most informative means of assessment in psychiatry for the foreseeable future. Yet these traditional methods can and should also be improved. Unbiased objective measures of mental state, such as speech analysis, pupillary reactivity, and actigraphy, may help to design and calibrate self-report and clinical interview measures so that they are less prone to bias.
Combining multireporter assessment with objective analysis of behavior offers an opportunity to improve the diagnosis and prediction of mental illness and to better target treatment and preventive efforts. The key to implementing this knowledge may lie in practical solutions that allow objective and unbiased assessment to be incorporated into the workflow of research and clinical practice.