Uncritical Publication of a Biased Study Leads to Misleading Media Reports. Lynn R. Webster. Pain Medicine, pny234, https://doi.org/10.1093/pm/pny234
Excerpts:
On March 6, 2018, the Journal of the American Medical Association (JAMA) published a manuscript titled “Effect of Opioid vs Nonopioid Medications on Pain-Related Function in Patients With Chronic Back Pain or Hip or Knee Osteoarthritis Pain: The SPACE Randomized Clinical Trial,” by Krebs et al. [1]. The authors concluded that treatment with opioids was not superior to treatment with nonopioid medications for improving pain-related function over 12 months. The national interest in this topic and the putative results of the study led to headlines in major news outlets touting proof that opioids were not effective for chronic noncancer pain. The article is in the top 5% of all research outputs measured by Altmetric: As of this writing, 313 news stories from 191 outlets, 2,278 tweeters, 45 Facebook pages, nine blogs, and seven Redditors have reported on the study [2]. This much media reach could influence social and political policy for the better if the understanding of the research is valid. Unfortunately, the conclusions of the article were widely mischaracterized, so the extensive reporting could instead lead to harm. Four Letters to the Editor of JAMA, along with a reply by Krebs et al., were published, demonstrating that others also had concerns about the manuscript [3–7]. We as researchers and reviewers of manuscripts can better help people to understand this type of complicated research on controversial topics. Here is my analysis of how the journal, authors, reviewers, and media got it wrong.
[...]
Here is a simple analogy illustrating how the pragmatic study design was compromised. In Scenario A, a market research study with the strict inclusion criteria of an RCT is conducted among ice cream consumers to determine their preferred flavor: chocolate, vanilla, or no preference. Because every participant eats ice cream, the average result will almost surely be either chocolate or vanilla.
Scenario B, in contrast, is a pragmatic trial that would include all consumers, both those who eat ice cream and those who do not, so that inferences can be made about consumers as a whole. If the majority of study participants do not even eat ice cream, the average result will almost surely be no preference. The inference is wider, and yet the pragmatic study's conclusion does not apply to the relevant market research question: if a consumer is going to buy ice cream (which will happen only with consumers who eat ice cream), which flavor will they choose? A reasonable person would realize that the pragmatic study approach is not useful in this situation.
A more extreme Scenario C would exclude ice cream consumers altogether, so the foregone conclusion is no preference. Now the study is so ludicrous that there is a legitimate question as to whether the study should be conducted. This scenario no longer represents pragmatic research because it violates the central principle of “little or no selection beyond the clinical indication of interest.” It is merely a poorly designed study with participant selection bias so extreme that it has no scientific validity.
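To make the logic of the three scenarios concrete, here is a minimal, purely illustrative simulation (not part of the original article). The population sizes and the 60/40 chocolate-versus-vanilla split are invented numbers chosen only to show that the "average result" is determined by who is allowed into the sample.

```python
# Illustrative sketch of Scenarios A, B, and C.
# All preference rates and population sizes are hypothetical, invented for this example.
import random

random.seed(0)

def make_population(n_eaters, n_non_eaters):
    """Build a consumer population; only ice cream eaters hold a flavor preference."""
    eaters = [random.choices(["chocolate", "vanilla"], weights=[0.6, 0.4])[0]
              for _ in range(n_eaters)]
    non_eaters = ["no preference"] * n_non_eaters
    return eaters, non_eaters

eaters, non_eaters = make_population(n_eaters=300, n_non_eaters=700)

def survey(sample):
    """Tally answers and return the modal response -- the 'average result' of the scenario."""
    counts = {}
    for answer in sample:
        counts[answer] = counts.get(answer, 0) + 1
    return max(counts, key=counts.get), counts

# Scenario A: strict inclusion criteria -- only ice cream eaters are surveyed.
print("A:", survey(eaters))               # chocolate or vanilla wins

# Scenario B: pragmatic design -- everyone is surveyed, eaters and non-eaters alike.
print("B:", survey(eaters + non_eaters))  # 'no preference' dominates

# Scenario C: eaters are excluded altogether -- the conclusion is foreordained.
print("C:", survey(non_eaters))           # 'no preference' is the only possible answer
```

The same data-generating process underlies all three runs; only the selection rule changes, and with it the headline result.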
What if, after conducting the Scenario C study, the investigators did not point out to the reader the selection bias and the impact it had on the results (ice cream consumers were excluded, masking that chocolate is the more popular ice cream flavor)? What if, further, the investigators did not report the additional finding that consumers said they prefer chocolate over vanilla in other foods, misleading by omission? [...]
These omissions describe the errors in the JAMA article. By specifically excluding patients who had tolerated and presumably benefited from opioids (the ice cream consumers), the investigators studied only 1) participants who had previously tried opioids and discovered they did not respond to them and 2) patients who had never tried opioids because they had previously responded adequately to nonopioid medications. In the words of the analogy, they studied only people who do not consume ice cream.
Naturally, therefore, the JAMA study achieved the only finding possible: that both opioids and nonopioids reduced pain equally well in patients in whom opioids were not medically indicated. Unfortunately, and probably unintentionally, the authors’ conclusion underplayed the selection bias: “Treatment with opioids was not superior to treatment with nonopioid medications for improving pain-related function over 12 months. Results do not support initiation of opioid therapy for moderate to severe chronic back pain or hip or knee osteoarthritis pain.” This is not a false conclusion, but it is misleading. [...]
The study authors could have partially ameliorated these problems by informing JAMA readers of the serious selection bias. To accomplish this, the JAMA article should have contained a more complete description of how opioid users were identified and how they were excluded from the study. [...]
Friday, November 23, 2018