It’s Time for Science-Based Medicine

In the pages of the Skeptical Inquirer and elsewhere in the skeptical literature, you can read about a seemingly endless array of snake oil remedies, dubious health claims, questionable practices, ineffective regulation, and shortcomings of mainstream medicine. All of this is happening in an era of so-called “evidence-based medicine,” which was supposed to rededicate the medical profession to a solid scientific grounding in order to bring the best possible care to each patient.

Don’t get me wrong: modern medicine is a science-based endeavor. We continue to make tremendous, even accelerating, progress in understanding and treating disease. The institutions of science and the professionalism within medicine are also improving over time. The evidence-based medicine (EBM) movement has been, overall, a positive influence on the practice of medicine. EBM had two main goals: to evaluate and systematically characterize the evidence base for each clinical decision, and to deliver this information to practitioners when and where they need it. EBM is great as far as it goes, but it has some interesting flaws, and clearly has not done enough to eradicate pseudoscience and substandard practice from medicine.

That is exactly why I founded the website Science-Based Medicine, which has recently spawned the Society for Science-Based Medicine (headed by Mark Crislip).

The goal of science-based medicine (SBM) is to raise the practice of medicine to the highest possible standards: relying on the full spectrum of scientific evidence; applying critical thinking to the practice of medicine; optimizing the institutions of science, research, and academia; rooting out pseudoscience and quackery; and advocating for effective regulations.

Science-Based Medicine vs. Evidence-Based Medicine

The core weakness of evidence-based medicine is that it relies, as the name implies, solely on clinical evidence to determine whether a treatment is appropriate or not. This may superficially sound reasonable, but it deliberately leaves out an important part of the scientific evidence: plausibility.

When EBM was first proposed, the idea was that doctors should not be using treatments simply because they make sense. We need evidence to show that the treatments are actually safe and effective. This is reasonable, but the EBM solution was to eliminate “making sense” (or plausibility) from the equation. Each treatment was conceptually treated as a blank slate on a level playing field: the only thing that mattered was the clinical evidence.

Unfortunately, the EBM movement came at roughly the same time that dubious health practices were being rebranded as “complementary and alternative medicine” (CAM). By leveling the playing field, EBM inadvertently removed the primary objection to most CAM modalities: that they are highly implausible. I guess it did not occur to early EBM proponents that anyone would seriously propose a highly implausible treatment and try to study it scientifically. CAM proponents fell in love with EBM, because it gave them the opportunity to present their treatments with a veneer of scientific legitimacy. They tend to interpret EBM as meaning that if you can point to any evidence whatsoever (no matter how poor and conflicting), then you can call your practice “evidence-based.”

One of the standard-bearers for EBM is the Cochrane Collaboration, which publishes high-quality systematic reviews of clinical questions. Cochrane reviews quickly became ripe for CAM exploitation. My favorite example is a Cochrane review of the homeopathic substance Oscillococcinum for the flu (Vickers and Smith 2009). Oscillococcinum is imaginary, and homeopathy is utter nonsense, so this treatment is akin to fairy dust diluted out of existence. If anything should be treated as having a prior plausibility of zero, this is it. Yet the authors concluded:

Though promising, the data were not strong enough to make a general recommendation to use Oscillococcinum for first-line treatment of influenza and influenza-like syndromes. Further research is warranted but the required sample sizes are large. Current evidence does not support a preventative effect of Oscillococcinum-like homeopathic medicines in influenza and influenza-like syndromes. (Vickers and Smith 2009)

While they are essentially saying that the evidence is negative, they characterize the treatment as “promising” and recommend “further research.” SBM would take a different approach.

A science-based medicine review would explicitly consider prior scientific plausibility, bringing to bear our understanding of physics, chemistry, and biology, which represent a far larger and more reliable body of scientific evidence than the few clinical studies of Oscillococcinum. It would also consider the totality of homeopathy research in the context of our current understanding of patterns of evidence in the medical literature.

An SBM review would conclude that the scientific basis for the existence of Oscillococcinum is unconvincing to say the least, and in fact is rank pseudoscience analogous to N-rays. Homeopathy itself also qualifies as pseudoscience, because it is at odds with our basic understanding of physics and chemistry. Furthermore, the totality of the homeopathy clinical research is most consistent with a treatment that has no effect.

We therefore have an ineffective application of a nonexistent substance. Further, there is no scientific reason to presume that this particular treatment will be effective for the flu. Finally, the clinical evidence is insufficient (unsurprisingly) to conclude that the treatment works. Taken as a whole, it seems that this treatment has no promise, and any further research would be such a waste of resources as to be unethical.

Prior Plausibility

It is not surprising that advocates of dubious health treatments do not like the concept of plausibility. They have been basking in the sun of EBM, where they don’t have to answer for extreme scientific implausibility, and they resent SBM trying to ruin their good time.

This attitude is represented by an opinion piece by David Katz, in which he writes:

This essay makes the case that such borders are not readily defined and challenges policy makers and practitioners with this question: In which direction lies the greater risk of miscarriage for legitimate scientific inquiry and the promise of improving the human condition—in the belief that something works whether or not it’s plausible, or in the conviction that something is implausible whether or not it works? (Katz 2010)

Katz and other CAM advocates try to present plausibility as a mere bias, one that will turn us away from effective treatments. Katz completely misses the point: plausibility is partly how we know whether something works. CAM advocates tend to start with the conviction that their treatments work and then look for scientific justification to help them market those treatments. I have yet to find a single example of a CAM modality that was abandoned by its advocates due to evidence of lack of efficacy.

SBM recognizes that clinical evidence is tricky, complicated, and often ambiguous. There is good evidence to support this position. John Ioannidis has published a series of papers looking at patterns in clinical research. He found that most published studies ultimately come to the wrong conclusion, with a strong false-positive bias (Ioannidis 2005), and that this effect worsens in proportion to the implausibility of the clinical question.
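
To make this concrete, here is a minimal Python sketch of the positive predictive value (PPV) framework behind Ioannidis’s argument; the formula is from the 2005 paper, but the specific values below for power, significance level, and pre-study odds are illustrative assumptions, not values from the paper.

    # Positive predictive value of a "significant" finding, following
    # Ioannidis (2005): PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
    # where R is the pre-study odds that the hypothesis is true,
    # (1 - beta) is statistical power, and alpha is the significance level.

    def ppv(prior_odds: float, power: float = 0.8, alpha: float = 0.05) -> float:
        """Probability that a statistically significant result is a true effect."""
        return (power * prior_odds) / (power * prior_odds + alpha)

    # Assumed pre-study odds, for illustration only:
    for label, odds in [("plausible drug target", 1.0),
                        ("long-shot hypothesis", 0.1),
                        ("homeopathy-grade claim", 0.001)]:
        print(f"{label:24s} R = {odds:<6} PPV = {ppv(odds):.3f}")

Even with respectable power and a standard significance threshold, the chance that a “significant” result reflects a true effect falls from about 94 percent for a plausible hypothesis to under 2 percent for a claim with near-zero prior odds.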

Simmons et al. nicely demonstrated that by exploiting common researcher “degrees of freedom,” almost any set of data can be made to seem statistically significant (Simmons et al. 2011). In other words, it is possible, even completely innocently, to achieve a falsely significant result simply through decisions about how to collect and analyze the data. Individual studies, therefore, should rarely be compelling. Data is only truly reliable when it is independently replicated, especially in a way that eliminates those degrees of freedom.
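
A small simulation in the spirit of Simmons et al. makes the point; the particular degree of freedom modeled here (measuring four outcomes and reporting whichever looks best) is just one assumed example from the many they catalog.

    # With no real effect at all, testing several outcomes and reporting
    # the "best" one inflates the false-positive rate well above the
    # nominal 5 percent.
    import random
    from math import sqrt, erf
    from statistics import mean, stdev

    def two_sided_p(a, b):
        """Approximate p-value from a two-sample z-test on group means."""
        se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
        z = abs(mean(a) - mean(b)) / se
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    random.seed(1)
    trials, n, outcomes = 2000, 30, 4
    hits = 0
    for _ in range(trials):
        # The null is true: both groups come from the same distribution.
        ps = [two_sided_p([random.gauss(0, 1) for _ in range(n)],
                          [random.gauss(0, 1) for _ in range(n)])
              for _ in range(outcomes)]
        if min(ps) < 0.05:  # report only the most favorable outcome
            hits += 1
    print(f"nominal alpha: 0.05, observed false-positive rate: {hits / trials:.3f}")

With four independent outcomes, the chance of at least one p < 0.05 under the null is roughly 1 - 0.95^4, or about 19 percent, nearly four times the advertised error rate.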

In a commentary for Nature, Regina Nuzzo describes what is called “p-hacking,” which is essentially what Simmons et al. were also describing (Nuzzo 2014). Nuzzo specifically criticizes over-reliance on P-values, the conventional statistical measure of whether data should be considered significant and taken seriously. P-values, however, are not as reliable as many assume.

A P-value of 0.01, which many falsely believe to mean that the effect being studied has a 99 percent chance of being real, corresponds to only about a 50 percent chance that a replication with fresh data will also reach significance. So even a value that many take as solid evidence is really only a coin flip when you properly understand the statistics. The problem of over-reliance on P-values is further demonstrated by statistician Geoff Cumming in a video he posted on YouTube (Cumming 2009).
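
Cumming’s “dance of the p-values” is easy to reproduce in simulation. The sketch below assumes a genuine effect of half a standard deviation and a sample size at which power is near 50 percent; both numbers are illustrative.

    # Replicate the same true-effect experiment many times and watch
    # the p-value bounce: reaching significance is close to a coin flip.
    import random
    from math import sqrt, erf
    from statistics import mean, stdev, median

    def two_sided_p(a, b):
        """Approximate p-value from a two-sample z-test on group means."""
        se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
        z = abs(mean(a) - mean(b)) / se
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    random.seed(2)
    n, effect = 32, 0.5  # assumed sample size and true effect (in SD units)
    ps = [two_sided_p([random.gauss(effect, 1) for _ in range(n)],
                      [random.gauss(0, 1) for _ in range(n)])
          for _ in range(1000)]
    print(f"median p = {median(ps):.3f}")
    print(f"replications reaching p < 0.05: {sum(p < 0.05 for p in ps) / len(ps):.0%}")

Under these assumptions, roughly half of exact replications reach significance and half do not, even though the effect is perfectly real, which is why a single P-value is such a shaky foundation.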

One solution to the weaknesses of P-values is to supplement or even replace this type of statistical analysis with another type of analysis called Bayesian analysis. Bayesian analysis formally considers prior plausibility. It looks at the data as common sense dictates: How much does this new data affect the prior probability that a specific scientific idea is true?
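
A minimal sketch of that update, assuming a hypothetical positive trial whose data are five times more likely if the treatment works than if it does not (a Bayes factor of 5; all numbers here are illustrative):

    # Bayesian updating: posterior odds = prior odds * Bayes factor.
    def posterior_probability(prior: float, bayes_factor: float) -> float:
        """Update a prior probability that a treatment works, given the data."""
        prior_odds = prior / (1.0 - prior)
        post_odds = prior_odds * bayes_factor
        return post_odds / (1.0 + post_odds)

    bf = 5.0  # assumed moderate evidence from one positive trial
    for claim, prior in [("plausible new drug", 0.30),
                         ("homeopathic remedy", 1e-6)]:
        print(f"{claim:20s} prior = {prior:<8} "
              f"posterior = {posterior_probability(prior, bf):.6f}")

The same clinical result lifts a plausible drug to better-than-even odds while leaving the homeopathic claim vanishingly improbable, which is precisely the asymmetry that prior plausibility is meant to capture.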

Conclusion

The core philosophy of SBM is to use the best possible conclusion that science currently has to offer in making clinical decisions (including regulatory ones). This includes the most rigorous clinical studies possible. However, it must also consider preclinical and basic science—all scientific information that can reasonably be applied to the question at hand. This means considering the overall scientific plausibility of any clinical claim.

Clinical evidence faces many challenges, including researcher bias, publication bias, and the vagaries of statistical analysis. Most studies are imperfect: for example, they may be too small, may not sufficiently account for all variables, or may have defects in blinding, and researchers must make many choices (such as which outcomes to measure and compare) that can affect the results. (For more discussion on these points, see Morton E. Tavel, “Bias in Reporting of Medical Research,” in this issue, p. 34.)

It often takes decades for clinical research to progress to the point that we have highly rigorous and definitive trials. In the meantime, we have to make decisions based upon imperfect evidence. Basic science plausibility helps put the clinical evidence into context, improving our ability to make reliable decisions based upon preliminary clinical data. That is why I believe that evidence-based medicine should evolve in the direction of science-based medicine. That would be the evidence-based thing to do.


References

Cumming, G. 2009. Dance of the p values. Online at https://www.youtube.com/watch?v=ez4DgdurRPg.

Ioannidis, J. 2005. Why most published research findings are false. PLoS Medicine 2(8) (August): e124. doi:10.1371/journal.pmed.0020124.

Katz, D. 2010. Descartes’ carton: On plausibility. Explore 6(5) (September/October).

Nuzzo, R. 2014. Scientific method: Statistical errors. Nature News (February 12).

Simmons, J.P., L.D. Nelson, and U. Simonsohn. 2011. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22(11): 1359–1366. doi:10.1177/0956797611417632.

Vickers, A., and C. Smith. 2009. Withdrawn: Homoeopathic Oscillococcinum for preventing and treating influenza and influenza-like syndromes. Cochrane Database of Systematic Reviews (July 8) (3): CD001957. doi:10.1002/14651858.CD001957.pub4.

