The term evidence-based medicine (EBM) first appeared in the medical literature in 1992. There were two previous EBMs: Expert-Based Medicine and Experience-Based Medicine. In the 4th century BCE, Aristotle said men have more teeth than women. He was the expert, and for many centuries his error was perpetuated because no one dared question his authority and no one bothered to look in mouths and count teeth.
Then we relied on experience. When I was in medical school, professors would often say something to the effect of, “In my experience, drug A is the best treatment for disease B.” Dr. Mark Crislip says the three most dangerous words in medicine are “in my experience” because experience is so compelling and so often wrong. Richard Feynman said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”
Why Evidence-Based Medicine Is Essential
Experience is deceptive. When a patient gets better with a treatment, it could be because of the treatment, but it could also be due to:
- Improvement of symptoms because of the natural course of the disease.
- Regression to the mean (an exceptionally high blood pressure reading will naturally be followed by a lower one closer to the average BP).
- Spontaneous remission.
- Inaccurate observation of what really happened.
- Biases that influence our interpretation of events.
- Unidentified co-interventions.
- Reinforced expectations.
- Classical (Pavlovian) conditioning.
- Social learning.
- Many other psychosocial and psychobiological factors.
Even the most reasonable-sounding, intuitively obvious beliefs may be wrong. The gold standard of EBM is the randomized controlled trial (RCT) where the treatment is compared to a placebo with all other factors being equal.
EBM is a great concept, but its implementation has been flawed. It gives short shrift to plausibility and appears to worship the RCT above all else. If an RCT showed that scratching your nose cured cancer, EBM would accept it, even while a skeptical thinker would assume something was wrong with the study.
How much of current practice is evidence-based? 78% of our interventions are based on some form of compelling evidence, and 38% are supported by RCTs. More evidence is always better, but it’s unreasonable to hope for everything we do to be supported by RCTs. The British Medical Journal published a delightfully tongue-in-cheek proposal ridiculing those who are overly attached to RCTs:
The effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence-based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
We don’t need to do an RCT pushing people out of planes with and without parachutes to know what would happen. As the old adage says, “You don’t need a parachute to skydive; you only need a parachute to skydive twice.” We don’t need to do an RCT of surgery for appendicitis, of setting broken bones, or of controlling blood loss in trauma and surgery. If we have a lifesaving treatment, we can’t ethically deny it to half of our subjects for a control group.
Homeopathy says that you can dilute out all the molecules of the original substance and the water will remember it and have effects opposite to those of the original substance. Based on the sort of basic science evidence that amounts to “established knowledge,” we can confidently say that homeopathy can’t possibly work as claimed. Is it realistic to assume that a huge body of established knowledge could be overthrown by a few ambiguous clinical trials? EBM makes that assumption. Its proponents have forgotten Carl Sagan’s dictum: “Extraordinary claims require extraordinary evidence.”
Prior Plausibility Is Important
If we do an RCT and get positive results that are significant at the p = 0.05 level, most people would consider that convincing proof. But Bayesian mathematics shows that if the prior probability was only 1%, statistical significance at the 0.05 level raises the posterior probability to only about 3%.
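The arithmetic behind that point can be sketched with Bayes’ theorem. The exact posterior depends on the trial’s assumed power and false-positive rate; the numbers below are illustrative assumptions, not figures from any real study. The qualitative lesson survives any reasonable choice of assumptions: a “significant” result barely moves a very low prior, while the same result is genuinely persuasive for a plausible treatment.

```python
# Sketch of Bayesian updating after one positive trial.
# prior, alpha (false-positive rate), and power are assumed values
# chosen only for illustration.

def posterior_given_positive(prior, alpha, power):
    """Probability the treatment really works, given a positive trial."""
    true_pos = prior * power          # treatment works AND trial is positive
    false_pos = (1 - prior) * alpha   # treatment fails, trial positive anyway
    return true_pos / (true_pos + false_pos)

# Implausible treatment: 1% prior, alpha = 0.05, assumed power = 0.8
low = posterior_given_positive(0.01, 0.05, 0.8)

# Plausible treatment: 50% prior, same trial design
high = posterior_given_positive(0.50, 0.05, 0.8)

print(f"1% prior  -> posterior {low:.0%}")   # still far from convincing
print(f"50% prior -> posterior {high:.0%}")  # now strong evidence
```

With these particular assumptions the 1% prior yields a posterior of roughly 14% rather than 3% (the precise figure shifts with the assumed power), but either way the positive trial falls far short of proof for the implausible treatment, while the same result pushes the plausible one above 90%.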
If you were willing to disregard prior plausibility entirely, you could do scientific research on the Tooth Fairy. You could test whether the Tooth Fairy brought more money to kids who left their tooth in a Baggie than to those who wrapped it in Kleenex. You could study the average amount left for the first tooth versus the amount left for the last tooth. You might find that the Tooth Fairy left more money to rich kids than to poor kids (Surprise!). You could use the best rigorously scientific methods. Your results would be statistically significant and reproducible by other researchers. You would think you had done good science and had learned something about the Tooth Fairy. You would have learned something, all right; but not about the Tooth Fairy because she doesn’t exist. You would really be misinterpreting data about parental behavior and popular customs.
You can disregard the prior plausibility of homeopathy, but your science won’t really be testing the efficacy of homeopathic remedies; it will be testing human psychology and you will be fooling yourself. Clinical acupuncture studies do not test the efficacy of improving the flow of qi through meridians at acupoints, because none of these exist. Chiropractic studies do not test the efficacy of correcting displacements of bones in the spine, because chiropractic subluxations have never been demonstrated.
Critics of EBM complain that we shouldn’t reject a treatment as implausible just because we don’t understand how it works. We don’t. When there is good evidence for efficacy, we adopt the treatment first and then try to figure out its mechanism. When we started using penicillin we didn’t know how it worked, but we had strong evidence that it did. If we had penicillin-strength evidence for homeopathy, we’d all be using it.
Pragmatic trials are designed to see how a treatment works in a real-world setting as opposed to the artificial environment of a research study. Advocates of alternative medicine love pragmatic studies because they are often the only kind of study to support them. Pragmatic studies on implausible treatments demonstrate what I call Cinderella Medicine. Imagine that Cinderella’s Ugly Stepsister got a complete makeover with hair styling, expertly applied cosmetics, jewels, and a beautiful designer dress. Add tooth whitening or even orthodontia, charm school, modeling classes, and elocution lessons. If you entered her in a beauty contest along with the unadorned, dirty Cinderella in her original rags, the Ugly Stepsister might win hands down. But it wouldn’t be a fair contest unless both were in their original unenhanced state or unless you compared the makeover-enhanced Stepsister to the Fairy-Godmother-enhanced Cinderella.
Let’s say you do a pragmatic study of acupuncture for back pain compared to standard treatment. When you go to your primary care doctor for standard treatment, he is likely to say, “You have plain old garden variety back pain. Everybody gets these backaches from time to time. We don’t know what causes them, but they go away in a few weeks on their own. While you’re waiting for it to go away, I can offer you a prescription for some pain pills or a referral for physical therapy.” He doesn’t spend much time with you, may seem bored and unsympathetic, and may not even ask you to return.
Compare that to a visit to the acupuncturist. He assures you that he knows how to relieve your back pain. He provides a complicated explanation with all kinds of impressive, esoteric oriental terminology, mentioning yin and yang, ancient Chinese wisdom, and how his needles will adjust the flow of qi through your meridians to restore health. He takes you into a quiet back room, has you lie down and relax, and spends half an hour or more doing up-close-and-personal hands-on treatment. He is charismatic and caring, interested in you as a person, asks a lot of questions, and may uncover another unrelated problem that needs treatment. After treatment, he prompts, “You feel better now, don’t you?” and you feel a social pressure to agree. Instead of dismissing you with a prescription, he asks you to return three times a week for several weeks.
This pragmatic study shows that acupuncture works better than standard care. But you have fallen into the Cinderella Medicine trap. Standard care is like Cinderella in her rags and ashes, unadorned. Simply inserting acupuncture needles would be like the Ugly Stepsister before her makeover. The needles don’t have any specific effects: touching the skin with toothpicks has been shown to work just as well. But the acupuncture experience is like the Ugly Stepsister after her makeover. The acupuncturist dresses up the treatment with all kinds of enhancements, producing “nonspecific effects” that are not due to the treatment itself, but to the interaction with the provider. Plain needle insertion has been given the Cinderella treatment and transformed into an enhanced package of suggestion, expectation, relaxation, ancillary psychological effects, personal interactions, etc.
Those needles are ready to go to the ball and wow the prince. Acupuncture is the ideal placebo system.
So acupuncture with no specific effects, but with many nonspecific treatment effects, appears to outperform a standard treatment that offers some small specific effects but little in the way of nonspecific enhancements. You haven’t proved that acupuncture works; you’ve only demonstrated that standard treatment could use a makeover.
EBM is essential but flawed. Our multi-author blog www.sciencebasedmedicine.org promotes truly Science-Based Medicine (SBM) that considers all the available evidence and discounts implausible ideas that are incompatible with established scientific knowledge. Check it out.
This article was originally published as a SkepDoc column in Skeptic magazine.