The New England Journal of Medicine periodically runs a feature called “Clinical Decisions.” They present a case history, then they present 2 or 3 expert opinions on how to manage the case. They stress that none of the options can be considered either correct or incorrect. They allow readers to “vote” as well as to submit comments about why they voted that way. It is understood that the voting is only for interest and to stimulate discussion: it does not result in a consensus.
In April 2008 the topic was the management of carotid artery stenosis. The patient is a 67-year-old man who has no symptoms but who is found to have a narrowing of 70-80% in one carotid artery and 20% in the other, putting him at increased risk for stroke. He has other risk factors for cardiovascular disease: hyperlipidemia, hypertension, and excess weight. The 3 options are medical management, stent placement, and carotid endarterectomy.
In appendicitis, science gives us straightforward guidance: do an appendectomy. In this case, the decision is not so straightforward. Endarterectomy will reduce his risk of stroke, but the surgery itself carries a risk that may be just as great; stenting may work as well as surgery and may be less traumatic, but so far it has only been tested in high-risk patients; medical treatment may not be quite as effective at preventing stroke, but it may be more effective overall, since it avoids surgical risks and might prevent a heart attack as well as a stroke. Three experts each give a cogent analysis, citing published evidence, and each recommends a different evidence-based option.
At the time I voted, 48% of respondents had voted for medical management, 19% for stenting, and 32% for endarterectomy. The numbers aren’t important, except to demonstrate that not all doctors are knife-happy. What’s really important is reading the 3 different expert opinions and the comments submitted and getting a sense of the complex decision-making that goes into the conscientious practice of science-based medicine. It provides a fascinating glimpse into how the medical mind works. Rather than give examples, I’d urge you to read through some of the comments yourself.
There is even more to think about than what appears in the medical journal. It can only address the patient as presented on paper in the case history. The real patient in the doctor’s office brings more baggage to the decision-making process. Allergies? Other factors that might increase the risk of surgery? A fear of surgery because his father died in the OR? An unwillingness to accept risks? A pathological fear of stroke that makes him WANT surgery no matter what? A history of poor compliance taking medications?
Then there are circumstantial considerations. Is there a surgeon in your area who is proficient and whose complication rate is low? What will the insurance company pay for? Is the patient even insured?
You can involve the patient in the decision-making process, but he may not understand all the implications, and the way you explain the options to him will influence his reaction. Sometimes when you explain too much, he may reject an option out of fear. And if you are too wishy-washy, you miss out on the potential placebo effect of a strong recommendation by a confident, hope-inspiring clinician.
We may hate to admit it, but every one of us has some degree of personal bias, whether we’re aware of it or not. There is no such thing as perfect objectivity. If there were, we would all make the same decision when given the same facts. We don’t. In the book I reviewed last week, On Being Certain, the author, Robert Burton, gives the example of a cancer patient getting exactly the same story and the same 2 options from 2 oncologists, and then asking them what they would personally choose: one said he would personally opt for surgery and the other said he wouldn’t.
Every chemist gets the same product when he mixes two chemicals under standard conditions. Every physicist gets the same answer when he measures the speed of light. One critic of modern medicine told me medicine isn’t science, because if it were science, we would get the same diagnosis and treatment from every doctor. Of course it isn’t a pure science; it’s an applied science.
Medicine gets very messy at times, because we never have all the data we need and we are all subject to the foibles of human psychology. Often the best a doctor can do is look at the available data and then make an educated guess. Sometimes in an emergency he is forced into action before any test results are available. Even when we have good information, the crystal ball won’t tell us which patient will fail to respond to treatment or have a complication. Doctors agonize about some of their decisions and lose sleep over them; it’s a big responsibility to know your actions may have life-and-death consequences.
In the face of all this uncertainty, some patients turn to alternative medicine, not realizing that they’re exchanging something imperfect for something even more imperfect. Instead of making a reasonable choice between 3 evidence-based options, they may throw the baby out with the bath water and turn to a treatment based on belief, on inadequate evidence, or even on total nonsense. To counteract that, we can do two things. We can try to help patients understand and accept a degree of uncertainty. And we can make the best possible use of the most reliable tool we have: the scientific method.
This article was originally published in the Science-Based Medicine Blog.