This post was originally published on January 21, 2013
The year after I moved to Washington, D.C., I visited an ophthalmologist for a routine vision examination and a prescription for new glasses. Since undergoing two surgical procedures to correct a "lazy eye" as a child, I hadn't had any issues with my eyesight. Part of my examination included measurement of intraocular pressures, a test used to screen for glaucoma. Although my work for the U.S. Preventive Services Task Force was in the future, I already understood that there was little evidence to support performing this test in a low-risk young adult. Not wanting to be a difficult patient, though, I went along with it.
My intraocular pressures were completely normal. However, the ophthalmologist saw something else on her examination that she interpreted as a possible early sign of glaucoma, and recommended that I undergo more elaborate testing at a subsequent appointment, which I did a couple of weeks later. The next visit included taking many photographs of my eyes as I tracked objects across a computer screen, as well as additional measurements of my intraocular pressures. These tests weren't painful or very uncomfortable, but they made me anxious. Glaucoma can lead to blindness. Was it possible that I was affected, even though no one in my family had ever been diagnosed with this condition? Fortunately, the second ophthalmologist who reviewed my results reassured me that the tests were normal, and admitted that they had probably been overkill in the first place. "Dr. X [the first ophthalmologist] is a specialist in glaucoma," he said, by way of explanation. "Sometimes we tend to look a little too hard for the things we've been trained to see." (I appreciated his candor, and he has been my eye doctor ever since.)
I was reminded of this personal medical episode while reading a recent commentary on low-value medical care in JAMA Internal Medicine by Craig Umscheid, a physician who underwent a brain MRI after questionable findings on a routine vision examination suggested the remote possibility of multiple sclerosis, despite the absence of symptoms. Although Dr. Umscheid recognized that this expensive and anxiety-inducing test was low-value, if not worthless, he went along with it anyway. "Despite my own medical and epidemiologic training," he wrote, "it was difficult to resist his [ophthalmologist's] advice. As my physician, his decision making was important to me. I trusted his instincts and experience."
If physicians such as Dr. Umscheid and me didn't object to receiving what we recognized as too much medical care, it should not be a surprise that, according to one study, many inappropriate tests and treatments are being provided more often, not less. Among men age 75 and older, 5.7% received prostate cancer screening in 2009, compared to 3.5% in 1999. Thirty-eight percent of adults received a complete blood count at a general medical examination in 2009, compared to 22% in 1999. Forty percent of adults were prescribed an antibiotic for an upper respiratory infection in 2009, compared to 38% in 1999. (If you usually have blood counts done at your physicals or swear by the Z-Pak to cure your common cold, we can discuss offline why both of these are bad ideas.)
One of the obstacles to reducing unnecessary medical care (also termed "overuse") is that, outside of a limited set of tests and procedures, physicians and policymakers may disagree about when care is going too far. The American Board of Internal Medicine Foundation's Choosing Wisely initiative is a good start, but its lists consist of low-hanging fruit accompanied by caveats such as "low risk," "low clinical suspicion," and "non-specific pain." To a clinician who feels for whatever reason that a certain non-recommended test or treatment is needed for his patient, these qualifications amount to get-out-of-jail-free cards. It's easy to say that payers should simply stop paying for inappropriate and potentially harmful medical care, but as a recent analysis from the Robert Wood Johnson Foundation explains, this is much easier said than done. If a panel of specialists convened to review the medical care that Dr. Umscheid and I received, would they unanimously deem it to have been too much? I'm doubtful.
Similarly, although endoscopy for uncomplicated gastroesophageal reflux disease is widely considered to be unnecessary, that didn't stop an experienced health services researcher from undergoing this low-value procedure after a few days of worsening heartburn. Comparing her personal experience to the (superior) decision-making processes that occur in veterinary medicine, Dr. Nancy Kressin wrote in JAMA:
Until patients are educated and emboldened to question the value of further testing, and until human health care clinicians include discussions of value with their diagnostic recommendations, it is hard to foresee how we can make similar progress in human medicine. Patients may be fearful that there is something seriously wrong that needs to be identified as soon as possible, they are often deferential to their clinicians' greater knowledge of the (potentially scary) possibilities, and some patients want to be sure that everything possible is done for them, without recognizing the potential harms of diagnostic tests themselves, the risks of overdiagnosis, or the sometimes limited value in knowing the cause of symptoms in determining the course of therapy.
Regardless of future insurance payment reforms, both doctors and patients will have key roles to play in recognizing when medical care is too much. More widespread uptake of shared decision-making, while hardly a panacea, would call attention to the importance of aligning care with patients' preferences and values, and to the need for decision aids that illustrate the benefits and harms of often-overrated interventions. Changing a medical and popular culture that overvalues screening tests relative to their proven benefits may be more challenging. A survey study published last month in PLOS ONE affirmed previous findings that patients are far more enthusiastic and less skeptical about testing and screening than they are about medication, even though the harms of the former are often no less than those of the latter. I agree with the authors' conclusions:
Efforts to address overuse must involve professional medical associations, hospital systems, payers, and medical schools in modifying fee-for-service payment systems, enabling better coordination of care, and integrating lessons about overuse into training and continuing education. But the preferences of active patients nonetheless merit attention. Both the mistrust of pharmaceuticals and the enthusiasm for testing and screening reflect individuals’ efforts to take care of their health. The challenge is to engage patients in understanding the connection between over-testing and over-treatment, to see both as detrimental to their health, and to actively choose to do less.