Doctors easily fooled by bright, shiny statistics
In a survey study published in the March 6 Annals of Internal Medicine, a large proportion of primary care doctors were easily fooled by numbers that seemed to support cancer screening but were actually irrelevant. Just as bad, they discounted the kind of data that shows a screening test genuinely saves lives.
A total of 412 primary care docs were given surveys testing their enthusiasm for two hypothetical cancer screening scenarios:
One test increased early detection and boosted 5-year cancer survival from 68% to 99%.
The other test reduced cancer mortality from 2 deaths to 1.6 deaths per 1,000 people.
The first test doesn't save lives; its apparent benefit reflects "lead time" bias, overdiagnosis, or both. "Prolonged survival" doesn't mean reduced mortality, just a longer time spent living with a diagnosis of cancer. Still, doctors loved this one: 69% of them chose it as the better test.
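To see how lead-time bias works, here's a toy illustration with one hypothetical patient (my example, not one from the study): screening moves the diagnosis date earlier, the death date not at all, and yet the survival statistic "improves."

```python
# Lead-time bias, illustrated with one hypothetical patient.
# Screening moves the date of diagnosis earlier; the date of death doesn't budge.
AGE_AT_DEATH = 70

age_at_dx_no_screening = 67   # cancer found when symptoms appear
age_at_dx_screening = 62      # same cancer found 5 years earlier by the screening test

survival_no_screening = AGE_AT_DEATH - age_at_dx_no_screening   # 3 years: not a "5-year survivor"
survival_screening = AGE_AT_DEATH - age_at_dx_screening         # 8 years: counted as a "5-year survivor"

print(survival_no_screening, survival_screening)  # 3 8
# Five-year survival "improves," yet the patient dies at 70 either way.
```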
The second test doesn't sound sexy because of the puny single-digit numbers and the "per 1,000" convention, but it means a 20% relative reduction in cancer deaths (0.4 fewer deaths per 1,000 people screened), which for a screening test would be a big effect. However, only 23% of physicians chose this option.
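If you want to check that arithmetic, here's a quick back-of-the-envelope calculation using the study's 2-versus-1.6-per-1,000 figures (the number-needed-to-screen line is my extrapolation, not a number from the paper):

```python
# Relative vs. absolute risk reduction for the mortality scenario.
deaths_without_screening = 2.0 / 1000   # baseline cancer mortality
deaths_with_screening = 1.6 / 1000      # cancer mortality among the screened

absolute_reduction = deaths_without_screening - deaths_with_screening
relative_reduction = absolute_reduction / deaths_without_screening
number_needed_to_screen = 1 / absolute_reduction

print(f"Absolute reduction: {absolute_reduction * 1000:.1f} deaths per 1,000")  # 0.4 per 1,000
print(f"Relative reduction: {relative_reduction:.0%}")                          # 20%
print(f"Number needed to screen: {number_needed_to_screen:.0f}")                # 2500
```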
Half the doctors believed that finding more cancers in a screened group than in an unscreened group "proves that screening saves lives." (It doesn't. More often, you're overdiagnosing harmless cancers, or finding cancers earlier when earlier treatment makes no difference to the outcome.)
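A made-up pair of groups (mine, not the study's) shows why a higher detection count, by itself, proves nothing:

```python
# Detection counts vs. deaths in two hypothetical groups of 1,000 people.
# Screening finds more cancers, but if the extra ones are overdiagnosed or
# treated earlier to no effect, the death count doesn't change.
screened   = {"people": 1000, "cancers_found": 10, "cancer_deaths": 2}
unscreened = {"people": 1000, "cancers_found": 6,  "cancer_deaths": 2}

extra_diagnoses = screened["cancers_found"] - unscreened["cancers_found"]
deaths_averted = unscreened["cancer_deaths"] - screened["cancer_deaths"]

print(f"Extra cancers diagnosed by screening: {extra_diagnoses}")  # 4
print(f"Deaths averted by screening: {deaths_averted}")            # 0
```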
H. Gilbert Welch wrote a brilliant and very readable book on this subject, called "Overdiagnosed." Welch's thesis is that doctors and "experts" have hoodwinked the public into believing screening is a great idea across the board, though (he argues) it's simply not. For that matter, maybe we doctors have hoodwinked ourselves, too.
Primary care doctor Steven Reznick recently came clean on KevinMD.com about the difficulty of staying up to speed on statistics and begged researchers to keep it simple -- there's not a lot of time in the day to bone up on PPVs, NNTs and relative risk ratios after seeing 30 patients, each of whom brings her own baggage about cancer screening to the 10-minute visit. It's not a fair fight, Reznick suggests: epidemiologists spend every day playing with numbers, then write jargon-laden articles that fail to spell out the key points clearly for busy and out-of-statistical-practice physicians. (But Steven -- if those folks wrote clearly and on the level of the audience they're supposedly trying to reach, it wouldn't be "academic"-sounding enough to impress their bosses, peers, journal editors, and promotion committees. That's their real audience -- not little ol' you and me.)
Wegwarth O, et al. Do Physicians Understand Cancer Screening Statistics? A National Survey of Primary Care Physicians in the United States. Ann Intern Med. 2012;156:340-349.