I read an interesting update on cancer screening today published in the January 2016 issue of the BMJ noting that: “A systematic review of cancer screening trials found that three (33%) showed reductions in disease specific mortality and that none showed reductions in overall mortality.” This means that while cancer screening lowered cancer-specific deaths in a third of the trials, it failed to lower the overall mortality in any of them. The people saved from cancer simply died at the same age from something else. In another review of a dozen cancer screening trials, the majority noted a lower cancer-specific mortality but no change, or even an increase, in the overall mortality of the group screened. In the few trials where both the cancer and overall mortalities declined, the bulk of the change was attributed to a reduction in non-cancer related deaths. The apparent conclusion is that cancer screening doesn’t save lives.
But before we toss out the baby with the bath water, maybe there’s an explanation. As it turns out, there are a couple of reasons why cancer screening might reduce cancer-specific mortality while leaving the overall mortality unaffected. First, the disease might be so rare that a reduction in cancer deaths gets lost when compared to the overall mortality. For example, if a cancer affects 1 in 50,000 people and kills half of them, then the overall mortality in the population for that particular cancer is 1 in 100,000. Let’s assume further that screening for this cancer is very effective and reduces cancer deaths by 50% (in actuality, no cancer screening is anywhere near this successful), meaning that the cancer death rate in the screened group is 1 in 200,000 versus 1 in 100,000 for the unscreened group. Lastly, let’s assume that the study enrolled 50,000 patients. In this case, we would expect to find no difference in the overall mortality between the screened and unscreened groups because the cancer is simply too rare to influence the outcome. When this happens, a study is said to be “underpowered,” meaning that not enough patients are enrolled to detect a statistically significant difference between the groups. By a “statistically significant difference,” I mean one that is unlikely to have occurred due to random chance. In the case of the Minnesota Colon Cancer Control Study, which assessed annual fecal occult blood testing over 30 years of follow-up and found no overall mortality benefit, the study enrolled 10,000 patients but would have needed 50,000 to detect a significant benefit.
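The rare-cancer arithmetic above can be sketched in a few lines of Python. All of the numbers here are the hypothetical figures from the example in the text, not data from any real trial:

```python
# Back-of-envelope check of the hypothetical rare-cancer trial described
# above. All figures are the post's illustrative assumptions, not trial data.

ARM_SIZE = 25_000              # 50,000 enrolled, split evenly into two arms
RISK_UNSCREENED = 1 / 100_000  # cancer death risk without screening
RISK_SCREENED = 1 / 200_000    # risk halved by (unrealistically good) screening

expected_unscreened = ARM_SIZE * RISK_UNSCREENED  # expected cancer deaths
expected_screened = ARM_SIZE * RISK_SCREENED
deaths_averted = expected_unscreened - expected_screened

print(f"Expected cancer deaths, unscreened arm: {expected_unscreened:.3f}")
print(f"Expected cancer deaths, screened arm:   {expected_screened:.3f}")
print(f"Expected deaths averted:                {deaths_averted:.3f}")
# Expected deaths averted: 0.125 -- a fraction of a single death. Against the
# hundreds of all-cause deaths in each arm, this difference is invisible, so
# the trial is underpowered for an overall mortality endpoint.
```

Even halving the cancer death rate, a 50,000-patient trial would be expected to avert about an eighth of one death, which is why no overall mortality difference could plausibly be detected.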
Another reason why cancer screening might reduce cancer deaths without reducing overall mortality is that there may be rare deaths due to the screening procedure itself, or it could be that the cancer treatment increases the risk of death from another cause. An example of the former is a patient who undergoes a colonoscopy, suffers a perforation of the intestine, develops peritonitis, and subsequently dies. Currently, the death rate associated with colonoscopy is 30 per 100,000. An example of the latter might be a case where the chemotherapy used to treat the cancer is cardiotoxic and the patient develops congestive heart failure and dies. In rare cases, the diagnosis of cancer is so devastating that a major depression ensues and the patient commits suicide. If the increased mortality related to screening, diagnosis, and treatment equals the mortality decline associated with early cancer detection, then the overall mortality will remain unchanged, even in an adequately powered study. In the case of PSA testing for prostate cancer, screening leads to an increase in harm without a change in overall mortality, and likely no change in cancer-specific mortality either (the latter remains under debate).
Much of the harm comes from over-diagnosis, wherein early screening detects cancers never destined to grow and metastasize, thereby subjecting patients to unnecessary chemo, surgery, and radiation, not to mention the psychological terror that accompanies a cancer diagnosis. Over-diagnosis occurs both with PSA testing for prostate cancer and with mammography for breast cancer. In both cases far more people are harmed than helped. It’s interesting to note that the public markedly overestimates the benefit of screening while far underestimating the harm. In one study, more than two-thirds of women thought mammography would lower their chance of developing breast cancer (mammography can only detect cancer after it occurs; it prevents nothing). More than 60% believed that screening would cut the risk of cancer in half, and 75% thought that 10 years of screening would prevent at least 10 breast cancer deaths per 1,000 women screened. The actual numbers are 1 breast cancer death averted per 1,000 women screened for 10 years (dropping from 5 deaths to 4), at the expense of 300 to 500 false positive mammograms and 10 cases of breast cancer over-diagnosis. Based on this, the Swiss medical board decided not to recommend routine screening mammography. None of the US medical societies have followed suit, but maybe they should. There are no easy choices, because there are no easy answers. One thing is for sure: limiting your risk by not smoking, eating well, and exercising often will do more to keep you healthy than a screening colonoscopy, mammogram, or PSA test. Stay healthy, my friends.
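The mammography ledger above can be tallied directly. These are the per-1,000-women figures quoted in the text, not a new analysis:

```python
# Outcomes per 1,000 women screened for 10 years, using the figures quoted
# in the post (not a new analysis).

deaths_without_screening = 5       # breast cancer deaths per 1,000, no screening
deaths_with_screening = 4          # breast cancer deaths per 1,000, screened
false_positives_low = 300          # low end of false positive range
false_positives_high = 500         # high end of false positive range
over_diagnosed = 10                # cancers treated that never would have harmed

deaths_averted = deaths_without_screening - deaths_with_screening
min_harmed = false_positives_low + over_diagnosed
max_harmed = false_positives_high + over_diagnosed

print(f"Deaths averted per 1,000 women:   {deaths_averted}")
print(f"Women harmed per 1,000 (range):   {min_harmed} to {max_harmed}")
# Roughly 310 to 510 women endure a false positive or an over-diagnosis for
# every single breast cancer death averted.
```

Framed this way, the trade-off the Swiss medical board weighed is stark: hundreds harmed for each life extended.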
For more on this topic, see my prior posts: “I’m skeptical about … PSA screening” (12/12/16); “I’m skeptical about … screening mammograms” (12/4/16); and “I’m skeptical about … screening colonoscopies” (11/24/16).
Vinay Prasad, Jeanne Lenzer, and David Newman, “Why Cancer Screening Has Never Been Shown to ‘Save Lives’—And What We Can Do About It,” BMJ 2016;352:h6080, doi:10.1136/bmj.h6080.