publication date: Oct. 2, 2020
Trials & Tribulations
The cancer detection rate—a public health approach to early detection
Joshua J. Ofman, MD, MSHS
Chief medical officer and external affairs,
Back in the 1960s, the American Cancer Society first began promoting the Pap smear as an effective means of cervical cancer screening. A decade later, early detection of breast cancer through mammography became mainstream.
By the 1990s, colorectal cancer screening had been shown to be effective, and this decade, screening for lung cancer was found to reduce mortality. Despite this progress, which in each case took massive effort and excruciatingly long clinical studies, cancer is predicted to become the world’s number one killer.
This is not a failing of the existing screening approaches as much as it is a product of the fact that most cancers that eventually claim people’s lives are ones we do not screen for, and they are only detected when signs or symptoms are present, usually signifying advanced disease.
In reality, the various single cancer screening tests, combined with their respective rates of compliance and test performance, result in only approximately 15% of the 1.3 million cancers diagnosed each year among those aged 50-79 being detected early. And finding those cancers is inefficient: $25 billion is spent annually to identify approximately 206,000 cancers while generating nearly 8.7 million false positive results.
What, then, to do about all of the other cancers we still need to detect? The traditional medical view holds that, when it comes to cancer screening, it is test sensitivity that should be maximized.
If someone has cancer, the screening test should find it. But when it comes to specificity, or its mirror image, the false positive rate, it is okay if screening tests fall far short of perfection. Managing “false positives” falls under the art of medicine, and doctors and patients can deal with them, the argument goes.
This high-sensitivity, suboptimal-specificity approach to cancer screening has worked until now, despite the high burden of false positives it produces, because cancer screening has been pursued tumor type by tumor type. But it has also meant that each new screening approach has taken decades to be adopted into a reliable workflow: infrastructure and care maps had to be created, in particular to mitigate the harms from those tests’ false positives.
But if we view cancer morbidity and mortality as a public health problem rather than a clinical one, the paradigm shifts. In that respect, our problem is not unlike population management of the novel coronavirus, where it is widely agreed that we need to dramatically increase testing and detection so that we can get control of this public health crisis.
To do this, we need to open the aperture from just looking at test characteristics (like sensitivity) and begin to look at infection detection rates in the population. The same approach needs to be taken with cancer. It is well recognized that improving early cancer detection may be the only way to really put a dent in the cancer mortality curve.
Some may assume that we aren’t screening for these cancers because we don’t have treatments. But that is not correct: nearly all cancers have effective surgical, radiation, or drug treatments available, even at early stages.
So, what if we developed a different approach? What if we could transition from screening for individual cancers and start screening individuals for all their cancers? What if we dramatically improved overall cancer detection? What if we tracked the Cancer Detection Rate (CDR) in the population?
First, let’s define the CDR. It is the number of cancers detected divided by the number of cancers expected in the population monitored. This could be applied to health systems, metropolitan statistical areas, states and countries. So, it is a population sensitivity measure normalized for cancer incidence.
Using the U.S. as an example, if the population is 107 million Americans between age 50-79, the CDR for mammography would be 9%, because it detects approximately 117,000 cancers of the 1.3 million expected. Similarly, with stool-based colorectal cancer screening, the CDR is about 6% (69,000 detected). So, even when all five single cancer screening tests are combined, the CDR is approximately 16% (206,000 detected), and it is clear that, while an enormous accomplishment, this alone will not bend the cancer mortality curve or address the public health crisis that is cancer.
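As a sketch, the arithmetic behind these CDR figures can be made explicit. The counts below are the approximate figures cited above (the text rounds some results); the helper name `cdr` is just for illustration:

```python
# Cancer Detection Rate (CDR) = cancers detected / cancers expected
# in the monitored population. The counts are the article's approximate
# U.S. figures for adults aged 50-79 (1.3 million cancers expected/year).

EXPECTED_CANCERS = 1_300_000

detected_by_test = {
    "mammography": 117_000,
    "stool-based colorectal screening": 69_000,
    "all five single cancer tests combined": 206_000,
}

def cdr(detected: int, expected: int = EXPECTED_CANCERS) -> float:
    """Population-level cancer detection rate."""
    return detected / expected

for name, detected in detected_by_test.items():
    print(f"{name}: CDR = {cdr(detected):.1%}")
```

Because the denominator is cancers expected in the whole population, not just people screened, the CDR captures compliance and coverage as well as raw test sensitivity.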
With the genomic revolution and advances in machine learning, there are now several multi-cancer early detection (MCED) tests near commercial use, and the CDR may be the right way to assess our national progress.
The MCED test from GRAIL, with validation results recently published in the Annals of Oncology, can detect over 50 cancer types with good sensitivity and a false-positive rate of less than 1%. And there are other tests in development that can detect 8-10 cancer types.
The majority of cancers have no currently recommended screening; before they cause clinical symptoms, they are detected only by happenstance. That means that even an average multi-cancer sensitivity of 30-50% for some early-stage cancers is a step-change improvement. Such a test would be used in concert with existing single cancer screening tests, via an annual blood draw, which approximately 70% of Americans aged 50-79 already receive each year.
If everyone took the annual blood test that detects 50 cancers in addition to current screening, our calculations estimate that it could produce a CDR of 50% for all cancers and 75% for the deadliest cancers (i.e., those with 5-year survival of less than 50%).
Why is MCED such a profound idea? Because developing and testing a new screening approach for each individual cancer, then building capacity to manage the downstream complications and false positives, is unworkable.
The new MCED tests take advantage of aggregate cancer prevalence and low false-positive rates to dramatically improve the predictive value of a positive blood test, performing nearly an order of magnitude better than many single cancer tests in terms of the cancer detection rate. Because these tests detect signals common across cancers, they may by nature catch low-incidence cancers for which cost-effective single cancer screening tests would never be developed.
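To see why aggregate prevalence and a low false-positive rate lift predictive value, a back-of-the-envelope Bayes calculation helps. The prevalence, sensitivity, and specificity values below are illustrative assumptions for a sketch, not figures from any published validation study:

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(cancer | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative single cancer test: ~0.6% prevalence of one tumor type,
# high sensitivity, ~10% false-positive rate.
single = ppv(prevalence=0.006, sensitivity=0.85, specificity=0.90)

# Illustrative multi-cancer test: ~1.3% aggregate prevalence across
# many tumor types, lower sensitivity, <1% false-positive rate.
multi = ppv(prevalence=0.013, sensitivity=0.50, specificity=0.99)

print(f"single cancer PPV ~ {single:.0%}")
print(f"multi-cancer  PPV ~ {multi:.0%}")
```

Under these assumed inputs, the multi-cancer test’s broader prevalence base and tighter specificity make a positive result several times more likely to be a true cancer, even though its per-cancer sensitivity is lower.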
These tests may miss some cancers, so they must be used in addition to existing single cancer screening. But today there is no approach to early detection of most cancer killers, and so, from a population health perspective, even a 50% average sensitivity across cancers could lead to the discovery of many cancers prior to their clinical diagnosis, potentially at earlier stages where treatments are more effective and cure is possible.
The advent of technological innovation provides an opportunity for us to evolve our approach. But we need to learn the lessons from decades of cancer research and the public health challenges posed by COVID-19. Just as we track COVID-19 infection rates, detection rates, and death rates by city, state, and nation, we need to embrace the CDR, and track our progress in early cancer detection.