Why process matters: Considering the ACS colon cancer screening guideline


In its just-published guideline on screening for colorectal cancer (CRC), the American Cancer Society revised its 2008 recommendations, now advising that screening begin at age 45 instead of 50.

This may turn out to be quite reasonable, but, as ACS noted, it is a “qualified recommendation,” one that deserves further scrutiny before being widely implemented.

Starting at age 45 was considered, but not adopted, by the US Preventive Services Task Force based on the modeling that the NCI Cancer Intervention and Surveillance Modeling Network (CISNET) performed in development of the task force's 2016 revision of its CRC screening guidelines.

It would be instructive to patients, providers, and payers to have further discussion among the principals—modelers and guideline-makers—to more clearly understand the reasons different guideline decisions were reached as well as the tradeoffs involved in starting at the lower age.

Organizations with fixed populations like HMOs and the VA may face difficult tradeoffs if colonoscopy and other testing resources must be reallocated for more primary screening.

ACS is to be commended for using quantitative assessment of outcomes, including benefits and harms, in developing these guidelines, even though the rules for how that quantitative information was used in decision-making were not pre-specified and clear.

Indeed, over the years ACS has used widely varying processes for making guidelines. Such variability, amounting to a lack of foundation, can itself cause problems.

In the previous iteration of the CRC guidelines, in 2008, ACS developed its "rules of evidence" for guideline-making during the course of deliberations rather than beforehand, and the two rules it created were qualitative and did not consider long-term outcomes. Decades earlier, ACS had used detailed quantitative models developed by David Eddy.


In contrast, over decades the USPSTF has developed and extensively published a set of rules, to which it has strived to adhere, about how to synthesize evidence and judge its quality, including a quantitative conceptual framework for assessing the consequences of different decision choices.

ACS should be encouraged to develop detailed, pre-stated "rules of evidence" to use in all its assessments, and to map its decision-making to those rules, as all guideline-makers should.

In 2018, the danger of having rules that are unwritten or too flexible is that guideline-makers may be tempted to "work backwards" from conclusions they feel pressured to reach.

In contrast, having transparent, quantitative, logical, and fair criteria and rules, agreed on upfront, will help ensure trust in guidelines among the public, providers, and payers, as discussed by the Institute of Medicine (now the National Academy of Medicine) in its 2011 report "Clinical Practice Guidelines We Can Trust."

Guideline publications could be strengthened by expecting major organizations, when they disagree, to routinely explain in detail how and why their differences arose, whether from changes in evidence or changes in rules.

Last, guidelines could be improved by external peer review, conducted by appropriate arm's-length reviewers in the same way scientific papers are reviewed, for clarity, logic, transparency, and adherence to quality standards.

David Ransohoff
Professor of medicine, clinical professor of epidemiology, University of North Carolina at Chapel Hill
