The Quantified Impact of Reputation on the U.S. News Best Hospital List


League tables like those published by U.S. News and World Report should probably be taken with a pinch of salt in any case, but it is the self-marketing of these tables that is just a bit problematic.

USNWR underlines that “rankings were developed…to help consumers determine which hospitals provide the best care…” and are based on “hard data.” That may be a stretch.

There is a 131-page methodology paper available on the USNWR website. It is probably safe to assume that most users will not study it in great detail, if they open it at all. Many of the careful considerations underpinning the Specialty Scores are discussed at length, but there is less transparency, probably on purpose, when it comes to what is really under the hood of the USNWR ranking algorithm.

The influence of the individual components of the scoring system on the final Specialty Score depends not only on the weights assigned to them but also on the variation in the underlying data. Criteria that most readers would agree should go into any reasonable ranking, such as access to advanced technologies, FACT accreditation, or an intensivist on staff, hardly vary among the top 50 hospitals: they have all maxed these out!

In other words, these items do not contribute to the ranking. The patient safety scores, by contrast, do vary quite a bit among hospitals, but they are assigned an overall weight of just 5 percent of the total score. So, again, the safety scores contribute little to the final ranking.
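To make the point concrete, here is a minimal sketch in Python, using made-up weights (apart from the 5 percent safety weight mentioned above) and toy data rather than the actual USNWR inputs. A component that every top-50 hospital maxes out adds the same constant to every composite score, so it cannot change the ordering, no matter how heavily it is weighted:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # top-50 hospitals (toy data)

technology = np.ones(n)                   # every hospital maxes this out: zero variance
patient_safety = rng.normal(0.7, 0.1, n)  # varies, but carries a small weight
reputation = rng.lognormal(0.0, 1.0, n)   # varies a lot

# Hypothetical weights, except for the 5 percent safety weight
w_tech, w_safety, w_rep = 0.30, 0.05, 0.65

composite = w_tech * technology + w_safety * patient_safety + w_rep * reputation

# Dropping the zero-variance component subtracts the same constant from every
# score, so the ordering of the 50 hospitals does not change at all.
rank_with = np.argsort(-composite)
rank_without = np.argsort(-(composite - w_tech * technology))
assert (rank_with == rank_without).all()
```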

What really varies is the Reputation with Specialists score, and this is the parameter in the ranking table that shows by far the strongest association with the final ranking of the top 50 hospitals.

In fact, among the 18 data items listed in the table, only the reputation score shows a (highly) significant correlation with hospital ranking after adjusting for multiple comparisons. Giving relatively high weight to the reputation score is a clever move for ensuring that the final rankings have face validity: it essentially guarantees that the hospitals that most respondents expect to find at the top of the table are going to end up there!

Allowing respondents to name only their top five cancer centers compounds the problem. As MD Anderson and Memorial Sloan Kettering are, understandably, both named on more than 60 percent of the respondents' lists, there are not many votes available to be cast for the rest of the centers. And most other centers are likely to be mentioned only by regional respondents and have limited nationwide repute.
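The correlation claim above is straightforward to check along these lines. The sketch below is not the analysis actually performed; it assumes the ranking table has been loaded into a pandas DataFrame, and the column names are placeholders rather than the real USNWR headers. It simply shows one way to compute the Spearman correlation between each data item and the final rank, with a Bonferroni adjustment for the 18 items tested:

```python
from scipy.stats import spearmanr

def correlate_items_with_rank(df, rank_col="rank", alpha=0.05):
    """Spearman correlation of each data item with the final rank,
    Bonferroni-adjusted for the number of items tested."""
    items = [c for c in df.columns if c != rank_col]
    m = len(items)  # e.g., the 18 data items in the ranking table
    results = {}
    for col in items:
        rho, p = spearmanr(df[col], df[rank_col])
        results[col] = {
            "rho": rho,
            "p_raw": p,
            "p_bonferroni": min(1.0, p * m),
            "significant_after_adjustment": p * m < alpha,
        }
    return results
```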

The methodology paper explains how the reputation score is logarithmically transformed to reduce the dominance of the highest scores. While this is mathematically sound, it does not do much to reduce the large effect of the reputation score on the final ranking, simply because the dominating hospitals are so far ahead of everyone else. What the log transformation does in practice is amplify relatively small differences in reputation scores in the low-to-intermediate range.
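A toy example makes this easy to see; the reputation percentages below are invented for illustration and are not USNWR data:

```python
import numpy as np

# Invented reputation percentages for six hypothetical centers, best to worst
reputation_pct = np.array([65.0, 62.0, 12.0, 8.0, 3.0, 2.0])
log_score = np.log(reputation_pct)

print(np.diff(reputation_pct))  # raw gaps:  -3, -50, -4, -5, -1
print(np.diff(log_score))       # log gaps (approx.): -0.05, -1.64, -0.41, -0.98, -0.41
```

On the raw scale, the two front-runners are three points apart while the bottom pair differ by a single point; on the log scale, the front-runners are separated by only about 0.05 while the bottom pair are about 0.41 apart. The transformation compresses the top of the scale and stretches the bottom, but either way the front-runners remain far ahead of the field.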

So, while the cancer scores are data-driven, the data item driving the top 50 rankings is the reputation score rather than differences in performance metrics.

One final point: survival is a major concern for any patient with cancer, but it is important to note that the survival metric included in the USNWR assessment is 30-day mortality. While this shows some variability even after renormalization of the data, the truth is that 30-day mortality is in the low single digits for virtually all procedures. And variations in this low risk will have a limited impact on a patient's long-term prognosis.
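A back-of-the-envelope calculation, with figures invented purely for illustration, shows why: a one-percentage-point gap in 30-day mortality can move long-term survival by at most about that same amount.

```python
# Hypothetical figures for illustration only
m30_a, m30_b = 0.02, 0.03   # 30-day mortality at two imaginary centers
surv_conditional = 0.60     # assumed 5-year survival, given survival past 30 days

surv_a = (1 - m30_a) * surv_conditional
surv_b = (1 - m30_b) * surv_conditional
print(round(100 * (surv_a - surv_b), 1))  # about 0.6 percentage points of 5-year survival
```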

Having said all this, it did put a smile on my face when I saw that University of Maryland had gone up to #21 in this year’s rankings!

The author is division director of biostatistics and bioinformatics, director of the Biostatistics Shared Service, and a professor in radiation oncology, epidemiology and public health at the University of Maryland.

Søren Bentzen
Division director, biostatistics and bioinformatics, director, Biostatistics Shared Service, professor, radiation oncology, epidemiology, and public health, University of Maryland
