publication date: Nov. 22, 2019
Conversation with The Cancer Letter
Learning to harmonize:
Ten health care research organizations tell us how they formulated common definitions for real-world endpoints
Jeremy A. Rassen
President, chief science officer, Aetion
Robert S. Miller
CancerLinQ, American Society of Clinical Oncology
Mark S. Walker
Chief scientific officer, Outcomes Science & Services, Concerto HealthAI
Andrew Norden
Chief medical officer, COTA
Nicole Mahoney
Senior director, Regulatory Policy, Flatiron Health
Nancy A. Dreyer
Chief scientific officer, senior vice president, IQVIA Real-World Solutions Center for Advanced Evidence Generation
Jennifer B. Christian
Vice president of clinical evidence, IQVIA Real-World Solutions Center for Advanced Evidence Generation
Lawrence H. Kushi
Director of scientific policy, Division of Research, Kaiser Permanente Northern California
Sarah Alwardt
Vice president of data, evidence and insights operations, McKesson Life Sciences
Jonathan Hirsch
Founder and president, Syapse
Gary Palmer
Chief medical officer, Tempus
We asked the leadership of 10 companies to share their visions of the future of data sharing, describe their portfolios in real-world evidence, and opine on what it would take to convince FDA to accept real-world endpoints in regulatory decision-making in oncology.
The new common definitions of clinical endpoints were recently published as part of a pilot study led by Friends of Cancer Research, with input from FDA and NCI.
With FDA guiding the creation of the pilot methodologies and definitions in the Friends endeavor, data companies and research organizations that participate gain a real advantage. By having a say in the process, they shape the development of these endpoints and definitions, perhaps ensuring that these elements correspond with the strengths of their respective data sets.
The Cancer Letter’s questions focused on how the collaboration was structured, which problems were being solved, and which questions have yet to be answered.
Matthew Ong, associate editor of The Cancer Letter, asked all the companies the same 10 questions.
What does your organization excel at? In terms of data, what do you provide that is unique compared to your competitors and other health IT companies?
Jeremy A. Rassen, Aetion:
We excel at providing transparent, reliable, and replicable real-world evidence for answering high-stakes questions. For any given question, you start with raw data, ready these data for analysis, analyze, and arrive at transparently-reported results. We bring unique expertise to data selection and transformation, to the analysis itself, and to the reporting that allows for analysis transparency and reproducibility—all guided by the principles that provide for regulatory-grade evidence.
Further, many analyses—particularly in oncology and with rare diseases—require several raw data sources to get at the answer to a given question. Working seamlessly with multiple data sets, each with its unique characteristics (such as possible data missingness), requires what we call “data fluency,” a critical capability for ensuring appropriate selection and transformation of real-world data. Many companies are mono-lingual, if you will—they speak the language of their proprietary data set only, which is insufficient for many questions in oncology.
Robert S. Miller, CancerLinQ:
CancerLinQ is the only physician-led, big data platform in cancer and contains comprehensive longitudinal clinical data from over 1.3 million cancer patients. This growing body of data represents a large cross section of cancer care in the U.S.—a geographically diverse mix of academic, health-system, and physician-owned practices, from 10 different EHR systems.
Mark S. Walker, Concerto:
Concerto HealthAI has best-in-class expertise in creating research-ready, publications-grade data products developed from electronic medical record, genomic, claims, and patient reported outcomes data.
We refer to these products as being “use-case engineered,” a novel approach where the data fields and sources are optimized to specific analyses or solutions. We complement this expertise with our AI/machine learning technologies, and study design and analytic services built on decades of experience working with real-world data.
In health data, 95% of records are considered unstructured. Consequently, our approach to working with electronic medical record data has emphasized going very ‘deep’ into the record, to extract information ordinarily available only in prospective data collection, and then integrating this information into structured and engineered data products and services that yield actionable information.
Concerto HealthAI understands how data and technology can be engineered together to enable insights and actions in the most devastating and rare diseases. This involves designing and delivering research-ready, publications-grade data products for all major solid tumors and hematological malignancies.
We are the leading company for advancing AI and machine learning methods for use with those data, allowing predictions and insights into specific patients and patient cohorts to inform new clinical study and clinical trial designs.
Andrew Norden, COTA:
Using technology-enabled human abstraction techniques, COTA takes real-world patient data, hidden and fragmented within EHRs, and curates and organizes it such that clinicians can gain meaningful insights to make better decisions at the point of care—while also reducing costs.
This curated data powers COTA’s CNA, a patented cohorting technology that groups clinically similar patients so a physician can understand how they respond to various treatments as well as their associated outcomes. This allows for a clearer understanding of which treatments result in the optimal outcome for a specific patient cohort.
The clinical depth of COTA’s data is unmatched. With access to both academic and community-based cancer centers, COTA’s EHR agnostic technology-enabled and human abstraction process makes sense of all relevant aspects of the patient journey, including data in physician notes, pathology, radiology, surgical reports, genomic testing results and referral documentation—to develop a longitudinal patient record and comprehensive picture of care.
Increasingly, COTA’s regulatory-grade RWD is being used in clinical trials to develop external control arms (also known as synthetic control arms) with the goal of obviating the need for enrolling concurrent controls in certain circumstances. This has the potential to reduce the time and cost of the clinical development effort, which can take as many as 10 years and cost hundreds of millions of dollars. Most importantly, it benefits patients, because no patient wants to receive a mediocre standard-of-care treatment or placebo when, alternatively, there is an opportunity to receive a promising experimental agent.
Nicole Mahoney, Flatiron:
Flatiron is more than a data vendor. We’re bringing together clinical, statistical, analytical, and regulatory capabilities tailored to our partners’ research and regulatory needs. We have years of experience collaborating closely with partners across the health care ecosystem and with FDA, which informs our approach to data quality and analytical methodologies.
In terms of our data offerings, we have access to de-identified patient level records at the source via our electronic health record and partnerships with our network of providers, which enables timely and scalable integration of clinically relevant real-world data. Furthermore, our data curation and analytical approaches are not a black box—they are transparent, with traceability of data to the source to generate evidence that is reliable.
Nancy A. Dreyer, IQVIA:
IQVIA distinguishes itself in the industry by our combination of unparalleled data, advanced analytics, transformative technology and deep domain expertise. We are good at putting it all together.
We are scientific leaders who know how to generate scientific evidence about the effectiveness and safety of medical products in conditions of real-world use and how they perform in comparison to other available diagnostic or therapeutic choices. We work with regulators in major markets to create innovative ways to generate the necessary evidence to support new medicines that are safe, effective and affordable.
Our scale and depth of expertise allows us to provide fit-for-purpose research using multi-country clinical and pharmacy data to conduct clinical trials and/or prospective epidemiologic studies, including direct-to-patient research. These diverse tools and assets allow us to use randomization where needed, to collect data from clinicians following protocol-driven care and to use real-world data when it is likely to reliably capture the events of interest, as appropriate. We have an exciting portfolio of scientific tools.
Lawrence H. Kushi, Kaiser:
Kaiser Permanente differs from many other organizations in that it is an integrated health care system in the full meaning of the term “integrated”. That is, it is a health insurance provider, and the people who have Kaiser Permanente insurance also receive care from Kaiser Permanente providers in Kaiser Permanente facilities.
From a health care data availability, research, and analytics perspective, what this means is that, as researchers affiliated with Kaiser Permanente, we have access to the full range of clinical and administrative data, across the full spectrum of care that someone may receive. Thus, we can conduct health services research based on data across the full spectrum of cancer care, from primary prevention to end-of-life care. We can examine not just aspects of active oncology care and treatment, but also clinical encounters related to primary care, cancer screening, or comorbid conditions such as those related to cardiology or endocrinology. We can leverage electronic health records and insurance records.
This differs from most other groups that are trying to contribute in the cancer and health IT space to improve cancer care. These groups fall broadly into two categories: those that have access to health insurance claims data—OptumLabs is an example—and those that have access to detailed electronic health records documenting the cancer care experience. Flatiron Health or ASCO’s CancerLinQ are examples of the latter. The former typically do not have access to the EHR data from the multiple health care systems in which they provide health insurance coverage; the latter typically do not have information about care outside the oncology experience, or have it only in a relatively limited fashion regarding time period or services covered, and may need to rely on claims data from multiple insurers to fill in these gaps in data about clinical care.
In terms of the way Kaiser Permanente is organized, I sit in one of its research groups. Each Kaiser Permanente health care region has a research group, and I’m part of the Division of Research in KP Northern California. These research groups are very similar to academic research units or departments. We’re largely a soft money operation, funded primarily through grants and related mechanisms, and with minimal financial support from our parent organizations. So, we’re not directly part of the health care or insurance provider side of Kaiser Permanente, although we continue to seek ways to better enhance the role of research in Kaiser’s mission. But as part of Kaiser Permanente, we do have access to clinical and administrative data for research purposes. And so, that’s the context in which we are participating in the Friends of Cancer Research initiative.
Just one example of how these distinctions play out in data harmonization and variable definition in the Friends of Cancer Research effort is how we defined who was eligible to be included in a particular analysis. And so, everyone basically said, “Okay, if they’ve had at least two encounters within a defined time period, then we have reasonable confidence that they’ve been in that health care system so we can follow them for immunotherapy receipt and outcomes,” whether it’s the EHR-rich oncology practice group, or health insurance claims data.
In our case, we don’t actually define potential data availability in that way. We can define it based on the health insurance that they have enrolled in. Because we’re fully integrated, if an insured person seeks clinical care, they will do so through one of our facilities. And so, we can define eligibility for a given analysis based on enrollment periods.
A rich data source like Optum could, in theory, do that, except the care that people receive could be at multiple different institutions or health care systems. And so, they only have the claims-level data from multiple different health care providers that aren’t linked, except through their claims.
Note that we did align our eligibility definition with the other participating groups, based on number of visits, and it aligns well with both our enrollment approach and what the other groups ended up doing.
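The encounter-based eligibility rule described above can be sketched in code. This is an illustration only, not any participating group's actual implementation; the 90-day window, two-encounter threshold, and function name are assumptions:

```python
from datetime import date

# Illustrative sketch only: flag a patient as "in the system" if they
# have at least `min_encounters` encounters within `window_days` days.
# The 90-day window and 2-encounter threshold are assumed values, not
# the definition any pilot group actually used.
def eligible(encounter_dates, window_days=90, min_encounters=2):
    dates = sorted(encounter_dates)
    # Slide over sorted encounters; any run of `min_encounters` visits
    # spanning <= window_days makes the patient eligible for follow-up.
    for i in range(len(dates) - (min_encounters - 1)):
        if (dates[i + min_encounters - 1] - dates[i]).days <= window_days:
            return True
    return False

print(eligible([date(2018, 1, 5), date(2018, 2, 10)]))  # True: 36 days apart
print(eligible([date(2018, 1, 5), date(2018, 9, 1)]))   # False: > 90 days apart
```

An enrollment-based definition, as Kaiser Permanente uses, would instead test whether the analysis window falls inside a continuous insurance-enrollment period.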
Sarah Alwardt, McKesson:
I think one of McKesson’s strongest advantages is that our iKnowMed oncology practice EHR system was not only built for oncology, but it’s continually improved by practicing oncologists.
So, when we think about the data that are structured and the data that are captured through the hard work and the clicks of the oncologists, we know a number of very important clinical features are captured for us to be able to extract and readily analyze—stage, performance status, physician-documented line of therapy, and even diagnosis-naming. So, without having to either curate or algorithmically derive, there’s a large breadth of information that we can get directly from oncologists.
We work with all 10 of the top 10 biopharma companies, and 18 of the top 20, because the other two aren’t really in oncology. We also work with smaller biopharma companies. McKesson’s data, evidence and insights business is an entirely externally-focused organization working with biopharma.
Jonathan Hirsch, Syapse:
Syapse excels at making sense of messy real-world data in a health system environment, and enabling our health system and life sciences partners to use that RWE to improve care for patients. From a data standpoint, we believe these are the things that set us apart.
Comprehensiveness: Since we work with large integrated health systems, we are able to capture much more of the patient’s longitudinal care journey, including their direct cancer care and their non-cancer care (e.g. their cardiac care). We believe this is critical to developing a full understanding of patient outcomes.
Representativeness: We work with providers across the U.S. and South Korea in many settings of care, including traditionally underserved and underrepresented communities. This provides a fuller and more representative picture of the cancer population.
Molecular: Syapse pioneered an interoperability solution for molecular data, allowing us to work directly with testing labs to structure and normalize molecular results at scale. The integration of molecular and clinical data is critical to realizing the vision of precision medicine in oncology.
Gary Palmer, Tempus:
Real-world data comprises various types of data from a diverse set of sources, including electronic health records, claims data, prescription data, and patient registries. The differences in health care systems, national guidelines, and clinical practice have driven different content.
Tempus not only has deep competency in combining these disparate datasets, but also pairing clinical data with molecular data from tumor/normal matched DNA sequencing, whole-transcriptome RNA sequencing, and immunological biomarker measurements to discover unique insights that can inform treatment decisions.
What are your main takeaways from the Friends of Cancer Research pilot projects?
The recently-released Friends white paper covers a ton of important ground regarding the use of external control arms to augment single-arm studies in oncology. The paper goes all the way from basic methodology to a fully-worked out case study. It’s an impressive effort on the part of Friends and all the stakeholders who participated in its creation.
In terms of analytic takeaways in the external control arm white paper, we as a group discussed and addressed a number of the challenges that come up when creating external control arms, and detailed a case study where external controls led to substantially the same result as randomized controls.
Taking a step back, you can glean a few meta-themes from Pilot 2.0 and the related white paper: first, the importance of collaborative work among stakeholders including sponsors, data holders, analytic experts, regulatory agencies, and groups like Friends.
Second, we’re starting to see the power of using RWE to transform how we understand the performance of new cancer therapies, by allowing us to compare against standards of care that are meaningful to regulators, payers, clinicians and patients.
Third, echoing what was said at a Friends meeting earlier this year, we’re seeing that we can build upon the understanding offered by traditional RCTs to investigate non-traditional endpoints that can help all stakeholders—but most importantly, patients—to support thoughtful choices about what treatment is best for how an individual wants to approach their care.
Through these projects, we can develop and implement common endpoints across different real-world data sources. We have demonstrated that the different sources of data can yield fairly similar results regarding patient outcomes.
Another unique aspect is that this project is exploring non-traditional endpoints, such as time to next treatment (TTNT) and time to treatment discontinuation (TTD), that do show promise as potential alternate clinical endpoints to progression-free survival and others commonly used in clinical trials. TTNT and TTD may provide more clinically relevant endpoints because they are related to reasons that patients and clinicians alter clinical care, taking into account toxicity, efficacy, and other factors.
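As a purely illustrative sketch of how TTD and TTNT can be derived from treatment records (the data layout, censoring rules, and function names below are assumptions, not any pilot participant's actual code):

```python
from datetime import date

# Hypothetical per-patient treatment history: an ordered list of
# (line_of_therapy_start, line_of_therapy_end) date pairs, where
# end is None if the patient is still on that line at data cutoff.

def time_to_treatment_discontinuation(lines, data_cutoff):
    """TTD: days from start of first line to its last administration.
    Returns (days, censored); patients still on therapy are censored."""
    start, end = lines[0]
    if end is None:  # still on therapy at data cutoff
        return (data_cutoff - start).days, True
    return (end - start).days, False

def time_to_next_treatment(lines, data_cutoff):
    """TTNT: days from start of first line to start of the next line.
    Censored if no subsequent line has been observed."""
    start = lines[0][0]
    if len(lines) < 2:
        return (data_cutoff - start).days, True
    return (lines[1][0] - start).days, False

# Example: first line ran Jan 10-Mar 20, 2018; second line began May 1.
history = [(date(2018, 1, 10), date(2018, 3, 20)),
           (date(2018, 5, 1), None)]
cutoff = date(2019, 1, 1)
print(time_to_treatment_discontinuation(history, cutoff))  # (69, False)
print(time_to_next_treatment(history, cutoff))             # (111, False)
```

Both endpoints rely only on treatment dates that routine care records capture well, which is part of their appeal relative to progression-free survival.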
Friends of Cancer Research is creating a unique and valuable community of expertise and assets to advance novel approaches for oncology research. It is creating its own “network effects,” as different teams with different approaches can cross-reference each other, thereby accelerating progress.
Different data sources can yield similar patterns of findings across endpoints—and that is what we saw with the Friends of Cancer Research project. The value here is that we can achieve insights into specific populations or diseases, across different data sources that are comparable—thus increasing the confidence and the utility of those different data sources alone or in combination.
The work also shows that variation in the underlying sources of data, the time frame covered by those data, geographic differences in the data, and the availability of unstructured medical records, all can affect the results of analyses in ways that may not always be obvious. Here too, we can see where these different sources may be comparable or optimized to specific analyses. Essentially, we’re accelerating the understanding of the data sources fit for specific analyses and for many analyses.
The results of the pilot study showed alignment across different data sources and datasets even though companies were sourcing data differently—through EHRs, claims, tumor registries, and the like. This consistency helps prove the validity of real-world data across applications, spurring a need to understand the broad range of ways it can be used to support clinical research.
Importantly, this effort showed that it’s possible for real-world oncology data organizations to align on considerations for identifying patients across diverse types of sources, from claims-based datasets to EHRs. We were also able to align on high-level definitions for real-world endpoints, and identify important data elements that need to be collected in order to help answer a specific clinical question.
The pilot 2.0 work is important, and is still in the preliminary stages. Additional analyses are needed to help understand differences observed among cohorts derived from different datasets. The differences likely reflect variability in characteristics of the different data sources, such as granularity of information captured or “data depth,” differences in the underlying populations, or even selection criteria for how a patient is included in an EHR-derived cohort versus a claims-derived cohort. Further analyses are needed to better address the differences and understand how the data may be comparable.
Jennifer B. Christian, IQVIA:
The goal of the Friends 2.0 pilot is to understand where and how RWE can be used to evaluate treatment effectiveness in lung cancer. To implement the 21st Century Cures Act, the FDA needs to better understand when real-world data can be trusted and determine the situations where real-world approaches can inform drug approvals and label expansions.
Through this work, we keep learning more about how RWE differs from traditional RCT data. The care a patient receives in an RCT is not the same as a patient receives in routine care, and the findings from RCTs are not necessarily generalizable to the real world. RWE, which is more reflective of routine clinical care, is complementary to RCTs, and the evidence derived from both sources provides a clearer picture to understand the benefits and risks of treatments.
One main takeaway is that, as Dr. Ned Sharpless mentioned at the meeting in September, there is a “tsunami” of data from health care systems that are now available or becoming available. However, an associated takeaway is that the types of data differ, as demonstrated by the different organizations that are participating in this Friends of Cancer Research effort. There’s undoubtedly a lot that we can learn about cancer care, how it’s being delivered in the real-world setting, and determining the real-world effectiveness of care in populations that aren’t necessarily in clinical trials.
Also, just learning about other aspects of health care delivery, whether it’s disparities or the transition from active care to surveillance and long-term impact. Not that we’ve looked at any of those in this particular work with Friends of Cancer Research, but I think that all those things are possible given the types of data that are becoming much more readily available.
I think this Friends of Cancer Research effort is a good way of demonstrating the different strengths of different types of data sources and how they can, despite these differences, at least on clearly defined questions, come up with results that look fairly similar.
Of course, the results do vary from setting to setting, and the next step that we have to work on is, “Okay, why do they vary?” Some of the obvious things are, for example, the populations probably differ a bit, such as in age range. So, we need to explore this in the next steps of what we’re doing currently. But I think that, yes, there’s a big opportunity.
I should mention that I used to run an NCI-funded grant, which no longer has funding, called the Cancer Research Network. The CRN was basically a consortium of research groups, like the one I’m affiliated with, that are attached to health care systems, to support the conduct of cancer research in these settings.
And they were all integrated health care systems, at least in some core part. It included several Kaiser Permanente regions, Marshfield Clinic, Henry Ford, Geisinger, Health Partners, and a couple of others. I mention this partly because of the types of data that are available now, especially with the implementation of EHRs, which was pushed partly by federal legislation and partly by advances in technology, resulting in these data being available.
In the CRN context and in other data settings, people have said, “Oh, this is great. We can potentially better identify people to enroll in clinical trials.” And sure, that’s right. But that’s only one application of these types of data. There are other health services and epidemiology, cancer-care delivery type of research that could be done.
One of the ways that I have sometimes thought about it has been, “Okay, we’ve got a research question related to cancer care, it might be appropriate for a clinical trial. Great. Go through the cooperative groups and do that.” Or it may not be. And if not, maybe it’s possible to look at it in these various settings, whether it’s Kaiser Permanente or the Cancer Research Network, or Flatiron Health or OptumLabs. Let’s make sure the question dictates what types of study designs and analytic approaches should be applied—not everything has to be a clinical trial—and data that are appropriate or necessary are used. So, some questions, yes, clinical trials. Let’s do that. Other questions, maybe not.
And then there’s the whole area that I think Friends of Cancer Research has been interested in and the FDA is interested in, which is, given that there are therapies that have been approved and are out there in clinical use, what’s their real-world effectiveness? And of course, these data provider settings, whether they are claims data, detailed oncology data, or integrated health care systems data, are where these questions could potentially be addressed. These are real-world data that could be examined to generate real-world evidence on real-world effectiveness.
I would say that, in the way that the FDA requires certain types of data and data elements and monitoring for clinical trials, that probably can’t be done in the same way in the real world, so to speak. But there are probably ways of examining these real-world data that could really inform long term surveillance, long term health effects and whether drugs such as these immunotherapies are working, have the same types of outcomes, or identify long-term unintended effects, in different populations.
We’re really excited to be part of that pilot. For one, it was nice to be in a room full of people who are thinking similarly—and, as I call myself, a “real-world data evangelist”—and understanding that real-world data will be important to not only the decisions we’re making now, but even more important to the decisions that we’ll be making in the future.
So, it was a good opportunity to have a rising-tide-raises-all-boats moment across the industry.
I think that it outlined a few things that we’ll continue to need to work on, and that is understanding standards. I think that for organizations and regulatory bodies to trust real-world data more, it’s not necessarily about certifying “the entirety of the dataset,” but about ensuring the dataset you’re using for a particular analysis is fit-for-purpose.
So, fit-for-purpose was definitely the phrase that we heard a lot, and making sure that everything that was outlined in the FDA framework for real-world evidence around general reliability, quality, and transparency is achieved, but starting to get into, “What does that actually mean?” and “How is that to be defined?”
I think we have a long way to go in that regard, but this was definitely a first step in thinking about how different our data sets are across a number of organizations and where we can start to find commonality to be able to use these data for the benefit of patients.
This was an important effort to demonstrate that leading organizations in oncology real-world evidence can develop common definitions for cohorts and endpoints, conduct similar analyses, and come together to discuss results.
While this was not a formal validation study, it was a significant demonstration of how far the field has come in a few short years, and an illustration of the hard work in front of all of us to mature the use of real-world evidence in outcomes research, clinical decision-making, and regulatory decision-making.
The heterogeneity in data sources, the composition of data types, and the curation practices or provenance cascade at each partner organization could introduce variable amounts of missingness, bias, and confounding in the underlying data, and thereby variation in results. However, the pilot found more similarities between the groups than variation.
The different groups agreed on common definitions, but my understanding is that the analyses were done independently. Is it important to talk not only about validation and evaluation of endpoints, but also about the quality and transparency of the data and analyses?
Yes. Pilot 2.0 helped to realize the vital importance of aligning on key questions upfront, such as variable definitions. As you note, we agree on the importance of processes to track and document the preparation and use of data at each stage of evidence generation. This is a central feature of our Aetion Evidence Platform, in which fully archived and auditable logs record all transactions and provide comprehensive versioning of the data, including data history, provenance, linkages, and transformations.
It was important for us to be in sync on validating and evaluating the endpoints, as well as have discussions about differences we were seeing that might relate to data type, source, population, and quality. The groups had frequent calls and emails to work through the details of the endpoint definitions.
Even after our first review of the results, we found that we needed to regroup and discuss again some details of the metrics and approaches for defining endpoints and approaches for censoring. We also asked each group to report censoring fractions per endpoint to help with understanding the completeness and quality of the data.
As a pre-condition for the project, all participants in the Friends of Cancer Research work agreed that quality and transparency of methods were important.
All parties attempted to implement the analysis in the same way, following an agreed-upon plan, but we are also documenting any ways in which the implementation may have varied from that plan. Given the different participants and data sources, this sort of understanding is critical to have effective sharing and to advance the field with confidence.
Quality and transparency have been consistent parts of the Friends research discussion. We have had numerous discussions about topics ranging from partner rules on data suppression for small sample sizes to population distributions within research partners, differences in abstraction methods and types of data, analytical techniques and processes used by partners, and sources of data. Many of these topics and others will be discussed in upcoming manuscripts and congress discussions. The work presented in September represented only a small portion of the group’s extensive collaborative work.
COTA does believe establishing quality and transparency standards for data and analyses is an important precursor to being able to expand the utilization of different types of RWD for regulatory decisions. COTA has done extensive work to develop a three-pronged approach to ensure data quality and transparency based on our interactions and discussions with industry partners, life science partners and regulatory bodies.
Data quality and transparency are critically important factors and underpin the interpretability and reliability of RWE studies. A main objective of this project is to align on how to evaluate data quality/reliability.
Given the different types of data sources included in this project, we will need to think about what data quality and completeness mean in the context of routine care (as opposed to prospective randomized clinical trials), and how those criteria are measured using different data sources such as EHR or claims. The steps we’ve collectively taken so far as part of this pilot research project will set a foundation for future work on data quality.
Data reliability metrics are also a focus of the broader RWD/RWE stakeholder community and regulators who are working together to define best practices. Flatiron contributed to a collaborative effort by the Duke Margolis Center for Health Policy’s RWE Collaborative to help identify a minimum set of data quality checks to evaluate whether RWD are reliable and may be fit for use. Those recommendations are described in a paper published in September.
The Friends pilot may provide an opportunity to discuss how the underlying quality of specific data elements may impact the outcomes we observe. For example, date of death is not always captured in real-world clinical settings. Given that incomplete information on death can skew overall survival analyses, data organizations have to link or supplement information with external sources. The impact of incomplete death data highlights the importance of benchmarking it to the gold standard, which is the National Death Index, to generate quality metrics, such as sensitivity.
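To make the benchmarking idea above concrete, here is a minimal, purely illustrative sketch of computing the sensitivity of death capture in a real-world data source against a gold standard such as the National Death Index. The records, field names, and numbers are invented for illustration; they do not come from any data partner in the pilot.

```python
# Hypothetical sketch: estimating the sensitivity of death capture in a
# real-world dataset by benchmarking against a gold-standard source such
# as the National Death Index (NDI). All records and field names are
# illustrative, not from any actual data partner.

def death_capture_sensitivity(records):
    """Sensitivity = deaths captured in the RWD source / deaths per the gold standard."""
    gold_deaths = [r for r in records if r["ndi_death"]]
    if not gold_deaths:
        return None  # no gold-standard deaths, metric undefined
    captured = sum(1 for r in records if r["ndi_death"] and r["rwd_death"])
    return captured / len(gold_deaths)

# Toy cohort: 4 deaths per the NDI, 3 of which the RWD source captured.
cohort = [
    {"patient": "A", "rwd_death": True,  "ndi_death": True},
    {"patient": "B", "rwd_death": False, "ndi_death": True},
    {"patient": "C", "rwd_death": True,  "ndi_death": True},
    {"patient": "D", "rwd_death": True,  "ndi_death": True},
    {"patient": "E", "rwd_death": False, "ndi_death": False},
]
sensitivity = death_capture_sensitivity(cohort)  # 3 of 4 NDI deaths captured
```

A low sensitivity here would flag exactly the problem described: incomplete death capture that could skew overall survival analyses.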
Absolutely. Being transparent about the quality of data sources is very important in evaluating and validating endpoints. Transparency begins by describing the data sources, but goes further by characterizing the missingness of each variable used in the analysis.
Many organizations participating in the 2.0 pilot project were not able to capture certain clinical tests or generate endpoints such as progression-free survival, because these data are not routinely recorded in clinical practice, not captured in the medical records, or not accessible from them. Some of the findings from this project will focus on characterizing the data that are captured well in real-world sources and the data that are not routinely recorded.
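Characterizing the missingness of each variable, as described above, can be as simple as a per-field summary. This is a minimal sketch with invented field names and records, not any partner's actual quality-check procedure.

```python
# Illustrative sketch of a per-variable missingness summary for a
# real-world dataset. Field names and records are invented.

def missingness(rows, fields):
    """Return the fraction of records with a null value, per field."""
    n = len(rows)
    return {f: sum(1 for r in rows if r.get(f) is None) / n for f in fields}

rows = [
    {"stage": "IV",   "smoking": None,     "death_date": None},
    {"stage": None,   "smoking": "former", "death_date": "2019-03-01"},
    {"stage": "IIIB", "smoking": "never",  "death_date": None},
]
report = missingness(rows, ["stage", "smoking", "death_date"])
```

A report like this, shared alongside results, is one way to make the transparency described above routine.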
You’re right. We each did our analyses separately and then we sent the tabular results to Friends of Cancer Research, and then they put them together in tables that could be compared. We haven’t actually combined or pooled those results in any way across all the different groups.
We’ve had some experience with actually pooling individual-level data, in research projects across the CRN health care systems, for example. And probably some of these other groups have done the same in their subsets. For example, Syapse partners with several different health care systems; I don’t know if they’ve pooled data across them or not, but they are positioned to be able to do so.
I think that defining the variables in the same way to the extent possible, that’s great. Then we can go back to our various data sources and look at the same analysis and describe the populations in the same way.
We have tried in our calls to be as open and transparent as possible about any hurdles we’re running into or questions that we have about the distributions of population characteristics, operationalizing variable definitions, and things like that, but we haven’t seen individual-level data from other participants, and that’s partly because we’re really doing this in an underfunded manner, let’s put it that way. But this also serves as a benefit, not only for privacy, but also because each of us represents a replication of the analyses in a different health care or data setting.
So, yes, it’s contributed time and not necessarily the primary focus of anything that any of us are involved in. I will say that Friends has been great in terms of helping to guide this whole effort.
Our discussion started with, “How do we even define real-world time to treatment discontinuation?”, which is a good place to start, among a thousand definitions, but we have a way to go into how we perform that analysis, and some of it is due to the fact that the data sets are fundamentally really different.
The example that I tend to use is—this was non-small cell lung cancer—for the demonstration for our pilot, if you’re in a claims data set, there’s not an ICD-10 code for non-small cell lung cancer. So, just out of the gate, you’re trying to think about, “How am I determining that this patient even fits the inclusion criteria for the study?” So, in Pilot 3.0 we’re going to go beyond just definitions and think about more, such as how we censor the data and how we address data bias.
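The definitional questions raised above can be made concrete with a toy sketch. One of many possible real-world time-to-treatment-discontinuation rules treats a long gap between drug administrations as discontinuation; the 120-day gap threshold and the visit dates below are invented for illustration and are not the definition the pilot settled on.

```python
from datetime import date

# Illustrative sketch of one possible real-world time-to-treatment-
# discontinuation (rwTTD) rule: treatment is considered discontinued
# once a gap between administrations exceeds a threshold. The 120-day
# gap is a made-up parameter, not the pilot's agreed definition.

GAP_DAYS = 120

def rw_ttd_days(admin_dates):
    """Days from first administration to the last one before a qualifying gap."""
    dates = sorted(admin_dates)
    last = dates[0]
    for d in dates[1:]:
        if (d - last).days > GAP_DAYS:
            break  # gap exceeds threshold: treat as discontinued
        last = d
    return (last - dates[0]).days

visits = [date(2018, 1, 1), date(2018, 1, 22), date(2018, 2, 12),
          date(2018, 8, 1)]  # long gap => discontinuation after Feb 12
```

Every choice here, the gap length, how ties are handled, whether death censors the endpoint, is exactly the kind of decision the groups had to align on.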
The groups came together to construct shared definitions and methodologies, which was a significant and important undertaking. Each group quality checked their own data and conducted their own analyses, providing analysis results to Friends to pool. It is critical to discuss the quality of the underlying data and the analyses in order to determine the appropriateness of using real-world evidence to answer these types of questions.
We at Syapse have been publishing and will continue to publish, in collaboration with our health system partners, through consortium efforts such as Friends, and directly with the FDA throughout our research collaborations.
Analyses were completed independently to ensure data protection, using pre-specified, congruent common data elements and statistical analysis plans. Tempus agrees that transparency is critical to ensure confidence in RWE results, and that it is important to investigate the fitness-for-purpose and quality of the underlying data, along with any variability in the population characteristics and/or methodological assumptions made during the analysis. The partner organizations are carefully reviewing the data and conducting additional analyses to investigate this, and plan to share findings and lessons as part of the Friends of Cancer Research collaboration.
It seems that the project would need to focus on stratifying the patient cohort to validate endpoints, and then benchmarking real-world patient outcomes against results from equivalent traditional clinical trials. Who will be in charge of these efforts, and do you see them being done as part of the Friends collaboration?
We look forward to working closely with Friends on upcoming projects, which I’m sure will address a number of “next step” questions raised in the course of Pilot 2.0. Aetion will continue to enable RWD analyses across one or multiple data sets, and support thoughtful design and application of study methodologies.
Friends of Cancer Research has begun conversations about performing subgroup analyses as a follow-on phase to the initial work presented at the public meeting. Stratification is a necessary next step to understand the differences in the populations so this is a high priority for CancerLinQ. The benchmarking to clinical trial results will be completed as a separate project. ASCO and Concerto do not plan to participate in the clinical trial comparison because of other priorities.
A secondary objective of the work will examine a subset of patients who match inclusion criteria for one of the pivotal trials that formed the basis for the real-world study. Several of the participants in the Pilot 2.0 study are engaged in this work, with ongoing support from the Friends leadership.
Norden, COTA: Collaboratively, under the Friends 2.0 project, we are working on exactly what you are proposing. During the Blueprint Forum in September, the second panel highlighted the approach we are taking to validate both a real-world data framework and real-world outcomes in advanced NSCLC. A sub-group within the larger collaboration is working on developing a manuscript based on this work.
Additionally, we at COTA are doing some exciting work on real-world outcome validation in hematology and solid tumors that we hope to be able to disseminate in the near future. We also have recently embarked on a two-year research collaboration with the FDA where we will also be working to advance knowledge in this area.
The goal of the pilot project is not to directly compare the results from RWD studies to clinical trial results. Rather, the question we’re seeking to address is: Can real-world endpoints be used to characterize differences between available interventions? In pilot 2.0, clinical trials will serve as context for this question.
As a next step, Flatiron and some other pilot 2.0 participants intend to identify real-world cohorts that more closely resemble those from the clinical trials by applying as many I/E criteria from the clinical trials as possible, then will compare outcomes within real-world datasets to determine if real-world endpoints can detect differences across treatments. Future work may include additional analytic approaches to make the real-world cohorts more comparable in order to discern differences by interventions.
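Narrowing a real-world cohort with trial-style inclusion/exclusion criteria, as described above, amounts to filtering patients on a handful of fields. This sketch is purely illustrative; the criteria, thresholds, and patient records are invented, not the actual criteria of any pivotal trial.

```python
# Hypothetical sketch of narrowing a real-world cohort using clinical-
# trial-style inclusion/exclusion (I/E) criteria. The criteria and
# patient fields below are invented for illustration.

def meets_ie_criteria(p):
    return (
        p["diagnosis"] == "advanced NSCLC"
        and p["age"] >= 18
        and p["ecog"] <= 1            # performance-status cutoff, common in trials
        and not p["prior_systemic"]   # first-line only
    )

real_world = [
    {"id": 1, "diagnosis": "advanced NSCLC", "age": 64, "ecog": 1, "prior_systemic": False},
    {"id": 2, "diagnosis": "advanced NSCLC", "age": 71, "ecog": 2, "prior_systemic": False},
    {"id": 3, "diagnosis": "advanced NSCLC", "age": 58, "ecog": 0, "prior_systemic": True},
]
trial_like = [p for p in real_world if meets_ie_criteria(p)]  # only patient 1 qualifies
```

As the Kaiser Permanente respondents note later, some trial criteria simply are not recorded in routine care, so only a subset of criteria can ever be applied this way.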
As with the other aspects of the Friends’ pilot study, we expect that pilot participants will align on common methods for analyses and conduct studies on their own data. Friends of Cancer Research is managing the project.
Initially, the group wants to compare the findings presented at the most recent Friends 2.0 pilot meeting across all data partners, including characterizing the study populations and comparing overall survival, time to next treatment and time to treatment discontinuation rates. IQVIA and some other data partners are also planning to conduct additional sensitivity analyses using clinical trials as benchmarks for comparison.
Friends will continue to lead this work, and we anticipate that these subsequent analyses will continue through early next year. We expect to see differences in the RWD across data partners. It is necessary to understand the extent of these differences, the drivers responsible for them, and whether they are artifacts of data recording or meaningful differences in benefits and risks among the populations.
Beyond the 2.0 project, there is a need to further evaluate this framework in other cancers, other therapeutic areas and in countries outside of the U.S. IQVIA, in collaboration with Friends and Health Data Insight in the U.K., plans to use Public Health England’s Cancer Analysis System to facilitate comparisons with the U.K. national registry data on lung cancer patients.
Yes. Those are actually two things that we have talked about. One is benchmarking against the results of clinical trials, specifically in this particular area, these PD-(L)1 inhibitors, and advanced NSCLC.
One of the interesting things, getting back to the criteria by which you need to think about who gets into a clinical trial—Friends did go through recent trials and showed us, “Okay, these are the inclusion criteria.” But some of them are not things that are captured in EHR data or claims data. These are things that you would ask someone specifically because you’re potentially enrolling them in a clinical trial, and you wouldn’t necessarily ask them in a clinical context.
As a result, we can’t really directly replicate the populations that were enrolled in a clinical trial. We can for key criteria such as the cancer diagnosis or age, but not necessarily some of the things like, “Are you pregnant or planning on becoming pregnant?” Note that this is just a hypothetical example; I don’t recall if this was an explicit exclusion criterion in these trials.
We may be able to pull up the first part if they had a child in a subsequent time period, but we certainly wouldn’t know if people are planning to become pregnant. Maybe we could look at family planning or reproductive health visits or something like that, but intent to become pregnant is not something that we would routinely be able to capture. Obviously, that doesn’t necessarily apply for all clinical trials in terms of inclusion criteria, but for some it might.
So, not all the groups will necessarily be able to contribute to a replication of a clinical trial population, because they may be missing more critical information, but we are collectively going to contribute where possible.
So, an example, in our setting and in most claims or EHR-based settings, we don’t have good capture of disease progression, a common outcome of interest in clinical trials.
It’s likely documented somewhere, but it’s usually in text notes or captured in varying ways depending on cancer type. In lung cancer, it’s probably captured through imaging, but we couldn’t necessarily say, “Okay, this is the particular imaging encounter that resulted in the recurrence or progression being identified.” It might also be inferred from subsequent initiation of therapy for advanced cancer. So, the information’s there, but we decided, because of the level of funding and effort we could devote, that we weren’t going to attempt any chart review to confirm recurrence or progression to advanced stage. But a group like Flatiron, where they’re actually going through all the records on a routine basis and through relevant text fields, could identify progression.
So, something like that, even though you would think would be a clinical endpoint of interest, is not necessarily routinely capturable in structured data. And the extent to which it is varies from cancer to cancer. And as a result, we or other groups may not be able to conduct a relevant comparison to results from a clinical trial.
I do. From the experience that we had, I think that this was a safe place to do this together, as a whole that’s greater than the sum of the parts.
Now, I will say that, independently, many of us are competing in the market today and continue to have our own opinions, versions, and positions published as we’re moving forward. Hopefully, what happens is a convergence of these things, so that ultimately—if we look forward to a day when the FDA establishes the framework—it is done looking at the totality of the universe of data, and not picking a pony.
I think that that’s where some of the opportunity really is for all of us. I focus on community oncology, but even the academic centers and large data partners and some of the up-and-coming new tech companies have their opportunity to share their thoughts and opinions. I think it will be great. It will be more powerful with that broader group than we can accomplish individually.
Friends has the right people to do it. I think that there are other groups that could try to do this with data that ultimately would be maybe less successful, only because of the strong leadership that Friends has, and really pushing this to remain focused on why this is important. I think that other data groups or other data consortiums can think about this, but Friends has that singular oncology focus.
We agree, and this is an effort that Syapse and the FDA are undertaking as part of our research collaboration. We look forward to sharing the results of this work.
The collaboration does plan, if feasible and time permits, to include additional analyses of real-world patients that match clinical trial eligibility requirements in order to assess whether real-world data can more closely align with clinical trial results and conclusions. Studies are needed to establish how real-world endpoints relate to more traditional regulatory endpoints.
If not included in Pilot 2.0, subsequent pilots could be developed and convened by Friends of Cancer Research. It is also possible that RWE results more closely mimic “truth,” or what is really happening, than clinical trial data, which by necessity come from a highly selected patient population.
Outside of this collaboration, how is your organization using real-world evidence? Also, who is paying for it?
The Aetion Evidence Platform is being used by biopharma companies, payers, and regulators to conduct real-world evidence studies that answer questions about which treatments work best for which populations. Aetion works with 12 of the top 20 biopharmaceutical companies, leading payers, and regulatory agencies including the FDA and EMA.
Besides the Friends collaboration, Aetion is also using RWE to help the FDA test where this kind of data can—and cannot—be used appropriately. The FDA’s Framework for Real-World Evidence Program included a reference to a landmark study, RCT DUPLICATE, being led by researchers at Brigham and Women’s Hospital using Aetion’s platform, to demonstrate the value of real-world evidence as an accelerant to drug approval.
Researchers are seeking to replicate the results of 30 randomized clinical trials that were used for FDA approval decisions to see whether the incorporation of real-world evidence would have led to the same regulatory decision. This year, the RCT DUPLICATE study was expanded to predict the results of seven additional Phase IV clinical trials that are ongoing.
CancerLinQ generates real-world data as a secondary byproduct from the data collected from practices, which is used primarily for quality improvement and clinical care. We make available real-world data sets via CancerLinQ Discovery® for academic, government, and non-profit users. These data sets are accessed in a controlled cloud-based environment and are not downloaded. Commercial customers obtain access to real-world datasets through the TEMPRO licensees.
The creation of real-world datasets is funded through CancerLinQ’s operating budget. Customers of CancerLinQ Discovery pay to receive access to specific fit-for-purpose datasets after approval of their research proposal by our Research and Publications Committee. As recently announced, we will soon be making CancerLinQ Discovery datasets available to research customers through a customized version of the American Heart Association’s Precision Medicine Platform.
Concerto HealthAI focuses on research questions that can advance meaningful innovations for patients and provide confidence in the current treatment approaches that bring the greatest benefits to specific subpopulations.
Essentially, we are creating the tools—engineered real-world data and AI solutions—that are enabling precision oncology in practice. Concerto HealthAI uses real-world data to address a wide range of research questions of interest to health care providers, patients, life science companies, payers, and academic researchers. Some of this work is funded in partnership with life science companies, some is grant-funded, and other research is internally funded by Concerto HealthAI.
Many of our research projects are done through collaborations, such as those we presented at the 2019 ASCO annual meeting around the outcomes in patients with autoimmune disease, typically excluded from checkpoint inhibitor clinical trials—this being done with ASCO and FDA.
COTA has adopted a multi-pronged approach partnering with providers, life science companies, payers, the FDA, and others to bring clarity to the incredibly complex disease of cancer.
We work with major academic cancer centers to abstract and curate clinical data. COTA organizes this real-world, fragmented EHR data, transforming it to a clinically rich, longitudinal dataset. Institutions are using this data to unlock insights and transform care practices. Additionally, COTA is the exclusive partner in preparing NJ provider organizations to enter value-based oncology arrangements. We provide these organizations with clinical insights that cannot be gleaned from claims data alone.
Earlier this year, COTA signed a two-year Research Collaboration Agreement with FDA to establish a study protocol with an initial focus on breast cancer. The primary objective of the collaboration is to enhance our understanding of the real-world experience of cancer patients with an eye toward determining how best to use this experience in regulatory decision-making.
In addition to this work, COTA is supporting various life science companies in accelerating and augmenting clinical trials with RWD. In one case, the company has multiple clinical trials that received positive guidance from the FDA on the use of RWD given the rare trial populations as well as trial design. COTA is working collaboratively with the company to build the RWD inclusion/exclusion criteria, data models, and the relevant cohorts to the agreed upon data models. COTA anticipates that the RWD will be submitted to the FDA in multiple malignancies over the coming months.
In line with our mission, we believe that enabling the use of our de-identified datasets will help the entire cancer community advance research to find new, better therapies for patients. We license our real-world oncology de-identified datasets to researchers (typically for a fee) and to government agencies and non-profit organizations (at no charge) to accelerate cancer research.
IQVIA embraces opportunities to use RWE approaches to innovate drug approvals, to advance personalized medicine and to ultimately improve the lives of patients. We work with a variety of stakeholders from pharmaceutical and biotech companies to regulators, payers and clinical institutions around the world to develop approaches that allow us to better understand the benefits, risks, and costs of therapies and devices.
For example, IQVIA is collaborating with the FDA on its Sentinel initiative, the agency’s national electronic system, which uses electronic health care data to monitor the safety of FDA-regulated medical products.
Working with Deloitte Consulting, we are part of the Community Building and Outreach Center, which will focus on broadening awareness, access and use of Sentinel tools and data infrastructures. Moreover, IQVIA is working with other entities, such as the National Football League and the National Basketball Association, which are also interested in real-world data to monitor player health.
There are a couple of things to note. One is that, in research groups such as the one I’m affiliated with, the Division of Research for Kaiser Permanente Northern California, most of our funding comes from extramural grants, projects not funded by Kaiser Permanente. The NCI, of course, is one of our primary cancer-related funders, and NIH in general funds much of the Division’s research projects—I think over half our funding comes from NIH grants. And then we have other federal or state funding, foundation grants, and some industry-related funding, as well as some directed internal Kaiser Permanente funds for specific projects.
We have about 60 researchers in our group who conduct research across a broad spectrum of conditions, not just cancer. The other research groups affiliated with other Kaiser Permanente regions are a bit smaller, but similarly conduct research across many different areas. The work that we do is largely driven by the grants that we receive, most of which are investigator-initiated, although some are contracts.
When we do partner with an industry group, they might be interested in the type of question we are pursuing with Friends—“what’s the long-term effect of some pharmaceutical or device?”—and that would be pretty focused. I’ve actually not participated directly in any industry-funded initiatives, but for example, we did one of the validation studies for Oncotype DX and how the recurrence score is associated with survival after breast cancer. This was one of the first two main studies that were done in that arena.
As another example, the NCI gave the CRN some funds to look at cardiotoxicity after getting anthracyclines and other cancer agents that may have cardiac effects. The rates of these cardiac events were substantially higher in the older age group than you would surmise, just based on clinical trials. These were patients who were not actually in clinical trials, because they were older and outside the age eligibility range. I was not directly involved in this project, and Erin Aiello Bowles of Kaiser Permanente Washington led a manuscript on these findings that was published in JNCI. Thus, the possibility and magnitude of the side effects of treatment might be different from what you see in clinical trials.
Another example is in cancer screening. Kaiser Permanente researchers are leading and participating in some of the major research projects on cancer screening in health care systems. One of the examples I like to give in this area—again, I was not involved in this, and this work was led by Dr. Doug Corley at Kaiser Permanente Northern California—showed that there’s wide variation in adenoma detection rates among gastroenterologists, which is the proportion of patients in their patient panel in which they detected at least one adenoma during colonoscopy. The ADR ranged from less than 10% to 50%.
More importantly, they then saw that ADR was also directly tied to subsequent 10-year colon cancer incidence. That is, the patients of gastroenterologists who had relatively high adenoma detection rates had colon cancer rates that were about half those of patients of providers with low ADR. And there was a direct linear relationship of decreasing risk with higher adenoma detection rates.
That’s obviously not evaluating something that was looked at in an oncology clinical trial and seeing how it works in the real world. Instead, it’s taking data that are available in these types of health systems databases, with EHR data that documents what types of providers are seen, the procedures that are done and their results, and linking up with internal cancer registry information—all of which is available in the Kaiser Permanente setting. And in this case, discovering that, yes, this measure—the adenoma detection rate—makes a big difference on colorectal cancer rates, as big as anything in the colorectal cancer treatment space.
So, if you have a gastroenterologist, you probably want one that actually has experience finding those little adenomas that might be missed by some of their colleagues. This has led to more training of gastroenterologists, and is a direct example of real-world evidence informing a learning health care system, and of the type of analyses that can be done from data that are now available. So, it’s real-world evidence that impacts cancer-related care. This was published in the New England Journal of Medicine about four years ago.
My team is funded through grants, governmental funding bodies, and biopharma and academic collaborations.
We use the data in two ways. On one side, it’s important and it’s one of our mandates that everything that we do is from a real-world evidence-generating standpoint—we enter into it with intent to publish. Over the last couple of years, the team has published more than 200 papers in conjunction with our biopharma partners. Now, the benefit of this and where we add some uniqueness is that every study we conduct has an actively practicing oncologist from The US Oncology Network (The Network) as a principal investigator. To give you some background, The Network brings together more than 1,200 independent physicians, forming a community of shared expertise and resources dedicated to advancing local cancer care and to delivering better patient outcomes.
So, we’re ensuring that the clinical questions that we’re answering are relevant and that the answer will be important to their practice. What ends up happening is as we continue to develop these papers and posters and manuscripts, that information works its way back into The Network. And so, there is a greater understanding of real-world performance in the real-world data in addition to the trial data—The US Oncology Network is, to a high degree, trialists, also.
They’re very familiar with the trial side, but they also understand that with real-world data, there’s going to have to be a way to do this. I was speaking with a physician recently who specializes in lung cancer, and he said, “No one wants to do a trial with the control of chemotherapy, because no one wants to put their patients on chemo in lung cancer anymore, unless they have to.”
No one wants to do that. But they’re still frequently presented with that as the standard, as the control. And the thing is, we know what happens there. I mean, why do we still need to be using that as the control? We know what happens. We’ve got 40 years of experience knowing what happens with chemo in lung cancer. So, it’s frustrating to them, and right now, there’s not an option.
So, we’re excited that with the FDA, we’ve had a success with the synthetic control arm approval for avelumab in metastatic Merkel cell carcinoma. We think that there’s a continuing opportunity, it’s part of our strategy, and we have a number of projects right now that will hopefully be successful. We’ve achieved buy-in from many physicians that this is going to be a good way to help bring some of these novel therapies to market.
We use RWE in many ways. We provide RWE insights to hundreds of researchers across the country, pharmaceutical companies, associations, government agencies, and regulatory bodies. In addition, we have a series of papers coming out in the near-term using RWE insights we have generated internally.
Our primary goal is to enable providers and health systems to improve outcomes for cancer patients through precision medicine. One of the primary ways we achieve this is through the use of real-world evidence. The health systems we work with join the Syapse Learning Health Network, which allows their providers to use real-world evidence from across the network to understand optimal testing and treatment strategies.
For example, when a patient is presented at the molecular tumor board, an expert oncologist can find all clinically and molecularly similar patients from across the network, see their treatment journeys, and compare outcomes by therapy.
We are very proud of our efforts to put RWE into the hands of health systems. Additionally, we work with life sciences companies to help them leverage RWE to accelerate bringing therapies to patients. This includes outcomes research, clinical trials optimization, and regulatory uses.
What do you see your organization being able to do once a real-world endpoints framework is established at FDA, and when real-world evidence is broadly ready to be used for regulatory purposes?
I want to amend the question slightly to what more do you see Aetion being able to do once the framework is established. I say that because we’re able to do quite a bit today, even without specifically established guidance, because global regulators and value assessment bodies are increasingly incorporating RWE into their decision-making, and letting us know—in ongoing discussions and through public documents—what works and what doesn’t.
We’re working with our clients today to support regulatory submissions, while we partner with academic and industry bodies to shape our collective understanding of what’s possible with RWE. Collaborative projects will be a huge help in guiding the way—Friends’ Pilot 2.0 is an important project, as are other Friends initiatives.
Beyond regulators, HTAs, U.S. payers and others are keen to understand the performance of medications in their decision-making, and to underpin efforts like value-based care. This is particularly important in oncology, where approaches like external control arms can greatly extend our knowledge about medications, especially in cases where randomized trials aren’t feasible.
We expect that CancerLinQ Discovery datasets will be an even more valuable source of real-world cancer data by showing patterns of care, treatment outcomes, and new associations in large populations of patients being treated with standard-of-care or off-label treatments.
This may be particularly useful for understanding toxicities or generating hypotheses for new indications or new populations to be treated with approved drugs. We also expect that industry may start to include CancerLinQ data in their regulatory filings especially for label expansions.
As the use of real-world data increases more broadly, researchers and clinicians will gain a better understanding of the benefits and limitations of this type of data and how it may be able to complement traditional clinical trial data.
However, the greater legitimacy for real-world data that will likely come from the FDA’s promulgation of its framework will be seen when clinicians begin incorporating real-world evidence into treatment decisions, particularly in situations where there is missing or poor-quality clinical trial data, for example for rare tumors not well-studied by trials.
The FDA has taken a rather innovative approach during this interim period prior to a formal framework and guidance being provided—allowing alternative approaches to be advanced in a very transparent and consultative manner. This is really bringing together the best thinking, data, and methodologies.
Consequently, we are already seeing the value of EMR data—especially EMR data from sources that allow fully abstracting the unstructured elements for specific study designs and questions—and sometimes even combining these EMR data with full genomic datasets and payer claims data. All of this is aimed at establishing confidence in the comparability or superiority of these data as a source of external controls or as the basis for a standard-of-care comparison.
We are also seeing the benefit of AI and machine learning approaches linked to real-world data analyses—allowing insights that go beyond the existing literature, or that provide context to findings in the literature based on larger scale analyses. Already, we are seeing the benefit of studies with regulatory intent done at scale using real-world data alone. This is remarkable progress in only two years.
With formalization, the industry will have even more confidence and clarity as to where and how real-world data can aid pre- and post-approval decisions. It is also aiding the generalizability of regulatory-intent studies to the treatment decisions of community practitioners—a further goal of this move toward making real-world evidence integral to different study phases and decisions.
A scenario that is growing in importance involves the use of RWE to support granting of an expanded indication for a drug already approved in another indication on the basis of RCT data. A recent example was the approval of Ibrance (palbociclib) for male breast cancer, which relied on multiple types of RWE against the backdrop of RCT data previously generated for breast cancer in women (The Cancer Letter, April 19).
A related application involves the creation of an external control group from RWD. Imagine that a new drug is being developed to target a novel mutation in patients with highly refractory solid tumors. In this case, patients are unlikely to accept randomization to an existing standard of care—which is associated with poor outcomes—and oncologists have ethical concerns about randomization because some evidence of unusual activity has been observed during a phase 1 study.
Therefore, the sponsor initiates a single arm phase 2 study with the blessing of the FDA. In this circumstance, the control group may be selected from a robust RWD set. Robustness is important because of the requirement to match prognostic factors between the experimental group and the RWD-derived control group as closely as possible. Additionally, statistical matching approaches such as propensity score analysis must be applied to ensure that measurable prognostic factors are balanced between the two arms. A similar approach, though not yet as widely used, involves creation of an external control group that represents a hybrid of RWD and controls from prior clinical trials. We expect this to be further developed in short order.
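The matching step described here can be sketched in a few lines. The example below is a minimal illustration with synthetic data and only two measured prognostic factors (age and a hypothetical performance-status score); a real external-control analysis would use many more covariates, a prespecified protocol, and diagnostics of post-matching balance.

```python
# Illustrative sketch (not any company's actual pipeline): balancing
# measured prognostic factors between a single-arm trial cohort and a
# real-world external control cohort via propensity score matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates: age and a performance-status score.
n_trial, n_rwd = 100, 1000
trial = np.column_stack([rng.normal(60, 8, n_trial), rng.integers(0, 2, n_trial)])
rwd = np.column_stack([rng.normal(70, 10, n_rwd), rng.integers(0, 3, n_rwd)])

X = np.vstack([trial, rwd])
y = np.concatenate([np.ones(n_trial), np.zeros(n_rwd)])  # 1 = trial arm

# Propensity score: probability of being in the trial arm given covariates.
ps = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
ps_trial, ps_rwd = ps[:n_trial], ps[n_trial:]

# 1:1 greedy nearest-neighbor matching on the propensity score.
available = np.ones(n_rwd, dtype=bool)
matches = []
for i in np.argsort(-ps_trial):  # match hardest-to-match patients first
    candidates = np.where(available)[0]
    j = candidates[np.argmin(np.abs(ps_rwd[candidates] - ps_trial[i]))]
    available[j] = False
    matches.append(j)

matched_rwd = rwd[matches]
# After matching, mean age in the external control should sit closer to
# the trial cohort's mean than the full RWD cohort's mean did.
print(round(trial[:, 0].mean(), 1), round(matched_rwd[:, 0].mean(), 1))
```

The design choice worth noting is that matching only balances *measured* factors; unmeasured confounding is exactly why robustness of the underlying RWD set matters so much.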
Even today, RWE has the potential to be applied to various use cases to support regulatory submissions; for instance, RWE can provide disease context during clinical trial development, compare or provide context for a treatment arm in a single-arm trial, characterize unmet need, provide evidence needed to modify an indication (e.g., dose), or even support product effectiveness.
In the future, we expect to see increased opportunities for FDA acceptance of RWE. Flatiron believes that our de-identified real-world datasets and analytical methods need to be fit for use to support the specific regulatory decision. Given that each clinical context and use case has different considerations, Flatiron’s involvement and support may range from providing RWD to generating fit-for-purpose RWE. As the FDA continues its work to finalize the RWE framework, our models will continue to evolve in line with best practices and regulatory guidance.
Once there is clarity about the evidentiary requirements for the FDA and other regulatory bodies, drug companies and medical product developers will feel more confident about using real-world evidence to supplement their regulatory applications for new indications and label expansions.
It’s not just the FDA that is developing a framework for using real-world evidence. IQVIA is also helping regulators in Europe, Japan and China develop guidance documents. All these regulators are calling for health stakeholders to share their experiences using real-world evidence, including pilot projects that will inform the development of formal guidelines. We are contributing extensively to these efforts.
I think that we can probably contribute to the evidence that would be generated. Kaiser has currently identified a handful of broad areas for enhancing research, with a focus on what can best integrate our research strengths and opportunities with improving the care we provide. Two of these, precision medicine and cancer, obviously dovetail directly with the type of work that we’ve been participating in with Friends of Cancer Research.
In order to fully realize the potential of research and analytics in these areas, we also need to work toward better data availability to address questions in these areas—and this points to another current data limitation. One key part of this is being able to identify readily, in a structured format, people who’ve undergone specific clinical genetic tests, and the results of those tests. For example, PD-L1 testing or EGFR testing—relevant to this specific project—are being done to guide clinical decisions.
Oftentimes, biospecimens are sent out to commercial labs for such testing, such as to Quest Diagnostics or to Ambry Genetics. Unfortunately, when they come back into the system for clinical care, they typically are documented in PDF files, so that information is not usable for analytic purposes from inside KP, unless one goes in and prints those PDFs or calls them up on the screen and re-enters that data into a structured database.
So, what we’re doing currently is we’re seeking out ways to capture that information in a more structured format. That includes approaching commercial vendors who conduct such genetic or molecular tests for Kaiser Permanente and basically saying, “Hey, can you give us the data that you have already given us in PDF format, and that you already have in structured format yourselves, in a similar structured way?” That would then enable our analysts to link those data to other EHR data to produce real-world evidence about the use of these tests, use of follow-on therapies, and their outcomes. And yes, the vendors that we’ve approached so far seem to be on board to do that. Our intent is to go beyond cancer and beyond these specific tests that were the focus of the Friends of Cancer Research effort.
So, we anticipate we can eventually get those data in a structured format. We had published a paper [in JCO CCI] that basically talked about this gap in availability of data, and one of the things that we had also said was that it would also be great if EHR vendors, such as Epic, Cerner, Allscripts, if they could also create a place where these tests and their results could be routinely captured, so that from a clinical or operational side, they would know where to put the data or where to find these data, and from an analytic side we would know where to go to pull these data.
That’s something that, I think, can really realize the potential of precision medicine. I think being able to access those types of data in a structured format is key. And when these gaps in data availability are solved, then any future guidance on generating real-world evidence from the FDA would be easier to follow, if we were to contribute to such efforts.
I think we are uniquely positioned because McKesson also supports US Oncology Research, which is a site management organization for trials conducted within The US Oncology Network.
When I start thinking about the commercialization pathway for new drugs, it’s important to be planful about how we use real-world evidence at the beginning of the trial. We need to connect with synthetic or historical controls on the front end of a trial rather than in trial rescue. We also partner with physicians in The Network who are conducting the trials as well as our site management teams. When we are planful and aligned across the company, then my team can really be innovative and see impactful results.
I think that it’s ended up being a really powerful combination of forces across McKesson to accelerate approvals and accelerate the innovation. We’re doing it for a reason, and by putting McKesson’s pieces together, we have a good forward-looking plan to achieve these goals.
I believe that the industry is doing so many things either in theory or thinking about them after the fact, and I think that once there’s a comfort level with the FDA and with biopharma—because it’s going to have to be both of them—there will be a culture shift on all sides. But putting those pieces together upfront and being very planful, I think that that’s going to be the most important.
We saw a Genentech approval very recently, and I know the comments that came out said, “We’re approving it based on a single control, but we’re going to ignore the real-world data that you put with it.” So, there was some good learning there, and some of it was: it’s really important to let the FDA know what you’re doing, and if you’re going to use real-world data, make sure you tell them. So, I think that we’re kind of finding our way through some of this.
I was actually at a session that Aetion put out recently, and the former commissioner, Dr. [Scott] Gottlieb, spoke there. I asked him, “Normally, we always hear about approvals. We don’t really hear about what wasn’t approved. Is there a way that we can think about learning faster from the stuff that isn’t approved?”
The sooner we can learn from what doesn’t work, the faster we will understand what does, and so, I think that there’s an opportunity for all of us to think about being able to release those types of data and respecting confidentiality.
Precision medicine presents a substantial challenge to the current clinical development model: as patients are categorized into smaller and smaller cohorts based on molecular and clinical criteria, it will become difficult to perform RCTs for every drug-molecular-clinical indication due to lack of patient availability and high costs.
As the regulatory use of RWE matures, including determining the suitability of particular data sets and endpoints, we believe Syapse can have a large impact in working with both the FDA and life sciences companies to use RWE to assist in the evaluation of safety and efficacy of therapies.
As we have numerous projects in flight in this regard, and as we work closely with the FDA and ASCO as it relates to this topic, we are unable to comment at this time.
With the recent emphasis on data-sharing, these pilot projects are a great example of companies coming together and working together. What happens after (or when) FDA issues a final guidance for real-world endpoints? Is ongoing collaboration between data competitors necessary, going forward? Is that possible, since there may be conflicting interests?
There may be conflicting interests here or there, but the challenge of this conflict is dwarfed by the opportunity that can be created through collaboration, and I think most stakeholders see that.
The reality is that for many oncology questions, a single dataset will capture neither the quantity of patient experience needed to create sufficient regulatory evidence nor the range of patient experience needed—different populations, different subgroups, different treatment settings.
As such, many questions simply demand collaboration among multiple data providers, or if not explicit collaboration, at least peaceful coexistence within a study. We at Aetion work both with study sponsors and data holders to bring the full power and nuance of these data to bear in answering a question, and then we work to present the analysis in such a way that regulators can understand all of the steps that led to the result. This is the foundation for regulatory-grade evidence, and for instilling the requisite confidence in the evidence among all stakeholders.
This is done frequently today in other questions, such as drug safety, for which we combine analyses run in various datasets to get a bigger, more complete picture of the safety of a medication.
At the Friends 2.0 presentation, diverse organizations came together for a common goal, even those that often compete in the same markets. Dr. Wendy Rubinstein, who at the time was CancerLinQ deputy medical director, said, “It’s remarkable how 10 ‘frenemy’ organizations, which are typically competing, came together to create common definitions to help advance the field.”
There will be a need for more, not less collaboration, as the field matures. The FDA framework may define regulatory endpoints for RWD, but there are still a lot of unanswered questions. There will continue to be conflicting interests, but based on this experience, I believe there is an opportunity to work together and collaboratively explore unanswered questions about real-world data quality, new endpoints, comparison with trials, and a host of other methodologic issues. ASCO is highly interested in continuing to be involved in this type of exploration.
There will continue to be conflicting interests on the margin, but most of the industry is focused on doing what is right for the patient.
Because many cancers are now being defined by narrow biomarker characteristics, they are in effect becoming more rare diseases. This means that assembling relevant datasets for these subpopulations is harder. Consequently, you will see more study-by-study collaborations across data sources. There is a present and growing mutual regard for the important societal benefit that can result when academic and industry leaders collaborate to solve difficult problems.
Concerto HealthAI has often worked with sometime-competitors, and we expect the Friends collaborators to continue to work together to enhance the value of real-world data, and to provide guidance and insight on how real-world data can best be used to generate real-world evidence.
The companies included in the pilot study and those that participated in the analysis portion are often considered to be competitors. However, through this collaboration, it became clear that we share similar challenges and there is a willingness to share expertise to expand the understanding and acceptance of RWD.
There was consensus that we have a collective responsibility to advance the use of RWE to improve patient outcomes. Participants have used the phrase “frenemies” to describe the companies involved in this research, noting the importance of putting “normal” business practices aside to advance cancer care for the benefit of patients.
We recognize the value that research and perspectives across the scientific community can provide to inform regulatory guidance. We believe that continued collaboration between real-world data organizations will help advance discussions and drive consensus to support the development and use of real-world endpoints.
In addition to the important work of the Friends pilot project, Flatiron and many others are contributing to Duke Margolis Center for Health Policy work that aims to establish frameworks and principles for development of real-world endpoints. We hope that this work will also identify pathways for validation of these endpoints.
Given this is a new and emerging area for the scientific community, including industry and the FDA, we are supportive of collaborations across the RWE providers and believe they will be critical to the acceptance and use of real-world endpoints.
It is certain that there will be an increasing demand for data holders to work together. This is a fact of the era of big data. The most successful companies will be those that figure out how to forge successful collaborations with mutual benefits and a satisfied workforce. There is plenty of work to go around, but it will be difficult to stay small and unaffiliated.
That’s a really interesting question. I think someone at the last meeting in September had mentioned that we’re all “frenemies.” From that perspective, on one level, it is pretty remarkable that we’ve been able to talk and work together.
In our case, we basically started from an NCI-funded grant, the Cancer Research Network. Kaiser Permanente is largely a not-for-profit health care provider, but then you’ve got these startups that have all this VC money, and oncology practice groups that have access to their data, such as COTA, who are realizing that there’s something they could do with it and trying to figure out how to market it or use it to improve cancer care.
Then, there are groups that have basically come out of professional societies, such as ASCO’s CancerLinQ, and they’re also trying to somehow make data available from oncology care. There are other examples like that, which have somewhat different orientations, like AACR and their GENIE project, and the ORIEN network across cancer centers.
So, yes, you’re absolutely right that there are these competing organizational interests, but I think that at least one central element of all of these different groups is that they are interested in trying to do what’s best for everybody, for people with cancer. I think that’s partly why this works.
And I think it also partly helps that it’s Friends of Cancer Research who is the convener and coordinator of all this, rather than one of us. It’s not that we don’t trust each other. In fact, I think that there’s a high degree of trust amongst all the groups that have been involved in these discussions. I think that we are each somewhat colored by our particular perspectives and familiarity with our own data, and this arrangement lets us each express those in an equal, open, and reasonably transparent way, more so than if, say, Flatiron or Kaiser Permanente were the convener and guiding the discussions.
Ultimately, if guidance from the FDA is generated, I think that what may result in terms of analyses may be collaborative—the FDA Sentinel project is a good example of that, in which data from multiple health insurers are made accessible. Or, it may be individual groups responding to specific questions of interest based on the strengths and appropriateness of the data that they have to address those questions.
That’s a really good question. We’re watching data strategy evolve, and I think that there will be opportunity, in the same way that competitors have come together in the past, and in a model that’s worked and still maintain competitive value.
I’m lucky in that I’m nested in a Fortune 500 company. There are smaller VC companies that are driving a tremendous amount of valuation because of the value that is being placed on the data they’re collecting. So, it’s going to be a tough thing to get through. And of course, the easiest thing is if all data are free and freely available, and we all move forward. That is the least likely to happen.
But I think where especially the FDA has been successful in the past with this is the Sentinel Initiative. And in that case, it was technically competitors from across a number of payer organizations providing their data directly to the FDA in a distributed data model. And I think that allowed everyone to be able to maintain a level of control with their data, but also provide data to the greater good.
When things are framed for the greater good and for public health, that’s when we can find common ground. It’s going to be hard, and it’ll definitely be kind of a slog to get there. But when I think about models like Sentinel, it absolutely brought together competitors in the space. In my previous role, I was the data contributor to the then-Mini-Sentinel, and I sat next to people against whom I would actively compete for projects. We were able to find common ground.
So, I don’t think we’re there yet with this, but I do believe there is an opportunity for us to find a way forward and common ground with the data, one that protects both the value and the perceived value for the individual companies while still providing really good, workable datasets that allow regulators to continue making good decisions for us.
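The distributed model behind Sentinel can be illustrated in miniature: each data partner runs the same query program locally and returns only aggregate counts, so patient-level records never leave the partner’s environment. This is a toy sketch, not Sentinel’s actual software; the record fields and partner data are invented for illustration.

```python
# Minimal illustration of a distributed data model: a coordinating
# center distributes one query program; each partner executes it
# against its own records and shares back aggregates only.
from collections import Counter

def run_local_query(records):
    """The shared program each partner runs locally: count exposed
    patients with and without the outcome of interest."""
    counts = Counter()
    for r in records:
        if r["exposed"]:
            counts["outcome" if r["event"] else "no_outcome"] += 1
    return dict(counts)  # aggregates only; no patient-level rows

# Hypothetical patient-level data held separately by three partners.
partner_a = [{"exposed": True, "event": True}, {"exposed": True, "event": False}]
partner_b = [{"exposed": True, "event": False}, {"exposed": False, "event": True}]
partner_c = [{"exposed": True, "event": True}] * 3

# The coordinating center sees and pools only the aggregates.
pooled = Counter()
for local in (partner_a, partner_b, partner_c):
    pooled.update(run_local_query(local))

print(dict(pooled))
```

The design choice is what makes competitors willing to participate: each partner keeps control of its raw data, while the network still produces a pooled answer.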
We believe it is important to work collaboratively, both with the FDA directly and as part of consortium efforts such as Friends, to inform the development of standards for real-world endpoints. Syapse, alongside other RWE organizations, is well suited to do so given our daily work in the trenches of real-world data. It is in the interest of all RWE organizations and the stakeholders we serve for us to come together to advance this work.
Data from this pilot project are intended to help inform a framework containing key data elements, real-world endpoint definitions, and algorithms. When the FDA issues final guidance for real-world endpoints, collaborations between data organizations, or pooling of data (rather than analyses) from them, will become more achievable. We will still need to investigate the fitness-for-purpose of each contributing data organization or dataset, as well as address the unknown level of overlap between them.
What are the next steps at present?
We look forward to a continued collaboration with multiple stakeholders, led by Friends.
We are currently working on providing a more detailed report of the current Friends project for submission as abstracts to meetings and for one or more manuscripts.
Follow-up work is underway to address various secondary questions of interest from the Pilot 2.0 engagement, and publications are in planning. Additional work is being planned to continue to advance our understanding of effectiveness endpoints drawn from real-world data, and how and under what conditions these can meaningfully be used to support regulatory decisions regarding safety and effectiveness.
The group is working on several manuscripts and congress presentations to more fully disclose the complete learnings from the current NSCLC project. There are also discussions advancing around doing more in-depth analyses to understand some of the differences identified during the project between data partners. Finally, there have been discussions regarding future collaborative work to continue to advance the use of RWD for both regulatory and non-regulatory uses to improve the care of oncology patients and to increase the speed of innovation in the market.
Flatiron plans to continue working towards the original objectives of the project. Collectively, we plan to further align as a group on definitions for important variables. Given that the results presented in September were preliminary, we also intend to apply methods that will allow us to evaluate the performance of real-world endpoints described in the objectives of the pilot project.
We did an initial, reasonably well-done look at these relationships of immunotherapies with survival, and comparison with doublet chemotherapy, etc. The immediate next step is to understand more about what we actually see in front of us. So, what are the sources of variation in the findings, to the extent that they exist, and are those things that we can reasonably easily identify, like population differences? One of the groups had a relatively younger population than most of the others, for example.
There might be some differences there, by age, race, ethnicity, and whether that’s important for driving differences in observations, we don’t know at this point. So, just trying to understand what we’ve observed better and why there may be differences amongst different groups. I think that’s one thing. Part of it might also be missing data, or perhaps differing definitions of specific variables despite our upfront discussions in this area.
Another step is the clinical trial comparison, in which probably only a subset of groups may be able to participate, but in any case, I think that’s an important part because of what we’re trying to do—to determine if real-world evidence aligns with or provides different information from clinical trial evidence.
What would be ideal, though of course we don’t know if this will happen, is if we can identify approximately the same population of people who would have been in a clinical trial had they been recruited to one, i.e., those who meet most key eligibility criteria. Hopefully, the results in that group would look pretty similar to what the clinical trials observed. And then, in the rest of the population, results might be different, because they’re older or because they have comorbid conditions, but that is something we’re hoping to do.
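The trial-like stratification described above amounts to applying a trial’s key eligibility criteria as filters over a real-world cohort. The sketch below is hypothetical: the field names and thresholds are simplified stand-ins, not any actual trial’s criteria.

```python
# Hypothetical sketch of emulating key trial eligibility criteria in a
# real-world cohort; fields and thresholds are illustrative only.
patients = [
    {"age": 58, "ecog": 1, "prior_lines": 0, "renal_ok": True},
    {"age": 81, "ecog": 2, "prior_lines": 1, "renal_ok": True},
    {"age": 66, "ecog": 0, "prior_lines": 0, "renal_ok": False},
    {"age": 72, "ecog": 1, "prior_lines": 0, "renal_ok": True},
]

def trial_like(p):
    """Approximate eligibility: good performance status, untreated,
    adequate organ function (simplified for illustration)."""
    return p["ecog"] <= 1 and p["prior_lines"] == 0 and p["renal_ok"]

# Split the real-world cohort into a trial-like stratum and the rest.
eligible = [p for p in patients if trial_like(p)]
ineligible = [p for p in patients if not trial_like(p)]

# Outcomes would then be summarized separately in each stratum: the
# trial-like stratum compared against the published trial results, and
# the rest examined for how older or sicker patients fare differently.
print(len(eligible), len(ineligible))
```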
And then, another step is basically laying out the procedures for how we went through all of this and documenting that in a way that other groups could potentially use it as a blueprint—perhaps not exactly a blueprint, but as points to think about as you’re taking real-world data to generate real-world evidence for a focused question.
We met recently to talk about Pilot 3.0. We are starting to dig into additional analyses that are happening. So, I think that we’ve declared our continued interest, and we will continue to be involved with the organization and the work that Friends is doing moving forward.
I’m convinced that this is a valuable platform where the “frenemies” can come together and can work towards this. So, we’re excited to be part of that. I think that we’re still trying to tie up pieces that didn’t exactly get completely wrapped up with 2.0, before we start thinking about what really is going to be the protocol or the objective of 3.0, but we’re looking forward to it.
I think it’s going to be a deeper view into how we conducted the analysis. Pilot 1.0 was, “Can we even see data?” Not, can we do anything with it, or can we design anything? So, they proved that. Then, fast forward to 2.0, it’s “Well, we can start to define terms, and we can start to come up with common definitions that are suitable across multiple datasets that we can start thinking about.”
For 3.0, I think there’s the, “Then how are we conducting these analyses so that they are similar? Are we doing that the same way? Can we start providing transparency to all of the statistical techniques or the censoring techniques, etc. that we’ve used to help find the population?”
For 4.0, I think it’s, “How did this do compared to what we were expecting, and how did this perform compared to what the results were?” And some of that’s going to be interesting discussion, because just off the bat with the real-world treatment in lung cancer, patients are about 15 years older than they were in the trial. There are some really big differences. And there’s going to be some good discussion on, do we try and match the trial, and is that really what we’re trying to prove?
Maybe 5.0 and 6.0 are, “Okay, so we’ve proven that we can match, we can measure, and we can meet all of these trial endpoints. Should we be thinking about what endpoints actually matter to the patients?”
And I think that that’s a piece that Friends will be really good at helping us bring forward and, with an understanding of all the stuff that we measure and all of the analyses that we perform, are we doing things that actually matter to the patients who are being treated? And I think that will end up being some really interesting conversation.
The organizations are actively working with Friends on additional analyses and publications to advance the goals of Pilot 2.0.
Did we miss anything?
Also, of note, in the advancement of RWE for regulatory decision-making, we recently announced a partnership with McKesson which will combine the Aetion Evidence Platform with data from McKesson’s iKnowMed℠ oncology EHR system to power regulatory-grade RWE studies.
The solutions will first be made available to researchers at Brigham and Women’s Hospital who are leading the FDA demonstration project, RCT DUPLICATE, to replicate oncology randomized controlled trials with real-world data.
A wonderful set of questions.
This is good stuff. There aren’t too many other groups other than Friends that I think can pull this off in a neutral, friendly environment. And so, they took on a tough job in organizing all of us, but they’ve done an amazing job. So, it’s been good to work with them. It’ll be good to continue to work with them.
Friends did an amazing job in bringing all of us together and pushing this effort to a set of achievable milestones. Their contributions to this field are immense.