Imaging assessments are a requirement for most oncology trial endpoints. Yet, many cancer centers are still trying to support research trials with clinical care tools and ad hoc processes that are not fit-for-purpose and don’t meet the needs of sponsors or trial staff.
The result is predictable: avoidable errors, slow turnarounds, frustrated sponsors, and—most importantly—patients who don’t get accurate answers fast enough.
I’m writing this first as a partner to sponsors and patients, who look to cancer centers and radiologists for urgent, lifesaving, or life-extending care, but also with genuine compassion for the workload in radiology.
My goal is a candid one: show where current site-read workflows break, outline a realistic path forward, and demonstrate how platforms like Yunu lighten the load and help trial sites meet sponsor expectations.
If high-quality, timely site reads aren’t delivered, sponsors will route around those sites and partner with others. Let’s keep that from happening.2,1,9
1. The quiet cost of outdated “good enough”
Most centers we see are running clinical trial research reads through PACS, with spreadsheets, paper forms, email, and manual record-keeping. That was the accepted status quo when trials were smaller and the criteria simpler. It doesn’t work anymore. Protocol-compliance mistakes—wrong criteria, missed lesions, inconsistent visit naming, calculation errors—snowball into data queries, re-reads, and credibility hits.
We’ve seen typical error rates at major cancer centers in the 25–50% range drop to low single digits once consistent, protocol-driven tools are in place. That’s not a knock against radiologists; it’s proof that process and software matter.
These broken workflows also demand significant back-end effort from study staff, who typically spend eight hours of coordination for every one hour of radiology time.7,6 Yunu reduces this by 80% with a streamlined, compliant workflow.
2. Radiology’s reality (and why digging in makes it worse)
Radiologists are underwater: RVU pressure, faculty shortages, technologist gaps, and overflowing inboxes. Saying “no” to new workflows feels protective, but the status quo creates more interruptions (clarification emails, late-night re-measurement requests, and audit panic).
A dedicated research workflow doesn’t add more work; it bundles and structures the work so everyone can stop revisiting the same scans three times.8
3. PACS isn’t built for trials—and that’s okay
PACS excels at clinical care. Trials require different things:
- Protocol-specific measurement logic (RECIST, RANO, Lugano, and the dozens of variants; see the sketch below).3,4,5
- Structured outputs aligned to visits and endpoints—not free-text impressions.
- Traceability: who changed what, when, and why.
- Instant data export to sponsors—without manual transcription.
Asking PACS to do all of that is like asking a car to become a fire truck. Keep PACS for care; layer a research platform on top.
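To make “protocol-specific measurement logic” concrete, here is a minimal sketch of RECIST 1.1 target-lesion response classification in Python. The function name and inputs are illustrative, not Yunu’s API, and real workflows also handle non-target lesions, new lesions, nodal short-axis rules, and reader review.

```python
# Minimal, illustrative RECIST 1.1 target-lesion response check.
# Names are hypothetical; this is a sketch, not a validated implementation.

def recist_target_response(baseline_sum_mm: float,
                           nadir_sum_mm: float,
                           current_sum_mm: float,
                           all_target_lesions_resolved: bool) -> str:
    """Classify one time point's target-lesion response."""
    if all_target_lesions_resolved:
        return "CR"  # complete response: all target lesions gone
    # Progression: >=20% increase over the smallest sum on study (the nadir)
    # AND an absolute increase of at least 5 mm. Progression takes precedence.
    if nadir_sum_mm > 0:
        increase = current_sum_mm - nadir_sum_mm
        if increase >= 5.0 and increase / nadir_sum_mm >= 0.20:
            return "PD"
    # Partial response: >=30% decrease from the baseline sum.
    if baseline_sum_mm > 0 and (baseline_sum_mm - current_sum_mm) / baseline_sum_mm >= 0.30:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD


# Example: baseline 100 mm, nadir 60 mm, current 75 mm -> "PD"
# (still 25% below baseline, but +25% and +15 mm from the nadir).
print(recist_target_response(100.0, 60.0, 75.0, False))
```

Encoding rules like these in software, instead of relying on each reader’s memory, is what eliminates the wrong-criteria and calculation errors described above.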
4. A typical story (you’ve probably lived some version of this)
While onboarding a recent Yunu customer, we discovered that this cancer center had hundreds of patient time-point imaging exams awaiting research reads, stuck in email loops and unfinished Excel clean-up tasks. Turnaround times ran into months.
Simultaneously, a top-10 pharma sponsor inquired about the Yunu platform and asked to sample data produced for its trials at sites Yunu had recently onboarded and currently manages. The findings were astounding.
- Before Yunu, 50% of this sponsor’s trials at this particular site had errors in more than 50% of the time points.
- Missing measurements and protocol non-conformance errors were noted across all trials.
- With Yunu, all errors have now been corrected for this sponsor, as well as for other sponsors’ trials at the site.
After adopting a protocol-driven platform, the cancer center cleared its backlog in a few weeks and now achieves sub-48-hour read turnaround times.
5. AI won’t bail anyone out soon
Yes, AI can pre-measure lesions or flag inconsistencies, but fully replacing radiologists in heterogeneous oncology trials requires long-term, protocol-specific validation. Think decades, not months. In the meantime, AI is most useful when embedded in a system that captures its output, applies protocol rules, and keeps humans firmly in the loop.
6. What sponsors expect now
Sponsors (and auditors) increasingly expect:
- Protocol-driven templates and conformance checks.
- Structured, machine-readable data (not PDFs of dictations; see the example record below).
- Full audit trails for imaging endpoints.
- Predictable turnaround times with real-time status visibility.
Precision, predictability, and transparency create an environment that improves outcomes and accountability, a central aim of ICH E6(R3) and a goal of every major pharmaceutical sponsor. If sites can’t deliver these, sponsors will either centralize everything externally or demand a tool that can. That’s the fork in the road.10
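As an illustration of “structured, machine-readable data,” here is a hypothetical lesion-level time-point record a site might export instead of a PDF of a dictation. The field names and identifiers are invented for this sketch; they are not a Yunu or sponsor-mandated schema.

```python
import json
from datetime import date

# Hypothetical lesion-level export for one time point. Every value is
# machine-readable and traceable to a visit, a lesion, and a reader.
time_point_record = {
    "protocol_id": "ONC-2024-001",            # invented study identifier
    "subject_id": "SITE01-0042",
    "visit": "C3D1",                          # consistent visit naming
    "criteria": "RECIST 1.1",
    "read_date": date(2025, 3, 14).isoformat(),
    "reader_id": "radiologist_07",
    "target_lesions": [
        {"lesion_id": "T1", "organ": "liver", "longest_diameter_mm": 23.4},
        {"lesion_id": "T2", "organ": "lung", "longest_diameter_mm": 11.0},
    ],
    "sum_of_diameters_mm": 34.4,
    "target_response": "PR",
}

print(json.dumps(time_point_record, indent=2))
```

A record like this can be checked against the protocol automatically and handed to a sponsor’s systems without retyping.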
7. The practical pivot: Decouple clinical and research workflows
This isn’t about blaming the radiology department, which has been tasked with managing clinical trials without appropriate workflow tools, but rather about developing a new research imaging workflow that works.
It’s about:
- Keeping clinical PACS as-is for care.
- Running research reads through a purpose-built platform that:
  - Integrates with PACS to pull images, but doesn’t rely on it for measurements, calculations, and data analysis.
  - Enforces protocol rules and captures structured results.
  - Exports directly to the sponsor’s systems.
  - Allows you to tap internal or external readers when bandwidth or expertise is limited.
Yes, with Yunu, cancer centers can even bring in outside subspecialty readers without losing oversight, because the platform governs the workflow and data, not a patchwork of emails.
A checklist for a candid conversation at cancer centers and trial sites
Use this checklist with the clinical trials office and radiology leadership to identify and benchmark current gaps.
Part A: Today’s state
- Are clinical reports separate from research endpoint data?
- Are response criteria enforced by the imaging software, or expected to be remembered or interpreted by the reader?
- Can lesion-level data be exported without retyping?
- Does the current workflow maintain an auditable log of every measurement change?
- What’s the actual turnaround time from scan to sponsor-ready data?
- How many imaging-related queries hit coordinators per study?
- How frequent are re-reads because the first pass missed a protocol rule?
- Is there sufficient radiologist capacity—and if not, are external readers easily accessible?
Part B: What “good” looks like to sponsors
- Protocol-specific templates with built-in, configurable criteria logic checks.
- Structured, machine-readable outputs on demand.
- Immutable audit trails (see the sketch after this checklist).
- Clear and reliable turnaround time commitments.
- Visibility into read status.
- Ready access to annotated source images.
- Flexible reader pools (internal + external) managed through one system.
Every check in Part A is a place to improve. Every check in Part B is a reason sponsors keep bringing trials in the door.
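For the “immutable audit trails” item above, one common way to make a log tamper-evident is to chain each entry to the hash of the previous one. The sketch below is illustrative only and is not a description of Yunu’s internal design.

```python
import hashlib
import json
from datetime import datetime, timezone

# Append-only, tamper-evident log of measurement changes: each entry carries
# the hash of the previous entry, so editing or deleting history breaks the chain.
def append_entry(log, who, lesion_id, old_mm, new_mm, reason):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "who": who,
        "lesion_id": lesion_id,
        "old_value_mm": old_mm,
        "new_value_mm": new_mm,
        "reason": reason,
        "prev_hash": log[-1]["entry_hash"] if log else "GENESIS",
    }
    # The entry hash covers the content plus the previous hash.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)


audit_log = []
append_entry(audit_log, "radiologist_07", "T1", 25.0, 23.4, "re-measured on thin-slice series")
append_entry(audit_log, "radiologist_07", "T2", 12.0, 11.0, "corrected axis per protocol")
print(json.dumps(audit_log, indent=2))
```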
8. Consequences of standing still
If site reads remain stuck in an inconsistent, unreliable status quo:
- Sponsors leave. They take the work and budget elsewhere.
- The work remains, but time is lost to endless queries and unpaid rework.
- Audits hurt. Without transparent processes and records, findings create pain.
No one wins there—not the patients, not the sponsors, not the cancer centers.
9. Where Yunu fits
Yunu was built to standardize and harmonize imaging endpoints across sites and trials through transparent, traceable end-to-end acquisition, measurement, and delivery. We’ve consistently seen error rates plummet and read efficiency rise when centers use our protocol-aware worklists and structured output tools. If you’re already tackling the problem, or even just becoming aware of it, those are good first steps. What we can’t keep doing is pretending the existing patchwork is “fine.” It isn’t.
Conclusion
Radiologists remain essential. But essential doesn’t mean exempt from change. The workflows supporting site reads in oncology trials are overdue for an upgrade—one that helps radiologists, not burdens them. Sponsors and patients are demanding better data, faster. Let’s meet that demand with purpose-built tools and honest collaboration, rather than relying on more spreadsheets and emails. Lead the change, keep the work, and lighten the load. That’s the offer.
References
1. Sorenson, J. (2025, July 8). Imaging’s ride to the bottom in clinical trials—and why it matters now. The Cancer Letter. https://cancerletter.com/sponsored-article/20250718_4/
2. Fotenos, A. (2018, September 20). Update on FDA approach to safety issue of gadolinium retention after administration of gadolinium-based contrast agents (Presentation by Anthony Fotenos, MD, PhD, Lead Medical Officer, Division of Medical Imaging Products). U.S. Food and Drug Administration. https://www.fda.gov/media/116492/download
3. Eisenhauer, E. A., Therasse, P., Bogaerts, J., Schwartz, L. H., Sargent, D., Ford, R., Dancey, J., Arbuck, S., Gwyther, S., Mooney, M., Rubinstein, L., Shankar, L., Dodd, L., Kaplan, R., Lacombe, D., & Verweij, J. (2009). New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1). European Journal of Cancer, 45(2), 228–247. https://doi.org/10.1016/j.ejca.2008.10.026
4. Wen, P. Y., Macdonald, D. R., Reardon, D. A., Cloughesy, T. F., Sorensen, A. G., Galanis, E., DeGroot, J., Wick, W., Gilbert, M. R., Lassman, A. B., Tsien, C., Mikkelsen, T., Wong, E. T., Chamberlain, M. C., Stupp, R., Lamborn, K. R., Vogelbaum, M. A., van den Bent, M. J., & Chang, S. M. (2010). Updated response assessment criteria for high-grade gliomas: Response Assessment in Neuro-Oncology Working Group. Journal of Clinical Oncology, 28(11), 1963–1972. https://doi.org/10.1200/JCO.2009.26.3541
5. Cheson, B. D., Fisher, R. I., Barrington, S. F., Cavalli, F., Schwartz, L. H., Zucca, E., & Lister, T. A. (2014). Recommendations for initial evaluation, staging, and response assessment of Hodgkin and non-Hodgkin lymphoma: The Lugano classification. Journal of Clinical Oncology, 32(27), 3059–3067. https://doi.org/10.1200/JCO.2013.54.8800
6. Herzog, T. J., Wahab, S. A., Mirza, M. R., Pothuri, B., Vergote, I., Graybill, W. S., Malinowska, I. A., York, W., Hurteau, J. A., Gupta, D., González-Martin, A., & Monk, B. J. (2024). Concordance between investigator and blinded independent central review in gynecologic cancer clinical trials. International Journal of Gynecological Cancer. Advance online publication. https://www.international-journal-of-gynecological-cancer.com/article/S1048-891X(24)01777-8/fulltext
7. Yunu. (2024). 50% imaging error rate in clinical trials discussed by expert panelists from 5 NCI-designated cancer centers. https://www.yunu.io/blogs/post/50-imaging-error-rate-in-clinical-trials-discussed-by-expert-panelists-from-5-nci-designated-cancers
8. Hicks, L. (2022, February 18). Disrespect from colleagues is a major cause of burnout, radiologists say. Medscape. https://www.medscape.com/viewarticle/968776?form=fpf
9. Rula, E. Y. (2024, July 3). Radiology workforce shortage and growing demand: Something has to give. American College of Radiology. https://www.acr.org/Clinical-Resources/Publications-and-Research/ACR-Bulletin/Radiology-Workforce-Shortage-and-Growing-Demand-Something-Has-to-Give
10. International Council for Harmonisation Expert Working Group. (2025). ICH harmonised guideline: Guideline for good clinical practice E6(R3). https://database.ich.org/sites/default/files/ICH_E6%28R3%29_Step4_FinalGuideline_2025_0106.pdf