Cliff Hudis on how AI in cancer care is “inevitable”—and ASCO is working to make it safer


“Our goal was very clear from the beginning, which was to develop a user-friendly simple chat-like interface that would provide specific high-quality answers to questions that our members would bring, utilizing only our content,” said Clifford A. Hudis, CEO of the American Society of Clinical Oncology and executive vice chair of the Conquer Cancer Foundation.

This episode is available on Spotify, Apple Podcasts, and YouTube.

On this week’s episode of The Cancer Letter Podcast, Hudis spoke with Paul Goldberg, publisher of The Cancer Letter, and Jacquelyn Cobb, associate editor, about his guest editorial, “ASCO and Google Cloud set forth a vision for using AI to modernize health care and advance oncology.”

Hudis co-wrote the article with Thomas Kurian, CEO of Google Cloud.

In the episode, Hudis, Jacquelyn, and Paul talk about the collaboration between the American Society of Clinical Oncology and Google Cloud to develop an AI tool to better access ASCO’s guidelines—and how to do so safely and ethically.

“This required tuning the AI tool, in this case, to do essentially two things. And I’m sure I’m oversimplifying it, but number one, it restricted its answering to only our source documents. It couldn’t look elsewhere,” said Hudis. “Number two, it was tuned towards facts and away from creativity. And there is a lot of technical background to this, but my understanding therefore is that tunes down the likelihood of it inventing an answer and makes it more reliant on facts. And then number three, and the biggest innovation I think for us at the time, was understanding that clinicians ultimately are absolutely responsible for whatever they do to patients.”

Hudis said that ultimately, the problem that was solved wasn’t the one he was anticipating. 

“We thought we were solving this utilitarian access to the data problem,” he said. “But, I think in the end, the problem we’re solving is building trust and support in our community to embrace this new technology, which, as I’m sure we’ll talk about, I think is going to be ubiquitous, nearly universal.”


This episode was transcribed using transcription services. It has been reviewed by our editorial staff, but the transcript may be imperfect. 

The following is a transcript of this week’s In the Headlines, a weekly series on The Cancer Letter podcast:

Jacquelyn Cobb: This week, on The Cancer Letter Podcast.

Clifford A. Hudis: In the state of California, about a year ago or so, regulations were passed that actually explicitly pointed out that a clinician cannot use, as an explanation and defense, the fact that an AI tool told them to do something. You have to think. That’s why we designed our guideline assistant the way we did, in part. It was to drive the clinician to the source material as opposed to stopping with the chatbot answer.

The chatbot answer’s a summary, but your job is to take a moment and see why the chatbot told you that, using the original source material. And when you do that, you will find mistakes. You will find, as we’ve seen occasionally, that one GI cancer is being referenced in a question about a different one. It’s a jarring, glaring error. It happens from time to time.

Paul Goldberg: You’re listening to The Cancer Letter Podcast. The Cancer Letter is a weekly independent magazine covering oncology since 1973. I’m your host, Paul Goldberg, editor and publisher of The Cancer Letter.

Jacquelyn Cobb: And I’m your host, Jacquelyn Cobb, associate editor of The Cancer Letter. We’ll be bringing you the latest stories, ground-breaking research and critical conversations shaping oncology.

Paul Goldberg: So let’s get going.

Jacquelyn Cobb: Hi, everyone. On this episode of In the Headlines, special guest Clifford A. Hudis, CEO of the American Society of Clinical Oncology and Executive Vice Chair of the Conquer Cancer Foundation, speaks with us about the future of AI in cancer care. We really sink our teeth into some interesting ideas in this episode, talking about the ethics of AI, the increasing ubiquity of the technology, and Dr. Hudis’ view that AI will inevitably become a necessary tool in cancer care.

Just as a heads-up for those paying really close attention, since we had a special episode of In the Headlines last week in honor of Hispanic Heritage Month, this week we have Dr. Hudis’ conversation with us from two weeks ago in The Cancer Letter. The story was initially published October 10th, so just in case you’re paying attention.

Before we hear from Dr. Hudis, I will briefly go through last week’s headlines.

Our cover story was a guest editorial from Edison T. Liu, professor, president emeritus and honorary fellow at the Jackson Laboratory for Genomic Medicine. We rarely put a guest editorial on the cover, but Liu wrote with incredible detail and nuance about a really complicated topic, which is the recent changes in animal experimentation policy at FDA and NIH. He goes into the history, going back 40-plus years, and he speaks from the perspective of being the president emeritus at the, quote, unquote, “Mouse House,” an institution that relies heavily on mouse studies. So that is a fascinating read.

And then Claire and I wrote a sort of secondary B story about animal testing, about the months-long story related to these policy changes and the push from animal rights activists to encourage NIH to move away from animal testing. Finally, we had a guest editorial last week from researchers at the University of Kentucky Markey Cancer Center about the relationship between CCSG funding and the cancer burdens in cancer center catchment areas. Really interesting story. There’s an accompanying academic paper that came out as well, with some really interesting figures. And just starting a conversation is sort of how they framed it, talking about what goes into CCSG funding decisions.

And with that, I am very happy to welcome Clifford A. Hudis to the podcast. Welcome, Dr. Hudis.

Well, thank you so much for being here, Dr. Hudis. Welcome to the podcast. Can we maybe just start with you giving a little bit of a definitional update, for listeners who don’t necessarily know what we’re talking about, on what the collaboration actually looks like?

Clifford A. Hudis: Sure, thank you very much. In the end, this was about solving a problem. And the problem that we had as a professional society, and like many others across the house of medicine, is that we write guidelines that adhere to a certain high standard for evidence and involve a complicated production process with lots and lots of volunteer time and input, vetting, peer review and so forth.

At the end of that, what we have is a portfolio of about 100, or a little more than 100 right now, reasonably up-to-date, deeply researched, evidence-based, trustworthy guidelines that our members and broader community complain they can’t find. And the reason for that is, I think, multifactorial. The guidelines are written broadly or narrowly, depending on the amount of material and the topic area. Some diseases can have many guidelines within them. And even the experienced clinician seeking a specific answer to a question can sometimes struggle to identify which of the breast cancer guidelines would have the kind of information they want on a specific aspect of care.

So I’m telling you the long version of this because I think it’s interesting, and it’s what led us methodically to where we ended up. So in the spring of 2024, at an annual board retreat focused on the highest priority strategic issues facing us, we were challenged by the board, reflecting broader feedback I hope from the membership, to do something about this. And I have to say this is against a broader backdrop of the digital transformation that we, like many organizations, have been going through over the last few years.

And the challenge issued to us was, “Fix it.” And it was, I think, part of a broader challenge to also fix, more generally, access to information and content that we have at ASCO in many domains. So at that board retreat, we did something interesting. This was early on, after the launch of the first public version of ChatGPT, the generative transformer. And we actually used that then-version of ChatGPT along with Claude. I don’t even know if we had Gemini at the time. I think we did, but I could be wrong. I’m sure we had UpToDate and a couple of other resources, NCCN and so forth. And we broke the board up into little groups. We assigned them questions to research and answer using all these tools. And of course we made certain to put non-experts in each area, meaning the lung cancer docs would get a leukemia question and the lymphoma docs would get a breast cancer question.

And the result was predictable, which was, in the end, they found our guidelines to be what they believed to be the highest quality and most trustworthy. And I want to be clear, I’m not actually saying that they were better than some of the other sources, but they were at the top. But they also found the usual frustrations. So that summer, we began to cast about looking for external collaborators who could help us efficiently solve this problem. I mean, let’s be clear, ASCO is not a tech company, and we don’t have billions of dollars to develop technology. We were fortunate in August of that year to begin meeting with Alphabet, and specifically Google Cloud. And Thomas Kurian, who wrote that piece with me, really facilitated that. He’s the CEO of Google Cloud.

And it’s interesting because the kind of problem we want to solve is, I don’t want to say routine, but my impression is they solve this seven days a week for lots and lots of enterprises. But this was a new domain and they were excited by it. At this point, I have to say we really benefited tremendously from collaborating with them, because on the one hand they brought technical know-how and skill, but on the other, they brought a production discipline that was really helpful to us. They set a pace of meetings and discussions. They set a set of deliverables, which was a task for them as well as for us. They organized much of the interaction. They prompted us to bring in knowing, experienced volunteers who could vet the product. And our goal was very clear from the beginning, which was to develop a user-friendly simple chat-like interface that would provide specific high-quality answers to questions that our members would bring, utilizing only our content.

And that last bit seemed so revolutionary at the time, because of course in those early months and/or even first year or two of chat, everybody’s favorite exercise was to ask a question and poke fun at the hallucinated answer. I don’t think any of us thought that would be an enduring situation of course, but that was the beginning of this era. And to build trust and reliability on the part of our membership, we just couldn’t have that.

So this required tuning the AI tool, in this case, to do essentially two things. And I’m sure I’m oversimplifying it, but number one, it restricted its answering to only our source documents. It couldn’t look elsewhere. Number two, it was tuned towards facts and away from creativity. And there is a lot of technical background to this, but my understanding therefore is that tunes down the likelihood of it inventing an answer and makes it more reliant on facts. And then number three, and the biggest innovation I think for us at the time, was understanding that clinicians ultimately are absolutely responsible for whatever they do to patients.

We wanted to make sure that nobody was actually relying narrowly on the results of our answers. Instead, when you ask a question in our tool, there’s a second window easily available that provides the source citation and wording for every answer in the chat response. And the idea is that the responsible clinician should use the chat to get to the answer, but click on it. And then what they see is the source document, which is really a PDF from ASCO. It is titled, so you know what guideline it’s from. It is dated, so you know how recently it was updated. And the specific source for the answer is highlighted in yellow text. And the point there is that the clinician’s job is to interpret that and put it in context.
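To make the two constraints Hudis describes a bit more concrete, here is a minimal sketch, in Python, of a retrieval-restricted answering flow: the model is only shown passages from a fixed guideline corpus, generation is run at a low temperature to favor factual output, and the citations travel back with the answer so a UI can open the source document and highlight the supporting text. This is not ASCO’s or Google Cloud’s actual implementation; the keyword-overlap retrieval and the `llm_generate` placeholder are hypothetical stand-ins.

```python
# Sketch of a grounded, citation-returning Q&A flow (hypothetical, for illustration only).

from dataclasses import dataclass

@dataclass
class GuidelinePassage:
    guideline_title: str   # which guideline the passage came from
    updated: str           # publication/update date shown to the clinician
    text: str              # the passage that would be highlighted as the source

def retrieve(question: str, corpus: list[GuidelinePassage], k: int = 3) -> list[GuidelinePassage]:
    """Rank passages by crude keyword overlap; a real system would use a search index or embeddings."""
    terms = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))
    return scored[:k]

def llm_generate(prompt: str, temperature: float = 0.1) -> str:
    """Placeholder for a model call; a low temperature biases output toward low-variance, factual text."""
    return "[model answer constrained to the passages quoted in the prompt]"

def answer_question(question: str, corpus: list[GuidelinePassage]) -> dict:
    sources = retrieve(question, corpus)
    context = "\n\n".join(f"[{p.guideline_title}, updated {p.updated}]\n{p.text}" for p in sources)
    prompt = (
        "Answer ONLY from the guideline excerpts below. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    answer = llm_generate(prompt, temperature=0.1)
    # The citations are returned with the answer so the interface can show the
    # titled, dated source document and highlight the supporting passage.
    return {"answer": answer, "citations": sources}
```

The design point is the same one Hudis makes: the answer is never delivered bare; it always arrives with the titled, dated source the clinician is expected to read.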

So I know it’s a long answer to the question, but the real problem that we’re solving in the end wasn’t the one I thought. We thought we were solving this utilitarian access to the data problem. But I think in the end, the problem we’re solving is building trust and support in our community to embrace this new technology, which as I’m sure we’ll talk about, I think is going to be ubiquitous, nearly universal. And our community is going to I think have to develop both comfort and trust in using it and an understanding of how and where to be suspicious and how to challenge it to make sure that what it’s delivering is actually useful. And I think our tool is a stepping stone in that journey.

Paul Goldberg: Yeah, it’s interesting because there are different kinds of guidelines out there. Yours are one set, but there’s also NCCN and others. I mean, there is UpToDate. There’s a lot. And how do you build something like that that could actually maybe span the entire coastline? Can you?

Clifford A. Hudis: I have a point of view about that, and it’s my personal point of view right now. So this is not official, but I think, and this has been a driving motivator for us, when you think about the broader world … I don’t know what your state of embrace is right now for these tools. But as an intro, I’ll point out to you that we run Enterprise Gemini for all of our staff now. And we are actually pushing our teams to develop personal use goals for these tools for the year ahead. I think that this is necessary in order to just tread water and stay competitive with [inaudible 00:13:16].

But the flip side is I think that in our daily lives, all of us are going to see the either quick or slow, depending on your world, creep of these tools into everything we do. And what that’s going to lead to, I think, is a set of expectations for speed, reliability, ease of interaction. And I don’t think the medical community, or specifically the oncology community, is going to then pivot from that broader world where they’re using all these tools and, narrowly in medicine, say, “Oh, I’m going to go back to some bespoke, narrow tool.”

What I’m saying now is I think that the problem you asked about is going to be solved at a high level. And I think tools like ours have a relatively short shelf life, because I think that the actual processing, the bringing it all together that you asked about, is almost certainly going to happen in some universal high level tool that everybody’s going to have access to. That’s one version. And the second version, which we’ll probably get to, is I think even less visibly, that aggregation is likely to happen within the electronic record system anyway, and just be there.

Paul Goldberg: But who will do it? Will it be you? Will it be Google? Will it be-

Clifford A. Hudis: That’s what I’m saying. I think the tech giants are going to, on the one hand, do it. I think an enterprise like OpenEvidence is obviously well along the way to doing it. And I think the third possibility, and I don’t know how this ends up, is that the most used medical record systems, for example Epic, will at some point begin to provide these services or embed all this in their system. And the clinician will hardly know who did the aggregation, if you will.

Paul Goldberg: But what would be the ASCO role? Will there be an ASCO role?

Clifford A. Hudis: Well, that’s a long-term question about the future of guidelines. I think for the near term, the requirement for expert vetted, trustworthy guidelines is actually not diminished but heightened, because one of the things that these tools easily do is expose the, I’m going to coin a word, datedness of these guidelines. So now if we have a guideline from 2018 or ’19, that’s the first thing you see when we ask the question. You may rightly say, “Well, it’s 2026 now and I’m waiting for them to update it.” And so I think the pressure is on us in the guidelines world to actually keep our guidelines up to date like never before.

I’ll extend that by pointing out the rate of production of new scientific knowledge in oncology, but also more broadly, has never been faster. It’s not all going to be guideline changing, but enough of it is. I think in the end, this is a second place where this revolution is going to be so profound, because I do think that the broadly available tools are going to be scouring that same literature. And they will be helping to provide tools that will allow us to update the guideline.

But to answer your question specifically, I think the human-in-the-loop requirement, which is really the role the professional societies play, I think that need is going to be there for the foreseeable future to make sure that these guidelines have that human touch, that little bit of extra human judgment to make sure that they are relevant, accurate and useful for the clinical community.

I’m hinting at something longer term, though. I’m not sure that that’s true decades out from now, and maybe sooner than that. I do think that there’s going to be a point where these tools are able to generate, on the fly, a completely reliable, updated, trustworthy guideline that’s specific for the patient’s situation that the clinician has described or that the chart creates and presents. And in that future state, our role in generating guidelines may be quite different from what we’re talking about now.

Jacquelyn Cobb: I have a question. And I didn’t anticipate asking this, so please forgive the clumsiness. But it’s just because of what you said, it sort of triggers something in my memory of a UVA study last year. I was going to write a story about it and then all of this happened. And I still want to, but basically they compared three different groups, and it was about clinical decision-making. And one was just AI, one was just the clinician, and one was the combination of AI and a clinician. And the AI and the clinician together actually had the worst accuracy, whatever the measure was, and so that’s what I’m just-

Clifford A. Hudis: Yeah. It was a JAMA paper.

Jacquelyn Cobb: Yeah, okay, so you’re familiar. And so that’s what I am curious about. And this isn’t specific to you, but to this field: there’s an intuition that having a human being check it is going to be the best option, but do we have data to support that? Is that actually the best way to go? That’s I guess my question, but maybe we don’t know the answer yet.

Clifford A. Hudis: Well, I think a couple of things. First of all, that was a pretty provocative study. What it suggested, as I recall the takeaway was, essentially if you just trusted … They were reviewing case studies, I thought at some point maybe the New England Journal case studies, something like that. But I may be wrong about that. I have to go back and refresh my memory. But the point is that, left unfettered, the AI was the most accurate, and that the clinician sometimes didn’t trust the AI and downgraded the answer, essentially. That’s the takeaway.

I think a couple of things about that. I think that’s a moment in time. I think that as clinicians get used to these tools, I think that maybe that downgrading of the answer by them will change. I think the other part is the interface between the patient and the doctor I think has always been critically important. And what these tools are going to actually enable is I think a warmer relationship, more time spent on picking up subtleties, more time spent on education. I think that these tools ultimately will allow the docs and all of the people taking care of cancer patients to spend more time doing just that. And I just think that the way that the study was done, as I said, is a moment in time, but it’s probably artificial, because I think you’re going to see a future where these tools are ubiquitous and embedded in our work.

I’ll jump ahead a little bit. I have a very simple analogy I use all the time. If you buy a modern car in 2025 … I don’t know about every car. I don’t buy cars that much. But I think a good number of them have auto braking, anti-skid brakes for sure, and lane keeping features. You can think of that lane keeping as a rudimentary AI. Think about it in the context of the electronic medical record. You are in full control of that car. You are driving it, making all the decisions. But when you begin to drift out of the lane, the car gives you a nudge back in the lane. That’s a kind of simple AI.

Now imagine in the medical record that you’re taking care of a patient. And as you begin to drift away from a standard, the embedded AI, using the guidelines that we’ve been talking about, is nudging you back in. It doesn’t overtly stop you as a hard stop from doing something when it’s clinically appropriate, but you’re going to get those nudges all the time. That’s what I think is likely to happen. I think that because in a way, in a very rudimentary way, we’ve had that in the electronic record for some years already anyway. We have drug interaction pop-ups, we have dosing schedule alerts. There are things that will be there. But I just think it’s inevitable that all this is going to be accelerated by these tools.
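As a rough illustration of the lane-keeping analogy, the sketch below checks a proposed order against a guideline recommendation and returns a soft nudge rather than a hard stop, leaving the decision with the clinician. The lookup table, names, and fields are hypothetical; no EMR vendor’s API or ASCO product behavior is implied.

```python
# Toy "lane keeping" check: compare a proposed order to the current guideline
# recommendation and return a nudge, not a block. Everything here is hypothetical.

from typing import Optional

def guideline_recommendation(diagnosis: str) -> Optional[str]:
    """Stand-in for a lookup against living, continuously updated guidelines."""
    living_guidelines = {
        "example_diagnosis": "example_regimen_a",
    }
    return living_guidelines.get(diagnosis)

def check_order(diagnosis: str, proposed_regimen: str) -> Optional[str]:
    """Return a soft warning if the order drifts from the guideline, else None."""
    recommended = guideline_recommendation(diagnosis)
    if recommended and proposed_regimen != recommended:
        # A nudge, not a hard stop: the clinician can proceed when it is clinically appropriate.
        return (f"Heads up: the current guideline suggests {recommended} for this situation; "
                f"you ordered {proposed_regimen}. Review the source before signing.")
    return None
```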

Jacquelyn Cobb: Totally. I do have another question, Paul. I don’t know if I’m interrupting you. But I’m curious about the response you’re getting from oncologists. Are they excited about this? Is there any resistance, sort of related to my last question?

Clifford A. Hudis: Well, there’s not really resistance, because those who aren’t interested won’t use it. I mean, nobody’s forcing this tool on anybody right now. I think that the reaction to it, the uptake, is modest, but I think, to be fair, that’s because there are really good competitive alternatives out there now, and we always knew that was the case. In part for us, and this will sound a little funny, but this project was as much about accelerating our work broadly as it was about specifically getting this product launched. And it has been tremendously helpful for us within ASCO just to have done this exercise.

The other thing is there’s a subset of people, of course, as with all of the tools out there, who I think maybe they’re like treasure hunters or something, but they get a little pleasure out of finding the goof ups, and we want that. So every answer that our tool delivers is followed by a thumbs up, thumbs down, an open box to give us feedback, tell us what it’s getting wrong. We’re constantly working on it. And the engine behind it gets upgraded on the Alphabet side. The version of Gemini, for example, recently did a version upgrade for us. So the feedback is really helpful from the users right now.

In the end, the third thing is finding where we don’t have guidelines that people need. I mean, we’ve been able to do that I think in a more rudimentary way over the years, but I think from the usage, the questions that people are asking, they’re all logged. We learn which ones we can’t answer. So that’s going to help us accelerate. And I’ll provide a little tease. I’ve hinted at this, but I’ll say it plainly now. The short-term goal that we have here is to dramatically accelerate our production of guidelines. That’s the next phase of this project, and this really kicks that off, because this highlights the urgency of that need. And that’s where our efforts and resources are, this year into next.
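One way to picture the feedback loop described here: every answer carries a rating and a free-text comment, and questions the tool cannot answer are logged so the guidelines team can see where coverage is missing. This is a hypothetical sketch with in-memory storage purely for illustration, not a description of the actual system.

```python
# Sketch of capturing per-answer feedback and surfacing guideline coverage gaps (hypothetical).

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackEvent:
    question: str
    answered: bool                    # False when no guideline passage supported an answer
    thumbs_up: Optional[bool] = None  # None if the user gave no rating
    comment: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

feedback_log: list[FeedbackEvent] = []

def record_feedback(event: FeedbackEvent) -> None:
    """Log every question, rating, and comment so the tool can be improved over time."""
    feedback_log.append(event)

def coverage_gaps() -> list[str]:
    """Questions the tool could not answer, i.e., candidate topics for new or updated guidelines."""
    return [e.question for e in feedback_log if not e.answered]
```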

Paul Goldberg: So a guideline doesn’t get cast in concrete, set in stone. A guideline becomes a constantly developing mechanism, right? That’s not something you review every three years or four years or five years?

Clifford A. Hudis: Whatever. I think that’s the change in the world now. I don’t think anybody expects to do a Google search … or make it more simple. Go to The Washington Post and do a story search. You’re expecting that this morning’s news story is the top of that feed when you ask a question. You’re not expecting that the first answer is from four years ago. I mean, I’m oversimplifying maybe. But the idea is that the pace of innovation, the ease of connectivity means that the static published guideline as a thing is a bit out of date now from the moment it’s published. So production has to be much, much faster. When you ask a question … I mean, maybe give a simpler example. If there is a groundbreaking abstract, a randomized phase III study on the plenary stage at the ASCO annual meeting in June of ’26, a reasonable person in this world will now expect that the guideline is updated that week, not six months later or a year later.

And the point is, if the standard of care has changed because of a survival advantage in a randomized trial, why should the knowing clinician or the wise clinician have to know that the abstract was presented and then overrule the guideline answer that doesn’t change until the next year? That doesn’t make sense. So demanding that we’re all current is reasonable right now, and our job is to deliver that kind of always up-to-date guideline. In our vision, guidelines become all dynamic. The technical term for them in the guidelines world is living. They’re called living guidelines. And one of our senior staff folks actually referred to this as the Christmas tree lights model, where you have a string of lights and you’re constantly unscrewing one piece and screwing in an updated light for each piece of the guideline as needed all the time.

Paul Goldberg: So the practice-changing findings truly become practice-changing that week?

Clifford A. Hudis: One hopes.

Paul Goldberg: So that means it’s a role for ASCO, right?

Clifford A. Hudis: Well, yeah, but I mean, with some modesty, I want to say I have great respect for the tech giants that are also accessing this information. And I would not suggest in the end that only we can accomplish this. I think that this is something that you’re going to see happen out there in the world.

Paul Goldberg: Yeah, but your peer review is what makes the practice-changing findings get to that point where they’re presented at the plenary session?

Clifford A. Hudis: Right, but that’s before the guideline. That’s exactly right.

Paul Goldberg: So that’s the role. My mind sort of takes me always towards dystopias. I probably should be treated for this. I may have been, unsuccessfully, but enough about me. You mentioned goof ups, Cliff. A goof up, the oops, means the patient might have died. Is there any way ASCO, and maybe that’s a role for ASCO, can say, “Hey, these are best practices for use of AI right now in the clinic where we stand”? Is there a limit? Is there some kind of a guardrail?

Clifford A. Hudis: Well, I want to be clear. I don’t think that there’s a need for ASCO to provide a guardrail there. I actually think you go to first principles. A clinician with a virtual pen in their hand issuing an order is responsible. In the state of California, about a year ago or so, regulations were passed that actually explicitly pointed out that a clinician cannot use as an explanation and defense the fact that an AI tool told them to do something.

Jacquelyn Cobb: Wow, interesting.

Clifford A. Hudis: You have to think. That’s why we designed our guideline assistant the way we did, in part. It was to drive the clinician to the source material, as opposed to stopping with the chatbot answer. The chatbot answer’s a summary, but your job is to take a moment and see why the chatbot told you that, using the original source material. And when you do that, you will find mistakes. You will find, as we’ve seen occasionally, that one GI cancer is being referenced in a question about a different one. It’s a jarring, glaring error. It happens from time to time. That’s, right now in this infancy, okay.

So I think that, to try to make the point more plainly, the clinician’s got to know the facts and the clinician has to be able to trust the source. And I don’t think we’re at a state yet where any AI tool’s answer in isolation can or should be relied upon. And I don’t think you need ASCO or anybody to tell you that as a clinician.

Paul Goldberg: That’s encouraging, because just to think of what a clinician would have to do, this is basically something that could potentially take weeks, to go through the entire medical literature and say, “Oops, there’s a mistake in footnote 23.”

Clifford A. Hudis: Sure, but that’s where the power of this technology is so unbelievable. I don’t know how much you’re able to use it, but it can go through … We’re using NotebookLM here, for example. That’s an even more sophisticated tool. You can dump 15, 20, 100 papers into that on a topic and you can ask it to distill it. I mean, the power of this, I think for good, is just remarkable.

Paul Goldberg: Jacquelyn?

Jacquelyn Cobb: Well, I mean, I think we are going a little bit long on time, so I could talk about this all day, but is there anything else you want to make sure we talk about before we wrap up, Dr. Hudis?

Clifford A. Hudis: Well, you’ve given me a chance to cover, I think, all the high points. The one thing that I would just add, I said this already, but maybe if we’re going to end, I would end with this. Paul goes dystopian. I’ve had enough conversations with him to note that that’s his favorite place. I don’t want to sound like a Pollyanna. There are potholes, risks, challenges here, and Paul’s asked about some of them. But I think underneath it all, there is the opportunity in this revolution to actually get back to a better kind of clinical care and to accelerate research, which we haven’t touched on so much. And I think that should be exciting and uplifting for clinicians.

We’re in a moment right now that’s almost predictable. And I often remind people of these stories. After the PC really became ubiquitous, and some on this call may be too young to remember this, but we lived with the blue screen of death lurking over us. I was early in my career when we had PCs. You were constantly saving to your floppy disk. And if you ever forgot to, there was a risk that your computer would die and you would lose two hours or six hours or a day of work with it. Remember those days?

Paul Goldberg: Oh, yeah.

Clifford A. Hudis: I mean, that was in the early days of Windows, to be frank. And there was literature at the time, business literature, that people liked to wave in your face and say, “Gee, companies, industry, we’ve all spent billions of dollars buying PCs, converting to this new world, and where’s the productivity gain? We’re not seeing it.”

Okay. About a week ago, maybe two, I can’t remember, an article popped up from, I think it was, Harvard Business Review. It could have been Fortune, Forbes. I’m not really sure. But I was laughing about it because it was essentially a search-and-replace version of that exact same argument: companies are spending billions of dollars right now deploying AI; where’s the big productivity gain?

Well, go back now to the … really, it was into the ’90s, early ’90s, I think. Those articles stopped appearing. And what happened at that time was the LAN, the assignment of IP addresses, and the network, the intranet and internet. And all of a sudden productivity actually soared in lots of ways. Now, I’m not saying it wasn’t partly consumed by playing Minecraft or something, but it really did change. And I think that it would be very, very short-sighted to bet against a similar productivity gain here. I think that’s coming. I think it’s inevitable.

And I think to tie all this together, clinicians, no matter where they practice, are going to have increasing access to higher quality, more reliable information to provide higher quality care to more patients at a … I’m not saying at a low cost, but getting that information is going to be at a lower cost than ever before. And it’s going to up everybody’s game.

And the last thing, because we didn’t even touch on this, is the tools that are going to ease the clinical interaction, whether it’s the automatic generation of notes based upon listening tools, and that’s in clinic already, or the automatic generation of treatment plans that you then review and sign, all of this. This has the potential, I think, to address one of the challenges of our time, which is drudgery and burnout, all those thankless tasks that take so much time on the clinician’s part. And if this is all true, I don’t want to sound … Paul may be dystopian, and I may sound like a Pollyanna, but I do think that clinicians will be able to turn away from the keyboard and screen, face the patients, and provide, I think, more humanistic, higher quality care than we have been able to for the past couple of decades.

Paul Goldberg: Well, thank you so much for that wonderful illuminating commentary in The Cancer Letter, as well as this even more illuminating discussion, because there’s something to be said for discussion. And thank you, and please write again, and let’s do this again on any other subjects as this world continues to evolve.

Clifford A. Hudis: Absolutely. Great. Thanks again.

Jacquelyn Cobb: Thank you. Thank you for joining us on The Cancer Letter podcast, where we explore the stories shaping the future of oncology. For more in-depth reporting and analysis, visit us at cancerletter.com. With over 200 site license subscriptions, you may already have access through your workplace. If you found this episode valuable, don’t forget to subscribe, rate and share. Together, we’ll keep the conversation going.

Paul Goldberg: Until next time, stay informed, stay engaged, and thank you for listening.
