Duke Scientist: I Hope NCI Doesn’t Get Original Data


This article is part of The Cancer Letter's Duke Scandal series.

In May 2008, the Blue Devils of genomic medicine were facing a mortal threat.

An NCI biostatistician was demanding the data Duke University scientists used to derive the predictors of response in ovarian cancer.

This inquiry had the potential to sink Duke's technology, which purported to analyze tumors and use genomic insight to identify the optimal treatment for each patient. According to Duke's projections, cancer treatment decisions are made 700,000 times a year in the U.S. alone.

Multiply that by $3,000—the going rate for advanced tests at that time—and you have $2.1 billion.

Had NCI’s statisticians been able to get the code and the data they sought, they would have been able to perform basic forensic bioinformatics that would have enabled them to spot unsubstantiated claims, and worse.

In an email dated May 6, 2008, Holly Dressman, a co-author on the Duke group's key papers, wrote to team captain Joseph Nevins, mentor and protector of its star scientist Anil Potti.

Dressman’s email, now cited in a lawsuit against Duke, may cause a double-take:

“I am working on the [topotecan] signature in OVC and it’s a big mess. NCI wants us to resubmit the revisions again and now asking for correct Topo info… and they may want the data for their stat folks to try out like what was done with plat stuff… I am beginning to wonder if the Topo signature is real. I guess for the review, I can just hope they don’t ask for original data and just report what is in the NatMed paper.”

Here, a government-funded researcher who had lost faith in the predictor used to decide which treatment an ovarian cancer patient would receive expresses hope that NCI would not press for the "original" data and would settle instead for the data published in one of the world's premier scientific journals.

Dressman’s email, which was never intended to see the light of day, is as close as a single brief document can get to putting the entire Duke case in a nutshell. For starters, Dressman bemoans being unable to pin down Potti and find out how he got his predictors to work. The entire email is posted here.

The email, along with other documents supporting the case scheduled to go to trial at the Durham County Superior Court Jan. 26, demonstrates that the Duke scandal reached beyond Potti, the rogue researcher who cooked data and claimed falsely to have been a Rhodes Scholar.

Filings in the case focus on Potti’s ecosystem: the protective luminary Nevins, the appeasing Duke deans, the worried Dressman—and, in the case of topotecan, collaborators at another institution.

Notably, a filing by the plaintiff’s attorneys states that Duke didn’t provide Dressman’s email as part of discovery. The document was emailed to the plaintiff’s counsel by an attorney for Potti, one of the defendants in the civil case.

Dressman, a key member of the Duke genomics team, is an associate research professor at the Duke Center for Genomic and Computational Biology and director of the Duke Microarray Core Facility. She banged out this email less than a month after a dream team of Duke University deans executed a full-court press to silence Bradford Perez, a medical student who had the misfortune to find problems in the lab of star scientist Anil Potti (The Cancer Letter, Jan. 9).

Topotecan played a crucial role in the Duke scandal. Its signature was cited in the paper the Duke group had published in Nature Medicine in 2006. In that paper, validation of signatures was reported for a set of ovarian tumors. These samples were part of a larger cohort—some from Duke and others from the H. Lee Moffitt Cancer Center.

The Duke group also used this larger cohort in a 2007 paper, published in the Journal of Clinical Oncology, which proposed using a genomically-based approach to selecting treatment for patients with ovarian cancer.

Dressman was a coauthor of the Nature Medicine paper, the first author of the JCO paper, and an author of the 2006 lung cancer predictor model paper published in the New England Journal of Medicine.

All of these papers have been retracted.

Ovarian cancer, and the use of the chemotherapeutic agent topotecan to treat it, were clearly areas of emphasis for the Duke researchers and their colleagues at Moffitt. Ultimately, their failure to validate the topotecan signature would be cited as a key reason for retraction of the Nature Medicine paper.

Dressman didn’t respond to an email from The Cancer Letter. Duke and NCI officials declined to comment.

“The Potti case points to a strength in the clinician/researcher role that is not often noted,” said Rebecca Pentz, professor of research ethics at Emory University School of Medicine. “Most discussions of the dual role of clinician and researcher, which many oncologists have, point out the possible conflicts of interest that having a dual role entails. But the Potti case points out a potential strength. If you are directly involved in the basic research supporting clinical trials, and you discover something suspicious or doubtful in the research, as Holly Dressman did, then the research/clinician with integrity, the overwhelming majority in my decades of experience, will immediately put on her/his clinician hat and rethink any clinical trial that includes patients.

“Being involved in the research allows you to better protect patients, since you are involved in the research underpinnings of the clinical trial.”

Dressman and Nevins have PhDs. Potti is a clinician.

What NCI Wanted

Dressman’s email merits further unpacking.

NCI wasn’t running a dragnet operation to detect questionable science. Institute officials stumbled across problems at Duke while doing what they usually do: reviewing grant applications.

The grant that led them to look at Duke was at the H. Lee Moffitt Cancer Center.

According to materials released in the course of the IOM investigation triggered by the Duke scandal, NCI stumbled across problems at Duke in July 2007.

This is four months before Nature Medicine published a letter from MD Anderson Cancer Center biostatisticians Keith Baggerly and Kevin Coombes, who would devote thousands of hours to subjecting the Duke data to what they called “forensic bioinformatics” analysis.

NCI officials were reviewing the Moffitt application to advance an R-21 grant, which covers discovery of therapies, to the next phase, called R-33, which covers their development. The grant focused on using predictor models to select therapy for ovarian cancer, and it cited papers published by the Duke group.

The NCI official Dressman dreads is Lisa McShane, a statistician in the Biometric Research Branch of the Division of Cancer Treatment and Diagnosis.

Likely because of this experience, McShane would later emerge as the point person in setting NCI’s standards for moving omics advances to the clinic (The Cancer Letter, Feb. 8, 2013).

Even in the early phase of her experience with the Duke case, McShane believed that validation of predictors, if they are any good, shouldn’t be overly complicated.

“I think that one of the things that made this so difficult for people to get their arms around is that the Duke investigators were often steering things towards ‘Well, we’ve used this highly sophisticated statistical algorithm and you’re trying to reproduce it, but you’re not doing it exactly the way we did it,’ and in fact the problems ended up being much more simple than that,” McShane said in testimony to the IOM committee in March 2011.

“As I had said to Duke officials early on in our discussions over the last year: ‘This is not rocket science.’ There is computer code that evaluates the algorithm. There is data. And when you plug the data into that code, you should be able to get the answers back that you have reported.

“And to the extent that you can’t do that, there is a problem in one or both of those items. But it is amazing how throughout this process people still kept thinking that it was just debates about statistical issues. It really wasn’t debates about statistical issues. It was just problems with data and changing models.”

Indeed, Dressman and her colleagues had good reasons to worry.

NCI’s Circuitous Route

Moffitt’s project was directed by Jonathan Lancaster, formerly a Nevins collaborator at Duke.

Lancaster’s name appears on Duke’s original patents and on the Nature Medicine and NEJM papers, and he was the senior author on the Dressman et al. paper focused on ovarian cancer.

Lancaster’s goal was to apply the topotecan signature at Moffitt. His program was sharing personnel with the Duke group. Dressman and Nevins were assisting from Durham.

More than reputations and prestige were at stake.

At the time, Duke was running two clinical trials of the technology coming from the Nevins and Potti group. The two trials that were underway were focused on lung cancer. A third trial, in neo-adjuvant breast cancer, was getting started.

Though Dressman’s email doesn’t mention McShane by name, it does refer to NCI’s evaluation of the “plat stuff.” This is a reference to the chemotherapeutic agent cisplatin—and a specific case involving McShane.

Only one interpretation is possible here:

While reviewing the Moffitt grant, McShane was given the code for one of the five predictors that were mentioned in the Moffitt grant progress report.

“The reason that NCI initially made the request for Moffitt to send data and computer code is that information about the validation data and predictor accuracy estimates had been observed by NCI transition team reviewers to change during the course of the review,” McShane wrote in a 2011 letter to the IOM committee chair Gilbert Omenn, director of the University of Michigan Center for Computational Medicine and Bioinformatics.

In the letter to Omenn, the attention-averse McShane wrote about herself in the third person.

“It took several weeks for Moffitt and Duke to produce this operational and stable version of code for the platinum/taxane sensitivity predictor, which was the only one evaluated by Dr. McShane,” she wrote. “Dr. McShane did not receive data or computer code that would have allowed her to ‘reproduce’ findings for the topotecan and liposomal doxorubicin predictors being used in the trial, nor even to establish that those predictors were locked down.”

In a nutshell: McShane is tossing questions at the folks in Tampa, who are forwarding them to folks in Durham.

The NCI review team understood the Moffitt R-33 grant to allow retrospective validation of predictors.

According to McShane’s letter to Omenn, NCI expected that in the Moffitt study the tumor samples would be collected prospectively. The calculation of the predictions and correlations of the predictions with clinical response were expected to take place after patients had been treated and follow-up for clinical response was complete.

Patients were not to be assigned to treatment based on the predictors.

The IOM correspondence related to the Moffitt case is posted here.

While NCI officials believed that the predictors at Moffitt would only be retrospectively evaluated, the investigators applied for funding from the Department of Defense and started to accrue patients to a study in which the predictor models were used to prospectively assign patients to treatment.

Lancaster was listed as a sub-principal investigator on the study. Robert Wenham, a gynecologic oncologist and the principal investigator on the Moffitt study, had trained at Duke and is also listed among authors on that group’s publications.

“NCI was not informed that a trial had already been initiated while NCI was funding the R-33 grant to validate the predictors,” McShane wrote. “NCI believed that the predictors would be evaluated retrospectively for their validity in the R-33 portion of the grant, and would not be used to direct patient therapy.”

People familiar with the situation say that at the time Duke’s Dressman wrote her email to Nevins, the Moffitt researchers were in a bind.

Presumably, the Duke predictors in ovarian cancer that were published in the Nature Medicine paper were built by Potti on the basis of data from Moffitt.

However, as Moffitt scientists prepared to launch their DOD-funded trial, they were finding—as the Dressman email indicates in detail—that their predictors didn’t work.

Dressman’s email refers to her inability to pin down Potti.

It appears that after Dressman’s failed efforts to get Potti to provide a thorough accounting of his predictors, Moffitt officials developed their own predictors. It’s not publicly known how those predictors were built.

NCI officials learned about Moffitt’s DOD-sponsored trial in early October 2009.

“NCI program staff called Dr. Lancaster to voice concerns about using the predictors in an ongoing trial to guide patient care,” McShane wrote to IOM. “The following day, October 9, 2009, NCI was informed that the trial was closed.”

The Moffitt trial was stopped two days after Duke officials suspended two of their single-institution trials. While those trials were resumed after a cursory review, the Moffitt trial was stopped for good.

At the time, Moffitt officials told The Cancer Letter that the study was stopped because “funds for this project have been spent.”

“The trial was closed during extension of funding for low accrual,” Patricia Kim, a Moffitt spokesperson, said in an email at the time (The Cancer Letter, Oct. 23, 2009).

The Moffitt trial had accrued only four patients.

“I do not recall ever seeing Dr. Dressman’s email previously,” Lancaster, president of the Moffitt Medical Group and director of the Moffitt Center for Women’s Oncology, said to The Cancer Letter.

“Importantly, the topotecan signature referenced in Dr. Dressman’s email is NOT the topotecan signature used in the Moffitt clinical trials,” Lancaster said. “In her email, Dr. Dressman references the Nature Medicine paper, which was the publication reporting the Potti topotecan signatures. This publication had nothing to do with Moffitt-developed signatures. The signatures used in the Moffitt clinical trial were developed at Moffitt.”

This is consistent with what is publicly known about the Duke-Moffitt collaboration.

Herein lies the difference between Moffitt and Duke. Moffitt officials saw the bullet coming and got out of its way. Duke officials apparently thought they were bulletproof.

What Nevins Knew

The Dressman email raises new questions about what Nevins and other officials knew—and what they should have been expected to recognize.

Nevins had just played a key role in hushing Perez, the bright young man who had turned his back on seven months of work and, placing his career in jeopardy, instructed the Nevins and Potti team to take his name off all manuscripts.

The Perez incident is important, because it establishes that top Duke officials, who knew about it, falsely told the IOM committee that no whistleblower had come forward in the Duke case.

In an interview with the CBS news show 60 Minutes, Nevins contended that his faith in his protégé and friend Potti was intact even after this publication reported that Potti had misstated his credentials, claiming to have been a Rhodes Scholar.

In a deposition cited by the plaintiffs, Nevins acknowledged that he didn’t check Potti’s data until October 2010, three months after Potti was banned from Duke campus.

The plaintiffs’ attorney asks: “Once you started digging, how long did it take you to find the manipulations that had been done?”

Replies Nevins: “It would take you maybe an hour.”

It’s not rocket science after all.

Paul Goldberg
Editor & Publisher
