The 3 Rs of cancer research: rigor, reproducibility and robustness

As scientists gathered at the 2017 meeting of the American Association for Cancer Research (AACR) in Washington in early April to learn about the latest advances in cancer research, one of the questions was how much of that research would hold up to attempts to reproduce it.

The mood at a Saturday session on “Robustness: Enhancing Research Reproducibility” was one of optimism that the problem could be tackled.

Certainly, the research community collectively has taken an important first step in acknowledging that it has a problem. That’s progress from the denial that greeted researchers who first drew attention to the problem. (See BioWorld Today, from Clarivate Analytics, April 1, 2012, and Dec. 13, 2013.)

“People would say, ‘Oh, that’s a neuroscience problem; that’s a geochemical problem,'” Larry Tabak, a senior investigator in the National Institute of Dental and Craniofacial Research and principal deputy director of the NIH, said at the symposium, describing those initial reactions. “It was ‘anyone’s problem but mine.'”

These days, everyone from the NIH to nonprofit organizations such as the Global Biological Standards Institute (GBSI) and the Center for Open Science is devoting time and resources to understanding the problem and its solutions. (See BioWorld Today, Dec. 13, 2013, and June 12, 2015.)

As awareness of the problem has increased, so have the opportunities for it to occur. Jeremy Berg, the editor-in-chief of Science, noted that with the advent of big data, there is now an additional way in which experiments can fail to be reproducible. With complex analyses performed on big datasets, if the steps are not carefully documented, it can be impossible to repeat an analysis and arrive at the same result.

Good intentions, bad results

Although misconduct cases such as that of South Korea’s Woo Suk Hwang are the ones that draw the attention of press and policymakers, such cases are “not the bulk of the issue,” Berg told the audience. The vast majority of nonreproducible research is done during “good-faith attempts” to do rigorous experiments, he added.

Nevertheless, the best available evidence suggests that the spirit is willing but the results are weak.

Chi Van Dang, scientific director of the Ludwig Institute for Cancer Research, quoted a 2016 Nature survey in which 70 percent of respondents said they had tried and failed to reproduce the work of other laboratories, and “more than 50 percent have tried and failed to reproduce their own work. . . . Those numbers are cause for concern,” Dang said.

His colleagues agree. In the Nature survey, just over half of respondents called the crisis significant, while only 3 percent said there was no crisis. Some of the reasons are unsurprising. The goal of science is discovery, and as such there is an inherent tension between innovation and validation in the doing of science.

The scientific enterprise as a whole, logically enough, is also set up to reward success in the form of novel discoveries and, in highly competitive situations, pressures can mount to make those discoveries.

A more surprising issue is that experiments can be hard to reproduce because their methods are never fully described.

“Methods sections are [getting] shorter and shorter,” Tabak said, because they are the easiest thing to cut to comply with word count limits for manuscripts. But given that “everything is digitized now, there is absolutely no rationale” for that shortening.

Then, there is the question of what it means for a study to be reproducible – and what it means if a study isn’t.

Not black and white

In January, the Center for Open Science – whose mission is to “increase openness, integrity and reproducibility of research,” and whose funding comes from the Laura and John Arnold Foundation along with the NIH, NSF and DARPA – published the first results from the Reproducibility Project: Cancer Biology (RP:CB), an attempt to replicate 50 high-impact papers in the field of cancer biology.

Of the five initial papers the project attempted to replicate, it succeeded for two, failed for one, and got ambiguous results for another two.

One of the papers where replication fell into a gray area was originally published by a team from Stanford University. It showed that the surface molecule CD47 is overexpressed on a number of different tumor types, and that blocking the interaction of CD47 with its receptor, SIRP-alpha, could stir macrophages to antitumor activity.

The group reproducing the work, for unexplained reasons, saw spontaneous remissions in the control group, confounding the results.

Siddhartha Mitra, a senior scientist at the Stanford Institute for Stem Cell Biology and Regenerative Medicine who was a co-author on the original paper, said that the ambiguity lay in the reproducibility study, not the original study.

“The reviewers were very clear on this,” he told BioWorld Today.

Bob Uger, the chief scientific officer of Trillium Therapeutics Inc., agreed. The company is developing a fusion protein that targets CD47. The replication paper “doesn’t concern us in any way,” he told BioWorld Today, because “when we look at the landscape . . . we see that the broad concepts are reproduced across a number of groups.”

Such conceptual reproduction means the weight of the evidence supports a role for CD47 in suppressing the antitumor activity of macrophages, regardless of the outcome of any one experiment.

The fact that there are more than a half dozen companies working on CD47 also attests to the strength of the underlying science, Uger said: “You won’t get that sort of activity if the fundamental thesis is flawed.”

Uger agreed that reproducibility deserves attention.

“We definitely have encountered situations where we couldn’t replicate another lab’s work,” he said. “That comes with the territory.”

But “doing the reproducibility project such as it is . . . suffers from the same limitations that the original paper might suffer from: It’s a one-off. It’s one lab doing one thing.”

Robust replication, on the other hand, comes from what he termed “sort of a natural selection,” where interesting findings – if they are real in the first place – will tend to be reproduced over time as other researchers want to work on a problem.

“The answer really comes from aggregate data” generated in those replication attempts, Uger said. “I don’t think there’s a shortcut.”

 

For more on cancer research, check out Oncology Talks. This series of podcasts from Clarivate Analytics explores the scientific landscape in immuno-oncology for professionals in the field, offering comprehensive insights from industry experts, including BioWorld’s Anette Breindl, the author of this article, in concise and digestible episodes.