On Renewing Rigor and Reproducibility in Science

In a 2005 essay in PLOS Medicine, John P. A. Ioannidis, a faculty member at the Stanford University School of Medicine, detailed how bias, faulty design, and analytic errors were contributing to a state of affairs in which, as one of the essay’s sub-headings bluntly put it, “Most research findings are false for most research designs and for most fields.” The essay – just one instance of Ioannidis’s ongoing investigation of the topic – encapsulates the “reproducibility” or “replication” crisis plaguing current research. In a varying but consistently high percentage of cases across different fields, scientists have been unable to repeat the results reported in previous work – both their own and that of other researchers.

For science managers, and particularly those charged with overseeing and dispensing federal funds to support research, confronting the crisis is just one of many daily challenges.

One response to the crisis, from the US National Institutes of Health and other bodies, has been a renewed emphasis on the rigorous design of new studies and on ensuring the reproducibility of their results. This response, however, brings its own set of complications. How does one consistently define such terms as “rigor” and “reproducibility”? And what are the most useful questions to ask when evaluating research proposals for funding support?

In a new white paper, Clarivate Analytics addresses these matters. On the question of definitions, for example, the paper examines the subtle yet important distinctions between varieties of reproducibility, whether “methods,” “results,” or “inferential” reproducibility.

The paper also presents three basic questions for managers to consider when making funding decisions. Supporting each question is explanatory text that includes links to key research and expert commentary on evaluative standards and methods.

To download the white paper, please click here.