Reproducibility, Trust, and the Digital Laboratory

Over the last decade, there has been increasing recognition that results published in scientific journals often cannot be reproduced by other scientists. This has been called the “reproducibility crisis” and has been documented in a number of studies. A 2016 Nature article reported that more than 70% of surveyed researchers had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own. A 2021 study reported that fewer than half of influential preclinical cancer research papers could be replicated.

Most Americans are not preoccupied with problems in reproducibility. However, the pandemic and the exhortation to “follow the science” have provoked a national conversation about trust in science and how we interpret scientific outputs. It is important that we protect the integrity of the scientific endeavor in an increasingly cynical and suspicious era. The reproducibility crisis, therefore, is not just a concern for professional scientists; it matters to all who are invested in the scientific process.

Multiple factors drive the reproducibility gap. For example, there are strong career incentives to report positive results, even if they can’t be reproduced, because positive results are more likely to be published; and publication drives recognition, promotion, and academic tenure. In industry, “successful” studies create confidence in compounds and products and boost their potential market performance. This can drive stock prices and advance careers.

There are also inherent human biases that can contaminate the evaluation of experimental outcomes and influence reproducibility. For example, hindsight bias, the human tendency to say “I knew it all along,” leads researchers to retrofit hypotheses to match experimental outcomes. Some have called this post-diction, to contrast it with prediction. It weakens the link between findings and hypotheses and thereby reduces rigor and reproducibility. Senior investigators who practice post-diction encourage more junior investigators to engage in similar behaviors; poor mentorship perpetuates the problem.

One major issue in reproducibility is the variability associated with human behavior. One manifestation of this is inconsistency in scientific protocols—the sequence and details of how each step in an experiment is performed. Different researchers, even in the same lab, may vary slightly in how they execute a protocol and, to complicate matters further, in how they record what they did. This can create data gaps and “protocol drift” that undermine reproducibility.

Variability is further exacerbated by the highly manual processes that still dominate most laboratories. Many experiments require multiple instruments and are done under varying experimental conditions (e.g., temperature, pH, flow rate). However, quality control of these instruments is generally performed manually. This inspection-based approach is time-consuming and intermittent, creating gaps in the detection of instrument-driven variation. Furthermore, the outputs of these instruments are often captured on paper printouts and kept in paper-based filing systems. As automation inevitably advances, this hybrid paper-digital environment becomes increasingly risky, because paper-based processes cannot manage the vast amounts of data automation produces. The result is more risk of variation and more gaps in reproducibility.

Gaps in technical interoperability create additional challenges, since investigators working on different platforms, even within the same institution, often have difficulty sharing data. This is exacerbated by paper-based systems that create barriers to data sharing and by concerns about who “owns” the data and thus earns credit for its use. These pressures often induce researchers to shield their data from others, yielding numerous small, siloed datasets that are inaccessible to colleagues and impede their ability to validate the studies.

The drivers of the reproducibility crisis are multi-dimensional and so must be the solutions. Some have suggested publishing the name of the editor who reviews a journal article to improve editorial accountability and diligence in the review process. Others have suggested that researchers pre-register their hypotheses before their studies commence to prevent the “post-diction” phenomenon. This has spawned a robust pre-registration movement in the scientific community.

However, there are also technology solutions that can reduce human variability, increase data integrity, and drive consistency in experimental execution. They require developing and maintaining a digital, interconnected laboratory ecosystem. A critical component of this ecosystem is the electronic laboratory instrument management system (ELIMS). An ELIMS can integrate research data with inventory records (e.g., the number of botulinum vials and who last used them) and instrument service records to monitor the overall performance of a research program. For example, an anomalous result might be traced to a specific lot of a chemical reagent, and linking multiple instruments to inventory records could identify every device that used that lot.
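To make the idea concrete, the sketch below shows the kind of traceability query such a system could support. It is a minimal illustration in Python; the record fields (run_id, instrument_id, lot_id, result_flag) are hypothetical and not drawn from any particular ELIMS product.

```python
# Illustrative sketch only: a simplified traceability query of the kind an
# ELIMS could support. All field names and values here are hypothetical.

# Each run record links an experimental result to the instrument and reagent lot used.
runs = [
    {"run_id": "R-001", "instrument_id": "HPLC-2", "lot_id": "LOT-4417", "result_flag": "anomalous"},
    {"run_id": "R-002", "instrument_id": "HPLC-3", "lot_id": "LOT-4417", "result_flag": "normal"},
    {"run_id": "R-003", "instrument_id": "HPLC-2", "lot_id": "LOT-4502", "result_flag": "normal"},
]

def lots_with_anomalies(run_records):
    """Return reagent lots associated with at least one anomalous result."""
    return {r["lot_id"] for r in run_records if r["result_flag"] == "anomalous"}

def instruments_using_lot(run_records, lot_id):
    """Return every instrument that consumed a given reagent lot."""
    return {r["instrument_id"] for r in run_records if r["lot_id"] == lot_id}

for lot in lots_with_anomalies(runs):
    print(f"Suspect lot {lot} was used by instruments: {instruments_using_lot(runs, lot)}")
```

Because each run is linked to both its instrument and its reagent lot, the same records answer both questions the paragraph above poses: which lot is implicated, and which devices touched it.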

In an ELIMS, digital sensors embedded in instruments can continuously monitor performance. If the dynamic range of a device begins to drift, producing variations in temperature, pH readings, or other parameters, researchers can receive automated alerts.
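As a rough illustration of how such alerting might work, the following Python sketch compares a rolling mean of sensor readings against a calibration baseline and tolerance; the thresholds, data, and alert mechanism are placeholders, not a specific vendor’s implementation.

```python
# Illustrative sketch of automated drift alerting for a stream of sensor
# readings from an instrument. Baseline, tolerance, and readings are made up.
from collections import deque

def drift_monitor(readings, baseline, tolerance, window=10):
    """Yield an alert whenever the rolling mean of the last `window`
    readings drifts outside baseline +/- tolerance."""
    recent = deque(maxlen=window)
    for value in readings:
        recent.append(value)
        if len(recent) == window:
            rolling_mean = sum(recent) / window
            if abs(rolling_mean - baseline) > tolerance:
                yield f"ALERT: rolling mean {rolling_mean:.2f} outside {baseline} +/- {tolerance}"

# Example: a temperature sensor calibrated to 37.0 C with a 0.5 C tolerance.
temps = [37.0, 37.1, 36.9, 37.2, 37.3, 37.5, 37.6, 37.8, 37.9, 38.0, 38.1, 38.2]
for alert in drift_monitor(temps, baseline=37.0, tolerance=0.5):
    print(alert)
```

In a real deployment, the baseline and tolerance would presumably come from the instrument’s calibration records, and alerts would be routed through the ELIMS rather than printed.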

An individual researcher or laboratorian can join the ELIMS via an electronic lab notebook that plugs into the network. Using this notebook, they can record the steps in their protocol in a more structured and consistent manner, thereby muting variability. They can capture and share data more easily with colleagues and will have a continuous connection to metadata relevant to inventory and instrument integrity.
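One way to picture this structured capture, assuming a simple hypothetical schema rather than any specific notebook product, is a protocol-step record like the following:

```python
# Illustrative sketch of how an electronic lab notebook might capture protocol
# steps as structured records rather than free text; the schema is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProtocolStep:
    description: str          # what was done, in a controlled, structured form
    parameters: dict          # e.g., {"temperature_c": 37.0, "duration_min": 30}
    instrument_id: str        # links the step to instrument and inventory metadata
    reagent_lot: str | None = None
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Every researcher records the same step with the same fields, which reduces
# protocol drift and keeps an audit trail attached to the data.
step = ProtocolStep(
    description="Incubate cell culture",
    parameters={"temperature_c": 37.0, "duration_min": 30},
    instrument_id="INCUBATOR-5",
    reagent_lot="LOT-4417",
)
print(step)
```

Because every step carries the same fields, two researchers running the same protocol produce records that can be compared directly, rather than free-text notes that must be reconciled by hand.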

The digital laboratory offers the potential to more easily aggregate, integrate, and share data from multiple sources. These larger data sets make it more productive to harness the power of AI and machine learning to detect and manage variation.
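As a purely illustrative example of what that might look like, the sketch below uses scikit-learn’s IsolationForest to flag outlying runs in a small, made-up set of aggregated measurements; in practice the data, features, and model would differ.

```python
# Illustrative sketch: once data from many instruments and labs are aggregated,
# off-the-shelf machine learning can flag outlying runs for review.
# The measurements below are invented for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one aggregated run: [temperature_c, ph, flow_rate_ml_min]
runs = np.array([
    [37.0, 7.4, 1.00],
    [37.1, 7.4, 1.02],
    [36.9, 7.3, 0.98],
    [37.0, 7.5, 1.01],
    [39.5, 6.2, 1.60],   # a run with unusual conditions
    [37.2, 7.4, 0.99],
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(runs)   # -1 marks runs the model considers anomalous

for row, label in zip(runs, labels):
    if label == -1:
        print("Flag for review:", row)
```

The point is not the particular algorithm but the scale: variation that is invisible in a single lab’s small, siloed dataset becomes detectable when runs from many instruments and sites are pooled.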

It is said that the future is already here; it is just unevenly distributed. That is certainly true when it comes to life science laboratories. In this digital age, it is ironic that many of the most advanced laboratories in academia, industry, and the government still rely on analog and manual processes and are drowning in paper. Making the digital lab of the future more ubiquitous today will improve reproducibility and, ultimately, trust in science.


About Kevin Vigilante & George Plopper

Kevin Vigilante, Chief Medical Officer and EVP: Dr. Kevin Vigilante is a leader in Booz Allen’s health business, advising government healthcare clients at the Departments of Health and Human Services, Veterans Affairs, and the Military Health System. He currently leads a portfolio of work at the Department of Veterans Affairs. Kevin is a physician who offers new ideas for health system planning and operational efficiency, biomedical informatics, life sciences and research management, public health, program evaluation, and preparedness. His work is published in academic journals and top-tier media outlets including the New York Times on a broad range of topics, including research innovation and informatics, tax policy and healthcare reform, and care of underserved HIV populations.

George Plopper, Senior Lead Technologist: George Plopper is a leader in Booz Allen’s healthcare business. Based out of the Rockville, Maryland, office, he applies expertise in cell and molecular biology, stem cell biology, and signal transduction to benefit the firm’s civilian, military, and public sector health clients. As a senior lead scientist and healthcare analyst lead with more than 25 years’ experience in biomedical research and higher education, George serves as a subject matter expert, project manager, scientific writer, and editor for life sciences. Prior to joining Booz Allen, he was a professor and associate head of biological sciences at Rensselaer Polytechnic Institute. He was also an assistant professor at the University of Nevada, Las Vegas (UNLV), and a founding member of the UNLV Cancer Institute.