
According to a new report on COVID-19 research, the results of studies posted online as preprints, which are published before undergoing the peer review required by most scientific journals, often stand up to that scrutiny quite well.

While preprint manuscripts became popular in many scientific fields after physicists made their repository arXiv (pronounced "archive") available online in 1991, the COVID-19 pandemic spurred new groups of researchers into the habit of posting and consulting fresh experimental results and analyses ahead of formal review.

“Preprints have been widely adopted in the social sciences, computer science, and mathematics for quite some time,” says B. Ian Hutchins, a professor in the School of Information at the University of Wisconsin-Madison and lead author of a new study of preprints published today in The Lancet Global Health. “I think biomedical research has been more cautious precisely because people use that information to make health-changing decisions.”

The emergence and rapid global spread of the new virus, as well as the swift response of scientists around the world, have led many to reconsider that caution, weighing it against the cost of the typical delay of a few months or more for recently completed studies to clear the hurdles of careful journal review.

According to Hutchins, a group of journal publishers decided during the pandemic to require preprints of manuscripts related to COVID-19 that were submitted for their review.

UW-Madison researchers randomly selected 100 COVID-19 studies that had been posted as preprints and then peer-reviewed and successfully published in journals. They examined how peer review affected 1,606 data points in the manuscripts, representing types of data common to COVID research: closely related measures of infection and case-fatality rates, basic measures of viral reproduction (the number of people each infected person is expected to infect in turn), and the incidence rate (the number of new infections during a given period of time).

“That was an advantage of using infectious disease research for this study,” Hutchins says. “Because when you’re talking about a mortality rate, there’s an agreed-upon definition of what that is, in general terms, and so we could better compare that data across labs.”

Comparing the preprint manuscripts of individual studies with their eventually published versions, the researchers found that about 90 percent of the 1,606 data points remained in the text after peer review. More than 170 were edited, and more than 300 new data points were added across the sample of 100 studies.

And while the researchers found that the confidence intervals associated with the estimates — “it’s like the margins of error you hear about in polls,” Hutchins says — shrank by about 7% after peer review, the changes in actual estimates were small and statistically insignificant.
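As a rough illustration of the kind of comparison described above, the sketch below (using entirely hypothetical numbers, not values from the study) shows how the narrowing of a confidence interval between a preprint and its published version can be quantified as a percent change in interval width:

```python
# Illustrative sketch with hypothetical numbers (not data from the study):
# quantify how much a confidence interval narrows between a preprint
# estimate and its peer-reviewed version.

def ci_width(lower: float, upper: float) -> float:
    """Width of a confidence interval."""
    return upper - lower

def pct_width_change(preprint_ci: tuple, published_ci: tuple) -> float:
    """Percent change in CI width from preprint to published version.

    Negative values mean the interval narrowed after peer review.
    """
    w_pre = ci_width(*preprint_ci)
    w_pub = ci_width(*published_ci)
    return 100.0 * (w_pub - w_pre) / w_pre

# Hypothetical case-fatality-rate estimate: 2.0% (95% CI 1.5-2.5) in the
# preprint, with the same point estimate but a tighter CI after review.
change = pct_width_change((1.5, 2.5), (1.535, 2.465))
print(f"CI width changed by {change:.1f}%")  # prints "CI width changed by -7.0%"
```

A change of about -7% in interval width corresponds to the average shrinkage the study reports; the actual analysis aggregated such changes across many estimates and labs.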

“The wild swings between preprints and published versions would be hard to explain,” Hutchins says. “But that’s not what we’re seeing. There’s not much change in the data presented and the estimates based on that data.”

Quantifying the differences typically observed after studies cross the peer review finish line can help consumers of the latest science consider how much weight they place on preprint results when reporting discoveries or making health recommendations.

“Journalists and politicians should look at the fact that 90% of data points make it through expert assessment, understand how much they typically change, and ask themselves: am I comfortable accepting that degree of change?” Hutchins says. “The answer to that question may depend on how important the decision is. If all you’re concerned about is your reputation, you may be exposed to a different degree of risk than if you’re making life-or-death decisions.”

According to Hutchins, who while at NIH developed iCite, a curated search tool for COVID-19 research, the National Institutes of Health promoted preprint manuscripts as a way to accelerate the pace of scientific discovery.

Hutchins co-authored the new study with statistician Honghan Ye, who earned his doctorate at UW-Madison in 2021, and several UW-Madison undergraduate students. He hopes to expand his preprint research to include a wider range of scientific fields and to examine how the quality of preprints has changed over time.


Additional information:
Lindsay Nelson et al., Reliability of Preprint Evidence During Peer Review, The Lancet Global Health (2022). DOI: 10.1016/S2214-109X(22)00368-0

Citation: Most COVID-19 preprint studies stand up to peer review: Study (2022, October 12) Retrieved October 12, 2022, from

This document is subject to copyright. Except in good faith for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.