In the United States, quicker and more decisive action against COVID-19 could potentially have prevented 70% to 99% of related deaths, and one such improvement would have been increased testing. But when asked why the United States could not import tests from other countries to alleviate its massive and deadly shortage, White House Coronavirus Task Force coordinator Dr. Deborah Birx cited a study claiming that foreign tests had a high false-positive rate. Upon closer inspection, however, that study, published in a peer-reviewed Chinese journal, contained numerous critical flaws and was retracted just days after it appeared in early March. Indeed, importing tests may well have saved some of the 100,000 Americans who have lost their lives to the virus.
This flawed study and its life-or-death consequences are by no means an isolated incident. With thousands dying each week, scientists have felt pressured to accelerate the research and publication process in order to supply as much potentially life-saving information as possible. However, some observers are concerned that this acceleration may result in publications of lower quality, even under traditional peer review. One expert commented, “You’re seeing papers published in the world’s leading medical journals that probably shouldn’t have even been accepted in the world’s worst medical journals.”
Central to this problem is that many researchers have resorted to directly uploading papers that have yet to undergo peer review onto what are known as “preprint servers.” Though there are benefits to accelerating the sharing of vital information among scientists, flawed findings are also increasingly being presented as fact. When these findings are irreversibly disseminated in the news and on social media, the result is rampant public misinformation and a loss of trust in science.
Peer Review and Preprints
To be published in a traditional scientific journal, a paper must first be read and evaluated by two to three experts. This process is called peer review. In theory, its purpose is to catch mistakes and faulty scientific conclusions before they reach the greater community. Typically, this quality-control process can take months to complete.
Preprint servers, which skip this process, are not a new phenomenon. Physical scientists have been uploading their work to preprint servers such as arXiv.org since the 1990s. However, scientists in the biological and medical fields were more reluctant to do so. As National Institutes of Health bioethicist Dr. David Resnik explained to the HPR, “You publish a paper about black holes … [and] whether you’re right or wrong about black holes, there’s not much we can do about a black hole at this point. If you publish a [faulty] paper on a drug, people can die. … There’s a lot at stake.”
This has changed in recent years. In 2013, bioRxiv, a preprint server for biology papers, was founded. In 2019, medRxiv, a server for medical papers, followed suit. While these servers initially did not receive much attention outside of the scientific community, the recent urgency to find medical breakthroughs for COVID-19 has led to an explosion in their usage and coverage. Views and downloads of articles on medRxiv have increased a hundredfold over the course of the pandemic. According to Resnik, of the more than 16,000 COVID-19 related publications, about 6,000 are on preprint servers, meaning that well over a third of this research has not been through a rigorous peer-review process.
Before the pandemic, one-quarter to one-half of articles uploaded to preprint servers were rejected due to serious errors. It follows that a considerable portion of the COVID-19 articles on preprint servers may likewise be flawed, providing kindling for scientific misinformation.
Questionable Science
The consequences of the lack of peer review have already been significant. In January, a team of researchers from India uploaded a preprint to bioRxiv suggesting “uncanny” similarities between SARS-CoV-2 (the official name of the virus that causes COVID-19) and a variant of HIV. While the preprint was removed within 48 hours because of serious scientific flaws, the headline-catching conclusions were widely discussed by many news outlets and conspiracy-theory propagators, including over 17,000 mentions on Twitter alone. The paper falsely suggested that parts of the HIV genome had been inserted into the SARS-CoV-2 genetic sequence, implying that the coronavirus could have been manufactured in a lab as a bioweapon.
In February, another preprint uploaded to bioRxiv suggested that the SARS-CoV-2 virus could have been man-made. While the paper was found to be flawed and withdrawn within 24 hours, the conspiracy theories casting COVID-19 as a bioweapon did not stop. These theories dangerously exacerbated distrust of public health authorities, leading more people to refuse to comply with public health guidance.
In April, researchers from Stanford University and the University of Southern California uploaded a preprint to medRxiv suggesting that the prevalence of COVID-19 in Santa Clara County, California, was potentially 85 times higher than previously thought. Many researchers sharply criticized the study’s statistical methodology, with some identifying at least five major mistakes. “It’s just that given the data they reported, they should’ve been clearer on what you can learn and what you can’t learn from the data,” Dr. Andrew Gelman, a Columbia University statistics professor who has written critically about the study, told the HPR.
Within hours of the Stanford-USC paper’s upload, numerous anti-lockdown activists, commentators and elected officials cited the study as proof that the COVID-19 pandemic was overblown and that public health measures were excessive. Many used its findings to call for a premature end to stay-at-home orders.
This problem is not confined to the United States. Another retracted preprint from April suggested that the anti-parasitic drug ivermectin could reduce COVID-19 mortality rates. Despite the paper’s removal, many governments in South America, including Peru, Bolivia and Brazil, began adding ivermectin to their treatment protocols and preventative recommendations. Many desperate individuals even began asking their doctors about acquiring ivermectin from veterinary sources and the black market. While there is no concrete proof that ivermectin is effective against COVID-19, some scientists have suggested that it may have toxic effects.
Preprints are by no means the sole cause of misinformation during the pandemic: Nearly one in four of the most widely viewed COVID-19 videos on YouTube also contains misleading information. All of these sources are especially potent right now: As University of Louisville political science professor Dr. Adam Enders argues, the U.S. public is particularly susceptible to misinformation and conspiracy theories surrounding COVID-19 because the pandemic has activated people’s feelings of uncertainty, powerlessness and anxiety. With loved ones succumbing to the virus, jobs on the line, and even experts unable to provide answers, people are left feeling powerless.
“The way conspiracy theories work is that they impose structure on the world,” said Enders. “They take a messy world that is full of coincidences and random patterns, and they weave it all together. … So when you [believe the false theory] that the novel coronavirus is a Chinese bioweapon, then you think that there is an antidote, or that there is already a vaccine. Because these people perpetrated it on us and built [it] in a lab, so there is an answer. It makes us feel less uncertain about the future.”
People prone to believing these theories will also discredit scientific experts after witnessing false preprints, citing the “sloppiness” or inconsistency of scientific papers. “It’s disparate pieces of information coming from experts, who you presuppose are involved in the conspiracy,” Enders said. “We know, as scientists, that science is inherently iterative, and that we don’t always get it right on the first shot. … That’s just how the scientific process works. But it’s the iterative nature of the process that conspiracy theorists feed on and use as evidence for the conspiracy.”
A Complex Problem
Despite these concerns, preprints do bring about many benefits: They allow results to be shared in as little as 48 hours with fellow scientists and the broader public. This rapid sharing of information allows other scientists to more quickly build on each other’s work — “rapid communication” alongside “rapid feedback,” according to Resnik — and, theoretically, for governments to make proactive, versatile public health decisions.
Despite his criticism of the Stanford preprint, Gelman agrees with Resnik and also does not believe preprints are the root of the problem. “In general, I’m a proponent of open post-publication review rather than secret pre-publication review,” he said. “So I don’t have any problem with preprint servers. People put stuff into a preprint server and they also send stuff into a journal for peer-review. I’ve done that [too].”
After weighing the costs and benefits of preprints, Dr. Resnik concludes, “Ethically, I think most people would agree that the emergency situation does justify trying to speed up the process as much as you can, without introducing unacceptable levels of error.”
At this point, the question we should be asking is not whether to abolish preprint servers in the hopes of curtailing misinformation. The more relevant question is how we can reform the preprint system, and more broadly the interactions between the scientific community and the public, to minimize the risk of misinformation while sustaining the life-saving accelerated rate of research. Where do we find this balance?
Reforms
Scientific Publishers
Recently, many journal publishers in the Open Access Scholarly Publishers Association, including Hindawi and the Royal Society, released an Open Letter of Intent calling for the formation of a rapid review system and increased collaboration between journals and preprint servers. In a joint interview with the HPR, Dr. Sarah Greaves, chief publishing officer of Hindawi, and Phil Hurst, a publisher at the Royal Society, acknowledged that preprint servers are here to stay. The question, from their perspective, is how journals can collaborate with preprint servers to ensure the highest-quality research “in a time when peer-reviewed, verified work needed to get out into the community as quickly as [possible],” according to Greaves.
One aspect of the proposed system is creating a rapid reviewer list of academics who are willing to commit to quick turnaround times in writing peer review reports. Another critical part involves journals sending teams of reviewers directly into preprint servers to place comments and identify important preprints for accelerated peer review.
Dr. Robert Blendon, professor emeritus of health policy and political analysis at the Harvard T.H. Chan School of Public Health, told the HPR that journal editors have a vital responsibility to identify and remove studies with potentially deadly consequences in order to preserve public trust in science. “You don’t know where the tipping point is,” he said. “I want to be careful that there is not some tipping point where people get so cynical about what’s going on that they won’t believe anything.”
Scientific Researchers
To minimize misinformation in the scientific process, Resnik said, “A big part of the ethics of science is transparency and openness.” Toward this end, Gelman suggested that the scientific research community should establish norms of sharing data and code with each publication, so that experiments are more easily reproducible and verifiable.
In addition, Hurst believes that part of the solution lies in changing the previously anonymous system of peer review to give peer reviewers recognition for their services. “[For] too long, they’ve been doing unpaid work without any sort of recognition. And this is where the open-science comes in again, publishing the peer reviewer reports and giving the reviewer credit for it.” Hurst suggested, “If somebody is doing a lot of work assessing preprints, maybe that should be a part of their CV and [recognized] in tenure and employment decisions [as] playing a role in the community.”
Peer Reviewers
Many of the above reforms would reinstate a more rigorous peer review process for preprints, but the peer review process itself is by no means perfect. A recent study (also a preprint) found that across a sample of 14 peer-reviewed scientific journals, the average turnaround time for peer review and publication fell by nearly half during COVID-19, from 117 days to 60 days. This drastic acceleration has led some skeptics to doubt whether peer review can really stop shoddy science from being disseminated to the community, and even prestigious medical journals like The Lancet have had to retract peer-reviewed studies.
Today, peer review is considered the “gold standard” in scholarly publishing. But, contrary to popular belief, the concept of peer review as we know it is actually a very recent phenomenon. Harvard History of Science professor Alex Csiszar notes that it was not until 1973, a century after its inaugural publication, that the prestigious scientific journal Nature began requiring articles to undergo external refereeing before publication. Csiszar contends that these external refereeing systems served mainly to convince the public that the scientific community “had procedures for regulating themselves and for producing consensus,” around the time that “refereeing emerged as a symbol of objective judgement and consensus in science.” Peer review was never originally intended to serve as a “scientific gatekeeper,” but rather to increase public confidence in science and its visibility.
Csiszar suggests, “In this sense, peer review has always been broken.”
The Public
Resnik believes that another part of the solution to minimizing public scientific misinformation lies in public education — journalists and the public need a better understanding of how the scientific and peer review processes work. He noted, “What the public needs to understand is that preprint servers are not peer-reviewed. What they are looking at is a work-in-progress … that could be subject to revision or could even turn out to be completely wrong.” Dr. Greaves echoed the idea that educating the media — especially as a responsibility of scientific publishers — is an important and understated step.
For all of us, COVID-19 is just one of many wake-up calls that the public’s perception of science needs rethinking. While we must ramp up public education and scientific oversight reforms surrounding preprint servers, these servers themselves are one part of a much bigger problem: the public’s erroneous perception of science as a perfect instrument of discovery, a one-hit wonder. In reality, it is an iterative process that builds upon itself toward truth.
Science is, and should always be, the gold standard in determining public health policy. But with today’s rapid pace of scientific publication, it is more important than ever that we scrutinize what we read and pay attention to the disclaimers. It is more important than ever that we recognize that one study might not capture the complete scientific truth. Above all, it is more important than ever to accept that this is precisely how science works.
If we succeed in implementing these changes, 2020 may be a turning point, a historic moment of science and society working together for the collective good.
Image source: “Microscope” by Ousa Chea is used under the Unsplash license