PREreview of Is biomedical research self-correcting? Modeling insights on the persistence of spurious science
- Published
- DOI
- 10.5281/zenodo.8318618
- License
- CC BY 4.0
This review reflects comments and contributions from Jessica Polka, Martyn Rittman and anonymous crowd members. Review synthesized by Jessica Polka.
The paper develops a model of competition between erroneous and corrective literature in order to interrogate situations where self-correction of the literature is likely to fail. The idea seems novel, and the work is a useful effort that could be expanded on in the future.
The paper highlights the lack of optimal self-correction in research and presents a model for understanding the contributing factors. The reproducibility crisis is a serious problem in academia, which makes this work relevant.
The author does a good job of highlighting the causes of the lack of self-correction in research.
Major comments:
A discussion of specific examples of corrective literature, perhaps a deep dive into one published paper and how its result spread or was countered in the literature, would help both to motivate the research and to test the model.
A fuller discussion of how corrective attempts are made would be helpful and would motivate the inclusion of ‘d’ as a linear term in the equation. How does one corrective attempt spur others? Would you anticipate the same ‘gold rush’ effect as for erroneous work, where one measure leads to others? Does this term relate to factors such as negative news or social media coverage, sanctions against poor research, or whistle-blowing?
How would different assumptions about the kinetics of how spurious and corrective results interact (as modeled in the first system of equations presented) affect the conclusions of the paper? For example, I could imagine that an exponential relationship might be more appropriate in some cases. If there are multiple papers all claiming something spurious, I would expect it to be harder to publish a rebuttal, but the impact of that one rebuttal could be much higher if it debunks multiple papers rather than a single one. Similarly, multiple corroborating debunking papers would probably have a greater combined effect than either paper alone.
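The intuition about superlinear corrective effects could be probed numerically. Below is a minimal sketch assuming a hypothetical two-compartment dynamic, not the paper's actual equations: spurious output grows exponentially, corrective output accrues at a steady rate, and the suppression term is either linear or quadratic in the number of corrective papers. All parameter names and values are illustrative.

```python
# Toy model of spurious (s) vs. corrective (c) papers. The dynamics and
# parameter values here are hypothetical illustrations, NOT the system
# of equations from the paper under review.

def simulate(correction, r=0.1, d=0.05, k=0.02, s0=1.0, t_max=100.0, dt=0.1):
    """Forward-Euler integration. `correction(c)` encodes the assumed
    kinetics of how corrective papers suppress spurious growth."""
    s, c = s0, 0.0
    for _ in range(int(t_max / dt)):
        ds = r * s - k * correction(c) * s  # growth minus suppression
        dc = d                              # steady corrective output
        s = max(s + ds * dt, 0.0)
        c += dc * dt
    return s

linear = simulate(lambda c: c)          # suppression linear in corrections
superlinear = simulate(lambda c: c**2)  # corrections reinforce one another
```

In this toy, the quadratic (mutually reinforcing) kinetics drive the spurious count down far faster than the linear version, illustrating why the choice of interaction term could matter for the paper's conclusions.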
The discussion could benefit from more concrete suggestions on how to remedy the situation, as well as examples of interventions that have worked, if any exist.
Minor comments:
In Figure 1, I'm puzzled by the fact that the publishing delay seems to change the number of publications but not the overall fraction of spurious vs. reliable papers at most time points. Intuitively, I struggle to understand this. I would expect that with no corrective delay, many more papers would be published quickly (which seems to be the case), but also that the reliable ones would more quickly overtake the spurious ones.
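One way to probe this intuition is a toy simulation in which corrective output only begins after a delay, tracking both the total publication count and the spurious fraction. The dynamics below are hypothetical and deliberately simple; they are not the paper's model, and every parameter is illustrative.

```python
# Hypothetical toy dynamics for exploring how a corrective delay affects
# publication totals vs. the spurious fraction. NOT the paper's model.

def run(delay, r_s=0.1, r_r=0.1, k=0.05, d=0.05, t_max=100.0, dt=0.1):
    s, r, c = 1.0, 1.0, 0.0  # spurious, reliable, corrective counts
    for step in range(int(t_max / dt)):
        t = step * dt
        dc = d if t >= delay else 0.0       # corrections start after `delay`
        s = max(s + (r_s * s - k * c * s) * dt, 0.0)
        r += r_r * r * dt
        c += dc * dt
    return s / (s + r), s + r               # (spurious fraction, total)

frac_fast, total_fast = run(delay=0.0)
frac_slow, total_slow = run(delay=20.0)
```

In this toy, a longer corrective delay raises both the total and the spurious fraction, which matches the intuition stated above; whether the paper's own kinetics behave the same way, and why the figure does not show it, would be worth clarifying.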
“The time taken for corrective work to begin is a perhaps surprising factor suggested by this investigation.” - Interesting point that could be discussed in the context of (1) delays in publishing retractions, (2) the continued citation of retracted works after retraction (https://doi.org/10.1080/08989621.2021.1886933), and (3) the lack of clear labelling of some retracted works on publisher websites or downstream services. It is not sufficient simply to identify a work as misleading; you need to let the community know.
Suggestions for future studies:
It would be interesting to model other technological interventions with this approach, such as retraction notices being displayed inline in bibliographies, etc.
Competing interests
The author declares that they have no competing interests.