
PREreview of Fast & Fair peer review: a pilot study demonstrating feasibility of rapid, high-quality peer review in a biology journal

DOI: 10.5281/zenodo.15303247
License: CC BY 4.0

Summary

The authors describe a 6-month pilot at the Company of Biologists’ journal Biology Open, in which manuscripts assigned to two of the ten academic editors were sent either to (1) reviewers paid a fee for each manuscript reviewed rapidly and to an acceptable standard, or to (2) reviewers paid a retainer to rapidly review up to three manuscripts per quarter. Comparing manuscripts reviewed through this pilot programme with those that underwent the conventional review process revealed a significantly reduced time to first decision, similar rejection rates, and adequate review quality as assessed by academic editors.

Major comments

The study is based on a very small number of manuscripts, which limits the generalisability of its conclusions: only 25 manuscripts were eligible for the trial, and 23 were reviewed in this way.

The quality of review was assessed by the academic editor on a three-level scale from 0 to 2 (expressed as 0, 50, or 100), which is not a very sophisticated assessment; see, for example, van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. Journal of Clinical Epidemiology 1999;52(7):625–629. https://doi.org/10.1016/s0895-4356(99)00047-5. By this measure, review quality was not significantly different within the trial, and neither was the rejection rate.

Along similar lines, could there be biases in using editorial assessments as a measure of review quality? Based on the description in the manuscript, it appears that academic editors were not blinded to the Fast & Fair pilot. Could this bias them towards scoring reviewer reports more highly? Is there an unbiased way to compare reviewer report quality between the Fast & Fair group and the conventional group (e.g., a masked computational method, or scoring by an editor blinded to the review process each manuscript underwent)?

How did the editors find the process? Did having a pool of reviewers make it easier to find reviewers? Did they feel compelled to use reviewers who had signed a contract? Were they more reluctant to seek an extra review in cases where the reviewers disagreed, because of the extra cost?

The speed of review was greatly enhanced in both models compared with the regular review process handled by the remaining eight academic editors and non-incentivised reviewers. However, at an estimated cost of over £50,000, or more than £2,000 per manuscript, it is hard to conclude that the intervention is cost-effective. The freelance payment route is less expensive per article, but it still requires an additional payment to reviewers of at least £440 for each manuscript that receives two reviews, or £660 if it receives three; adding the cost of administering the scheme would mean a significant impact on, for example, Article Processing Charges if passed on to authors. Running the scheme for longer would be more efficient, but the cost would probably still be £500–£1,000 per article. Across the industry, this would run to billions per year.
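
To make the scale of that claim concrete, a minimal back-of-envelope sketch (in Python) follows. Only the £220 per-review freelance fee comes from the manuscript; the average number of reviews per article, the per-article administrative overhead, and the global annual article count are illustrative assumptions of ours rather than figures from the pilot.

# Back-of-envelope cost sketch. Only the per-review fee is taken from the
# manuscript; every other figure is an illustrative assumption.
FEE_PER_REVIEW_GBP = 220         # freelance fee reported in the pilot
REVIEWS_PER_ARTICLE = 2.5        # assumed average of two to three reviews
ADMIN_OVERHEAD_GBP = 300         # assumed per-article administration cost
ARTICLES_PER_YEAR = 3_000_000    # assumed rough count of articles published annually worldwide

cost_per_article = FEE_PER_REVIEW_GBP * REVIEWS_PER_ARTICLE + ADMIN_OVERHEAD_GBP
industry_cost_per_year = cost_per_article * ARTICLES_PER_YEAR

print(f"Cost per article: £{cost_per_article:,.0f}")                          # ~£850
print(f"Industry-wide cost per year: £{industry_cost_per_year / 1e9:.1f}bn")  # ~£2.5bn

Under these assumptions, the per-article cost falls within the £500–£1,000 range quoted above, and the industry-wide total lands in the low billions of pounds per year.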

Can you add more discussion of the effect of paying reviewers on the broader publishing industry, for example an estimate of the total cost if this were rolled out across all journals, or across a specific field? Also, how would journals running on a very low (or zero) budget compete with those that pay reviewers? As the authors note, their methodology meant that the pilot was restricted to two major areas of biology. It remains to be seen whether all subject areas behave similarly.

Minor comments

Table 3 nicely separates reviewer fees from staff costs. It would be very helpful to split staff costs into two, separating set-up of the systems required for the pilot from the costs of administering it while running. 

Did any reviewers invited to participate decline because of the time constraints, objections to payment, unwillingness to sign a contract, or any other reason? How many potential reviewers were contacted?

The authors used rejection rates as a measure of editorial quality. However, there are other possible outcomes for each manuscript (acceptance, minor revision, and major revision are mentioned in Fig. 1 and Table 1). Did the Fast & Fair group differ from the control group in the distribution of manuscripts receiving these outcomes?

The authors discussed “differences in reviewer demographics and manuscript types” as potential factors contributing to differences between their results in the Biology Open pilot and those of Cotton et al. Are there any differences in reviewer demographics and manuscript types between the Fast & Fair group and the conventional group in Figure 2?

Could the length of reviews (i.e., word count) be used to assess their quality? Could there be some other way of assessing them, e.g., asking editors to highlight comments they considered high quality, or that helped them make a decision, and checking the distribution of those comments across the reviews?

Can you include a more specific time range for when the study was conducted, in terms of manuscript submission and publication dates or editorial decisions? Please also provide more details about the control group (the number of articles in the same period and the range of times to final decision).

In-line comments

“The Fast & Fair peer review initiative”: We suggest including the number of people involved at each stage (initial check, editorial check, etc.).

“80 reviewers were recruited”: Were they recruited at random from motivated reviewers, or were they targeted to have specific expertise?

“Retainer reviewers received £600 per quarter”: Even though this was a pilot study, would you be able to determine whether there is a difference in review quality between a retainer reviewer who reviewed three manuscripts and one who reviewed only one? How much time did they spend on each review?

“Freelance reviewers, in contrast, were paid £220 per manuscript.”: How was this number arrived at? Was there any thought of adapting it based on factors such as reviewer experience? Also, can you estimate the administrative cost of handling payments to the reviewers?

Figure 2: Please include the N for each group in the figure legend; is it the number of dots?

“Academic editors rated each review”: What was the cost of this in terms of time?

“A discrete three-level scale”: Can you include more details of the scale used and any advice provided to the reviewers?

Table 1: We suggest turning this into a supplement and showing summary statistics in a graph.

“experimental group (n=20)”: This is about a third of the size of the control group (n = 65); a ratio of 1:1, or close to it, would have been better for the study. Also, Table 1 states that 23 manuscripts were reviewed in the pilot; can you explain the discrepancy?

“rejection rate (30%)”: Based on Table 1, 8 manuscripts were rejected out of the 20 sent for peer review, giving a rejection rate of 8/20 = 40%.

Table 2: Were the reasons why freelancers declined review invitations collected? There are fewer retainer reviewers, yet they completed more of the reviews. Did editors prefer to draw on the retainers for this pilot when possible? Does this introduce bias into the cost-effectiveness analysis, and could it affect review quality?

“Many are used to a slower workflow”: I don't think it's simply a shift in expectations. Most scholars have very busy schedules and high workloads, and can't find the time to review a manuscript at very short notice.

Suggestions for further study

What would the implications be for a journal like Nature or Cell, i.e. a high-impact journal that takes 6–12 months to review?

Competing interests

Theodora Bloom: I work for BMJ, which conducts and publishes research on peer review; see https://www.bmj.com/about-bmj/evidence-based-publishing