This preprint describes quantitative and qualitative results of a survey on perceptions and motivations in peer review administered to a selected sample of recent corresponding authors of medical journal articles. 8.6% of those invited responded and completed >80% of the survey. Most respondents:
This is a useful and well-executed piece of work that is extremely helpful for continuing research into attitudes surrounding peer review, particularly training, motivations, and incentives. It includes important recommendations about how current peer review recruitment should consider motivation and training, e.g. “Currently, journal editors may spend time emailing numerous potentially untrained peer reviewers when they could instead contact a smaller number of highly trained and motivated peer reviewers.”
The aim of the survey is stated as being “to provide an up-to-date perspective of international biomedical researchers' views on peer review training.” The sample of respondents is a limiting factor in how far such an analysis is possible, so my main critiques concern the makeup of the sample, and particularly whether the authors have put these limitations into sufficient context; overall, I think this could use further work:
I was a little unclear on what the responses to the questions on participation in peer review may be telling us. For instance, “How many articles have you peer reviewed in the last 12 months?” does not specify whether the respondent was an invited reviewer or a co-reviewer. The possible confusion arising from this question may have affected some of the data: for example, the number of respondents to the question “For how many years have you been active as a manuscript peer reviewer?” does not appear to correspond with the number of respondents who say they have never been involved in review in the previous question.

I found the high level of participation of biomedical researchers in peer review surprising, until I considered some of the biases above, e.g. that the sample is enriched for faculty. But I still found it surprising (if the question is interpreted as having been an invited reviewer, or involved in some serious way in peer review) that so many of the master's students, graduate students, postdocs, etc. reported in the dataset appear to be participants. That is, again, until I considered that if someone in these roles is allowed to be a corresponding author (because there can be controversies and arguments about who gets this position), they are likely to be in a particularly privileged position in their lab compared to many of their peers.

All in all, this still makes the data very interesting, but I would urge providing greater context about who this population is and who they are likely to reflect. I would claim that they reflect a particularly privileged group of researchers and are therefore not representative of the typical biomedical researcher. Selecting for corresponding authors is a good strategy for engaging with those who are given access to, or invited to, peer review, but this is generally a biased and privileged group, which may affect broader generalizations about the experiences of all biomedical researchers. This is also before considering that these are the authors who were ultimately successful in publishing in these journals, which, depending on the perceived competitiveness of the journals, may shape the population further.
Another bias that is not discussed in the preprint is that the responses will be affected by who was motivated to respond, which may be reflected in the answers to the opinion-based questions. For example, the people who responded may have been motivated to do so precisely because they think peer review is so important, rather than there being a general consensus that this is the case. These points may require more discussion. At the same time, this makes the correlated opinions interesting too, especially in light of the population that was invited to respond; the data are very insightful, but they require more context.
The methods are particularly clear, and the full recruitment email and survey texts are included in the preprint, which is very helpful for those wishing to carry these prompts forward into future surveys for potential comparisons. The authors registered the study protocol before carrying out the work, which is a particular strength. In addition, I appreciate the transparency in data sharing, and that the anonymized data and the codebook are included on the Open Science Framework project page for this work: https://osf.io/wgxc2/
One comment regarding the requirement for training: I have observed that some journals (e.g. biological society journals) do not require all reviewers to be trained, but rather require early career researchers specifically to go through training before being added to the reviewer database, whereas faculty reviewers are not made to do so. Was such a nuance, or something like it, ever reported in this dataset (it seems there are opportunities for respondents to volunteer that information, but they aren't explicitly prompted to do so)? Any discrimination in who is "required" to be trained would be interesting. It would also be interesting to know whether there are journals that require training of everyone who wants to review for them.
Based on some of my comments above, is there anything notable about the population (n=80) who are themselves involved with journals, with respect to their responses elsewhere in the survey? This insight into the role of editors may be helpful, especially when compared with work such as Hamilton's (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7717900/). The extent to which this subset and the rest of the cohort agree could be interesting to examine.
On the authors' recommendations: while peer review does indeed take place at journals, and so there may well be an expectation that they provide training, it may also be important to consider that generic peer review training could happen as part of undergraduate or graduate training at institutions. Journals are particularly focused on curation, but curation goes beyond the evaluation of the validity of research, or scholarly discourse about it. I would therefore be interested to see future discussions focus on which stakeholders may be best placed to carry out training about peer review, and the wider benefits that a more devolved training landscape may have. For example, most of the undergraduates we are carrying out our peer review education with will not go to graduate school or review for journals, but they will encounter science in their everyday lives in a vast array of roles. The ability to engage with and critique research seems a very important skill to learn at the undergraduate level in STEM and pre-medicine, and it is part of the case we are making for peer review education separate from the specific need for labor at journals. Similar arguments could be made for the importance of journal-agnostic peer review training, which could perhaps address some critiques of how current reviewers are perceived to approach their reviewing tasks.
Finally, I’ve found this work useful in rethinking some ongoing projects of my own. I’ve provided some examples here to illustrate some of the contexts where I see this work being important, and some thoughts the authors brought up that I share, in the hope that they are of interest or use:
I have no conflicts to report; I do not know the authors personally and have not been involved in any way with the work. I do not stand to gain or suffer financially or otherwise from this publication.
This review is published under a CC-BY license.