PREreview of Are Digital Humanities really committed to open? An exploratory study on the availability of methodological workflows and open peer review practices
- Published
- DOI
- 10.5281/zenodo.20041952
- License
- CC BY 4.0
This preprint analyzes two important components of open scholarship practices in DH: data documentation and open peer review (OPR). The results reveal limited uptake of both practices: data documentation in the Journal of Open Humanities Data (JOHD) and OPR in a list of 29 DH publication venues. These results may encourage reflection within DH communities on which aspects of open scholarship are currently implemented in the field. The paper also makes the associated data, code, and data management plans openly available, which is consistent with its topic. This is a commendable effort, as preparing this documentation requires additional time and work.
While the paper is well written and provides interesting results, I offer some suggestions that could help improve it.
Major issues
Based on the methods section and the GitHub repository associated with the project, the uptake of data documentation and OPR was mainly studied using computational methods. However, the paper would benefit from further justification as to why these methods were more appropriate for answering the research questions than more manual qualitative methods, especially given the relatively small number of articles and journals analyzed.
In particular, the search terms used to identify the presence of data management plans and OPR seem rather limited. Some papers in the JOHD may not use the terms 'research data management' or 'data management plan', yet still provide documentation of the process of data creation and maintenance.
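As a purely illustrative sketch (not the authors' actual pipeline; the keyword list and the flag_candidate_passages function below are hypothetical), a broader, case-insensitive keyword match could be used to flag candidate passages for manual verification rather than to classify articles directly:

```python
import re

# Hypothetical broader keyword list; the authors' actual search terms may differ.
KEYWORDS = [
    r"research data management",
    r"data management plan",
    r"data documentation",
    r"data availability",
    r"data curation",
    r"data provenance",
]
PATTERN = re.compile("|".join(KEYWORDS), flags=re.IGNORECASE)

def flag_candidate_passages(full_text: str, context: int = 200) -> list[str]:
    """Return text snippets around each keyword match, to be read manually."""
    snippets = []
    for match in PATTERN.finditer(full_text):
        start = max(0, match.start() - context)
        end = min(len(full_text), match.end() + context)
        snippets.append(full_text[start:end])
    return snippets
```

Flagged snippets would then be checked by a human reader, which keeps the recall benefits of a wider net without inflating false positives.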
A similar issue arises when identifying the OPR policies of DH journals. Some journals have information about their review policies scattered across multiple web pages, but according to the openly available dataset (cfp_full.csv), the analysis only considered one web page per journal. This may have caused some false negatives. For example, Digital Humanities Quarterly practices single-blind peer review: “DHQ's peer review is blind but not double-blind […] As a result, concealing the author's identity from reviewers is not as essential for DHQ as it is in other domains.” (https://dhq.digitalhumanities.org/submissions/peerReviewing.html). As this web page is not included in the cfp.csv spreadsheet, it seems that the type of review that DHQ uses is misclassified.
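One possible mitigation, sketched below under the assumption of a Python scraping setup with requests and BeautifulSoup (which may not match the authors' tooling; the collect_policy_pages function and its link-following heuristic are hypothetical), is to expand each journal's starting policy page to include linked same-site pages whose anchor text mentions reviewing or submissions:

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def collect_policy_pages(start_url: str, max_pages: int = 10) -> dict[str, str]:
    """Fetch the starting policy page plus linked pages whose anchor text
    mentions review or submission, so scattered policies are less likely missed.
    Error handling is omitted for brevity."""
    pages, queue, seen = {}, [start_url], set()
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        response = requests.get(url, timeout=30)
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):
            text = link.get_text(" ", strip=True).lower()
            if any(term in text for term in ("review", "submission", "peer")):
                queue.append(urljoin(url, link["href"]))
    return pages
```

Even a shallow crawl of this kind, followed by manual checking, would likely have caught the DHQ peer-reviewing page mentioned above.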
I would suggest complementing the computational analysis with a close reading of at least a sample of, and ideally all of, the methods sections of the JOHD articles, as well as a larger selection of DH journal editorial policy web pages, in order to verify the accuracy of the computational analysis.
Minor issues
In Figure 1, adding a label to the x-axis would make the graph clearer.
In the paragraph after Figure 1, it would be helpful to add an introductory phrase or subheading to indicate that the topic has shifted from data documentation to OPR.
I would suggest providing more descriptive file names for the data shared on GitHub and Zenodo, so that people can better understand what each file contains without necessarily having to open it.
In the second paragraph of section 4, a sentence reads: “we observed that a few articles […] provided an appropriate, detailed description of the procedural methodology”. It would be more precise to state exactly how many articles did this.
In the conclusions, it would be helpful to provide some recommendations for DH journals and researchers to increase the uptake of data documentation and OPR.
Competing interests
The author declares that they have no competing interests.
Use of Artificial Intelligence (AI)
The author declares that they did not use generative AI to come up with new ideas for their review.