The paper addresses an important and timely issue in open science: the appropriateness of adopting generative AI into open science practices. The authors delve into the benefits and limitations of genAI in the conduct and dissemination of science, using the UNESCO open science recommendations as a kind of rubric.
By my read, there are no major issues with the manuscript; however, there are a significant number of minor issues the authors should consider addressing in their next draft. The authors should be commended for bringing this research and thought leadership together in the manuscript.
The abstract of the paper implies that there is a direct, one-way relationship between genAI and open science: genAI affects open science practices. However, the relationship is better characterized as reciprocal. The section titled "Can OS open up genAI" is a good start toward acknowledging this, but the authors would serve the scope of the paper more fairly by stating it in the abstract as well. Yes, new technologies affect open science practices, but those technologies are often built upon, predicated on, or dependent on open science. The limitations of genAI model training, for instance, can be greatly mitigated with open data that is transparently communicated to users. Greater explication of this nuance would improve the scope of the paper.
UNESCO's definition of open science is widely adopted and a very good definition. The paper, however, does discuss limitations of genAI for equity based upon the UNESCO framework. It would be worthwhile mentioning that the UNESCO definition does not address equity directly, focusing more on inclusiveness. The official US federal definition, however, does address equity directly, and the authors could help readers make the connection between open science and the issues related to equity in generative AI by referencing that definition, at least as a complementary construct.
One issue concerning the use of AI in science communication, particularly in publications such as the Frontiers rat-gate episode that the authors cite: one wonders whether AI is the issue here at all. It seems reasonable that accelerated review models, such as those promised by for-profit publications like Frontiers, might be the real problem. A lack of editorial and reviewer oversight, coupled with a lack of reviewer training in recognizing genAI output, seems symptomatic of a much larger problem with the review process than with generative AI and its widespread availability. This isn't just a human-in-the-loop problem. This is a market problem, with for-profit publishing being misaligned with the public good of science. Certainly, generative AI magnifies this problem, and it will become increasingly incumbent on reviewers and editors to make more difficult decisions about editorial process throughout all of scholarly communication, but this process will not be helped at all until the incentives for quality review are disentangled from bottom lines. I think it's very clear how this fits into the UNESCO framework, and it would be good for the authors to pontificate a bit on the implications of generative AI interfacing with a seemingly broken peer review system.
I absolutely love the recommendations section and think it is very much needed for the scientific community. It would also be helpful to add a section on the use of AI by researchers, not just as a precondition of using AI, but as an active condition. Such recommendations might include: transparently communicate that you have used generative AI and explain when, where, and how, including what prompts were used and whether any selection among results was done by the user; describe any known limitations of the generative AI model used; and attempt to replicate an approximation of the results using more than one model for robustness.
The author declares that they have no competing interests.