Comments
This review reflects comments and contributions from Abebe Melis, Randa Salah Gomaa Mahmoud, Stephen Gabrielson, Melissa Chim, Ashley Farley, Allie Tatarian, Gliday Yuka, Deepak Antiya, Chalermchai Rodsangiam, Martyn Rittman, and Rachel Riffe. Review synthesized by Stephen Gabrielson.
The study aims to demonstrate, using bibliometric data, that APCs have failed to achieve equity in open access publishing and instead disproportionately impact researchers from low- and middle-income countries (LMICs). While APCs are high across all fields, the lack of significant differences between fields (p = 0.2) suggests that APCs are uniformly a barrier to global health equity. This aligns with the title’s concern about APCs threatening equity, as they impose a financial burden on authors, particularly those from LMICs. The consistently low percentage of LMIC first authors (median 4%, IQR 1–10) across all fields highlights the limited representation of LMIC researchers in these journals, supporting the argument that high APCs and limited access to open access publishing disproportionately affect them. The significant difference (p = 0.016) in the distribution of open versus hybrid journals shows that surgery journals have the lowest proportion of open access journals (12%) and the highest proportion of hybrid journals (88%). This may imply that surgery journals are less accessible to LMIC researchers, exacerbating inequities in publishing opportunities.
Major comments
I think the context setting needs a major rewrite with an expert on the background of the open access movement and its intersection with traditional academic publishing. There should be mention of the market forces that have increased APC prices and of how many of the large commercial publishers have leveraged open access as another revenue stream. It should be acknowledged that not all open access journals levy APCs, nor is paid open access the only route to open access; this study, however, focuses solely on APCs as the mechanism to achieve open access. I like how the calculation chunks APCs into $500 increments. I do think it might be important for the overall analysis to compare LMIC author lists in closed articles in the same hybrid journals. While I don’t disagree that APCs are inequitable and unaffordable, the bigger issue may be the lack of LMIC authorship in general; separating out fully OA journals and hybrid journals would help make the distinction clearer. I find it interesting that the biggest fully OA publishers aren’t included. I don’t think it’s necessary, but I do wonder whether this was intentional. Publishers like PLoS, at least, are more transparent about their waiver options (not that waivers are equitable).
While there is some evidence that APCs and LMIC author numbers correlate, I don’t feel like this paper provides compelling evidence as to why. I don’t doubt that APC levels play a role, however well-documented bias in editorial processes and at publishers are also very likely to be involved. These aren’t considered here, despite the strong negative correlation between SJR value and LMIC authorship. Some control groups of journals would also be useful: are the same effects seen in subscription journals, diamond OA journals, or journals with a regional focus?
The study does not directly address whether LMIC authors are better represented in open access versus hybrid journals. This is a critical limitation, as it prevents drawing specific conclusions about the role of journal type in fostering equity. Given the focus on global health equity, a subgroup analysis or stratification should be conducted to evaluate whether there is a statistically significant difference in the representation of LMIC authors between open access and hybrid journals. Such an analysis would provide more concrete evidence to support the argument that high APCs limit the inclusion of LMIC authors.
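As a concrete illustration of the kind of stratified comparison suggested here, the per-journal percentages of LMIC first authors in each group could be compared with a rank-based test. A minimal sketch (all journal-level percentages and group sizes below are invented for illustration, not taken from the study):

```python
# Hypothetical per-journal percentages of LMIC first authors, split by model.
open_access = [6.0, 9.0, 4.5, 12.0, 7.5]  # invented values
hybrid = [2.0, 3.5, 1.0, 4.0, 2.5]        # invented values

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x versus group y: counts the
    pairs (xi, yj) with xi > yj, scoring ties as 0.5. A p-value would
    come from the exact null distribution or a normal approximation
    (e.g. via scipy.stats.mannwhitneyu)."""
    return sum(1.0 if xi > yj else (0.5 if xi == yj else 0.0)
               for xi in x for yj in y)

u = mann_whitney_u(open_access, hybrid)
# Here u = 25.0 = len(x) * len(y): complete separation of the two groups
# in this toy data. Values near n*m/2 would indicate no group difference.
```

With the study's actual journal-level data in place of the invented lists, this would directly test whether LMIC representation differs between open access and hybrid journals.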
I can tell the authors are invested in this topic, however, the article suffers from not including a scholarly communication voice on the research team. The concept of OA, as presented, seems muddled or incomplete. It also doesn’t take into consideration the social complexity of the OA movement or of the publishing process as indicated by the stronger correlation between LMIC authorship and SJR. Setting the stage in the introduction by clearly defining the OA movement, then defining one path some publishers have taken to support it (APCs) could help focus the lens on the inequities APCs perpetuate.
Minor comments
General:
There is a lot of talk in the paper about different CC licenses and OA vs hybrid models that I don’t think is actually important for the analysis and results presented in the paper.
The article clearly states its objective: to measure APC costs and assess the impact on equity and journal bibliometric measurement. Overall, this is a good starting point, but adding a bit more meta-research relevance information would make the article stronger and more compelling. Adding empirical data would make the article more powerful and would help readers believe in the arguments it makes.
The percentage of LMIC first authors is uniformly low across fields, with no significant differences (p = 0.3). While this highlights inequity, it does not suggest that surgery is uniquely affected.
The similarity in APCs across fields (median ~$3,700) suggests that financial barriers are universal rather than field-specific. While important, this does not directly differentiate the impact on surgery from that on other fields.
Copyright licensing is tangential to the focus of this study. Waivers, however, are very relevant to the intersection of APCs and publishing equity and could benefit from some time spent on them in the introduction. Including a table of the “colors” of open access, with the APC considerations of each, could frame the discussion better than bringing in copyright types.
Introduction section:
The way hybrid journals are defined here sounds more like a description of green OA. Hybrid journals give authors two options: 1) publish under the journal’s subscription model, or 2) pay the APC to make their work OA immediately. Also, when the authors refer to “publication” they should be clear about what that refers to (article, journal, or publisher) and be consistent throughout. With this definition of hybrid OA, it’s worth noting that many articles published in hybrid journals under subscription access remain that way.
The sentence that explains that OA exists within the Creative Commons licenses isn't quite right. Open Access can exist without CC licensing. The license is attributed to an individual article. Also, OA isn't a specific model and CC isn't necessarily a set of rules. CC can be used in much broader contexts than scientific (or even scholarly) research.
The authors write “APCs are intended to support the financial sustainability of OA journals and add value for researchers”. However, hybrid journals have an OA option, so APCs aren’t intended just for financial sustainability: they also serve as another revenue source for the publisher, or as a benefit the authors pay for.
The statement, “Amid growing concerns among the academic global surgery community that APCs are becoming increasingly unaffordable…” could benefit from a reference.
The authors say that equity is a guiding principle in global health fields. Who established and upholds this principle? Maybe reword this as something desirable rather than framing it as a universally accepted principle.
When the authors mention the cost of publishing, we think that this is really the “price” of publishing. Many journals run on high profit margins.
Why were the metrics in this study chosen? The h-index, for example, is much more commonly used for individuals than for journals, and it isn’t mentioned in the article cited (reference #12). The h-index also isn’t correctly defined: it should be the largest number h such that h articles have each received at least h citations (see: https://en.wikipedia.org/wiki/H-index). Limitations of the h-index, and of any metric, should be described as well; the h-index, for example, is known to be highly influenced by how long the person (or, in this case, journal) has been active. Why was it thought to be an appropriate measure for this study? Other established metrics such as the Impact Factor could be used instead to help assess the reach of a journal.
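Since the definition matters here, a minimal sketch of the h-index computation for a journal, given a list of per-article citation counts (the example counts are invented):

```python
def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])  # 4: four articles have at least 4 citations each
```

Because citation counts only accumulate, h can only grow over time, which is exactly the age dependence noted above: an older journal gets a higher h-index simply by having been active longer.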
We were also hoping to see mention of diamond open access, which is free (usually gratis and libre) for both authors and readers. Or mention of other factors that can affect equity in publishing, such as bias in editorial decisions. It’s also worth noting that the largest publishers are based in the US and Europe (see e.g. https://onlinelibrary.wiley.com/doi/10.1002/leap.1301 and https://onlinelibrary.wiley.com/doi/10.1002/leap.1531)
Methods section:
The authors included journals only if they had at least 100 articles with first author information listed. How was this 100 article limit arrived at? It might exclude newer or independent journals that are likely to have a lower APC.
Can the authors provide more information on the open-sourced datasets of APCs that they used? How was the information obtained or can they direct readers to where they can access the APC data?
Any reason why corresponding authors weren’t included with first authors?
Did the authors come across any papers where multiple authors shared the first author position? We would imagine that in these instances they usually share the same affiliation, but maybe not always?
Reference to the World Bank Group’s classifications should be added.
A direct comparison between APCs and the incomes of the people likely to be publishing in these journals would be more interesting and telling than comparing to the GNI. Especially since we're talking about surgeons, I would expect their incomes to be higher than the GNI per capita, but how much higher may vary a lot by country.
Results section:
I would be interested to see how the CC licenses were “ranked”. Does a CC BY-NC or a CC BY-ND offer more “researcher rights”?
With how the subsection on “Publishing Options” is worded, CC licenses and waivers seem to be conflated. It would be more relevant to have this subsection focus on waivers (not CC licensing) as this is important. It should also reference some of the studies or thought pieces on the topic. Such as: https://www.csescienceeditor.org/article/left-in-the-cold-the-failure-of-apc-waiver-programs-to-provide-author-equity/
We would suggest removing the word “waiver” from the statement “an institutional agreement for APC waivers…”. An institutional agreement means that the institution has already paid for publications under an agreement, it’s just that the authors are not seeing the APC price tag themselves.
The authors could provide examples of APC waivers based on author criteria.
Discussion section:
I think the authors have missed a key conclusion: the SJR value has a more significant effect on LMIC authorship than APC level. This suggests some other significant forces that work against equity.
For APCs that are incurred as out-of-pocket expenses: how do the authors know that the expenses are out of pocket? Many institutions have different funding models to cover APCs for their affiliated authors. Perhaps unaffiliated authors should have been included in the analysis, as they would be more likely to face out-of-pocket expenses. We agree that APCs are too high, in any case.
The authors write “This means that where APCs are incurred as out-of-pocket expenses for researchers, their high cost can be prohibitive for researchers who may have to prioritize which data to publish, or even whether to submit for publication at all.” We’d like to point out that the costs of data sharing are separate from publishing. Perhaps “which results to publish” is more accurate than “which data to publish” to clarify this distinction?
What is meant by “CC restrictions” in the “financial burden” section?
Was “democratizing science for all” the initial promise of OA? It could be argued that the point of OA is to make scientific results freely available online, but that isn’t the same as democratizing science for all – just a small part of it. As the authors and data have alluded to, there are many pain points in publishing itself that negatively impact researchers in LMICs.
“Science has been commoditized by the high cost of APCs”. Science publication was commoditized by subscriptions long before APCs came along. The authors should word the effects of APCs more carefully. (I'd recommend https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science for background reading)
What does the abbreviation “HIC” refer to?
I think the bibliometrics findings are actually the more interesting part of your analysis: the negative correlation with LMIC authorship was stronger for SJR than for APC costs. In fact, your “SJR moderated with APC” correlation was slightly positive! This could use more attention, in my opinion.
“We found the relationship between APC and both bibliometric measures shows a weak positive association for both H-index and SJR.” I think this is very interesting. I’ve seen other research suggesting that APC pricing is tied to prestige and that authors will pay for prestige. I would assume that would be reflected in the analysis, but it most likely speaks to larger inequities in authorship and research more broadly.
For the shifting of financial burden to authors, I don't see how this shift affects APC levels. The existence of APCs - yes. The level of APC is determined by market forces and business considerations of publishers.
The subsection “Implications and Future Directions” could use a discussion on how institutions, libraries, scholarly societies, and publishers all have a part to play in setting the value of APCs and supporting alternative models for OA publishing.
I think linking to some of the community work in this space would be helpful. Such as OASPA's recommended practices: https://zenodo.org/records/14261488 & https://www.coalition-s.org/pricing-framework-to-foster-global-equity-in-scholarly-publishing/ & https://www.coalition-s.org/blog/maximizing-participation-in-scholarly-communication-through-equitable-pricing/
The authors mention self-archiving as a way to disseminate work outside of the OA publishing model. We’re not exactly sure how the authors are defining self-archiving here, but preprinting should be mentioned as a way to achieve equitable OA and CC BY licensing.
Could the authors find a citation to support the statement “factors such as advertising, journal visibility, and author factors may affect citations more than the OA model itself…”? There are studies that show OA articles tend to receive higher citations than subscription-based articles: https://peerj.com/articles/4375/
Comments on reporting
The stronger negative correlation between journal metrics and publication of LMIC authors than between APCs and LMIC authors begs the question - is it the journal status or the APCs that are getting in the way? When corrected for metrics like SJR, does the correlation between APCs and LMIC authors still stand?
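One way to answer that question is a partial correlation between APC and LMIC authorship, controlling for SJR. A minimal sketch of the standard first-order formula (the journal-level arrays below are invented placeholders, not the study’s values):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    """Correlation between x and y with the linear effect of z removed."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Invented journal-level data: APC (USD), % LMIC first authors, SJR.
apc = [1500, 2500, 3200, 3900, 4500, 5200]
lmic_pct = [10, 8, 6, 4, 3, 1]
sjr = [0.8, 1.2, 1.9, 2.4, 3.1, 3.8]

r = partial_corr(apc, lmic_pct, sjr)
# If r stays strongly negative, APCs matter beyond journal prestige;
# if it shrinks toward zero, SJR may be doing most of the work.
```

Reporting this alongside the raw correlations would let readers see whether the APC effect survives adjustment for journal status.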
Suggestions for future studies
I think focusing on a specific geographical area or a comparison of multiple areas will be a good future study based on the Figure 2 map.
In the discussion, the researchers say: “While we were unable to evaluate the disparate definitions of the “hybrid” designation given by journals and publishers” - first, I don’t think this is actually relevant to this paper, BUT, I think doing this analysis would be really interesting!
I would like to see more follow-up, if it doesn’t already exist, on the correlation between LMIC authors and SJR or other journal metrics, which were more strongly negatively correlated than high APC costs.
The link between SJR and LMIC authorship naturally lends itself to deeper study of the impact of prestige publishing on medical researchers in LMICs. A quantitative study with negative socioeconomic implications deserves to be followed by a focused (say geographically) qualitative study in order to see if the lived experiences of the researchers match the data.
The authors declare that they have no competing interests.