Given that this review is not specific to any particular journal, I will focus on the strengths and weaknesses of the manuscript.
Strengths
The paper addresses a relevant and timely topic. Global university rankings increasingly incentivize institutions to prioritize bibliometric indicators, often at the expense of research integrity. This metric-driven environment fosters questionable research and authorship practices, potentially undermining the reliability of rankings as proxies for academic performance.
The study's first objective was to identify universities exhibiting anomalous bibliometric patterns suggestive of metric-driven behaviors. This part of the study employed a rigorous methodology. From the 1,000 most-publishing universities worldwide (2018–2024), 98 institutions with >140% publication growth were identified. Of these, 16 universities (across India, Lebanon, Saudi Arabia, and the UAE) showed sharp declines in first and corresponding authorship rates, an early signal of potential metric manipulation. Trends were contextualized using national, regional, and international control groups, including top institutions from each represented country and globally recognized universities (e.g., MIT, Princeton, ETH Zurich, UC Berkeley). Seven bibliometric indicators were examined:
Overall and field-specific research output
First and corresponding authorship rates
Prevalence of hyper-prolific authorship (≥40 articles/year)
Publication surges in STEM fields
Co-authorship network density
Reciprocal citation and self-citation rates
The share of output published in delisted journals and retraction rates
This methodology yielded evidence-based findings. The 16 flagged institutions demonstrated:
Sharp declines in research leadership roles (first/corresponding authorship)
Unusually high publication surges in STEM disciplines
Dense, insular co-authorship networks
Elevated reciprocal and self-citation behaviors
Rising rates of retractions and publications in delisted journals
Weaknesses
However, the study has significant flaws that need addressing.
First, the problem with university rankings is much broader. My previous study attempted to reflect the entire spectrum of the academic community's complaints about university rankings (Kochetkov, 2024). The core issue is that reducing the performance of a complex system to a single composite number (league tables) is fundamentally flawed. No revisions to methodology can change this.
The study's second objective was to develop a new composite metric, the Research Integrity Risk Index (RI²), to systematically detect institutions at risk of compromising research integrity. However, a significant portion of the critique of university rankings targets composite indicators themselves (see, for instance, Fauzi et al. (2020) and Bellantuono et al. (2022)). The specific technical issue is the arbitrariness of the weights. In this case, the RI² components are weighted equally. Is this justified? We cannot know, because there is no evidence-based method for justifying any particular choice of weights.
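To make the weighting problem concrete, here is a minimal sketch (using hypothetical, illustrative component scores, not the manuscript's data) of how the ranking induced by a composite index can change when equal weights are replaced by an equally defensible alternative:

```python
import numpy as np

# Hypothetical, normalized component scores for three institutions
# (columns: retraction rate, share in delisted journals, self-citation rate).
components = np.array([
    [0.80, 0.20, 0.30],   # University A
    [0.30, 0.75, 0.40],   # University B
    [0.45, 0.45, 0.45],   # University C
])

equal_weights = np.array([1 / 3, 1 / 3, 1 / 3])
alt_weights = np.array([0.50, 0.25, 0.25])  # a different, equally arbitrary choice

for label, w in [("equal", equal_weights), ("alternative", alt_weights)]:
    scores = components @ w
    order = np.argsort(-scores)  # highest composite "risk" first
    ranked = ", ".join(f"Univ {'ABC'[i]} ({scores[i]:.3f})" for i in order)
    print(f"{label:>11} weights: {ranked}")
```

The point is not that one weighting is better than another, but that the resulting league table depends on a choice for which no empirical justification exists.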
I strongly suggest avoiding composite indicators. The individual components (e.g., rates of retractions and publications in delisted journals) can be meaningful in context. However, I would check them for mutual correlation.
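As a sketch of what such a check could look like (assuming a per-institution table of component values; the column names below are illustrative and not taken from the manuscript):

```python
import pandas as pd

# Hypothetical per-institution component values (illustrative column names,
# not the manuscript's variables).
df = pd.DataFrame({
    "retraction_rate":    [0.02, 0.15, 0.01, 0.20, 0.03],
    "delisted_share":     [0.05, 0.30, 0.02, 0.25, 0.04],
    "self_citation_rate": [0.10, 0.35, 0.08, 0.40, 0.12],
})

# Pairwise Spearman correlations; coefficients near +/-1 indicate that the
# components carry largely redundant information, which argues against
# treating them as independent ingredients of a composite index.
print(df.corr(method="spearman"))
```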
Minor comments:
1. The university selection procedure is difficult to follow. Adding a flow diagram illustrating the selection criteria and filtering steps would significantly enhance clarity here.
2. P. 3 “… first and corresponding authorship rates, which serve as recognized indicators of intellectual contribution and leadership.” While generally recognized as such, the interpretation and typical attribution of these roles may vary across different national and institutional research contexts. This variability should be acknowledged.
3. P. 9 “Its adoption would mark a significant advance toward restoring trust in global academic rankings.” The discussion would be strengthened by briefly acknowledging initiatives exploring alternatives, such as the More Than Our Rank initiative, which question the necessity of “league tables” altogether.
Conclusion
In summary, while the manuscript presents a relevant topic and employs a rigorous methodology to identify institutions with anomalous bibliometric patterns, the proposed solution of creating a new composite index (RI²) fundamentally contradicts the valid critique of ranking systems that the study itself relies upon. The evidence-based findings from the first objective are valuable, but the development of the RI² metric introduces significant weaknesses related to the inherent flaws of composite indicators.
References
Bellantuono, L., Monaco, A., Amoroso, N., Aquaro, V., Bardoscia, M., ... Bellotti, R. (2022). Territorial bias in university rankings: A complex network approach. Scientific Reports, 12(1), 4995. https://doi.org/10.1038/s41598-022-08859-w
Fauzi, M. A., Tan, C. N., Daud, M., & Awalludin, M. M. N. (2020). University rankings: A review of methodological flaws. Issues in Educational Research, 30(1), 79–96. http://www.iier.org.au/iier30/fauzi.pdf
Kochetkov, D. (2024). University rankings in the context of research evaluation: A state-of-the-art review. Quantitative Science Studies, 5(3), 533–555. https://doi.org/10.1162/qss_a_00317
The author declares that they have no competing interests.