PREreview of Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings

Published
DOI
10.5281/zenodo.15772738
License
CC BY 4.0

Given that this review is not specific to any particular journal, I will focus on the strengths and weaknesses of the manuscript.

Strengths

The paper addresses a relevant and timely topic. Global university rankings increasingly incentivize institutions to prioritize bibliometric indicators, often at the expense of research integrity. This metric-driven environment fosters questionable research and authorship practices, potentially undermining the reliability of rankings as proxies for academic performance.

The study's first objective was to identify universities exhibiting anomalous bibliometric patterns suggestive of metric-driven behaviors. This part of the study employed a rigorous methodology. From the 1,000 most-publishing universities worldwide (2018–2024), 98 institutions with >140% publication growth were identified. Of these, 16 universities (across India, Lebanon, Saudi Arabia, and the UAE) showed sharp declines in first and corresponding authorship rates, an early signal of potential metric manipulation; a minimal sketch of this screening logic follows the indicator list below. Trends were contextualized using national, regional, and international control groups, including top institutions from each represented country and globally recognized universities (e.g., MIT, Princeton, ETH Zurich, UC Berkeley). Seven bibliometric indicators were examined:

  1. Overall and field-specific research output

  2. First and corresponding authorship rates

  3. Prevalence of hyper-prolific authorship (≥40 articles/year)

  4. Publication surges in STEM fields

  5. Co-authorship network density

  6. Reciprocal citation and self-citation rates

  7. The share of output published in delisted journals and retraction rates
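For concreteness, the two-step screening described above can be sketched in a few lines of Python. This is my own illustrative reconstruction, not the authors' pipeline: the file name, column names, and the authorship-decline threshold are assumptions; only the >140% growth cut-off comes from the manuscript.

```python
# Hypothetical sketch of the screening step: the file name, column names,
# and the 10-point authorship-decline threshold are assumed for illustration.
import pandas as pd

# One row per university, with publication counts and first/corresponding
# authorship rates at the start and end of the 2018-2024 window.
df = pd.read_csv("university_output.csv")

# Step 1: flag institutions whose output grew by more than 140%.
df["growth"] = (df["pubs_2024"] - df["pubs_2018"]) / df["pubs_2018"]
surging = df[df["growth"] > 1.40]

# Step 2: among those, flag sharp declines in first/corresponding
# authorship rates (threshold assumed here).
decline = surging["first_corr_rate_2024"] - surging["first_corr_rate_2018"]
flagged = surging[decline < -0.10]

print(f"{len(surging)} surging institutions, {len(flagged)} flagged")
```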

This methodology yielded evidence-based findings. The 16 flagged institutions demonstrated:

  1. Sharp declines in research leadership roles (first/corresponding authorship)

  2. Unusually high publication surges in STEM disciplines

  3. Dense, insular co-authorship networks

  4. Elevated reciprocal and self-citation behaviors

  5. Rising rates of retractions and publications in delisted journals

Weaknesses

However, the study has significant flaws that need addressing.

First, the problem with university rankings is much broader. My previous study attempted to reflect the entire spectrum of the academic community's complaints about university rankings (Kochetkov, 2024). The core issue is that reducing the performance of a complex system to a single composite number (league tables) is fundamentally flawed. No revisions to methodology can change this.

The study's second objective was to develop a new composite metric, the Research Integrity Risk Index (RI²), to systematically detect institutions at risk of compromising research integrity. However, a significant portion of the critique of university rankings targets composite indicators themselves (see, for instance, Fauzi et al. (2020) and Bellantuono et al. (2022)). Specifically, the technical issue lies in the arbitrariness of the weights. In this case, the weights of the RI² components are equal. Is this justified? We cannot know, as there is no evidence-based procedure for justifying weights; the toy example below shows how the weighting decision alone can reorder institutions.
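To make this concrete, here is a minimal sketch with invented component scores, in which two equally defensible weight vectors reverse which institution looks riskier:

```python
# Toy illustration of weight sensitivity in a composite index. The scores
# are invented; the columns could be, e.g., retraction rate, delisted-
# journal share, and self-citation rate, each normalized to [0, 1].
import numpy as np

scores = np.array([
    [0.9, 0.1, 0.1],   # institution A: one extreme component
    [0.3, 0.5, 0.5],   # institution B: moderately elevated throughout
])

equal = np.array([1/3, 1/3, 1/3])   # the RI² choice
alt = np.array([0.8, 0.1, 0.1])     # an equally arbitrary alternative

print(scores @ equal)  # [0.367 0.433] -> B looks riskier
print(scores @ alt)    # [0.74  0.34 ] -> A looks riskier
```

Which institution the index flags is thus partly an artifact of the weighting decision, not of the underlying data.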

I strongly suggest avoiding composite indicators. The individual components (e.g., rates of retractions and publications in delisted journals) can be meaningful in context. However, I would check them for mutual correlation, since strongly correlated components are largely redundant; a minimal check is sketched below.
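Such a check is straightforward in practice; the file and column names below are placeholders for whatever the component data actually look like:

```python
# Pairwise correlation check for the RI² components (names are placeholders).
import pandas as pd

components = pd.read_csv("ri2_components.csv")  # one row per institution
cols = ["retraction_rate", "delisted_share", "self_citation_rate",
        "reciprocal_citation_rate"]
print(components[cols].corr(method="spearman"))
# Strong pairwise correlations would indicate redundant components.
```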

Minor comments:

1.      The selection of universities is difficult to grasp. A flow diagram illustrating the selection criteria and filtering steps would significantly enhance clarity here.

2.      P. 3 “… first and corresponding authorship rates, which serve as recognized indicators of intellectual contribution and leadership.” While generally recognized as such, the interpretation and typical attribution of these roles may vary across different national and institutional research contexts. This variability should be acknowledged.

3.      P. 9 “Its adoption would mark a significant advance toward restoring trust in global academic rankings.” The discussion would be strengthened by briefly acknowledging initiatives that question the necessity of “league tables” altogether, such as More Than Our Rank.

Conclusion

In summary, while the manuscript presents a relevant topic and employs a rigorous methodology to identify institutions with anomalous bibliometric patterns, the proposed solution of creating a new composite index (RI²) fundamentally contradicts the valid critique of ranking systems that the study itself relies upon. The evidence-based findings from the first objective are valuable, but the development of the RI² metric introduces significant weaknesses related to the inherent flaws of composite indicators.

References

Bellantuono, L., Monaco, A., Amoroso, N., Aquaro, V., Bardoscia, M., ... Bellotti, R. (2022). Territorial bias in university rankings: A complex network approach. Scientific Reports, 12(1), 4995. https://doi.org/10.1038/s41598-022-08859-w

Fauzi, M. A., Tan, C. N., Daud, M., & Awalludin, M. M. N. (2020). University rankings: A review of methodological flaws. Issues in Educational Research, 30(1), 79–96. http://www.iier.org.au/iier30/fauzi.pdf

Kochetkov, D. (2024). University rankings in the context of research evaluation: A state-of-the-art review. Quantitative Science Studies, 5(3), 533–555. https://doi.org/10.1162/qss_a_00317

Competing interests

The author declares that they have no competing interests.
