PREreview of Analysis of citation dynamics reveals that you do not receive enough recognition for your influential science

DOI: 10.5281/zenodo.17335296
License: CC BY 4.0

This is an interesting paper that presents a large-scale comparison of journal-level and article-level citation indicators for assessing biomedical researchers.

While the empirical findings reported by the authors are of significant interest, I disagree with the interpretation the authors give to them. In their interpretation of Figure 2, the authors conclude that their results suggest “a substantial improvement in recognition for a large segment of the biomedical research workforce by including article-level indicators as a way of recognizing research”. I disagree with this interpretation because in most contexts, in particular hiring, promotion, and funding allocation, researchers find themselves in an essentially zero-sum setting: if one researcher gains recognition (e.g., becomes more likely to be hired, promoted, or funded), some other researcher must lose recognition (e.g., becomes less likely to be hired, promoted, or funded).

In the final subsection of the Results section, the authors acknowledge that researchers often find themselves in zero-sum settings. Surprisingly, however, the authors claim that even in a zero-sum setting there are “large differences between how many authors are favored using article level metrics rather than journal level metrics”. This conclusion is odd. In a zero-sum setting, a fixed number of researchers can be recognized, so both indicators recognize the same number of researchers, and the number of authors favored by switching from one indicator to the other must by definition equal the number of authors disfavored by the switch.

This odd conclusion turns out to follow from the specific statistical approach the authors take, without explaining it in full detail, to convert raw indicator scores into percentile ranks, as illustrated in Figure 4. According to the authors, even after converting raw indicator scores into percentile ranks, many more researchers receive recognition under article-level indicators than under journal-level indicators. However, this is an artefact of the statistical approach taken by the authors. In the real world, the number of opportunities to be hired, promoted, or funded is fixed, so these settings are of a truly zero-sum nature: if one researcher receives more recognition and is therefore more likely to be hired, promoted, or funded, some other researcher must receive less recognition and be less likely to be hired, promoted, or funded. Using article-level indicators instead of journal-level indicators will not increase the number of researchers who can be offered a job or a promotion, or who can be awarded funding.
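
To make this zero-sum point concrete, consider the following minimal sketch (in Python, using entirely hypothetical data; none of the names or numbers are taken from the paper), which shows why, with a fixed number of recognition slots, switching from one indicator to another favors exactly as many researchers as it disfavors:

# Minimal sketch with hypothetical data: with a fixed number of
# "recognition slots" (jobs, promotions, grants), switching from a
# journal-level to an article-level indicator favors exactly as many
# researchers as it disfavors.
import random

random.seed(42)
n_researchers = 1000
n_slots = 100  # fixed number of available opportunities

# Hypothetical raw scores under the two types of indicators
journal_score = [random.random() for _ in range(n_researchers)]
article_score = [random.random() for _ in range(n_researchers)]

def top_k(scores, k):
    # Researchers (by index) who fill the k available slots under this indicator
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

recognized_journal = top_k(journal_score, n_slots)
recognized_article = top_k(article_score, n_slots)

gainers = recognized_article - recognized_journal  # favored by the switch
losers = recognized_journal - recognized_article   # disfavored by the switch

# Both selections contain exactly n_slots researchers, so the number of
# gainers necessarily equals the number of losers.
assert len(gainers) == len(losers)
print(len(gainers), "researchers gain recognition;", len(losers), "lose it")

Any apparent asymmetry can therefore arise only if the number of recognized researchers is allowed to differ between the two indicators, which is precisely the artefact described above.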

There are important reasons to criticize excessive reliance on journal-level indicators, but the argument presented by the authors is not convincing. The empirical insights provided by the authors are valuable, but the authors need to rethink the interpretation they give to their findings.

Finally, a minor comment relates to the Relative Citation Ratio (RCR), the indicator used by the authors to measure the citation impact of an article. There has been some debate about this indicator. See the following paper by Janssens and colleagues: https://doi.org/10.1371/journal.pbio.2002536, and also the following blog post of mine: https://www.cwts.nl/blog?article=n-q2u294. In my view, the authors should inform readers that there are different perspectives on the pros and cons of the RCR indicator.

Note: This review pertains to version 3 of the preprinted article.

Competing interests

The author declares that they have no competing interests.

Use of Artificial Intelligence (AI)

The author declares that they did not use generative AI to come up with new ideas for their review.