PREreview of Controlled experiment finds no detectable citation bump from Twitter promotion
- Published
- DOI: 10.5281/zenodo.10044712
- License: CC BY 4.0
This review reflects comments and contributions from Melissa Chim, Allie Tatarian, Martyn Rittman, Pen-Yuan Hsing. Review synthesized by Stephen Gabrielson.
Selected journal articles were tweeted from one of several Twitter accounts with a large number of followers. The altmetrics and citations of these papers were compared with those of a set of control papers over a three-year study period. While altmetrics increased immediately after tweeting, there was no statistically significant increase in citations for the tweeted papers versus the controls by the end of the study period.
Major comments:
I would like to see a more explicit acknowledgement that this experiment was conducted with only ecological papers - the results are written as if the conclusions apply to scientific research broadly and not one specific discipline. For example, disciplinary differences in citation politics and mechanisms may have a big impact on the effects of social media dissemination. Would social media affect citations of monographs in the humanities the same as ecology papers? If the authors believe their findings can be generalised to other domains, I’m happy for them to make that argument, too.
I like the authors’ methodical approach to the study. It’s well-designed and takes into account weaknesses of previous similar studies. I appreciate how thoroughly the authors explained their criteria for choosing articles. It’s a shame that it is statistically under-powered to detect citation changes in WoS/Scopus, but that is an interesting result in itself and sets parameters for future studies.
From my perspective, the current discussion section of this paper (1) summarises the key learnings from the experiment, (2) acknowledges that social media engagement is useful beyond paper citation counts, and (3) offers a “wistful” commentary on the value of social media dissemination of research. These points are worthwhile. However, I’d like to see a deeper, constructive dissection of the limitations of this experiment in the discussion section. Beyond the fact that mostly ecologist accounts tweeted ecology papers, there are various potential minor issues that could be tackled in future studies. I’d love to see them discussed.
While the authors state that the dataset collected from this experiment is shared in the Supplementary Materials, I was not able to find it when reading the paper. Where is the dataset, and can the authors directly cite it in the text? Similarly, the current Acknowledgements section states that the publisher of these journals (John Wiley & Sons) wrote the scripts to collect much of the raw social media data. Where are these scripts published? And what about the statistical analyses? Did the authors also write scripts for those, or were they done in some other way? There is currently very little reporting in this paper on the data and implementation details (e.g. source code). I suggest a dedicated data and code availability section that states which of these artefacts have been published (with full citation and open source license metadata), along with a discussion of limitations and reproducibility. This is not a box-ticking exercise. For example, this paper describes using classical frequentist statistics, but it may be interesting to apply a different analytical approach (e.g. Bayesian modeling), as sketched below. Any code that has been written should also be published in commented form for others to study, peer review, and build upon. For components that the authors could not publish for any reason, a discussion of these limitations could inform future efforts.
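To illustrate the kind of alternative analysis that shared data would enable, here is a minimal sketch of a Bayesian model of citation counts. This is not the authors’ analysis; the file name, column names, and model structure are hypothetical and serve only to show the idea.

```python
# Minimal illustrative sketch (not the authors' analysis): a Bayesian
# negative-binomial model of end-of-study citation counts, assuming a
# hypothetical data frame with columns "citations" (count) and "tweeted"
# (1 = tweeted article, 0 = control).
import arviz as az
import pandas as pd
import pymc as pm

df = pd.read_csv("citations.csv")  # hypothetical file name

with pm.Model() as model:
    intercept = pm.Normal("intercept", mu=0, sigma=2)
    beta_tweet = pm.Normal("beta_tweet", mu=0, sigma=1)  # effect of tweeting on log-mean citations
    alpha = pm.Exponential("alpha", 1.0)                 # over-dispersion
    mu = pm.math.exp(intercept + beta_tweet * df["tweeted"].values)
    pm.NegativeBinomial("obs", mu=mu, alpha=alpha, observed=df["citations"].values)
    trace = pm.sample(2000, tune=1000, target_accept=0.9, random_seed=1)

# The posterior of beta_tweet gives a direct probability statement about the
# size of any citation effect, rather than only a binary significance verdict.
print(az.summary(trace, var_names=["beta_tweet"]))
```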
Not everyone is familiar with English-language social media platforms and how they work. I think this paper would be informative and useful to a wider international audience if the authors could briefly explain how Twitter works and how it compares to other popular platforms. This information would allow a more critical analysis of how much of the effect seen in this experiment can be attributed to Twitter specifically versus social media in general. And because the authors are social media experts, the Discussion section could also consider whether Twitter and journal publishers make (or do not make) it easy to access the data needed for this kind of experiment. This would be a useful methodological discussion to inform future studies.
Minor comments:
I would have liked to see a bit more detail about the authors’ backgrounds since their expertise played such a large role in the study overall.
I agree - also it seemed to me that all of the authors and the journals they targeted were in the ecology/conservation field, but I don’t think there was an explicit acknowledgement of it in the text.
Did the authors check whether any of their control articles were tweeted by other scientists during the study period? If this happened, it could dilute the measured effect and lead the authors to draw false-negative conclusions.
This study was explicitly designed as a hypothesis-based experiment. In line with that, I suggest the authors explicitly state their hypothesis or hypotheses (and the corresponding null hypotheses) in the Materials and Methods section.
The authors acknowledge that in at least one prior study, “Twitter promotion was also associated with 24 hours of free access to the articles.” For the current reported experiment, did the authors track and account for the ease of access to the 110 articles in the study? If so, how?
The authors “obtained daily download counts for articles in five of the journals” - Did the publishers of the other journals simply refuse to provide that data? Also, how is “download” defined? Is it literally someone clicking to download the PDF file? If so, did the authors account for the possibility that some of the articles studied can be read online in addition to being downloadable as a PDF file? What are the potential limitations here?
I appreciate the reporting on the growth of followers for all 11 Twitter accounts used in this experiment. Is it possible that an article tweeted later in the 3-year study period would receive higher altmetric scores or citation counts because the account tweeting it had more followers at that time? I suspect the randomisation tests would account for this, but I’d like to double check; a rough sketch of one such check follows below.
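For instance, one way to check this would be a permutation test that shuffles the tweeted/control labels only within calendar strata, so that articles tweeted late in the study (when the accounts had more followers) are compared only with contemporaneous controls. This is a rough sketch under assumed data, not the authors’ code; the file name and the columns "citations", "tweeted", and "quarter" are hypothetical.

```python
# Rough sketch (not the authors' code): a stratified permutation test in which
# the tweeted/control label is shuffled only within calendar quarters, so that
# late-tweeted articles are compared with controls from the same period.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.read_csv("citations.csv")  # hypothetical file name

def mean_diff(labels: pd.Series) -> float:
    # Difference in mean citations between labelled (1) and unlabelled (0) articles.
    return df.loc[labels == 1, "citations"].mean() - df.loc[labels == 0, "citations"].mean()

observed = mean_diff(df["tweeted"])

# Build the null distribution by permuting labels within each quarter.
null = np.array([
    mean_diff(df.groupby("quarter")["tweeted"].transform(lambda s: rng.permutation(s.values)))
    for _ in range(10_000)
])
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"Observed difference: {observed:.2f}, stratified two-sided p = {p_value:.3f}")
```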
Figure(s) with color (e.g. Figure 2) should be checked and edited (if necessary) for color-blind and black-and-white printing accessibility.
Important point on inclusive terminology: The current text describes the authors as “scientists” who use social media to communicate with a diverse audience, including those in the “general public”. There is a growing body of research that critiques this dichotomy. For example, are the authors not also members of the “public”? And can those in the “public” not be called “scientists” if they happen to perform science in some capacity? Without necessarily citing the relevant body of peer-reviewed literature, I suggest a few ways to make the text more inclusive of the diverse ways in which people perform science. For example, the authors could state in the Introduction that they are those who “are professionally employed to conduct scientific research at universities/research institutions, which we will shorten as ‘scientists’ for the practical purposes of this article. The ‘general public’ in this text refers to those whose primary vocation is not conducting scientific research.”
Under Experimental design, the authors report that “other non-standard article types” were excluded. That is fine, but I suggest removing “non-standard” as it unnecessarily devalues those “other” articles for the purposes of this experiment or paper.
Can the authors please include a statement on contributor roles, such as expressed through the CRediT contributor roles taxonomy? (https://credit.niso.org/) This can be located in the Acknowledgements section, or elsewhere depending on their preference.
In the first paragraph of the Introduction, it would be good to spell out "AP" as "Associated Press" for a diverse international audience.
Comments on reporting:
(see comment on data and code availability under major comments)
Suggestions for future studies:
The authors acknowledged that their study focused on articles only from John Wiley & Sons. I can see further studies being done with a focus on other journals and/or other disciplines.
+1, and I would also like to see whether the same effect appears with journals that have a broader focus or higher impact factors.
It would be interesting to see if there is an effect based on tweeting multiple times from the same or different accounts. Previous studies took more of a marketing-campaign approach; it would be interesting to see where the boundary lies in how much effort is needed to increase citations (if indeed there is a boundary!).
An interesting future meta-study would be to investigate how the mechanics of different social media platforms and the makeup of their user bases (e.g. Twitter vs Mastodon vs Threads, etc.) relate to whether and how they affect the citation metrics and citation politics of academic research across different fields of study (including non-STEM fields!).
Competing interests
The author declares that they have no competing interests.