
PREreview of An intervention to improve students’ knowledge of open science practices in an undergraduate psychology course

Published
DOI: 10.5281/zenodo.20029544
License: CC BY 4.0

Summary:

This manuscript evaluates a brief (20-minute) instructional intervention aimed at improving psychology undergraduate students’ knowledge of and attitudes toward open science practices. The topic is timely and relevant, and the study makes a valuable contribution by embedding open science teaching within a substantive psychology course context.

We commend the authors for conducting a field-based educational intervention and for promoting open science literacy among students. The comments below are intended to strengthen the methodological clarity and interpretability of findings.

Major Comments:

  1. The study is currently described as a “pre–post intervention design”. However, because participants cannot be linked across timepoints, the data represent independent (albeit partially overlapping) samples rather than repeated measures.

We therefore strongly recommend:

  • Renaming the design to something like “cross-sectional cohort study (pre vs. post)”

  • Explicitly stating this study design limitation in the abstract

  • Hedging the causal claims about change due to the intervention

This is a central methodological constraint and should be framed as such throughout the manuscript.

  2. We found it difficult to follow the decision process regarding the use of mixed-effects models (MEMs) versus OLS models. While the authors report ICCs and fit indices, the rationale for switching between models is not consistently transparent. We encourage the authors to state their decision rule explicitly and to directly compare the fit of the MEM and OLS models.

  3. A related statistical issue concerns the structure of the dataset. Because some participants contributed data at both timepoints without being identifiable, while others contributed data only once, the resulting dataset is neither fully independent nor fully paired. It is unclear how the current analytical approach accounts for this hybrid structure and how it may influence statistical inference. At minimum, this should be acknowledged as a limitation that introduces uncertainty into the estimates and may affect the robustness of the conclusions.

  4. The use of ANOVA for the comparison between psychology students and students from other disciplines also raises questions. Given that only two groups are compared, a t-test would be more conventional. In addition, the groups are substantially imbalanced in size, which may affect the stability and interpretability of the results.

  5. All comparisons would benefit from the inclusion of effect sizes, which are currently missing throughout the manuscript. The APA Journal Article Reporting Standards (APA-JARS) offer helpful guidance on which details to report in papers such as this.

  6. In terms of results reporting, non-significant findings should be described more consistently in the main body of the paper rather than only in the supplementary materials (e.g., the non-significant results of the comparison between psychology students and other students). Furthermore, Figure 1 would benefit from indicators of statistical significance.

  7. Given the manuscript's explicit focus on open science, we encourage the authors to strengthen their own adherence to open science principles. While they already provide access to their study materials (thank you!), we would also ask the authors to make their raw data and analysis code openly available.

  8. The somewhat bimodal distribution of the data (Fig. 1) should be explicitly mentioned and interpreted.
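A transparent MEM-versus-OLS decision rule could be made concrete as follows. This is a minimal sketch on simulated placeholder data; the variable names (`score`, `timepoint`, `class_group`) and grouping structure are our assumptions, not the authors' actual dataset:

```python
# Hypothetical sketch: fit both models by maximum likelihood on placeholder
# data, then compare fit via AIC and report the ICC of the grouping factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 10, 12
df = pd.DataFrame({
    "class_group": np.repeat(np.arange(n_groups), n_per),
    "timepoint": np.tile([0, 1], n_groups * n_per // 2),
})
# Simulated outcome with a group-level random effect (placeholder data).
df["score"] = (3.0 + 0.4 * df["timepoint"]
               + rng.normal(0, 0.5, n_groups)[df["class_group"]]
               + rng.normal(0, 1.0, len(df)))

ols = smf.ols("score ~ timepoint", data=df).fit()
mem = smf.mixedlm("score ~ timepoint", data=df,
                  groups=df["class_group"]).fit(reml=False)

# AIC = -2*logLik + 2*k (computed by hand for the MEM for comparability);
# k = fixed effects + random-intercept variance + residual variance.
aic_ols = ols.aic
aic_mem = -2 * mem.llf + 2 * (len(mem.fe_params) + 2)

# ICC: share of total variance attributable to the grouping factor.
var_group = float(mem.cov_re.iloc[0, 0])
icc = var_group / (var_group + mem.scale)
print(f"AIC OLS={aic_ols:.1f}, MEM={aic_mem:.1f}, ICC={icc:.2f}")
```

A pre-registered rule such as "use the MEM whenever its AIC is lower, or whenever the ICC exceeds a stated threshold" would make the model choice reproducible.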
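For the two-group comparison, a Welch t-test, which does not assume equal variances and is more robust to unbalanced group sizes, could replace the ANOVA. The group sizes and values below are illustrative placeholders, not the study's data:

```python
# Hedged sketch: Welch's t-test for two groups of unequal size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
psych = rng.normal(3.8, 0.7, 95)  # placeholder: psychology students
other = rng.normal(3.5, 0.9, 20)  # placeholder: students from other fields

t_stat, p_value = stats.ttest_ind(psych, other, equal_var=False)  # Welch
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```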
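Effect sizes for the two-group comparisons could be reported, for instance, as Cohen's d with the Hedges' g small-sample correction. The helper functions and ratings below are illustrative, under the assumption of roughly continuous outcome scores:

```python
# Illustrative effect-size computation: Cohen's d (pooled SD) and Hedges' g.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1)
                         + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def hedges_g(x, y):
    d = cohens_d(x, y)
    df = len(x) + len(y) - 2
    return d * (1 - 3 / (4 * df - 1))  # small-sample bias correction

# Placeholder ratings for two hypothetical groups.
post = np.array([3.9, 3.5, 4.1, 3.7])
pre = np.array([3.1, 2.8, 3.4, 3.0, 2.6, 3.2])
print(f"d = {cohens_d(post, pre):.2f}, g = {hedges_g(post, pre):.2f}")
```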

Minor Comments:

  1. The sample size reported in the abstract appears to differ slightly from that reported in the methods section (116 vs. 115 participants post-intervention).

  2. The keyword list could be expanded to include more specific and informative terms.

  3. The introduction would benefit from situating the study within the broader curricular context in which it is embedded. In particular, it would strengthen the manuscript to provide some background on the role of mandated introductory psychology courses and the extent to which metascience and open science practices are currently integrated into the department’s, school’s, or faculty’s teaching curriculum.

  4. We recommend extending the introduction and/or discussion of how the present intervention differs from other open science interventions reported in the literature.

  5. The rationale for conducting the intervention within a health psychology course, rather than a methods-oriented course, could be more clearly articulated.

  6. It would be informative to describe the types of health problems assigned to student groups in the supplement.

  7. Data and insights from the group presentations would be valuable in their own right for understanding how students learned; these could be analyzed using qualitative coding, content analysis, or similar approaches.

  8. The post-intervention questionnaire uses positive wording for every item, which may bias students toward selecting "agree" or "strongly agree" relative to questionnaire designs with neutral wording or a mix of positively and negatively worded items (e.g., "I believe that the benefits of open science outweigh the challenges of implementation." could become "My view of open science practice implementations is …"). The authors could briefly touch on this in their discussion.

  9. The discussion could be expanded to address the generalizability of the intervention to other disciplines and to students at more advanced stages of training.

Suggestions for future studies:

The current study appears to apply a quantitative design in a context where the available statistical power is limited, which constrains the strength of the conclusions. In such cases, it may be more appropriate to begin with a qualitative or mixed-methods approach. Methods such as interviews, focus groups, reflective writing assignments, or participatory observations could provide richer insights into how students understand and engage with open science practices. These insights could then inform the development of more robust quantitative studies.

For future research, a within-participant longitudinal design using pseudonymized identifiers would allow direct estimation of individual-level change and substantially strengthen causal inference. Such an approach would also improve statistical power and precision.

The authors might also want to consider designs that compare different course contexts or cohorts (e.g., across subjects or semesters) to assess how effectiveness varies by setting.

When assessing attitude change, instead of abstract questions, consider asking about specific (real or hypothetical) cases in which open science practices may be more or less advisable. This would probe students' attitudes in greater depth.

Future work could also examine differences between more theory-oriented and practice-oriented instruction to assess how teaching approaches influence knowledge, attitudes, and the uptake of open science practices. For example, comparing students who actively conduct research projects incorporating open science practices with those who receive more traditional instruction could provide valuable insights into both learning outcomes and students’ experiences of applying these practices in context.

Further notes:

We did not independently verify the simulation-based sensitivity analysis or reproduce the reported results, as the raw data were not publicly available.

AI (ChatGPT, GPT-5.3, 02.05.2026) was used to assist with language refinement and structuring of this review. All scientific judgments are the reviewer’s own.

Competing interests

The authors declare that they have no competing interests.

Use of Artificial Intelligence (AI)

The authors declare that they did not use generative AI to come up with new ideas for their review.