PREreview of Qualitative tools to understand the capability, opportunity and motivation behind researcher behavior: a scoping review
- Published
- DOI
- 10.5281/zenodo.17781579
- License
- CC BY 4.0
This review is the result of a virtual, collaborative Live Review discussion held on Tuesday, November 11, 2025, and Friday, November 14, 2025, organized and hosted by the PREreview Club of Future of Research Communication and e-Scholarship (FORCE11). The discussion was joined by 9 people: 3 facilitators, 4 review authors and 2 discussion participants (Ava Chan and Mercury Shitindo). We thank all participants whose thoughtful and generous contributions made this review possible. The review authors dedicated additional asynchronous time over the course of two weeks to synthesize the notes and prepare this final report. A final synthesis was prepared by Rosario Rogel-Salazar and reviewed by all review authors.
Summary
This scoping review investigates how qualitative tools—interview guides, focus-group guides, open-ended questionnaires—have been developed, applied, and validated to study researcher behavior using behavioral frameworks such as COM-B, TDF, BCW, and A-COM-B. Using a preregistered protocol and PRISMA-ScR guidelines, the authors identified 27 studies published through September 2024 and extracted detailed information about methodological design, theoretical integration, openness, and instrument availability.
The review makes a valuable contribution by mapping an emerging methodological landscape, highlighting widespread open-access practices and the proliferation of behavioral-model–informed qualitative instruments. However, transparency around tool development, validation procedures, and theoretical integration remains inconsistent. Strengthening sections on methodological clarity, conceptual framing, and reporting of study selection would significantly enhance the overall utility and rigor of the manuscript.
Major concerns and feedback
Insufficient transparency in study selection and PRISMA reporting. Reviewers struggled to understand how the initial ~5,000 records were narrowed to 27 included studies. Numerical inconsistencies in the PRISMA flowchart (e.g., identical counts for removed duplicates and ineligible records) further complicate interpretation. Recommendation: Correct PRISMA counts; provide clearer explanations of exclusion criteria; optionally list excluded studies or categorize reasons for exclusion to improve reproducibility.
Incomplete conceptual framing of theoretical integration. The conclusions strongly emphasize underuse or superficial use of behavioral frameworks, yet this normative argument is not sufficiently introduced or justified early in the manuscript. Recommendation: Strengthen the Background by explaining why deep theoretical integration matters, drawing on implementation science and qualitative research design sources cited later in the paper.
Limited search of grey literature and implications for bias. The review did not search key repositories where qualitative tools are often shared (OSF, Zenodo, Figshare, protocols.io). This omission likely reinforces the dominance of Global North, health-science publications observed in the sample. Recommendation: Explicitly acknowledge this limitation and, if feasible, conduct a supplementary targeted grey-literature search.
Ambiguous definition of “open access” and “open tools”. The term “open” is used broadly but without clear definitions. It is unclear whether “open” includes supplemental PDFs, licensed repositories, materials available upon request, or fully reusable resources with open licenses. Recommendation: Add operational definitions and coding criteria for openness and, if needed, revise results to reflect consistent categorization.
Ambiguity in inclusion logic for “qualitative tools”. About 41% of included studies used questionnaires, which can be qualitative, quantitative, or mixed. Inclusion criteria do not specify how qualitative components were identified. Recommendation: Clarify how the authors determined whether a questionnaire qualified as a qualitative instrument.
Issues of internal consistency. Reviewers noted mismatches in the reported number of included studies (27 vs. 28), discrepancies between the preregistered plan and manuscript (e.g., number of piloted studies), and conflicting statements across sections. Recommendation: Perform a careful consistency check across the manuscript.
Minor concerns and feedback
Issues in tables and figures
PRISMA numerical errors require correction.
Some author names are inconsistently formatted (e.g., “Hughes, Williamson & Young” should be “Hughes et al.”).
Table 3 could be made more useful by merging “publication status” and “tool availability”, or by graphically indicating depth of theoretical integration.
Accessibility and formatting issues
Avoid full-justified text for accessibility.
In the data extraction section, rephrase to avoid unnecessary nested parentheses and clarify ambiguous phrasing (e.g., “where the sample is inserted”).
Provide short definitions or consistent use of acronyms (COM-B, BCW, TDF, etc.).
Citation and reference inconsistencies
Add citations for the PCC framework, the theoretical-integration literature, and the seven previously known articles used to develop the search strategy.
Concluding remarks
This scoping review provides an important contribution to qualitative meta-research by mapping how behavioral frameworks are used to study researcher behavior. Its strengths include a transparent pre-registered protocol, clear reporting of data extraction, and extensive descriptive analysis of the selected studies. Addressing the major methodological and reporting concerns identified above—particularly those related to transparency, conceptual framing, and definitions of openness—will significantly increase the clarity and practical value of the work.
The manuscript will be valuable not only to researchers designing qualitative behavioral studies, but also to methodologists, open science practitioners, funders, journal editors, and graduate advisors who support rigorous and theoretically grounded research practices.
Competing interests
The review authors declare no competing interests related to the authors or content of this preprint.
This review represents the opinions of the authors and does not represent the position of FORCE11 as an organization.
Use of Artificial Intelligence (AI)
The authors declare that they did not use generative AI to come up with new ideas for their review.