
PREreview of Determining vaccine responders in the presence of baseline immunity using single-cell assays and paired control samples

Published
DOI: 10.5281/zenodo.17993412
License: CC0 1.0

Summary

This paper introduces a novel statistical framework to determine whether participants in a vaccine study exhibit a true vaccine-induced immune response, even in the presence of baseline immunity or assay-related issues such as misclassification errors, batch effects, and noise. The authors use paired control samples collected at baseline and post-vaccination to estimate assay drift between timepoints. They then adjust responder p-values using two approaches (minimally adjusted and maximally adjusted p-values) to better distinguish biological signal from assay variability. Overall, the reported p-value reflects confidence that the post-vaccination response (T1) meaningfully exceeds the baseline response (T0) after accounting for assay noise.
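
To make this logic concrete for readers, a minimal, hypothetical sketch follows of how a responder test comparing T1 to T0 can be shifted by drift estimated from paired control samples. It is not the authors' implementation: the one_sided_pvalue helper, the two-proportion normal approximation, the simple averaging of control drift, and all counts are illustrative assumptions.

```python
from math import sqrt
from statistics import NormalDist


def one_sided_pvalue(k1, n1, k0, n0, shift=0.0):
    """One-sided normal-approximation p-value for H0: p1 <= p0 + shift.

    k/n are cytokine-positive and total cell counts at T1 and T0.
    This is a generic two-proportion test, not the paper's exact statistic.
    """
    p1, p0 = k1 / n1, k0 / n0
    pooled = (k1 + k0) / (n1 + n0)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return 1 - NormalDist().cdf((p1 - p0 - shift) / se)


# Paired control samples run at both timepoints (hypothetical counts);
# each tuple is (k_T0, n_T0, k_T1, n_T1).
controls = [(40, 100_000, 52, 100_000),
            (35, 100_000, 50, 100_000)]

# Estimated assay drift: average increase in the control positivity rate from T0 to T1.
drift = sum(k1 / n1 - k0 / n0 for k0, n0, k1, n1 in controls) / len(controls)

# One participant's counts at baseline (T0) and post-vaccination (T1).
k0, n0, k1, n1 = 30, 100_000, 55, 100_000

p_unadjusted = one_sided_pvalue(k1, n1, k0, n0)
# Illustrative "maximal" correction: require T1 to exceed T0 by more than the full drift.
p_adjusted = one_sided_pvalue(k1, n1, k0, n0, shift=max(drift, 0.0))

print(f"unadjusted p = {p_unadjusted:.4f}, drift-adjusted p = {p_adjusted:.4f}")
```

In this toy example the drift correction moves the participant from a nominally significant call (p ≈ 0.003) to a non-significant one (p ≈ 0.11), which is the kind of reclassification the adjusted p-values are designed to surface.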

Overall Assessment

This is a strong paper that introduces a practical and much-needed statistical method for vaccine immunogenicity analysis. Intracellular cytokine staining (ICS) assays are widely used but highly variable, so a framework that corrects responder calls is extremely valuable. The introduction of both minimally and maximally adjusted p-values provides researchers with flexible tools that can be used either conservatively or exploratorily, depending on study goals. The real-world application to CoVPN 3008 clearly demonstrates how unadjusted ICS data can mislabel participants as responders or infected. Overall, the paper makes a meaningful contribution to how ICS data should be analyzed moving forward.

Limitations 

  1. Clarify the assumptions about independence and stability

    1. Comment: 

      1. The framework assumes that control samples are independent and identically distributed across timepoints and that misclassification rates are constant across sample types. In practice, these assumptions may not always hold for ICS assays, particularly when working with heterogeneous PBMC donors.

    2. Recommendation: 

      1. The authors could add a discussion of potential violations of these assumptions and how such violations might impact inference. This would strengthen the robustness and interpretability of the framework.

  2. Interpretation for non-statisticians

    1. Comment: 

      1. The paper is mathematically dense, with extensive formulas and complex statistical concepts. Immunology-focused readers may struggle with ideas such as confidence sets over misclassification rates and the interpretation of minimally adjusted p-values.

    2. Recommendation: 

      1. Adding more visual explanations (e.g., schematic diagrams, flowcharts, or intuitive figures) alongside or instead of some formulas would make the methodology more accessible to a broader audience.

  3. Limited discussion of generalizability beyond ICS to other assay types

    1. Comment: 

      1. Similar issues of assay variability exist beyond ICS, including ELISpot, FluoroSpot, and single-cell RNA sequencing assays.

    2. Recommendation: 

      1. The authors could expand the discussion to explain how this p-value adjustment framework might generalize to other immunological or single-cell assay types, increasing the broader relevance of the method.

  4. Properties of the minimally adjusted p-value

    1. Comment: 

      1. Because the minimal adjustment only slightly modifies the p-value, it may still classify some participants as “responders” when the observed signal is driven by assay noise. While this concern is briefly acknowledged, the paper does not quantify the potential risk.

    2. Recommendation: 

      1. The authors could provide guidance on when minimally adjusted p-values are appropriate versus when the maximal adjustment should be preferred. Additionally, estimating or simulating the potential false-positive rate under minimal adjustment would be informative; a rough simulation along these lines is sketched after this list.

  5. Formatting comments 

    1. Figures: Axes should be labeled more clearly (e.g., Figure 2 heatmaps should explicitly specify “false positive rates”).

    2. Tables: Table 4 is very dense; splitting it into two tables or simplifying the presentation of counts would improve readability.
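
As suggested under limitation 4, a small simulation can quantify how often pure assay drift produces spurious responder calls. The sketch below is hypothetical: it reuses the generic two-proportion test from the Summary sketch, assumes the drift is known exactly, and stands in for, rather than reproduces, the paper's minimal and maximal adjustments; the cell counts, rates, and drift magnitude are all assumed values.

```python
from math import sqrt
from statistics import NormalDist

import numpy as np


def one_sided_pvalue(k1, n1, k0, n0, shift=0.0):
    # Same generic two-proportion test as in the sketch under the Summary.
    p1, p0 = k1 / n1, k0 / n0
    pooled = (k1 + k0) / (n1 + n0)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return 1 - NormalDist().cdf((p1 - p0 - shift) / se)


rng = np.random.default_rng(0)

n_cells = 100_000    # cells acquired per sample (assumed)
base_rate = 3e-4     # true cytokine-positive rate, identical at T0 and T1 (no real response)
drift = 1.5e-4       # assay drift inflating every T1 measurement
n_sim, alpha = 5_000, 0.05

false_pos_raw = false_pos_adj = 0
for _ in range(n_sim):
    k0 = rng.binomial(n_cells, base_rate)           # baseline counts
    k1 = rng.binomial(n_cells, base_rate + drift)   # post-vaccination counts: drift only
    if one_sided_pvalue(k1, n_cells, k0, n_cells) < alpha:
        false_pos_raw += 1
    if one_sided_pvalue(k1, n_cells, k0, n_cells, shift=drift) < alpha:
        false_pos_adj += 1

print(f"false-positive rate, unadjusted:      {false_pos_raw / n_sim:.3f}")
print(f"false-positive rate, drift-corrected: {false_pos_adj / n_sim:.3f}")
```

With these assumed settings the unadjusted test flags a substantial fraction of non-responders as responders, while the drift-corrected test stays close to the nominal 5% level; analogous simulations using the paper's actual minimal and maximal adjustments would make the trade-off between the two explicit.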

Competing interests

The author declares that they have no competing interests.

Use of Artificial Intelligence (AI)

The author declares that they used generative AI to come up with new ideas for their review.