We, the students of MICI5029/5049, a Graduate Level Molecular Pathogenesis Journal Club at Dalhousie University in Halifax, NS, Canada, hereby submit a review of the following BioRxiv preprint:
Rachel Eguia, Katharine H. D. Crawford, Terry Stevens-Ayers, Laurel Kelnhofer-Millevolte, Alexander L. Greninger, Janet A. Englund, Michael J. Boeckh, Jesse D. Bloom. A human coronavirus evolves antigenically to escape antibody immunity. bioRxiv 2020.12.17.423313; doi: https://doi.org/10.1101/2020.12.17.423313
We will adhere to the Universal Principled (UP) Review guidelines proposed in:
Universal Principled Review: A Community-Driven Method to Improve Peer Review. Krummel M, Blish C, Kuhns M, Cadwell K, Oberst A, Goldrath A, Ansel KM, Chi H, O'Connell R, Wherry EJ, Pepper M; Future Immunology Consortium. Cell. 2019 Dec 12;179(7):1441-1445. doi: 10.1016/j.cell.2019.11.029.
SUMMARY: Eguia et al. investigated the ability of human sera collected over decades to neutralize contemporary and non-contemporary strains of HCoV-229E (229E). Using sera and viruses collected from the late 1980s to the present, the authors found that human sera from 1982 were unable to neutralize subsequent generations of 229E viruses, whereas sera recently collected from patients alive and likely exposed to 229E strains from 1982 could neutralize both modern and historic viruses, thereby providing some evidence of durable immunity. Finally, the authors confirmed that most of the mutations that prevent neutralization of modern viruses lie in the three loops of the receptor binding domain, although mutations in the N-terminal domain also likely contribute to antigenic escape.
OVERALL ASSESSMENT: This is a good article with solid experiments and conclusions. We appreciate the way the article was written; it is clear and concise and follows a sound logical progression. However, we are not convinced that the work is particularly novel, although this may be an issue of framing (see final section on Subjective Criteria, below).
STRENGTHS: The manuscript is well written, with a logical flow of ideas. Bioinformatics and neutralization data appear to be robust. Overall, while we have some concerns about statistics, the data are convincing.
WEAKNESSES: The narrowing of the sera samples selected for analysis seems to limit the scope of the study, and we think it might bias the results. Because of this, we are unsure whether these data can be generalized to a larger population, or whether they apply only to individuals whose antibody responses mirror the cutoffs chosen in this study.
DETAILED U.P. ASSESSMENT:
OBJECTIVE CRITERIA (QUALITY)
1. Quality: Experiments (1–3 scale) SCORE = 1.5
● Figure by figure, do experiments, as performed, have the proper controls?
- Fig. 1A: Since the authors have identified a region of origin for each strain, they should justify the relevance of the strains they selected relative to the regions from which the tested sera were obtained. For instance, is it likely that the individuals whose sera were tested had been exposed to viruses from Australia, China, and the USA, and if not, how would the phylogenetic divergence affect the neutralization studies? It would have been helpful if the authors had also compared neutralization between closely related viruses from the same regions to determine whether the sera were better at neutralizing certain spikes than others.
o Furthermore, the distance bar labelled 4 years is not described in the figure legend; describing it would help the reader follow along.
- Fig. 2C: The fold-change representation of the neutralization titers shows high variance, making it difficult to discern any overall result from this figure.
- Fig. 4B: The authors should have followed up this experiment with NTD chimeras, as changes in the NTD of Spike may have clarified why, for some individuals, the RBD chimera elicited higher neutralization responses than the full Spike.
o Furthermore, without statistics on this figure, how can the authors claim that “neutralization activity was rapidly eroded by antigenic evolution” when some of the sera showed no change? This claim is not convincing without statistical support.
● Are specific analyses performed using methods that are consistent with answering the specific question?
- Fig. 2: No statistics were performed on the neutralization titers. Is it common practice in the field to conduct only one replicate of these assays? The conclusions could be strengthened by multiple replicates (N=3) and appropriate statistical tests.
● Is there the appropriate technical expertise in the collection and analysis of data presented?
- Yes.
● Do analyses use the best-possible (most unambiguous) available methods quantified via appropriate statistical comparisons?
- Yes.
● Are controls or experimental foundations consistent with established findings in the field? A review that raises concerns regarding inconsistency with widely reproduced observations should list at least two examples in the literature of such results. Addressing this question may occasionally require a supplemental figure that, for example, re-graphs multi-axis data from the primary figure using established axes or gating strategies to demonstrate how results in this paper line up with established understandings. It should not be necessary to defend exactly why these may be different from established truths, although doing so may increase the impact of the study and discussion of discrepancies is an important aspect of scholarship.
- Yes.
2. Quality: Completeness (1–3 scale) SCORE = 1
● Does the collection of experiments and associated analysis of data support the proposed title- and abstract-level conclusions? Typically, the major (title- or abstract-level) conclusions are expected to be supported by at least two experimental systems.
- Yes, the conclusions are well supported by three experimental systems: evolutionary analysis, human sera neutralization assays, and chimeric spike neutralization analysis.
● Are there experiments or analyses that have not been performed but if ‘‘true’’ would disprove the conclusion (sometimes considered a fatal flaw in the study)? In some cases, a reviewer may propose an alternative conclusion and abstract that is clearly defensible with the experiments as presented, and one solution to ‘‘completeness’’ here should always be to temper an abstract or remove a conclusion and to discuss this alternative in the discussion section.
- There is no positive control for the neutralization assays. It would be beneficial to show that these assays perform as expected against known antibody/antigen combinations, giving the reader confidence in the results of the other neutralization assays.
3. Quality: Reproducibility (1–3 scale) SCORE = 2
● Figure by figure, were experiments repeated per a standard of 3 repeats or 5 mice per cohort, etc.?
- Repeats were not performed for the neutralization assays.
● Is there sufficient raw data presented to assess rigor of the analysis?
- Yes.
● Are methods for experimentation and analysis adequately outlined to permit reproducibility?
- Yes.
● If a ‘‘discovery’’ dataset is used, has a ‘‘validation’’ cohort been assessed and/or has the issue of false discovery been addressed?
- N/A.
4. Quality: Scholarship (1–4 scale but generally not the basis for acceptance or rejection) SCORE = 1.5
● Has the author cited and discussed the merits of the relevant data that would argue against their conclusion?
- Is it possible that the decrease seen in the older patients for the neutralization of historic viruses could be related to immune senescence or some immune amnesia-like phenotype from a different infection? This might be worth exploring in the discussion as it is an interesting data point.
● Has the author cited and/or discussed the important works that are consistent with their conclusion and that a reader should be especially familiar when considering the work?
- Yes, the authors did a wonderful job explaining that there is little work in this area and highlighting why they believe their work is novel.
● Specific (helpful) comments on grammar, diction, paper structure, or data presentation (e.g., change a graph style or color scheme) go in this section, but scores in this area should not be significant bases for decisions.
- Fig. 2: The box plot could perhaps be formatted differently; as shown, it seems to work against the authors’ conclusions. Also, were statistics performed on this box plot? If so, they should be shown, as they would make the plot more understandable in the context of the other data.
MORE SUBJECTIVE CRITERIA (IMPACT)
1. Impact: Novelty/Fundamental and Broad Interest (1–4 scale) SCORE = 2
● A score here should be accompanied by a statement delineating the most interesting and/or important conceptual finding(s), as they stand right now with the current scope of the paper. A ‘‘1’’ would be expected to be understood for the importance by a layperson but would also be of top interest (have lasting impact) on the field.
- This article is interesting and provoked a great deal of discussion in our journal club. It advances our current understanding of antigenic evasion by viruses (specifically coronaviruses) and of our immune system's ability to detect and neutralize historic coronaviruses. It also confirms that coronaviruses mutate largely in the receptor binding domain, rather than elsewhere, to evade the immune system.
- However, we have not awarded full points for novelty because, on some level, we think the results are to be expected: people who have been exposed to a virus likely have neutralizing antibodies against it, and people who have not likely do not. Because this takes up most of Figures 2 and 3, we believe it is a major conclusion of the paper.
- With that said, this remains foundational work for understanding human serum neutralization capacity for coronaviruses over time. A greater emphasis on the implications for durable anti-coronavirus immunity in the Discussion and elsewhere may strengthen the perception of novelty.
● How big of an advance would you consider the findings to be if fully supported but not extended? It would be appropriate to cite literature to provide context for evaluating the advance. However, great care must be taken to avoid exaggerating what is known comparing these findings to the current dogma (see Box 2). Citations (figure by figure) are essential here.
o N/A
2. Impact: Extensibility (1–4 or N/A scale) SCORE = N/A
● Has an initial result (e.g., of a paradigm in a cell line) been extended to be shown (or implicated) to be important in a bigger scheme (e.g., in animals or in a human cohort)?
● This criterion is only valuable as a scoring parameter if it is present, indicated by the N/A option if it simply doesn’t apply. The extent to which this is necessary for a result to be considered of value is important. It should be explicitly discussed by a reviewer why it would be required. What work (scope and expected time) and/or discussion would improve this score, and what would this improvement add to the conclusions of the study? Care should be taken to avoid casually suggesting experiments of great cost (e.g., ‘‘repeat a mouse-based experiment in humans’’) and difficulty that merely confirm but do not extend (see Bad Behaviors, Box 2).
o N/A