PREreview of Human coronaviruses disassemble processing bodies
- Published
- DOI
- 10.5281/zenodo.4768659
- License
- CC BY 4.0
We, the students of MICI5029/5049, a Graduate Level Molecular Pathogenesis Journal Club at Dalhousie University in Halifax, NS, Canada, hereby submit a review of the following bioRxiv preprint:
Human coronaviruses disassemble processing bodies. Carolyn-Ann Robinson, Mariel Kleer, Rory P. Mulloy, Elizabeth L. Castle, Bre Q. Boudreau and Jennifer A. Corcoran. bioRxiv 2020.11.08.372995; doi: https://doi.org/10.1101/2020.11.08.372995
We will adhere to the Universal Principled (UP) Review guidelines proposed in:
Universal Principled Review: A Community-Driven Method to Improve Peer Review. Krummel M, Blish C, Kuhns M, Cadwell K, Oberst A, Goldrath A, Ansel KM, Chi H, O'Connell R, Wherry EJ, Pepper M; Future Immunology Consortium. Cell. 2019 Dec 12;179(7):1441-1445. doi: 10.1016/j.cell.2019.11.029.
SUMMARY:
Robinson et al. investigated the effects of human coronavirus OC43 infection on RNA granules known as processing bodies (PBs). They demonstrated that HCoV-OC43 infection reduces the number of visible PBs in human umbilical vein endothelial cells (HUVECs). They observed that overexpression of the PB-resident mRNA-decapping enzyme Dcp1a prior to HCoV-OC43 infection restricts viral replication in HUVECs through an unknown mechanism. Using plasmid vectors expressing individual SARS-CoV-2 genes, the authors identified several candidate genes that caused PB loss. They screened the same collection of viral genes for stabilization of a labile AU-rich element (ARE)-containing luciferase reporter, which often correlates with PB loss. They identified a non-overlapping set of hits in these two screens. Overall, this manuscript reports the novel discovery that HCoV-OC43 infection induces PB disassembly, possibly through the integrated effects of several virus-encoded proteins.
OVERALL ASSESSMENT:
STRENGTHS:
The authors provided evidence that HCoV-OC43 infection correlates with loss of visible PBs. This is the first report of PB modulation by an HCoV. The identification of candidate SARS-CoV-2 genes that modulate PBs provides a useful starting point for future mechanistic investigations. The authors discussed the findings and the limitations of their study in detail and with appropriate support from the literature.
WEAKNESSES:
The manuscript could be improved by providing a stronger rationale for the selection of hits and for each set of experiments as the reader moves through the Results section. There was concern that the antiviral effects of Dcp1a overexpression could be due to Dcp1a itself rather than a PB-specific effect. Further experiments are required to fully support the proposed model (see below). There was also concern that while the data showed certain correlations, causality was not properly established in support of the model.
DETAILED U.P. ASSESSMENT:
OBJECTIVE CRITERIA (QUALITY)
1. Quality: Experiments (1–3 scale) SCORE = 2
· Figure by figure, do experiments, as performed, have the proper controls? (While discussing proper controls of the experiments, we also use this opportunity to discuss the rationale and approaches of each experiment)
Figure 1:
· Fig. 1B: The bars appear to represent the mean rather than the SEM stated in the figure legend. Please clarify.
· Fig 1C: Inclusion of a loading control such as actin would strengthen these western blots and account for apparent changes in protein expression caused by differences in loading. The total protein stain mentioned in the Materials and Methods could also be used effectively here.
· In Fig 1A, ectopic expression of the E protein appeared to result in more PBs, whereas this change was shown as nonsignificant compared to the control in Fig 1B. Is the image in Fig. 1A representative? We think adding the SEM to Fig 1B, together with some author commentary on the proteins that led to increased PBs (especially E and nsp13), would help address the stringency of the assay.
· In the Results, the authors mention that six SARS-CoV-2 proteins mediate PB disassembly. However, only two of these met the standard of statistical significance in Fig 1B (N in the thresholded group and nsp11 in the unthresholded group). We wonder what criteria were used to determine whether a protein was a significant hit in Fig 1B. The authors should describe these criteria clearly in the Results section.
Figure 2:
· Overall, these ARE-mRNA reporter assays are well described and appear to be properly controlled. Minor points:
o In the Results section, the statement ‘Of these ARE-mRNA regulating SARS-CoV-2 gene products’ seems a strong conclusion based solely on the luciferase experiment, without confirming whether those proteins can regulate endogenous ARE-mRNAs. Could the authors directly test the steady-state levels and stability of some endogenous ARE-mRNAs?
o In the case of nsp1, is it fair to claim that the ARE-mRNA luciferase assay screen confirms the reduction in visible PBs observed in Fig 1B? This statement would be much better supported by data on endogenous ARE-mRNAs.
o We noticed that the authors are currently validating some of the SARS-CoV-2 proteins in endothelial cells, which will provide additional opportunities to measure the accumulation and turnover of endogenous ARE-mRNAs. We therefore recommend that the authors add those data once available to strengthen their conclusions, or tone down the conclusions based on the data included in the current preprint.
Figure 3:
· No issues with this dataset.
Figures 1-3:
· There was considerable discussion and confusion about the rationale for moving forward with further investigation of the N and nsp14 hits in HUVECs rather than other candidate genes. The authors should make this rationale very clear to avoid such confusion.
o Based on the results of the screen, the most prominent hit would appear to be ORF7b, which was confirmed both by IF (Fig 1B) and by the luciferase assay (Fig 2B); however, it was not further investigated in HUVECs. The N protein was shown to mediate PB disassembly by IF and was the top hit in the thresholded group, but showed no elevation of the ARE-containing FLuc reporter. Conversely, nsp14 was not one of the six hits from Fig 1B but showed significant elevation of the ARE-containing FLuc reporter, and it was only in the later validation in HUVECs that nsp14 was shown to cause PB disassembly.
In any case, further justification of these choices would be welcome, including how the results of the two assays (IF and luciferase) were weighted in the decision. We also think the authors should provide some description of the known biological functions of these hits in the Results if this information was relevant to their decisions. We realize that some of this information is in the Discussion, but it would be much more helpful for readers' understanding of the data if some of it appeared in the Results section.
Figure 4:
· No major issues with experiment setup or controls.
· One minor point is that in Fig 4C, the error bars for cytokine transcript levels are very large. It appears that a single replicate (especially for IL-8) is pulling the average up. We realize this could be a genuine result; however, these data could be strengthened by the inclusion of a few more biological replicates. Units should also be shown on the y-axis label.
Figure 5:
· The purpose of overexpressing GFP-Dcp1a was to induce PB formation before viral infection. Could the authors quantify this effect to show that there is a higher number of PBs in GFP-Dcp1a-expressing cells compared to the control GFP-expressing cells?
· For Fig 5C, the lentivirus inoculum should be described in terms of MOI so the reader understands how many infectious lentivirus particles are delivered to target cells at the 1x and 5x doses. There was also some discussion of potential negative effects of higher-MOI lentivirus transduction on coronavirus replication. To mitigate this concern, the authors could include an additional control using a high-MOI transduction with the GFP lentivirus. Also, the y-axis label is confusing.
· Are specific analyses performed using methods that are consistent with answering the specific question?
o Yes
· Is there the appropriate technical expertise in the collection and analysis of data presented?
o Yes
· Do analyses use the best-possible (most unambiguous) available methods quantified via appropriate statistical comparisons?
o Yes
o One factor that may affect the results of the statistical analysis is the inherent false discovery rate associated with making so many comparisons in these screens. Careful attention to these statistical tests, for example a multiple-comparison correction of the kind sketched after this list, is warranted.
· Are controls or experimental foundations consistent with established findings in the field? A review that raises concerns regarding inconsistency with widely reproduced observations should list at least two examples in the literature of such results. Addressing this question may occasionally require a supplemental figure that, for example, re-graphs multi-axis data from the primary figure using established axes or gating strategies to demonstrate how results in this paper line up with established understandings. It should not be necessary to defend exactly why these may be different from established truths, although doing so may increase the impact of the study and discussion of discrepancies is an important aspect of scholarship.
o The experimental foundations of this study are solid.
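A minimal sketch of the kind of multiple-comparison correction we have in mind, assuming the screen yields one p-value per viral protein (our own illustration in Python, not the authors' analysis pipeline; the p-values below are placeholders, not values from the manuscript):

# Benjamini-Hochberg procedure: call hits at a controlled false discovery
# rate rather than at an uncorrected p < 0.05 threshold.
def benjamini_hochberg(pvalues, fdr=0.05):
    """Return indices of hypotheses rejected at the given FDR."""
    m = len(pvalues)
    # Rank hypotheses by ascending p-value, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * fdr:
            max_k = rank  # largest rank whose p-value passes its threshold
    # Reject all hypotheses up to and including that rank.
    return sorted(order[:max_k])

# Hypothetical example: p-values for ten screened viral proteins.
example_p = [0.001, 0.20, 0.03, 0.60, 0.008, 0.45, 0.04, 0.90, 0.15, 0.02]
print("Hits at 5% FDR (indices):", benjamini_hochberg(example_p))

Reporting hits in this way would make the multiple-testing burden of the screens explicit to readers.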
2. Quality: Completeness (1–3 scale) SCORE = 2
· Does the collection of experiments and associated analysis of data support the proposed title- and abstract-level conclusions? Typically, the major (title- or abstract-level) conclusions are expected to be supported by at least two experimental systems.
o The authors made a novel discovery of human CoV-mediated PB disassembly, which is well supported by their results. However, more evidence is required to support the statement that stimulating PB formation prior to OC43 infection restricts viral replication. Specifically, the correlation between overexpression of Dcp1a and PB formation was not clearly demonstrated in the study. In addition, these experiments do not distinguish between a potential antiviral effect of Dcp1a itself and a potential antiviral effect of PBs. We suggest the authors consider overexpressing other PB proteins such as DDX6, along with complementary approaches such as silencing expression of Xrn1, to help assess whether PBs are antiviral. Moreover, the statement that ‘disassembly of PBs enhances translation of proinflammatory cytokine mRNAs’ in the abstract was not fully supported by the data in the study. Perhaps the authors could use a cytokine ELISA to demonstrate enhanced translation of cytokines such as IL-6 and IL-8. Alternatively, this statement could be removed from the abstract.
· Are there experiments or analyses that have not been performed but if ‘‘true’’ would disprove the conclusion (sometimes considered a fatal flaw in the study)? In some cases, a reviewer may propose an alternative conclusion and abstract that is clearly defensible with the experiments as presented, and one solution to ‘‘completeness’’ here should always be to temper an abstract or remove a conclusion and to discuss this alternative in the discussion section.
o No
3. Quality: Reproducibility (1–3 scale) SCORE = 1.5
· Figure by figure, were experiments repeated per a standard of 3 repeats or 5 mice per cohort, etc.?
o There are inconsistencies in the N numbers in a few figures:
§ Fig 3B and Fig 5B display inconsistent N numbers between treatment groups. Several experiments are supported by only 2 biological replicates.
· Is there sufficient raw data presented to assess rigor of the analysis?
o We think that providing representative IF images of the Strep-tag staining for each viral protein, as well as the raw data of the luciferase assay, as supplementary material would be very helpful for readers to understand the rationale behind Figs 1-3.
· Are methods for experimentation and analysis adequately outlined to permit reproducibility?
o Yes, with one exception. The duration of selection and the choice of antibiotic after lentivirus transduction for GFP-Dcp1a overexpression in HUVECs were not clearly stated in the legend for Figure 5.
· If a ‘‘discovery’’ dataset is used, has a ‘‘validation’’ cohort been assessed and/or has the issue of false discovery been addressed?
o N/A
4. Quality: Scholarship (1–4 scale but generally not the basis for acceptance or rejection) SCORE = 2
· Has the author cited and discussed the merits of the relevant data that would argue against their conclusion?
o The authors provided a very informative discussion, including the limitations of the current study, speculations based on their data and published work, and their ongoing research aimed at further investigating the findings of the current preprint.
· Has the author cited and/or discussed the important works that are consistent with their conclusion and that a reader should be especially familiar when considering the work?
o The authors paid close attention to the literature throughout.
· Specific (helpful) comments on grammar, diction, paper structure, or data presentation (e.g., change a graph style or color scheme) go in this section, but scores in this area should not be significant bases for decisions.
o There is room for improvement in grammar and writing to enhance readability.
o The authors may consider adding a description of the hits, and the rationale for selecting certain hits for further study, to the corresponding Results section. We believe this will help readers better understand these choices.
o The legend of Fig 3A states ‘Strep tag shown in red’, but it is actually shown in green.
MORE SUBJECTIVE CRITERIA (IMPACT)
Impact: Novelty/Fundamental and Broad Interest (1–4 scale) SCORE = 2.5
· A score here should be accompanied by a statement delineating the most interesting and/or important conceptual finding(s), as they stand right now with the current scope of the paper. A ‘‘1’’ would be expected to be understood for the importance by a layperson but would also be of top interest (have lasting impact) on the field.
o The authors clearly made a novel discovery of HCoV-mediated PB disassembly, which contributes to our understanding of the human CoV-host interaction.
· How big of an advance would you consider the findings to be if fully supported but not extended? It would be appropriate to cite literature to provide context for evaluating the advance. However, great care must be taken to avoid exaggerating what is known comparing these findings to the current dogma (see Box 2). Citations (figure by figure) are essential here.
o The authors included a model proposing that HCoV replication is restricted by PBs and that the virus counters this restriction by inducing PB disassembly, which results in inflammatory cytokine production. If fully supported by the data, this model would be important, as it would reveal a novel role for PBs in the host antiviral response to human CoV infection. However, based on the data provided, we think more work needs to be done to support the proposed model. The evidence in the current study is not sufficient to demonstrate a direct link between PB formation and the antiviral effects, or between virus-induced PB disassembly and proinflammatory cytokine levels.
o The identification of candidate SARS-CoV-2 genes for virus-induced PB disassembly is useful, as it could lead to the identification of mechanisms by which SARS-CoV-2 manipulates the host upon infection. However, the incomplete characterization of these genes could be problematic. We believe that fully characterizing one PB-disrupting CoV protein for both PB modulation and ARE-mRNA turnover would greatly strengthen the conclusions.
Impact: Extensibility (1–4 or N/A scale) SCORE = N/A
· Has an initial result (e.g., of a paradigm in a cell line) been extended to be shown (or implicated) to be important in a bigger scheme (e.g., in animals or in a human cohort)?
· This criterion is only valuable as a scoring parameter if it is present, indicated by the N/A option if it simply doesn’t apply. The extent to which this is necessary for a result to be considered of value is important. It should be explicitly discussed by a reviewer why it would be required. What work (scope and expected time) and/or discussion would improve this score, and what would this improvement add to the conclusions of the study? Care should be taken to avoid casually suggesting experiments of great cost (e.g., ‘‘repeat a mouse-based experiment in humans’’) and difficulty that merely confirm but do not extend (see Bad Behaviors, Box 2).