
PREreview of Deep learning-based predictions of gene perturbation effects do not yet outperform simple linear methods

DOI: 10.5281/zenodo.14019384
License: CC BY 4.0

In this preprint (v4), Ahlmann-Eltze, Huber, and Anders investigate whether sophisticated nonlinear machine learning (ML) models (“foundation models”) pre-trained on single-cell RNA sequencing (scRNA-seq) data can predict the effects of gene expression perturbations (e.g. CRISPR knockdown) on transcript levels. Such models were fine-tuned for this task and then compared to far simpler linear null models. The null models for predicting unseen double perturbation effects were (i) predicting no effect (i.e. the perturbation does not change expression) and (ii) an additive model in which the effect of a double perturbation is the sum of the effects of the two single perturbations. For unseen single perturbations, the authors developed a PCA-based linear regression that predicts perturbation effects from the correlation structure of the training data. They find that these null models generally outperform the fine-tuned nonlinear models for predicting both single and double perturbation effects. These results are summarized by the title: “Deep learning-based predictions of gene perturbation effects do not yet outperform simple linear methods”.
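
For concreteness, the following is a minimal sketch of the two double-perturbation null models as we understand them; the function names and the pseudobulk data layout are our own illustration, not the authors' code.

```python
import numpy as np

# `pseudobulk` maps a condition label (e.g. "control", "geneA", "geneA+geneB")
# to the vector of mean expression values across all cells in that condition.

def no_change_prediction(pseudobulk, control="control"):
    """Null model (i): the perturbation does not change expression."""
    return np.asarray(pseudobulk[control])

def additive_prediction(pseudobulk, gene_a, gene_b, control="control"):
    """Null model (ii): the double-perturbation effect is the sum of the single effects."""
    ctrl = np.asarray(pseudobulk[control])
    effect_a = np.asarray(pseudobulk[gene_a]) - ctrl
    effect_b = np.asarray(pseudobulk[gene_b]) - ctrl
    return ctrl + effect_a + effect_b
```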

We have several high-level concerns about this work.

(I) The authors use different linear models for predicting single and double perturbation effects on mRNA levels. In particular, the additive null model, which is the only substantive null model considered for double perturbations, can be applied only to double perturbations, and only when the corresponding single perturbations have been measured separately. For this reason the authors devise a distinct model, based on PCA, for predicting single perturbation effects. This PCA-based model can also predict the effects of double perturbations, but the performance of such predictions is not reported. Reporting it would allow a far more apt comparison, since the various nonlinear models considered can be applied to both single and double perturbations; we sketch one way this could be done below.
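
To make the suggested comparison concrete, the following sketch shows one way a PCA-plus-linear-regression baseline of the kind described could also be queried on unseen double perturbations. We do not know the authors' exact implementation; the choice of gene embedding (a PCA of the training expression matrix) and the additive combination of the two gene embeddings are our assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_pca_baseline(train_expr, perturbed_gene_idx, train_effects, n_components=10):
    """Fit a linear baseline mapping a gene embedding to a pseudobulk perturbation effect.

    train_expr:         (n_conditions_or_cells, n_genes) training expression matrix
    perturbed_gene_idx: list of perturbed-gene column indices, one per training perturbation
    train_effects:      (n_train_perturbations, n_genes) pseudobulk expression changes
    """
    # Embed every gene via a PCA of the training expression matrix (genes as samples),
    # so that genes never perturbed in training still receive an embedding.
    pca = PCA(n_components=n_components).fit(train_expr.T)
    gene_emb = pca.transform(train_expr.T)              # (n_genes, n_components)
    reg = Ridge(alpha=1.0).fit(gene_emb[perturbed_gene_idx], train_effects)
    return gene_emb, reg

def predict_single(gene_emb, reg, idx):
    """Predict the effect of perturbing a single (possibly unseen) gene."""
    return reg.predict(gene_emb[[idx]])[0]

def predict_double(gene_emb, reg, idx_a, idx_b):
    """One plausible extension to doubles: feed the sum of the two gene embeddings."""
    return reg.predict(gene_emb[[idx_a]] + gene_emb[[idx_b]])[0]
```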

(II) The fine-tuning procedure applied to scFoundation, scGPT, and Gears is not adequately described; we could not reproduce it from the information given. We have two specific concerns here. (a) The linear models operate on pseudobulked data, i.e. per-condition averages of expression across cells. The methods section does not make clear whether the ML models such as scFoundation were asked to predict these per-condition averages or per-cell expression, and some plots comparing the ML models to the null models appear to show predictions for every cell rather than for every condition (e.g. Fig 1C). If the deep learning models are indeed trained in such a fashion, then identical inputs (perturbations) are implicitly expected to produce different outputs (expression levels), which a deterministic, non-probabilistic neural network cannot do; training against such targets can lead to poor performance. (b) Relatedly, does the single-cell data have any value over bulk data here? For example, none of the models are tested for their capacity to predict the variance in RNA counts across cells within the same condition. Perhaps training on single-cell data is expected to enable the models to learn the correlations needed to predict perturbation effects; if so, this point should be made explicit.
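
For reference, pseudobulking in the sense used here is simply per-condition averaging of per-cell profiles; a minimal sketch (variable names are illustrative, not from the manuscript):

```python
import pandas as pd

def pseudobulk(cell_expr: pd.DataFrame, condition: pd.Series) -> pd.DataFrame:
    """Average a cells-by-genes expression matrix within each perturbation condition."""
    return cell_expr.groupby(condition).mean()
```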

(III) Very little of the fine-tuning procedure is described. Casual statements like “we limited the fine-tuning time to three days” are insufficient to understand or reproduce the work. How was this criterion chosen? What learning rate was used, and which learning rates were explored (using a held-out validation set)? How were batches constructed? Was an early stopping criterion applied? Without such crucial details, it is difficult to evaluate the work; indeed, it is entirely possible that the reported results reflect overfitting of the nonlinear models.
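
To illustrate the level of detail we have in mind, here is a hypothetical example of the kind of fine-tuning specification that would make the procedure reproducible; all values are placeholders rather than numbers taken from the manuscript.

```python
# Hypothetical fine-tuning specification; every value below is a placeholder,
# not a number reported in the manuscript.
finetune_config = {
    "learning_rate": 1e-4,                 # value used, plus the range explored
    "lr_selection": "held-out validation set",
    "batch_size": 64,
    "batch_construction": "cells sampled across conditions, or per-condition pseudobulks",
    "max_wall_clock_hours": 72,            # the stated three-day limit
    "early_stopping": {"metric": "validation loss", "patience": 5},
    "random_seeds": [0, 1, 2],
    "hardware": "1x A100 GPU (for example)",
}
```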

(IV) Does the pre-training of the foundation models make them appropriate for the tasks at hand? The authors should describe how the foundation models (and Gears) were trained and whether the effects of genetic perturbations were included in the pre-training data. If the pre-training data included genetic perturbations or some other directly relevant source of data, then it seems important to demonstrate that fine-tuning improves model predictions of unseen perturbations (i.e. generalization).

Competing interests

The authors declare that they have no competing interests.