Adaptive optics can improve imaging performance by mitigating optical aberrations arising from both the sample and the system. However, measuring these aberrations typically requires careful use of wavefront sensors. The authors present NeAT (Neural fields for Adaptive optical Two-photon fluorescence microscopy), which extends a previous method for computational aberration correction known as CoCoA (Coordinate-based neural representations for Computational Adaptive optics, reference 27). Both NeAT and CoCoA solve three different but related problems: (1) estimation of aberrations, (2) computational correction of those aberrations (blind deconvolution), and (3) when corrective hardware such as an SLM (spatial light modulator) is available, adaptive optics correction of the aberration estimated in (1). NeAT and CoCoA are also similar in that both consider only static aberrations that can be corrected with a single corrective wavefront, and both use an implicit neural representation of the sample together with a Zernike polynomial representation of the aberrations.
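For readers unfamiliar with this class of methods, the following is a minimal sketch (my own, not the authors' code) of how a Zernike-parameterized pupil phase can be turned into a two-photon PSF for a forward model of this kind; the function names and normalization are assumptions for illustration only.

```python
import numpy as np
from numpy.fft import fft2, fftshift, ifftshift

def psf_from_zernike(coeffs, zernike_basis, pupil_mask):
    """coeffs: (K,) Zernike coefficients in radians; zernike_basis: (K, N, N)
    precomputed Zernike modes; pupil_mask: (N, N) binary aperture."""
    phase = np.tensordot(coeffs, zernike_basis, axes=1)   # aberrated wavefront (N, N)
    pupil = pupil_mask * np.exp(1j * phase)               # complex pupil function
    field = fftshift(fft2(ifftshift(pupil)))              # focal-plane field
    intensity = np.abs(field) ** 2                        # excitation intensity PSF
    two_photon_psf = intensity ** 2                       # two-photon signal ~ intensity squared
    return two_photon_psf / two_photon_psf.sum()          # normalize to unit sum
```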
While CoCoA was originally implemented for widefield fluorescence microscopy, NeAT extends the method to two-photon microscopy. NeAT makes several valuable improvements over CoCoA: estimation of a background term within the deconvolved structure, estimation of the misalignment between the adaptive optics element and the objective back focal plane (conjugation error) using phase diversity measurements, estimation of sample-independent system aberrations using a bead calibration measurement, and estimation of motion across the planes of the measured volume (outperforming a standard registration approach). All of these are common problems in two-photon microscopy, especially on typical commercial microscopes. Despite an incomplete discussion of potential artifacts and of the motivation for these new techniques relative to the original CoCoA (see major comments), the authors present compelling evidence of the utility of NeAT for in vivo functional imaging of densely labeled tissue on a commercial two-photon microscope, along with thorough evaluations of NeAT’s aberration estimation performance, reconstruction quality, and degradation under various difficult imaging conditions.
A. Motivation for the additional measurements and comparison with other aberration estimation methods are needed:
i. Background: NeAT introduces new measurements that are not directly part of the loss function used to improve the structure and aberration estimates. One of the original motivations of CoCoA was that no calibration measurements were required for aberration estimation and adaptive optics correction, and the core NeAT pipeline still allows for this calibration-free paradigm. However, NeAT introduces two new measurements that are necessary for the full NeAT pipeline. First, NeAT estimates an affine transformation of the aberration to account for any conjugation error between the back focal plane of the objective and the SLM (Fourier) plane; the experiments shown in Figure 3c acquire multiple images with known calibration aberrations (phase diversity measurements) to determine this affine transform. Second, the estimation of system aberrations requires an additional calibration measurement with beads. Presumably these phase diversity and bead calibration measurements could also be used directly to estimate the aberrated wavefront, especially since they are required for the full NeAT correction anyway.
ii. Suggestions: These new measurements invite the reader to ask why they are not used in the core loss function of NeAT (equation 3), and this question warrants a comparison against methods that do use such measurements more directly (see the sketch after item iii). A table contrasting NeAT with methods that more directly use phase diversity or bead calibration images (Feng et al., reference 11, and Johnson et al.) would clarify for the reader under what conditions each method is applicable or optimal, e.g. volume scanning rate, signal-to-noise ratio (SNR), sample density, sample thickness, and dynamic vs. static aberrations. It is worth noting that the authors have already included thorough evaluations of NeAT’s deconvolution performance (distinct from adaptive optics aberration correction performance) under various SNR conditions. While not necessary to demonstrate the utility of NeAT, it would also be very useful for readers contextualizing NeAT among other computational aberration estimation methods to include an experiment comparing NeAT against these other methods, adapted to two-photon microscopy, in terms of adaptive optics aberration correction (i.e. not deconvolution) under a known aberration (via correlation to a ground-truth unaberrated measurement).
iii. References: Two examples of prior work that use phase diversity measurements for aberration estimation followed by adaptive optics correction are Feng et al., https://doi.org/10.1126/sciadv.adg4671 (reference 11), whose loss function directly incorporates phase diversity measurements into the joint optimization of a neural representation of the sample and the aberration estimate, and Johnson et al., https://doi.org/10.1364/OPTICA.518559, who use a more classical approach to estimate aberrations from phase diversity measurements.
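To make the suggestion in item ii concrete, below is a minimal sketch of a direct phase-diversity fit. This is illustrative only: the function names, the use of scipy.optimize, and the assumption of a known sample estimate are mine, not the authors' or the cited works' implementations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def phase_diversity_residual(coeffs, diversity_coeffs, measured_imgs,
                             sample_est, psf_model):
    """Sum of squared residuals between measured phase-diversity images and
    images predicted from a current sample estimate and a candidate aberration.
    psf_model(coeffs) -> PSF, e.g. the Zernike-based sketch shown earlier."""
    resid = 0.0
    for d_coeffs, img in zip(diversity_coeffs, measured_imgs):
        psf = psf_model(coeffs + d_coeffs)               # known diversity added to candidate aberration
        pred = fftconvolve(sample_est, psf, mode="same")
        resid += np.sum((pred - img) ** 2)
    return resid

# Hypothetical usage: fit K Zernike coefficients from a few diversity images.
# result = minimize(phase_diversity_residual, x0=np.zeros(K),
#                   args=(div_coeffs, imgs, sample_est, psf_model),
#                   method="Powell")
```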
B. Evaluation of artifacts in the structure reconstruction is needed: NeAT reconstructs an estimate of the unaberrated sample structure by optimizing a neural representation of the sample. This reconstruction is a blind deconvolution problem, which is ill-posed. As noted in the manuscript, the ill-posed nature of the problem is mitigated through priors on the simulated point spread function and the choice of regularization terms, but the possibility of artifacts in the reconstructions remains. The experiments shown in Figure 2 assess the deconvolution performance of NeAT via Pearson correlation and structural similarity, yet these global metrics do not capture whether the estimated structure contains localized artifacts, especially artifacts that could affect inferred neural activity. To help users of NeAT’s blind deconvolution understand the potential artifacts that may be introduced in the deconvolved structure, it would be highly valuable to include some figures depicting the types (or lack) of artifacts that NeAT may produce.
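As one possible presentation, the global scores could be complemented with per-voxel maps that localize added or suppressed structure; a minimal sketch follows (the variable names and metric implementations are my own choices, not the authors').

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def artifact_report(recon, ground_truth):
    """recon, ground_truth: 3D stacks on a common intensity scale."""
    pearson = np.corrcoef(recon.ravel(), ground_truth.ravel())[0, 1]
    data_range = ground_truth.max() - ground_truth.min()
    ssim_score, ssim_map = ssim(ground_truth, recon,
                                data_range=data_range, full=True)
    residual = recon - ground_truth   # signed map: positive = structure added by the deconvolution
    return pearson, ssim_score, ssim_map, residual
```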
NeAT optimizes all correction parameters (e.g. motion or conjugation error) directly, except for the structure, which is parameterized as a neural field. Neural fields are often used to learn a continuous representation of a target structure, but interpolation is not the goal here. To justify this architectural choice to readers, it would be useful to include a discussion of why the neural field is beneficial in NeAT compared with simply optimizing the voxels of the reconstruction directly, e.g. whether this choice finds a better local minimum of the optimization or adversely affects the required voxel size or sampling rate.
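For clarity, the comparison I have in mind is between the two parameterizations sketched below (PyTorch-style, illustrative only; the layer sizes, grid size, and names are my assumptions, not the authors' architecture).

```python
import torch
import torch.nn as nn

# (a) Direct voxel parameterization: the reconstruction itself is the trainable tensor.
voxels = nn.Parameter(torch.zeros(32, 128, 128))

# (b) Neural-field parameterization: a small MLP maps (z, y, x) -> intensity,
#     queried at the same voxel centers, so any smoothness prior comes from the network.
field = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 1))

coords = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 32),
                                    torch.linspace(-1, 1, 128),
                                    torch.linspace(-1, 1, 128),
                                    indexing="ij"), dim=-1).reshape(-1, 3)
# In practice coordinates would be queried in batches; this evaluates the full grid at once.
recon_from_field = field(coords).reshape(32, 128, 128)   # same grid as (a)
```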
NeAT and CoCoA use a structural similarity loss between the simulated image and the measured image stack, and NeAT adds a relative mean-squared-error term. For readers replicating these methods, it would be useful to include an ablation study comparing simpler loss functions (even a plain mean-squared-error loss) to demonstrate why the more complicated loss is necessary.
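The ablation could be as simple as the sketch below, assuming normalized (N, C, H, W) tensors; the pytorch_msssim package and the weighting term are my assumptions, not the authors' implementation.

```python
import torch
from pytorch_msssim import ssim   # assumed third-party differentiable SSIM; any SSIM implementation would do

def loss_plain_mse(pred, meas):
    return torch.mean((pred - meas) ** 2)

def loss_ssim_plus_relative_mse(pred, meas, alpha=1.0, eps=1e-6):
    """SSIM term plus a relative (normalized) MSE term, roughly in the spirit
    of the NeAT loss; the weighting alpha is an assumption."""
    ssim_term = 1.0 - ssim(pred, meas, data_range=1.0)
    rel_mse = torch.mean((pred - meas) ** 2) / (torch.mean(meas ** 2) + eps)
    return ssim_term + alpha * rel_mse
```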
NeAT demonstrates in vivo functional imaging (Figure 5f) by using a single volume measurement to estimate a corrective wavefront, which is then applied to the volumes measured at each timepoint. This is introduced as “real-time” aberration correction, but NeAT estimates a static aberration and the estimation process is run once from a single volume measurement. A single NeAT optimization can require minutes to run, so there is a significant lag between wavefront estimation and aberration correction. While this can still be valuable for in vivo functional imaging, “real-time” may not be the right term to describe this process, and it would be clearer for readers to describe this simply as “functional imaging.”
NeAT demonstrates that including motion estimation during the aberration estimation process leads to more accurate estimates of the aberration (Figure 2h-i). However, it is not clear whether the volumes from Figure 3 onwards use motion estimation only to improve the aberration estimate or also to motion-correct the measured stacks. If the latter is true, it is not obvious how motion is estimated for new volumes over time: does the full in vivo experiment involve running a separate NeAT optimization to extract the motion correction at each timepoint? Some additional explanation of this motion estimation process in the methods section would clarify this point for the reader.
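Part of the question is whether, for subsequent volumes, a standard per-plane registration against the NeAT-corrected reference would suffice; a minimal sketch of that baseline is below (the scikit-image and scipy functions exist as written, but the parameters and usage are my assumptions).

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_planes(volume, reference):
    """Estimate and apply a lateral shift for each z-plane of `volume`
    against the matching plane of a motion-corrected `reference`."""
    corrected = np.empty_like(volume)
    for z in range(volume.shape[0]):
        est_shift, _, _ = phase_cross_correlation(reference[z], volume[z],
                                                  upsample_factor=10)
        corrected[z] = nd_shift(volume[z], est_shift)
    return corrected
```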
NeAT introduces a learned baseline parameter, a low-rank estimate of background that is added to the convolved structure during simulation of the measurement. This baseline term is meant to account for the signal decrease along z, but it is purely additive. Two-photon signal loss due to scattering or absorption should decrease the total collected light as well as add background, and it is not obvious from the explanation how the addition of a smooth background captures this. Consider adding explanatory text in the methods section to clarify how the additive baseline term models signal loss due to scattering.
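To make the distinction explicit, compare the two toy forward models below (illustrative only; the exponential attenuation form and the scattering length are my assumptions, not a claim about the authors' model).

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_additive(structure, psf, baseline):
    """baseline: smooth (e.g. low-rank) background with the same shape as the stack."""
    return fftconvolve(structure, psf, mode="same") + baseline

def forward_attenuated(structure, psf, z_um, scattering_length_um=200.0):
    """Depth-dependent signal loss applied multiplicatively to the convolved
    stack; the exp(-2z/l_s) form for two-photon excitation is my assumption."""
    attenuation = np.exp(-2.0 * np.asarray(z_um) / scattering_length_um)[:, None, None]
    return attenuation * fftconvolve(structure, psf, mode="same")
```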
NeAT, like CoCoA, uses a two-stage optimization process: the first stage only approximately optimizes the neural representation of the sample structure, and the second stage refines that representation while also optimizing the estimated aberrations. NeAT introduces a low-pass filter of the measured stack in the first stage’s loss function, which was not part of the original CoCoA. For readers interested in implementing their own neural representations for microscopy, it would be useful to compare the new loss with the low-pass filter against the original loss function run for fewer iterations in the first stage.
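For concreteness, the first-stage target could be produced as in the sketch below (the choice of Gaussian filter, kernel size, and sigma are illustrative assumptions; the authors' filter may differ).

```python
import torch
import torch.nn.functional as F

def gaussian_lowpass(stack, sigma=2.0, ksize=9):
    """Separable Gaussian blur applied to each plane of a (D, H, W) stack."""
    ax = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, -1)                            # 1 x ksize kernel
    x = stack.unsqueeze(1)                                         # (D, 1, H, W)
    x = F.conv2d(x, g, padding=(0, ksize // 2))                    # blur along width
    x = F.conv2d(x, g.transpose(2, 3), padding=(ksize // 2, 0))    # blur along height
    return x.squeeze(1)
```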
Figures 1 and 2 demonstrate NeAT’s ability to recover the sample structure (deconvolve the sample) from only the measured image stack, labeling these deconvolutions as both “s” and “structure.” The remaining figures and experiments shift focus to the effect of adaptive optics correction of the estimated aberrations rather than sample structure deconvolution, and images are labeled only as “AO” or “No AO.” For clarity, it would be useful to label which images are deconvolved sample structure from the neural representation and which are measurements.
Equation 1 describes the model of the aberrated point spread function (PSF) without the estimated conjugation error correction, yet the conjugation error correction transformation H is shown in Figure 1. It may help the reader to reference, when equation 1 is introduced, the explanation of H that appears later in the text.
Figure 2l shows very faint points around the mean points that are hard to see. For clarity, consider using darker colors and explaining these points in the legend.
The numbers in Figure 5f label regions in Figure 5e, but this is not obvious from the arrangement of the panels. To make this connection clearer to the reader, it may help to use the same color for the labels of the numbered regions in panel f and panel e (or ideally arrange the figure so that e and f align in a similar way to a and c).
Figures 5p and 5q have the same issue.
On page 10, "represented the wavefront distortion at the objective BPF" uses "BPF" instead of "BFP" for "back focal plane."
Expertise and LLM usage disclaimer: I am a graduate student in computational optics. LLMs were not used in any part of this review.
The authors declare that they have no competing interests.