Multiphoton microscopy (MPM) is a powerful imaging tool that has been a critical enabler for live tissue imaging. However, since most multiphoton microscopy platforms rely on point scanning, there is an inherent trade-off between acquisition time, field of view (FOV), phototoxicity, and image quality, often resulting in noisy measurements when fast, large FOV, and/or gentle imaging is needed. Deep learning could be used to denoise multiphoton microscopy measurements, but these algorithms can be prone to hallucination, which can be disastrous for medical and scientific applications. We propose a method to simultaneously denoise and predict pixel-wise uncertainty for multiphoton imaging measurements, improving algorithm trustworthiness and providing statistical guarantees for the deep learning predictions. Furthermore, we propose to leverage this learned, pixel-wise uncertainty to drive an adaptive acquisition technique that rescans only the most uncertain regions of a sample. We demonstrate our method on experimental noisy MPM measurements of human endometrium tissues, showing that we can maintain fine features and outperform other denoising methods while predicting uncertainty at each pixel. Finally, with our adaptive acquisition technique, we demonstrate a 120× reduction in acquisition time and total light dose while successfully recovering fine features in the sample. We are the first to demonstrate distribution-free uncertainty quantification for a denoising task with real experimental data and the first to propose adaptive acquisition based on reconstruction uncertainty.
Intro to Multiphoton Microscopy
Multiphoton microscopy (MPM) is a form of laser-scanning microscopy based on nonlinear interactions between ultrafast laser pulses and biological tissues. Since its first demonstration decades ago, MPM has become the imaging technique of choice for non-invasive imaging of thick or living samples. Owing to its unique combination of imaging depth and subcellular resolution, MPM has been used extensively to measure calcium dynamics deep within scattering mouse brains in neuroscience and to characterize multicellular dynamics in immunology and cancer studies, making it an increasingly popular tool for tissue and cell microscopy in these fields.
Proposed Method
Uncertainty-based Adaptive Imaging: A noisy measurement is acquired with a scanning multiphoton microscope (MPM) and passed into a deep learning model that predicts a denoised image and its associated pixel-wise uncertainty. Subsequently, the top N uncertain pixels are selected for a rescan, obtaining more measurements at only the uncertain regions. As more adaptive measurements are taken, the deep learning model predicts a denoised image with lower uncertainty. Scan duration and power are minimized, limiting sample damage while maintaining high confidence in the model prediction.
Deep learning (DL) based methods have shown exciting results for denoising extremely noisy images in microscopy; however, they can still produce hallucinations. To counter this, uncertainty quantification techniques can help catch model hallucinations and improve the robustness of deep learning methods. We demonstrate distribution-free uncertainty quantification for MPM denoising and propose an adaptive microscopy imaging pipeline informed by that uncertainty. The pipeline leverages the learned uncertainty to drive adaptive acquisition: we capture additional measurements only at the most uncertain regions of the sample rather than rescanning the whole sample.
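The core of the rescan step is selecting the N most uncertain pixels from the predicted uncertainty map. A minimal sketch of that selection, assuming the uncertainty map is a NumPy array (the function name `select_rescan_mask` is hypothetical, not from the paper's code):

```python
import numpy as np

def select_rescan_mask(uncertainty: np.ndarray, n_pixels: int) -> np.ndarray:
    """Return a boolean mask marking the n_pixels most uncertain locations."""
    flat = uncertainty.ravel()
    # Indices of the top-N uncertainty values (unordered within the top set).
    top = np.argpartition(flat, -n_pixels)[-n_pixels:]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[top] = True
    return mask.reshape(uncertainty.shape)

# Toy example: a 4x4 uncertainty map; rescan only the 4 most uncertain pixels.
u = np.arange(16, dtype=float).reshape(4, 4)
mask = select_rescan_mask(u, 4)   # True only where uncertainty is highest
```

The resulting mask would drive the scanner's acquisition pattern, so only the flagged pixels receive additional laser dwell time.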
Denoising Results
We evaluated our fine-tuned NAFNet model with learned uncertainty against BM3D (a classical method), Noise2Self (a self-supervised DL method), and pre-trained NAFNet (a supervised DL method) for single-image denoising. Our method, fine-tuned on our SHG dataset, outperforms the other methods in terms of MSE and SSIM. In the region highlighted by the green box, our model recovers fine structures present in the ground truth that BM3D and the pre-trained NAFNet cannot. Since leveraging multiple image measurements can enhance a model's overall performance, we next compare denoising performance against several multi-frame denoisers, choosing VBM4D (a classical method) and FastDVDNet (a DL method) as benchmarks for denoising sequences of frames. As expected, all the multi-image techniques outperform their single-image counterparts.
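The two metrics used above are standard. As a reference for how they would be computed, here is a minimal sketch: an exact MSE and a simplified single-window SSIM (the standard SSIM, e.g. `skimage.metrics.structural_similarity`, averages this statistic over local windows; this global variant is for illustration only):

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified SSIM computed over the whole image as one window."""
    c1 = (0.01 * data_range) ** 2    # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2)))

# Identical images: zero error, perfect similarity.
img = np.random.default_rng(0).random((16, 16))
noisy = img + 0.1                    # uniform offset as a trivial degradation
```

Lower MSE and higher SSIM indicate a denoised image closer to the ground truth.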
Uncertainty Quantification
Increasing measurements reduces uncertainty: Results of single-image, three-image, and five-image denoising, showing the image prediction and predicted uncertainty. As the number of measurements increases, the predicted image more closely matches the ground truth, and the pixel-wise uncertainty decreases.
With more measurements, the predicted uncertainty of the network decreases, demonstrating increased confidence in the predicted image. We determined the display threshold for all three uncertainty predictions by choosing the uncertainty interval corresponding to the top 5% most uncertain pixels. Red regions indicate pixels with larger uncertainty, whereas blue regions indicate lower uncertainty intervals. More importantly, even in regions where fine structures are present, the uncertainty decreases. This finding affirms that providing more measurements to our model not only improves denoising performance but also decreases the uncertainty of the denoised prediction.
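The top-5% display threshold described above amounts to taking the 95th percentile of the pixel-wise interval widths. A minimal sketch, assuming the uncertainty map is a NumPy array (the helper name `display_threshold` is illustrative):

```python
import numpy as np

def display_threshold(uncertainty: np.ndarray, top_frac: float = 0.05) -> float:
    """Interval width above which a pixel falls in the top-5% most uncertain."""
    return float(np.quantile(uncertainty, 1.0 - top_frac))

# Toy uncertainty map with 100 distinct interval widths.
u = np.arange(100, dtype=float).reshape(10, 10)
t = display_threshold(u)
red = u >= t          # pixels that would be rendered red (most uncertain)
blue = ~red           # remaining pixels rendered toward blue
```

Because the same quantile rule is applied to all three predictions, the red/blue colormaps are directly comparable across the single-, three-, and five-image results.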
Learned Adaptive Acquisition
To evaluate our uncertainty-informed adaptive acquisition, we compared denoising and uncertainty performance across a range of uncertainty thresholds. After a measurement is denoised, a new acquisition pattern is chosen based on a user-defined uncertainty threshold. This is repeated for four subsequent acquisitions, each time acquiring new measurements only in the areas of the sample that are too uncertain (i.e., above the uncertainty threshold). Figure 5 shows the results of this sweep for a representative sample from our test set. When the uncertainty threshold is lower (more pixels are rescanned), performance (measured by MSE, SSIM, and average uncertainty) is better, but the total scanning duration is longer. When the threshold is higher and fewer than 40% of the pixels are rescanned, performance drops and fewer features are successfully recovered. For this sample, we found that an uncertainty threshold corresponding to rescanning 57% of the pixels minimizes total time and light dose while maintaining denoising performance and successfully recovering fine features within the sample.
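The sweep logic can be sketched as a small simulation. This is not the paper's pipeline: it replaces the learned uncertainty with the toy assumption that per-pixel uncertainty shrinks as sigma0 / sqrt(number of measurements), as it would for averaged noisy measurements, purely to illustrate the threshold-driven rescan loop:

```python
import numpy as np

def adaptive_sweep(sigma0: np.ndarray, threshold: float, n_rounds: int = 4):
    """Toy adaptive loop: rescan only pixels whose modeled uncertainty
    exceeds the threshold; track rescan fractions and total pixel-scans
    (a proxy for scan time and light dose)."""
    counts = np.ones_like(sigma0)            # one full initial scan everywhere
    total_scans = sigma0.size
    fractions = []
    for _ in range(n_rounds):
        uncertainty = sigma0 / np.sqrt(counts)
        mask = uncertainty > threshold       # pixels still too uncertain
        fractions.append(float(mask.mean()))
        counts[mask] += 1                    # acquire one more measurement there
        total_scans += int(mask.sum())
    return fractions, total_scans

# Uniform initial uncertainty; a moderate threshold stops rescans once
# each pixel has enough measurements.
sigma0 = np.full((8, 8), 1.0)
fractions, total_scans = adaptive_sweep(sigma0, threshold=0.6)
```

Lowering the threshold in this sketch increases both the rescan fractions and the total pixel-scans, mirroring the time/quality trade-off swept in Figure 5.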
Conclusion
We presented a method that utilizes learned, distribution-free uncertainty quantification for multi-image denoising and proposed an adaptive acquisition technique based on the learned uncertainty. We demonstrated both methods on experimental MPM SHG measurements, showing a 120× decrease in both total scanning time and total light dose while successfully recovering fine structures and outperforming existing denoising benchmarks. These speed and light dose improvements mark an important step toward faster and gentler MPM, which will enable the imaging of a new class of interesting samples and lead to new scientific insights and advances.
Furthermore, we demonstrated how deep learning methods for microscopy can be designed to be trustworthy by building in uncertainty quantification to provide error bars for each prediction. To the best of our knowledge, we are the first to utilize distribution-free uncertainty quantification for a denoising task. Uncertainty quantification should become standard practice when using deep learning techniques for scientific and medical imaging, to reduce hallucinations and build confidence in image predictions. We believe the distribution-free learned uncertainty quantification presented here is an attractive path toward this goal due to its ease of use, fast computation, and statistical guarantees.
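The "statistical guarantees" of distribution-free methods typically come from a split-conformal calibration step. As a hedged illustration of the general idea (a standard conformal recipe, not necessarily the exact procedure used in the paper): scale the network's pixel-wise uncertainty by a factor chosen on held-out calibration data so that the resulting intervals cover the ground truth at the desired rate.

```python
import numpy as np

def calibrate_lambda(pred: np.ndarray, sigma: np.ndarray,
                     truth: np.ndarray, alpha: float = 0.1) -> float:
    """Find lambda so that intervals pred +/- lambda*sigma cover the truth
    for at least (1 - alpha) of calibration pixels (split conformal)."""
    scores = np.abs(truth - pred) / sigma            # normalized residuals
    n = scores.size
    # Finite-sample-corrected quantile level, clipped to a valid probability.
    q = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    return float(np.quantile(scores, q))

# Synthetic calibration set: residuals 0.00, 0.01, ..., 0.99 with unit sigma.
pred = np.zeros(100)
sigma = np.ones(100)
truth = np.arange(100) * 0.01
lam = calibrate_lambda(pred, sigma, truth, alpha=0.1)
coverage = float((np.abs(truth - pred) <= lam * sigma).mean())
```

The guarantee is distribution-free because it relies only on exchangeability of the calibration pixels, not on any noise model.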
References
Cassandra Tong Ye, Jiashu Han, Kunzan Liu, and 4 more authors. Learned, Uncertainty-driven Adaptive Acquisition for Photon-Efficient Multiphoton Microscopy. 2023.