Learned, Uncertainty-Driven Adaptive Acquisition
for Photon-Efficient Multiphoton Microscopy
- Cassandra Tong Ye MIT
- Jiashu Han Columbia University
- Kunzan Liu MIT
- Anastasios Angelopoulos UC Berkeley
- Linda Griffith MIT
- Kristina Monakhova MIT
- Sixian You MIT
Abstract
Multiphoton microscopy (MPM) is a powerful imaging tool that has been a critical enabler for live tissue imaging. However, since most multiphoton microscopy platforms rely on point scanning, there is an inherent trade-off between acquisition time, field of view (FOV), phototoxicity, and image quality, often resulting in noisy measurements when fast, large FOV, and/or gentle imaging is needed. Deep learning could be used to denoise multiphoton microscopy measurements, but these algorithms can be prone to hallucination, which can be disastrous for medical and scientific applications. We propose a method to simultaneously denoise and predict pixel-wise uncertainty for multiphoton imaging measurements, improving algorithm trustworthiness and providing statistical guarantees for the deep learning predictions. Furthermore, we propose to leverage this learned, pixel-wise uncertainty to drive an adaptive acquisition technique that rescans only the most uncertain regions of a sample. We demonstrate our method on experimental noisy MPM measurements of human endometrium tissues, showing that we can maintain fine features and outperform other denoising methods while predicting uncertainty at each pixel. Finally, with our adaptive acquisition technique, we demonstrate a 120X reduction in acquisition time and total light dose while successfully recovering fine features in the sample. We are the first to demonstrate distribution-free uncertainty quantification for a denoising task with real experimental data and the first to propose adaptive acquisition based on reconstruction uncertainty.
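The distribution-free guarantee described above can be illustrated with split conformal calibration. The sketch below assumes a denoiser that outputs per-pixel lower/upper uncertainty offsets alongside its prediction; the function names, the multiplicative conformity score, and the 10% rescan budget are illustrative, not the paper's exact procedure.

```python
import numpy as np

def calibrate_lambda(lower, upper, pred, truth, alpha=0.1):
    """Split-conformal calibration of a multiplicative interval stretch.

    `lower`/`upper` are the denoiser's (positive) per-pixel uncertainty
    offsets, so the raw interval is [pred - lower, pred + upper]. Returns
    the stretch lambda such that the calibrated intervals cover roughly a
    (1 - alpha) fraction of calibration pixels.
    """
    # Per-pixel stretch needed for the interval to cover the ground truth.
    scores = np.maximum((pred - truth) / lower, (truth - pred) / upper)
    n = scores.size
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # conformal correction
    return np.quantile(scores.ravel(), level)

def rescan_mask(lower, upper, lam, frac=0.1):
    """Flag the `frac` most uncertain pixels (widest calibrated intervals)."""
    width = lam * (lower + upper)
    return width >= np.quantile(width, 1.0 - frac)
```

At acquisition time, only the pixels flagged by `rescan_mask` would be scanned again, concentrating light dose and scan time where the reconstruction is least trustworthy.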
Multiphoton Microscopy
Here we show our denoised 5–10 fps videos taken at submillilux light levels with no external illumination.
Comparison against other methods
Our method produces good temporal consistency and minimal artifacts at the lowest light levels.
Dancing under the stars Dataset
We provide a dataset of submillilux images as well as a dataset of calibration images (paired). All images are available either in RAW format (.DNG) or as a preloaded .mat file.
Submillilux videos
We provide 42 unpaired raw noisy video clips taken at 5–10 fps. The clips vary in length, totaling over 35 minutes of footage. All videos were taken on a clear, moonless night with no external illumination, and each clip contains significant motion (e.g., dancing, volleyball, flags waving), making the set a challenging test for video denoising algorithms.
- Submillilux Dataset (92 GB): [link]
Paired calibration images
We provide several bursts of paired images for noise model training.
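One common use of such paired bursts is fitting a Poisson–Gaussian noise model, in which the per-pixel variance is an affine function of the mean signal. Below is a minimal sketch assuming a burst of registered frames of a static scene; the function name and the plain least-squares fit are illustrative, not this dataset's official calibration pipeline.

```python
import numpy as np

def fit_noise_model(burst):
    """Fit var = gain * mean + read_var from a burst of shape (frames, H, W)
    of a static scene, via least squares on per-pixel burst statistics."""
    mean = burst.mean(axis=0).ravel()
    var = burst.var(axis=0, ddof=1).ravel()
    # Affine regression of per-pixel variance against per-pixel mean.
    A = np.stack([mean, np.ones_like(mean)], axis=1)
    (gain, read_var), *_ = np.linalg.lstsq(A, var, rcond=None)
    return gain, read_var
```

The fitted `gain` and `read_var` can then parameterize synthetic noise for training a denoiser on clean footage.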
Unpaired clean RGB+NIR videos
Since we use an RGB+NIR camera rather than a standard RGB camera, we also provide a dataset of clean (noiseless) videos from our camera.
- Unpaired clean dataset: [link]
Citation
Supplemental materials for the paper can be found here.
The website template was borrowed from Ben Mildenhall.