QUTCC🤗: Quantile Uncertainty Training and Conformal Calibration for Imaging Inverse Problems

Cassandra Tong Ye, Shamus Li, Tyler King, Kristina Monakhova

GitHub arXiv

Abstract

Deep learning models often hallucinate, producing realistic artifacts that are not truly present in the sample. This can have dire consequences for scientific and medical inverse problems, such as MRI and microscopy denoising, where accuracy is more important than perceptual quality. Uncertainty quantification techniques, such as conformal prediction, can pinpoint outliers and provide guarantees for image regression tasks, improving reliability. However, existing methods utilize a linear constant scaling factor to calibrate uncertainty bounds, resulting in larger, less informative bounds. We propose QUTCC, a quantile uncertainty training and calibration technique that enables nonlinear, non-uniform scaling of quantile predictions to enable tighter uncertainty estimates. Using a U-Net architecture with a quantile embedding, QUTCC enables the prediction of the full conditional distribution of quantiles for the imaging task. During calibration, QUTCC generates uncertainty bounds by iteratively querying the network for upper and lower quantiles, progressively refining the bounds to obtain a tighter interval that captures the desired coverage. We evaluate our method on several denoising tasks as well as compressive MRI reconstruction. Our method successfully pinpoints hallucinations in image estimates and consistently achieves tighter uncertainty intervals than prior methods while maintaining the same statistical coverage.

Quantile Regression: Pinball Loss

Quantile regression is a general approach for estimating the conditional quantiles of a response variable rather than its conditional mean. This is typically accomplished with an asymmetric loss function, called the pinball loss, tailored to the specified quantile level.

L_q(x, x̂) =
    q · |x − x̂|            if x − x̂ ≥ 0
    (1 − q) · |x − x̂|      otherwise

where q ∈ (0, 1) is the desired quantile level, x is the true value, and x̂ is the predicted value. This asymmetric penalty weights overestimation and underestimation differently according to the target quantile. For example, if q = 0.1, overestimates are penalized more heavily than underestimates, pulling the prediction toward a low quantile.
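As a concrete sketch, the pinball loss above can be written in a few lines of NumPy (the function name is ours, for illustration only):

```python
import numpy as np

def pinball_loss(x, x_hat, q):
    """Pinball loss at quantile level q in (0, 1).

    Underestimates (x_hat < x) are weighted by q and overestimates
    (x_hat > x) by 1 - q, so the minimizer is the q-th quantile.
    """
    diff = x - x_hat
    return np.where(diff >= 0, q * np.abs(diff), (1 - q) * np.abs(diff))

# With q = 0.1, overestimating by 0.5 costs nine times as much as
# underestimating by 0.5.
under = pinball_loss(np.array([1.0]), np.array([0.5]), 0.1)  # -> 0.05
over = pinball_loss(np.array([1.0]), np.array([1.5]), 0.1)   # -> 0.45
```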

Pinball loss has previously been used to predict a fixed pair of upper and lower confidence bounds (Im2Im-UQ). In contrast, our work investigates learning the full spectrum of quantiles, commonly referred to as Simultaneous Quantile Regression (SQR), for imaging inverse problems.

Our Method: QUTCC

We propose QUTCC (pronounced: CUTESY🤗), short for Quantile Uncertainty Training and Conformal Calibration, a novel method for simultaneous quantile prediction and conformal calibration that enables efficient and accurate uncertainty quantification for imaging inverse problems. QUTCC uses a single neural network to estimate a distribution of quantiles. During the conformal calibration step, QUTCC applies a non-uniform, nonlinear scaling to the uncertainty bounds, compared to constant scaling used by prior methods. This results in smaller and potentially more informative uncertainty intervals. Additionally, because all quantiles are learned during training, QUTCC can query the full quantile range at inference time to construct a pixel-wise estimate of the underlying probability distribution.
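The calibration step described above, iteratively querying upper and lower quantiles until the desired coverage is reached, admits many realizations; one plausible minimal sketch is a bisection on a symmetric quantile half-width, with `quantile_fn` standing in for the trained network (the helper name and the bisection strategy are our assumptions, not taken from the paper):

```python
import numpy as np

def calibrate_quantile_levels(quantile_fn, x_cal, y_cal, alpha, tol=1e-3):
    """Find the tightest symmetric quantile pair (0.5 - d, 0.5 + d)
    whose predicted interval covers >= 1 - alpha of calibration pixels.

    quantile_fn(x, q) is the model's q-th quantile prediction for input
    x; bisection on the half-width d stands in for the iterative
    refinement of the upper and lower quantile queries.
    """
    lo_d, hi_d = 0.0, 0.5 - 1e-6       # hi_d always keeps valid coverage
    while hi_d - lo_d > tol:
        d = 0.5 * (lo_d + hi_d)
        lower = quantile_fn(x_cal, 0.5 - d)
        upper = quantile_fn(x_cal, 0.5 + d)
        coverage = np.mean((y_cal >= lower) & (y_cal <= upper))
        if coverage >= 1 - alpha:
            hi_d = d                   # wide enough -- try a tighter interval
        else:
            lo_d = d                   # too narrow -- widen
    return 0.5 - hi_d, 0.5 + hi_d
```

Because the network's learned quantiles already vary per pixel, moving the queried levels rescales each pixel's bound by a different, generally nonlinear amount, in contrast to multiplying a fixed interval by one constant.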

Training
During training, QUTCC uses a U-Net architecture with quantile embeddings to learn the full spectrum of quantiles simultaneously. The network is trained using the pinball loss function, which enables it to predict different quantile levels of the conditional distribution for each pixel in the image reconstruction task. This simultaneous quantile regression approach allows the model to capture the uncertainty inherent in the inverse problem.
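The idea can be made concrete with a toy stand-in for the quantile-embedded U-Net: a linear model that takes the quantile level q as an extra input and is trained with pinball loss on quantile levels drawn uniformly at random per sample (random sampling of q is a standard SQR choice; this example is ours, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = x + uniform(-0.5, 0.5) noise, so the true q-th
# conditional quantile of y given x is x + (q - 0.5).
n = 4000
x = rng.uniform(-1.0, 1.0, n)
y = x + rng.uniform(-0.5, 0.5, n)

# Linear stand-in for the quantile-embedded network: y_hat = w . [1, x, q].
w = np.zeros(3)
lr = 0.1
for step in range(500):
    q = rng.uniform(0.0, 1.0, n)              # fresh quantile level per sample
    feats = np.stack([np.ones(n), x, q], axis=1)
    y_hat = feats @ w
    # Pinball subgradient: -q where we underestimate, 1 - q where we overestimate.
    g = np.where(y - y_hat >= 0, -q, 1.0 - q)
    w -= lr * (feats.T @ g) / n

# One trained model now answers any quantile query, e.g. a 10%-90% band at x = 0.
lower = w @ np.array([1.0, 0.0, 0.1])
upper = w @ np.array([1.0, 0.0, 0.9])
```

A single set of weights serves every quantile level, which is what lets the calibration stage later query arbitrary upper and lower quantiles from one network.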

QUTCC Produces Smaller Uncertainty Intervals

We show predictions from both QUTCC and Im2Im-UQ on an undersampled MRI task. In the example below, both models highlight regions of high uncertainty that correspond to regions of high error. The circled region points to a hallucination that appears in both the Im2Im-UQ and QUTCC predictions but is not present in the ground truth. QUTCC produces tighter uncertainty intervals that pinpoint uncertainty and hallucinations more precisely than Im2Im-UQ, which highlights a larger region.
To evaluate QUTCC, we test our approach against Im2Im-UQ on four separate imaging tasks: compressive MRI reconstruction and Gaussian, Poisson, and real-noise denoising. We compare the predictive interval sizes of Im2Im-UQ and QUTCC across all four inverse tasks; QUTCC consistently produces narrower uncertainty intervals. By achieving smaller interval lengths at comparable risk, QUTCC demonstrates that its uncertainty quantification is both more precise and well calibrated, capturing predictive confidence without sacrificing coverage.

Pixel-wise Probability Density Functions

Previous uncertainty quantification methods for image-to-image regression tasks aim to approximate the underlying probability density function (PDF) of pixel-wise predictions, with varying degrees of success. Im2Im-UQ is limited in this regard, as it only predicts discrete upper and lower bounds. We demonstrate that querying multiple quantiles from QUTCC enables construction of a conformalized, pixel-wise PDF. To provide statistical coverage guarantees, we calibrate the model across a range of miscoverage levels α. By systematically varying α (e.g., from 0.1 to 0.9) and recording the corresponding quantile bounds, we obtain a collection of confidence intervals that, when aggregated, approximate the full cumulative distribution function (CDF). Differentiating this CDF yields a conformalized pixel-wise PDF with formal coverage guarantees at each risk level. In the example below, we take an image with Gaussian noise (σ = 0.1) and compare the PDFs at different pixel regions.
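A minimal sketch of this CDF-to-PDF construction for a single pixel, with `quantile_fn` standing in for querying the trained network at that pixel (the per-level conformal adjustment is omitted for brevity, and the function names are ours):

```python
import numpy as np

def pixel_pdf(quantile_fn, q_grid):
    """Approximate one pixel's PDF from a sweep of quantile queries.

    quantile_fn(q) is the predicted value at quantile level q, so the
    pairs (value, q) trace out the CDF; differentiating q with respect
    to value (finite differences) gives the density.
    """
    values = np.array([quantile_fn(q) for q in q_grid])
    midpoints = 0.5 * (values[1:] + values[:-1])
    density = np.diff(q_grid) / np.diff(values)
    return midpoints, density

# Sanity check with a known distribution: a Uniform(2, 4) pixel has
# quantile function 2 + 2q and a flat density of 0.5.
mids, dens = pixel_pdf(lambda q: 2.0 + 2.0 * q, np.linspace(0.05, 0.95, 19))
```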
Interactive Demo: Click on the different colored regions to see the corresponding QUTCC pixel-wise PDF predictions.

Conclusion

We propose QUTCC, a new uncertainty quantification method for imaging inverse problems that achieves tighter uncertainty estimates than previous methods while maintaining the same statistical coverage. QUTCC accomplishes this by training a U-Net with a quantile embedding simultaneously across quantile levels q ∈ (0, 1) and then dynamically adjusting its quantile bound predictions during calibration until the desired risk is satisfied. Our method exhibits tighter uncertainty intervals, on average, while still pinpointing model hallucinations and regions of high error. This can be attributed to the nonlinear, asymmetric scaling our model applies to its pixel-wise uncertainty predictions. While quantifying model uncertainty remains a significant open challenge in deep learning, we believe QUTCC offers a simple yet robust method of uncertainty quantification for imaging inverse problems and image-to-image regression tasks.
BibTeX Citation