
ARB-Dropout: Gradient-Adaptive Single-Pass Uncertainty Quantification in Neural Networks

Published
Server: Preprints.org
DOI: 10.20944/preprints202509.0867.v1

We introduce ARB-Dropout, an efficient single-pass alternative to Monte Carlo (MC) Dropout for uncertainty estimation in deep neural networks. ARB-Dropout adaptively determines per-input dropout rates from gradient variance and analytically propagates the resulting epistemic variance through the network, eliminating the need for multiple stochastic forward passes. By integrating this analytic epistemic estimate with heteroscedastic aleatoric noise from a dedicated output head, ARB-Dropout produces well-calibrated predictive distributions with lower Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL) than MC Dropout, while maintaining comparable accuracy. Experiments on CIFAR-10, CIFAR-100, SVHN, and STL-10 demonstrate that ARB-Dropout offers substantial inference-time speedups and robust uncertainty estimates, making it well-suited for real-time, uncertainty-aware applications.
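The abstract describes two ingredients: analytic (single-pass) propagation of dropout-induced epistemic variance, and its combination with heteroscedastic aleatoric variance from a separate output head. A minimal NumPy sketch of the general idea follows. This is not the paper's implementation: the moment-propagation formulas below are the standard ones for inverted Bernoulli dropout through a linear layer, the fixed rate `p` stands in for the gradient-adaptive per-input rate, and `aleatoric_var` is a placeholder for the dedicated output head.

```python
import numpy as np

def dropout_moments(a, p):
    # Inverted dropout on activations a with rate p keeps the mean
    # unchanged and gives per-unit variance a^2 * p / (1 - p).
    return a, a**2 * p / (1.0 - p)

def linear_propagate(mean, var, W, b):
    # Propagate mean and diagonal variance through y = W a + b,
    # assuming the dropped units are independent.
    return W @ mean + b, (W**2) @ var

rng = np.random.default_rng(0)
a = rng.standard_normal(8)          # activations of a hidden layer
W = rng.standard_normal((4, 8)) * 0.1
b = np.zeros(4)

# In ARB-Dropout the rate would be chosen per input from gradient
# variance; here a fixed placeholder rate is used.
p = 0.2
m, v = dropout_moments(a, p)
y_mean, epistemic_var = linear_propagate(m, v, W, b)

# Placeholder for the heteroscedastic aleatoric head; the predictive
# variance combines both uncertainty sources in one forward pass.
aleatoric_var = np.full(4, 0.05)
predictive_var = epistemic_var + aleatoric_var
```

The single-pass speedup comes from replacing many stochastic forward passes with one deterministic pass over these closed-form moments.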
