Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence
Published in 28th European Conference on Artificial Intelligence (ECAI), 2025
While significant progress has been made in specifying neural networks capable of representing uncertainty, deep networks still often suffer from overconfidence and misaligned predictive distributions. Existing approaches for measuring this misalignment are developed primarily under the framework of calibration, with common metrics such as Expected Calibration Error (ECE). However, calibration provides only a strictly marginal assessment of probabilistic alignment: calibration metrics such as ECE are distribution-wise measures and cannot diagnose the point-wise reliability of individual inputs, which is important for real-world decision-making. We propose a stronger condition for assessing probabilistic fit, which we term conditional congruence, and introduce a metric, Conditional Congruence Error (CCE), that uses conditional kernel mean embeddings to estimate the distance, at any point, between the learned predictive distribution and the empirical conditional distribution in a dataset. Across several high-dimensional regression tasks, we show that CCE exhibits four critical properties: correctness, monotonicity, reliability, and robustness.
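To make the idea concrete, below is a minimal sketch of how a CCE-style estimate based on conditional kernel mean embeddings could look. It assumes RBF kernels, a single model sample per training input, and a standard regularized kernel-regression estimate of the conditional embedding; the function names (`rbf_kernel`, `conditional_congruence_error`), the regularization scheme, and the kernel choices are illustrative assumptions, not the paper's reference implementation.

```python
# Sketch of a point-wise conditional congruence estimate via conditional
# kernel mean embeddings. Assumptions (not from the paper): RBF kernels,
# one model sample per input, ridge-regularized embedding weights.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """RBF (Gaussian) kernel matrix between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))

def conditional_congruence_error(X, Y, model_sampler, x_query, lam=1e-3):
    """Estimate, at x_query, the RKHS distance between the model's
    predictive distribution and the empirical conditional distribution.

    X: (n, d) inputs; Y: (n, p) observed targets
    model_sampler: callable mapping inputs (n, d) -> sampled targets (n, p)
    x_query: (d,) point at which to assess congruence
    """
    n = X.shape[0]
    Yp = model_sampler(X)  # one predictive sample per training input

    # Embedding weights beta(x) = (K_X + lam * n * I)^{-1} k_X(x),
    # the usual kernel-ridge estimate of the conditional mean embedding.
    K_X = rbf_kernel(X, X)
    k_x = rbf_kernel(X, x_query[None, :])[:, 0]
    beta = np.linalg.solve(K_X + lam * n * np.eye(n), k_x)

    # Squared RKHS distance between the two conditional mean embeddings:
    # ||mu_{Y|x} - mu_{Y'|x}||^2 = b^T K_Y b - 2 b^T K_{YY'} b + b^T K_{Y'} b
    K_Y = rbf_kernel(Y, Y)
    K_Yp = rbf_kernel(Yp, Yp)
    K_cross = rbf_kernel(Y, Yp)
    sq = beta @ K_Y @ beta - 2.0 * beta @ K_cross @ beta + beta @ K_Yp @ beta
    return np.sqrt(max(sq, 0.0))
```

As a usage sketch, a well-specified model sampler should yield a small value at any query point, e.g. `conditional_congruence_error(X, Y, sampler, np.zeros(1))` with `sampler = lambda X: 2.0 * X + rng.normal(scale=0.5, size=X.shape)` when `Y` was generated the same way. Averaging over many model samples per input, rather than the single draw used here, would reduce Monte Carlo noise in the estimate.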
Recommended citation: Young, S., Edgren, C., Sinema, R., Hall, A., Dong, N. & Jenkins, P. (2025). "Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence." 28th European Conference on Artificial Intelligence.
Download Paper | Download BibTeX