Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
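For reference, that setting is a single key in the site's Jekyll configuration. A minimal sketch of the relevant entry (the comment is added here for illustration and is not taken from this site's actual file):

```yaml
# _config.yml — Jekyll site configuration
# When false, posts dated in the future are not built or listed.
future: false
```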

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Publications

A General Method for Measuring Calibration of Probabilistic Neural Regressors

Published in 3rd Workshop on Uncertainty Reasoning and Quantification in Decision Making (KDD), 2024

As machine learning systems become increasingly integrated into real-world applications, accurately representing uncertainty is crucial for enhancing their robustness and reliability. Neural networks are effective at fitting high-dimensional probability distributions but often suffer from poor calibration, leading to overconfident predictions. In the regression setting, we find that existing metrics for quantifying model calibration, such as Expected Calibration Error (ECE) and Negative Log Likelihood (NLL), introduce bias, require parametric assumptions, and suffer from information theoretic bounds on their estimating power. We propose a new approach using conditional kernel mean embeddings to measure calibration discrepancies without these shortcomings. Preliminary experiments on synthetic data demonstrate the method’s potential, with future work planned for more complex applications.

Recommended citation: Young, S. & Jenkins, P. (2024). "A General Method for Measuring Calibration of Probabilistic Neural Regressors." 3rd Workshop on Uncertainty Reasoning and Quantification in Decision Making (KDD).
Download Paper | Download Bibtex
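To give a rough sense of the kernel mean embedding machinery this abstract refers to, the unconditional building block is the maximum mean discrepancy (MMD) between two samples. The sketch below is only an illustration under an assumed RBF kernel, not the paper's proposed estimator (which conditions on the input); all names here are hypothetical.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two 1-D sample arrays.
    diff = a[:, None] - b[None, :]
    return np.exp(-gamma * diff ** 2)

def mmd_squared(samples_p, samples_q, gamma=1.0):
    # Squared maximum mean discrepancy: the distance between the kernel
    # mean embeddings of the two empirical distributions.
    k_pp = rbf_kernel(samples_p, samples_p, gamma).mean()
    k_qq = rbf_kernel(samples_q, samples_q, gamma).mean()
    k_pq = rbf_kernel(samples_p, samples_q, gamma).mean()
    return k_pp + k_qq - 2.0 * k_pq

# Example: compare draws from a model's predictive distribution to held-out targets.
rng = np.random.default_rng(0)
predictive_draws = rng.normal(0.0, 1.0, size=500)
observed_targets = rng.normal(0.2, 1.1, size=500)
print(mmd_squared(predictive_draws, observed_targets, gamma=0.5))
```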

Fully Heteroscedastic Count Regression with Deep Double Poisson Networks

Published in 42nd International Conference on Machine Learning, 2025

Neural networks capable of accurate, input-conditional uncertainty representation are essential for real-world AI systems. Deep ensembles of Gaussian networks have proven highly effective for continuous regression due to their ability to flexibly represent aleatoric uncertainty via unrestricted heteroscedastic variance, which in turn enables accurate epistemic uncertainty estimation. However, no analogous approach exists for count regression, despite many important applications. To address this gap, we propose the Deep Double Poisson Network (DDPN), a novel neural discrete count regression model that outputs the parameters of the Double Poisson distribution, enabling arbitrarily high or low predictive aleatoric uncertainty for count data and improving epistemic uncertainty estimation when ensembled. We formalize and prove that DDPN exhibits robust regression properties similar to heteroscedastic Gaussian models via learnable loss attenuation, and introduce a simple loss modification to control this behavior. Experiments on diverse datasets demonstrate that DDPN outperforms current baselines in accuracy, calibration, and out-of-distribution detection, establishing a new state-of-the-art in deep count regression.

Recommended citation: Young, S., Jenkins, P., Da, L., Dotson, J., & Wei, H. (2025). "Fully Heteroscedastic Count Regression with Deep Double Poisson Networks." 42nd International Conference on Machine Learning.
Download Paper | Download Slides | Download Bibtex
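For readers who want a concrete picture of the model class described above, here is a loose PyTorch sketch of a two-parameter count head and an unnormalized Double Poisson negative log-likelihood in the style of Efron (1986). This is an assumption-laden illustration, not the authors' implementation: the layer shapes, positivity constraints, and dropped normalizing terms are choices made here for brevity.

```python
import torch
import torch.nn as nn

class DoublePoissonHead(nn.Module):
    # Hypothetical output head: maps features to a mean (mu) and an
    # inverse-dispersion (phi), both constrained positive via softplus.
    def __init__(self, in_dim: int):
        super().__init__()
        self.mu_layer = nn.Linear(in_dim, 1)
        self.phi_layer = nn.Linear(in_dim, 1)
        self.softplus = nn.Softplus()

    def forward(self, features):
        mu = self.softplus(self.mu_layer(features)).squeeze(-1)
        phi = self.softplus(self.phi_layer(features)).squeeze(-1)
        return mu, phi

def double_poisson_nll(y, mu, phi, eps=1e-8):
    # Negative log-likelihood of Efron's Double Poisson, keeping only the
    # terms that depend on mu and phi (the normalizing constant is dropped).
    y = y.float()
    # y * (1 + log mu - log y) tends to 0 as y -> 0, so guard that case.
    count_term = torch.where(
        y > 0,
        y * (1.0 + torch.log(mu + eps) - torch.log(y + eps)),
        torch.zeros_like(y),
    )
    return (-0.5 * torch.log(phi + eps) + phi * mu - phi * count_term).mean()
```

Ensembling several such heads, as the abstract suggests, would then provide the epistemic uncertainty estimate on top of the per-input aleatoric spread controlled by phi.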

Learnable Product-Price Attribution for Retail Shelf Images

Published in Pending, 2025

Price compliance is a vital component of overall execution for brick-and-mortar retailers. Poor price compliance can negatively impact revenue, undermine consumer trust, and invite damaging legal action. Despite the growing deployment of AI-powered solutions for automated compliance monitoring, the task of reliably capturing prices on a display and associating them with products in complex, real-world environments remains underexplored. Existing methods often rely on fragile spatial heuristics and rigid shelf-structure assumptions, or otherwise require high-definition, close-up display images that are expensive to obtain. Even state-of-the-art vision-language models struggle with the fine-grained spatial reasoning required for this task. In this work, we present PriceLens, a fully end-to-end system for product-price attribution from retail images. PriceLens integrates off-the-shelf object detection and OCR with a novel transformer-based association model, PriceNet, which learns to associate products and price tags by modeling global spatial and semantic context. Unlike heuristic-based approaches, PriceNet captures compositional relationships directly from data. We show that PriceLens significantly outperforms both heuristic and structural baselines, as well as leading vision-language models, on a challenging real-world shelf dataset. To support further research, we release a new benchmark dataset for product-price association.
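The following is a very loose sketch of the kind of transformer-based association module the abstract describes, emphatically not the PriceNet architecture itself: every dimension, layer choice, and name below is an assumption made for illustration. Each detected product or price tag becomes a token built from its box geometry and a feature vector; a transformer encoder shares context across all tokens; a dot-product head scores product-tag pairs.

```python
import torch
import torch.nn as nn

class PairwiseAssociationSketch(nn.Module):
    def __init__(self, feat_dim=128, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(4 + feat_dim, d_model)  # [x, y, w, h] + appearance/text feature
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)

    def forward(self, product_tokens, tag_tokens):
        # product_tokens: (B, P, 4 + feat_dim); tag_tokens: (B, T, 4 + feat_dim)
        tokens = torch.cat([product_tokens, tag_tokens], dim=1)
        encoded = self.encoder(self.embed(tokens))
        num_products = product_tokens.shape[1]
        products, tags = encoded[:, :num_products], encoded[:, num_products:]
        # Scaled dot-product score for every product against every price tag.
        scores = torch.einsum("bpd,btd->bpt", self.q_proj(products), self.k_proj(tags))
        return scores / (products.shape[-1] ** 0.5)  # (B, P, T) association logits

model = PairwiseAssociationSketch()
products = torch.randn(1, 12, 4 + 128)   # 12 detected products
price_tags = torch.randn(1, 9, 4 + 128)  # 9 detected price tags
logits = model(products, price_tags)     # shape (1, 12, 9)
```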

Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence

Published in 28th European Conference on Artificial Intelligence (ECAI), 2025

While significant progress has been made in specifying neural networks capable of representing uncertainty, deep networks still often suffer from overconfidence and misaligned predictive distributions. Existing approaches for measuring this misalignment are primarily developed under the framework of calibration, with common metrics such as Expected Calibration Error (ECE). However, calibration can only provide a strictly marginal assessment of probabilistic alignment. Consequently, calibration metrics such as ECE are distribution-wise measures and cannot diagnose the point-wise reliability of individual inputs, which is important for real-world decision-making. We propose a stronger condition, which we term conditional congruence, for assessing probabilistic fit. We also introduce a metric, Conditional Congruence Error (CCE), that uses conditional kernel mean embeddings to estimate the distance, at any point, between the learned predictive distribution and the empirical, conditional distribution in a dataset. We perform several high dimensional regression tasks and show that CCE exhibits four critical properties: correctness, monotonicity, reliability, and robustness.

Recommended citation: Young, S., Edgren, C., Sinema, R., Hall, A., Dong, N. & Jenkins, P. (2025). "Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence." 28th European Conference on Artificial Intelligence.
Download Paper | Download Bibtex
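Since calibration metrics like ECE only look at the marginal distribution, the point of conditional congruence is to ask the same question at individual inputs. The sketch below illustrates that idea only: it forms a crude Nadaraya-Watson-style weighting of observed targets around a query input and compares the resulting embedding against draws from the model's predictive distribution at that input. It is not the paper's CCE estimator, and every name and kernel choice is hypothetical.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel matrix between two 1-D arrays.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def pointwise_embedding_gap(x_star, predictive_draws, x_data, y_data,
                            gamma_x=1.0, gamma_y=1.0):
    # Input-space weights: how relevant each observed (x, y) pair is to x_star.
    w = np.exp(-gamma_x * (x_data - x_star) ** 2)
    w = w / w.sum()
    m = len(predictive_draws)
    # Distance between the mean embedding of the model's draws at x_star
    # and the weighted embedding of the observed targets.
    k_mm = rbf(predictive_draws, predictive_draws, gamma_y).mean()
    k_md = (rbf(predictive_draws, y_data, gamma_y) * w[None, :]).sum() / m
    k_dd = (w[:, None] * w[None, :] * rbf(y_data, y_data, gamma_y)).sum()
    return np.sqrt(max(k_mm - 2.0 * k_md + k_dd, 0.0))
```

Evaluating this quantity over many query points gives the kind of input-level diagnostic the abstract argues for, rather than a single distribution-wise score.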

Talks