Statistics and Data Science Seminar
Department of Mathematics and Statistics
Fall 2025 Seminars
Welcome to the Fall 2025 Seminar series! The seminar takes place on Wednesdays at 1 p.m. CT. Seminars are either hybrid (in person and over Zoom) or virtual only (over Zoom); in-person talks are held in Parker Hall 358. For any questions or requests, please contact Huan He or Haotian Xu. The list of speakers appears in the table below, followed by the title and abstract of each talk.
| Speaker | Institution | Date | Format |
|---|---|---|---|
| Yin Tang | University of Kentucky | Sep. 17 | In-person |
| Xiaodong Li | UC Davis | Sep. 24 | Online |
| Oct. 1 | TBD | ||
| Bo Li | Washington University in St. Louis | Oct. 8 | In-person |
| Oct. 15 | TBD | ||
| Dmitrii Ostrovskii | Georgia Tech | Oct. 22 | Online |
| Weidong Ma | University of Pennsylvania | Oct. 29 | Online |
| TBD | TBD | Nov. 5 | TBD |
| Anh Nguyen | CSSE Dept, Auburn | Nov. 12 | In-person |
| Ruizhi Zhang | University of Georgia | Nov. 19 | Online |
| NA | NA | Nov. 26 | NA |
| Carlos Misael Madrid Padilla | Washington University in St. Louis | Dec. 3 | Online |
Yin Tang (University of Kentucky)
Title: Belted and Ensembled Neural Network for Linear and Nonlinear Sufficient Dimension Reduction
Abstract: We introduce a unified, flexible, and easy-to-implement framework of sufficient dimension reduction that can accommodate both linear and nonlinear dimension reduction, and both the conditional distribution and the conditional mean as the targets of estimation. This unified framework is achieved by a specially structured neural network -- the Belted and Ensembled Neural Network (BENN) -- that consists of a narrow latent layer, which we call the belt, and a family of transformations of the response, which we call the ensemble. By strategically placing the belt at different layers of the neural network, we can achieve linear or nonlinear sufficient dimension reduction, and by choosing the appropriate transformation families, we can achieve dimension reduction for the conditional distribution or the conditional mean. Moreover, thanks to the neural network formulation, the method is very fast to compute, overcoming a computational bottleneck of traditional sufficient dimension reduction estimators: the inversion of a matrix of dimension either p or n. We develop the algorithm and the convergence rate of our method, compare it with existing sufficient dimension reduction methods, and apply it to two data examples.
https://arxiv.org/abs/2412.08961
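To make the architecture concrete, here is a minimal PyTorch sketch of the belted design as the abstract describes it: a narrow latent layer (the belt) feeding a head that predicts an ensemble of response transformations. The layer widths, the indicator-based ensemble, the loss, and the training loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BENN(nn.Module):
    """Minimal sketch of a Belted and Ensembled Neural Network.

    A narrow latent layer of width d (the "belt") enforces the
    dimension reduction; the outputs are predictions of an ensemble
    of m transformations of the response.
    """

    def __init__(self, p, d, m, linear_belt=True):
        super().__init__()
        if linear_belt:
            # Belt right after a linear map -> linear SDR: the rows of
            # this weight matrix span the estimated subspace.
            self.encoder = nn.Linear(p, d, bias=False)
        else:
            # Belt placed after hidden layers -> nonlinear SDR.
            self.encoder = nn.Sequential(
                nn.Linear(p, 64), nn.ReLU(), nn.Linear(64, d))
        self.head = nn.Sequential(
            nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, m))

    def forward(self, x):
        return self.head(self.encoder(x))

# Toy data: Y depends on X only through one linear direction.
torch.manual_seed(0)
n, p, d, m = 500, 10, 1, 8
X = torch.randn(n, p)
y = torch.sin(2.0 * X[:, 0]) + 0.1 * torch.randn(n)

# Ensemble of indicator transforms 1{Y <= t_k}, which targets the
# conditional distribution; using f(Y) = Y alone would target the mean.
thresholds = torch.quantile(y, torch.linspace(0.1, 0.9, m))
targets = (y.unsqueeze(1) <= thresholds.unsqueeze(0)).float()

model = BENN(p, d, m, linear_belt=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), targets)
    loss.backward()
    opt.step()

B_hat = model.encoder.weight.detach()   # estimated 1-dim SDR direction
print(B_hat / B_hat.norm())             # should roughly align with e_1
```

Passing linear_belt=False moves the belt deeper into the network, which is the nonlinear variant of the reduction described in the abstract.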
Xiaodong Li (UC Davis)
Title: Estimating SNR in High-Dimensional Linear Models: Robust REML and a Multivariate Method of Moments
Abstract: This talk presents two complementary approaches to estimating signal-to-noise ratios (and residual variances) in high-dimensional linear models, motivated by heritability analysis. First, I show that the REML estimator remains consistent and asymptotically normal under substantial model misspecification—fixed coefficients and heteroskedastic and possibly correlated errors. Second, I extend a method-of-moments framework to multivariate responses for both fixed- and random-effects models, deriving asymptotic distributions and heteroskedasticity-robust standard-error formulas. Simulations corroborate the theory and demonstrate strong finite-sample performance.
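For a flavor of the moment-based side of the talk, here is a small numpy illustration of a Dicker-type method-of-moments estimator of the signal and noise variances in the simplest isotropic, univariate-response case. The talk's multivariate extension and robust REML theory go well beyond this toy, and the moment equations below assume i.i.d. standard normal design entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 800
sigma2, tau2 = 1.0, 2.0          # noise variance, signal strength ||beta||^2

# Isotropic design, fixed coefficients with ||beta||^2 = tau2.
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
beta *= np.sqrt(tau2) / np.linalg.norm(beta)
y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)

# Method-of-moments estimators built from two quadratic forms, using
#   E||y||^2     = n (tau^2 + sigma^2)
#   E||X'y||^2   = n (n + p + 1) tau^2 + n p sigma^2.
m1 = y @ y
m2 = np.sum((X.T @ y) ** 2)
sigma2_hat = ((n + p + 1) * m1 - m2) / (n * (n + 1))
tau2_hat = (m2 - p * m1) / (n * (n + 1))

print("sigma2_hat:      ", sigma2_hat)
print("SNR_hat:         ", tau2_hat / sigma2_hat)
print("heritability_hat:", tau2_hat / (tau2_hat + sigma2_hat))
```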
Bo Li (Washington University in St. Louis)
Title: Spatially Varying Changepoint Detection with Application to Mapping the Impact of the Mount Pinatubo Eruption
Abstract: Significant events such as volcanic eruptions can exert global and long-lasting impacts on climate. These impacts, however, are not uniform across space and time. Motivated by the need to understand how the 1991 Mt. Pinatubo eruption influenced global and regional climate, we propose a Bayesian framework to simultaneously detect and estimate spatially varying temporal changepoints. Our approach accounts for the diffusive nature of volcanic effects and leverages spatial correlation. We then extend the changepoint detection problem to large-scale spherical spatiotemporal data and develop a scalable method for global applications. The framework enables Gibbs sampling for changepoints within MCMC, offering greater computational efficiency than the Metropolis–Hastings algorithm. To address the high dimensionality of global data, we incorporate spherical harmonic transformations, which further substantially reduce computational burden while preserving accuracy. We demonstrate the effectiveness of our method using both simulated datasets and real data on stratospheric aerosol optical depth and surface temperature to detect and estimate changepoints associated with the Mt. Pinatubo eruption.
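The Gibbs-within-MCMC idea is easiest to see in a single-site toy: in a Gaussian mean-shift model with conjugate priors, the changepoint has a discrete full conditional that can be sampled exactly. The numpy sketch below uses one time series with known noise variance and ignores the spatial correlation and spherical harmonic machinery that the talk develops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series with one changepoint. (In the talk, many such series are
# coupled through spatial correlation on the sphere; here we keep one.)
T_len, k_true = 100, 60
y = np.concatenate([rng.normal(0.0, 1.0, k_true),
                    rng.normal(1.5, 1.0, T_len - k_true)])

sigma2 = 1.0              # noise variance, assumed known for simplicity
prior_var = 10.0          # N(0, prior_var) priors on both segment means
n_iter, burn = 2000, 500
k, mu1, mu2 = T_len // 2, 0.0, 0.0
k_samples = []

for it in range(n_iter):
    # Conjugate full conditionals for the two segment means.
    for is_second, seg in enumerate((y[:k], y[k:])):
        v = 1.0 / (len(seg) / sigma2 + 1.0 / prior_var)
        draw = rng.normal(v * seg.sum() / sigma2, np.sqrt(v))
        mu1, mu2 = (mu1, draw) if is_second else (draw, mu2)

    # The changepoint's full conditional is discrete, so it can be
    # sampled exactly within the Gibbs scan -- no Metropolis step.
    e1 = np.cumsum((y - mu1) ** 2)[:-1]                   # t < c terms
    e2 = ((y - mu2) ** 2).sum() - np.cumsum((y - mu2) ** 2)[:-1]
    logp = -0.5 / sigma2 * (e1 + e2)
    w = np.exp(logp - logp.max())
    k = rng.choice(np.arange(1, T_len), p=w / w.sum())
    if it >= burn:
        k_samples.append(k)

print("posterior mode of the changepoint:", np.bincount(k_samples).argmax())
```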
Dmitrii Ostrovskii (Georgia Tech)
Title: Near-Optimal and Tractable Estimation under Shift-Invariance
Abstract: How hard is it to estimate a discrete-time signal (x₁, ..., xₙ) ∈ ℂⁿ satisfying an unknown linear recurrence relation of order s and observed in i.i.d. complex Gaussian noise? The class of all such signals is parametric but extremely rich: it contains all exponential polynomials over ℂ with total degree s, including harmonic oscillations with s arbitrary frequencies. Geometrically, this class corresponds to the projection onto ℂⁿ of the union of all shift-invariant subspaces of ℂ^ℤ of dimension s. We show that the statistical complexity of this class, as measured by the squared minimax radius of the (1−δ)-confidence ℓ₂-ball, is nearly the same as for the class of s-sparse signals, namely O(s log(en) + log(δ⁻¹)) ⋅ log²(es) ⋅ log(en/s). Moreover, the corresponding near-minimax estimator is tractable, and it can be used to build a test statistic with a near-minimax detection threshold in the associated detection problem. These statistical results rest upon an approximation-theoretic one: we show that finite-dimensional shift-invariant subspaces admit compactly supported reproducing kernels whose Fourier spectra have nearly the smallest possible ℓₚ-norms for all p ∈ [1, +∞] at once.
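To illustrate the signal class (not the estimator, which is well beyond a short sketch), the numpy snippet below generates a signal obeying an order-s linear recurrence, here a sum of harmonic oscillations with poles on the unit circle, verifies the recurrence numerically, and observes it in i.i.d. complex Gaussian noise. The frequencies and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 256, 4

# A signal obeying an order-s linear recurrence is a linear combination
# of exponentials z_j^t; here, s/2 harmonic oscillations, i.e. s poles
# on the unit circle.
freqs = np.array([0.07, 0.23])
poles = np.exp(2j * np.pi * np.concatenate([freqs, -freqs]))
amps = rng.standard_normal(s) + 1j * rng.standard_normal(s)
t = np.arange(n)
x = (amps[None, :] * poles[None, :] ** t[:, None]).sum(axis=1)

# The recurrence coefficients are those of the monic polynomial whose
# roots are the poles: sum_{j=0}^{s} a_j x_{t-j} = 0 for t >= s.
a = np.poly(poles)                      # a[0] = 1
residual = np.convolve(x, a)[s:n]       # should vanish up to round-off
print("max recurrence residual:", np.abs(residual).max())

# Observation model from the talk: i.i.d. complex Gaussian noise.
sigma = 0.5
noise = sigma * (rng.standard_normal(n)
                 + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = x + noise
```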
Weidong Ma (University of Pennsylvania, Biostatistics)
Title: A Novel Framework for Addressing Disease Under-Diagnosis Using EHR Data
Abstract: Effective treatment of medical conditions begins with an accurate diagnosis. However, many conditions are often underdiagnosed, either being overlooked or diagnosed after significant delays. Electronic Health Records (EHRs) contain extensive patient health information, offering an opportunity to probabilistically identify underdiagnosed individuals. The rationale is that both diagnosed and underdiagnosed patients may display similar health profiles in EHR data, distinguishing them from condition-free patients. Thus, EHR data can be leveraged to develop models that assess an individual's risk of having a condition. To date, this opportunity has largely remained unexploited, partly due to the lack of suitable statistical methods. The key challenge is the positive-unlabeled EHR data structure, which consists of data for diagnosed ("positive") patients and the remaining ("unlabeled") patients, who include underdiagnosed patients and many condition-free patients. Therefore, data for patients who are unambiguously condition-free, essential for developing risk assessment models, are unavailable. To overcome this challenge, we propose ascertaining condition statuses for a small subset of unlabeled patients. We develop a novel statistical method for building accurate models using this supplemented EHR data to estimate the probability that a patient has the condition of interest. Building on the developed risk prediction model, we further study the potential factors that may contribute to under-diagnosis. Numerical simulation studies and real data applications are conducted to assess the performance of the proposed methods.
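The positive-unlabeled structure, and the value of verifying a small subsample, can be mimicked in a few lines. The sketch below is a schematic simulation with a plain logistic model fit to the verified subset, standing in for (and much cruder than) the method in the talk; all features, rates, and sample sizes are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000

# Simulated EHR-like cohort: two features predictive of the condition.
X = rng.standard_normal((n, 2))
p_cond = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.0)))
has_cond = rng.random(n) < p_cond

# Only 60% of affected patients are diagnosed ("positive"); everyone
# else, including the underdiagnosed, is "unlabeled".
diagnosed = has_cond & (rng.random(n) < 0.6)
unlabeled = ~diagnosed

# Naive approach: treat every unlabeled patient as condition-free.
# This estimates P(diagnosed | X), not P(condition | X), so its risks
# are miscalibrated for the underdiagnosed.
naive = LogisticRegression().fit(X, diagnosed)

# Supplemented approach (in the spirit of the talk): ascertain true
# status for a small random subset of the unlabeled pool and model
# P(condition | X, unlabeled) from the verified labels.
idx = rng.choice(np.flatnonzero(unlabeled), size=500, replace=False)
verified = LogisticRegression().fit(X[idx], has_cond[idx])

truth = has_cond[unlabeled]
risk_naive = naive.predict_proba(X[unlabeled])[:, 1]
risk_ver = verified.predict_proba(X[unlabeled])[:, 1]
print("naive model    (underdiagnosed vs condition-free mean risk):",
      risk_naive[truth].mean(), risk_naive[~truth].mean())
print("verified model (underdiagnosed vs condition-free mean risk):",
      risk_ver[truth].mean(), risk_ver[~truth].mean())
```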
Anh Nguyen (Auburn University, Computer Science)
Title: How to make Vision Language Models see and explain themselves
Abstract: Large Language Models (LLMs), with their massive world knowledge learned from text, have completely changed the game. They have introduced a new era: vision-language models (VLMs). In these models, images and text live in the same representation space, and instead of predicting from a fixed set of labels, they draw predictions from an open vocabulary. In this talk, I'll walk you through three challenges of integrating vision capabilities into LLMs. First, LLMs are strongly biased; for example, they might (over)prefer the number 7 or certain names like Biden, and that bias comes straight from their training data. Second, language bias is usually seen as a blessing that helps models generalize beyond their training data, but it is also a curse in vision tasks that demand careful, detailed image analysis. Third, it turns out that VLMs do not have very good "eyesight" when tested on exams similar to the eye exams given to humans. Because of this, VLMs can sometimes behave in ways we don't expect, which calls for an interface that allows humans to understand the thought process of VLMs. However, there is not yet a natural way to explain VLM decisions on an image the way chain-of-thought reasoning does for text. I'll share my proposal for a general Explainable Bottleneck and our implementation of Part-Based Explainable and Editable Bottleneck (PEEB) networks. In fine-grained image classification, PEEB not only explains its predictions by describing each visual part of an object, but also lets users reprogram the classifier's logic using natural language, right at test time.
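The part-based bottleneck idea can be caricatured in a few lines: the class score is a sum of part-by-descriptor similarities, each of which is human-readable and editable. In the sketch below, seeded random unit vectors stand in for real vision and text encoders, and the classes and part descriptions are invented; this is a toy of the scoring rule only, not the PEEB implementation.

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)
dim = 64                     # embedding dimension (arbitrary)

def embed(text):
    """Stand-in for a text/vision encoder: a deterministic unit vector."""
    v = np.random.default_rng(zlib.crc32(text.encode())).standard_normal(dim)
    return v / np.linalg.norm(v)

# Per-part textual descriptors define each class; editing a string
# here reprograms the classifier, with no retraining.
classes = {
    "cardinal": ["short red beak", "red wing", "long red tail"],
    "blue jay": ["black beak", "blue wing", "blue crested tail"],
}

# Simulated part embeddings for an image of a cardinal: the matching
# text vectors plus noise (a real system would detect parts visually).
image_parts = [embed(d) + 0.3 * rng.standard_normal(dim)
               for d in classes["cardinal"]]

# Class score = sum over parts of <part embedding, descriptor embedding>;
# every addend is attributable to a named part, hence explainable.
for name, descriptors in classes.items():
    per_part = [float(v @ embed(d))
                for v, d in zip(image_parts, descriptors)]
    print(name, "total:", round(sum(per_part), 2),
          "per part:", [round(p, 2) for p in per_part])
```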
Ruizhi Zhang (University of Georgia)
Title: Robust Sequential Change Detection: The Approach Based on Breakdown Points and Influence Functions
Abstract: Sequential change-point detection has many important applications in industrial quality control, signal detection, and clinical trials. However, many classical procedures may fail when the observed data are contaminated by outliers, even if the percentage of outliers is very small. In this paper, we focus on the problem of robust sequential change-point detection in the presence of a small proportion of random outliers. We first study the statistical detection properties of a general family of detection procedures under Huber's gross error model. We then incorporate the ideas of the breakdown point and the influence function from the classical offline robust statistics literature and propose new definitions of both to quantify the robustness of general sequential change-point detection procedures. We derive the breakdown points and influence functions of our proposed family of detection procedures, which provide a quantitative analysis of their robustness. Moreover, we find the optimal robust bounded-influence procedure in that general family, namely the one with the smallest detection delay subject to constraints on the false alarm rate and the influence function. It turns out that the optimal procedure is based on a truncation of the scaled likelihood ratio statistic and has a simple form. Finally, we demonstrate the robustness and detection efficiency of the optimal robust bounded-influence procedure through extensive simulations, and we compute numerical approximations of the breakdown points and influence functions of several procedures to gain a quantitative understanding of their robustness.
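The flavor of the truncation can be seen in a toy Gaussian mean-shift CUSUM: clipping the log-likelihood-ratio increment bounds how much any single observation can move the detection statistic. The clip level, threshold, and contamination rate below are arbitrary illustrative choices, not the optimal ones derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def run_length(x, mu1=1.0, clip=None, threshold=5.0):
    """CUSUM for a N(0,1) -> N(mu1,1) mean shift.

    The increment is the log-likelihood ratio, optionally truncated
    at +/- clip, which is the robustification discussed in the talk.
    Returns the first time the statistic crosses the threshold.
    """
    w = 0.0
    for t, xt in enumerate(x, start=1):
        llr = mu1 * xt - mu1 ** 2 / 2
        if clip is not None:
            llr = np.clip(llr, -clip, clip)
        w = max(0.0, w + llr)
        if w > threshold:
            return t
    return len(x)   # no alarm within the sample

# Pre-change data contaminated by occasional gross outliers
# (Huber's gross error model with epsilon = 2%).
n = 2000
x = rng.standard_normal(n)
out = rng.random(n) < 0.02
x[out] += 15.0

print("false alarm time, classical CUSUM:", run_length(x))
print("false alarm time, truncated CUSUM:", run_length(x, clip=1.0))

# After a genuine mean shift, the truncated version still detects fast.
y = rng.standard_normal(500) + 1.0
print("detection delay,  truncated CUSUM:", run_length(y, clip=1.0))
```

A single outlier of size 15 contributes a log-likelihood ratio near 14.5 and trips the classical CUSUM immediately, while the truncated version caps each observation's influence at the clip level.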