Statistics and Data Science Seminar
Department of Mathematics and Statistics
Fall 2025 Seminars
Welcome to the Fall 2025 Seminar series! The seminar takes place on Wednesdays at 1 p.m. CT in Parker Hall 358. Talks are either hybrid (in person and over Zoom) or virtual only (over Zoom). For any questions or requests, please contact Huan He or Haotian Xu. The list of speakers appears in the table below, followed by the title and abstract of each talk.
Speaker | Institution | Date | Format |
---|---|---|---|
Yin Tang | University of Kentucky | Sep. 17 | In-person |
Xiaodong Li | UC Davis | Sep. 24 | Online |
TBD | TBD | Oct. 1 | TBD |
Bo Li | Washington University in St. Louis | Oct. 8 | In-person |
TBD | TBD | Oct. 15 | TBD |
Dmitrii Ostrovskii | Georgia Tech | Oct. 22 | Online |
Weidong Ma | University of Pennsylvania | Oct. 29 | Online |
Gregory J. Matthews | Loyola University Chicago | Nov. 5 | In-person |
Anh Nguyen | CSSE Dept., Auburn University | Nov. 12 | In-person |
Ruizhi Zhang | University of Georgia | Nov. 19 | Online |
NA | NA | Nov. 26 | NA |
Carlos Misael Madrid Padilla | Washington University in St. Louis | Dec. 3 | Online |
Yin Tang (University of Kentucky)
Title: Belted and Ensembled Neural Network for Linear and Nonlinear Sufficient Dimension Reduction
Abstract: We introduce a unified, flexible, and easy-to-implement framework of sufficient dimension reduction that can accommodate both linear and nonlinear dimension reduction, and both the conditional distribution and the conditional mean as the targets of estimation. This unified framework is achieved by a specially structured neural network -- the Belted and Ensembled Neural Network (BENN) -- that consists of a narrow latent layer, which we call the belt, and a family of transformations of the response, which we call the ensemble. By strategically placing the belt at different layers of the neural network, we can achieve linear or nonlinear sufficient dimension reduction, and by choosing the appropriate transformation families, we can achieve dimension reduction for the conditional distribution or the conditional mean. Moreover, thanks to the neural-network architecture, the method is fast to compute, overcoming a computational bottleneck of traditional sufficient dimension reduction estimators: the inversion of a matrix of dimension either p or n. We develop the algorithm and convergence rate of our method, compare it with existing sufficient dimension reduction methods, and apply it to two data examples.
https://arxiv.org/abs/2412.08961
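For readers unfamiliar with the belt idea -- a narrow latent layer whose width equals the target dimension -- a toy single-index example may help. The sketch below is not the authors' BENN implementation (in particular it omits the ensemble of response transformations entirely); it simply trains a tiny network with a one-dimensional linear belt by plain gradient descent on synthetic data y = tanh(b'x) + noise, where the network sizes and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d, h = 400, 10, 1, 16   # samples, predictors, belt width, hidden units

# Synthetic single-index data: y depends on X only through one direction b
b = rng.normal(size=(p, 1))
X = rng.normal(size=(n, p))
y = np.tanh(X @ b) + 0.1 * rng.normal(size=(n, 1))

# Belt-structured network: input -> narrow linear belt -> tanh layer -> output
W1 = 0.5 * rng.normal(size=(p, d))   # columns of W1 span the estimated reduction subspace
W2 = 0.5 * rng.normal(size=(d, h))
W3 = 0.5 * rng.normal(size=(h, 1))

lr, losses = 0.01, []
for _ in range(1000):
    belt = X @ W1                 # d-dimensional linear reduction of X
    hid = np.tanh(belt @ W2)
    yhat = hid @ W3
    err = yhat - y
    losses.append(float((err ** 2).mean()))
    # Manual backprop for the mean squared loss
    g_yhat = 2 * err / n
    g_W3 = hid.T @ g_yhat
    g_hid = g_yhat @ W3.T
    g_z2 = g_hid * (1 - hid ** 2)  # tanh'(z) = 1 - tanh(z)^2
    g_W2 = belt.T @ g_z2
    g_belt = g_z2 @ W2.T
    g_W1 = X.T @ g_belt
    W1 -= lr * g_W1
    W2 -= lr * g_W2
    W3 -= lr * g_W3
```

After training, W1 plays the role of the estimated linear sufficient dimension reduction directions; placing the belt after one or more nonlinear layers instead would give the nonlinear variant described in the abstract.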
Xiaodong Li (UC Davis)
Title: Estimating SNR in High-Dimensional Linear Models: Robust REML and a Multivariate Method of Moments
Abstract: This talk presents two complementary approaches to estimating signal-to-noise ratios (and residual variances) in high-dimensional linear models, motivated by heritability analysis. First, I show that the REML estimator remains consistent and asymptotically normal under substantial model misspecification—fixed coefficients and heteroskedastic and possibly correlated errors. Second, I extend a method-of-moments framework to multivariate responses for both fixed- and random-effects models, deriving asymptotic distributions and heteroskedasticity-robust standard-error formulas. Simulations corroborate the theory and demonstrate strong finite-sample performance.
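For intuition about moment-based SNR estimation, here is an idealized sketch in the spirit of Dicker-type estimators -- not the multivariate method of the talk. It assumes the random-effects model with X having i.i.d. N(0,1) entries, in which case two moments identify (τ², σ²): E[‖y‖²/n] = τ² + σ² and E[‖Xᵀy‖²/(np)] = τ²(n+p+1)/p + σ². The dimensions and parameter values below are arbitrary simulation choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 1000
tau2, sigma2 = 2.0, 1.0          # true signal and noise variances (SNR = 2)

# Random-effects model: beta ~ N(0, (tau2/p) I), X with iid N(0,1) entries
X = rng.normal(size=(n, p))
beta = rng.normal(scale=np.sqrt(tau2 / p), size=p)
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Empirical moments
m1 = (y @ y) / n                     # estimates tau2 + sigma2
m2 = np.sum((X.T @ y) ** 2) / (n * p)  # estimates tau2*(n+p+1)/p + sigma2

# Solve the two moment equations: m2 - m1 = tau2 * (n+1) / p
tau2_hat = p * (m2 - m1) / (n + 1)
sigma2_hat = m1 - tau2_hat
snr_hat = tau2_hat / sigma2_hat
```

The talk's contributions go well beyond this toy: robustness of REML under misspecification, multivariate responses, and heteroskedasticity-robust standard errors.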
Bo Li (Washington University in St. Louis)
Title: Spatially Varying Changepoint Detection with Application to Mapping the Impact of the Mount Pinatubo Eruption
Abstract: Significant events such as volcanic eruptions can exert global and long-lasting impacts on climate. These impacts, however, are not uniform across space and time. Motivated by the need to understand how the 1991 Mt. Pinatubo eruption influenced global and regional climate, we propose a Bayesian framework to simultaneously detect and estimate spatially varying temporal changepoints. Our approach accounts for the diffusive nature of volcanic effects and leverages spatial correlation. We then extend the changepoint detection problem to large-scale spherical spatiotemporal data and develop a scalable method for global applications. The framework enables Gibbs sampling for changepoints within MCMC, offering greater computational efficiency than the Metropolis–Hastings algorithm. To address the high dimensionality of global data, we incorporate spherical harmonic transformations, which further substantially reduce computational burden while preserving accuracy. We demonstrate the effectiveness of our method using both simulated datasets and real data on stratospheric aerosol optical depth and surface temperature to detect and estimate changepoints associated with the Mt. Pinatubo eruption.
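As background for readers new to changepoint detection, the one-dimensional version of the problem (a single mean shift in a noisy series, located by minimizing the total residual sum of squares over candidate split points) can be written in a few lines. This is a generic least-squares scan on synthetic data, not the Bayesian spatially varying framework of the talk; the series length, shift size, and trimming margin are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, cp_true = 200, 120
# Mean shifts from 0 to 1.5 at index cp_true
y = np.concatenate([rng.normal(0.0, 1.0, cp_true),
                    rng.normal(1.5, 1.0, n - cp_true)])

def rss(seg):
    """Residual sum of squares of a segment around its own mean."""
    return ((seg - seg.mean()) ** 2).sum()

# Scan candidate split points (trimming 10 points at each end),
# picking the split that minimizes the combined RSS of the two segments
costs = [rss(y[:k]) + rss(y[k:]) for k in range(10, n - 10)]
cp_hat = 10 + int(np.argmin(costs))
```

The spatially varying setting of the talk replaces this single series with a field of correlated series, one per location, and borrows strength across space via a Bayesian prior rather than scanning each series independently.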
Dmitrii Ostrovskii (Georgia Tech)
Title: Near-Optimal and Tractable Estimation under Shift-Invariance
Abstract: How hard is it to estimate a discrete-time signal (x₁, …, xₙ) ∈ ℂⁿ satisfying an unknown linear recurrence relation of order s and observed in i.i.d. complex Gaussian noise? The class of all such signals is parametric but extremely rich: it contains all exponential polynomials over ℂ with total degree s, including harmonic oscillations with s arbitrary frequencies. Geometrically, this class corresponds to the projection onto ℂⁿ of the union of all shift-invariant subspaces of ℂ^ℤ of dimension s. We show that the statistical complexity of this class, as measured by the squared minimax radius of the (1−δ)-confidence ℓ₂-ball, is nearly the same as for the class of s-sparse signals, namely O(s·log(en) + log(δ⁻¹)) · log²(es) · log(en/s). Moreover, the corresponding near-minimax estimator is tractable, and it can be used to build a test statistic with a near-minimax detection threshold in the associated detection problem. These statistical results rest upon an approximation-theoretic one: we show that finite-dimensional shift-invariant subspaces admit compactly supported reproducing kernels whose Fourier spectra have nearly the smallest possible ℓ_p-norms, for all p ∈ [1, +∞] at once.
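To make the signal class concrete: a (complex) harmonic oscillation with s frequencies satisfies a linear recurrence of order s whose characteristic roots are e^{iω_k}. The snippet below builds such a signal and verifies the recurrence numerically; the frequencies and amplitudes are arbitrary illustrative choices, and this is only a demonstration of the signal class, not of the estimator from the talk.

```python
import numpy as np

# Harmonic oscillation with s = 3 frequencies (illustrative values)
omegas = np.array([0.3, 1.1, 2.0])
s = len(omegas)
roots = np.exp(1j * omegas)          # characteristic roots on the unit circle
a = np.poly(roots)                   # monic characteristic polynomial, length s+1, a[0] = 1

n = 50
t = np.arange(n)
amps = np.array([1.0, -0.5, 2.0])
x = (amps * np.exp(1j * np.outer(t, omegas))).sum(axis=1)

# x satisfies the order-s recurrence: sum_{j=0}^{s} a[j] * x[t-j] = 0 for t >= s,
# because each root z_k of the characteristic polynomial annihilates z_k^t terms
resid = np.array([np.dot(a, x[t_ - np.arange(s + 1)]) for t_ in range(s, n)])
max_resid = np.max(np.abs(resid))    # should be numerically zero
```

Real-valued cosines would need order 2s (roots come in conjugate pairs), which is why the abstract's complex setting ℂⁿ is the natural one for this class.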