Applied and Computational Mathematics Seminars



Upcoming Applied and Computational Mathematics Seminars
DMS Applied and Computational Mathematics Seminar
Sep 26, 2025 02:00 PM
328 Parker Hall



Speaker: Wuchen Li (University of South Carolina)

Title: Information Gamma Calculus: Convexity Analysis for Stochastic Differential Equations
 
 
Abstract: We study the Lyapunov convergence analysis for degenerate and non-reversible stochastic differential equations (SDEs). We apply the Lyapunov method to the Fokker–Planck equation, in which the Lyapunov functional is chosen as a weighted relative Fisher information functional. We derive a structure condition and formulate the Lyapunov constant explicitly. Given a positive Lyapunov constant, we prove exponential convergence of the probability density function towards its invariant distribution in the L1 norm. Several examples are presented: underdamped Langevin dynamics with variable diffusion matrices, quantum SDEs in Lie groups (Heisenberg group, displacement group, and Martinet sub-Riemannian structure), three oscillator chain models with nearest-neighbor couplings, and underdamped mean-field Langevin dynamics (weakly self-consistent Vlasov–Fokker–Planck equations). If time allows, some extensions to time-inhomogeneous SDEs will be discussed.
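For context, the (unweighted) relative Fisher information that such weighted Lyapunov functionals generalize has the standard form below; the notation here is generic and the specific weighting used in the talk may differ.

```latex
\[
\mathcal{I}(\rho \,\|\, \pi) \;=\; \int \left| \nabla \log \frac{\rho}{\pi} \right|^{2} \rho \, dx ,
\]
```

where \(\rho = \rho_t\) solves the Fokker–Planck equation and \(\pi\) is the invariant density. A Lyapunov constant \(\lambda > 0\) corresponds to a decay estimate of the form \(\mathcal{I}(\rho_t \,\|\, \pi) \le e^{-2\lambda t}\, \mathcal{I}(\rho_0 \,\|\, \pi)\), from which \(L^1\) convergence of \(\rho_t\) to \(\pi\) follows via standard functional inequalities.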

Host: Yanzhao Cao



Past Applied and Computational Mathematics Seminars
DMS Applied and Computational Mathematics Seminar
Sep 19, 2025 02:00 PM
328 Parker Hall



Speaker: Mahmoud Abdelgalil (University of California, San Diego)

Title: On some foundational issues in feedback control

 

Abstract: The remarkable success of closed-loop control in mitigating the effect of uncertainty on a system’s performance has undoubtedly enabled much of the technological world around us. Indeed, feedback regulation can be found “under the hood” in the functioning of engines, the workings of biological organisms, interplanetary navigation, GPS tracking, robotics, and more. While the mitigation of uncertainty has been at the heart of control theory since its inception, explicit control of uncertainty is a relatively recent development that has garnered much attention. In this line of work, a main object of study is the Liouville (continuity) equation, the PDE governing the evolution of the probability distribution of the state of a dynamical system. While it was widely believed that the basic question of controllability of the Liouville equation had been resolved, it escaped the community’s attention for almost two decades that early investigations of the subject fell short of providing a satisfactory answer, even for linear systems. In this talk, we revisit and address this topic and develop a theory of Collective Steering: the endeavor to shepherd an ensemble of dynamical systems between desired configurations using a common feedback law. Our investigation sheds light on a topological obstruction at the heart of the issue that limits the ability to design feedback control laws that are globally continuous with respect to the specifications. Along the way, we touch upon an elegant geometric framework at the intersection of optimal transport, geometric hydrodynamics, and quantum mechanics.

 

Host: Yuming Paul Zhang

ACM seminar’s website: https://sites.google.com/view/yzhangpaul/applied-and-computational-mathematics-seminars


DMS Applied and Computational Mathematics Seminar
Sep 05, 2025 02:00 PM
328 Parker Hall


 
 
Speaker: Wenjing Liao (Georgia Tech)
 
Title: Exploiting Low-Dimensional Data Structures and Understanding Neural Scaling Laws of Transformers
 
 
Abstract: When training deep neural networks, a model’s generalization error is often observed to follow a power scaling law dependent on the model size and the data size. A prominent example is transformer-based large language models (LLMs), where networks with billions of parameters are trained on trillions of tokens. A theoretical interest in LLMs is to understand why transformer scaling laws emerge. In this talk, we exploit low-dimensional structures in language datasets by estimating its intrinsic dimension and establish statistical estimation and mathematical approximation theories for transformers to predict the scaling laws. This perspective shows that transformer scaling laws can be explained in a manner consistent with the underlying data geometry. We further validate our theory with empirical observations of LLMs and find strong agreement between the observed empirical scaling laws and our theoretical predictions. Finally, we turn to in-context learning, analyzing its scaling behavior by uncovering a connection between the attention mechanism in transformers and classical kernel methods in machine learning.
 
 
Host: Yimin Zhong
 
 
 
 
 

DMS Applied and Computational Mathematics Seminar
May 02, 2025 02:00 PM
328 Parker Hall



Speaker: Dr. Zhongqiang Zhang (Worcester Polytechnic Institute, Worcester, Massachusetts)

Title: Solving Fokker-Planck Equations in High Dimensions Using Tensor Neural Networks 

 

Abstract: We solve high-dimensional Fokker-Planck equations on the whole space using tensor neural networks. The tensor neural networks consist of a sum of tensor products of one-dimensional feedforward networks, or a linear combination of several selected radial basis functions. These networks allow us to exploit auto-differentiation in major Python packages efficiently. Furthermore, using radial basis functions can fully avoid auto-differentiation, which is very expensive in high dimensions. We then use physics-informed neural networks and stochastic gradient descent methods to learn the tensor networks. One essential step is to determine a proper numerical support for the Fokker-Planck equation. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for Fokker-Planck equations from two to ten dimensions.
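To make the tensor-product ansatz concrete, here is a minimal sketch of a rank-R sum of products of one-dimensional feedforward networks. This is an illustrative stand-in, not the speaker's exact architecture: the rank, widths, and random initialization are assumptions, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_1d_net(hidden=16):
    """A tiny one-dimensional feedforward net x -> scalar, tanh activation."""
    W1 = rng.normal(size=(hidden, 1))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=hidden)
    def net(x):                                   # x: (n,) batch of scalars
        h = np.tanh(np.outer(x, W1[:, 0]) + b1)  # (n, hidden)
        return h @ w2                             # (n,)
    return net

d, R = 10, 4  # spatial dimension and tensor rank (assumed values)
nets = [[make_1d_net() for _ in range(d)] for _ in range(R)]

def tensor_nn(X):
    """Evaluate sum_r prod_k phi_{r,k}(x_k) on a batch X of shape (n, d)."""
    out = np.zeros(X.shape[0])
    for r in range(R):
        prod = np.ones(X.shape[0])
        for k in range(d):
            prod *= nets[r][k](X[:, k])  # each factor acts on one coordinate
        out += prod
    return out

X = rng.normal(size=(32, d))
y = tensor_nn(X)
print(y.shape)  # (32,)
```

Because each factor depends on a single coordinate, partial derivatives of the product separate, which is what makes auto-differentiation (or closed-form differentiation for radial basis functions) cheap in high dimensions.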

 
 

DMS Applied and Computational Mathematics Seminar
Apr 25, 2025 02:00 PM
328 Parker Hall


 
Speaker: Daniel Massatt (LSU) 
 
Title: Continuum model accuracy for electronic moiré 2D materials
 
 
Abstract: Incommensurate 2D materials with two similar periodicities form large moiré patterns, resulting in unique electronic properties including correlated insulators and superconductors. Before studying correlated effects, a thorough understanding of the single-particle picture is critical, as a reduced single-particle basis is key for constructing two-body models. The most popular approximate models are the continuum models. In this work, we analyze the accuracy of continuum models relative to the more fundamental tight-binding models (including ab initio tight-binding). We show that continuum models such as the popular Bistritzer-MacDonald model can be realized as careful Taylor expansions of a momentum-space approximation of tight-binding, and we discuss the accuracy and the effect on the band structure of expansions of various orders in the setting of twisted bilayer graphene. The momentum-space and continuum models both yield candidate single-particle bases for many-body models.
 
 

DMS Applied and Computational Mathematics Seminar
Apr 18, 2025 02:00 PM
ZOOM



Speaker: Catalin Trenchea (University of Pittsburgh)  

Title: An energy stable, second-order time-stepping method for two phase flow in porous media

 

Abstract: We propose and analyze a second-order partitioned time-stepping method for a two-phase flow problem in porous media. The algorithm is based on a refactorization of Cauchy’s one-leg θ-method: the first step consists of the implicit backward Euler method, while the second uses a linear extrapolation. In the backward Euler step, the decoupled equations are solved iteratively. We prove that the iterations converge linearly to the solution of the coupled problem, under some conditions on the data. When θ = 1/2, the algorithm is equivalent to the symplectic midpoint method. Similarly to the continuous case, we also prove a discrete Helmholtz free energy balance, without numerical dissipation. We compare this midpoint method with the classic backward Euler method and two implicit-explicit time-lagging schemes. The midpoint method outperforms the other schemes in terms of rates of convergence, long-time behavior, and energy approximation, for both small and large values of the time step.
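The refactorization described above can be written out for a generic ODE \(y' = f(t, y)\) (standard form; the notation is assumed, not taken from the talk):

```latex
\begin{aligned}
&\text{Step 1 (backward Euler to } t_{n+\theta}\text{):} &&
y_{n+\theta} = y_n + \theta\,\Delta t\, f\!\left(t_{n+\theta},\, y_{n+\theta}\right),\\[2pt]
&\text{Step 2 (linear extrapolation):} &&
y_{n+1} = \tfrac{1}{\theta}\, y_{n+\theta} \;-\; \tfrac{1-\theta}{\theta}\, y_n .
\end{aligned}
```

For θ = 1/2, Step 1 is a half-step of backward Euler and Step 2 gives \(y_{n+1} = 2\,y_{n+1/2} - y_n\), which together reproduce the implicit midpoint rule, consistent with the symplectic-midpoint equivalence stated in the abstract.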

 
 

DMS Applied and Computational Mathematics Seminar
Apr 11, 2025 02:00 PM
328 Parker Hall


 
 
Speaker: Dr. James Scott (incoming faculty member in Applied Mathematics)
 
Title: Nonlocal Boundary Value Problems with Local Boundary Conditions
 
 
Abstract: We state and analyze nonlocal problems with classically defined, local boundary conditions. The model takes its horizon parameter to be spatially dependent, vanishing near the boundary of the domain. We establish a Green's identity for the nonlocal operator that recovers the classical boundary integral, which permits the use of variational techniques. Using this, we show the existence of weak solutions, as well as their variational convergence to classical counterparts as the bulk horizon parameter converges uniformly to zero. In certain circumstances, global regularity of solutions can be established, resulting in improved modes and rates of variational convergence. Generalizations of these results pertaining to models in continuum mechanics and Laplacian learning will also be presented.
 
 

DMS Applied and Computational Mathematics Seminar
Apr 04, 2025 02:00 PM
328 Parker Hall



Speaker: Molei Tao (Georgia Tech)  
 
Title: Optimization, Sampling, and Generative Modeling in Non-Euclidean Spaces


Abstract: Machine learning in non-Euclidean spaces has been rapidly attracting attention in recent years, and this talk will give some examples of progress on its mathematical and algorithmic foundations. A sequence of developments that eventually leads to the generative modeling of data on Lie groups will be reported. Such a problem occurs, for example, in the Gen-AI design of molecules.

More precisely, I will begin with variational optimization, which, together with delicate interplays between continuous- and discrete-time dynamics, enables the construction of momentum-accelerated algorithms that optimize functions defined on manifolds. Selected applications, such as a generic improvement of Transformers and a low-dimensional approximation of the high-dimensional optimal transport distance, will be described. Then I will turn the optimization dynamics into an algorithm that samples from probability distributions on Lie groups. This sampler provably converges, even without a log-concavity condition or its common relaxations. Finally, I will describe how this sampler can lead to a structurally pleasant diffusion generative model that allows users, given training data that follow any latent statistical distribution on a Lie group manifold, to generate more data exactly on the same manifold that follow the same distribution. If time permits, applications such as molecule design and generative innovation of quantum processes will be briefly discussed.

Short bio:
Molei Tao is a full professor in the School of Mathematics at Georgia Tech, working on the mathematical foundations of machine learning. He received his B.S. from Tsinghua University and Ph.D. from Caltech, and worked as a Courant Instructor at NYU before joining Georgia Tech. He serves as an Area Chair for NeurIPS, ICLR, and ICML, and he is a recipient of the W.P. Carey Ph.D. Prize in Applied Mathematics (2011), American Control Conference Best Student Paper Finalist (2013), NSF CAREER Award (2019), AISTATS Best Paper Award (2020), IEEE EFTF-IFCS Best Student Paper Finalist (2021), Cullen-Peck Scholar Award (2022), GT-Emory AI.Humanity Award (2023), SONY Faculty Innovation Award (2024), the Best Poster Award at the international conference “Recent Advances and Future Directions for Sampling” held at Yale (2024), as well as several other recognitions.
 
 

DMS Applied and Computational Mathematics Seminar
Mar 28, 2025 02:00 PM
328 Parker Hall


 
Speaker: Dr. Yimin Zhong (Auburn)
 
Title: Numerical Understanding of Neural Networks
 
 
Abstract: In this talk, I will discuss a couple of recent works on neural networks. The motivation is to see whether neural networks are suitable for general scientific computing. Our study of shallow neural networks demonstrates, from several perspectives, that shallow networks are in general low-pass filters. Based on this observation, we propose to use compositions of shallow networks to construct deep neural networks, which demonstrate better performance than vanilla fully connected neural networks with a comparable number of parameters.
 
 
This talk is a part of Dr. Zhong's 3rd year review process.
 
 

DMS Applied and Computational Mathematics Seminar
Mar 21, 2025 02:00 PM
328 Parker Hall


 
Speaker: Qi Tang (Georgia Tech) 
 
Title: Structure-preserving machine learning for learning dynamical systems
 
 
Abstract: I will present our recent work on structure-preserving machine learning (ML) for dynamical systems. First, I introduce a structure-preserving neural ODE framework that accurately captures chaotic dynamics in dissipative systems. Inspired by the inertial manifold theorem, our model learns the ODE’s right-hand side by combining a linear and a nonlinear term, enabling long-term stability on the attractor for the Kuramoto-Sivashinsky equation. This framework is further enhanced with exponential integrators. Next, I discuss ML for singularly perturbed systems, leveraging the Fenichel normal form to simplify fast dynamics near slow manifolds. We propose a fast-slow neural network that enforces the existence of a trainable, attractive invariant slow manifold as a hard constraint.
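To illustrate the linear-plus-nonlinear right-hand side with an exponential integrator, here is a minimal sketch. It is not the speaker's model: the diagonal dissipative linear part, the tiny randomly initialized nonlinear network standing in for a trained one, and the first-order exponential-Euler step are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Dissipative linear part (assumption): diagonal with negative eigenvalues,
# so its flow can be integrated exactly and elementwise.
lam = -np.linspace(0.5, 2.0, n)

# Tiny nonlinear part N(u): one hidden layer, random weights as a stand-in
# for a trained network.
W1 = rng.normal(scale=0.1, size=(16, n))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(n, 16))

def N(u):
    return W2 @ np.tanh(W1 @ u + b1)

def exp_euler_step(u, dt):
    """Exponential-Euler step for du/dt = lam*u + N(u): the linear part is
    integrated exactly; the nonlinear part enters through the phi_1 function."""
    E = np.exp(dt * lam)              # exact flow of the linear part
    phi1 = (E - 1.0) / (dt * lam)     # phi_1(z) = (e^z - 1)/z, elementwise
    return E * u + dt * phi1 * N(u)

u = rng.normal(size=n)
for _ in range(100):
    u = exp_euler_step(u, dt=0.1)
print(np.linalg.norm(u))
```

Treating the stiff linear term exactly is what lets such schemes take larger stable time steps than explicit integrators on dissipative systems; here the trajectory contracts toward the origin because N(0) = 0 and the linear part is strictly stable.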
 

DMS Applied and Computational Mathematics Seminar
Feb 21, 2025 02:00 PM
328 Parker Hall



Speaker: Wei Zhu (Georgia Tech)  

Title: Symmetry-Preserving Machine Learning: Theory and Applications

 

Abstract: Symmetry underlies many machine learning and scientific computing tasks, from computer vision to physical system modeling. Models designed to respect symmetry often perform better, but several questions remain. How can we measure and maintain approximate symmetry when real-world symmetries are imperfect? How much training data can symmetry-based models save? And in non-convex optimization, do these models truly converge to better solutions? In this talk, I will share my work on these challenges, revealing that the answers are sometimes surprising. The approach draws on applied probability, harmonic analysis, differential geometry, and optimization, but no specialized background is required.

