DMS Applied and Computational Mathematics Seminar
May 02, 2025 02:00 PM
328 Parker Hall
Speaker: Dr. Zhongqiang Zhang (Worcester Polytechnic Institute, Worcester, Massachusetts)
Title: Solving Fokker-Planck Equations in High Dimensions Using Tensor Neural Networks
Abstract: We solve high-dimensional Fokker-Planck equations on the whole space using tensor neural networks. The tensor neural networks consist of a sum of tensor products of one-dimensional feedforward networks or a linear combination of several selected radial basis functions. These networks allow us to exploit auto-differentiation in major Python packages efficiently. Moreover, using radial basis functions entirely avoids auto-differentiation, which is very expensive in high dimensions. We then use physics-informed neural networks and stochastic gradient descent to learn the tensor networks. One essential step is to determine a proper numerical support for the Fokker-Planck equation. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for Fokker-Planck equations from two to ten dimensions.
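For a concrete picture, here is a minimal sketch in PyTorch of the kind of tensor-product ansatz the abstract describes: a rank-R sum of products of one-dimensional networks, trained with a physics-informed residual. The toy problem (a stationary Fokker-Planck equation with assumed Ornstein-Uhlenbeck drift b(x) = -x and unit diffusion), the rank, the collocation domain, and all hyperparameters are illustrative assumptions, not the speaker's setup.

```python
# Sketch only: rank-R tensor neural network for a stationary Fokker-Planck
# equation with assumed drift b(x) = -x and unit diffusion, so the PDE is
# div(x p) + Δp = 0 on an assumed numerical support [-4, 4]^D.
import torch
import torch.nn as nn

D, R = 4, 5  # dimension and tensor rank (assumptions)

class TensorNN(nn.Module):
    def __init__(self, dim, rank, width=32):
        super().__init__()
        # one small 1-D network per coordinate, each emitting R factors
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, rank))
            for _ in range(dim)
        )

    def forward(self, x):  # x: (N, D)
        factors = [net(x[:, d:d + 1]) for d, net in enumerate(self.nets)]
        prod = torch.stack(factors, dim=0).prod(dim=0)  # (N, R)
        return prod.sum(dim=1, keepdim=True)            # (N, 1)

model = TensorNN(D, R)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.rand(256, D) * 8 - 4).requires_grad_(True)  # collocation points
    p = model(x)
    grad_p = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    # Laplacian via a second autograd pass, one coordinate at a time
    lap = sum(
        torch.autograd.grad(grad_p[:, d].sum(), x, create_graph=True)[0][:, d]
        for d in range(D)
    )
    # stationary residual: div(x p) + Δp = D*p + x·∇p + Δp
    residual = D * p.squeeze(1) + (x * grad_p).sum(dim=1) + lap
    # crude Monte Carlo normalization penalty to rule out p ≡ 0
    norm_penalty = (8.0 ** D * p.mean() - 1.0).pow(2)
    loss = residual.pow(2).mean() + norm_penalty
    opt.zero_grad(); loss.backward(); opt.step()
```

A full treatment would exploit the tensor structure to evaluate derivatives dimension by dimension, or switch to the radial-basis variant to avoid autograd altogether as the abstract notes; the generic autograd calls above are only for readability.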
DMS Applied and Computational Mathematics Seminar
Apr 25, 2025 02:00 PM
328 Parker Hall

DMS Applied and Computational Mathematics Seminar
Apr 18, 2025 02:00 PM
ZOOM

Speaker: Catalin Trenchea (University of Pittsburgh)
Abstract: We propose and analyze a second-order partitioned time-stepping method for a two-phase flow problem in porous media. The algorithm is based on a refactorization of Cauchy's one-legged θ-method. The first step consists of the implicit backward Euler method, while the second step uses linear extrapolation. In the backward Euler step, the decoupled equations are solved iteratively; we prove that the iterations converge linearly to the solution of the coupled problem, under some conditions on the data. When θ = 1/2, the algorithm is equivalent to the symplectic midpoint method. As in the continuous case, we also prove a discrete Helmholtz free-energy balance, without numerical dissipation. We compare the midpoint method with the classical backward Euler method and two implicit-explicit time-lagging schemes. The midpoint method outperforms the other schemes in terms of rates of convergence, long-time behaviour, and energy approximation, for both small and large values of the time step.
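As a concrete illustration of the refactorization idea, and only as a sketch under assumptions (a scalar test ODE, not the speaker's porous-media system): for y' = f(t, y), the one-legged θ-method can be realized as a backward Euler solve to the intermediate time t_n + θk followed by linear extrapolation, with θ = 1/2 recovering the midpoint method.

```python
# Sketch: Cauchy's one-legged θ-method, refactorized as backward Euler to the
# intermediate time t_{n+θ} plus linear extrapolation; θ = 1/2 is the midpoint
# method. The fixed-point loop mimics solving the implicit step iteratively,
# as the abstract describes for the decoupled equations.
import numpy as np

def one_legged_theta(f, y0, t0, t1, n_steps, theta=0.5, max_iters=50, tol=1e-12):
    k = (t1 - t0) / n_steps
    y, t = y0, t0
    for _ in range(n_steps):
        # Step 1: backward Euler over a substep of length θk
        t_mid, y_mid = t + theta * k, y
        for _ in range(max_iters):
            y_new = y + theta * k * f(t_mid, y_mid)
            if abs(y_new - y_mid) < tol:
                y_mid = y_new
                break
            y_mid = y_new
        # Step 2: linear extrapolation to t_{n+1}
        y = (y_mid - (1 - theta) * y) / theta
        t += k
    return y

# second-order convergence check on y' = -y, y(0) = 1, over [0, 1]
for n in (10, 20, 40):
    err = abs(one_legged_theta(lambda t, y: -y, 1.0, 0.0, 1.0, n) - np.exp(-1.0))
    print(n, err)   # errors shrink by ~4x each time n doubles
```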
DMS Applied and Computational Mathematics Seminar
Apr 11, 2025 02:00 PM
328 Parker Hall

DMS Applied and Computational Mathematics Seminar
Apr 04, 2025 02:00 PM
328 Parker Hall
Abstract: Machine learning in non-Euclidean spaces has been rapidly attracting attention in recent years, and this talk will give some examples of progress on its mathematical and algorithmic foundations. A sequence of developments that eventually leads to the generative modeling of data on Lie groups will be reported. Such a problem occurs, for example, in the Gen-AI design of molecules.
More precisely, I will begin with variational optimization, which, together with delicate interplays between continuous- and discrete-time dynamics, enables the construction of momentum-accelerated algorithms that optimize functions defined on manifolds. Selected applications, such as a generic improvement of the Transformer architecture and a low-dimensional approximation of the high-dimensional optimal transport distance, will be described. Then I will turn the optimization dynamics into an algorithm that samples from probability distributions on Lie groups. This sampler provably converges, even without a log-concavity condition or its common relaxations. Finally, I will describe how this sampler leads to a structurally pleasant diffusion generative model that allows users, given training data following any latent statistical distribution on a Lie group manifold, to generate more data on that same manifold following the same distribution. If time permits, applications such as molecule design and generative innovation of quantum processes will be briefly discussed.
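As a toy illustration of the basic manifold-optimization primitive behind the first part of the talk (far simpler than the momentum-accelerated methods described there): Riemannian gradient descent with heavy-ball momentum on the unit sphere, minimizing a quadratic. The objective, step sizes, retraction, and crude momentum transport below are all illustrative assumptions.

```python
# Toy: minimize f(x) = x^T A x over the unit sphere S^{n-1}; the minimizer is
# the eigenvector of A with the smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = A + A.T        # symmetric objective matrix

x = rng.standard_normal(n); x /= np.linalg.norm(x)  # start on the sphere
v = np.zeros(n)                                     # momentum (tangent vector)
lr, beta = 5e-3, 0.8

for _ in range(3000):
    g = 2 * A @ x
    g -= (g @ x) * x            # Riemannian gradient: project onto tangent space
    v = beta * v + g            # heavy-ball momentum update
    x = x - lr * v
    x /= np.linalg.norm(x)      # retraction: renormalize back onto the sphere
    v -= (v @ x) * x            # crude transport: re-project momentum at new point

print(x @ A @ x, np.linalg.eigvalsh(A)[0])  # Rayleigh quotient vs. smallest eigenvalue
```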
DMS Applied and Computational Mathematics Seminar
Mar 28, 2025 02:00 PM
328 Parker Hall

DMS Applied and Computational Mathematics Seminar
Mar 21, 2025 02:00 PM
328 Parker Hall

DMS Applied and Computational Mathematics Seminar
Feb 21, 2025 02:00 PM
328 Parker Hall
Speaker: Wei Zhu (Georgia Tech)
Title: Symmetry-Preserving Machine Learning: Theory and Applications
Abstract: Symmetry underlies many machine learning and scientific computing tasks, from computer vision to physical system modeling. Models designed to respect symmetry often perform better, but several questions remain. How can we measure and maintain approximate symmetry when real-world symmetries are imperfect? How much training data can symmetry-based models save? And in non-convex optimization, do these models truly converge to better solutions? In this talk, I will share my work on these challenges, revealing that the answers are sometimes surprising. The approach draws on applied probability, harmonic analysis, differential geometry, and optimization, but no specialized background is required.
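One standard symmetry-preserving construction, sketched below only as a minimal example (not necessarily the specific models analyzed in the talk), is group averaging: averaging a network's outputs over a finite symmetry group makes the predictor exactly invariant by construction, and it remains a sensible baseline when the real-world symmetry is only approximate.

```python
# Sketch: wrap any image backbone so its output is invariant to the C4 group
# of 90° rotations, by averaging over the group orbit of the input.
import torch
import torch.nn as nn

class C4Invariant(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):                       # x: (N, C, H, W)
        outs = [self.backbone(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
        return torch.stack(outs).mean(dim=0)    # exact C4 invariance by construction

backbone = nn.Sequential(                       # an arbitrary small backbone
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model = C4Invariant(backbone)
x = torch.randn(2, 1, 28, 28)
# invariance check: rotating the input leaves the output (numerically) unchanged
print(torch.allclose(model(x), model(torch.rot90(x, 1, dims=(2, 3))), atol=1e-5))
```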
DMS Applied and Computational Mathematics Seminar
Feb 14, 2025 02:00 PM
328 Parker Hall

DMS Applied and Computational Mathematics Seminar
Nov 22, 2024 01:00 PM
328 Parker Hall
Speaker: Yi Liu (Auburn University)
Title: Convergence Analysis of the ADAM Algorithm for Linear Inverse Problems
Abstract: The ADAM algorithm is one of the most popular stochastic optimization methods in machine learning. Its remarkable performance in training models with massive datasets suggests its potential efficiency in solving large-scale inverse problems. In this work, we apply the ADAM algorithm to solve linear inverse problems and establish a sub-exponential convergence rate for the algorithm when noise is absent. Based on the convergence analysis, we present an a priori stopping criterion for the ADAM iteration when it is applied to inverse problems in the presence of noise. The convergence analysis is achieved by constructing suitable Lyapunov functions for the algorithm, viewed as a dynamical system with respect to the iteration number. At each iteration, we establish error estimates for the iterates by analyzing the constructed Lyapunov functions via stochastic analysis. Various numerical experiments are conducted to support the theoretical findings and to compare with the performance of the stochastic gradient descent (SGD) method.
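As a concrete sketch of the setting, under assumptions (the forward operator, noise level, and hyperparameters below are illustrative, and the stopping rule shown is a discrepancy-style stand-in rather than the paper's a priori criterion): ADAM applied to a noisy, ill-conditioned linear least-squares problem with early stopping.

```python
# Sketch: ADAM with full (deterministic) gradients on min_x ||Ax - y||^2,
# where A is ill-conditioned and y carries noise of level delta.
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 50
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                     # geometrically decaying singular values
A = (U[:, :n] * s) @ V.T                    # ill-conditioned forward operator
x_true = rng.standard_normal(n)
delta = 1e-2                                # assumed noise level
y = A @ x_true + delta * rng.standard_normal(m)

x = np.zeros(n); mom = np.zeros(n); vel = np.zeros(n)
lr, b1, b2, eps, tau = 1e-2, 0.9, 0.999, 1e-8, 1.1
for t in range(1, 20001):
    r = A @ x - y
    if np.linalg.norm(r) <= tau * delta * np.sqrt(m):   # stop near the noise level
        break
    g = 2 * A.T @ r                                     # full gradient of ||Ax - y||^2
    mom = b1 * mom + (1 - b1) * g                       # first-moment estimate
    vel = b2 * vel + (1 - b2) * g**2                    # second-moment estimate
    mhat, vhat = mom / (1 - b1**t), vel / (1 - b2**t)   # bias correction
    x -= lr * mhat / (np.sqrt(vhat) + eps)

print("iterations:", t,
      "relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Stopping before the residual drops below the noise level keeps the iteration from fitting the noise, which is the role a stopping criterion plays in iterative regularization.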