Mathematical Theory in Deep Learning
This course explores the fundamental mathematical theories underpinning deep learning. It covers the approximation capabilities of neural networks through universal approximation theorems, examines optimization in the training process, including loss landscape analysis and convergence properties, and investigates mathematical frameworks for generalization, including statistical learning theory and overparameterization phenomena. The course also addresses the mathematical foundations of generative models, analyzing the theories behind VAEs, GANs, and diffusion models from the perspectives of optimal transport and manifold learning.
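For readers new to the first topic, the short sketch below (not part of the course materials; the network width, learning rate, and target function are illustrative assumptions) fits a one-hidden-layer tanh network to f(x) = sin(x) with plain NumPy gradient descent, giving an empirical illustration of the universal approximation theorem discussed in the course.

```python
# Minimal sketch: a one-hidden-layer tanh network trained by full-batch
# gradient descent to approximate sin(x) on [-pi, pi]. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Target function sampled on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of width 32: y_hat = tanh(x W1 + b1) W2 + b2.
width = 32
W1 = rng.normal(0, 1.0, (1, width))
b1 = np.zeros(width)
W2 = rng.normal(0, 0.1, (width, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)      # hidden activations, shape (200, width)
    y_hat = h @ W2 + b2           # network output, shape (200, 1)
    err = y_hat - y

    # Backward pass for the mean-squared-error loss.
    n = len(x)
    dy = 2 * err / n              # dL/dy_hat
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = dy @ W2.T
    dz = dh * (1 - h**2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dz
    db1 = dz.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.5f}")  # a wide enough shallow net fits sin closely
```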
Lecturer
Jiayi Kang
Date
25 February - 17 June, 2025
Location
| Weekday | Time | Venue | Online | Meeting ID | Password |
|---|---|---|---|---|---|
| Tuesday | 13:30 - 16:05 | A3-4-101 | ZOOM 13 | 637 734 0280 | BIMSA |
Audience
Advanced Undergraduate, Graduate
Video Public
Yes
Notes Public
Yes
Language
English
Lecturer Intro
Jiayi Kang received his Ph.D. in Mathematics from Tsinghua University in 2024. He joined the Beijing Institute of Mathematical Sciences and Applications (BIMSA) as an Assistant Researcher in July 2024, and became an Assistant Professor at the Hetao Institute for Mathematical and Interdisciplinary Sciences (HIMIS) in November 2025.
His research focuses on the intersection of deep learning, nonlinear filtering, and computational biology. His main research interests include neural network-based filtering algorithms and their mathematical foundations; sampling methods in Wasserstein geometry; nonlinear filtering theory (including the Yau-Yau method) and its applications in climate science and other fields; and computational genomics and evolutionary systems modeling. He is committed to solving complex problems in science and engineering using mathematical and machine learning methods.