Mathematical Theory in Deep Learning
This course explores the fundamental mathematical theories underpinning deep learning. It covers the theoretical analysis of neural networks' approximation capabilities through universal approximation theorems, examines optimization models in the training process including loss landscape analysis and convergence properties, and investigates mathematical frameworks for generalization including statistical learning theory and overparameterization phenomena. The course also addresses the mathematical foundations of generative models, analyzing theories behind VAEs, GANs, and diffusion models through optimal transport and manifold learning perspectives.
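As a small taste of the first topic, the universal approximation idea can be sketched numerically: a one-hidden-layer ReLU network with randomly chosen hidden weights, with only the output layer fit by least squares, already approximates a smooth target closely. This is an illustrative sketch under assumed settings (target `sin(x)`, 200 random features), not material from the course itself.

```python
import numpy as np

# Illustrative sketch of universal approximation (assumed setup, not from
# the course): a one-hidden-layer ReLU network with random hidden weights,
# whose output weights are fit by linear least squares, approximating
# sin(x) on [-pi, pi].

rng = np.random.default_rng(0)
n_hidden = 200                        # width of the single hidden layer
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)                         # target function to approximate

# Random hidden-layer weights and biases (a random-features construction).
W = rng.normal(size=n_hidden)
b = rng.uniform(-np.pi, np.pi, size=n_hidden)

# Hidden activations: ReLU(w_i * x + b_i) for each unit, shape (400, n_hidden).
H = np.maximum(0.0, np.outer(x, W) + b)

# Fit only the output-layer weights by least squares.
c, *_ = np.linalg.lstsq(H, y, rcond=None)
approx = H @ c

max_err = np.max(np.abs(approx - y))
print(f"max pointwise error: {max_err:.4f}")
```

Increasing the hidden width drives the error down, which is the quantitative content of the approximation theorems the course studies.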
Lecturer
Jiayi Kang
Date
February 25 to June 17, 2025
Location
| Weekday | Time | Venue | Online | ID | Password |
|---|---|---|---|---|---|
| Tuesday | 13:30 - 16:05 | A3-4-101 | ZOOM 13 | 637 734 0280 | BIMSA |
Audience
Advanced Undergraduate, Graduate
Video Public
Public
Notes Public
Public
Language
English
Lecturer Introduction
Jiayi Kang received his Ph.D. in Mathematics from Tsinghua University in 2024. He joined the Beijing Institute of Mathematical Sciences and Applications (BIMSA) as an Assistant Researcher in July 2024, and became an Assistant Professor at the Hetao Institute for Mathematical and Interdisciplinary Sciences (HIMIS) in November 2025.
His research lies at the intersection of deep learning, nonlinear filtering, and computational biology. His main interests include neural-network-based filtering algorithms and their mathematical foundations, sampling methods in Wasserstein geometry, nonlinear filtering theory (including the Yau-Yau method) and its applications in climate science and other fields, as well as computational genomics and the modeling of evolutionary systems. He is committed to solving complex problems in science and engineering with mathematical and machine-learning methods.