Selected Papers and Deep Dives in Large Foundation Models
Large foundation models have achieved remarkable success across various domains, including general applications such as natural language processing, images, speech, and video, as well as scientific fields such as materials science, molecular biology, and protein engineering. While the underlying techniques are firmly rooted in applied mathematics, their development has often been driven by empirical engineering practices, leading to significant practical breakthroughs. Diffusion models serve as a prime example, demonstrating both substantial engineering benefits and profound mathematical underpinnings.
This seminar series aims to bridge the gap between foundation models and their mathematical foundations, fostering interdisciplinary discussions, particularly between mathematics and machine learning. In this seminar, we will select papers and dive deeply into them to boost our own research.
The basic organization principle is one for all, all for one. We focus on generative models, including but not limited to a) multimodality, b) reasoning/RL, and c) agents, drawn from ICML, NeurIPS, ICLR, as well as Nature and Science.
Suggestions:
When reading a paper: start with the title—if it looks interesting ⇒ read the abstract ⇒ look at the figures and the Method, and interpret the Method together with the figures. If any step feels uninteresting, stop and move on to the next paper. Then check the experiments and the compared baselines. After a close read, if you still have energy, you can look at the reviewers’ comments on OpenReview, e.g., https://openreview.net/forum?id=NniXePXVXw
Spend <0.5h writing Beamer LaTeX slides. You can upload the paper PDF to an LLM and have it generate Beamer LaTeX; Claude has worked well before. Typically: upload the PDF, then prompt “Make detailed Beamer LaTeX slides for me” ⇒ copy-paste the output into Overleaf and compile. Usually the model will leave placeholders for images. Install a tool like Snipaste: press F1 to select a region; an icon at the bottom-right copies it to the clipboard. Then paste directly into the Overleaf editor—the image will be uploaded and the \includegraphics code generated. Adjust it and drop it into the pre-reserved image slots; a minimal frame skeleton is sketched below.
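As an illustration only, here is a minimal Beamer skeleton of the kind such a prompt typically produces; the figure name (fig1.png) and slide text are placeholders you would replace with the pasted screenshots and the paper's content, not the output of any particular model:

```latex
% Minimal Beamer sketch (hypothetical example; fig1.png is a placeholder
% for an image pasted into Overleaf via Snipaste).
\documentclass{beamer}
\usepackage{graphicx}

\title{Paper Title}
\author{Presenter}
\date{\today}

\begin{document}

\begin{frame}
  \titlepage
\end{frame}

\begin{frame}{Method}
  \begin{itemize}
    \item One-line summary of the key idea.
    \item How the figure below illustrates the method.
  \end{itemize}
  % Image slot left by the LLM; swap in the screenshot uploaded to Overleaf.
  \begin{center}
    \includegraphics[width=0.8\textwidth]{fig1.png}
  \end{center}
\end{frame}

\end{document}
```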
If you are interested, please contact Pipi Hu, hpp@bimsa.cn.
Date
19th September, 2025 to 23rd January, 2026
Location
| Weekday | Time | Venue | Online | ID | Password |
|---|---|---|---|---|---|
| Friday | 13:00 - 14:30 | Shuangqing | - | - | - |