BIMSA
Seminar on Control Theory and Nonlinear Filtering
Accelerated Linearized Laplace Approximation for Bayesian Deep Learning
Organizer
Speaker
Time
Tuesday, January 3, 2023 9:00 PM - 9:30 PM
Venue
Online
Abstract
I shall report on a work concerning the Laplace approximation. The Laplace approximation (LA) and its linearized variant (LLA) enable effortless adaptation of pretrained deep neural networks into Bayesian neural networks. The generalized Gauss-Newton (GGN) approximation is typically introduced to improve their tractability. However, LA and LLA still face non-trivial inefficiency issues and must rely on Kronecker-factored, diagonal, or even last-layer approximate GGN matrices in practical use. These approximations are likely to harm the fidelity of learning outcomes. To tackle this issue, inspired by the connections between LLA and neural tangent kernels (NTKs), we develop a Nyström approximation to NTKs to accelerate LLA.