On the regularization of convolutional layers
Organizers
Zhen Li, Xin Liang, Zhi Ting Ma, Seyed Mofidi, Li Wang, Fan Sheng Xiong, Shuo Yang, Wu Yue Yang
Speaker
Peichang Guo
Time
Monday, December 23, 2024 3:00 PM - 4:00 PM
Venue
Online: Zoom 928 682 9093 (BIMSA)
Abstract
Convolutional neural networks are an important class of models in deep learning, and a convolution operation can be represented by a tensor. To avoid exploding or vanishing gradients and to improve the generalizability of a network, it is desirable to bound the singular values of the transformation matrix corresponding to this tensor. We propose penalty functions that constrain these singular values, and for each penalty function we derive a gradient descent algorithm formulated directly in terms of the tensor. Numerical examples demonstrate the effectiveness of the method.
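The abstract does not spell out the specific penalty functions or the tensor-form gradients; the following is only a minimal PyTorch sketch of the general idea, assuming a hinge-type quadratic penalty on singular values above a chosen bound and building the transformation matrix explicitly by passing basis inputs through the layer (practical only for small input sizes; efficient frequency-domain computations exist for circular convolutions). The function names, the penalty form, and the bound of 1.0 are illustrative assumptions, not the speaker's algorithm.

```python
import math
import torch
import torch.nn as nn

def conv_transform_matrix(conv, in_shape):
    """Build the transformation matrix of a (bias-free) conv layer by
    pushing each standard basis input through the layer.
    Columns index flattened input entries, rows index flattened outputs."""
    n_in = math.prod(in_shape)
    basis = torch.eye(n_in).reshape(n_in, *in_shape)   # one-hot inputs as a batch
    out = conv(basis)                                  # (n_in, C_out, H_out, W_out)
    return out.reshape(n_in, -1).T                     # shape (n_out, n_in)

def singular_value_penalty(conv, in_shape, bound=1.0):
    """Illustrative penalty: quadratic cost on singular values of the
    transformation matrix that exceed `bound` (a hypothetical choice)."""
    M = conv_transform_matrix(conv, in_shape)
    sigmas = torch.linalg.svdvals(M)                   # differentiable in PyTorch
    return ((sigmas - bound).clamp(min=0.0) ** 2).sum()

# Usage sketch: add the penalty to the task loss before calling backward().
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
penalty = singular_value_penalty(conv, in_shape=(3, 16, 16), bound=1.0)
penalty.backward()   # gradients w.r.t. the kernel flow through the SVD
```

In a training loop one would typically minimize `task_loss + lam * penalty`, so that gradient descent on the kernel simultaneously fits the data and keeps the singular values of the induced linear map near or below the chosen bound.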