Speakers

Qizhai Li, University of Chinese Academy of Sciences

Biography: Qizhai Li (Chinese: 李启寨) is a professor at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He received his B.Sc. degree from the University of Science and Technology of China in 2001 and his Ph.D. degree from the Graduate School of the Chinese Academy of Sciences in 2006. He was elected a Fellow of the ASA in 2020 and an Elected Member of the ISI in 2016, and received the Outstanding Young Scholar Award of CSIAS in 2015. His research interests include biostatistics, statistical genetics, and high-dimensional data analysis. He has published more than 110 papers. He currently serves on the Board of Executive Directors of the Chinese Mathematical Society, among other roles.

Title: Generalization Error Bounds for Nonconvex Learning via Iterative Stochastic Gradient Algorithms

Abstract: Algorithmic stability is one of the major concerns in machine learning. The generalization error, as a measure of overfitting, is closely connected with the stability of the algorithm. Stochastic gradient descent (SGD) has been widely used in training deep neural networks, and its effectiveness has been verified both theoretically and practically. Alternatively, stochastic gradient Langevin dynamics (SGLD) is used for Bayesian deep learning. In this paper, we study the generalization error bounds of SGD and establish a bound for nonconvex learning, which is further extended to a general update rule. For SGLD, we achieve a generalization error bound of O(1/n) for its weighted averaging estimator. We also obtain a generalization error bound for the continuous-time Langevin dynamics, which enhances our understanding of the algorithmic stability of SGLD.
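For readers unfamiliar with the two algorithms the abstract contrasts, the following is a minimal sketch (not the speaker's code) of the SGD and SGLD update rules on a toy least-squares problem. The toy objective, the step size, the inverse temperature, and the uniform averaging at the end are illustrative assumptions; in particular, the paper's weighted averaging estimator may use different weights than the simple mean shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features; loss is the mean squared residual.
n, d = 200, 5
X = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
y = X @ theta_true + 0.1 * rng.standard_normal(n)

def grad(theta, idx):
    """Stochastic gradient of the squared loss on the mini-batch `idx`."""
    r = X[idx] @ theta - y[idx]
    return 2.0 * X[idx].T @ r / len(idx)

def run(num_steps=2000, batch_size=10, step_size=0.01, inverse_temperature=None):
    """Plain SGD when inverse_temperature is None; SGLD otherwise.

    SGD:  theta <- theta - eta * g
    SGLD: theta <- theta - eta * g + sqrt(2 * eta / beta) * N(0, I)
    """
    theta = np.zeros(d)
    iterates = []
    for _ in range(num_steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        theta = theta - step_size * grad(theta, idx)
        if inverse_temperature is not None:
            # Injected Gaussian noise is what distinguishes SGLD from SGD.
            theta = theta + np.sqrt(2.0 * step_size / inverse_temperature) \
                * rng.standard_normal(d)
        iterates.append(theta.copy())
    return np.array(iterates)

sgd_path = run()
sgld_path = run(inverse_temperature=1e4)

# The abstract's O(1/n) bound for SGLD concerns a weighted averaging
# estimator of the iterates; a uniform average is used here purely as a
# placeholder for whatever weights the paper actually analyzes.
theta_bar = sgld_path.mean(axis=0)
print("SGD final iterate:     ", sgd_path[-1].round(3))
print("SGLD averaged estimate:", theta_bar.round(3))
```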

The presentation is based on joint work with Wenjun Xiong, Faming Liang, Juan Ding, and Junjian Zhang.