Speakers

Shiliang Sun, East China Normal University

Biography: Shiliang Sun (Chinese: 孙仕亮) received the B.E. degree from Beijing University of Aeronautics and Astronautics (BUAA), and the M.E. and Ph.D. degrees in Pattern Recognition and Intelligent Systems from Tsinghua University. In 2004, he was awarded a Microsoft Fellowship. In 2007, he joined the Department of Computer Science and Technology, East China Normal University (ECNU), where he founded the Pattern Recognition and Machine Learning (PRML) Research Group. From 2009 to 2010, he was a visiting researcher at the Centre for Computational Statistics and Machine Learning (CSML), Department of Computer Science, University College London (UCL). From March to April 2012, he was a visiting researcher at the Department of Statistics and Biostatistics, Rutgers University. In July 2014, he was a visiting researcher at the Department of Electrical Engineering, Columbia University. He is a member of the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) network of excellence and a program co-chair of ICONIP 2017.


Title: Adversarial Attacks and Defenses for Multi-view Learning

Abstract: Deep models are highly susceptible to adversarial perturbations: even when benign examples are modified with imperceptible perturbations, most models output incorrect results. By exploiting this vulnerability, adversaries can easily mount malicious attacks, which raises serious security concerns. Consequently, the robustness of deep models against adversarial attacks has become a crucial research area. Although many single-view adversarial attack and defense methods have been proposed, there has been no specific research on adversarial attacks and defenses for multi-view deep models. Multi-view models usually achieve superior performance, but whether they are more robust to adversarial examples than single-view models remains an open problem. We investigate the relative robustness of multi-view and single-view deep models, propose effective multi-view adversarial attack and defense methods, and discuss possible research directions for multi-view defenses.
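To make the notion of an imperceptible perturbation that flips a model's prediction concrete, here is a minimal sketch of a classic gradient-sign attack (in the spirit of FGSM) on a toy logistic-regression classifier. This is an illustrative assumption for exposition only; it is not the multi-view attack method from the talk, and all names, weights, and the epsilon value are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method sketch: push x by eps in the
    direction that increases the logistic (cross-entropy) loss."""
    p = sigmoid(x @ w + b)      # model's predicted P(class = 1)
    grad_x = (p - y) * w        # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy linear classifier and a clean example it classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.9)
print(sigmoid(x @ w + b) > 0.5)      # clean prediction: True (class 1)
print(sigmoid(x_adv @ w + b) > 0.5)  # adversarial prediction: False
```

A single signed-gradient step is enough to move this example across the decision boundary; the multi-view question the talk addresses is whether combining several views makes such boundary-crossing perturbations harder to find.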