Xuetong Li, Guanghua School of Management, Peking University, Beijing, People's Republic of China
Yuan Gao, Guanghua School of Management, Peking University, Beijing, People's Republic of China (yuan_gao@pku.edu.cn; ygao_stat@outlook.com)
Hong Chang, Guanghua School of Management, Peking University, Beijing, People's Republic of China
Danyang Huang, Center for Applied Statistics and School of Statistics, Renmin University of China, Beijing, People's Republic of China
Yingying Ma, School of Economics and Management, Beihang University, Beijing, People's Republic of China
Rui Pan, School of Statistics and Mathematics, Central University of Finance and Economics, Beijing, People's Republic of China
Haobo Qi, School of Statistics, Beijing Normal University, Beijing, People's Republic of China
Feifei Wang, Center for Applied Statistics and School of Statistics, Renmin University of China, Beijing, People's Republic of China
Shuyuan Wu, School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, People's Republic of China
Ke Xu, School of Statistics, University of International Business and Economics, Beijing, People's Republic of China
Jing Zhou, Center for Applied Statistics and School of Statistics, Renmin University of China, Beijing, People's Republic of China
Xuening Zhu, School of Data Science and MOE Laboratory for National Development and Intelligent Governance, Fudan University, Shanghai, People's Republic of China
Yingqiu Zhu, School of Statistics, University of International Business and Economics, Beijing, People's Republic of China
Hansheng Wang, Guanghua School of Management, Peking University, Beijing, People's Republic of China
This paper presents a selective review of statistical computation methods for massive data analysis. A large number of statistical methods for massive data computation have been developed rapidly in the past decades. In this work, we focus on three categories of statistical computation methods: (1) distributed computing, (2) subsampling methods, and (3) minibatch gradient techniques. The first class of literature concerns distributed computing and focuses on the situation where the dataset is too large to be comfortably handled by a single computer; in this case, a distributed computation system with multiple machines has to be used. The second class of literature concerns subsampling methods and addresses the situation where the dataset is small enough to be stored on a single computer but too large to be easily processed in its memory as a whole. The last class of literature studies minibatch gradient-based optimization techniques, which have been extensively used for training various deep learning models.
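To fix ideas, the following is a minimal, self-contained NumPy sketch (not taken from the paper) of a toy instance of each of the three categories, applied to simulated linear regression data. All quantities here (sample size, number of workers, subsample size, batch size, learning rate) are hypothetical choices for illustration only.

```python
import numpy as np

# Simulated linear regression data: y = X @ beta_true + noise.
rng = np.random.default_rng(0)
n, p = 100_000, 10
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=n)

# (1) Distributed computing, one-shot averaging flavor: each of K
# "workers" fits OLS on its own data block, and the "central" machine
# averages the local estimates.
K = 10
beta_dist = np.mean(
    [np.linalg.lstsq(Xk, yk, rcond=None)[0]
     for Xk, yk in zip(np.array_split(X, K), np.array_split(y, K))],
    axis=0,
)

# (2) Subsampling: fit OLS on a uniform random subsample instead of
# the full dataset.
idx = rng.choice(n, size=1_000, replace=False)
beta_sub = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

# (3) Minibatch gradient descent: at each step, update the estimate
# using the gradient of the least-squares loss computed on a small
# random batch rather than on all n observations.
beta_mb = np.zeros(p)
batch_size, lr, n_epochs = 64, 0.1, 5
for epoch in range(n_epochs):
    perm = rng.permutation(n)                        # reshuffle each epoch
    for start in range(0, n, batch_size):
        b = perm[start:start + batch_size]
        grad = X[b].T @ (X[b] @ beta_mb - y[b]) / len(b)
        beta_mb -= lr * grad

for name, est in [("distributed", beta_dist),
                  ("subsampling", beta_sub),
                  ("minibatch", beta_mb)]:
    print(f"{name} estimation error: {np.linalg.norm(est - beta_true):.4f}")
```

Each of the three estimators trades some statistical efficiency for computational feasibility: the distributed estimator never moves raw data between machines, the subsampling estimator touches only a small fraction of the observations, and the minibatch estimator never needs more than one batch in memory at a time.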
To cite this article: Xuetong Li, Yuan Gao, Hong Chang, Danyang Huang, Yingying Ma, Rui Pan, Haobo Qi, Feifei Wang, Shuyuan Wu, Ke Xu, Jing Zhou, Xuening Zhu, Yingqiu Zhu & Hansheng Wang (2024). A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques. Statistical Theory and Related Fields. https://doi.org/10.1080/24754269.2024.2343151