Review Articles

Discussion of: ‘A review of distributed statistical inference’

Shaogao Lv

School of Statistics and Data Science, Nanjing Audit University, Nanjing, People's Republic of China

Xingcai Zhou

School of Statistics and Data Science, Nanjing Audit University, Nanjing, People's Republic of China

Pages 105–107 | Received 11 Nov. 2021, Accepted 20 Nov. 2021, Published online: 28 Dec. 2021

References

  • Blanchard, P., El Mhamdi, E. M., Guerraoui, R., & Stainer, J. (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. Proceedings of the 31st international conference on neural information processing systems, Long Beach, CA, USA (pp. 118–128).
  • Chen, Y., Su, L., & Xu, J. (2017). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 46(1), 1–25. https://doi.org/10.1145/3308809.3308857
  • Ghosh, A., Maity, R. K., Kadhe, S., Mazumdar, A., & Ramchandran, K. (2020). Communication-efficient and Byzantine-robust distributed learning with error feedback. arXiv:1911.09721v3.
  • Gu, R., Yang, S., & Wu, F. (2019). Distributed machine learning on mobile devices: a survey. arXiv:1909.08329v1.
  • Lin, S.-B., Guo, X., & Zhou, D.-X. (2017). Distributed learning with regularized least squares. The Journal of Machine Learning Research, 18(49), 3202–3232.
  • Liu, J., Huang, J., Zhou, Y., Li, X., Ji, S., Xiong, H., & Dou, D. (2021). From distributed machine learning to federated learning: a survey. arXiv:2104.14362.
  • Shamir, O., Srebro, N., & Zhang, T. (2014). Communication-efficient distributed optimization using an approximate Newton-type method. In International conference on machine learning (pp. 1000–1008).
  • Su, L., & Vaidya, N. H. (2016). Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms. In Proceedings of the 2016 ACM symposium on principles of distributed computing (pp. 425–434). Association for Computing Machinery.
  • Tu, J. Y., Liu, W. D., & Mao, X. J. (2021). Byzantine-robust distributed sparse learning for M-estimation. Machine Learning. https://doi.org/10.1007/s10994-021-06001-x
  • Verbraeken, J., Wolting, M., Katzy, J., Kloppenburg, J., Verbelen, T., & Rellermeyer, J. S. (2020). A survey on distributed machine learning. ACM Computing Surveys, 53(2), 1–33. https://doi.org/10.1145/3377454
  • Yin, D., Chen, Y., Ramchandran, K., & Bartlett, P. (2018). Byzantine-robust distributed learning: Towards optimal statistical rates. Proceedings of the 35th international conference on machine learning, Stockholm, Sweden, PMLR 80 (pp. 5650–5659).
  • Zhang, Y., Duchi, J. C., & Wainwright, M. J. (2013). Communication-efficient algorithms for statistical optimization. The Journal of Machine Learning Research, 14, 3321–3363.
  • Zhou, X. C., Chang, L., Xu, P. F., & Lv, S. G. (2021). Communication-efficient Byzantine-robust distributed learning with statistical guarantee. arXiv:2103.00373v1.

To cite this article: Shaogao Lv & Xingcai Zhou (2021): Discussion of: ‘A review of distributed statistical inference’, Statistical Theory and Related Fields, DOI: 10.1080/24754269.2021.2015868
To link to this article: https://doi.org/10.1080/24754269.2021.2015868