Funding
National Natural Science Foundation of China (61972154); Science and Technology Commission of Shanghai Municipality (20511101600)
Parallel block-based stochastic computing with adapted quantization
Received date: 2023-01-04
Online published: 2024-03-18
ZHANG Yongzhuo, ZHUGE Qingfeng, SHA Edwin Hsing-Mean, SONG Yuhong. Parallel block-based stochastic computing with adapted quantization [J]. Journal of East China Normal University (Natural Science), 2024, 2024(2): 76-85. DOI: 10.3969/j.issn.1000-5641.2024.02.009
The large computation and storage demands of deep neural network models make them hard to deploy on embedded devices with limited area and power. To address this issue, stochastic computing represents data as stochastic bit sequences and implements arithmetic operations such as addition and multiplication with basic logic units, reducing the storage footprint and computational complexity of neural networks. However, when the stochastic sequences are short, converting network weights from floating-point numbers to stochastic sequences introduces discretization errors that reduce the inference accuracy of stochastic computing network models. Longer stochastic sequences expand the representation range and alleviate this problem, but they also incur longer computational latency and higher energy consumption. We propose a differentiable quantization function based on the Fourier transform. During training, the function improves how well the model matches the stochastic sequences, reducing the discretization error introduced by data conversion and thereby preserving the accuracy of stochastic computing neural networks that use short sequences. Additionally, we design an adder that improves the accuracy of the computation unit, and we partition the inputs into blocks that are computed in parallel to further shorten latency. Experimental results show a 20% improvement in model inference accuracy over other methods and a 50% reduction in computational latency.
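To make the mechanics in the abstract concrete, below is a minimal, self-contained Python sketch; it is our own construction, not the paper's implementation. The helper names `to_bipolar_stream`, `sc_multiply`, and `soft_quantize` are hypothetical, the stream lengths and quantization step are illustrative, and the truncated Fourier series of the rounding staircase is only one plausible realization of a differentiable, Fourier-based quantization function.

```python
import numpy as np

def to_bipolar_stream(x, n, rng):
    """Encode x in [-1, 1] as an n-bit stochastic stream: P(bit = 1) = (x + 1) / 2."""
    return rng.random(n) < (x + 1) / 2

def from_bipolar_stream(bits):
    """Decode a bipolar stream: x = 2 * P(bit = 1) - 1."""
    return 2.0 * bits.mean() - 1.0

def sc_multiply(a_bits, b_bits):
    """Bipolar stochastic-computing multiplication: a bitwise XNOR of the streams."""
    return ~(a_bits ^ b_bits)

def soft_quantize(x, step, terms=8):
    """Differentiable approximation of round(x / step) * step.

    Built from the Fourier series of the sawtooth,
    round(t) ~= t - sum_k (-1)**(k+1) * sin(2*pi*k*t) / (pi*k);
    truncating at `terms` harmonics gives a smooth staircase whose
    gradient is defined everywhere, so it can be used during training.
    """
    t = np.asarray(x, dtype=float) / step
    k = np.arange(1, terms + 1)
    saw = np.sum((-1.0) ** (k + 1) * np.sin(2 * np.pi * np.outer(t, k)) / (np.pi * k), axis=-1)
    return step * (t - saw)

rng = np.random.default_rng(0)
a, b = 0.6, -0.5
for n in (16, 256, 4096):  # shorter streams -> larger discretization error
    prod = from_bipolar_stream(
        sc_multiply(to_bipolar_stream(a, n, rng), to_bipolar_stream(b, n, rng)))
    print(f"N = {n:4d}: SC product {prod:+.3f} (exact {a * b:+.3f})")

weights = np.linspace(-1.0, 1.0, 5)
print("soft-quantized weights:", np.round(soft_quantize(weights, step=2 / 16), 3))
```

The XNOR trick works because, for bipolar encoding x = 2p - 1 with independent streams, P(XNOR = 1) = p_a p_b + (1 - p_a)(1 - p_b), which decodes exactly to x_a x_b; the residual error in the demo comes from the finite stream length, which is the discretization error that an adapted quantization during training is meant to absorb.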
Key words: stochastic computing; quantization; neural network optimization