Citation: Jianing Han, Ziming Wang, Jiangrong Shen, Huajin Tang. Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion. Machine Intelligence Research, vol. 20, no. 3, pp. 435–446, 2023. https://doi.org/10.1007/s11633-022-1388-2

Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion

doi: 10.1007/s11633-022-1388-2
More Information
  • Author Bios:

    Jianing Han received the B. Eng. degree in Internet of Things engineering from Taiyuan University of Technology, China in 2021. Currently, she is a master's student in computer technology at the College of Computer Science and Technology, Zhejiang University, China. Her research interests include learning algorithms in deep spiking neural networks and neural coding. E-mail: jnhan@zju.edu.cn ORCID iD: 0000-0002-5097-7894

    Ziming Wang received the B. Sc. degree in computer science from Sichuan University, China in 2020. Currently, he is a Ph.D. candidate in computer science and technology at the Department of Computer Science, Zhejiang University, China. His research interests include neuromorphic computing, machine learning, and model compression. E-mail: zi_ming_wang@zju.edu.cn

    Jiangrong Shen received the Ph.D. degree in computer science and technology from the College of Computer Science and Technology, Zhejiang University, China in 2022. She is currently a postdoctoral fellow at Zhejiang University, China. She was an honorary visiting scholar at the University of Leicester, UK in 2019. Her research interests include neuromorphic computing, cyborg intelligence and neural computation. E-mail: jrshen@zju.edu.cn (Corresponding author) ORCID iD: 0000-0003-3683-3779

    Huajin Tang received the Ph.D. degree in computational intelligence from the National University of Singapore, Singapore in 2005. He is currently a professor with Zhejiang University, China. He received the 2016 IEEE Transactions on Neural Networks and Learning Systems (TNNLS) Outstanding Paper Award. He has served as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cognitive and Developmental Systems, and Frontiers in Neuromorphic Engineering. His research work on brain GPS was reported by MIT Technology Review in 2015. His research interests include neuromorphic computing, neuromorphic hardware and cognitive systems, and robotic cognition. E-mail: htang@zju.edu.cn

  • Received Date: 2022-07-24
  • Accepted Date: 2022-10-28
  • Publish Online: 2023-03-31
  • Publish Date: 2023-06-01
  • Artificial neural network-spiking neural network (ANN-SNN) conversion, as an efficient algorithm for training deep SNNs, improves the performance of shallow SNNs and broadens their application to various tasks. However, existing conversion methods still suffer from large conversion error at low conversion time steps. In this paper, a heuristic symmetric-threshold rectified linear unit (stReLU) activation function for ANNs is proposed, based on the intrinsically different responses of integrate-and-fire (IF) neurons in SNNs and activation functions in ANNs. The negative threshold in stReLU guarantees the conversion of negative activations, and the symmetric thresholds allow positive and negative errors between activation values and spike firing rates to offset each other, thus reducing the conversion error from ANNs to SNNs. Lossless conversion from ANNs with stReLU to SNNs is demonstrated by theoretical formulation. By contrasting stReLU with asymmetric-threshold LeakyReLU and threshold ReLU, the effectiveness of symmetric thresholds is further explored. The results show that ANNs with stReLU decrease the conversion error and achieve nearly lossless conversion on the MNIST, Fashion-MNIST, and CIFAR10 datasets, with a 6× to 250× speedup over other methods. Moreover, a comparison of energy consumption between ANNs and SNNs indicates that this conversion algorithm also significantly reduces energy consumption.
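
    The abstract describes stReLU only at a high level. As a rough illustration of the idea, and not the paper's exact definition, the PyTorch sketch below keeps activations whose magnitude reaches a symmetric pair of thresholds ±θ and suppresses the small values in between, so negative activations survive and the errors introduced on either side of zero can offset each other. The class name SymmetricThresholdReLU, the fixed hyperparameter theta, and the precise piecewise form are assumptions made for this sketch.

    ```python
    import torch
    import torch.nn as nn


    class SymmetricThresholdReLU(nn.Module):
        """Hypothetical sketch of a symmetric-threshold ReLU (stReLU-like) activation.

        Values whose magnitude is at least theta pass through unchanged, so
        negative activations are preserved for later conversion to spike rates;
        values inside the symmetric band (-theta, theta) are set to zero, and
        the positive/negative errors this introduces tend to cancel on average.
        """

        def __init__(self, threshold: float = 0.1):
            super().__init__()
            self.theta = threshold  # symmetric threshold magnitude (assumed hyperparameter)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            keep = x.abs() >= self.theta                      # outside the symmetric dead band
            return torch.where(keep, x, torch.zeros_like(x))  # zero the small values


    if __name__ == "__main__":
        act = SymmetricThresholdReLU(threshold=0.1)
        x = torch.tensor([-0.50, -0.05, 0.00, 0.07, 0.80])
        print(act(x))  # expected: [-0.5, 0.0, 0.0, 0.0, 0.8]
    ```

    Once an ANN is trained with such an activation, conversion would map each unit to an IF-style spiking neuron able to emit both positive and negative spikes; that replacement step is outside the scope of this sketch.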

     

