Published Online

Review
Efficient Visual Recognition: A Survey on Recent Advances and Brain-inspired Methodologies
Yang Wu, Ding-Heng Wang, Xiao-Tong Lu, Fan Yang, Man Yao, Wei-Sheng Dong, Jian-Bo Shi, Guo-Qi Li
doi: 10.1007/s11633-022-1340-5
Abstract:
Visual recognition is currently one of the most important and active research areas in computer vision, pattern recognition, and even the general field of artificial intelligence. It has great fundamental importance and strong industrial needs. In particular, modern deep neural networks (DNNs) and some brain-inspired methodologies have largely boosted recognition performance on many concrete tasks, with the help of large amounts of training data and new powerful computation resources. Although recognition accuracy is usually the first concern for new progress, efficiency is actually rather important and sometimes critical for both academic research and industrial applications. Moreover, insightful views on the opportunities and challenges of efficiency are also highly needed by the entire community. While general surveys on the efficiency issue have been conducted from various perspectives, as far as we are aware, scarcely any of them has focused systematically on visual recognition, and thus it is unclear which advances are applicable to it and what else should be considered. In this survey, we present a review of recent advances, together with our suggestions on promising new directions for improving the efficiency of DNN-related and brain-inspired visual recognition approaches, including efficient network compression and dynamic brain-inspired networks. We investigate not only from the model but also from the data point of view (which is not the case in existing surveys) and focus on four typical data types (images, video, points, and events). This survey attempts to provide a systematic summary that can serve as a valuable reference and inspire both researchers and practitioners working on visual recognition problems.
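Among the network compression techniques such a survey covers, magnitude-based weight pruning is perhaps the simplest to illustrate. Below is a minimal sketch in PyTorch; the two-layer model and the sparsity level are illustrative choices of our own, not taken from the paper:

```python
# A minimal sketch of magnitude-based weight pruning, one common network
# compression technique. Model architecture and sparsity are assumptions.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in every linear layer."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            weights = module.weight.data
            k = int(weights.numel() * sparsity)
            if k == 0:
                continue
            # Threshold = k-th smallest absolute weight value.
            threshold = weights.abs().flatten().kthvalue(k).values
            mask = weights.abs() > threshold
            weights.mul_(mask)

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.5)
```

In practice, pruning of this kind is usually interleaved with fine-tuning to recover accuracy; the survey itself discusses much richer compression and dynamic-network families.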
Neural Decoding of Visual Information Across Different Neural Recording Modalities and Approaches
Yi-Jun Zhang, Zhao-Fei Yu, Jian K. Liu, Tie-Jun Huang
doi: 10.1007/s11633-022-1335-2
Abstract:
Vision plays a peculiar role in intelligence. Visual information, which forms a large part of all sensory information, is fed into the human brain to formulate the various types of cognition and behaviour that make humans intelligent agents. Recent advances have led to the development of brain-inspired algorithms and models for machine vision. One of the key components of these methods is the utilization of the computational principles underlying biological neurons. Additionally, advanced experimental neuroscience techniques have generated different types of neural signals that carry essential visual information. Thus, there is a high demand for mapping out functional models for reading out visual information from neural signals. Here, we briefly review recent progress on this issue, with a focus on how machine learning techniques can help in the development of models for handling various types of neural signals, from fine-scale neural spikes and single-cell calcium imaging to coarse-scale electroencephalography (EEG) and functional magnetic resonance imaging recordings of brain signals.
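To make "reading out visual information from neural signals" concrete, here is a minimal decoding sketch: a linear classifier mapping trial-by-neuron responses back to stimulus identity. The data are synthetic stand-ins for recorded responses (e.g., spike counts or EEG features), and the decoder choice and dimensions are illustrative assumptions only:

```python
# A minimal sketch of a neural decoding model: predict which stimulus was
# presented from (synthetic) neural responses. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 200, 50, 4
stimuli = rng.integers(0, n_stimuli, size=n_trials)   # presented stimulus per trial
tuning = rng.normal(size=(n_stimuli, n_neurons))      # toy tuning of each neuron
responses = tuning[stimuli] + rng.normal(scale=1.0, size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, responses, stimuli, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")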
Research Article
Clause-level Relationship-aware Math Word Problems Solver
Chang-Yang Wu, Xin Lin, Zhen-Ya Huang, Yu Yin, Jia-Yu Liu, Qi Liu, Gang Zhou
doi: 10.1007/s11633-022-1351-2
Abstract:
Automatically solving math word problems, which involves comprehension, cognition, and reasoning, is a crucial issue in artificial intelligence research. Existing math word problem solvers mainly work on word-level relationship extraction and the generation of expression solutions, while lacking consideration of clause-level relationships. To this end, inspired by the theory of two levels of processing in comprehension, we propose a novel clause-level relationship-aware math solver (CLRSolver) to mimic the process of human comprehension from the lower level to the higher level. Specifically, in the lower-level processes, we split problems into clauses according to their natural division and learn their semantics. In the higher-level processes, following humans' multi-view understanding of clause-level relationships, we first apply a CNN-based module to learn the dependency relationships between clauses from word relevance in a local view. Then, we propose two novel relationship-aware mechanisms to learn dependency relationships from the clause semantics in a global view. Next, we enhance the representation of clauses based on the learned clause-level dependency relationships. For expression generation, we develop a tree-based decoder to generate the mathematical expression. We conduct extensive experiments on two datasets, where the results demonstrate the superiority of our framework.
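As a rough illustration of the clause-level view (not CLRSolver's actual CNN-based and relationship-aware modules), the sketch below splits a problem into clauses by its natural punctuation, embeds each clause with a toy bag-of-words encoder, and scores pairwise clause dependencies:

```python
# A minimal sketch of clause splitting and clause-level dependency scoring.
# The embedding and scoring are toy stand-ins for the paper's modules.
import re
import numpy as np

def split_clauses(problem: str) -> list:
    # Follow the natural division of the problem text (punctuation).
    return [c.strip() for c in re.split(r"[,.;?]", problem) if c.strip()]

def embed(clause: str, dim: int = 16) -> np.ndarray:
    # Toy bag-of-words embedding: bucket each word by its character sum.
    vec = np.zeros(dim)
    for word in clause.lower().split():
        vec[sum(ord(ch) for ch in word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

problem = "Tom has 3 apples, he buys 5 more, how many apples does he have now?"
clauses = split_clauses(problem)
E = np.stack([embed(c) for c in clauses])   # (num_clauses, dim)
scores = E @ E.T                            # pairwise clause relevance
dependency = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(clauses)
print(dependency.round(2))
```

In CLRSolver, such dependency estimates are learned rather than fixed, and are used to enhance clause representations before the tree-based decoder generates the expression.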
Exploring the Brain-like Properties of Deep Neural Networks: A Neural Encoding Perspective
Qiongyi Zhou, Changde Du, Huiguang He
doi: 10.1007/s11633-022-1348-x
Abstract:
Nowadays, deep neural networks (DNNs) have been equipped with powerful representation capabilities. Deep convolutional neural networks (CNNs), which draw inspiration from the visual processing mechanism of the primate early visual cortex, have outperformed humans on object categorization and have been found to possess many brain-like properties. Recently, vision transformers (ViTs) have emerged as a striking DNN paradigm and have achieved remarkable improvements over CNNs on many vision tasks. It is natural to ask to what extent ViTs are brain-like. Beyond the model paradigm, we are also interested in the effects of factors such as model size, multimodality, and temporality on the ability of networks to model the human visual pathway, especially considering that existing research has been limited to CNNs. In this paper, we systematically evaluate the brain-like properties of 30 kinds of computer vision models, varying from CNNs and ViTs to their hybrids, from the perspective of explaining the activities of the human visual cortex triggered by dynamic stimuli. Experiments on two neural datasets demonstrate that neither CNNs nor transformers are the optimal model paradigm for modelling the human visual pathway. ViTs reveal hierarchical correspondences to the visual pathway, as CNNs do. Moreover, we find that multi-modal and temporal networks can better explain the neural activities of large parts of the visual cortex, whereas a larger model size is not a sufficient condition for bridging the gap between human vision and artificial networks. Our study sheds light on the design principles for more brain-like networks. The code is available at https://github.com/QYiZhou/LWNeuralEncoding.
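Neural encoding evaluations of this kind typically regress voxel responses on a network's features and score held-out prediction correlation. The sketch below follows that generic protocol with synthetic stand-ins for both the model features and the fMRI data; it is illustrative only and not the paper's exact pipeline:

```python
# A minimal sketch of a neural encoding evaluation: fit a ridge regression
# from network features to voxel responses, score held-out correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 128, 20
features = rng.normal(size=(n_stimuli, n_features))   # model activations per stimulus
W = rng.normal(size=(n_features, n_voxels))
voxels = features @ W + rng.normal(scale=5.0, size=(n_stimuli, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, voxels,
                                          test_size=0.2, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = encoder.predict(X_te)
# Encoding score: per-voxel Pearson correlation on held-out stimuli.
corr = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corr):.2f}")
```

A model whose features yield higher held-out correlations across visual areas is said to be more "brain-like" under this protocol.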
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Cheng-Cheng Ma, Bao-Yuan Wu, Yan-Bo Fan, Yong Zhang, Zhi-Feng Li
doi: 10.1007/s11633-022-1328-1
Abstract:
Adversarial examples are well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, and uniform). Therefore, it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Besides, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features for adversarial detection from the shape factor, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained as an adversarial detector on the MBF features. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust in detecting adversarial examples of different crafting methods and from different sources than state-of-the-art adversarial detection methods.
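The detection pipeline can be sketched as: estimate a GGD shape factor from each layer's responses, then train an SVM on the per-layer features. In the sketch below, a standard moment-matching estimator stands in for the paper's MBF-based estimation, and the "responses" are synthetic draws (Gaussian for benign, Laplacian for adversarial, so the two classes differ in shape factor):

```python
# A minimal sketch of GGD-shape-based adversarial detection. Uses a
# moment-matching shape estimator as a stand-in for the paper's MBF
# coefficients; response data are synthetic.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma
from sklearn.svm import SVC

def ggd_shape(x: np.ndarray) -> float:
    """Moment-matching estimate of the GGD shape factor beta."""
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)   # scale-free moment ratio
    r = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b))
    return brentq(lambda b: r(b) - rho, 0.1, 10.0)    # solve r(beta) = rho

rng = np.random.default_rng(0)

def shape_features(sampler, n_examples: int, n_layers: int = 4) -> np.ndarray:
    # One shape factor per (hypothetical) internal layer of the DNN.
    return np.array([[ggd_shape(sampler(512)) for _ in range(n_layers)]
                     for _ in range(n_examples)])

benign = shape_features(lambda n: rng.normal(size=n), 100)       # beta ~ 2
adversarial = shape_features(lambda n: rng.laplace(size=n), 100) # beta ~ 1

X = np.vstack([benign, adversarial])
y = np.array([0] * 100 + [1] * 100)
detector = SVC(kernel="rbf").fit(X, y)   # SVM as the adversarial detector
print(f"training accuracy: {detector.score(X, y):.2f}")
```

The shape factor is appealing for this purpose precisely because, unlike the mean and variance, it is invariant to the scale of the responses.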