Research Article
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Cheng-Cheng Ma, Bao-Yuan Wu, Yan-Bo Fan, Yong Zhang, Zhi-Feng Li
doi: 10.1007/s11633-022-1328-1
Adversarial examples are well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform), so it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Moreover, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features for adversarial detection from the shape factor, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained on the MBF features as an adversarial detector. Extensive experiments on image classification demonstrate that the proposed detector is considerably more effective and robust than state-of-the-art adversarial detection methods in detecting adversarial examples crafted by different methods and from different sources.
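The feature construction described in the abstract can be sketched empirically: the magnitude of the n-th Benford-Fourier coefficient of a batch of responses is |E[exp(-2πin·log10|x|)]|, and its value depends on the GGD shape factor. A minimal NumPy sketch, where the estimator form, coefficient count, and sample sizes are illustrative assumptions rather than the paper's exact implementation (the paper additionally trains an SVM on such features):

```python
import numpy as np

def mbf_features(responses, n_coeffs=5, eps=1e-12):
    """Empirical magnitude-of-Benford-Fourier (MBF) features.

    For each order n, estimate |E[exp(-2*pi*i*n*log10|x|)]| from the
    responses; these magnitudes vary with the GGD shape factor.
    """
    x = np.log10(np.abs(responses) + eps)
    return np.array([np.abs(np.mean(np.exp(-2j * np.pi * n * x)))
                     for n in range(1, n_coeffs + 1)])

# Two GGD members with different shape factors: Gaussian (2), Laplacian (1).
rng = np.random.default_rng(0)
f_g = mbf_features(rng.normal(size=50_000))
f_l = mbf_features(rng.laplace(size=50_000))
print(np.round(f_g, 3), np.round(f_l, 3))  # distinct feature vectors
```

Because the MBF magnitudes differ between shape factors, a simple classifier (an SVM in the paper) can separate benign from adversarial response distributions using these features.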
Causal Reasoning Meets Visual Representation Learning: A Prospective Study
Yang Liu, Yu-Shen Wei, Hong Yan, Guan-Bin Li, Liang Lin
2022,  vol. 19,  no. 6, pp. 485-511,  doi: 10.1007/s11633-022-1362-z
Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. With the emergence of huge amounts of multi-modal heterogeneous spatial/temporal/spatial-temporal data in the big data era, the lack of interpretability, robustness, and out-of-distribution generalization has become a challenge for existing visual models. Most existing methods tend to fit the original data/variable distributions and ignore the essential causal relations behind the multi-modal knowledge, so there is no unified guidance or analysis of why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and highlight the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Long-term Visual Tracking: Review and Experimental Comparison
Chang Liu, Xiao-Fan Chen, Chun-Juan Bo, Dong Wang
2022,  vol. 19,  no. 6, pp. 512-530,  doi: 10.1007/s11633-022-1344-1
As a fundamental task in computer vision, visual object tracking has received much attention in recent years. Most studies focus on short-term visual tracking, which addresses shorter videos and always-visible targets. However, long-term visual tracking is much closer to practical applications and poses more complicated challenges: sequences last much longer (minutes or even hours), and the tracker must handle more frequent target disappearance and reappearance. In this paper, we provide a thorough review of long-term tracking, summarizing long-term tracking algorithms from two perspectives: framework architectures and utilization of intermediate tracking results. Then we provide a detailed description of existing benchmarks and corresponding evaluation protocols. Furthermore, we conduct extensive experiments and analyse the performance of trackers on six benchmarks: VOTLT2018, VOTLT2019 (2020/2021), OxUvA, LaSOT, TLP and the long-term subset of VTUAV-V. Finally, we discuss future prospects from multiple perspectives, including algorithm design and benchmark construction. To our knowledge, this is the first comprehensive survey of long-term visual object tracking. The relevant content is available at https://github.com/wangdongdut/Long-term-Visual-Tracking.
Research Article
Video Polyp Segmentation: A Deep Learning Perspective
Ge-Peng Ji, Guobao Xiao, Yu-Cheng Chou, Deng-Ping Fan, Kai Zhao, Geng Chen, Luc Van Gool
2022,  vol. 19,  no. 6, pp. 531-549,  doi: 10.1007/s11633-022-1371-y
We present the first comprehensive video polyp segmentation (VPS) study in the deep learning era. Over the years, progress in VPS has been hindered by the lack of a large-scale dataset with fine-grained segmentation annotations. To address this issue, we first introduce a high-quality frame-by-frame annotated VPS dataset, named SUN-SEG, which contains 158,690 colonoscopy video frames from the well-known SUN database. We provide additional annotations covering diverse types, i.e., attribute, object mask, boundary, scribble, and polygon. Second, we design a simple but efficient baseline, named PNS+, which consists of a global encoder, a local encoder, and normalized self-attention (NS) blocks. The global and local encoders receive an anchor frame and multiple successive frames to extract long-term and short-term spatial-temporal representations, which are then progressively refined by two NS blocks. Extensive experiments show that PNS+ achieves the best performance and real-time inference speed (170 fps), making it a promising solution for the VPS task. Third, we extensively evaluate 13 representative polyp/object segmentation models on our SUN-SEG dataset and provide attribute-based comparisons. Finally, we discuss several open issues and suggest possible research directions for the VPS community. Our project and dataset are publicly available at https://github.com/GewelsJI/VPS.
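The NS blocks refine spatio-temporal features with self-attention over frame tokens. As a rough illustration of the underlying operation, here is plain scaled dot-product self-attention over flattened spatio-temporal tokens; the paper's normalized variant, the two-encoder design, and all dimensions below are simplifications or assumptions, not the published PNS+ implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Scaled dot-product self-attention over a (num_tokens, channels)
    matrix of flattened spatio-temporal features."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(0)
n_frames, hw, c, d = 4, 16, 8, 8
tokens = rng.random((n_frames * hw, c))  # T*H*W tokens from successive frames
out, attn = self_attention(tokens,
                           rng.standard_normal((c, d)),
                           rng.standard_normal((c, d)),
                           rng.standard_normal((c, d)))
print(out.shape, attn.shape)
```

Each output token is a weighted mixture of all tokens across frames, which is what lets such blocks propagate long-term and short-term temporal context.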
YOLOP: You Only Look Once for Panoptic Driving Perception
Dong Wu, Man-Wen Liao, Wei-Tian Zhang, Xing-Gang Wang, Xiang Bai, Wen-Qing Cheng, Wen-Yu Liu
2022,  vol. 19,  no. 6, pp. 550-562,  doi: 10.1007/s11633-022-1339-y
A panoptic driving perception system is an essential part of autonomous driving. A high-precision, real-time perception system can assist the vehicle in making reasonable decisions while driving. We present a panoptic driving perception network (you only look once for panoptic driving perception, YOLOP) that performs traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks. Our model performs extremely well on the challenging BDD100K dataset, achieving state-of-the-art performance on all three tasks in terms of accuracy and speed. Besides, we verify the effectiveness of our multi-task learning model for joint training via ablative studies. To the best of our knowledge, this is the first work that can process these three visual perception tasks simultaneously in real time on an embedded device (Jetson TX2, 23 FPS) while maintaining excellent accuracy. To facilitate further research, the source code and pre-trained models are released at https://github.com/hustvl/YOLOP.
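The one-encoder/three-decoder layout can be illustrated with a toy forward pass. The shapes, the linear "convolutions", and the head outputs below are stand-ins chosen for brevity; this is not the published YOLOP architecture, only a sketch of how one shared feature extractor feeds three task heads:

```python
import numpy as np

class PanopticPerceptionStub:
    """Toy analogue of YOLOP's layout: one shared encoder, three task heads."""

    def __init__(self, channels=3, feat=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w_enc = rng.standard_normal((channels, feat)) * 0.1
        self.w_det = rng.standard_normal((feat, 5))   # box (4) + objectness
        self.w_area = rng.standard_normal((feat, 1))  # drivable-area logits
        self.w_lane = rng.standard_normal((feat, 1))  # lane logits

    def forward(self, img):
        # Shared encoder: per-pixel linear projection + ReLU stands in
        # for the convolutional backbone.
        feat = np.maximum(img @ self.w_enc, 0.0)      # (H, W, F)
        return {
            "detect": feat.mean(axis=(0, 1)) @ self.w_det,  # (5,)
            "drivable": (feat @ self.w_area)[..., 0],       # (H, W)
            "lane": (feat @ self.w_lane)[..., 0],           # (H, W)
        }

model = PanopticPerceptionStub()
out = model.forward(np.random.default_rng(1).random((32, 48, 3)))
print({k: v.shape for k, v in out.items()})
```

Because all three heads read the same encoder features, the backbone computation is paid once per frame, which is the property that makes the joint multi-task design attractive for real-time embedded inference.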
Glaucoma Detection with Retinal Fundus Images Using Segmentation and Classification
Thisara Shyamalee, Dulani Meedeniya
2022,  vol. 19,  no. 6, pp. 563-580,  doi: 10.1007/s11633-022-1354-z
Glaucoma is a prevalent cause of blindness worldwide. If not treated promptly, it can cause vision and quality of life to deteriorate. According to statistics, glaucoma affects approximately 65 million individuals globally. Glaucoma assessment from fundus images depends on segmenting the optic disc (OD) and optic cup (OC). This paper proposes a computational model to segment and classify retinal fundus images for glaucoma detection. Different data augmentation techniques were applied to prevent overfitting, while several data pre-processing approaches were employed to improve image quality and achieve high accuracy. The segmentation models are based on an attention U-Net with three separate convolutional neural network (CNN) backbones: Inception-v3, visual geometry group 19 (VGG19), and residual neural network 50 (ResNet50). The classification models also employ modified versions of these three CNN architectures. On the RIM-ONE dataset, the attention U-Net with the ResNet50 encoder backbone achieved the best OD segmentation accuracy of 99.58%. Among the evaluated classification architectures, the modified Inception-v3 model achieved the highest glaucoma classification accuracy of 98.79%.
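The attention U-Net's distinguishing mechanism is an additive attention gate that reweights encoder skip-connection features using a decoder gating signal. A minimal sketch, assuming equal channel counts on both inputs and omitting the down/up-sampling usually applied inside the gate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, w_skip, w_gate, psi):
    """Additive attention gate (illustrative form).

    skip: encoder features (H, W, C); gate: decoder features (H, W, C).
    Produces per-pixel weights in (0, 1) that rescale the skip connection,
    suppressing regions irrelevant to the decoder's current focus.
    """
    q = np.maximum(skip @ w_skip + gate @ w_gate, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                            # (H, W, 1)
    return skip * alpha, alpha

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4
skip, gate = rng.random((H, W, C)), rng.random((H, W, C))
gated, alpha = attention_gate(skip, gate,
                              rng.standard_normal((C, C)),
                              rng.standard_normal((C, C)),
                              rng.standard_normal((C, 1)))
print(gated.shape, float(alpha.min()), float(alpha.max()))
```

In OD/OC segmentation this lets the decoder emphasize the disc and cup regions of the fundus image while attenuating background retina in the skip features.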
Feature Selection and Feature Learning for High-dimensional Batch Reinforcement Learning: A Survey
De-Rong Liu, Hong-Liang Li, Ding Wang
2015,  vol. 12,  no. 3, pp. 229-242,  doi: 10.1007/s11633-015-0893-y
Second-order Sliding Mode Approaches for the Control of a Class of Underactuated Systems
Sonia Mahjoub, Faiçal Mnif, Nabil Derbel
2015,  vol. 12,  no. 2, pp. 134-141,  doi: 10.1007/s11633-015-0880-3
Genetic Algorithm with Variable Length Chromosomes for Network Intrusion Detection
Sunil Nilkanth Pawar, Rajankumar Sadashivrao Bichkar
2015,  vol. 12,  no. 3, pp. 337-342,  doi: 10.1007/s11633-014-0870-x
Cooperative Formation Control of Autonomous Underwater Vehicles: An Overview
Bikramaditya Das, Bidyadhar Subudhi, Bibhuti Bhusan Pati
2016,  vol. 13,  no. 3, pp. 199-225,  doi: 10.1007/s11633-016-1004-4
Recent Progress in Networked Control Systems – A Survey
Yuan-Qing Xia, Yu-Long Gao, Li-Ping Yan, Meng-Yin Fu
2015,  vol. 12,  no. 4, pp. 343-367,  doi: 10.1007/s11633-015-0894-x
Grey Qualitative Modeling and Control Method for Subjective Uncertain Systems
Peng Wang, Shu-Jie Li, Yan Lv, Zong-Hai Chen
2015,  vol. 12,  no. 1, pp. 70-76,  doi: 10.1007/s11633-014-0820-7
A Wavelet Neural Network Based Non-linear Model Predictive Controller for a Multi-variable Coupled Tank System
Kayode Owa, Sanjay Sharma, Robert Sutton
2015,  vol. 12,  no. 2, pp. 156-170,  doi: 10.1007/s11633-014-0825-2
An Unsupervised Feature Selection Algorithm with Feature Ranking for Maximizing Performance of the Classifiers
Danasingh Asir Antony Gnana Singh, Subramanian Appavu Alias Balamurugan, Epiphany Jebamalar Leavline
2015,  vol. 12,  no. 5, pp. 511-517,  doi: 10.1007/s11633-014-0859-5
Advances in Vehicular Ad-hoc Networks (VANETs): Challenges and Road-map for Future Development
Elias C. Eze, Si-Jing Zhang, En-Jie Liu, Joy C. Eze
2016,  vol. 13,  no. 1, pp. 1-18,  doi: 10.1007/s11633-015-0913-y
Sliding Mode and PI Controllers for Uncertain Flexible Joint Manipulator
Lilia Zouari, Hafedh Abid, Mohamed Abid
2015,  vol. 12,  no. 2, pp. 117-124,  doi: 10.1007/s11633-015-0878-x
Bounded Real Lemmas for Fractional Order Systems
Shu Liang, Yi-Heng Wei, Jin-Wen Pan, Qing Gao, Yong Wang
2015,  vol. 12,  no. 2, pp. 192-198,  doi: 10.1007/s11633-014-0868-4
Robust Face Recognition via Low-rank Sparse Representation-based Classification
Hai-Shun Du, Qing-Pu Hu, Dian-Feng Qiao, Ioannis Pitas
2015,  vol. 12,  no. 6, pp. 579-587,  doi: 10.1007/s11633-015-0901-2
Distributed Control of Chemical Process Networks
Michael J. Tippett, Jie Bao
2015,  vol. 12,  no. 4, pp. 368-381,  doi: 10.1007/s11633-015-0895-9
Extracting Parameters of OFET Before and After Threshold Voltage Using Genetic Algorithms
Imad Benacer, Zohir Dibi
2016,  vol. 13,  no. 4, pp. 382-391,  doi: 10.1007/s11633-015-0918-6
Appropriate Sub-band Selection in Wavelet Packet Decomposition for Automated Glaucoma Diagnoses
Chandrasekaran Raja, Narayanan Gangatharan
2015,  vol. 12,  no. 4, pp. 393-401,  doi: 10.1007/s11633-014-0858-6
Analysis of Fractional-order Linear Systems with Saturation Using Lyapunov's Second Method and Convex Optimization
Esmat Sadat Alaviyan Shahri, Saeed Balochian
2015,  vol. 12,  no. 4, pp. 440-447,  doi: 10.1007/s11633-014-0856-8
Generalized Norm Optimal Iterative Learning Control with Intermediate Point and Sub-interval Tracking
David H. Owens, Chris T. Freeman, Bing Chu
2015,  vol. 12,  no. 3, pp. 243-253,  doi: 10.1007/s11633-015-0888-8
Backstepping Control of Speed Sensorless Permanent Magnet Synchronous Motor Based on Sliding Mode Observer
Cai-Xue Chen, Yun-Xiang Xie, Yong-Hong Lan
2015,  vol. 12,  no. 2, pp. 149-155,  doi: 10.1007/s11633-015-0881-2
Flexible Strip Supercapacitors for Future Energy Storage
Rui-Rong Zhang, Yan-Meng Xu, David Harrison, John Fyson, Fu-Lian Qiu, Darren Southee
2015,  vol. 12,  no. 1, pp. 43-49,  doi: 10.1007/s11633-014-0866-6
Finite-time Control for a Class of Networked Control Systems with Short Time-varying Delays and Sampling Jitter
Chang-Chun Hua, Shao-Chong Yu, Xin-Ping Guan
2015,  vol. 12,  no. 4, pp. 448-454,  doi: 10.1007/s11633-014-0849-7
A High-order Internal Model Based Iterative Learning Control Scheme for Discrete Linear Time-varying Systems
Wei Zhou, Miao Yu, De-Qing Huang
2015,  vol. 12,  no. 3, pp. 330-336,  doi: 10.1007/s11633-015-0886-x