Neural Network Based on Inter-layer Perturbation Strategy for Text Classification
Graphical Abstract
Abstract
Recently, many studies have created adversarial samples to enrich the diversity of training data, improving text classification performance by reducing the loss incurred during neural network training. However, existing studies focus solely on adding perturbations to the input, such as text sentences and embedded representations, which yields adversarial samples that are very similar to the originals. Such adversarial samples cannot significantly improve the diversity of the training data, which limits the potential gain in classification performance. To alleviate this problem, we extend the diversity of generated adversarial samples, building on the observation that adding different perturbations between different layers of a neural network has different effects. We propose a novel neural network with a perturbation strategy (PTNet), which generates adversarial samples by adding perturbations to the intrinsic representation of each hidden layer of the network. Specifically, we design two ways to perturb each hidden layer: 1) directly adding a perturbation bounded by a fixed threshold; 2) adding the perturbation in the manner of adversarial training. With these settings, we obtain more perturbed intrinsic representations of the hidden layers and use them as new adversarial samples, thereby improving the diversity of the augmented training data. We validate the effectiveness of our approach on six text classification datasets and demonstrate that it improves the classification ability of the model. In particular, classification accuracy improves over the BERT baseline by an average of 1.79% on sentiment analysis and by 3.2% on question classification.
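The two perturbation ways described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the epsilon value, and the use of a precomputed gradient (which in practice would come from backpropagation through the classification loss) are all assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_threshold(h, eps=0.05):
    """Way 1 (sketch): add random noise to a hidden representation,
    with every component's magnitude capped at the threshold eps."""
    noise = rng.uniform(-1.0, 1.0, size=h.shape)
    return h + eps * noise

def perturb_adversarial(h, grad, eps=0.05):
    """Way 2 (sketch): an FGSM-style adversarial step along the sign of
    the loss gradient w.r.t. the hidden representation; `grad` stands in
    for dLoss/dh, which backpropagation would supply in a real model."""
    return h + eps * np.sign(grad)

# Toy hidden-layer representation (batch of 4, hidden dim 8) and a
# placeholder gradient; both perturbed copies could then be fed onward
# as additional (adversarial) training samples.
h = rng.normal(size=(4, 8))
grad = rng.normal(size=h.shape)

h_t = perturb_threshold(h)
h_a = perturb_adversarial(h, grad)

# Both perturbations keep every component within eps of the original.
assert np.max(np.abs(h_t - h)) <= 0.05 + 1e-12
assert np.max(np.abs(h_a - h)) <= 0.05 + 1e-12
```

Applying one of these functions to the output of each hidden layer yields a distinct perturbed representation per layer, which is what allows the augmented set to be more diverse than input-only perturbation.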