Structural Dependence Learning Based on Self-attention for Face Alignment
Abstract
Self-attention aggregates similar feature information to enhance features. In face alignment, however, the attention map can cover non-face regions, which introduces disturbances in challenging cases such as occlusion and leads to failed landmark predictions. In addition, our experiments show that the variance of the learned feature similarities is small, so the attention discriminates weakly among landmarks. To address these issues, we propose Structural dependence learning based on Self-attention for Face Alignment (SSFA). It restricts self-attention learning to the facial region and adaptively builds significant structural dependencies among landmarks. Compared with other state-of-the-art methods, SSFA effectively improves performance on several standard facial landmark detection benchmarks and is more robust in challenging cases.
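The abstract does not specify SSFA's architecture; as a minimal sketch of the core idea of restricting self-attention to the facial region, the following NumPy example masks out non-face positions before the softmax. The single-head form, feature shapes, and the `face_mask` input are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def masked_self_attention(x, face_mask):
    """Scaled dot-product self-attention restricted to masked positions.

    x:         (n, d) feature vectors for n spatial positions.
    face_mask: (n,) boolean array; True for positions inside the face region.

    Keys outside the mask receive -inf logits, so attention weights for
    non-face positions become exactly zero after the softmax.
    """
    n, d = x.shape
    logits = x @ x.T / np.sqrt(d)                            # pairwise similarities
    logits = np.where(face_mask[None, :], logits, -np.inf)   # mask non-face keys
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x, weights                              # aggregated features, attention map

rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 4))
mask = np.array([True, True, True, True, False, False])      # last two positions: non-face
out, attn = masked_self_attention(feats, mask)
```

With this masking, every query position aggregates information only from face-region positions, which is one plausible reading of "limits the self-attention learning to the facial range."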