DOI: 10.3390/electronics13061061 ISSN: 2079-9292

Multi-Scale Residual Spectral–Spatial Attention Combined with Improved Transformer for Hyperspectral Image Classification

Aili Wang, Kang Zhang, Haibin Wu, Yuji Iwahori, Haisong Chen
  • Electrical and Electronic Engineering
  • Computer Networks and Communications
  • Hardware and Architecture
  • Signal Processing
  • Control and Systems Engineering

Hyperspectral image (HSI) classification faces two problems: different spectral bands and spatial pixels contribute unequally to classification, and the sparse connectivity of convolutional neural networks limits their ability to capture global dependencies. To address these, we propose an HSI classification model that combines multi-scale residual spectral–spatial attention with an improved transformer. First, to efficiently highlight discriminative spectral–spatial information, we design a multi-scale residual spectral–spatial feature extraction module that preserves multi-scale information in a two-layer cascade structure and refines the spectral–spatial features with residual spectral–spatial attention during feature learning. Second, to further capture sequential spectral relationships, we combine the advantages of Cross-Attention and Re-Attention into a Cross-Re-Attention mechanism, yielding an improved transformer that alleviates the model's heavy memory footprint and computational burden as well as the attention-collapse issue. Experimental results show that the proposed model reaches overall accuracies of 98.71%, 99.33%, and 99.72% on the Indian Pines, Kennedy Space Center, and XuZhou datasets, respectively. Compared with state-of-the-art models, the proposed method is verified to be accurate and effective, showing that the hybrid-architecture concept opens a new window for HSI classification.
