Unblurring ISAR Imaging for Maneuvering Target Based on UFGAN
At a Glance
| Metadata | Details |
|---|---|
| Publication Date | 2022-10-21 |
| Journal | Remote Sensing |
| Authors | Wenzhe Li, Yanxin Yuan, Yuanpeng Zhang, Ying Luo |
| Institutions | Air Force Engineering University |
| Citations | 12 |
| Analysis | Full AI Review Included |
Executive Summary
The research presents a novel deep learning (DL) approach, UFGAN (Uformer-based Generative Adversarial Network), for unblurring Inverse Synthetic Aperture Radar (ISAR) images of highly maneuvering targets, overcoming limitations of traditional methods under low Signal-to-Noise Ratio (SNR) and sparse aperture conditions.
- Core Technology: UFGAN, utilizing state-of-the-art LeWin (Locally-enhanced Window) Transformer blocks within a U-Net architecture, effectively captures both local context and global dependencies to restore fine image details and textures.
- Performance Superiority: The proposed method significantly outperforms traditional algorithms (RD, STFT, RWT) and existing data-driven methods, achieving superior image quality metrics (e.g., Target-to-Clutter Ratio, TCR).
- Robustness: UFGAN demonstrates high-quality reconstruction even under extreme conditions, including low SNR (down to -12 dB) and highly sparse aperture (sampling ratio as low as 15%).
- Data Generalization: A pseudo-measured data generation method (combining DeepLabv3+ and Diamond-Square algorithms) is introduced to create realistic "block targets," ensuring the network generalizes effectively to real measured ISAR data.
- Efficiency: The UFGAN-based imaging process is fast, achieving imaging times significantly shorter than iterative traditional methods, making it favorable for real-time applications.
Technical Specifications
| Parameter | Value | Unit | Context |
|---|---|---|---|
| Peak TCR (Yak-42 Exp. 2) | 89.4582 | dB | 25% sampling ratio, SNR = -10 dB |
| Peak TCR (Boeing-727) | 87.7593 | dB | Full aperture, SNR = -10 dB |
| Minimum Image Entropy (IE) (Boeing-727) | 0.8452 | - | Full aperture, SNR = -10 dB (lower is better) |
| Imaging Time (Yak-42) | 0.186 | s | Proposed UFGAN method |
| Training Epochs (Block Targets) | 300 | - | Total training for UFGAN on block targets |
| Training Time (Block Targets) | 7 | h | Total training time for block targets dataset |
| Radar Carrier Frequency (Yak-42) | 5.52 | GHz | Measured block target data |
| Radar Bandwidth (Yak-42) | 400 | MHz | Measured block target data |
| Pulse Width (Yak-42) | 25.6 | µs | Measured block target data |
| Minimum Sampling Ratio Tested | 15 | % | Extreme sparse aperture condition |
| Minimum SNR Tested | -12 | dB | Extreme low SNR condition |
| Image Size (Training/Testing) | 256 x 256 | pixels | Azimuth x Range cells |
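For reference, the Target-to-Clutter Ratio (TCR) and Image Entropy (IE) reported above are standard ISAR focus metrics: higher TCR and lower IE indicate a better-focused image. The paper's exact formulations are not reproduced here; the sketch below uses common definitions (mean target energy over mean clutter energy for TCR, Shannon entropy of the normalized image energy for IE), with the function names and the target-mask interface chosen purely for illustration.

```python
import numpy as np

def target_to_clutter_ratio(img: np.ndarray, target_mask: np.ndarray) -> float:
    """TCR in dB: mean energy in the target region over mean energy in the clutter region.

    `img` is the (complex or magnitude) ISAR image; `target_mask` is a boolean mask of target pixels.
    """
    power = np.abs(img) ** 2
    return 10.0 * np.log10(power[target_mask].mean() / power[~target_mask].mean())

def image_entropy(img: np.ndarray) -> float:
    """Image entropy (lower = better focused): Shannon entropy of the normalized energy distribution."""
    power = np.abs(img) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```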
Key Methodologies
The UFGAN-based ISAR imaging method involves two primary phases: pseudo-measured data generation and adversarial network training. Illustrative code sketches for several of these steps follow the list.
- Pseudo-Measured Data Generation:
  - Outline Acquisition: Aircraft geometric outlines are segmented from public datasets (PASCAL VOC2012, ImageNet2012) using a trained DeepLabv3+ network.
  - Block Target Simulation: The outlines are gridded (e.g., 40 m x 40 m) and filled with continuous scattering blocks using the iterative Diamond-Square algorithm to mimic complex, realistic scattering distributions.
  - Blurred Image Creation: Ideal ISAR echoes are generated using a variable acceleration motion model (including angular jerk, γ). Blurred ISAR images (inputs) are then derived using the Range-Doppler (RD) algorithm, simulating phase errors from target maneuverability.
  - Robustness Augmentation: Training data is augmented by adding random Additive White Gaussian Noise (AWGN) (SNR range -10 dB to 10 dB) and applying random sparse aperture sampling (20% to 80% ratio).
- UFGAN Network Architecture and Training:
  - Generator Design: Constructed as a symmetric U-Net structure. It incorporates Locally-enhanced Window (LeWin) Transformer blocks, which reduce the computational cost of self-attention to O(M²HWC) (M is the window size) while enhancing local context extraction via a Locally-enhanced Feed-Forward Network (LeFF).
  - Discriminator Design: A novel Transformer-based discriminator is used, fusing a Global GAN path (for overall image structure) and a PatchGAN path (for local texture consistency).
  - Loss Function: A comprehensive loss function guides optimization:
    - Charbonnier Loss (Lchar): Used instead of MSE to prevent over-smoothing and preserve weak scatterers.
    - Perceptual Loss (Lperc): Compares feature maps (from the VGG network's fourth layer) to enhance similarity in feature space.
    - Adversarial Loss (Ladv): Uses Wasserstein GAN with Gradient Penalty (WGAN-GP) for stable training and improved gradient flow.
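The Diamond-Square step of the block-target simulation can be illustrated with the classic midpoint-displacement algorithm below; a minimal NumPy sketch assuming a (2^n + 1)-sized grid and an illustrative roughness parameter. How the paper scales and masks the resulting surface onto the gridded aircraft outline is not shown here.

```python
import numpy as np

def diamond_square(n, roughness=0.6, seed=None):
    """Generate a (2**n + 1) x (2**n + 1) fractal surface via the Diamond-Square algorithm."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    grid = np.zeros((size, size))
    # Seed the four corners with random values.
    grid[0, 0], grid[0, -1], grid[-1, 0], grid[-1, -1] = rng.random(4)

    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: each square's centre becomes the mean of its 4 corners plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (grid[y - half, x - half] + grid[y - half, x + half] +
                       grid[y + half, x - half] + grid[y + half, x + half]) / 4.0
                grid[y, x] = avg + (rng.random() - 0.5) * scale
        # Square step: each edge midpoint becomes the mean of its (up to 4) neighbours plus noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += grid[ny, nx]
                        count += 1
                grid[y, x] = total / count + (rng.random() - 0.5) * scale
        step, scale = half, scale * roughness
    return grid
```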
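The Blurred Image Creation and Robustness Augmentation steps can be sketched as follows: a minimal NumPy illustration that adds complex AWGN at a requested SNR, zeroes a random subset of slow-time pulses to emulate sparse aperture, and forms a range-Doppler image with an azimuth FFT. The pulses-by-range-cells layout and the assumption that the echo is already range-compressed are illustrative choices, not the paper's exact processing chain.

```python
import numpy as np

def augment_and_image(echo, snr_db, sampling_ratio, seed=None):
    """Add AWGN, apply random sparse-aperture sampling, and form a range-Doppler (RD) image.

    `echo` is a complex, range-compressed echo matrix of shape (pulses, range cells);
    axis 0 is slow time (azimuth).
    """
    rng = np.random.default_rng(seed)

    # Complex AWGN scaled so that signal power / noise power matches the requested SNR.
    signal_power = np.mean(np.abs(echo) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(echo.shape)
                                          + 1j * rng.standard_normal(echo.shape))
    noisy = echo + noise

    # Random sparse aperture: keep only a fraction of the slow-time pulses, zero the rest.
    n_pulses = echo.shape[0]
    keep = rng.choice(n_pulses, size=int(sampling_ratio * n_pulses), replace=False)
    mask = np.zeros(n_pulses, dtype=bool)
    mask[keep] = True
    noisy[~mask, :] = 0.0

    # RD image: azimuth FFT over slow time (range compression assumed already done).
    return np.fft.fftshift(np.fft.fft(noisy, axis=0), axes=0)
```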
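The O(M²HWC) figure quoted for the LeWin block comes from restricting self-attention to non-overlapping M x M windows. The PyTorch sketch below shows only that windowing idea (single head, no projections, no relative position bias, no LeFF), so it illustrates the complexity argument rather than the actual LeWin Transformer block.

```python
import torch

def window_partition(x: torch.Tensor, m: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping M x M windows -> (B*H*W/M^2, M*M, C)."""
    b, h, w, c = x.shape
    x = x.view(b, h // m, m, w // m, m, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, m * m, c)

def window_self_attention(x: torch.Tensor, m: int) -> torch.Tensor:
    """Plain self-attention restricted to M x M windows (H and W must be divisible by M).

    Each window attends only to its own M^2 tokens, so the attention cost scales as
    O(M^2 * H * W * C) rather than O((H*W)^2 * C) for global self-attention.
    """
    b, h, w, c = x.shape
    win = window_partition(x, m)                                   # (num_windows, M*M, C)
    attn = torch.softmax(win @ win.transpose(-2, -1) / c ** 0.5, dim=-1)
    out = attn @ win                                               # (num_windows, M*M, C)
    out = out.view(b, h // m, w // m, m, m, c).permute(0, 1, 3, 2, 4, 5)
    return out.reshape(b, h, w, c)
```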
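The composite loss can be sketched in PyTorch as below. The Charbonnier and WGAN-GP terms follow their standard forms; the loss weights (`w_char`, `w_perc`, `w_adv`), the L1 distance for the perceptual term, and the `vgg_features` placeholder (a frozen VGG feature extractor truncated at the fourth layer) are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier loss: a smooth L1-like penalty, less prone to over-smoothing than MSE."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def gradient_penalty(disc, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP penalty: push the discriminator's gradient norm towards 1 on interpolated samples."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=disc(interp).sum(), inputs=interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def generator_loss(pred, target, disc_score_fake, vgg_features,
                   w_char=1.0, w_perc=0.1, w_adv=0.01):
    """Weighted sum of Charbonnier, perceptual (VGG feature) and WGAN adversarial terms."""
    l_char = charbonnier_loss(pred, target)
    l_perc = F.l1_loss(vgg_features(pred), vgg_features(target))
    l_adv = -disc_score_fake.mean()            # WGAN generator term
    return w_char * l_char + w_perc * l_perc + w_adv * l_adv
```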
Commercial Applications
This technology is critical for high-performance radar systems requiring rapid, high-fidelity imaging of non-cooperative targets, particularly in defense and advanced surveillance sectors.
- Defense and Military Surveillance: Essential for real-time tracking, identification, and classification of highly maneuvering aerial threats (e.g., fighter jets, hypersonic missiles) where traditional ISAR methods fail due to motion-induced blurring.
- Advanced Remote Sensing: Applicable to airborne and spaceborne radar platforms that must generate clear images from sparse or noisy data collected over long ranges or short Coherent Processing Intervals (CPI).
- Autonomous Navigation and Perception: Provides robust, high-resolution radar imagery for autonomous vehicles and systems operating in environments where optical sensors are limited (e.g., fog, heavy rain, night).
- Signal Processing Hardware Acceleration: The DL approach allows for the deployment of fast, trained models on specialized hardware (GPUs/TPUs), enabling rapid image reconstruction necessary for real-time decision-making.
- Synthetic Data Generation: The proposed pseudo-measured data generation technique is valuable for any radar application where real-world maneuvering target data is scarce or expensive to acquire.
Original Abstract
Inverse synthetic aperture radar (ISAR) imaging for maneuvering targets suffers from a Doppler frequency time-varying problem, leading to the ISAR images blurred in the azimuth direction. Given that the traditional imaging methods have poor imaging performance or low efficiency, and the existing deep learning imaging methods cannot effectively reconstruct the deblurred ISAR images retaining rich details and textures, an unblurring ISAR imaging method based on an advanced Transformer structure for maneuvering targets is proposed. We first present a pseudo-measured data generation method based on the DeepLabv3+ network and Diamond-Square algorithm to acquire an ISAR dataset for training with good generalization to measured data. Next, with the locally-enhanced window Transformer block adopted to enhance the ability to capture local context as well as global dependencies, we construct a novel Uformer-based GAN (UFGAN) to restore the deblurred ISAR images with rich details and textures from blurred imaging results. The simulation and measured experiments show that the proposed method can achieve fast and high-quality imaging for maneuvering targets under the condition of a low signal-to-noise ratio (SNR) and sparse aperture.