WNASNet: Wavelet-Guided Neural Architecture Search for Efficient Single-Image De-raining

Authors

  • Wenyin Tao, School of Intelligent Management, Suzhou Industrial Park Institute of Services Outsourcing, Jiangsu, China, https://orcid.org/0009-0003-5134-3839
  • Qiang Chen, School of Intelligent Management, Suzhou Industrial Park Institute of Services Outsourcing, Jiangsu, China
  • Chunjiang Yu, School of Intelligent Management, Suzhou Industrial Park Institute of Services Outsourcing, Jiangsu, China

DOI:

https://doi.org/10.5755/j01.itc.54.2.40643

Keywords:

Image Deraining, Wavelet Transform, Signal Processing, Neural Architecture Search

Abstract

On rainy days, the uncertain shape and distribution of rain streaks can leave images captured by RGB imaging equipment blurred and distorted. The wavelet transform is widely used in conventional image-enhancement techniques because it provides both spatial- and frequency-domain information and has multidirectional, multiscale characteristics. In image deraining, the distribution of rain streaks is closely tied to both spatial-domain and frequency-domain characteristics. Nonetheless, deep learning-based deraining models rely predominantly on the spatial characteristics of the image, and RGB data alone is often insufficient to distinguish rain streaks from image details, so essential image content is lost during rain removal. To overcome this limitation, we propose a lightweight single-image deraining model, the wavelet-guided neural architecture search network (WNASNet), which separates image content from rain-affected inputs and removes rain artifacts more effectively. The proposed WNASNet makes three main contributions. First, it uses the wavelet transform to extract multi-frequency feature components and assigns a dedicated feature search block (FSB) to each component, allowing a task-specific feature extraction network to be identified for each frequency band and improving deraining performance. Second, we present a simple yet effective wavelet feature fusion scheme (SFF) that selectively uses high- and low-frequency features during the inverse wavelet transform, preserving deraining quality while substantially reducing computational cost compared with conventional frequency fusion methods. Third, comprehensive experiments on four synthetic and two real-world datasets demonstrate the superior performance of WNASNet across multiple evaluation metrics, including PSNR, SSIM, LPIPS, NIQE, and BRISQUE, confirming its effectiveness and robustness for single-image deraining.
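For readers who want a concrete picture of the decompose-process-reconstruct pipeline outlined above, the following is a minimal PyTorch sketch. It is not the authors' WNASNet implementation: the searched feature search blocks (FSBs) are replaced by fixed residual convolution blocks, the selective fusion (SFF) is reduced to a plain inverse Haar transform, and all names are hypothetical. It only illustrates the general idea of splitting an image into one low-frequency and three high-frequency wavelet bands, processing each band with its own block, and fusing the bands back through the inverse transform.

    import torch
    import torch.nn as nn

    def haar_dwt2(x):
        """Single-level 2D Haar DWT; x has shape (B, C, H, W) with even H and W."""
        a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
        b = x[:, :, 0::2, 1::2]  # top-right
        c = x[:, :, 1::2, 0::2]  # bottom-left
        d = x[:, :, 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2   # low-frequency approximation
        lh = (-a - b + c + d) / 2  # horizontal detail
        hl = (-a + b - c + d) / 2  # vertical detail
        hh = (a - b - c + d) / 2   # diagonal detail
        return ll, lh, hl, hh

    def haar_idwt2(ll, lh, hl, hh):
        """Exact inverse of haar_dwt2, fusing the four bands back into an image."""
        B, C, H, W = ll.shape
        x = ll.new_zeros(B, C, 2 * H, 2 * W)
        x[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2
        x[:, :, 0::2, 1::2] = (ll - lh + hl - hh) / 2
        x[:, :, 1::2, 0::2] = (ll + lh - hl - hh) / 2
        x[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2
        return x

    class ToyBandBlock(nn.Module):
        """Stand-in for a searched per-band block: a small residual conv stack."""
        def __init__(self, ch=3, width=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, ch, 3, padding=1),
            )
        def forward(self, x):
            return x + self.body(x)

    class ToyWaveletDerainer(nn.Module):
        """Decompose -> process each band separately -> reconstruct."""
        def __init__(self):
            super().__init__()
            self.low = ToyBandBlock()                                   # LL band
            self.high = nn.ModuleList([ToyBandBlock() for _ in range(3)])  # LH, HL, HH
        def forward(self, x):
            ll, lh, hl, hh = haar_dwt2(x)
            ll = self.low(ll)
            highs = [blk(band) for blk, band in zip(self.high, (lh, hl, hh))]
            return haar_idwt2(ll, *highs)

    if __name__ == "__main__":
        rainy = torch.rand(1, 3, 64, 64)
        print(ToyWaveletDerainer()(rainy).shape)  # torch.Size([1, 3, 64, 64])

Because rain streaks concentrate in the high-frequency detail bands while scene structure dominates the low-frequency band, giving each band its own block lets the network treat them differently; in WNASNet these per-band blocks are found by architecture search rather than fixed by hand.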

Published

2025-07-14

Issue

Section

Articles