LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility

1Birla Institute of Technology and Science, Pilani 2Carnegie Mellon University
*Equal Contribution
In Student Abstract, AAAI 2023

LVRNet restores a clear image from inputs degraded by different combinations of darkness and haze.

Abstract

Learning to recover clear images from inputs degraded by a combination of factors is a challenging task. At the same time, autonomous surveillance in low-visibility conditions, caused by high pollution or smoke, a poor air quality index, low light, atmospheric scattering, or haze during a blizzard, becomes all the more important for preventing accidents. It is thus crucial to develop a solution that produces high-quality images and is efficient enough to be deployed for everyday use. However, the lack of suitable datasets for this task has limited the performance of previously proposed methods. To this end, we generate the LowVis-AFO dataset, containing 3647 paired dark-hazy and clear images. We also introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet). It outperforms previous image restoration methods at low latency, achieving a PSNR of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.

Starting from the top-left: the input image is passed to pre-processing convolution layers, whose learned feature maps are fed to a sequence of NAF Groups (we use 3 groups here). The features extracted from each group are concatenated (stacked) along the channel dimension and sent to the Level Attention Module (LAM). Finally, LAM's output is passed through post-processing CNN layers, the original image is added back through a residual connection, and the restored image is produced at the bottom-left.
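The pipeline above can be sketched in PyTorch. This is a minimal illustration, not the released implementation: the NAF group internals are replaced by plain conv blocks, and the `LevelAttention` module here is a simplified stand-in that learns one attention weight per group over the stacked features. All class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class LevelAttention(nn.Module):
    """Simplified stand-in for LAM: weights each group's features."""

    def __init__(self, num_groups, channels):
        super().__init__()
        self.channels = channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One scalar weight per group, from globally pooled features.
        self.fc = nn.Sequential(
            nn.Linear(num_groups * channels, num_groups),
            nn.Sigmoid(),
        )

    def forward(self, stacked):  # stacked: (B, G*C, H, W)
        b = stacked.size(0)
        w = self.fc(self.pool(stacked).flatten(1))     # (B, G)
        w = w.repeat_interleave(self.channels, dim=1)  # (B, G*C)
        return stacked * w.view(b, -1, 1, 1)


class LVRNetSketch(nn.Module):
    def __init__(self, channels=16, num_groups=3):
        super().__init__()
        # Pre-processing convolution.
        self.pre = nn.Conv2d(3, channels, 3, padding=1)
        # Stand-ins for the NAF Groups (real groups use NAFNet blocks).
        self.groups = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()
            )
            for _ in range(num_groups)
        )
        self.lam = LevelAttention(num_groups, channels)
        # Post-processing convolutions back to RGB.
        self.post = nn.Sequential(
            nn.Conv2d(num_groups * channels, channels, 3, padding=1),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        f = self.pre(x)
        outs = []
        for g in self.groups:        # collect each group's output
            f = g(f)
            outs.append(f)
        stacked = torch.cat(outs, dim=1)   # concat along channels
        out = self.post(self.lam(stacked))
        return out + x               # global residual connection
```

Because of the global residual connection, the network only has to learn the correction from the degraded input to the clean image, which typically eases training for restoration tasks.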

Results

Comparison Against Existing Methods

BibTeX

@article{pahwa2023lvrnet,
  title={LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility},
  author={Pahwa, Esha and Luthra, Achleshwar and Narang, Pratik},
  journal={arXiv preprint arXiv:2301.05434},
  year={2023}
}

Acknowledgements

We would like to thank the authors of FFANet, NAFNet, and MC-Blur for their codebases, which served as a starting point for our work. This material is based upon work supported by ARTPARK, IISc.