Low-Light Image Super-Resolution Using GANs: A Comprehensive Comparative Review


Niharika Pachori, Sanjay Patidar

Abstract

Image acquisition under low-light conditions poses serious limitations across numerous imaging domains, producing noisy, low-contrast, and resolution-degraded outputs. These degradations not only impair visual discriminability but also disrupt downstream processes such as detection, recognition, and interpretation. Traditional enhancement techniques, including histogram equalization and gamma correction, offer limited improvement in complex low-light scenarios and often amplify noise or distort colours. In contrast, Generative Adversarial Networks (GANs) have demonstrated significant success in both brightness enhancement and super-resolution in a data-driven manner; their ability to model complex visual distributions enables the recovery of realistic textures and structures from degraded inputs.
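For context, the two traditional baselines named above can be sketched in a few lines of NumPy. This is an illustrative sketch only (the input is assumed to be an 8-bit grayscale array), not an implementation from any surveyed paper:

```python
import numpy as np

def gamma_correction(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Pointwise power-law mapping; gamma < 1 lifts dark regions."""
    normalized = img.astype(np.float64) / 255.0
    return (normalized ** gamma * 255.0).astype(np.uint8)

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Spread intensities by remapping through the cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = (cdf * 255.0).astype(np.uint8)               # lookup table
    return lut[img]

# A uniformly dark 20-valued patch is brightened well above 20:
dark = np.full((4, 4), 20, dtype=np.uint8)
print(gamma_correction(dark, gamma=0.5).max())  # → 71
```

Both operators act on pixel intensities alone, which is precisely why they cannot recover lost texture or suppress sensor noise, the gap the GAN-based methods reviewed here aim to close.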


This paper presents a comprehensive comparative review of recent GAN-based approaches for low-light image super-resolution (SR). We examine key architectural strategies, loss functions, dataset choices, and evaluation metrics across prominent models. The analysis addresses three core research questions: limitations in texture restoration, the effectiveness of performance metrics, and the generalization of low-light SR models across diverse scenarios. We further highlight real-world application areas, including surveillance, autonomous systems, mobile imaging, and document analysis, where these techniques are most impactful. The paper concludes by identifying persistent challenges and proposing future research directions aimed at improving perceptual realism and robustness in low-light SR systems.
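As background for the evaluation-metric discussion, the most widely reported fidelity metric in the SR literature, PSNR, can be sketched as follows. This NumPy version is a minimal illustration under the assumption of same-shape 8-bit images, not the exact implementation used by any model surveyed here:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

PSNR rewards pixel-wise fidelity but correlates poorly with perceived texture quality, which is one reason perceptual metrics are also weighed in the comparison.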
