DeepRecNet: A Convolutional Neural Network Architecture for High-Fidelity Image Reconstruction from Compressed Representations


Dharmesh Dhabliya, Dr. Priya Vij

Abstract

Image reconstruction from compressed representations has attracted considerable attention in recent years due to its applications in medical imaging, satellite data, and video streaming, among other areas. When working with heavily compressed images, traditional restoration methods often lose information and produce visible artifacts. We address this problem by introducing DeepRecNet, a novel Convolutional Neural Network (CNN) architecture that reconstructs high-fidelity images from compressed inputs. DeepRecNet leverages deep learning to produce reconstructions that closely approximate the original images, even when the compression quality is low. The proposed architecture consists of several convolutional layers linked by skip connections, which help the model preserve essential spatial features during reconstruction. These connections also allow DeepRecNet to learn efficiently from compressed data, something that is difficult for older approaches that rely on hand-crafted algorithms. The model is trained with a combination of perceptual loss and pixel-wise loss, ensuring that the reconstructed image achieves both accurate pixel values and sound structural integrity. We evaluate DeepRecNet on several standard datasets, including CIFAR-10, ImageNet, and the Kodak image dataset, and compare its results against state-of-the-art image reconstruction methods. The experimental results show that DeepRecNet outperforms competing methods both in objective metrics (such as PSNR and SSIM) and in perceived visual quality.
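The abstract does not give implementation details, but a minimal sketch of the ideas it describes, a residual CNN with skip connections trained with a combined pixel-wise and perceptual loss, could look as follows in PyTorch. The class names DeepRecNetSketch and CombinedLoss, the number of residual blocks, and the perceptual-loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class DeepRecNetSketch(nn.Module):
    """Minimal CNN with skip connections for image reconstruction (illustrative only)."""

    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # Two residual blocks; local skip connections help preserve spatial features.
        self.block1 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, x):
        feat = F.relu(self.head(x))
        feat = feat + self.block1(feat)  # skip connection
        feat = feat + self.block2(feat)  # skip connection
        # Global skip: predict a residual correction over the compressed input.
        return x + self.tail(feat)


class CombinedLoss(nn.Module):
    """Pixel-wise MSE plus a VGG-feature perceptual term (weighting is an assumption)."""

    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.perceptual_weight = perceptual_weight

    def forward(self, reconstructed, target):
        pixel_loss = F.mse_loss(reconstructed, target)
        perceptual_loss = F.mse_loss(self.vgg(reconstructed), self.vgg(target))
        return pixel_loss + self.perceptual_weight * perceptual_loss


if __name__ == "__main__":
    model = DeepRecNetSketch()
    criterion = CombinedLoss()
    compressed = torch.rand(2, 3, 64, 64)  # stand-in for decoded compressed images
    original = torch.rand(2, 3, 64, 64)    # stand-in for ground-truth images
    loss = criterion(model(compressed), original)
    loss.backward()
    print(loss.item())
```

In this sketch, the global skip connection makes the network predict only a residual correction to the compressed input rather than the full image, a common choice in restoration networks that typically speeds up convergence; whether DeepRecNet uses a global residual is not stated in the abstract.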
