Abstract:

Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) and Generative Adversarial Networks (GANs) applied to single image super-resolution (SISR) tasks. However, CNN-based SISR methods often assume that the low-resolution (LR) image is downsampled bicubically from its high-resolution (HR) counterpart, which results in poor performance on images whose degradations do not follow this assumption. Here, we propose a framework to learn a residual image super-resolver that handles multiple degradations, improving performance on natural images. Our basic premise is that the residuals between an upsampled LR image and its HR counterpart contain information about the true degradation and downsampling processes, governed by particular image features. We show that learning residuals in image space improves detail reconstruction in many cases. In this work, we apply different CNN- and GAN-based models to learn and predict the residual image given the LR image. The residual to be learned is obtained by subtracting a bicubically upscaled version of the LR image from the true HR image. The LR images are generated by applying a random blur degradation to the HR image followed by bicubic downsampling. We also generate residuals from three different downsampling methods at LR image dimensions to use as additional features. Finally, we show that our method is able to learn the spatially upsampled, higher-dimensional residuals, and that it can recover detailed HR images from bicubically upsampled LR images by adding the predicted high-resolution residual.

Keywords: Super Resolution. Downsampling. Convolutional Neural Networks. Generative Adversarial Networks.
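The degradation pipeline and residual target described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: it assumes a Gaussian blur as the random degradation and uses SciPy's cubic-spline `zoom` (order=3) as a stand-in for bicubic resampling; the function name and parameter ranges are ours.

```python
import numpy as np
from scipy import ndimage


def make_training_pair(hr, scale=4, max_sigma=1.5, rng=None):
    """Build an (LR image, residual target) pair from an HR image.

    Sketch of the pipeline in the abstract: random blur degradation,
    bicubic-like downsampling, then a residual target defined as the
    HR image minus its bicubically upsampled LR version.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random blur degradation (sigma drawn per image; Gaussian assumed here).
    sigma = rng.uniform(0.2, max_sigma)
    blurred = ndimage.gaussian_filter(hr, sigma=sigma)
    # Cubic-spline zoom (order=3) approximates bicubic downsampling.
    lr = ndimage.zoom(blurred, 1.0 / scale, order=3)
    # Upsample back to HR dimensions and form the residual to be learned.
    upsampled = ndimage.zoom(lr, scale, order=3)
    residual = hr - upsampled
    return lr, residual
```

At inference time, the recovered HR estimate is then the bicubically upsampled LR image plus the residual predicted by the network.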