With GPP, the priors on which the algorithm is trained are composed of patches of images rather than whole images. Priors, which are properties of images such as spatial coherence, help the algorithm fill in gaps based on its understanding of what images look like.
Patch-based priors, Anirudh said, were once popular because they were computationally easier to deal with than entire images. Today, in the deep-learning era, he said, we are witnessing a resurgence of patch-based methods.
The new technology is designed to solve “inverse problems,” in which an original scene or image must be estimated from a limited amount of data.
“As a simple example of a classic inverse problem, say we are shown only a random 10% of the pixels in an image, and asked to fill out the remaining 90%. If we guess the remaining pixels at random, we are not going to get a solution that looks anything like the original image,” Anirudh said. “The central idea behind GPP is to exploit the fact that images can be broken down into small patches and it is much easier for a machine learning model to approximate all the variations in patches than in images due to the reduced size.”
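To make that idea concrete, the sketch below shows, in PyTorch, how such a recovery could be set up: a small, hypothetical patch generator stands in for the learned patch prior, and its latent codes are optimized so that the tiled patches agree with the 10% of pixels that were observed. The generator architecture, patch size, and optimization settings here are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical pretrained patch generator: maps a latent vector z to an 8x8 patch.
# The real GPP model differs; this small network is only an illustrative stand-in.
class PatchGenerator(torch.nn.Module):
    def __init__(self, latent_dim=16, patch=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, patch * patch), torch.nn.Sigmoid())
        self.patch = patch

    def forward(self, z):                           # z: (num_patches, latent_dim)
        return self.net(z).view(-1, self.patch, self.patch)

def recover(y, mask, G, img=64, patch=8, steps=500, lr=0.05):
    """Recover a full image from a small fraction of observed pixels.

    y    : image with unobserved pixels zeroed out
    mask : binary mask, 1 where a pixel was observed
    G    : patch generator standing in for the learned patch prior (kept frozen)
    """
    n = img // patch                                 # patches per side
    z = torch.randn(n * n, 16, requires_grad=True)   # one latent code per patch
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        patches = G(z)                               # (n*n, patch, patch)
        # Tile the generated patches back into a full image.
        x = patches.view(n, n, patch, patch).permute(0, 2, 1, 3).reshape(img, img)
        loss = ((mask * x - mask * y) ** 2).sum()    # match only the observed pixels
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()

# Example usage: observe a random 10% of a 64x64 image, then recover the rest.
G = PatchGenerator()                                 # in practice, load pretrained weights
truth = torch.rand(64, 64)
mask = (torch.rand(64, 64) < 0.10).float()
x_hat = recover(mask * truth, mask, G)
```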
A model that can capture the diversity and complexity of all images does not yet exist, but the team observed that training a model to capture most of the variations in patches is a relatively simple process.
“GPP obtains higher-quality solutions in compressive sensing even under extreme settings where very few measurements are available,” Anirudh said.
The research team, which included personnel from Mitsubishi Electric Research Laboratories and Arizona State University, showed that the method outperforms several common methods in compressive sensing and compressive phase retrieval tasks. GPP, the researchers concluded, is more broadly applicable to a variety of images than existing generative priors. They also proposed a self-calibration method that lets the model automatically correct for real-world sensor distortions and corruptions, and showed that it performed well against a number of real-world baselines.
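As a rough illustration of the self-calibration idea, the sketch below (reusing the hypothetical patch generator from the earlier example) jointly optimizes the patch latent codes and two scalar calibration terms, a gain and an offset, so that the reconstruction explains the distorted measurements. The affine distortion model and all parameter choices are assumptions made for this example, not the paper's exact formulation.

```python
import torch

def tile(patches, n, p):
    """Arrange an (n*n, p, p) stack of patches into an (n*p, n*p) image."""
    return patches.view(n, n, p, p).permute(0, 2, 1, 3).reshape(n * p, n * p)

def recover_with_self_calibration(y, mask, G, img=64, patch=8, steps=500, lr=0.05):
    """Jointly estimate patch latent codes and a simple gain/offset sensor model."""
    n = img // patch
    z = torch.randn(n * n, 16, requires_grad=True)   # latent code per patch
    gain = torch.ones(1, requires_grad=True)         # unknown sensor gain (assumed model)
    offset = torch.zeros(1, requires_grad=True)      # unknown sensor offset (assumed model)
    opt = torch.optim.Adam([z, gain, offset], lr=lr)
    for _ in range(steps):
        x = tile(G(z), n, patch)                     # candidate image from the patch prior
        y_hat = mask * (gain * x + offset)           # measurements the distorted sensor would report
        loss = ((y_hat - y) ** 2).sum()              # explain the observed, distorted data
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tile(G(z), n, patch).detach(), gain.item(), offset.item()
```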
The research was presented at the 2021 IEEE Winter Conference on Applications of Computer Vision and received the conference’s Best Paper, Honorable Mention award.