Algorithmic Training Technique Aims to Democratize Deep Learning-Enhanced Microscopy
A tool called the “crappifier,” developed at the Salk Institute, has the potential to democratize deep learning-enhanced microscopy. The technique addresses the difficulty of training algorithms to enhance low-resolution images.
Deep learning offers scientists a way to recover information from low-resolution images that would be virtually inaccessible otherwise. In cell imaging, capturing a detailed picture is complicated: the intense laser illumination needed for high-resolution imaging can damage or alter living cells, so researchers often image under low-light conditions, which yield low-resolution images and, by extension, limit the quality of information that can be gleaned from them.
“We invest millions of dollars in these microscopes and we’re still struggling to push the limits of what they can do,” said Uri Manor, director of the Waitt Advanced Biophotonics Core Facility at Salk. “That is the problem we are trying to solve with deep learning.”
To apply deep learning to microscope images, whether to sharpen them or to reduce background noise, the system must first be trained on paired examples of high- and low-resolution images of the same scene. Capturing two perfectly matched microscopy images in separate exposures is difficult and expensive, and doing so with live cells, which are often moving during the process, is a major challenge.
Rather than trying to take two identical images, the Manor-led team took one high-resolution image, copied it, and computationally degraded the copy by running it through a routine they dubbed the crappifier. The result resembles the lowest-quality, lowest-resolution images the team would otherwise acquire.
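In rough terms, the crappifier simulates what a low-dose, low-resolution acquisition does to a clean image. The sketch below illustrates the idea; the noise model, downsampling factor, and function names are illustrative assumptions, not the team's exact recipe.

import numpy as np

def crappify(high_res, downsample=4, noise_sigma=0.1, rng=np.random.default_rng()):
    """Degrade a clean high-resolution image into a plausible low-quality one.

    Assumed recipe (not the published one): add Gaussian noise to mimic
    low-light detector noise, then block-average to mimic coarser sampling.
    """
    noisy = high_res + rng.normal(0.0, noise_sigma, high_res.shape)   # simulate detector/shot noise
    noisy = np.clip(noisy, 0.0, 1.0)                                  # keep intensities in [0, 1]
    h, w = noisy.shape
    h, w = h - h % downsample, w - w % downsample                     # crop so blocks divide evenly
    blocks = noisy[:h, :w].reshape(h // downsample, downsample, w // downsample, downsample)
    return blocks.mean(axis=(1, 3))                                   # block averaging = lower resolution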
Software called Point-Scanning Super-Resolution (PSSR) was then shown the high-resolution images alongside their degraded copies. From these pairs, the system learned to enhance real low-quality images for which no high-resolution counterpart was ever captured.
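Once such pairs exist, training follows the usual supervised pattern: a network receives the degraded copy as input and is penalized for deviating from the original. The small PyTorch model and training loop below are a hedged illustration of that step; PSSR's actual architecture and loss are more sophisticated, and every name here is assumed for the example rather than taken from the paper.

import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """Toy upscaling network standing in for a real super-resolution model."""
    def __init__(self, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train(model, paired_loader, epochs=10, lr=1e-4):
    """paired_loader yields (low_res, high_res) tensors shaped (B, 1, h, w) and (B, 1, 4h, 4w)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                          # pixel-wise reconstruction loss
    for _ in range(epochs):
        for low_res, high_res in paired_loader:
            pred = model(low_res)                  # enhance the crappified input
            loss = loss_fn(pred, high_res)         # compare against the original
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model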
Previous systems trained on artificially degraded data have still struggled when presented with genuinely low-quality, real-world images.
“We tried a bunch of different degradation methods, and we found one that actually works,” Manor said. “You can train a model on your artificially generated data and it actually works on real-world data.”
The technology also has potential for boosting the power of older or less powerful microscopes.
“Using our method, people can benefit from this powerful, deep learning technology without investing a lot of time or resources,” said lead author Linjing Fang, an image analysis specialist at the Waitt Advanced Biophotonics Core Facility. “You can use preexisting high-quality data, degrade it, and train a model to improve the quality of a lower-resolution image.”
The team demonstrated that PSSR works with both electron microscopy and fluorescence live cell images — two scenarios where it can be extremely difficult or impossible to obtain the duplicate high- and low-resolution images needed to train AI systems. Though the study demonstrated the method on images of brain tissue, Manor hopes it can be applied to other systems of the body in the future. He also hopes that it may someday be used to make high-resolution microscopic imaging more widely accessible, as powerful microscopes can cost hundreds of thousands of dollars to well over a million.
“One of our visions for the future is to be able to start replacing some of those expensive components with deep learning so we could start making microscopes cheaper and more accessible,” Manor said.
The research was published in Nature Methods (www.doi.org/10.1038/s41592-021-01080-z).