When the Camera Is a Computer: Computational Life Sciences Imaging
ROBERT LABELLE, PHOTOMETRICS and QIMAGING
Sensors and cameras continue to progress, but life sciences imaging remains far from perfect. Barriers still exist, including signal photon noise, light scattering, the optical blur of finite-aperture imaging systems and others. Versatile high-speed, high-resolution systems are overcoming these and other challenges, moving microscopy into a diagnostic role.
Figure 1. PrimeEnhance, a nonlocal patch-based imaging method, leverages similarity in images. A patch (blue) centered on the current pixel being denoised (green) is compared to patches in the surrounding neighborhood. Similar patches (a) and (b) are averaged with a weight close to one, whereas dissimilar patches (c) and (d) are given a weight close to zero. Over 500 million patch comparisons are made each second using the Prime's on-board field-programmable gate array (FPGA). Courtesy of the University of Arizona.
Noise: beyond the camera
When attempting to quantify faint signal levels, camera sensitivity previously played a major role in determining the limits of quantification. Thankfully, this is no longer the case. Many sensor improvements have been made, resulting in cameras that convert almost all available light to useful signal while adding almost no noise in the process. Such improvements include scientific CMOS with electronic noise below 1 e- rms and backside illumination resulting in near-perfect 95 percent quantum efficiency in converting light into signal electrons.
Photon noise is different, however, because it is an inherent property of light. There will always be a statistical variation in the number of photons (or photoelectrons) that are detected in a given time period. To obtain acceptable precision, several standard methods to increase signal-to-noise ratios exist:
• Increase the exposure duration to collect signal over a longer period of time. This allows for a higher signal level and a reduced impact of photon noise. The ability to image at a specific frame rate may be sacrificed, and in live-cell microscopy it may result in increased phototoxicity, which destroys physiological conditions, and photobleaching, which prevents further investigation of the cell.
• Average frames to reduce noise. This reduces total image noise as the square root of the number of frames averaged. The ability to image at adequate frame rates is again sacrificed, and the noise contribution from the camera electronics accumulates: the electronic noise, however small, also grows as the square root of the number of frames read out.
• Increase the excitation intensity. In some cases it is possible to simply increase the amount of light reaching the camera. This can be problematic for quantitative live-cell imaging, as it can cause photobleaching and phototoxicity. In addition, lasers or other high-intensity sources can increase the overall system cost.
• Increase the area of the photosite. A tried-and-true method of increasing the light flux per pixel is to sum pixels on the chip prior to digitization and readout, a process known as binning; the read-noise penalty is thereby incurred only once. On-chip binning is generally not available on scientific CMOS, and the method comes at the expense of spatial resolution when the pixel size is close to the resolution of the optical system. The sketch after this list illustrates how these trade-offs scale.
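Under a simple shot-noise-plus-read-noise model, these trade-offs can be expressed numerically. The short Python sketch below is an illustration with assumed numbers (10 e- of signal per frame, 1 e- rms read noise), not a description of any particular camera.

```python
import numpy as np

def snr(signal_e, read_noise_e, frames=1, binning=1):
    """Signal-to-noise ratio for a pixel collecting signal_e photoelectrons
    per frame with read_noise_e (e- rms) read noise per readout.
    frames  -- number of frames averaged (one readout per frame)
    binning -- on-chip binning factor (b x b pixels summed before readout,
               so the read-noise penalty is incurred only once per frame)."""
    total_signal = signal_e * frames * binning**2
    shot_noise = np.sqrt(total_signal)           # photon (shot) noise
    read_noise = read_noise_e * np.sqrt(frames)  # accumulates per readout
    return total_signal / np.sqrt(shot_noise**2 + read_noise**2)

# Hypothetical numbers: 10 e-/frame of signal, 1 e- rms read noise
print(snr(10, 1.0))               # single frame:        ~3.0
print(snr(10, 1.0, frames=16))    # 16-frame average:    ~12.1 (~sqrt(16) gain)
print(snr(10, 1.0, binning=2))    # 2x2 on-chip binning: ~6.2
```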
Computational imaging to the rescue
Noise reduction is among the oldest applications of signal processing; simple frame averaging is a trivial example. There are many challenges when processing data to reduce noise, particularly in scientific imaging, where preserving the quantitative nature of recorded pixel intensities, as well as key features such as edges, textures and low-contrast details, is imperative. Processing must be accomplished without introducing new image artifacts such as ringing, aliasing or blurring. Additionally, because noise tends to vary with the level of signal, many noise-reducing algorithms struggle to distinguish signal from noise, and as a consequence small details tend to be removed (Figure 1).
First-class denoising methods do exist, but they are computationally intensive, which tends to discourage their use: data must be post-processed, and parameters may require tuning for a given application. One such algorithm is SAFIR, developed at the French Institute for Research in Computer Science and Automation (Inria) and optimized for fluorescence microscopy in collaboration with the Institut Curie.
To eliminate the need for post-processing and parameter tuning, a commercial collaboration with Photometrics, a camera manufacturer based in Tucson, Ariz., has embedded an optimized version of SAFIR in a high-performance scientific camera using a powerful in-camera field-programmable gate array (FPGA). SAFIR preserves the finer details and features of biological samples without introducing image artifacts (Figure 2). It also meets the requirement for quantitative detection, as any useful algorithm must ensure that intensity values remain unchanged.
Figure 2. Acquired images before and after denoising are shown in the left and center panels. The difference image (right) illustrates that the removed noise retains the white noise characteristic of the original. The outlines of the cell can be seen as a faint change in texture from the removal of photon shot noise. Courtesy of the University of Arizona.
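SAFIR itself is proprietary and tuned for the FPGA, but the patch-comparison idea sketched in Figure 1 can be illustrated with a minimal nonlocal-means denoiser. The patch size, search window and smoothing parameter h below are assumptions for illustration, not SAFIR's actual settings.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=10.0):
    """Minimal nonlocal-means sketch of the patch-comparison idea in
    Figure 1 (not the SAFIR implementation). Each pixel is replaced by a
    weighted average of neighborhood pixels, weighted by how similar the
    patch around each neighbor is to the patch around the target pixel."""
    pad = patch // 2 + search // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    pr, sr = patch // 2, search // 2
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + pad, x + pad
            ref = padded[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            weights, values = [], []
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - pr:ny + pr + 1, nx - pr:nx + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    weights.append(np.exp(-d2 / h**2))  # similar -> near 1
                    values.append(padded[ny, nx])
            out[y, x] = np.dot(weights, values) / np.sum(weights)
    return out
```

The triple loop runs in pure Python for clarity and would be far too slow for live imaging; the camera's on-board FPGA performs more than 500 million such patch comparisons per second.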
Making faster, better
Scientific CMOS cameras can generate nearly a gigabyte of 16-bit data per second, and the amount of data that must be transferred, stored and processed is enormous. Fortunately, in many experiments the most valuable image data is sparse. Using the same on-camera FPGA, selecting many individual, potentially overlapping regions of interest is a straightforward way to ensure that only relevant data is transferred.
However, not knowing when or where the imaged object will appear makes predetermined regions of interest difficult to leverage when addressing the data glut. An example is localization-based superresolution microscopy, which requires a sparsity of emitting molecules so that the center of each emitter can be determined by fitting the observed pattern of fluorescence, as in the sketch below. Even with this high level of sparsity, the full image frame is transferred to the host, because the location of each emitter is not known a priori.
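As an illustration of the localization step, the sketch below estimates an emitter's subpixel position from a small region of interest using an intensity-weighted centroid, a simpler stand-in for the Gaussian fitting typically used; the background handling here is an assumption.

```python
import numpy as np

def centroid_localize(roi):
    """Estimate an emitter's subpixel (y, x) position within a small ROI
    via an intensity-weighted centroid. A simple stand-in for the
    Gaussian fitting used in localization microscopy."""
    ys, xs = np.indices(roi.shape)
    weights = roi - roi.min()          # crude background subtraction
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total
```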
On-camera computational intelligence addresses the full-frame transfer problem by locating each emitter and transferring only the surrounding region of interest. By sending only these regions to the host computer, the amount of data is drastically reduced. The reduction may be sufficient for lower-bandwidth interfaces such as USB 3.0 to replace the costly PCIe-based dual-channel Camera Link in use today. Streaming images to compressed folders is a simple way to see tenfold or greater reductions in data storage (Figure 3).
Figure 3. The PrimeLocate system automatically finds point-like objects in the image and transfers the surrounding regions to the host PC, reducing host bandwidth and data storage requirements. Points must be at least two pixels apart to be considered separate emitters. The number and size of the transferred regions are also under user control. A simple implementation is shown where the image is reconstructed for streaming to disk. Because a large portion of the background is removed, lossless compression reduces file sizes by 10 to 100×. Courtesy of the University of Arizona.
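The PrimeLocate firmware is not public, but the basic data-reduction idea can be sketched on the host side: find point-like emitters as local maxima above a threshold and keep only a small tile around each one. The function name, threshold and tile size below are hypothetical.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def locate_rois(frame, threshold, half_width=8, min_sep=2):
    """Find point-like emitters as local maxima above `threshold` and
    return only the small tile around each one, mimicking the kind of
    on-camera data reduction described above (a host-side sketch)."""
    # A pixel is a peak if it equals the local maximum of its
    # (2*min_sep + 1)-wide neighborhood and exceeds the threshold.
    local_max = maximum_filter(frame, size=2 * min_sep + 1)
    peaks = (frame == local_max) & (frame > threshold)
    tiles = []
    for y, x in zip(*np.nonzero(peaks)):
        y0, y1 = max(0, y - half_width), min(frame.shape[0], y + half_width + 1)
        x0, x1 = max(0, x - half_width), min(frame.shape[1], x + half_width + 1)
        tiles.append(((y0, x0), frame[y0:y1, x0:x1].copy()))
    return tiles  # only these small tiles need to cross the camera interface
```

For a sparse field of emitters, the pixels in the returned tiles can amount to orders of magnitude less data than the full frame, which is where the bandwidth and storage savings come from.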
Direction of convergence
The value that real-time signal processing brings to scientific imaging goes beyond perfecting the sensor and incremental camera performance. These capabilities can address fundamental physical problems: reducing the impact of photon shot noise to increase signal-to-noise ratio, accelerating localization microscopy, and even countering the blur of optical imaging systems through real-time deconvolution, as in the sketch below. The same computational resources can be applied to other algorithms as well, turning the camera from a passive data-capture device into an active participant in retrieving and decoding additional information from the sample.
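A standard way to attack optical blur computationally is Richardson-Lucy iteration. The following is a generic sketch assuming a known point-spread function, not a description of any camera's on-board algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20):
    """Generic Richardson-Lucy deconvolution: iteratively refine an
    estimate of the unblurred image given the system's point-spread
    function (PSF). Illustration only, not an on-camera algorithm."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```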
Meet the author
Robert LaBelle is vice president of marketing for Photometrics and QImaging, based in Tucson, Ariz.; email: rlabelle@photometrics.com.