Imaging in the Blink of an Eye
HANK HOGAN, CONTRIBUTING EDITOR
CMOS cameras and sensors are getting faster. That’s good news for researchers and for the autonomous vehicles market, among others. For researchers, faster imaging enhances spectroscopy, improves resolution, and makes it easier to capture rapidly decaying fluorescence. For self-driving vehicles, speedier imaging makes distance determination more accurate.
But high-speed imaging — sometimes taking only nanoseconds — produces more data, which presents challenges when moving, storing, and analyzing information. Fortunately, interface advancements and new imaging techniques are easing these challenges. And added imaging intelligence is on the horizon, which is expected to ease the data burden further.
The inspection of electronic printed circuit boards requires high-speed area scanning, and so could benefit from new, faster CMOS cameras. Courtesy of Basler AG.
Partly as a result of advancements in mainstream technology, the definition of high-speed imaging is expanding. One category of high-speed camera follows the conventional model, as a successor to the old film-based approach.
“Traditionally, high-speed cameras record at a very high frame rate for the purpose of playing back at a reduced frame rate,” said Peter Carellas, CEO of high-speed camera maker iX Cameras, headquartered in Woburn, Mass. The company’s top-end product captures 12,500 fps at full 1080p HD resolution directly into local memory, later moving information to a computer.
A second category of high-speed camera streams live video, offering real-time capture and analysis capabilities. While faster buses make it possible to stream and capture more rapidly than before, such systems are frame-rate limited compared to the capture-and-forward approach.
According to Carellas, high-speed cameras are generally designed to move large amounts of data, and top-end cameras record raw, uncompressed images at 12 bits per pixel. If captured in color, every pixel in an image takes 36 bits.
“Recording, moving, storing, and processing multimegabyte 36-bit images at thousands of frames per second in the visible, near-IR, and thermal IR spectra is a serious engineering challenge,” Carellas said.
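The scale of the challenge follows from simple arithmetic. A minimal sketch, using the figures cited in the article (1080p HD, 36-bit color pixels, 12,500 fps); the helper function is illustrative, not from any camera SDK:

```python
# Back-of-envelope data rate for raw, uncompressed high-speed capture.

def raw_data_rate_gbps(width, height, bits_per_pixel, fps):
    """Return the uncompressed data rate in gigabytes per second."""
    bits_per_frame = width * height * bits_per_pixel
    bytes_per_second = bits_per_frame / 8 * fps
    return bytes_per_second / 1e9

# 1080p, 36-bit color, 12,500 fps: roughly 117 GB/s of raw data,
# which is why such cameras record to local memory first.
rate = raw_data_rate_gbps(1920, 1080, 36, 12_500)
```

At well over 100 GB/s, this stream far exceeds what any current camera interface can move in real time, which is why this class of camera captures to local memory and forwards the data afterward.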
Faster CMOS imaging allows monitoring and analysis of rapidly moving processes, such as paper manufacturing. Courtesy of Teledyne DALSA.
The pursuit of higher-speed imaging involves trade-offs, said Michael DeLuca, product marketing manager in the industrial solutions division of the intelligent sensing group of ON Semiconductor. The Phoenix-based image sensor manufacturer makes standard products that are capable of 800 fps and custom ones that can go significantly faster, according to DeLuca.
“The thing that’s really important in understanding high speed is that frame rate combines directly with resolution to determine bandwidth,” he said.
Users can therefore choose between pixel number and image rate. A sensor may be capable of capturing 25 million pixels, which can provide a high-resolution image — but at less than 100 fps. Alternatively, the image can be windowed, or cropped, down to many fewer pixels, but these will be refreshed at a significantly faster rate.
So, in an inspection or other machine vision application, certain aspects of the processing task can be performed at high spatial resolution. Others can use high temporal resolution, courtesy of a high frame rate.
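The windowing trade-off above can be sketched in a few lines. This is a simplified model that assumes pixel throughput is the only constraint; the 25 MP at roughly 100 fps starting point comes from the article, while the 2 MP window size is a made-up example:

```python
# Illustrative windowing trade-off: with sensor bandwidth fixed, the
# achievable frame rate scales inversely with pixels read per frame.

def windowed_fps(full_pixels, full_fps, window_pixels):
    """Frame rate after cropping, assuming constant pixel throughput."""
    pixel_bandwidth = full_pixels * full_fps   # pixels per second
    return pixel_bandwidth / window_pixels

# Cropping a 25 MP, 100 fps sensor to a hypothetical 2 MP window
# raises the frame rate by the same 12.5x factor the pixel count fell.
fps = windowed_fps(25_000_000, 100, 2_000_000)   # 1,250 fps
```

In practice, readout architecture limits how freely the window can be placed and sized, but the inverse relationship holds as a first approximation.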
With the right setup, then, sensor bandwidth can be parceled out in various ways — among resolutions in space, time, and even wavelength. This balance can be adjusted. According to DeLuca, the key is to capture the necessary data, thereby providing users the ability to see what is necessary. He said to keep an eye on computer-supported analysis via artificial intelligence (AI) in the future. High-speed cameras will make it possible to monitor events that happen too fast for people to follow, and machine learning will help pinpoint what information needs to be monitored and how it should be viewed.
The push for higher imaging rates has implications for the camera’s sensor and lighting system. As frame rates increase, the amount of light captured per frame and per pixel decreases — a result of the shorter exposure time.
One way to counter the reduction in photons is to increase the illumination, but this method has its limits. Another solution is to engineer the sensor and camera in a way that improves system performance, said Doreen Clark, senior product manager at high-speed camera maker Vision Research of Wayne, N.J. The company has products that run up to 25,000 fps at full resolution.
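The light-budget penalty can be quantified with a shot-noise sketch. Under continuous illumination, the exposure per frame is at most 1/fps, so the photon count per pixel falls as the frame rate rises, and shot-noise-limited SNR falls as the square root of the photon count. The photon flux below is a made-up illustration, not a measured figure:

```python
import math

# Shot-noise-limited SNR versus frame rate, assuming the exposure time
# equals the full frame period (no readout overhead).

def shot_noise_snr(photon_flux_per_s, fps):
    """SNR for a pixel whose signal is limited only by photon shot noise."""
    photons = photon_flux_per_s / fps
    return math.sqrt(photons)

flux = 1e9   # hypothetical photons per pixel per second
snr_slow = shot_noise_snr(flux, 1_000)    # at 1,000 fps
snr_fast = shot_noise_snr(flux, 25_000)   # at 25,000 fps
# A 25x higher frame rate costs a factor of sqrt(25) = 5 in SNR.
```

This square-root penalty is why sensor designers chase lower read noise and higher sensitivity as frame rates climb, rather than relying on brighter lighting alone.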
CMOS sensor advancements enable faster imaging, allowing cameras to capture process details with greater temporal resolution. Courtesy of ON Semiconductor.
According to Clark, one goal for sensors is to reduce noise and improve sensitivity. “The faster you go, the harder it is to get a good, clear, noise-free image,” she said. “So, the inherent trend is to go faster but try to mitigate the noise or get more dynamic range.”
New applications
The improved capabilities and falling costs of high-speed cameras are spawning new applications, Clark added. For instance, measuring the strains on rotating tires can be accomplished by putting a speckle pattern on a tire and seeing how the pattern changes as the tire spins. Capturing this information requires high frame rates, along with software used for analysis.
Other applications that benefit from high-speed imaging are flow cytometry and microfluidics — a consequence of the field of view shrinking as microscope magnification increases. A smaller field of view means cells and other objects are visible for less time, so imaging must speed up.
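The dwell-time arithmetic behind this can be sketched as follows. A cell crossing the field of view is visible for the field width divided by the flow speed, so the frames captured per cell scale with frame rate. All numbers here are made-up examples, not figures from any instrument:

```python
# Illustrative dwell-time calculation for imaging in a flow system.

def frames_per_cell(fov_width_m, flow_speed_m_s, fps):
    """Number of frames captured while a cell crosses the field of view."""
    dwell_time_s = fov_width_m / flow_speed_m_s
    return dwell_time_s * fps

# A hypothetical 200 um field of view with cells moving at 1 m/s:
# the cell is in view for only 200 microseconds.
few = frames_per_cell(200e-6, 1.0, 1_000)     # ~0.2 frames: usually missed
many = frames_per_cell(200e-6, 1.0, 25_000)   # ~5 frames per cell
```

Halving the field of view at higher magnification halves the dwell time, so the frame rate must double just to keep the same number of looks at each cell.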
A 512 × 512 single-photon avalanche diode (SPAD) array (a) captured this fluorescence image (b). SPAD arrays have achieved rates greater than 150,000 fps. Courtesy of Edoardo Charbon and IEEE.
The trade-offs between speed and resolution that are inherent in high-speed imaging are shifting as a result of changes in interfaces that significantly speed up data transfer, according to Glen Ahearn, sales and application support manager at Teledyne DALSA in Waterloo, Ontario, Canada. The company makes both sensors and cameras, with its line-scan cameras offering a linewidth of 16K pixels at up to 300K lines/s.
According to Ahearn, these interface advancements benefit two main application areas most: process monitoring and motion analysis, which ranges from the study of golf swings to the understanding of biological systems. Process monitoring is also relevant to the paper conversion process in which bulk quantities of paper are customized — slit, printed, or coated — to customer specifications. A break in the stream of product while it is being converted can mean hundreds of feet of paper end up on the factory floor. Imaging the production process with fine temporal resolution from multiple viewpoints can help prevent this type of waste.
“You can digitally roll back the video to the point before where the break occurred and watch in slow motion how the break happened,” Ahearn said, “and, hopefully, determine how to prevent future breaks.”
In the future, AI could play a role in such analysis, making it possible to detect small, initial signs of trouble. It may even be feasible to push analysis out to the edges of the control-and-monitoring system, allowing a real-time or near real-time response.
The need to pull data off the camera runs up against bandwidth limits, particularly if responses to the images are to be in real time. Advancements in interface standards, however, offer ways around limits by enabling faster data transfer rates. Camera Link HS, which can deliver up to 15.625 GB/s — that is, more than 23× the speed of the original full Camera Link configuration — is one example.
CoaXPress 2.0, which has just been released, is another interface standard, and it’s twice as fast per channel as its predecessor. It can reach data rates of 6.25 GB/s when running over four copper cables. This transfer rate is enough to operate a 10-bit, 12-MP sensor at 300 fps.
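The article's CoaXPress 2.0 figures can be checked with a quick calculation; the helper function is illustrative:

```python
# Verifying that a 10-bit, 12 MP sensor at 300 fps fits within the
# 6.25 GB/s aggregate rate quoted for CoaXPress 2.0 over four cables.

def sensor_rate_gbps(pixels, bits_per_pixel, fps):
    """Uncompressed sensor output in gigabytes per second."""
    return pixels * bits_per_pixel / 8 * fps / 1e9

rate = sensor_rate_gbps(12_000_000, 10, 300)   # 4.5 GB/s
fits = rate <= 6.25                            # leaves headroom for overhead
```

At 4.5 GB/s of payload against a 6.25 GB/s link, the margin also has to absorb protocol overhead, which is why the quoted sensor configuration sits comfortably below the raw interface rate.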
The CoaXPress 2.0 standard will be the basis of upcoming high-speed cameras from Basler AG, an Ahrensburg, Germany-based machine vision supplier. Due to launch in the third quarter of 2019, the new cameras will include an interface card. According to Sean Pologruto, an applications engineer at Basler, getting everything from one supplier will ease integration.
He added that the new high-speed camera and interface hardware make it possible to replace multiple slower line- or area-scan cameras, reducing system complexity. “Combining the newest interface technology with state-of-the-art sensor generations, we can offer our customer more cost-effective solutions,” Pologruto said.
Challenges to tradition
Finally, researchers are looking into ways to entirely sidestep the processing challenges of high-speed imaging by using techniques that differ from traditional CMOS sensors and cameras, which put captured light through a multibit analog-to-digital conversion and subsequent processing.
For instance, Edoardo Charbon, an engineering professor at the research institute and university EPFL (École Polytechnique Fédérale de Lausanne), is investigating cameras based on single-photon avalanche diodes, or SPADs. The highest speeds his group has achieved with this technology top 150,000 fps.
SPADs register the presence of a photon, so they convert the incoming light directly into a single-bit digital output — a “1” if a photon is there, and a “0” if not. In effect, some of the image analysis is done automatically on the chip. Such a technology could be useful in x-ray spectroscopy when investigating the internal structure of molecules and atoms. It also could be useful in lidars that help self-driving cars navigate in a 3D world. Lidar uses the time of flight of light to determine distance, and the finer a detector can slice up time, the better the distance resolution.
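The time-of-flight relationship is simple: distance is the speed of light times the round-trip delay, divided by two. A minimal sketch of why finer timing means finer ranging:

```python
# Time-of-flight ranging: d = c * t / 2, where t is the round-trip
# delay of the light pulse. The detector's timing resolution sets the
# smallest distinguishable step in distance.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance to a target from the round-trip time of flight."""
    return C * round_trip_s / 2

# A 100 ps timing bin corresponds to about 1.5 cm of range resolution,
# which is the kind of precision automotive lidar needs.
resolution_m = tof_distance_m(100e-12)
```

Cutting the timing bin in half halves the range step, which is why single-photon detectors with picosecond-scale timing are attractive for lidar.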
Looking forward, Charbon sees the capabilities of high-speed imaging being put to additional uses. For example, an imager could capture wavelength or polarization or time of arrival or other parameters. This would yield information beyond the level provided by today’s sensors. Such capabilities could be possible, in part, because high-speed imaging enables the collection of additional data — with less impact on space-time resolution.
The extra information would be useful, Charbon said, “not necessarily for 3D reconstruction, but also for understanding what you’re looking at. Maybe the surface that you’re looking at — the structure, the texture.”