Multicamera Systems Capture Every Angle

Scott York, Photonic Science Ltd.

Recent technology advances, ranging from very wide fields of view to invisible wave bands and affordable ultralow-noise devices, are creating fascinating new applications for multicamera systems.

It’s an exciting time in the imaging business: Short-wave infrared (SWIR) cameras that image at 1.5 µm are providing new detection and measurement capabilities in this unfamiliar wave band and are proving to be a practical alternative to conventional night-vision systems for surveillance. Furthermore, the GigE interface has shown itself to be the ideal basis for multicamera systems, enabling the creation of composite images acquired across multiple wave bands with a higher dynamic range and 360° vision, all in real time. And at long last, CMOS sensors are starting to outperform CCDs in noise levels and dynamic range.


This stitched 180° panoramic SWIR image was taken from a video acquired at 25 fps. Courtesy of Photonic Science Ltd.


Engineers are often asked to create innovative solutions when existing technology can’t meet their clients’ application requirements. Over the past few years, customers’ needs have often been satisfied with alternative sensors and multiple-camera systems. When very high resolution is needed, in excess of the pixel count available from a single image sensor, or when the field of view has an extreme aspect ratio, a multicamera solution is the logical choice.

Traditionally, multicamera systems have presented a real data acquisition challenge: What’s the best way, using a single PC, to obtain images from eight or 16 non-pixel-locked digital cameras – in real time, in parallel – without resorting to multiple high-end frame grabbers?

Fortunately, Gigabit Ethernet has come to the rescue. This provides the simultaneous acquisition flexibility needed for multicamera systems and has the added physical advantage of using simple Cat5 network cabling rather than traditional multiway data cables. Just imagine 16 Camera Link cables – an arm-size bundle – connected into a single computer. In a GigE multicamera system, you can even have fundamentally different types of cameras within the mix. For example, data from 24-bit RGB devices and 16-bit monochrome cameras can be acquired simultaneously.
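
To make that acquisition architecture concrete, the sketch below shows one way a single PC can pull frames from several networked cameras in parallel: one grab thread per camera feeding a shared queue. The FakeCamera class is only a stand-in for whatever GigE Vision SDK the real system uses; the camera count, frame sizes and bit depth here are illustrative assumptions, not details of the system described in this article.

```python
"""Minimal sketch of parallel multicamera acquisition on one PC.

A real system would grab frames through a GigE Vision SDK; the FakeCamera
stand-in below generates synthetic frames so the thread/queue structure is
runnable on its own.
"""
import threading
import queue
import numpy as np

class FakeCamera:
    """Stand-in for a GigE camera handle (the real SDK calls are assumed)."""
    def __init__(self, cam_id, shape=(480, 640)):
        self.cam_id = cam_id
        self.shape = shape

    def grab_frame(self):
        # In a real system this call would block on the next frame from the network.
        return np.random.randint(0, 4096, self.shape, dtype=np.uint16)

def acquire(cam, out_q, n_frames=25):
    # One grab loop per camera, running in its own thread
    for _ in range(n_frames):
        out_q.put((cam.cam_id, cam.grab_frame()))

frame_q = queue.Queue()
cameras = [FakeCamera(i) for i in range(8)]        # eight cameras into one PC
threads = [threading.Thread(target=acquire, args=(c, frame_q)) for c in cameras]
for t in threads:
    t.start()

# Consumer: group frames by camera id; downstream stitching would consume these.
latest = {}
for _ in range(len(cameras) * 25):
    cam_id, img = frame_q.get()
    latest[cam_id] = img
for t in threads:
    t.join()
print("collected frames from cameras", sorted(latest))
```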

One noteworthy application for this GigE multicamera system is as a low-light-level driving aid for military vehicles. At night, with headlights off for covert operation, an army driver traditionally will resort to night-vision goggles, which have a limited field of view. Under armor, he likely can see only via periscope and may have an even more restricted view. A recent night-vision system, however, enables a full 360° panorama of the complete surroundings of the vehicle to be displayed to the driver in real time at 25 fps. The system comprises eight low-light cameras with GigE interfaces, networked into a single rugged acquisition PC. Four of the cameras are electron-multiplying (EM) CCD devices, and four use InGaAs SWIR imagers.


Night-vision comparison of EMCCD at high gain (upper image) with cooled InGaAs (lower image). Taken in the UK, this image was captured at 25 fps one hour after sunset in early April, in a shady location with no artificial lighting. Courtesy of Photonic Science Ltd.


The processing requirements for such a wide field of view pose some exceptional challenges. Most obviously, the individual images must be stitched to provide a continuous geometry in the final panorama. This is done by distorting each image with a cylindrical or spherical projection before globally registering the set of images. Such mapping is usually processor-intensive but can now be done on the fly within the camera hardware so that each unit provides live, fully distortion-corrected video.
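
As a rough illustration of that projection step, the sketch below warps a single frame onto a cylinder, assuming a simple pinhole model with focal length f in pixels, the optical axis at the image centre and nearest-neighbour sampling. These are simplifying assumptions for clarity; in the deployed system the mapping runs inside the camera hardware rather than on the host.

```python
"""Sketch of a cylindrical-projection warp applied before stitching."""
import numpy as np

def cylindrical_warp(img, f):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Interpret the output pixel grid as (angle, height) coordinates on the cylinder
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    theta = (u - cx) / f                  # horizontal angle of each output column
    hgt = (v - cy) / f                    # normalised height of each output row
    # Back-project each cylinder point onto the flat sensor of the original camera
    x_src = f * np.tan(theta) + cx
    y_src = f * hgt / np.cos(theta) + cy
    xi = np.clip(np.round(x_src).astype(int), 0, w - 1)
    yi = np.clip(np.round(y_src).astype(int), 0, h - 1)
    valid = (x_src >= 0) & (x_src < w) & (y_src >= 0) & (y_src < h)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out

# Example: warp a synthetic 8-bit frame with an assumed 500-pixel focal length
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
panorama_tile = cylindrical_warp(frame, f=500.0)
```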

This geometrical stitching does not address all the image-matching requirements, however. Adjacent images must have the same signal intensity across the boundaries, and color images must have the same white-balance settings. This immediately introduces an optimization question.

All cameras can be matched for intensity if they are operated with the same global integration period and gain settings, but this does not necessarily produce the best image quality. A panoramic night-vision system may have to view a bright scene in one section of its 360° field of view and a very dark scene in another section. Global exposure control will result in overexposure of the bright part, underexposure of the dark part, or both. To overcome this, an effective but more complex scheme is to allow each camera to provide the optimum exposure for its own part of the scene.

Systems comprising intensified or EM night-vision cameras ensure that a usable image with a good signal-to-noise ratio is acquired, even within the dark shadow areas, because the first-stage gain is boosted there. Inevitably, though, the raw images from the set of cameras do not then match in intensity. To fix this, a postprocessing step is applied so that the intensities of the final images match at the borders while most of the first-stage gain differences are still preserved. Effectively, the final image has a gain or exposure setting that varies across the image plane according to the intensity of the local scene, which is practically impossible to achieve with a single sensor.
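
A minimal version of that border-matching step might look like the sketch below: each tile keeps the exposure its own camera chose, a per-tile multiplicative correction makes the overlapping strips agree, and a feathered seam blends them into one strip. The overlap width, median-based gain estimate and linear feathering are illustrative choices, not the production algorithm.

```python
"""Sketch of border intensity matching between two adjacent panorama tiles."""
import numpy as np

def match_and_blend(left, right, overlap):
    """Rescale the right tile so its overlap strip matches the left tile,
    then feather-blend the seam into a single panorama strip."""
    l_strip = left[:, -overlap:].astype(float)
    r_strip = right[:, :overlap].astype(float)
    # Multiplicative gain that makes the median intensities agree at the border
    gain = np.median(l_strip) / max(np.median(r_strip), 1e-6)
    right_c = right.astype(float) * gain
    # Linear feathering weights across the overlap region
    w = np.linspace(1.0, 0.0, overlap)[None, :]
    seam = l_strip * w + right_c[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap].astype(float), seam, right_c[:, overlap:]])

# Example with two synthetic tiles acquired at different effective gains
a = np.full((480, 640), 100.0)
b = np.full((480, 640), 160.0)            # same scene, brighter camera setting
pano = match_and_blend(a, b, overlap=64)  # seam intensities now agree
```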


The SWIR-sensitive cameras mentioned above are a recent technology that has found several intriguing applications. Devices typically have a spectral sensitivity in the traditionally unexploited 1000- to 1700-nm wave band. From a night-vision point of view, the exciting aspect of SWIR imaging is that the night sky does not appear particularly dark.


Silicon solar cell electroluminescence, imaged by a cooled SWIR camera during a 15-ms integration period, shows a crack and background defects. Courtesy of Photonic Science Ltd.


A phenomenon called nightglow, or night-sky radiance, caused by the interaction of the solar wind with ions in the high atmosphere, means that a moonless, clear night sky is relatively bright in the 1200- to 1800-nm wave band. This distributed source also results in quite uniform illumination without harsh shadows, unlike moonlight, for example. Night vision can thus be achieved with a SWIR-sensitive camera but without an amplifying sensor such as an intensifier or EMCCD.

Day or night

Cooling the sensor is critical for night vision, but the SWIR sensor is not just a low-light device. In bright daylight conditions, the SWIR camera provides an image similar to that from a conventional visible wave band monochrome camera, albeit with vegetation appearing very light due to the high reflectivity of chlorophyll in the infrared. A cooled SWIR camera is thus an effective technology for a 24-hour surveillance system.

One application for SWIR cameras is remote 2-D temperature monitoring of components inside the Joint European Torus fusion energy experiment. Another is solar cell inspection, where they detect device defects by imaging the electroluminescence emitted when the devices are forward-biased.

Turn the panoramic night-vision concept inside out, and you have the Photonic Science Ltd. Cyclops imaging system, a neutron diffractometer installed at the Laue-Langevin Institute in Grenoble, France. A beam of thermal neutrons is aimed at the experimenters’ crystal samples. The diffracted beams are detected by a set of neutron scintillators that form an octagonal drum completely surrounding the sample. Sixteen synchronized low-light-level intensified cameras capture the light emission from these scintillators and send their data to a single acquisition PC that combines and stitches the images into a single composite image for analysis. Here the key requirements are not frame rate and optimum automatic gain settings but matched quantitative measurement over long exposure times.


This panoramic image of neutron diffraction spots from a ruby crystal was obtained with the Photonic Science Cyclops detector. Courtesy of Laue-Langevin Institute.


Major technical advances are also being made in CMOS sensors, which, after many years of development, are performing on par with – or better than – CCDs for professional imaging applications. Low readout noise (of a few electrons rms) is now possible even at high pixel frequencies, thanks to per-column correlated double sampling and analog-to-digital conversion. This massive duplication of readout structures effectively reduces the individual operating frequency of each amplifier from “video” to “slow scan” rate, dropping the readout noise to a figure similar to that of the best scientific CCDs.
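
The effect of that per-column parallelism is easy to quantify with some assumed example numbers (these are illustrative figures, not taken from the article):

```python
# Back-of-envelope comparison: one output amplifier reading every pixel versus
# one ADC per column, each reading only its own column.
cols, rows, fps = 2048, 2048, 50           # assumed sensor format and frame rate

single_output_rate = cols * rows * fps     # single amplifier must run at this rate
per_column_rate = rows * fps               # each column ADC runs at this rate

print(f"single-output pixel rate : {single_output_rate / 1e6:.0f} MHz")
print(f"per-column pixel rate    : {per_column_rate / 1e3:.0f} kHz")
# ~210 MHz versus ~100 kHz: each column amplifier runs roughly 2000x slower,
# so its bandwidth, and hence its read noise, can approach slow-scan CCD levels.
```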

In the case of the much-discussed scientific CMOS devices, such low noise coupled with digitally combined, simultaneous high- and low-gain outputs should provide a higher dynamic range within a single image than could be achieved with any conventional CCD. Furthermore, back illumination is becoming commonplace in CMOS devices, driven by the massive demand for mobile phone cameras. This technology, traditionally applied only to high-end CCDs, is now in the phone in your pocket and will soon be the norm in the new generation of professional CMOS devices.
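
One simple way to picture that dual-gain combination is sketched below: where the high-gain channel is unsaturated its low-noise value is kept, and elsewhere the low-gain sample is rescaled onto the same photometric scale. The gain ratio and saturation threshold are assumed example values, not the specification of any particular sensor.

```python
"""Sketch of combining simultaneous high- and low-gain readouts into one
high-dynamic-range frame."""
import numpy as np

GAIN_RATIO = 16.0        # assumed: high-gain output is 16x more sensitive
HG_SATURATION = 60000    # assumed: high-gain channel clips above this raw value

def combine_dual_gain(high_gain, low_gain):
    """Keep the low-noise high-gain sample where it is unsaturated; otherwise
    fall back to the low-gain sample rescaled to the high-gain scale."""
    hg = high_gain.astype(float)
    lg = low_gain.astype(float) * GAIN_RATIO   # bring low gain onto the HG scale
    return np.where(high_gain < HG_SATURATION, hg, lg)

# Example: a bright pixel saturates the high-gain channel, a dim pixel does not
high = np.array([[500, 65535], [1200, 65535]], dtype=np.uint16)
low  = np.array([[ 31,  5000], [  75, 15000]], dtype=np.uint16)
hdr = combine_dual_gain(high, low)   # dim pixels keep low-noise values,
                                     # bright pixels come from the low-gain path
```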

Meet the author

Scott York is technical director at Photonic Science Ltd. in East Sussex, UK; e-mail: info@photonic-science.com.


Published: October 2011