
CAOS Smart Camera Captures Targets in Extreme Contrast Scenarios

A new camera technology, working in unison with CMOS sensors, smartly extracts pixel light intensity information from extreme-contrast scenes using time-frequency coding of selected agile pixels.

NABEEL A. RIZA, UNIVERSITY COLLEGE CORK

Imaging electromagnetic radiation is of fundamental importance to a number of fields, from medicine and the biological sciences to security and defense. Often, demanding contrast imaging scenarios arise that call for a high instantaneous linear dynamic range (HDR), in certain cases reaching 190 decibels (dB), and the ability to achieve extremely low interpixel crosstalk. Other requirements include secure camera operation to curtail the misuse of raw data output; adaptable spectrum usage; high camera speed; and the ability to achieve pixel selection and time integration control as well as adaptive spatial resolution.

For applications such as security and surveillance, capturing true scene pixel information, as opposed to slowly collecting high-resolution pixel data with missing scene zones, is vital.


Figure 1. The CAOS smart camera design using a CMOS photodetector array (PDA) as the Hybrid (H) hardware element. L1, L2, L3: Lenses. SM1, SM2, SM3: Smart modules for light conditioning. PD: Photodetector with amplifier. Courtesy of Nabeel A. Riza.

This has led to an increasing demand for a smart camera that can achieve true vision in pressing imaging scenarios through highly directional and adaptive image pixel sifting over specific regions of interest containing high-value targets. A survey of today's multipixel CMOS and CCD camera technologies reveals that photodetector array (PDA) sensors in general support linear dynamic ranges of around 60 dB. Using custom design techniques, higher dynamic ranges of 120 dB have been reached. These methods involve hardware modifications in the sensor chip aimed at increasing pixel size, often via a deeper quantum well; controlling pixel integration time through pixel resets that give a piece-wise linear response; or using log or linear-log response CMOS sensor chip designs. Additionally, multi-image capture processing has been deployed, in which multiple images are captured at various optical filter attenuation or integration-time settings and software estimates the final HDR image.
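
As a rough illustration of this multi-image estimation idea, the Python sketch below merges frames taken at several integration times, keeping only unsaturated pixels; the scene values, exposure set and saturation threshold are assumptions for the demo, not any vendor's algorithm.

```python
import numpy as np

# Hedged sketch of multi-image HDR estimation: several frames at different
# integration times are merged, using only unsaturated pixels, to estimate
# scene radiance. All values here are illustrative assumptions.

rng = np.random.default_rng(3)
radiance = rng.uniform(0.01, 100.0, (4, 4))    # true scene radiance map
exposures = [0.001, 0.01, 0.1, 1.0]            # integration times, s
full_scale = 1.0                               # sensor saturation level

frames = [np.clip(radiance * t, 0, full_scale) for t in exposures]

# Merge: average the radiance estimates from pixels below saturation
num = np.zeros_like(radiance)
den = np.zeros_like(radiance)
for frame, t in zip(frames, exposures):
    valid = frame < 0.95 * full_scale          # discard saturated pixels
    num += np.where(valid, frame / t, 0.0)
    den += valid
hdr = num / np.maximum(den, 1)
print(np.allclose(hdr, radiance))              # recovers the radiance map
```

Note the limitation discussed next: this merge assumes the frames are perfectly registered, which is why scene or camera motion produces ghosting.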

Fundamental limitations

Although these sensor technologies and techniques have their unique merits, they each have certain fundamental limitations. For instance, time-sequential multi-image capture processing works best when the camera is on a stationary tripod and the viewed scenes are static; otherwise, ghosting appears. For the best results, a small camera aperture with a large depth of field is needed; otherwise, the multi-image processing technique produces image artifacts, particularly for scenes with a shallow depth of field.

Log CMOS sensors generally have limited color reproduction, sensitivity and signal-to-noise ratios (SNRs) in the lower light level regions of their compressed log response. This leads to nonuniform imaging performance with stronger fixed pattern noise and longer response times. When compared to an all-linear response CMOS sensor that provides a linear mapping over the entire contrast range of the incident image, the typical lin-log and piece-wise linear (or overall nonlinear response) CMOS sensors have lower SNRs. In these two types of sensors, the log and lower slope response regions used to detect the brighter pixels produce reduced gray-scale levels due to voltage swing compression, which leads to lower contrast images and limited color reproduction. Image quality therefore varies across the detected image zone depending on which response curve (linear, log, high slope or low slope) is used in sensing the light irradiance levels for the sensor pixels. To put things in context, see the sidebar below, "CMOS Sensor Dynamic Range Ratings."

For the reasons above, there remains a challenge for cameras to reach extreme all-linear, instantaneous dynamic ranges approaching 190 dB with multicolor smart capture of targets of interest within extreme contrast images. This is a requirement in diverse scenarios from natural night scene settings to complex biological materials, to hostile terrains such as deserts, snow and outer space. One can also think of this challenge as enabling the smart and intrinsically secure capture of spectral and spatial signatures of targets of interest to empower higher reliability pattern recognition and classification of sought-after high-value targets observed by the camera.

Inspiration from multiple access RF signal wireless

A new type of camera technology — the coded access optical sensor (CAOS) smart camera — addresses these shortcomings, working in unison with CMOS, CCD and focal plane array (FPA) camera sensors to extract previously unseen images.

The premise behind CAOS borrows from advanced radio frequency (RF) multiple access wireless network technologies, whose devices and designs can detect very weak information signals at specific radio frequencies. With CAOS, agile pixels of light from the target region of interest of the incident image are captured in the camera and rapidly encoded like RF signals. This encoding is done in the time-frequency domain using an optical array device such as a multipixel spatial light modulator. The encoded optical signals are then simultaneously detected by one point optical-to-RF detector/antenna. The output of this optical detector undergoes RF decoding via electronic wireless-style processing to simultaneously recover the light levels for all the agile pixels in the image.
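
The Python sketch below is a minimal, hedged illustration of this encode/decode idea, assuming orthogonal Walsh-Hadamard CDMA-style codes; the pixel count, code length and noise figure are invented for the demo and are not the published CAOS parameters.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative sketch only: CDMA-style encoding of a few "agile pixels"
# onto one point-detector time signal, then correlation decoding.

rng = np.random.default_rng(0)
n_pixels = 4      # agile pixels selected in the image region of interest
code_len = 16     # time-frequency code length in chips

# Unknown pixel irradiances with extreme contrast (about 86 dB spread)
irradiance = np.array([1.0e4, 1.0, 3.0e3, 0.5])

# One orthogonal +/-1 code per agile pixel (skip the all-ones Hadamard row)
codes = hadamard(code_len)[1:n_pixels + 1].astype(float)

# The single point photodetector sees the coded sum of all pixels per chip
pd_signal = codes.T @ irradiance
pd_signal += rng.normal(0.0, 0.1, code_len)      # additive detector noise

# Decoding: correlate the PD time signal against each pixel's code
recovered = codes @ pd_signal / code_len
print(np.round(recovered, 2))    # approximately [10000, 1, 3000, 0.5]
```

Running the sketch recovers both the very bright and the near-noise-floor pixel levels from the same single-detector signal, which is the essence of the CAOS dynamic range argument.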

By contrast, CCD/CMOS/FPA cameras simply collect light from an image: photons are collected in the sensor buckets, or wells, and transferred as electronic charge values. There is no deployment of spatially selective time-frequency content of the photons. CAOS, therefore, represents a seismic shift in imager design, made possible by modern-day advances in wireless and wired devices in the optical and electronic domains using extremely large time-bandwidth product time-frequency domain signal processing. Notably, the spatial size, location and shape of each agile pixel in the smart pixel set sampling the incident image region of interest in the CAOS camera is controlled using prior or real-time image application intelligence. The data gathered by CAOS-mode imaging work in unison with other classic multipixel sensors and computational imaging methods operating within the CAOS hardware platform.

The CAOS smart camera forms a hybrid imaging platform where the agile pixel acquires a kind of space-time-frequency representation. Limited dynamic range image intelligence, for example, can be quickly gathered using classic compressive sensing. This computational technique is based on image spatial projection data combined with numerical optimization processing; it uses the same CAOS hardware platform. Other linear spatial transform computational methods can also be deployed within the CAOS smart camera by appropriately programming spatial masks, such as 2D spatial codes, on the spatial light modulator. These space-focused, spatial code-based methods are unlike the CAOS-mode that, instead, engages time-frequency coding of agile pixels in the image space.
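
As a hedged sketch of this compressive sensing mode, the Python example below recovers a sparse scene from a few random-mask projections using iterative soft-thresholding (ISTA); the mask type, sizes and solver settings are assumptions, not the specific algorithm used in the CAOS work.

```python
import numpy as np

# Illustrative sketch only: random +/-1 masks (realizable on a DMD as two
# complementary binary patterns) project the scene onto a few point-PD
# measurements; ISTA then recovers a sparse image estimate.

rng = np.random.default_rng(2)
n, m, k = 64, 24, 4           # pixels, measurements (m << n), sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 5.0, k)

A = rng.choice([-1.0, 1.0], (m, n)) / np.sqrt(n)   # random mask projections
y = A @ x_true                                      # point-PD measurements

# ISTA iterations: gradient step on ||y - Ax||^2, then soft-thresholding
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 1e-3
x = np.zeros(n)
for _ in range(2000):
    g = x + step * (A.T @ (y - A @ x))
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

support = np.nonzero(x_true)[0]
print("true:", np.round(x_true[support], 2))
print("est.:", np.round(x[support], 2))
```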

In summary, the CAOS smart camera is, intrinsically, a hybrid camera that works together with the CMOS/CCD/FPA sensor and computational imaging methods to extract smarter image information, where smarter refers to better spatial and spectral selectivity, faster speed, higher targeted pixel dynamic range and larger or more diverse spectral bands. The smarter concept extends to more optimized agile pixel shape, location, size and time duration, plus better camera security, higher robustness to bright-source blinding of scenes, and improved fault tolerance through the hybrid dual-channel camera design.

Incorporating digital micromirror device technology

A version of the CAOS smart camera called the CAOS-CMOS camera (Figures 1, 2) has been built and demonstrated1 using Texas Instruments' Digital Micromirror Device (DMD) spatial light modulator as the CAOS-mode time-frequency agile pixel encoder. To start the imaging operation, the DMD is programmed to direct the incident light to the camera arm containing the CMOS photodetector array, which gathers initial scene intelligence. This intelligence is then used to program the DMD in the CAOS-mode to seek out the desired high dynamic range pixel regions of the scene. This visible band camera demonstrated a 31-dB improvement in camera linear dynamic range over a typical commercial 51-dB linear dynamic range CMOS camera when subjected to three test targets that created a scene with extreme brightness as well as extreme contrast (>82 dB) high dynamic range conditions.


Figure 2. Laboratory prototype of the CAOS smart camera using a CMOS photodetector array (PDA) as the H (Hybrid) hardware element in the overall camera design. Courtesy of Nabeel A. Riza.


These controlled experimental hardware settings were deliberately chosen to allow the research team to clearly demonstrate the features of the smart CAOS camera design, such as when the limited linear dynamic range and noise floor of a deployed CMOS sensor do not allow imaging beyond a certain scene contrast level. When this occurs, the limited dynamic range image provided by the CMOS sensor is used to guide the CAOS-mode of the smart camera to successfully see the high dynamic range regions of the scene that were unseen by the CMOS sensor. Therefore, the 82-dB dynamic range for the CAOS camera was set by the highest contrast scene that could be produced in the lab environment for this first-time experiment.

In the first incoherent light imaging experiment, the CAOS camera was subjected to an extremely bright sample target on the left edge of the image view (Figure 3). The two targets to the right of the bright target were extremely dim, near the noise floor of the demonstrated camera. Yet the CAOS smart camera was able to correctly see all three targets without any attenuation of the incoming light from the imaged scene (Figure 3b).

Figure 3. An 82-dB high instantaneous linear dynamic range-scaled irradiance map of the CAOS-mode-acquired image, shown on a linear scale (a), where the two faint targets are not seen, and on a logarithmic scale (b), where all three targets are seen. Courtesy of Nabeel A. Riza.

Note that any attenuation of the light to eliminate saturation of the CMOS sensor sent the weak light image content into the noise floor of the CMOS sensor, making it impossible to see the weak light targets. In a second CAOS camera experiment, the team used a visible laser beam with 10,000,000:1 gradual linear optical attenuation control to generate a 140-dB extreme linear dynamic range image pixel contrast target incident on the camera imaging plane. Using this new extreme contrast test target and improved electronic signal capture and digital signal processing methods, the CAOS camera was able to successfully detect the target over a 136-dB camera linear dynamic range, a 40-dB advance in instantaneous linear dynamic range over the best state-of-the-art 94-dB linear dynamic range CMOS sensor of 2016. The camera also demonstrated dual-band imaging with better than -60-dB interband crosstalk when using a visible/shortwave-IR (SWIR) test target made up of a 2 × 2 array of three visible LEDs and one SWIR LED2 (Figure 4).
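
For reference, sensor dynamic range in decibels follows the 20 log10 convention, so the 10,000,000:1 linear attenuation range above corresponds to 140 dB:

$$\mathrm{DR_{dB}} = 20\log_{10}\!\left(\frac{I_{\max}}{I_{\min}}\right) = 20\log_{10}\!\left(10^{7}\right) = 140\ \mathrm{dB}$$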


Figure 4. The visible band image captured by the dual-band CAOS camera using a target scene of three visible LEDs and one shortwave-IR (SWIR) LED arranged in a 2 × 2 formation. The SWIR LED, as expected, is missing from the image created by the camera's visible channel. Courtesy of Nabeel A. Riza.

Multiple time-frequency coding modes

Complete electronic programmability allows the CAOS camera to perform as a smart spatial sampler of irradiance maps and enables high-performance electronic encoding and decoding of the agile pixel irradiance map. Much like wireless and wired communication networks, the agile pixel can operate in different programmable time-frequency coding modes such as code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA)3. CDMA and FDMA produce spread spectrum radio frequency signals from the point photodetector (PD), while TDMA is the staring-mode operation of the CAOS imager, reading one agile pixel at a time to produce a direct current signal.
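
As a hedged illustration of the FDMA mode, the short Python sketch below assigns each agile pixel its own carrier frequency, sums them at a single point detector, and recovers each pixel's level from the FFT of the detector signal; the sample rate, carriers and intensities are invented for the demo.

```python
import numpy as np

# Illustrative sketch only: FDMA-mode decoding, where each agile pixel's
# light is modulated at its own carrier frequency and an FFT of the single
# point-photodetector (PD) signal separates the pixels.

fs = 32_000.0                 # modulation sample rate, Hz (DMD-like rate)
n = 1024                      # samples in one PD capture window
t = np.arange(n) / fs

pixel_freqs = [1000.0, 2000.0, 3000.0]   # one carrier per agile pixel, Hz
irradiance = [5.0e3, 2.0, 7.0e2]         # extreme-contrast pixel levels

# Point PD output: sum of carrier-modulated pixel irradiances plus noise
pd = sum(a * np.cos(2 * np.pi * f * t)
         for a, f in zip(irradiance, pixel_freqs))
pd += np.random.default_rng(1).normal(0.0, 0.5, n)

# DSP decoding: FFT magnitude at each carrier bin estimates each pixel
spectrum = np.fft.rfft(pd) / (n / 2)     # scaled so |bin| = amplitude
for f, a in zip(pixel_freqs, irradiance):
    k = int(round(f * n / fs))           # carriers chosen to land on bins
    print(f"{f:.0f} Hz pixel: true {a:g}, decoded {abs(spectrum[k]):.2f}")
```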

For full impact, agile pixel codes should include CDMA, FDMA or mixed CDMA-FDMA codes that not only produce PD signals across a broad radio frequency spectrum but also engage sophisticated analog, digital and hybrid information coding techniques to provide isolation and robustness among the time-frequency codes used for optical array device pixel coding. The camera also uses advanced coherent electronic signal processing, such as time-frequency transforms implemented via digital signal processing (DSP), to suppress noise. DSP also provides signal gain for the point PD signal, enabling agile pixel decoding across an extreme contrast range of optical irradiance. Note that agile pixel space-time-frequency coding creates a highly secure image that only the authorized recipient can see. In addition, the CAOS camera inherently makes the best and most efficient use of the relatively large full quantum well capacity of the point detector. Such is not the case in prior-art PD-array-based cameras, where an incident bright extreme contrast image in most designs leaves many of the small-capacity, high-spatial-resolution quantum wells partially filled while over-filling many others, spilling charge into nearby wells and thus causing pixel saturation and interpixel crosstalk noise. In short, CAOS is also quantum well capacity efficient.

The speed of image acquisition in the CAOS-mode is limited mainly by the optical array device/spatial light modulator speed. Currently, Texas Instruments' DMD technology has a 32-kHz frame refresh rate, giving a code bit time of 31.25 microseconds. When encoding the agile pixels with CDMA/FDMA techniques, instantaneous simultaneous capture and processing of all agile pixels is implemented to enable faster image generation. Consider a basic design example: a 14-bit CDMA code per agile pixel, applied to 1000 simultaneous agile pixels (called a CAOS frame), with 10× code-length oversampling gives a PD integration time of 4.375 ms. Adding 10 ms for data processing creates a 14.375-ms image frame time, or nearly 70 frames/s of output, indicating that the CAOS camera can currently be designed to register real-time video at 60 CAOS frames/s.
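
To trace the arithmetic of this design example using the figures above:

$$14\ \text{bits} \times 10 \times 31.25\ \mu\mathrm{s} = 4.375\ \mathrm{ms}, \qquad \frac{1}{4.375\ \mathrm{ms} + 10\ \mathrm{ms}} \approx 69.6\ \mathrm{frames/s}$$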

Furthermore, the camera can be designed for faster-than-real-time video rates using customized agile pixels, codes and signal processing hardware. Imaging resolution of the CAOS camera depends on the size of the agile pixel deployed and can be as small as a single spatial light modulator pixel, for example, a single DMD micromirror, which is near seven microns in scale. The number of simultaneous agile pixels, crosstalk, dynamic range, signal-to-noise ratio and speed are interrelated parameters; the imaging platform needs to be carefully optimized for a particular smart imaging scenario, keeping in mind that CAOS works with, and not in competition with, existing PDA sensors within the CAOS smart camera unit.

Today, the smart camera platform is undergoing research and technology development, with design optimizations for specific commercial applications4, opening up a world of the as-yet unseen in uses as diverse as automobile machine vision systems for enhanced driver and road safety; security and surveillance; and inspection of medical, food and industrial specimens and products to foster a safer, longer and better human life.

Meet the author

Nabeel A. Riza holds a master's degree and a Ph.D. in electrical engineering from Caltech. His awards include the 2001 International Commission for Optics (ICO) Prize. In August 2011, he was appointed UCC Chair Professor of Electrical and Electronic Engineering; email: [email protected]. Additional information: www.nabeelriza.com.

References

1. N. A. Riza et al. (2016). CAOS-CMOS camera. Optics Express, Vol. 24, Issue 12, pp. 13444-13458.

2. N. A. Riza and J. P. La Torre (2016). Demonstration of 136-dB dynamic range capability for a simultaneous dual optical band CAOS camera. Optics Express, Vol. 24, Issue 26, pp. 29427-29443.

3. N. A. Riza et al. (2015). Coded access optical sensor (CAOS) imager. Journal of the European Optical Society: Rapid Publications, Vol. 10, 15021.

4. N. A. Riza (Jan. 31, 2017). The CAOS camera platform — Ushering in a paradigm change in extreme dynamic range imager design. SPIE Photonics West OPTO, Invited paper No. 10117-21.

CMOS Sensor Dynamic Range Ratings

State-of-the-art performance numbers available in 2016 from leading commercial CMOS sensor vendors are as follows:

- Omnivision CMOS sensor: linear instantaneous DR = 94 dB; dual exposure DR = 120 dB.
- New Imaging Technologies (NIT) log CMOS sensor: log response DR = 140 dB.
- Photonfocus LinLog CMOS sensor: linear operation DR = 60 dB; log response DR from 60 dB to 120 dB.
- Melexis-Cypress-Sensata Autobrite CMOS sensor: piece-wise linear DR = 150 dB.

Published: January 2017
