
A Camera with 12,616 Lenses

STANFORD, Calif., March 19, 2008 -- A digital camera is being developed with 12,616 microlenses -- each in effect a tiny camera -- that can take photos in a kind of super 3-D for potential use in facial recognition, biological imaging, and 3-D printing, among other applications.

Electronics researchers at Stanford University, led by electrical engineering professor Abbas El Gamal, are developing the digital camera around their multiaperture image sensor. Traditional digital cameras have one main lens, known as the objective lens, which focuses an image directly on the camera's image sensor, producing a flat, 2-D photo. The objective lens of the multiaperture camera instead focuses its image about 40 µm above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's tiny cameras, producing overlapping views, each from a slightly different perspective, just as your left eye sees things differently from your right.
[Image: Stanford University electronics researchers (l-r) Philip Wong, Abbas El Gamal and Keith Fife are developing a digital camera that sees the world through thousands of tiny lenses, providing an electronic "depth map" containing the distance from the camera to every object in the picture. (Photos courtesy Stanford University)]
The researchers have shrunk the pixels on the sensor to 0.7 microns (millionths of a meter), several times smaller than pixels in standard digital cameras, and have grouped them in arrays of 256 pixels each. They're now preparing to place a tiny lens atop each array.

With these thousands of tiny lenses, the camera can provide an electronic "depth map" containing the distance from the camera to every object in the picture, resulting in a super 3-D image with every object in focus.
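
To make the geometry concrete, here is a minimal sketch of how distance can be triangulated from the pixel shift (disparity) between two overlapping subcamera views, using the standard pinhole relation z = f·b/d. This is not the Stanford group's algorithm: the function name is invented, the 40-µm focal distance echoes the article's figure, and the 11.2-µm baseline assumes adjacent 16 × 16 arrays at the stated 0.7-µm pixel pitch.

```python
# Minimal sketch (not the published algorithm): triangulate depth from the
# disparity between two overlapping subcamera views via z = f * b / d.
import numpy as np

def depth_from_disparity(disparity_px, focal_um=40.0, baseline_um=11.2,
                         pitch_um=0.7):
    """disparity_px: per-pixel shift between two co-registered views.
    focal_um:    lens-to-sensor distance; 40 um echoes the article's figure.
    baseline_um: microlens spacing, assumed 16 px * 0.7 um (illustrative).
    pitch_um:    pixel pitch, 0.7 um per the article."""
    d_um = disparity_px * pitch_um                    # shift in micrometers
    z = np.full_like(disparity_px, np.inf, dtype=float)
    np.divide(focal_um * baseline_um, d_um, out=z, where=d_um > 0)
    return z                                          # distance in micrometers

# A larger shift between views means a closer object:
print(depth_from_disparity(np.array([1.0, 4.0, 16.0])))
# -> [640. 160.  40.]  (micrometers, under this toy geometry)
```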

"It's like having a lot of cameras on a single chip," said Keith Fife, a graduate student working with El Gamal and another electrical engineering professor, H.-S. Philip Wong. In fact, if their prototype 3-MP chip had all its microlenses in place, they would add up to 12,616 "cameras."

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes. Other possible applications include biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.

"You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said.


The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.
[Image: The testing platform for the multiaperture image sensor chip.]
Their multiaperture camera would look and feel like an ordinary camera, or even like the smaller camera in a cell phone. The cell phone aspect is important, Fife said, given that "the majority of the cameras in the world are now on phones."

The sensor could even be deployed naked, with no objective lens at all: placed very close to an object, each microlens would take its own photo directly. It has been suggested that a very small probe could be placed against the brain of a laboratory mouse, for example, to detect the location of neural activity.

Other researchers are headed toward similar depth-map goals from different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences from which the distances of objects can be inferred. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers and another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong said they believe their multiaperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality: all 256 pixels in a given array detect the same color. In an ordinary digital camera, red pixels may sit next to green pixels, leading to undesirable "crosstalk" between the pixels that degrades color.
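
One way to see the difference is to count how often adjacent pixels carry different colors, since those borders are where crosstalk mixes channels. The sketch below compares a standard RGGB Bayer mosaic with a per-array layout, assuming the 256-pixel arrays are 16 × 16 (the article gives only the pixel count); the metric is a rough proxy, not anything from the paper.

```python
# Rough proxy for crosstalk exposure (illustrative): the fraction of
# horizontally/vertically adjacent pixel pairs with different colors.
import numpy as np

def cross_color_fraction(colors):
    h = colors[:, 1:] != colors[:, :-1]      # horizontal neighbor pairs
    v = colors[1:, :] != colors[:-1, :]      # vertical neighbor pairs
    return (h.sum() + v.sum()) / (h.size + v.size)

bayer = np.tile([[0, 1], [1, 2]], (32, 32))            # RGGB mosaic, 64x64
arrays = np.kron(np.arange(16).reshape(4, 4) % 3,      # one color per
                 np.ones((16, 16), dtype=int))         # assumed 16x16 array

print(cross_color_fraction(bayer))   # 1.0  -- every neighbor differs
print(cross_color_fraction(arrays))  # ~0.05 -- only the seams between arrays
```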

The sensor also can take advantage of smaller pixels in a way that an ordinary digital camera cannot, El Gamal said, because camera lenses are nearing the optical limit of the smallest spot they can resolve. Using a pixel smaller than that spot will not produce a better photo. But with the multiaperture sensor, smaller pixels produce even more depth information, he said.
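
To put a number on that limit: for a diffraction-limited lens at a hypothetical f/2.8 in green light (wavelength about 0.55 µm), the Airy-spot radius is roughly 1.22 × 0.55 µm × 2.8 ≈ 1.9 µm, already well above the sensor's 0.7-µm pixel pitch.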

The technology also may aid the quest for the huge photos possible with a gigapixel camera -- that's 140 times as many pixels as today's typical 7-MP cameras. The first benefit of the technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip. The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multiaperture sensor provide backups when pixels fail.
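
The redundancy argument can be sketched directly: if the same scene point appears in at least four subcamera views, as the article says, a failed pixel in one view can be replaced by the median of the surviving observations. Stacking pre-registered views into a single array, and the masked-median repair itself, are assumptions made for illustration.

```python
# Illustrative dead-pixel repair from overlapping views (assumes the
# subimages are already co-registered, so axis 0 indexes the same scene
# point in every view).
import numpy as np

def repair_dead_pixels(views, dead):
    """views: (n_views, H, W) stacked observations of the same scene points.
    dead:  (n_views, H, W) boolean mask, True where a pixel has failed."""
    masked = np.ma.masked_array(views, mask=dead)
    return np.ma.median(masked, axis=0).filled(0)  # median of good samples

views = np.full((4, 2, 2), 100.0)        # four agreeing observations
dead = np.zeros((4, 2, 2), dtype=bool)
dead[0, 0, 0] = True                     # one failed pixel in view 0
views[0, 0, 0] = 0.0                     # ...which reads garbage
print(repair_dead_pixels(views, dead))   # still 100 everywhere
```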

The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip. The finished product may cost less than existing digital cameras, the researchers said, because the quality of a camera's main lens will no longer be of paramount importance.

"We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.

For more information, visit: http://isl.stanford.edu/groups/elgamal/multiap.html

Published: March 2008
