As camera phones become ubiquitous, consumer demand for a photographic experience similar to that of traditional digital cameras is growing. Coupled with the ready availability of high-definition displays, this demand has translated into a requirement for higher-resolution cameras in mobile phones. However, handset design aesthetics impose a much smaller form factor on the miniature camera modules built into handsets than can be accommodated by reusing the technology found in digital still cameras.

One of the most challenging aspects of designing a high-resolution camera for a mobile phone is the limitation on the overall height of the camera, measured from the top of the lens to the back of the camera substrate. The typical target height is 6 mm or less, unless a more expensive folded-optics design is considered. Given the angular acceptance of CMOS image sensor pixels, the largest sensor that can be used with such a thin camera measures approximately 4.5 mm on the diagonal. To increase the resolution without increasing the height of the camera (or the thickness of the phone), more pixels must fit into the array defined by this diagonal. With a 2.2 × 2.2-μm pixel, 2-megapixel sensors can be used in these thin cameras. Achieving 3.2-megapixel resolution requires a 1.75 × 1.75-μm pixel, and 5-megapixel resolution requires a 1.4 × 1.4-μm pixel.

Unfortunately, these ever-shrinking pixels have performance limitations. Despite the use of more advanced semiconductor process geometries to shrink interconnect lines, and despite increased transistor sharing between pixels, the sensitivity of pixels below 2.0 × 2.0 μm decreases with further size reductions, resulting in cameras with a less-than-optimum operating range. Thus, the signal-to-noise performance of a camera with smaller pixels is inferior to that of a camera with larger pixels.
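The pixel-pitch-to-megapixel figures above can be checked with a few lines of arithmetic. The sketch below assumes a square-pixel sensor with a 4:3 aspect ratio (the article gives only the 4.5-mm diagonal); the function name is illustrative.

```python
# Rough resolution check for a 4.5-mm-diagonal mobile sensor.
# Assumption: square pixels and a 4:3 aspect ratio (the article
# specifies only the diagonal size).

def megapixels(pixel_pitch_um, diagonal_mm=4.5, aspect=(4, 3)):
    """Approximate pixel count, in megapixels, for a given pixel pitch."""
    w, h = aspect
    hyp = (w**2 + h**2) ** 0.5
    width_um = diagonal_mm * 1000 * w / hyp    # 3600 um for 4:3
    height_um = diagonal_mm * 1000 * h / hyp   # 2700 um for 4:3
    return (width_um / pixel_pitch_um) * (height_um / pixel_pitch_um) / 1e6

for pitch in (2.2, 1.75, 1.4):
    print(f"{pitch} um pixels -> {megapixels(pitch):.1f} MP")
# -> roughly 2.0, 3.2 and 5.0 MP, matching the figures in the text
```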
The decreased sensitivity results from the smaller pixel's reduced light-gathering ability; moreover, the noise floor for this technology has already been reached. This problem does, however, create an opportunity for a solution.

To give the camera a useful depth of field, an aperture of f/2.8 or smaller is typically used. If an aperture of f/2.0 could be used instead, the signal level available to each pixel would be doubled, because (2.8/2.0)² ≈ 2. By using postcapture, in-camera, on-the-fly digital image-processing techniques, the system depth-of-field performance of a camera with an f/2.0 aperture can be extended to a useful operating range. This is one of the benefits of the Digital Optics technology developed by DxO Labs of Paris.

Traditional camera optical design dictates that the image spot sizes for the RGB color components should be concentric and sized in proportion to the pixel. Lens designers use multiple elements to achieve this goal; for example, they try to overcome the longitudinal chromatic aberration that arises because light of different wavelengths refracts differently through the plastic or glass elements of a lens. The Digital Optics technology instead puts the inherent spreading between the color channels to beneficial use, producing a different level of sharpness for each color channel at a given distance. The lens design deliberately exaggerates the longitudinal chromatic aberration in a known fashion to separate the color channels and increase the effective depth of field (Figure 1).

Figure 1. Each color channel has its own effective depth of field.

The image data captured through such an optical system is then processed using Sharpness Transport Technology. This proprietary, patented technology determines which color channel is the sharpest in a given area of the image and transports that sharpness to the other color channels in the local region, effectively giving every channel the local depth of field inherent in the sharpest one.
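Sharpness Transport Technology is proprietary, and its details are not public. The following is only a toy sketch of the general idea as described above: estimate each channel's high-frequency energy, pick the sharpest channel, and graft its high-frequency detail onto the other channels. For brevity the sketch works globally rather than per local region, uses a crude box blur as its low-pass filter, and all function names are hypothetical.

```python
import numpy as np

def low_pass(ch, k=2):
    """Crude box blur: average over a (2k+1) x (2k+1) neighborhood."""
    p = np.pad(ch, k, mode="edge")
    acc = np.zeros(ch.shape, dtype=float)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            acc += p[dy:dy + ch.shape[0], dx:dx + ch.shape[1]]
    return acc / (2 * k + 1) ** 2

def sharpness_transport(rgb, k=2):
    """Toy EDOF merge (NOT DxO's algorithm): copy the sharpest
    channel's high-frequency detail onto the other channels.
    The real technique operates per local region, not globally."""
    lows = [low_pass(rgb[..., c], k) for c in range(3)]
    highs = [rgb[..., c] - lows[c] for c in range(3)]
    # Sharpness proxy: mean high-frequency energy per channel.
    energy = [np.mean(h ** 2) for h in highs]
    best = int(np.argmax(energy))
    out = np.stack([lows[c] + highs[best] for c in range(3)], axis=-1)
    return out, best
```

On a synthetic image whose green channel carries a sharp edge while red and blue are blurred (mimicking longitudinal chromatic aberration at one focus distance), the merge detects green as the sharpest channel and restores edge contrast in the other two.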
Thus, the effective depth of field of an f/2.0 lens can be extended until the user's perception matches or exceeds that of an f/2.8 lens, completely overcoming in the process the sensitivity reduction seen in moving to smaller pixels. The extended depth of field gives sharpness to both foreground and background objects (Figure 2).

Figure 2. Extended depth of field provides sharp details both in the foreground and in the background.

This example illustrates the new field of possibilities that postcapture digital processing opens. Besides correcting the lens flaws of traditional optics, the processing takes a new step in co-optimizing camera modules at the system level. With this approach, certain carefully controlled lens flaws (longitudinal chromatic aberration, in this example) are purposely introduced during lens design and then leveraged by dedicated digital postprocessing to achieve optical performance unreachable with traditional designs.

Of course, additional postprocessing techniques (including noise-reduction algorithms) can further improve the signal-to-noise performance of sensors with small pixels. Together, these technologies should enable useful performance from pixel sizes below 1.5 μm, allowing 5- or even 8-megapixel camera implementations in mobile phones.

Meet the author

Frédéric Guichard is chief scientist at DxO Labs in Paris; e-mail: fguichard@dxo.com.