Strathclyde University researchers have shown that dynamically controlled LED room lighting enables 3D imaging with consumer-grade digital cameras. In a smart factory setting, the technology could improve surveillance capabilities and give robots an enhanced sense of their surroundings.
The researchers demonstrated the approach using a cellphone camera and LEDs, without the need for complex temporal synchronization. The technology relies on photometric stereo imaging, in which a detector, or camera, is combined with illumination coming from multiple directions. The multidirectional lighting lets the camera record images with different shadowing, from which a computer system can then reconstruct a 3D image.
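Photometric stereo recovers a surface normal at each pixel from the intensities observed under the different light directions. Below is a minimal sketch of that step, assuming a Lambertian surface and known, distant light sources; the direction values and function name are placeholders for illustration, not the study's calibration or code.

```python
import numpy as np

# Unit vectors pointing toward each light source, one row per LED.
# Placeholder directions, not the calibration used in the study.
L = np.array([
    [ 0.5,  0.0, 0.866],
    [-0.5,  0.0, 0.866],
    [ 0.0,  0.5, 0.866],
    [ 0.0, -0.5, 0.866],
])

def estimate_normals(images):
    """Per-pixel surface normals from one grayscale image per light source.

    Assumes a Lambertian surface, so intensity I = albedo * dot(L, n),
    solved for each pixel by linear least squares.
    """
    k, h, w = images.shape                     # (num_lights, height, width)
    I = images.reshape(k, -1)                  # stack pixels as columns
    G, *_ = np.linalg.lstsq(L, I, rcond=None)  # G = albedo * n, shape (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)     # normalize to unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normal field can then be integrated into a depth map, which is the 3D shape the computer system reconstructs.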
In a public area, LEDs could be used for general lighting, visible light communication, and 3D video surveillance. The illustration shows multiple-access LiFi — a wireless communication technology that uses light to transmit data and position between devices — and visible light positioning in a train station. Courtesy of Emma Le Francois.
Photometric stereo imaging typically requires four light sources placed symmetrically around the camera's viewing axis. The researchers instead showed that 3D images can be reconstructed with top-down illumination while the object is imaged from the side, meaning ordinary overhead room lighting could serve as the light source.
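What matters for the reconstruction is only that the light directions are known in a consistent coordinate frame, not that they surround the viewing axis. The short sketch below, with hypothetical ceiling positions and object location, shows how overhead LEDs would yield the direction matrix used in the reconstruction step above.

```python
import numpy as np

# Hypothetical geometry: overhead LED positions and the target location in
# room coordinates (z pointing up, meters). The camera can view from the
# side; the reconstruction only needs the light directions themselves.
led_positions = np.array([
    [-0.3, -0.3, 2.5],
    [ 0.3, -0.3, 2.5],
    [ 0.3,  0.3, 2.5],
    [-0.3,  0.3, 2.5],
])
target = np.array([0.0, 0.0, 0.8])   # placeholder object position

directions = led_positions - target
L = directions / np.linalg.norm(directions, axis=1, keepdims=True)
# These rows would stand in for the placeholder direction matrix in the
# earlier sketch; the recovered normals are then expressed in the same frame.
```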
The researchers developed algorithms to modulate the LEDs with a bespoke binary multiple access format that reduced flicker and removed the need for synchronization. The algorithm allowed the camera to determine which LED generated which image, facilitating the 3D reconstruction. The modulation also carried its own clock signal, so image acquisition could self-synchronize with the LEDs simply by having the camera passively detect that clock.
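The paper's bespoke binary multiple-access format is not reproduced here, but a simple stand-in conveys the idea: each cycle begins with a sync slot in which all LEDs are lit (a noticeably brighter frame that acts as the embedded clock), followed by one slot per LED in which only that LED is on. Cycling quickly keeps the room lighting visually steady while a high-frame-rate camera can still resolve the individual slots. The sketch below is hypothetical and illustrative only.

```python
import numpy as np

NUM_LEDS = 4  # assumption for illustration

def led_slot_states(num_cycles):
    """Yield one (NUM_LEDS,) on/off array per time slot (hypothetical scheme)."""
    for _ in range(num_cycles):
        yield np.ones(NUM_LEDS, dtype=int)   # sync slot: all LEDs on (clock)
        for i in range(NUM_LEDS):
            slot = np.zeros(NUM_LEDS, dtype=int)
            slot[i] = 1                      # only LED i lit this slot
            yield slot
```

In such a scheme, each yielded state would simply be written to the LED driver outputs on every tick; the camera receives no trigger signal and instead watches for the brighter sync frames.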
“We wanted to make photometric stereo imaging more easily deployable by removing the link between the light source and the camera,” said Emma Le Francois, a doctoral student in the research group. “To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera.”
Researchers developed a way to use overhead LED lighting and a smartphone to create 3D images of a small figurine. Courtesy of Emma Le Francois.
Putting the approach into action, the researchers combined their modulation scheme with a photometric stereo setup based on commercially available LEDs controlled by a common Arduino board. A smartphone operating at a high frame rate (960 fps) captured images of the target, a 48-mm-tall figurine 3D printed in a matte material. The matte finish avoided reflective surfaces that might have complicated imaging during testing.
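Continuing the hypothetical slot scheme sketched above, the camera-side step needed to stay in sync is to locate the bright sync frames in the high-frame-rate stream and attribute the frames that follow to their LEDs. The rough sketch below illustrates that sorting; the actual decoding in the paper differs in its details.

```python
import numpy as np

def group_frames_by_led(frame_brightness, num_leds=4):
    """Label captured frames by LED under the hypothetical slot scheme.

    frame_brightness: mean brightness of each video frame in capture order.
    Sync slots (all LEDs on) appear as the brightest frames; each of the
    num_leds frames following a sync frame is attributed to one LED.
    """
    trace = np.asarray(frame_brightness, dtype=float)
    threshold = trace.mean() + trace.std()        # crude sync detection
    frame_to_led = {}
    for t in np.flatnonzero(trace > threshold):
        for i in range(num_leds):
            if t + 1 + i < len(trace):
                frame_to_led[t + 1 + i] = i       # frame index -> LED index
    return frame_to_led
```

The frames grouped per LED would then feed the photometric stereo reconstruction step.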
The researchers achieved a reconstruction error of just 2.6 mm for the figurine when imaged at a distance of 42 cm, which puts the technique on par with other photometric stereo imaging approaches.
They also showed that the method can reconstruct images of a moving object and that it is unaffected by ambient light.
Image reconstruction on an external computer takes a few minutes; to make the new method more practical, the researchers aim to decrease computational time to a matter of seconds by integrating a deep-learning neural network that could learn to reconstruct the shape of the object from the raw image data.
The work, supported by the United Kingdom's QuantIC research program, was published in Optics Express (www.doi.org/10.1364/OE.408658).