
Smart Light Captures Body in Motion

CAREN B. LES, [email protected]

Using solely the visible light around us, a new light-sensing system tracks whole-body human postures unobtrusively and in real time. The LiSense system reconstructs a 3D skeletal model on a computer screen based on the shadows cast by a person moving within a lighted room — no cameras or on-body devices required.

“Light is everywhere, and we are making light very smart,” said professor Xia Zhou, who led the project at Dartmouth College in Hanover, N.H. “Imagine a future where light knows and responds to what we do. We can naturally interact with surrounding smart objects, such as drones and smart appliances, and play games, using purely the light around us.”

The LiSense system could be especially applicable in health and behavioral monitoring, Zhou said. If the light around us continuously monitors how we move and gesture over time, she explained, it might help detect early symptoms of movement-related diseases such as Parkinson’s. Currently, to detect these symptoms, doctors either need to videotape patients or ask them to wear and carry cumbersome devices.

“Also, doctors can use the lights in the hospital to monitor patients’ behaviors over time, without using any cameras that present privacy risks. It’s quite attractive for environments like hospitals and airplanes that are sensitive to electromagnetic interference, because light won’t bring any.”

The LiSense system uses visible light communication (VLC) to reconstruct skeletal movements. VLC encodes data into light intensity changes at a high frequency not perceived by human eyes. It uses energy-efficient LEDs to transmit data inexpensively and securely with virtually unlimited bandwidth, the researchers said. Many devices, such as smartphones, tablets and smart watches, have light sensors that can recover data by monitoring light changes.
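As a rough illustration of the principle (not the LiSense team’s actual modulation scheme), the Python sketch below uses simple on-off keying: the LED’s intensity is toggled once per bit at a rate far above what the eye can perceive, and a light sensor recovers the bits by averaging each bit period and thresholding. The chip rate, oversampling factor and function names are assumptions made for this example.

```python
import numpy as np

# Assumed, illustrative parameters (not taken from the LiSense paper).
CHIP_RATE_HZ = 10_000      # nominal toggle rate, well above flicker perception
SAMPLES_PER_CHIP = 8       # receiver oversampling factor per bit period

def transmit(bits):
    """Map each bit to an LED intensity level (on-off keying)."""
    return np.repeat(np.array(bits, dtype=float), SAMPLES_PER_CHIP)

def receive(light_samples, ambient_level=0.0):
    """Recover bits by averaging each bit period and thresholding."""
    chips = light_samples.reshape(-1, SAMPLES_PER_CHIP).mean(axis=1)
    threshold = ambient_level + (chips.max() - ambient_level) / 2
    return (chips > threshold).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
signal = transmit(bits) + 0.05 * np.random.randn(len(bits) * SAMPLES_PER_CHIP)
assert receive(signal) == bits   # data survives modest sensor noise
```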

Tianxing Li, lead graduate student on the LiSense project team, is shown in the testbed, which includes off-the-shelf LED lights on the ceiling and hundreds of light sensors on the floor. The system separates light rays from different LED lights using visible light communication and reconstructs a human skeleton (shown on a monitor) in real time to track a user’s 3D postures. Courtesy of the DartNets Lab at Dartmouth College.

The light-sensing testbed built by Zhou and her team consists of five off-the-shelf LED lights on the ceiling, hundreds of light sensors on the floor, 29 microcontrollers and a server.


They overcame two main challenges in the project. The first, Zhou said, is that shadows diminish under multiple light sources. Recovering the shadow cast by each individual light is difficult. The team’s solution was to design light beacons enabled by VLC to separate light rays from different light sources and recover the shadow pattern cast by each individual light. They implemented the light beacons by programming the microcontrollers that modulate the LEDs.
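The exact beacon design is detailed in the team’s paper; the sketch below only illustrates, under assumed parameters, one plausible way such separation could work: if each LED were toggled at its own beacon frequency, every floor sensor could estimate each LED’s contribution from the amplitude at that frequency in its sampled signal, and a sharp drop in that contribution would mark the sensor as lying in that light’s shadow. The sampling rate, beacon frequencies and helper names here are illustrative, not taken from the paper.

```python
import numpy as np

FS = 4000                                    # assumed sensor sampling rate, Hz
BEACON_HZ = {"led_1": 200, "led_2": 300,     # one assumed beacon frequency per LED
             "led_3": 400, "led_4": 500, "led_5": 600}

def per_led_intensity(samples):
    """Estimate each LED's contribution at one floor sensor from the
    amplitude of its beacon frequency in the sensor's sampled signal."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    return {led: spectrum[np.argmin(np.abs(freqs - f))]
            for led, f in BEACON_HZ.items()}

def shadow_map(sensor_samples, baseline, led, drop=0.5):
    """Mark a sensor as shadowed for `led` when that LED's component
    falls well below its unoccluded baseline reading."""
    return [per_led_intensity(s)[led] < drop * baseline[i][led]
            for i, s in enumerate(sensor_samples)]
```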

The second hurdle, Zhou explained, is that shadows are only 2D projections of a human body, so it is quite difficult to accurately infer the 3D skeleton from them. The team designed an algorithm that reconstructs human postures from the limited-resolution 2D shadow information collected by the photodiodes embedded in the floor.
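The team’s actual reconstruction algorithm is described in the MobiCom paper; purely as a toy sketch of the general idea, one could parameterize a skeleton joint, cast a ray from each known LED position through it onto the floor, and search for the joint position whose predicted shadow points best match the sensors observed to be shadowed. The LED coordinates, cost function and optimizer choice below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed ceiling-light positions in metres (five LEDs, floor plane at z = 0).
LED_POSITIONS = np.array([[0.5, 0.5, 3.0], [2.5, 0.5, 3.0], [1.5, 1.5, 3.0],
                          [0.5, 2.5, 3.0], [2.5, 2.5, 3.0]])

def project_to_floor(joint, light):
    """Cast a ray from a light through a 3D joint onto the floor plane z = 0."""
    t = light[2] / (light[2] - joint[2])
    return light[:2] + t * (joint[:2] - light[:2])

def shadow_mismatch(joint_flat, observed_shadow_xy):
    """Sum, over LEDs, of the distance from the predicted shadow point
    to the nearest floor sensor observed as shadowed by that LED."""
    joint = joint_flat.reshape(3)
    cost = 0.0
    for light, obs in zip(LED_POSITIONS, observed_shadow_xy):
        pred = project_to_floor(joint, light)
        cost += np.min(np.linalg.norm(obs - pred, axis=1))
    return cost

# observed_shadow_xy: for each LED, an (N, 2) array of shadowed-sensor locations.
# result = minimize(shadow_mismatch, x0=np.array([1.5, 1.5, 1.0]),
#                   args=(observed_shadow_xy,), method="Nelder-Mead")
```

In practice the full problem involves many joints with kinematic constraints and coarse, noisy shadow maps, which is what makes the reconstruction challenging.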

“The LED and VLC physical layer technologies have advanced significantly in recent years,” Zhou said when asked about the status of VLC technology. “Multi-Gb/s data rates using VLC have been shown to be feasible through research prototypes. We have started to see a startup company (pureLiFi) that sells VLC transceivers. So VLC is gaining momentum.”

As for future work, the team would like to minimize the number of light sensors in the system while maintaining sensing performance. They would also like to move beyond low-level gestures so that the system can infer higher-level activities. Other plans include developing applications that explore novel interaction designs.

The team’s paper will be published in MobiCom ’15: Proceedings of the 21st Annual International Conference on Mobile Computing and Networking (doi: 10.1145/2789168.2790110). A demonstration video of the project can be found at https://youtu.be/7wK-zo66GdY.

“Here we are, pushing the envelope further and asking: ‘Can light turn into a ubiquitous sensing medium that tracks what we do and senses how we behave?’” Zhou said.

Published: October 2015
