
Embedded Vision Moves Into the Clinical Realm

A handheld technology opens new possibilities for dermatology, ophthalmology and in vitro diagnostics.

PETER BEHRINGER, BASLER AG

Embedded vision, the combination of miniaturized processing units and board-level cameras, has the potential to create a new class of tools for medical monitoring and diagnostics. The hardware design can make medical devices faster, more compact and more user-friendly. Because the technology replaces the industrial PC with processing boards, it can enable point-of-care testing. It also promises to significantly reduce the total cost of ownership per device.

Miniaturized processing units and board-level cameras are combined to create embedded vision. Courtesy of Basler AG.

The relatively new technology has found applications in industrial automation, defense and autonomous driving. Medical applications so far span fields including dermatology, ophthalmology, in vitro diagnostics, microscopy and laboratory automation.

In dermatology, physicians traditionally examine suspicious skin pigmentation by looking through a dermatoscope that optically magnifies the area of interest. Digital dermatoscopes, which use embedded cameras, make it possible to view the pigmented region and capture a digital image of it.

The digital image helps the physician track small changes over time and perform the examination in software, allowing for diameter measurements as well as analysis of pigment color. This can lead to a more accurate and standardized diagnosis, in which even the smallest changes in color and diameter can be taken into account.
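The two measurements mentioned above, lesion diameter and pigment color, can be derived directly from a segmented image. As a minimal sketch (the function name, the use of an equivalent-circle diameter and all inputs are illustrative assumptions, not a description of any particular dermatoscope's software):

```python
import numpy as np

def lesion_metrics(image, mask, mm_per_pixel):
    """Estimate equivalent diameter and mean color of a segmented lesion.

    image: H x W x 3 RGB array; mask: H x W boolean lesion mask;
    mm_per_pixel: spatial calibration of the dermatoscope optics.
    """
    area_px = mask.sum()
    # Diameter of a circle with the same area as the lesion region.
    diameter_mm = 2.0 * np.sqrt(area_px / np.pi) * mm_per_pixel
    # Mean RGB color over the lesion pixels only.
    mean_rgb = image[mask].mean(axis=0)
    return diameter_mm, mean_rgb
```

Comparing these numbers across visits is what allows the software to flag even small changes in size or color.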

Another big advantage is that the digital image can be used as input for computer-aided analysis algorithms. This is especially interesting given the great technological progress in image-based analysis algorithms in the domain of machine learning, such as the recent development of convolutional neural networks. These networks are capable of detecting tumor-suspicious regions in image data with astonishingly high sensitivity. The algorithms learn on their own the criteria for separating tumor-suspicious regions from nonsuspicious ones. This opens up huge potential for image-guided diagnosis: the software presents the physician with a suspicious region to examine, rather than leaving the physician to search for suspicious areas.

Another great advantage of using a digital image is that it can be shared among dermatologists to get additional opinions. This can be done by using picture archiving and communication systems that allow remote access for multiple physicians to visually examine the same image information simultaneously. Through this sort of telemedicine, a second opinion is available within minutes.

The next step will be to create highly compact and easy-to-use miniaturized digital dermatoscopes that stay at the patient’s home. The hardware advances for embedded vision now allow for the development of such devices.

Another great example of embedded vision devices can be found in ophthalmology. Many vision disorders have their roots in a malfunctioning retina. Retinal cameras allow inspection of this structure by taking an image that shows the physician whether the eye has proper circulation and whether its cells are healthy.

A typical image processing pipeline in medical embedded vision devices contains four key components. Courtesy of Basler AG.

Traditionally, retinal cameras were about the size of a small refrigerator and could not be carried around. Embedded vision makes it possible to shrink a retinal camera by using board-level cameras instead of boxed cameras, and processing boards instead of industrial PCs. This will allow for a much wider distribution of medical care, resulting in better eye care. At the same time, it would dramatically increase device volumes, resulting in higher revenue for manufacturers.

Embedded vision technology also offers advantages for other applications in the medical realm. In pathological microscopy, for example, physicians spend a large amount of time scanning through tissue samples, which may or may not contain pathological cells. Embedded vision will help to completely digitize the histological sample, using built-in cameras inside the microscope. As with dermatology, sophisticated algorithms running on miniaturized processing boards inside the device will then identify suspicious regions and present them to the physician.

For in vitro diagnostics, embedded vision devices can be used to investigate blood cells using digital microscopes. In laboratory automation, processes such as barcode reading or archiving of samples can be done automatically.

In most embedded vision systems that are used in medical devices, a typical image pipeline covers four steps: image acquisition by the image sensor, preprocessing of the sensor data, application-specific image processing and the user output interface.
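The four steps above can be sketched as a chain of stage functions. This is only an illustrative skeleton (all function names and the dark-region "analysis" are hypothetical stand-ins, not an actual diagnostic algorithm):

```python
import numpy as np

def acquire(sensor):                 # 1. image acquisition by the sensor
    return sensor()                  # raw frame delivered by the sensor

def preprocess(raw):                 # 2. preprocessing of the sensor data
    # Stand-in for debayering, pixel correction, etc.: scale to [0, 1].
    return raw.astype(np.float32) / 255.0

def analyze(image):                  # 3. application-specific processing
    # Stand-in for a diagnostic algorithm: flag unusually dark pixels.
    return image < 0.2

def present(image, findings):        # 4. user output interface
    return {"image": image, "suspicious_pixels": int(findings.sum())}

def run_pipeline(sensor):
    raw = acquire(sensor)
    image = preprocess(raw)
    findings = analyze(image)
    return present(image, findings)
```

In a real device, steps 1 and 2 run on the camera, step 3 on the processing board, and step 4 on a display or network interface.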

A typical embedded vision system in medical devices: Thanks to the compact design of the camera and the processing board, modern diagnostic devices have become portable. Courtesy of Basler AG.

The sensor is the first component in the pipeline. There are two types of sensors available on the market: consumer-type sensors and industrial-grade sensors. While the consumer sensors built into smartphones are cheap, industrial-grade sensors are preferred in medical devices because they are durable, deliver reproducible results and are manufactured to higher precision.


Achieving color accuracy

Another reason smartphone cameras are not suitable for vision-based diagnostics is that their preprocessing is a complete black box: the user has no way of knowing what exactly happens to the sensor data. The result is an aesthetically pleasing image whose color tones may deviate substantially from those of the real-world scene.

In many cases, however, the algorithms that calculate diagnostic proposals take the color information directly into consideration. A “true-color” camera is therefore very important for vision-based diagnostics. Industrial cameras typically have high color accuracy, which means there is very little color error between a color in the real world and its digital projection in the image.
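Color error of this kind can be quantified by imaging a calibration chart and comparing measured patch colors against their known reference values. A simplified sketch (the function name is hypothetical, and the plain Euclidean RGB distance used here is a stand-in for perceptual metrics such as CIE Delta E, which are computed in the Lab color space):

```python
import numpy as np

def mean_color_error(reference_rgb, measured_rgb):
    """Mean Euclidean distance between reference and measured patch colors.

    reference_rgb, measured_rgb: N x 3 lists/arrays of patch colors.
    A lower value indicates a more color-accurate camera.
    """
    ref = np.asarray(reference_rgb, dtype=np.float64)
    mea = np.asarray(measured_rgb, dtype=np.float64)
    # Per-patch color distance, averaged over all chart patches.
    return float(np.linalg.norm(ref - mea, axis=1).mean())
```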

Raw data from the sensor is not yet an image. To obtain a clear image, image preprocessing algorithms such as debayering, pixel correction, sharpening filters and color space conversion, among others, are applied to the data the sensor delivers.
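To make the debayering step concrete, here is a deliberately naive sketch for an RGGB Bayer mosaic (the function name and the crude one-pixel-per-block reconstruction are illustrative assumptions; production cameras use far more sophisticated interpolation):

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive debayering of an RGGB Bayer mosaic.

    raw: H x W array (H, W even); each 2x2 block is laid out as
        R G
        G B
    Returns an (H/2) x (W/2) x 3 RGB image, one pixel per block.
    """
    r = raw[0::2, 0::2]                                       # red sites
    g = (raw[0::2, 1::2].astype(np.float64)
         + raw[1::2, 0::2]) / 2.0                             # average of the two green sites
    b = raw[1::2, 1::2]                                       # blue sites
    return np.stack([r, g, b], axis=-1)
```

The key point for system design is that this kind of per-pixel arithmetic is exactly what a board-level camera can perform internally before handing images to the processing board.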

In the case of a board-level camera such as the Basler dart, preprocessing occurs inside the camera. This is a major benefit for device manufacturers. The system integrator receives already-corrected images instead of raw data. And since preprocessing is computation-intensive, having it performed by the camera’s internal processing units preserves the highly limited resources of the processing board for the system integrator’s application-specific algorithms.

After the raw sensor data has been preprocessed, the next step is application-specific image processing. This type of processing varies depending on the application and use cases.

In dermatology, processing is typically needed to deliver the final diagnostic result to the physician. In a digital dermatoscope, for example, processing focuses on proposing a diagnosis by taking into account images from previous examinations, which may reveal color changes in the skin region or changes in the pigment’s diameter.

Application-specific image processing for dermatology also involves the automatic detection of abnormalities found in the image data, using machine learning algorithms such as Deep Learning. The processor is chosen depending on what type of mathematical operation is to be performed. For the deployment of Deep Learning algorithms, for example, FPGA-based systems on chips (SoCs) are preferred, because many mathematical operations, called convolutions, must be performed. FPGAs can run those tasks simultaneously, while a CPU can only work on tasks one by one.
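The convolution workload mentioned above is easy to see in code. A minimal sketch (not a network, just the core operation; as in most neural-network frameworks, the kernel is not flipped, so this is technically a cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution, valid mode, no kernel flipping.

    Each output pixel depends only on its local input window, so all
    outputs can be computed independently and in parallel. This is why
    FPGAs, with many parallel multiply-accumulate units, suit Deep
    Learning inference, while a CPU must loop over outputs one by one.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out
```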

For some applications in ophthalmology, the needs are different than for applications that use Deep Learning. If a device measures the refractive index of the eye to determine if a patient needs glasses, a CPU-based processing board is preferred.

Application-specific processing is typically the core element of added value in a customer’s device. In a large number of cases, those application-specific algorithms are built using image processing libraries such as OpenCV or OpenVX, which offer hundreds of image processing algorithms.

The last step of the imaging pipeline is the output interface. Typically, the output is presented to the user via a display that shows the collected images, as well as the calculated data, which is then used to make a diagnosis. Most modern embedded vision devices offer the possibility of directly uploading encrypted patient data into the cloud or a hospital patient management system. Other outputs are docking stations that connect the device to local PC-based systems, or HDMI outputs for direct monitor connections.

Embedded vision offers great potential for the development of a whole new generation of miniaturized products for rapid diagnostics. However, the development of embedded vision devices remains more challenging than in the traditional desktop PC world. Developing software on embedded computing devices is more difficult and error-prone than classical PC-based development. Also, since hardware development is typically much more application-specific, hardware components must be customized and cannot be bought off the shelf. This ultimately leads to a lower price per unit, but confronts the manufacturer with a higher initial investment. However, machine vision companies are continuously working to make the combination of board-level cameras and processing boards more easily accessible.

Meet the author

Peter Behringer has been the product market manager responsible for Basler AG’s embedded cameras in the medical and life sciences markets since 2015. He has published several academic papers focusing on medical image processing; email: [email protected].



The Evolution of Embedded Vision

Two key technologies, miniaturized processing units and board-level cameras, emerged in recent years to enable the creation of embedded vision.

The development of powerful miniaturized processing units was driven by the huge success of portable consumer devices such as tablets and smartphones. As a result, algorithms that used to require top-performing industrial PCs can now run on miniaturized processing boards. Modular systems on chips (SoCs) and systems on modules (SoMs) now provide great flexibility for easy processor integration.

At the same time that miniaturized processing units emerged, vision technology accelerated quickly. Industrial cameras that used to be bulky and expensive are now available as board-level cameras, tailored to be integrated into small and compact devices. Today’s smaller sensors are still sensitive enough to achieve high image quality and medical-grade reliability. Camera manufacturers also used the advances of processing technology on the camera side to create more powerful single-board cameras. Last but not least, interface technology also improved, so that today there are interfaces on the market that are tailored for use in combination with an embedded processing board.


Published: August 2017