
Embedded Vision Is Streamlined for Application on a Massive Scale

Although often limited by space and power constraints, embedded vision’s design flexibility and scalability offer pathways to massive growth.

HANK HOGAN, CONTRIBUTING EDITOR

Compact, efficient, and highly application-specific, embedded vision technology increasingly offers performance and price points that would have been impossible to achieve only a few years ago. The steady advancement in capabilities has been driven by improvements in all component technologies, including sensors, optics, and processors.



Courtesy of iStock.com/Andrey Suslov.

But the most revolutionary advancements have unfolded in the last category — processors — helping to multiply many of embedded vision’s more recent innovations. These include intelligent doorbells on the consumer side and robots that can evaluate the health of vineyards or map structures in three dimensions on the industrial end. This processor-driven progress could soon be joined by sensor enhancements, such as inexpensive depth-perception capabilities, detection across a wider spectrum, and the incorporation of nonphotonic data. Together with software advancements, these component-level improvements promise to expand the utility of task-specific vision systems even further.

Compact connections

Embedded vision is an all-in-one, application-specific, lean, low-cost, and high-performing computer vision solution, according to Tim Coggins, Basler’s head of sales and field application engineering modules for the Americas. Based in Ahrensburg, Germany, the camera supplier acts more as a systems integrator in the embedded vision arena, often working backward from the demands of an application, to tailor software, optics, sensor, and processing components into a solution.

One approach that Basler frequently employs is to position the sensor as close as possible to the processor, Coggins said. This improves performance by reducing latency and other factors that affect task execution. Close positioning can also help to minimize costs by eliminating interfaces, cabling, and other components.

This level of system design highlights one way in which embedded vision is unlike conventional machine vision. “There is no standard, plug-and-play solution for embedded,” Coggins said. Another distinguishing characteristic of embedded vision, he said, is that system designers such as Basler often begin with the processor component. This runs counter to conventional vision solutions that specify the sensor component first.

The stringent focus on application drives component selection as much as it drives system-level design. Embedded vision systems that target industrial environments may need to ensure that their processors will perform reliably and be available for a decade or more, whereas the processors found in phones and other consumer products may offer at best a five-year lifetime.

Basler works with chipmakers to ensure that the processors it specifies for industrial applications will deliver a suitable product life cycle.



Embedded 3D imagers can enable a drone (highlighted on right) to avoid obstacles such as trees, even as it maps other structures. While advancements in processing power and flexibility are driving adoption of embedded systems, sensor developments such as 3D imaging are also cultivating new applications. Courtesy of Skydio.

The extensive customization imposed by many embedded vision systems requires committing engineering resources, along with money and time, to developing a finished product. These costs can be prohibitive for some low-volume applications. For high-volume products, however, embedded technology’s scalability enables ramping up production to levels far beyond those that are achievable with traditional machine vision. A new smartphone model, for instance, may represent tens of millions of units, each incorporating a single embedded vision design.

A prototype of a vineyard health-monitoring robot developed for the European Union’s VineScout viticulture project. The limited footprint and power budget allotted to embedded vision systems require design and programming to be highly application-specific. Courtesy of iStock.com/Zephyr18.

Coggins predicted that the costs associated with customization will diminish in the future, with nonrecurring engineering costs falling and time to market getting shorter. This trend will unfold as manufacturers develop reusable modules and hammer out standards for embedded vision systems, both of which will help to lower the barriers to implementing these solutions. “In the years to come, it will become easier and easier and easier to get there” to finished systems, Coggins said.

Advancements in image processors have not only fueled adoption of embedded vision systems in traditional markets, they’ve also ushered in entirely new applications such as smart doorbells. Courtesy of Sundance Multiprocessor Technology.

Designer systems on a chip

The application-specific nature of embedded system design puts a particular focus on processors, said Susan Cheng, marketing manager for industrial vision at Xilinx. Consider a high-speed inspection being performed on a part as it moves down a manufacturing line. In this situation, total system response time — its latency — is critical to success.


“If latency is key, then for sure the processing and the algorithms will be key,” Cheng said. She added that lower latency translates directly into increased throughput.
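Cheng’s latency-throughput link is simple arithmetic: in a serialized capture-process-act pipeline, the per-part latency caps the line rate. The following is a minimal Python sketch; the stage timings are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: for a serialized inspection pipeline, end-to-end
# latency per part caps the throughput of the line. All timings are
# illustrative assumptions.

def max_parts_per_second(capture_ms: float, process_ms: float,
                         actuate_ms: float) -> float:
    """Upper bound on line rate when stages run back to back."""
    latency_s = (capture_ms + process_ms + actuate_ms) / 1000.0
    return 1.0 / latency_s

# Halving processing latency nearly doubles throughput when
# processing dominates the pipeline.
print(max_parts_per_second(2.0, 20.0, 3.0))   # ~40 parts/s
print(max_parts_per_second(2.0, 10.0, 3.0))   # ~67 parts/s
```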

To address such demands, Cheng said Xilinx has developed system-on-chip (SoC) implementations with programmable computing cores. Some, for instance, have multiple Arm cores that ensure any software designed to run on the processors should work with minimal or no changes.

An important benefit of the SoC architecture is that it brings together previously separate components on a single chip. This has reduced the distance between components a thousandfold, from millimeters to microns.

Edge AI and Vision Alliance conducted a survey of vision system developers in 2020 that showed over half of respondents had already added or planned to add depth information to their solutions. Such systems, however, may challenge processors by expanding sensor input. Courtesy of Edge AI and Vision Alliance.

SoC architectures enable the graphics subsystem to sit exceedingly close to the memory, the general-purpose CPU, and other subsystems, all of which improves performance. Such proximity directly lowers latency, allowing faster communication between processor subsystems while also trimming power budgets.

Xilinx’s SoC implementations are based on a technology that is similar to the company’s field-programmable gate array (FPGA) technology, which has long been used in cameras and frame grabbers. Both the FPGA and SoC technology can be reprogrammed, if needed, which makes it possible to upgrade embedded vision systems to enhance their capabilities and performance without necessarily having to replace hardware.

The flexible programmability of these processing platforms translates to embedded vision systems and their end applications. Cheng said industrial users often want to capture an image that is as close to reality as possible, rather than one that looks good to the eye, as is desired for a consumer-focused approach. “You need to really see if the edge of that screw is really jagged and it needs to be rejected. You don’t need it nicely smoothed so that it looks pretty,” she said.
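As a concrete illustration of the screw example, the Python sketch below scores edge jaggedness by comparing a part’s contour length against that of its convex hull. The metric, threshold, and file name are hypothetical, not a method from Basler or Xilinx; note that pre-smoothing the image would hide exactly the defect being measured.

```python
# Minimal sketch of a "jagged edge" check using OpenCV. The roughness
# metric and pass/fail threshold are illustrative assumptions.
import cv2
import numpy as np

def edge_roughness(gray: np.ndarray) -> float:
    """Ratio of contour perimeter to convex-hull perimeter.
    A smooth outline scores near 1; jagged edges score higher."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(largest)
    return cv2.arcLength(largest, True) / cv2.arcLength(hull, True)

gray = cv2.imread("screw.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
if edge_roughness(gray) > 1.05:   # assumed rejection threshold
    print("reject: edge too jagged")
```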

Expanding the sensors

As new chip architectures continue to improve processor performance and efficiency, the capabilities of embedded vision systems have evolved, said Jeff Bier, president of engineering consulting firm BDTI and founder of Edge AI and Vision Alliance.

Five years ago, a demanding vision task may have required a $1000 graphics processing card, water cooling, and an expensive workstation to run its algorithms within an acceptable time frame. These requirements prevented embedded vision from becoming a practical and deployable solution, the two terms that Bier uses to define the technology.

“Now I can put it in a consumer product,” he said. “It’s a $10 chip that consumes 1 W and can go into everybody’s doorbell cam or whatever.”

The processor revolution is being further accelerated by the increasing use of neural nets, which have improved the ability to extract useful information from the data that a sensor captures.
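To make the neural-net point concrete, the sketch below runs a compact classifier of the kind that fits in an embedded power budget. It assumes PyTorch and torchvision are installed; the model choice and input file are illustrative and not tied to any product mentioned in the article.

```python
# Minimal sketch of running a compact neural net of the sort used in
# power-constrained devices. Model and input are stand-ins.
import torch
from torchvision import models, transforms
from PIL import Image

# MobileNetV2 is a common choice for embedded inference.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg").convert("RGB")  # hypothetical captured frame
batch = preprocess(frame).unsqueeze(0)          # add batch dimension

with torch.no_grad():                           # inference only
    scores = model(batch)
print(f"predicted class index: {scores.argmax(dim=1).item()}")
```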

Sensors are experiencing their own technological changes, Bier said, albeit not as dramatic as the changes taking place on the processor side. Edge AI and Vision Alliance conducted a survey of vision system developers in 2020 that showed that over half of respondents had already added or planned to add depth information to their solutions. Imaging in three dimensions, for instance, can help autonomous systems such as drones to maneuver around objects and collect critical information about them. Such 3D capabilities, however, will increase the sensor input to processors.
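The geometry behind such 3D capabilities is straightforward: with known camera intrinsics, every depth pixel back-projects to a point in space via the pinhole camera model. A minimal Python sketch follows, with assumed intrinsics and a stand-in depth map.

```python
# Minimal sketch of converting a depth map into 3D points, the kind of
# computation behind obstacle avoidance. Intrinsics and the clearance
# threshold are assumed values.
import numpy as np

fx, fy = 525.0, 525.0        # assumed focal lengths in pixels
cx, cy = 319.5, 239.5        # assumed principal point
depth = np.random.uniform(0.5, 10.0, (480, 640))  # stand-in depth map (m)

# Back-project each pixel (u, v) with depth Z to camera coordinates.
v, u = np.mgrid[0:480, 0:640]
X = (u - cx) * depth / fx
Y = (v - cy) * depth / fy
points = np.stack([X, Y, depth], axis=-1)

# Crude "obstacle ahead" test: any point in the central window
# closer than 2 m triggers an avoidance maneuver.
window = points[200:280, 280:360]
if (window[..., 2] < 2.0).any():
    print("obstacle within 2 m: replan path")
```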

Other sensor advancements on the verge of being deployed include inexpensive multi- or hyperspectral imagers. The addition of data from beyond the visible spectrum can reveal information about chemical composition or other material properties of objects and can be valuable when evaluating the characteristics or mechanical properties of a manufactured component.
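One simple example of what the extra bands buy is the normalized difference vegetation index (NDVI), a standard health measure for crops such as vines, computed from red and near-infrared reflectance. A minimal sketch with stand-in band data:

```python
# Minimal sketch of one computation a multispectral imager enables:
# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects
# strongly in the near infrared, pushing NDVI toward +1.
import numpy as np

red = np.random.rand(480, 640).astype(np.float32)  # red-band stand-in
nir = np.random.rand(480, 640).astype(np.float32)  # NIR-band stand-in

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
print(f"mean NDVI: {ndvi.mean():.2f}")
```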

Still other advancements offer the ability to integrate nonphotonic information based on sound or vibration. Such data can be an important indicator of the health of a machine. In some applications, combining image data with sound or vibration data could significantly enhance the usefulness of a vision-based system.
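A sketch of what such fusion might look like in practice follows: blending a visual anomaly score with vibration energy into a single machine-health score. The weights, normalization, and alert threshold are assumptions for illustration only.

```python
# Minimal sketch of fusing image and vibration cues for machine-health
# monitoring. Scoring, weights, and threshold are illustrative.
import numpy as np

def vibration_rms(samples: np.ndarray) -> float:
    """Root-mean-square amplitude of an accelerometer trace."""
    return float(np.sqrt(np.mean(samples ** 2)))

def fused_health_score(image_anomaly: float, vib_samples: np.ndarray,
                       w_img: float = 0.6, w_vib: float = 0.4) -> float:
    """Weighted blend of a visual anomaly score (0..1) and
    vibration energy normalized to an assumed 1 g full scale."""
    vib = min(vibration_rms(vib_samples), 1.0)
    return w_img * image_anomaly + w_vib * vib

vib = np.random.normal(0.0, 0.2, 1024)   # stand-in accelerometer data
score = fused_health_score(image_anomaly=0.3, vib_samples=vib)
if score > 0.5:                          # assumed alert threshold
    print("flag machine for inspection")
```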

Since embedded vision systems are often tailored for a particular application, Bier said, they are like a dragster: designed for a specific task, such as getting from point A to point B in a straight line very quickly. A minivan, by contrast, is a better general-purpose solution for getting around but a poor racer. To extend the analogy, the programmability of embedded vision systems offers some of that general-purpose flexibility alongside dragster-level performance on a specific job.

Which embedded vision trends should people keep an eye on, according to Bier? “This space is changing really fast. So, be careful about deciding that things are impossible or impractical.”

Published: January 2021
