Vision Spectra Preview for Spring 2025

Here is your first look at the editorial content for the upcoming Spring issue of Vision Spectra.

Jan. 8, 2025

Embedded Vision

As deep learning capabilities integrate into a range of technology areas, engineers and designers must determine how best to harness the potential of AI/DL for their systems, starting in the design and fabrication stages. Camera designers, as well as vision system integrators, must develop imagers and imaging systems capable of achieving optimal performance without compromising or restricting the AI/DL element. At the same time, cameras must support the AI element: Deep learning applications cannot achieve peak functionality in poor lighting settings, or as part of a suboptimally designed system, and require high-quality images for training and labeling via a data set. This article explores the considerations that camera designers and end users give to system design in the budding AI age, including how designers are adapting camera designs to incorporate or pair with an AI/DL element; the relationship between AI/DL capabilities in image processing and the design of a camera; and the trade-off between the quality of camera optics and the sophistication of the AI element.

Key Technologies: 3D imaging, cameras, machine vision systems, camera optics, AI/DL, camera sensors

Machine Vision and the Smart Factory

Deep learning vision inspection faces numerous challenges when applied to real-world manufacturing environments. Among these, the most critical is the lack of defect data. Deep learning operates on a paradigm of case-based learning, where the model acquires the ability to distinguish between defects and normal conditions by analyzing a wide variety of examples, encompassing both defective and normal images. Without sufficient data for training, creating an effective deep learning model becomes inherently difficult.
This article delves into how a generative adversarial network (GAN) can be leveraged to address this issue by generating synthetic defect data. Industrial generative AI models must satisfy unique requirements distinct from general-purpose GANs due to the specific demands of manufacturing applications: 1) The model should create defects that are realistic and closely resemble actual ones (maintaining the background while generating the defect); 2) it should produce diverse defect types; and 3) it must generate realistic defects even with limited actual defect images.
The article presents a GAN model that meets these criteria and explains how such a model overcomes the shortage of defect data in manufacturing, a key barrier to adopting deep learning-based vision inspection.
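The adversarial idea behind this data-augmentation approach can be illustrated with a toy, pure-NumPy GAN. This is only a minimal sketch, not the industrial model the article discusses: real systems generate image patches with deep convolutional networks, while here scalar "defect intensities" stand in for images, and both networks are reduced to single linear units so the gradients can be written by hand.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for scarce defect data: scalar "defect intensities" ~ N(2, 0.5).
# In a real inspection setting these would be defect image patches.
real = rng.normal(2.0, 0.5, size=500)

# Linear generator G(z) = w*z + c and logistic discriminator D(x) = sigmoid(a*x + b).
w, c = 0.1, 0.0          # generator parameters
a, b = 0.1, 0.0          # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x = rng.choice(real, size=32)
    z = rng.normal(size=32)
    g = w * z + c                          # fake samples from the generator
    d_real = sigmoid(a * x + b)
    d_fake = sigmoid(a * g + b)
    grad_a = np.mean((1 - d_real) * x) - np.mean(d_fake * g)
    grad_b = np.mean(1 - d_real) - np.mean(d_fake)
    a += lr * grad_a                       # gradient ascent on D's objective
    b += lr * grad_b

    # --- Generator update (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(size=32)
    g = w * z + c
    d_fake = sigmoid(a * g + b)
    grad_g = (1 - d_fake) * a              # d/dg of log D(g), per sample
    w += lr * np.mean(grad_g * z)
    c += lr * np.mean(grad_g)

# Synthetic "defects" that could augment the scarce real set.
synthetic = w * rng.normal(size=200) + c
print(f"real mean {real.mean():.2f}, synthetic mean {synthetic.mean():.2f}")
```

The industrial criteria above (background preservation, defect diversity, few-shot realism) require far more machinery, such as conditioning and reconstruction losses, but the same two-player training loop sits underneath.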

Key Technologies: Machine vision, deep learning, AI, GAN

Machine Vision and Semiconductor Manufacturing

In this article, Klaus Schrenker, Business Development Manager at MVTec Software GmbH, outlines how machine vision software is advancing many production steps in semiconductor manufacturing and the added value it brings. In fact, machine vision is what enables numerous inspection and alignment processes required in high-precision semiconductor production to be carried out automatically, accurately, and at high speed. Robust machine vision software is used to implement applications such as defect detection, measurement, and matching and alignment, which are particularly beneficial for many process steps. The article is enriched with plenty of practical examples from front-end as well as back-end production, including corresponding images. To name just a few:
1) Identifying microscopic cracks, scratches, and particle contamination on wafer surfaces, even in difficult lighting conditions; 2) measuring wafer bumps in 3D at the µm scale; 3) inspecting bonds during wire bonding processes.

Key Technologies: Machine vision, semiconductor production, alignment software


Lighting & LEDs

Advances in LEDs are improving their ability to meet machine vision lighting needs. LEDs are available in more wavelengths, with offerings running from the UV to the IR. LEDs are also more efficient, with more light out for every watt in. The latest LEDs are roughly 50 percent more efficient than previous generations and can also be driven at higher currents.
However, efficiency depends on wavelength. Currently, a blue LED emitting between 400 and 480 nm, for instance, is perhaps an order of magnitude more efficient than one that emits in the UV-C band, say from 260 to 280 nm. Today's LEDs in the SWIR, from 1000 out to 1700 nm, also do not perform as well as LEDs in the visible.
LEDs emit light because they contain gallium nitride (GaN), indium gallium arsenide (InGaAs), indium phosphide (InP), or another compound semiconductor. The semiconductor composition dictates the LED output wavelength, although the use of a phosphor in the optical path can shift and broaden the emission spectrum. By devising new semiconductor compound recipes, researchers are developing the basis for new LED emission wavelengths.
In addition to expanding the available wavelengths, another innovation trend is toward shorter pulse duration. Having illumination on only when needed cuts down on waste heat, reducing cooling requirements. Pulsing LEDs and reducing their duty cycle can extend their life, increasing the uptime of a machine vision system.
What’s more, shorter-pulse LEDs can improve machine vision performance. Parts going by on a conveyor belt, for instance, only need to be illuminated for an instant. During that brief time when the LED is on, though, having as much light as possible is often an advantage. A shorter pulse width, compared to a longer one or continuous operation, enables a more intense burst of light because the LED can briefly be overdriven at a current well above its continuous rating without exceeding its thermal limits.
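The pulse-width trade-off can be sketched with a back-of-the-envelope calculation. This is a simplified average-power model with an assumed overdrive cap; actual strobe limits come from an LED datasheet's pulsed-operation curves, not this formula.

```python
# Simplified model: keep average dissipation at or below the continuous
# rating, so permissible pulse current scales roughly with 1/duty cycle,
# up to an absolute overdrive cap. The 10x default cap is an assumption
# for illustration; real limits are device-specific.

def max_pulse_current(i_continuous_a: float, pulse_us: float, period_us: float,
                      overdrive_cap: float = 10.0) -> float:
    """Estimate allowable strobe current for a pulsed LED.

    i_continuous_a: rated continuous drive current (A)
    pulse_us / period_us: pulse width and repetition period (microseconds)
    overdrive_cap: multiple of the continuous rating never to exceed
    """
    duty = pulse_us / period_us
    return min(i_continuous_a / duty, overdrive_cap * i_continuous_a)

# A 1 A LED strobed for 100 us every 10 ms (1% duty cycle) hits the
# assumed 10x cap, while a 50% duty cycle only allows about 2x:
print(max_pulse_current(1.0, 100, 10_000))    # -> 10.0
print(max_pulse_current(1.0, 5_000, 10_000))  # -> 2.0
```

This is why the shorter pulse yields the brighter instant: the lower the duty cycle, the more headroom there is to overdrive during the exposure.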
Today, with chip-scale packaging (CSP), the LED die is almost the same size as the body of the LED package, so a larger portion of the surface generates light. This development increases the amount of light available for machine vision applications. The reduction in size of chip and package also makes it easier to fit a lighting solution into a confined space – a requirement in many machine vision applications deployed in tight confines on a factory floor.

Key Technologies: LEDs, machine vision, SWIR, chip scale packages, computational imaging


Hyperspectral Imaging

Transforming Wafer Inspection

DIVE Imaging Systems builds hyperspectral vision systems, integrating hardware, software, and comprehensive solutions for industrial inspection tasks, with a primary focus on performance surfaces and thin-film applications. These systems address the need for meticulous inspection and quality control in thin-layer application processes: any deviation from specifications can lead to malfunctions, making accurate assessment crucial. DIVE's technology offers a comprehensive evaluation of surface characteristics, particularly beneficial in semiconductor manufacturing. Beyond semiconductors, this technology caters to a wide range of industries, including electronics production, glass or foil coating for optics, encapsulation, and cleanliness of bonding surfaces.

Key Technologies: Hyperspectral imaging, semiconductors, defect inspection, cameras, AI

Machine Vision for Instant Noodles

Machine vision technology is revolutionizing the instant noodle industry by ensuring high-quality standards and efficiency in production. Utilizing advanced imaging systems, machine vision cameras can detect defects in instant noodles in mere milliseconds. This inspection process identifies issues such as overcooked starch, debris, and large holes, achieving an impressive 99% accuracy rate. Given that over 100 billion servings of instant noodles are consumed annually worldwide, this technology is pivotal in maintaining the consistency and safety of products from leading manufacturers in China, Japan, Korea, and Taiwan. The application focuses on inspecting four key defects before flavors are added and final packaging occurs. It involves inspecting both the top and bottom of each serving of noodles, with dedicated camera setups and lighting to ensure comprehensive coverage. More critically, a hybrid approach combining multiple AI inferences with conventional vision algorithms can be used to run multiple threads in parallel. A robust vision solution is required to tackle complex inspections like this one in real industrial settings. With an inspection speed of 40 to 50 ms for each side of the instant noodle pack, the system offers significant potential for higher throughput, ensuring that instant noodles meet rigorous quality standards.
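The cited per-side timing translates directly into line throughput. The sketch below uses the article's 40-50 ms figure; the assumption that the two sides are inspected sequentially at a single station is ours for illustration, since the dedicated top and bottom camera setups could equally run concurrently.

```python
# Back-of-the-envelope throughput from a per-side inspection time.
# Figures 40-50 ms come from the article; station count and the
# sequential-sides model are illustrative assumptions.

def packs_per_minute(ms_per_side: float, sides: int = 2, stations: int = 1) -> float:
    """Packs inspected per minute if each pack needs `sides` inspections
    of `ms_per_side` each, with `stations` working in parallel."""
    return 60_000.0 / (ms_per_side * sides) * stations

print(packs_per_minute(50))               # worst case, one station -> 600.0
print(packs_per_minute(40, stations=2))   # best case, two stations -> 1500.0
```

Even under the conservative sequential model, a single station sustains hundreds of packs per minute, which is where the "significant potential for higher throughput" comes from.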

Key Technologies: Machine vision, cameras, defect inspection, AI, quality control
