About This Webinar
Two critical elements in the ever-growing deployment of intelligent camera systems with machine learning AI capabilities are data quality and data quantity. The sensor that generates this data is the main contributor to both. Event-based Vision Sensors offer significant benefits in environmental robustness and greatly reduced data bandwidth compared with the predominantly CMOS imager-based systems in use today.
These systems include high-speed inspection cameras that must deliver fast motion capture, robotic grinding or welding systems that need high-dynamic-range scene capture, and smart edge IoT cameras that require speed and accuracy as well as robust object tracking capabilities.
But for the most part, such systems have relied on increasingly over-stretched frame-based vision. This method struggles with many important challenges, such as capturing fast-moving objects, operating robustly in low-light and high-dynamic-range scenes, and functioning in environments where compute and power resources are limited.
Event-Based Vision introduces a new approach derived from academic study of how the human eye and brain operate: independent receptors collect only the essential visual information, which the brain processes rapidly so the body can take quick action to avoid danger or threats.
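To make the principle concrete, here is a minimal illustrative sketch in Python (not Prophesee's actual pipeline; the function name, threshold value, and event format are assumptions for illustration). Each pixel acts as an independent receptor that fires a signed event only when its log intensity changes enough, so a static scene produces no data at all:

```python
import numpy as np

# Assumed contrast threshold: the log-intensity change a pixel must see
# before it fires an event (real sensors expose this as a tunable bias).
CONTRAST_THRESHOLD = 0.15

def frames_to_events(frames, timestamps_us, threshold=CONTRAST_THRESHOLD):
    """Simulate per-pixel event generation from a stack of grayscale frames.

    Returns a list of (t_us, x, y, polarity) tuples. Each pixel remembers
    the log intensity at its last event and fires +1/-1 when the change
    since then exceeds the threshold -- there is no global frame clock.
    """
    log_frames = np.log1p(frames.astype(np.float64))
    reference = log_frames[0].copy()       # per-pixel memory of last event level
    events = []
    for frame, t in zip(log_frames[1:], timestamps_us[1:]):
        diff = frame - reference
        fired = np.abs(diff) >= threshold  # only changed pixels produce data
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
        reference[fired] = frame[fired]    # reset memory where events fired
    return events
```

Because unchanged pixels stay silent, the event stream scales with scene activity rather than resolution times frame rate, which is where the bandwidth reduction described below comes from.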
In recent years, event cameras that leverage these neuromorphic techniques have gained a strong foothold in computer/machine vision applications. With up to 1000x less data generated, 120 dB to 140 dB dynamic range, and microsecond latency and time resolution (equivalent to over 10,000 images per second), event-based vision offers vast benefits in areas such as industrial automation, robotics, security and surveillance, mobile, IoT, and AR/VR.
Key Takeaways:
- Introduction to the concept of event-based vision and how it compares to traditional frame-based methods.
- Understanding of the key benefits of event-based vision in terms of power, performance, dynamic range, and integration with other data acquisition technologies.
- Overview of development tools and methods that can be used to integrate event-based vision into machine vision (MV) systems.
- Examples of common use cases that leverage the advantages of event-based vision in industrial applications.
*** This presentation premiered during the 2024 Vision Spectra Conference. For more information on Photonics Media conferences and summits, visit events.photonics.com
About the Presenter
Gareth Powell is the product marketing director at Prophesee. An industry veteran with more than 35 years of experience in semiconductors, he has specialized in CMOS imaging since the acquisition of VLSI Vision (an early pioneer in CMOS sensing technology) by ST Micro, first as an applications engineer and later as a technical marketing/business manager. Powell joined (Teledyne-)e2v in 2006 as strategic marketing manager, tasked with building a CMOS imaging business from the ground up targeting professional/industrial markets. In early 2022, he joined Prophesee as product marketing director to help bring its pioneering event-based sensing technology and products to market. After graduating with a degree in electrical and electronic engineering from Swansea University in the mid-80s, Powell relocated from the UK to France to join ST Micro in Grenoble, where he has remained ever since with his wife and three children. He is also passionate about music and is a performing musician and recording studio owner in his spare time.