
Photonic Chip Enables 160 TOPS/W Artificial General Intelligence

Researchers from Tsinghua University have reported the development of a photonic AI chiplet, called “Taichi,” that delivers 160 TOPS/W (tera-operations per second per watt) energy efficiency for artificial general intelligence (AGI) workloads.

Developments in the field of AGI impose strict energy and area efficiency requirements on next-generation computing. Poised to break the plateauing of Moore’s Law, integrated photonic neural networks have shown the potential to achieve superior processing speeds and high energy efficiency. However, they have suffered from severely limited computing capability and scalability, so that only simple tasks and shallow models have been realized experimentally.

The team at Tsinghua University developed the large-scale chip along with a distributed optical computing architecture, delivering on-chip computing capability at the scale of billions of neurons with 160 TOPS/W energy efficiency. The chip not only exploits the high parallelism and high connectivity of wave optics to compute at very high density, but also employs a general, iterative encoding-embedding-decoding photonic computing scheme that effectively scales the optical neural network to the billion-neuron level.
The integrated large-scale interference-diffraction-hybrid photonic chiplet developed by Tsinghua University researchers could pave the way for viable photonic computing and applications in artificial intelligence. Courtesy of Tsinghua University.

For the first time, the researchers said, Taichi experimentally realizes large-scale on-chip optical neural networks for thousand-category-level classification and artificial intelligence-generated content (AIGC) tasks, with improvements of up to two to three orders of magnitude in area efficiency and energy efficiency compared with current AI chips.


The team proposed a universal and robust distributed computing protocol for complex AGI tasks. Instead of going deeper, as in electronic computing, the Taichi architecture goes broader to expand throughput and scale, the researchers said. A binary encoding protocol divides challenging computing tasks and large network models into sub-problems and sub-models that can be distributed and deployed on photonic chiplets. This atomic divide-and-conquer operation enables large-scale tasks to be solved adaptively at flexible scales, achieving on-chip networks with up to 10 billion optical neurons.
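
To make the divide-and-conquer idea concrete, here is a minimal sketch in Python of how a large classification task could be partitioned across sub-models and the partial results merged into one prediction. The names, sizes, and random linear layers below are illustrative assumptions standing in for photonic sub-networks; they are not the team's implementation or its binary encoding scheme.

```python
import numpy as np

# Illustrative sketch only: random linear layers stand in for photonic
# sub-networks, and the merge rule is a plain concatenation. Names, sizes,
# and the merging strategy are assumptions, not the authors' protocol.

rng = np.random.default_rng(0)

NUM_CATEGORIES = 1623   # e.g., an Omniglot-scale classification task
NUM_CHIPLETS = 16       # hypothetical number of deployed sub-models
IN_DIM = 4096           # hypothetical flattened input size

# Split the output categories across sub-models, one block per "chiplet."
category_splits = np.array_split(np.arange(NUM_CATEGORIES), NUM_CHIPLETS)
sub_models = [rng.normal(size=(IN_DIM, len(split))) for split in category_splits]

def distributed_classify(x):
    """Run every sub-model on the same input and merge the partial scores."""
    scores = np.concatenate([x @ w for w in sub_models])
    return int(np.argmax(scores))

sample = rng.normal(size=IN_DIM)
print("predicted category:", distributed_classify(sample))
```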

The researchers developed their largest-scale photonic chiplets to support input and output dimensions as large as 64 × 64. By integrating scalable wavefield diffraction with reconfigurable interference, the entire input is encoded passively and modulated in a highly parallel way, they said, achieving 160 TOPS/W on-chip energy efficiency and 879 TMACS/mm² area efficiency (up to two orders of magnitude improvement in both energy and area efficiency over existing AI chips).
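
As a back-of-the-envelope interpretation of the reported figures (a unit conversion, not a number taken from the paper beyond the two headline values), 160 TOPS/W corresponds to roughly 6 fJ of energy per operation:

```python
# Back-of-the-envelope interpretation of the reported efficiency figures.
# 1 TOPS/W equals 1e12 operations per joule, so energy per operation is its inverse.

TOPS_PER_WATT = 160                      # reported energy efficiency
AREA_EFF_TMACS_PER_MM2 = 879             # reported area efficiency (tera-MACs/s per mm^2)

ops_per_joule = TOPS_PER_WATT * 1e12
energy_per_op_fj = 1e15 / ops_per_joule  # convert joules/op to femtojoules/op

print(f"energy per operation: {energy_per_op_fj:.2f} fJ")                  # -> 6.25 fJ
print(f"MACs per second per mm^2: {AREA_EFF_TMACS_PER_MM2 * 1e12:.3g}")    # -> 8.79e+14
```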

The versatility and flexibility of Taichi were demonstrated by on-chip experiments showing an accuracy of 91.89% in 1623-category Omniglot character classification and 87.74% in 100-category mini-ImageNet classification. On-chip high-fidelity AIGC models were demonstrated in tasks such as music composition and the generation of high-resolution stylized paintings.

Taichi not only breaks the scale limitation on the path toward beyond-billion-neuron foundation models with large-scale, high-throughput photonic chiplets, the researchers said, but also achieves robustness to errors through information scattering and synthesis. The researchers believe the chip’s ability to solve complex on-chip AGI tasks in a scalable, accurate, and efficient manner will pave the way for real-world photonic computing to support applications in large machine learning models, AIGC, robotics, and other areas.

The research was published in Science (https://doi.org/10.1126/science.adl1203).

Published: April 2024
Glossary
neural network
A computing paradigm that attempts to process information in a manner similar to that of the brain; it differs from artificial intelligence in that it relies not on pre-programming but on the acquisition and evolution of interconnections between nodes. These computational models have shown extensive usage in applications that involve pattern recognition as well as machine learning as the interconnections between nodes continue to compute updated values from previous inputs.
artificial intelligence
The ability of a machine to perform certain complex functions normally associated with human intelligence, such as judgment, pattern recognition, understanding, learning, planning, and problem solving.
Research & Technology, Optics, photonic integrated circuits, computing, optical computing, chips, neural network, chiplet, artificial intelligence, artificial general intelligence, Asia-Pacific, Tsinghua University
