

Scalable Error-Correction Signals Forthcoming Efficiency Gains for Quantum Compute

BY GUILLAUME DUCLOS-CIANCI, NORD QUANTIQUE

Quantum computers hold tremendous promise: the potential to solve a range of complex problems that are currently intractable, even with today’s most powerful supercomputers. These problems span the fields of quantum chemistry and condensed matter physics, which have direct applications in crucial research areas such as drug discovery and the design of materials for more powerful and efficient batteries.

Early prototypes of quantum computers were brought online in the last decade. The modest performance of these forerunners led skeptics to loudly claim that the industry would never reach the levels of performance we have since come to see.



A superconducting architecture uses microwave photons inside an aluminum cavity to encode qubits in a bosonic code, which is a superposition of many resonant photon states. The design approach helps with error correction within Nord Quantique's systems. Courtesy of Nord Quantique.

Despite the astonishing systems of today, we must recognize that there is still much to accomplish if quantum computing is ever to deliver on its projected $2 trillion impact on industry verticals (according to McKinsey & Company).

In addition to positive momentum from the experimental successes of the past several years, quantum system developers possess a great advantage in their quest to develop the technologies needed to drive forthcoming progress in quantum computing: There are a number of different approaches to building quantum computers, each with its own pros and cons. Some photonic quantum computers, for example, constantly generate photons at a given rate. These photons travel through an optical network and are measured adaptively; each measurement depends on previous results and influences how subsequent photons are measured. In these systems, the photons are entangled in both space and time, and the computation is performed in real time as the photons are generated, processed, and measured. Other photonic architectures use highly entangled states of light, called cluster states, to steer the state of many entangled photons in a similar fashion.

Ion-trap, neutral atom, superconducting circuits, and quantum dots are among some of the additional approaches to quantum computing. Nord Quantique, for example, has chosen a superconducting architecture that uses microwave photons inside an aluminum cavity to encode qubits in a bosonic code, which is a superposition of many resonant photon states, to help with error correction within its systems.

Error correction is critical to any quantum information platform. The fragility of quantum information hosted on quantum platforms is a central impediment to achieving what is known as “fault-tolerant” quantum computing. The slightest “noise” from nearby electronics and magnetic fields can cause a loss of coherence in the system, triggering errors that destroy quantum information. Without a process to address this noise, and the errors it causes, developers cannot realistically hope to build reliable quantum computers that provide an advantage over classical supercomputers.

Scientists have studied different approaches to quantum error correction for decades. And today, the number of research initiatives addressing this challenge continues to rise. It has already been demonstrated that, with suitable encodings, noisy physical qubits can be combined into logical qubits of essentially arbitrary accuracy, provided the physical error rate lies below a certain threshold. This is the celebrated threshold theorem of fault-tolerant quantum error correction and computing.

Yet despite the decades of attention that quantum error correction has commanded from R&D, until recently industry had largely settled on a single architecture, called surface code lattice surgery. One obvious and very relevant advantage of this architecture is its high error threshold (about 1%). Further, surface code lattice surgery only requires nearest-neighbor couplings among qubits, which enables an easier system layout.
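
As a rough, illustrative guide (a textbook-style scaling relation, not a statement about any particular system), the performance and size of a surface code patch are often summarized as follows, where p is the physical error rate, p_th is the roughly 1% threshold, d is the code distance, and A is an order-one constant:

```latex
p_L \approx A \left(\frac{p}{p_{\mathrm{th}}}\right)^{(d+1)/2},
\qquad
N_{\mathrm{physical}} \approx 2d^{2} - 1
```

Increasing the distance d suppresses the logical error rate exponentially, but the number of physical qubits per logical qubit grows quadratically, which is the root of the overhead discussed below.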

This solution proved useful for multiple iterative advances in the realm of quantum computing. Today, technologies with components whose error rates sit just about two orders of magnitude below the threshold mentioned above are widely known.

So why do fault-tolerant computers remain out of reach?

The answer involves an important caveat of surface code lattice surgery: Both the space and time overheads of these systems are prohibitively large. Depending on certain variables, achieving fault tolerance may require up to a few thousand physical qubits per logical qubit, as well as hundreds of physical time-steps per logical time-step. This means physical circuits can be roughly 100x longer than the logical circuits they implement.

This bottleneck is best appreciated by looking at resource estimates for useful applications, where qubits are counted in the millions and computation times can stretch to entire centuries.
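
To make the scale of this overhead concrete, here is a minimal back-of-the-envelope sketch in Python. Every input number is an illustrative assumption chosen to sit in the ranges quoted above, not an estimate for any specific machine or algorithm.

```python
# Back-of-the-envelope resource estimate for a fault-tolerant computation.
# Every number below is an illustrative assumption, not a measured value.

logical_qubits = 1_000          # assumed width of the logical algorithm
phys_per_logical = 2_000        # "up to a few thousand" physical qubits per logical qubit
logical_depth = 1e13            # assumed number of logical time-steps in the algorithm
phys_steps_per_logical = 300    # "hundreds" of physical time-steps per logical time-step
cycle_time_s = 1e-6             # assumed duration of one physical time-step (1 microsecond)

total_physical_qubits = logical_qubits * phys_per_logical
runtime_s = logical_depth * phys_steps_per_logical * cycle_time_s

print(f"Physical qubits: {total_physical_qubits:,}")         # 2,000,000
print(f"Runtime: {runtime_s / (3600 * 24 * 365):.0f} years")  # ~95 years
```

Even with these generous assumptions, the totals land in the millions of physical qubits and in runtimes approaching a century, which is precisely the bottleneck described above.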

A quantum constraint

At the level of individual qubits, there are three main factors, each posing its own impediment, that arise during the operation of quantum error correction via surface code lattice surgery.

Quantum error correction itself is one obstacle. The idea behind quantum error correction is to encode the quantum information redundantly to improve its resilience, which requires many physical qubits per logical qubit. Error diagnostics are performed using a process called “error syndrome extraction”; the resulting stream of syndrome data is fed to a classical decoding algorithm, which determines the errors that have most likely affected the data.
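
As a toy illustration of this syndrome-extraction-and-decoding loop (using a three-qubit repetition code instead of the surface code, purely to keep the example small), the Python sketch below measures two parity checks and uses a lookup table to infer the most likely single bit-flip error. The function and variable names are hypothetical.

```python
# Toy example: syndrome extraction and decoding for a 3-qubit repetition code.
# Real systems use the surface code and far more sophisticated decoders; this
# only illustrates the "syndrome -> classical decoder -> correction" loop.

def extract_syndrome(bits):
    """Measure the two parity checks between neighboring qubits (classical stand-in)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup-table decoder: map each syndrome to the most likely single-qubit flip.
DECODER = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def correct(bits):
    flip = DECODER[extract_syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

# A single bit flip on the encoded state [0, 0, 0] is diagnosed and undone.
print(correct([0, 1, 0]))  # -> [0, 0, 0]
```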

The cost of redundant encoding shows up in the physical space required for successful operation: These quantum systems need to be much larger. Syndrome extraction also adds a time cost, because repeatedly measuring syndromes across these much larger circuits takes additional time.

On top of this first layer of overhead, there is also a price to pay for “gate synthesis,” which comes in the form of longer and more time-consuming operations. Roughly speaking, the mechanism that allows quantum error correction in the first place restricts the system to a more limited set of valid operations that can be performed on the data, called the “instruction set.” In other words, freedom of operation is traded for resilience against unwanted errors. Consequently, the gates that form the circuit at the logical level may not be directly applicable to the data, in which case they must be compiled into a sequence of supported instructions, which takes extra time.
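
To give a feel for this compilation cost, the sketch below estimates how many T gates are needed to approximate arbitrary-angle rotations in the commonly used Clifford+T instruction set, assuming the rule of thumb of roughly 3·log2(1/ε) T gates per rotation at precision ε (a rough scaling from the gate-synthesis literature, treated here as an assumption; exact constants depend on the method). The circuit size and precision are hypothetical.

```python
import math

# Rough T-count estimate for compiling arbitrary rotations into Clifford+T.
# The 3 * log2(1/eps) rule of thumb is an assumed scaling; exact constants
# depend on the synthesis algorithm.

def t_count_per_rotation(eps):
    """Approximate number of T gates to reach precision eps for one rotation."""
    return math.ceil(3 * math.log2(1 / eps))

rotations_in_circuit = 10_000   # hypothetical number of arbitrary-angle rotations
target_precision = 1e-10        # hypothetical per-gate synthesis precision

total_t_gates = rotations_in_circuit * t_count_per_rotation(target_precision)
print(t_count_per_rotation(target_precision))  # ~100 T gates per rotation
print(f"{total_t_gates:,} T gates in total")   # ~1,000,000
```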

Also, the gates directly supported by a quantum error-correcting code form a limited set that cannot be universal. Another resource is needed to complete the gate set: special states, known as “magic states,” which supply the missing operations in the instruction set.

This creates a separate challenge: How can magic states be prepared when the gates that would produce them directly are exactly the ones the code does not support? The answer lies in a process called distillation. The computer first prepares many copies of low-quality magic states, which are not fault tolerant. A special circuit, built only from the supported operations, then distills from these copies a handful of high-quality copies of the same state.



Ion-trap, neutral atom, superconducting circuits, and quantum dots are among the approaches to quantum computing. The road to what is known as "fault-tolerant" quantum computing involves first overcoming bottlenecks posed by the fragility of quantum information hosted on quantum platforms. Courtesy of Nord Quantique.

Distillation can be iterated many times to achieve the desired level of quality. This results in both a space overhead (auxiliary qubits are required to run the distillation circuit) and potentially a time overhead (the computation speed can be limited by the distillation process). In practice, there is a trade-off between the two: The more auxiliaries added to perform distillation, the less time the logical circuit has to wait for high-quality magic states.
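
The iteration is easy to sketch with the widely cited 15-to-1 distillation protocol, in which each round consumes 15 input states per output state and suppresses the error rate roughly as 35·p³ (a standard approximation used here as an assumption; the raw and target error rates below are likewise illustrative).

```python
# Iterated 15-to-1 magic state distillation (illustrative numbers only).
# Each round consumes 15 input states per output state and suppresses the
# error rate roughly as p_out ~ 35 * p_in**3.

def distilled_error(p_in, rounds):
    p = p_in
    for _ in range(rounds):
        p = 35 * p ** 3
    return p

p_raw = 1e-3    # assumed error rate of raw (undistilled) magic states
target = 1e-12  # assumed error rate required by the logical circuit

rounds = 0
while distilled_error(p_raw, rounds) > target:
    rounds += 1

print(rounds)        # rounds of distillation needed (2 with these numbers)
print(15 ** rounds)  # raw magic states consumed per high-quality output (225)
```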

The overhead caused by the additional qubits required for error correction using this method is massive. And certainly today, it is sufficient to impede the development of a full-fledged fault-tolerant quantum computer.

Fault-tolerance

Many developers in the quantum industry today recognize the need for more efficient error correction alternatives. Some of these alternatives, such as quantum low-density parity-check (qLDPC) codes, have undergone many years of research. Yet despite their capabilities and promise, these codes are difficult to implement. They typically require non-local couplers, which must be reconfigurable to allow computation rather than merely serving as quantum memories. These components are also apt to limit parallelization in the logical circuits, which translates into yet another time overhead and limits the potential gains.

One alternative, which is not mutually exclusive with qLDPC codes, involves building resilience at the individual qubit level. This is done by encoding the qubit in a much bigger and richer state space using a harmonic oscillator (sometimes called a bosonic mode). In practice, these are microwave cavities that contain photons. They can be made extremely clean, so that photon lifetimes and coherence times can reach a few milliseconds. This compares favorably, for example, to traditional superconducting qubits, which typically have lifetimes in the low hundreds of microseconds or less.

The state space of a boson is technically infinite, and as such it comes with intrinsic redundancy. Redundancy can be built at the single-unit level based on this natural property, in what is called finite energy Gottesman-Kitaev-Preskill (GKP) qubit encoding.
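
For reference, the ideal GKP code words are commonly written as infinite combs of position eigenstates, and the finite-energy version mentioned above is obtained by applying a Gaussian envelope. The expressions below follow the standard textbook form and are shown only for illustration, not as a description of Nord Quantique's specific implementation.

```latex
% Ideal (square-lattice) GKP code words, up to normalization
|0_{\mathrm{GKP}}\rangle \propto \sum_{n\in\mathbb{Z}} \lvert q = 2n\sqrt{\pi}\,\rangle ,
\qquad
|1_{\mathrm{GKP}}\rangle \propto \sum_{n\in\mathbb{Z}} \lvert q = (2n+1)\sqrt{\pi}\,\rangle

% Finite-energy code words: a Gaussian envelope set by the parameter \Delta
|\mu_{\Delta}\rangle \propto e^{-\Delta^{2}\hat{n}}\,|\mu_{\mathrm{GKP}}\rangle ,
\quad \mu \in \{0,1\}
```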

This scheme offers several distinct advantages, both in performance and in the information it delivers. Quantum error correction is performed at the single-qubit level, and the information acquired on individual qubits during this process is very valuable, akin to the syndrome of a qubit code. This means that the qubits are better protected, because they are themselves error corrected, and that they are accompanied by extra information that dynamically “tells” the hardware which qubits have the best fidelity.

Second, the GKP encodings form a family of protocols. One can encode qudits, which are higher dimensional systems, instead of qubits, with no extra cost in terms of physical resources (though with performance that is somewhat degraded). Still, certain tasks (such as magic state distillation) can be orders of magnitude more efficient when using qudits rather than qubits. This directly translates into important savings.

Additionally, the presence of bosons in the system can provide its own set of advantages. A hybrid qubit-boson architecture avoids the need to represent bosons as qubits, since bosonic modes are readily available within the system itself. This allows a quantum processing unit (QPU) designer to use each boson either to encode a qubit or to represent a bosonic degree of freedom directly, depending on the use case. In practice, this will translate into important savings of both time and space when performing gate synthesis.

Such an approach does not come without additional challenges. The control techniques and apparatus of bosonic QPUs are more involved and more complex, and the technology is less mature than other superconducting designs.

At the same time, this complexity is a fair price to pay to unlock fault-tolerant quantum computing, which, in the end, is where the most value lies. Even as additional challenges to resilient computation with these devices persist, bosonic qubits were the first to be experimentally demonstrated significantly above the break-even point in the context of an actively error-corrected quantum memory.

info@nordquantique.ca
