Predictive Benchmarking of High-Fidelity Rydberg CZ Gates

Significance 

Advancing fault-tolerant quantum computing hinges on one simple requirement: two-qubit entangling gates must be nearly perfect. In practice, this demand has proven to be one of the most formidable hurdles in the field. For years, researchers have known that while individual qubits can often be controlled with remarkable precision, the step of entangling them introduces errors that accumulate quickly in larger circuits. Unless those errors are reduced below the thresholds demanded by quantum error correction, the dream of building machines that outperform classical computers at meaningful tasks remains out of reach.

Neutral atom arrays, and in particular systems harnessing Rydberg interactions, have recently attracted enormous interest as a potential solution. The appeal is clear: neutral atoms can be trapped in large, reconfigurable arrays with optical tweezers, offering scalability and flexibility that few other platforms can match. The Rydberg blockade mechanism, where the excitation of one atom prevents its neighbor from being simultaneously excited, provides a natural way to implement entangling gates. Over the last decade, these ingredients have been assembled into systems capable of executing two-qubit gates with fidelities above 99.5%, a figure that would have been unimaginable not long ago. Yet even this level of control is insufficient when one considers the unforgiving demands of fault-tolerant architectures, which generally require fidelities in the neighborhood of 99.9% or higher.

What makes further progress so difficult is not simply the engineering of stronger lasers or cleaner vacuum chambers, but the fact that the errors themselves are multifaceted. Decoherence from Rydberg state decay, noise in laser frequency and intensity, residual atomic motion, and imperfections in pulse shaping all combine to erode performance. Crucially, these error sources vary in their importance depending on how a gate is implemented, how quickly it is driven, and which physical states are involved. Without a clear and predictive framework to untangle these contributions, improvements risk becoming a matter of trial and error, offering only incremental gains.

Addressing this challenge, a new study published in PRX Quantum, conducted by Dr. Richard Bing-Shiun Tsai, Dr. Xiangkai Sun, Dr. Adam Shaw, and Dr. Ran Finkelstein, and led by Professor Manuel Endres of the Division of Physics, Mathematics and Astronomy at Caltech, developed a new benchmarking method called symmetric stabilizer benchmarking that isolates the fidelity of Rydberg-based CZ gates while minimizing sensitivity to single-qubit errors. The researchers also created fidelity response theory, an analytical framework that connects laser noise spectra to gate infidelity through well-defined response functions. Together, these tools allowed them not only to demonstrate a record-high entangling gate fidelity but also to explain the underlying error mechanisms and predict how further improvements can be achieved. This dual development provides both a practical measurement protocol and a theoretical guide for advancing neutral atom quantum computing.
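
To make the response-function idea concrete, the sketch below illustrates numerically how a framework of this kind operates: a laser noise power spectral density (PSD) is integrated against a gate-specific response function to yield an infidelity estimate. The servo-bump PSD, the Lorentzian response model, and every parameter value are illustrative assumptions of this sketch, not the expressions derived in the paper.

```python
# A minimal numerical sketch of the fidelity-response-theory idea:
# integrate a laser-noise power spectral density (PSD) against a
# gate-specific response function to estimate infidelity. The PSD shape,
# the Lorentzian response model, and all parameters below are assumptions.
import numpy as np

def laser_frequency_psd(f):
    """Toy frequency-noise PSD (Hz^2/Hz): white floor plus a servo bump."""
    white_floor = 1e2
    servo_bump = 1e4 * np.exp(-0.5 * ((f - 2e5) / 5e4) ** 2)
    return white_floor + servo_bump

def gate_response(f, rabi_hz):
    """Toy response function peaked near the Rabi frequency (units 1/Hz^2)."""
    width = 0.2 * rabi_hz
    return (1.0 / (1.0 + ((f - rabi_hz) / width) ** 2)) / rabi_hz**2

f = np.linspace(1e3, 5e6, 200_001)  # Fourier-frequency grid (Hz)
for rabi_mhz in (2.0, 4.0, 7.7):
    integrand = laser_frequency_psd(f) * gate_response(f, rabi_mhz * 1e6)
    # trapezoidal integration written out to stay NumPy-version agnostic
    infidelity = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f)))
    print(f"Rabi frequency {rabi_mhz:4.1f} MHz -> toy infidelity {infidelity:.2e}")
```

In this toy model the 1/Ω² prefactor in the response makes the integrated infidelity fall roughly quadratically with Rabi frequency, echoing the frequency-noise scaling law discussed later in this article.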

The Caltech team worked with strontium-88 atoms confined in arrays of optical tweezers, encoding qubits on the narrow optical clock transition. To create entanglement, they relied on promoting atoms into highly excited Rydberg states where strong interactions prevent simultaneous excitation, a mechanism known as blockade. Using this effect, they implemented a time-optimal controlled-Z (CZ) gate, carefully shaped by sinusoidal phase modulation and constrained by the rise and fall times of their modulators. The delicate balance of this pulse sequence was critical, as even small imperfections could accumulate into measurable errors. To truly assess how well the CZ operation performed, the group introduced a new benchmarking approach they called symmetric stabilizer benchmarking. Instead of relying on conventional randomized benchmarking, which often requires local addressing, they designed circuits that interleaved CZ gates with global π/2 rotations around different axes. This choice ensured that the system evolved only through a restricted set of symmetric stabilizer states, so the impact of single-qubit imperfections was suppressed. By varying the number of CZ gates applied and monitoring how the probability of returning to the initial state decayed, they could extract the fidelity of the entangling operation itself rather than a mixture of unrelated processes, as sketched below.
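
As a rough illustration of that extraction step, the sketch below fits a synthetic return-probability decay against circuit depth. The synthetic data, the model P(N) = A·p^N + B, and the shortcut of reading the fitted decay constant as the per-gate fidelity are all simplifications of this sketch; the paper's actual symmetric stabilizer analysis, including its leakage correction, is more involved.

```python
# A simplified sketch of extracting a per-gate fidelity from benchmarking
# data: fit the decay of the return probability versus the number of
# interleaved CZ gates. Data and model are synthetic assumptions.
import numpy as np
from scipy.optimize import curve_fit

def decay_model(n_gates, amplitude, p, offset):
    # Return probability decays geometrically with circuit depth.
    return amplitude * p**n_gates + offset

rng = np.random.default_rng(seed=1)
depths = np.arange(1, 81, 5)                           # CZ gates per circuit
ideal = decay_model(depths, 0.5, 0.9971, 0.5)          # assumed "true" decay
measured = ideal + rng.normal(0.0, 0.004, depths.size) # shot-noise-like scatter

popt, _ = curve_fit(decay_model, depths, measured, p0=(0.5, 0.99, 0.5))
print(f"fitted decay constant p = {popt[1]:.4f} (true value 0.9971)")
```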

After correcting for leakage errors, the authors reported an average CZ fidelity of 0.9971, one of the highest values achieved in neutral atom systems. This number was not plucked from a single optimized data point but emerged consistently from sequences run at their maximum available Rabi frequency of 7.7 MHz. Importantly, the fidelity was not treated as a mere black-box outcome. The team compared their measurements against ab initio simulations that incorporated four major error channels: spontaneous and blackbody-induced decay of the Rydberg state, laser frequency fluctuations, laser intensity fluctuations, and residual atomic motion. The close agreement between experiment and theory was a key validation, showing that the observed errors could be traced back to well-understood physical sources rather than hidden technical artifacts. The Caltech team also found that at lower gate speeds the longer exposure to decay dominated, while at higher speeds noise from laser fluctuations took over.

By developing fidelity response theory, the researchers could go beyond brute-force simulations and derive analytical scaling laws. They showed, for example, that frequency-noise-induced errors decrease roughly with the square of the Rabi frequency, while decay errors fall off more slowly, and intensity noise contributes a nearly constant floor. When they combined these predictions, the curves lined up with experimental data across the board. This union of precise measurement with a transparent theoretical framework was the real achievement: not just setting a record fidelity, but demonstrating why that record was possible and how it might be pushed further toward the elusive 0.999 level.
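
Those scaling laws can be combined into a simple error budget, as in the back-of-the-envelope sketch below. The prefactors are invented for illustration (in the paper they follow from measured noise spectra); only the functional forms, decay error proportional to 1/Ω, frequency-noise error proportional to 1/Ω², and a roughly constant intensity-noise floor, come from the text above.

```python
# A back-of-the-envelope error budget built from the stated scaling laws.
# The prefactors A_DECAY, A_FREQ, and A_INT are invented for this sketch;
# only the functional forms come from the article.
import numpy as np

A_DECAY = 2.0e-3  # decay error at 1 MHz Rabi frequency (assumed)
A_FREQ = 8.0e-3   # frequency-noise error at 1 MHz (assumed)
A_INT = 5.0e-4    # intensity-noise floor (assumed)

for rabi_mhz in (1.0, 2.0, 4.0, 7.7, 15.0):
    err_decay = A_DECAY / rabi_mhz   # shorter gates spend less time in the Rydberg state
    err_freq = A_FREQ / rabi_mhz**2  # frequency noise is suppressed quadratically
    err_int = A_INT                  # roughly speed-independent floor
    total = err_decay + err_freq + err_int
    print(f"Rabi {rabi_mhz:5.1f} MHz | decay {err_decay:.1e} | "
          f"freq {err_freq:.1e} | intensity {err_int:.1e} | total {total:.1e}")
```

With any such prefactors, the competition between terms produces an error minimum at an intermediate gate speed, which is why identifying the dominant channel at each Rabi frequency matters for targeted improvement.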

In conclusion, by pushing the fidelity of Rydberg-based CZ operations to 0.9971 and grounding that performance in a rigorous error model, Professor Manuel Endres and his colleagues show that neutral atom platforms are no longer limited by mysterious imperfections but instead by quantifiable and, importantly, addressable noise sources. This transition from empirical progress to predictive understanding is critical. It suggests that future gains will not depend on blind trial-and-error improvements but on targeted interventions informed by analytical scaling laws and careful benchmarking.

We believe one of the most direct implications is for quantum error correction. The difference between 99.7% and 99.9% fidelity may appear minor on paper, yet in practice it can determine whether logical qubits survive or collapse under repeated cycles of error correction. The study demonstrates a realistic path to reaching that threshold by pointing to specific technical upgrades, such as reducing laser frequency noise or improving the response time of modulators. In this sense, the work provides not just a snapshot of present-day performance but a roadmap toward the practical requirements of fault-tolerant computation.

Beyond computing, moreover, the fidelity response theory developed here offers a broader conceptual tool. By linking arbitrary power spectral densities of noise to measurable infidelity, the framework can be applied well outside the immediate context of two-qubit gates. It can diagnose the limits of many-body simulations, guide the design of adiabatic state preparation protocols, or even be adapted to other quantum hardware platforms where noise is spectrally complex. In doing so, it reshapes the way researchers think about noise: not as a static background disturbance, but as a structured influence that can be mapped, predicted, and mitigated.

Equally important is the new benchmarking protocol. Symmetric stabilizer benchmarking provides a clean way of isolating entangling gate fidelity without the confounding influence of single-qubit errors. This is especially relevant for large neutral atom arrays, where global control is often the only realistic option. With this protocol available as a standard, comparisons between different laboratories and gate designs will be far more meaningful, accelerating collective progress.

About the author

Manuel A. Endres

Professor of Physics
Division of Physics, Mathematics and Astronomy, Caltech

Research interests: experimental and theoretical quantum science, including experiments with individually controlled neutral atoms targeting novel approaches for quantum simulation, quantum information, and quantum-enhanced metrology, as well as theory work in quantum many-body physics, applications of machine learning, and proposals for new quantum science and AMO platforms.

Reference

Tsai, R. B.-S., Sun, X., Shaw, A., Finkelstein, R., & Endres, M. (2025). Benchmarking and Fidelity Response Theory of High-Fidelity Rydberg Entangling Gates. PRX Quantum, 6, 010331. https://doi.org/10.1103/PRXQuantum.6.010331

