What Were Your Favorite APS March Talks?

 

Like many other physicists, we spent the past week at the APS March Meeting, sitting in video conference rooms watching PowerPoints, wondering if we should maybe have joined a different video conference to watch a different PowerPoint, all while worrying about how our own presentations would go and hoping our pets wouldn't make noise during them. But the dust is finally settling. Despite the chaos of trying to keep track of dozens of quantum computing talks, and the fatigue that comes from forgetting to get up from our chairs for an entire day, we were able to sit in on a host of amazing talks about quantum hardware.

We've rounded up a few talks we liked that represent some of the more interesting pieces of work to come out of the meeting, showing advances in qubit architectures, control mechanisms, and other hardware topics. We've intentionally left out any IBM results so as not to look tacky (most of this blog's authors work there), but feel free to discuss IBM results in the comments. This isn't a comprehensive list or a list of the "best"; given how many talks there were, we definitely left off some other cool presentations. These are just some of the ones we noticed and wanted to share. Sound off in the comments (or on Twitter, or elsewhere!) with what you thought about the meeting, these talks, or any other research that got you thinking this past week.

Presenter: Farah Fahim (Fermi National Accelerator Laboratory) 
Abstract: Deadzone-less, large-area camera systems can be assembled by connecting wafer-scale sensors to an array of almost reticle-size, four-side-tileable, edgeless readout integrated circuits (ROICs). The design of truly edgeless ROICs, with active area extending to their edges, has been made possible by the advent of 3D integration technologies with high-density interconnects, which enable new routing and I/O paradigms. Despite their obvious potential, the realization and widespread development of truly edgeless ROICs to create gapless detectors has faced several obstacles, including manufacturing processes related to 3D integration, identification of known good dies, and edgeless design methodologies. The advancements required in "thru via" approaches and wafer bonding, and their impact on developing the integrated electronics required for quantum and AI, will be discussed.

We thought it was really interesting to see how other physicists have dealt with scaling up experiments to ridiculous levels of complexity; the work provides some inspiration (and hope!) for the future of quantum devices.

Presenter: Jacob Blumoff (HRL Laboratories LLC)
Abstract: Existing architectures for silicon quantum-dot qubits have enabled high-fidelity state preparation and measurement1, low-error randomized benchmarking2, and millisecond-scale dynamical decoupling3. To facilitate improved control of the underlying electrostatic potential and scaling to larger arrays, we present a more advanced design called Single-Layer Etch-Defined Gate Electrode, or “SLEDGE.” These devices feature a single layer of non-overlapping gate electrodes and employ vias to break the plane to backend routing. Using this process, we demonstrate exchange-only qubit initialization, measurement, and randomized benchmarking with fidelities that compare favorably to the previous design. This architecture provides a path to scalable and high-performance silicon-based quantum devices.
  1. Blumoff et al., APS March Meeting 2020, R38.00001
  2. Andrews et al., Nat. Nano. 14, 747 (2019)
  3. Sun et al., APS March Meeting 2020, L17.00008 

This talk was a great introduction to exchange-only qubits in Si/SiGe. Blumoff discussed scalability and fabrication, including improvements made with vias, and the new architecture performs about as well as the one it was designed to improve upon.

Presenter: Philippe Campagne-Ibarcq (Quantic Team, Inria Paris)
Abstract: In 2001, Gottesman, Kitaev and Preskill (GKP) proposed to encode a fully correctable logical qubit in grid states of a single harmonic oscillator. Although this code was originally designed to correct against shift errors, GKP qubits are robust against virtually all realistic error channels. Since this proposal, other bosonic codes have been extensively investigated, but only recently were the exotic GKP states experimentally synthesized and stabilized. These experiments relied on stroboscopic interactions between a target oscillator and an ancillary two-level system to measure non-destructively the GKP code error syndromes.
In this talk, I will review the fascinating properties of the GKP code and the conceptual and experimental tools developed for trapped ions and superconducting circuits, which enabled quantum error correction of a logical GKP qubit encoded in a microwave cavity. I will describe ongoing efforts to further suppress logical errors, and in particular to avoid the appearance of uncorrectable errors stemming from the noisy ancilla involved in error syndrome detection.

This talk started with a very clear introduction to GKP states, and the experiments themselves were amazing. The degree of technical skill that went into making and manipulating these states was really cool. Plus the states are really cool looking.

Presenter: Andras Gyenis (Princeton University)
Abstract: Encoding a qubit in logical quantum states with wavefunctions characterized by disjoint support and robust energies can offer simultaneous protection against relaxation and pure dephasing. One of the most promising candidates for such a fully protected superconducting qubit is the 0-π circuit [Brooks et al., Phys. Rev. A 87, 052306 (2013)]. Here, we realize the proposed circuit topology in an experimentally obtainable parameter regime and show that the device, which we call the soft 0-π qubit, hosts logical states with disjoint support that are exponentially (first-order) protected against charge (flux) noise. Multi-tone spectroscopy measurements reveal the energy-level structure of the system, which can be precisely described by a simple two-mode Hamiltonian. Using a Raman-type protocol, we exploit a higher-lying charge-insensitive energy level of the device to realize coherent population transfer and logical operations. The measured relaxation (T_1 = 1.6 ms) and dephasing (T_R = 9 μs, T_2E = 25 μs) times demonstrate that the soft 0-π circuit not only broadens the family of superconducting qubits, but also constitutes an important step towards quantum computing with intrinsically protected superconducting qubits.

The 0-π qubit lives! It was great to see how far protected qubits have come. We're also still laughing about the author's claim that the qubit is "so well protected, even from experimentalists."

Presenter: Mahdi Naghiloo (MIT)
Abstract: We propose a new scheme that combines parametric mode conversion and adiabatic techniques in a pair of coupled nonlinear Josephson junction transmission lines to realize broadband isolation without magnetic elements. The idea is to induce an effective unidirectional parametric coupling between two otherwise orthogonal modes of propagation and engineer the dispersion to have an adiabatic conversion between two modes. Our realistic analysis suggests more than 20 dB isolation over an octave of bandwidth (4-8 GHz) with less than 0.1 dB of insertion loss. Our scheme is compatible with the current superconducting qubit technology. We report on progress toward implementing this device. 

This was a proposal for a TWPA-like device to replace a macroscopic magnetic isolator. It was very exciting to see, because the projected performance looks almost identical to that of commercial components. It looks like a difficult microwave engineering challenge, but the payoff would be enormous.

Presenter: Teruaki Yoshioka (Tokyo Univ of Science, Kagurazaka)
Abstract: We report an experiment on the fast initialization of a superconducting qubit using a SINIS junction.
Active and unconditional initialization is required for NISQ devices, the surface code, and quantum computation in general.
By applying a bias voltage to the SINIS, photon-assisted tunneling occurs and the Q value of the resonator is temporarily degraded. A qubit is coupled to the resonator, and energy is transferred from the qubit to the resonator by applying two drive pulses, following an existing initialization scheme; the energy is then efficiently emitted to the environment through the natural relaxation of the resonator. When initialization is not being performed, that is, when no bias voltage is applied to the SINIS, the Q value of the resonator recovers, so it does not affect readout or gate operations.
In this presentation, we report the experimental results and the fabrication of the device.

The superconductor-insulator-normal metal-insulator-superconductor sandwich (SINIS) idea has been knocking around for a while. It's a cool attempt to take a piece of physics we'd normally say was a big problem—exciting quasiparticles—and turn it into a reset mechanism for resonators. 

Presenter: Chuanhong Liu (University of Wisconsin-Madison)
Abstract: The Single Flux Quantum (SFQ) digital logic family has been proposed as a scalable approach for the control of next-generation multiqubit arrays. In an initial implementation, the fidelity of SFQ-based qubit gates was limited by quasiparticle (QP) poisoning induced by the dissipative SFQ driver. Here we introduce superconducting bandgap engineering as a mitigation strategy to suppress QP poisoning in this system. We explore low-gap moats and high-gap fences surrounding the qubit structure, along with a geometry involving extensive coverage of the high-gap groundplane with low-gap traps. We use charge-sensitive transmon qubits to evaluate the effectiveness of the various mitigation strategies in experiments involving direct QP injection. 

This is the first time we've seen SFQ logic interfaced to qubits without destroying the qubits; they still had good coherence times. It was also a cool introduction to superconducting bandgap engineering as a strategy for mitigating quasiparticle poisoning.

Presenter: Helin Zhang (University of Chicago)
Abstract: The heavy-fluxonium qubit is a promising building block for superconducting quantum processors due to its long relaxation and dephasing times at the flux-frustration point. However, the suppressed charge matrix elements and small splitting between computational states have made it challenging to perform fast single and two-qubit gates with conventional methods. In order to achieve high-fidelity initialization and readout, we demonstrate protocols utilizing higher levels beyond the computational subspace. We realize fast qubit control using a universal set of single-cycle flux gates, which are comprised of directly synthesizable pulses, and reach fidelities exceeding 99.8%. Finally, we discuss a set of flux-controlled two-qubit gates for inductively coupled fluxonium qubits. We believe that the fast, flux-based control combined with the coherence properties of the heavy fluxonium make this circuit one of the most promising candidates for next-generation superconducting qubits. 

This talk took a good look at extremely low-frequency fluxonium qubits, at only a couple hundred MHz. It was really neat to see people control qubits at or below the thermal limit, since they have to cool these qubits before they can even begin the experiment. Also, the fast flux gates look similar to something we'd see with spin qubits, so it's interesting to see those techniques come together; the control is very atypical.

Presenter: Nico Hendrickx (QuTech and Kavli Institute of Nanoscience, Delft University of Technology)
Abstract: Quantum dot spin qubits are a promising platform for large-scale quantum computers. Their inherent compatibility with semiconductor fabrication technology promises the ability to scale up to large numbers of qubits. However, all prior experiments are limited to two-qubit logic.
Here, we go beyond these demonstrations and operate a four-qubit quantum processor. Furthermore, we define the quantum dots in a two-by-two grid and thereby realize the first two-dimensional qubit array with semiconductor qubits, a crucial step toward quantum error correction and practical quantum algorithms. We achieve these results by defining qubits based on hole states in strained planar germanium quantum wells, enabling a high degree of control, well defined qubit states, and fast, all-electrical qubit driving.
We perform one, two, three, and four qubit logic for all qubit combinations, realizing a compact and high-connectivity circuit. Furthermore, we show that the hole coherence can be extended up to 100 ms using refocusing pulses and employ this to perform a quantum circuit executed on the full four-qubit system. These results mark an important step for scaling up spin qubits in two dimensions and position planar germanium as a prime candidate for practical quantum applications. 

This research represented a big simplification of the germanium spin-qubit platform. The hole states' intrinsic spin-orbit coupling meant the researchers didn't need a micromagnet for microwave manipulation, allowing them to create an array rather than just a two-qubit device.

Presenter: Ciaran Ryan-Anderson (Honeywell Intl)
Abstract: Mid-circuit measurement and active feed-forward are essential ingredients to fault-tolerant quantum error correction, and the QCCD architecture naturally lends itself to these operational primitives. Ion-transport operations allow for individual qubits to be spatially isolated, where they may be safely interrogated and reinitialized with focused laser beams without damaging idling qubits. Here we present experimental characterizations of these operations including both primitive as well as algorithmic benchmarking results. We will also discuss our results’ implications for the QCCD architecture’s capabilities. 

It has been really awesome to see the steady progress they have made from their original H0 device. We appreciated the clear communication of the effort they have dedicated to methodically solving each problem in turn and sharing the results.


Presenter: Prof Andrew Houck (Princeton University) 
Abstract: We employ tantalum transmon qubits with coherence times above 0.3 ms to demonstrate the importance of materials engineering in realizing a superconducting quantum processor. In this talk we characterize the regions and mechanisms of loss in state-of-the-art two-dimensional qubits. To do so, we efficiently iterate our fabrication procedure using materials spectroscopy. We correlate the spectroscopic results with time domain measurements to enable rapid screening of new materials and processing techniques. We further elucidate the dominant loss sources by characterizing time, frequency, geometry, and temperature fluctuations of coherence. Our fabrication techniques can be easily employed in standard industry and academic cleanrooms, and integrated into existing quantum processor architectures.

It's always great to see new innovations in this field using novel materials. Prof. Houck did a great job outlining why this type of creative exploration is necessary, and the results are not only impressive but also easily implemented in other labs. We also enjoyed seeing his co-author, the cat. Unfortunately, he was a little blurry, but we just assume this means he has very precise momentum.

Presenter: Uros Delic (University of Vienna)
Abstract: Owing to its excellent isolation from the thermal environment, an optically levitated silica nanoparticle in ultra-high vacuum has been proposed to observe quantum behavior of massive objects at room temperature, with applications ranging from sensing to testing fundamental physics. As a first step towards quantum state preparation of the nanoparticle motion, both cavity and feedback cooling methods have been used to attempt cooling to its motional ground state, albeit with many technical difficulties. We have recently developed a new experimental interface, which combines stable (and arbitrary) trapping potentials of optical tweezers with the cooling performance of optical cavities, and demonstrated operation at desired experimental conditions [1]. In order to overcome still existent technical problems we implemented a new cooling method – cavity cooling by coherent scattering – which we employ to demonstrate ground state cooling of the nanoparticle motion [2, 3]. In this talk I will present our latest experimental result on motional ground state cooling of a levitated nanoparticle and discuss next steps toward macroscopic quantum states.
  1. Delic, Grass et al., QST 5 (2), 025006
  2. Delic et al., Phys. Rev. Lett. 122, 123602
  3. Delic et al., Science 367, 892-895
Figuring out why this result is cool is left as an exercise for the reader. :)

Here's How Ion Trap Quantum Computers Work

By Petar Jurcevic and Ryan Mandelbaum

It's easy, in principle, to build a quantum computer: just find a system that obeys the laws of quantum mechanics with properties that you can exploit in order to perform your computations. In practice, it's extremely hard to build a quantum computer; you need to be able to control that system.

Several architectures have arisen as viable, controllable quantum systems; perhaps the "leaders" are the superconducting qubits that IBM and Google research, and the trapped-ion approach pursued by companies such as Honeywell and IonQ. You can read about how superconducting qubits work here. As for me, I did my Ph.D. at the Institute for Quantum Optics and Quantum Information in Innsbruck, Austria, studying quantum simulations of many-body interacting systems using trapped ions under Professor Rainer Blatt and Dr. Christian Roos. I didn't join the team because I had a specific preference for ions; it was an opportunity to enter the field of experimental quantum computation at a time when such programs were generally pursued by academic groups, with a few notable exceptions. I was lucky to get the opportunity to join one of the major trapped-ion research groups in the world and to experience the evolution of quantum computation early on.

Based on my experience, I can fairly say that neither technology is "better" or "worse" than the other; both approaches have strengths as well as challenges that their respective engineers must overcome (if they hope for their devices to become useful in the future).

A macroscopic ion trap (via Blatt lab)
 

Ion traps are among the more "natural" quantum computers; we represent the computer's two bit values with two quantum states of an electron around an atom. We begin setting up our device by loading from a neutral atomic source inside the computer's vacuum chamber, adding energy with heat or lasers in order to create a tiny stream of neutral atoms. There are many atomic species we could use, calcium and ytterbium being among the more popular; each has its own pros and cons, and the choice comes down to the engineering path you'd like to go down. Once we have our stream of neutral atoms, we use a laser to rip off a single electron, charging the atom and turning it into an ion. This charged ion then falls into the trapping potential, a specially generated rotating electromagnetic field, which you can think of as a quickly spinning saddle keeping a ball at its base. The longer you run this process, the more ions you trap, and therefore the more qubits you have to work with; each ion represents one qubit.

In order to actually compute with these qubits, we begin by Doppler cooling the ions with lasers tuned slightly below an electronic transition frequency, slowing their motion. We then use optical pumping to put each atom into a well-defined internal ground state, and use sideband cooling, another laser cooling technique, to further cool the atoms' motional modes into their ground state.
 
Once the system is initialized, we use laser fields to apply single- or two-qubit gates. Single-qubit gates "move" the electron from one state to another, either leaving the electron in the ground (0) state, exciting it into the 1 state, or generating a superposition of 0 and 1. We also use lasers to couple these internal degrees of freedom, i.e., the qubit states, to the external motional degrees of freedom in order to generate two-qubit gates; the common motion of the ions, due to the shared harmonic trapping potential, can be thought of as a bus coupling all the qubits together at once. The laser fields are tuned such that the ions' motion gets excited and de-excited only if the qubits are in a particular state; this state-dependent force is known as the Mølmer-Sørensen gate, the most commonly used two-qubit gate scheme in trapped-ion architectures.
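
To make the bus picture concrete, here's a minimal numerical sketch (our illustration, not anyone's lab code) of the ideal Mølmer-Sørensen gate, modeled as exp(-iθXX/2) once the motional modes have been eliminated; at θ = π/2 it turns |00> into a maximally entangled state:

```python
import numpy as np
from scipy.linalg import expm

# Pauli X and the two-qubit XX interaction
X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)

# Ideal Moelmer-Soerensen gate: U = exp(-i * theta/2 * XX),
# fully entangling at theta = pi/2
theta = np.pi / 2
U_MS = expm(-1j * theta / 2 * XX)

ket00 = np.array([1, 0, 0, 0], dtype=complex)
print(np.round(U_MS @ ket00, 3))  # (|00> - i|11>)/sqrt(2)
```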
 
After we run our circuit of one- and two-qubit gates, it's time to make our final measurements in order to get our readout: the bitstring that represents the computation's results. We use fluorescence; we couple one of the two qubit states to a short-lived transition that scatters a lot of light, which we can collect with a CCD camera or a photomultiplier tube. If the qubit is in this state we see light, and if it is not, we don't. The resulting measurement is therefore represented by a series of dark or bright spots: the bitstring's zeros and ones.
 

These systems have natural advantages. Depending on which ion you use, ion trap computers can have qubit T1s (the measure of how long until the ions relax back to their ground state) of several minutes or longer, and T2s (the measure of how long until ions in superposition dephase) of several seconds. All ions of the same species are identical, so there's no variation in your qubits introduced by fabrication. Ions have extremely high gate fidelities; in fact, the record reported fidelities for single- and two-qubit gates have so far come from trapped-ion systems. Moreover, state preparation and measurement errors can be orders of magnitude smaller than in superconducting qubits and are rarely a real concern. Finally, ions in the same trap have all-to-all connectivity, meaning you can drive gates between any pair of ions in the system; this is a direct consequence of using the common motional modes as a bus. Sparsely connected superconducting qubits instead utilize SWAP gates or teleportation schemes to generate entanglement between distant qubits that lack a direct connection. While superconducting computers benefit from the ability to perform a variety of circuits, connectivity gives ion-trap-based platforms an edge in the early stages of quantum computing development, for now.
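
To picture the overhead that all-to-all connectivity avoids, here's a quick Qiskit sketch (our illustration) of the standard result that each SWAP used to shuttle information between unconnected superconducting qubits costs three CNOTs:

```python
from qiskit import QuantumCircuit, transpile

# A single SWAP, rewritten in terms of CNOTs by the transpiler
qc = QuantumCircuit(2)
qc.swap(0, 1)
print(transpile(qc, basis_gates=['cx', 'rz', 'sx']).count_ops())
# expect three 'cx' gates in the output
```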
 
A setup in an ion trap lab (via Blatt lab)


But ions have their own challenges. It can take a long time to cool the ions in the trap, and these systems require a lot more hands-on work, which makes them more difficult to automate. Selectively performing two-qubit gates on specific and/or individual qubit pairs is a harder engineering challenge with ions than with superconducting qubits. There are various approaches being pursued to improve these gates, all with their own sets of advantages and challenges, but all of them add a layer of complexity, with scalability and performance being a focus of current research. Gates can also take a few orders of magnitude more time to run than on superconducting quantum computers, a pain felt especially in cases where we must perform many iterations of a parametrized quantum circuit communicating with classical computational resources. Building devices with larger numbers of qubits can prove especially challenging.
 
There are proposals under development to surmount these scaling challenges, of course. One is the quantum charge-coupled device (QCCD) architecture, where a microfabricated trap contains various separate trapping regions, such as a loading region, a computing region, and a storage region, and the system shuttles ions around depending on how they're being used. Another proposal consists of multiple individual ion traps linked together via optical links, where photons mapped to qubit states can exchange information with ions in another trap. Each of these solutions comes with its own considerations: the QCCD architecture requires many control knobs that all need to be stable themselves, and linking ion traps is (for now) a very inefficient process.
Petar in the lab (via Blatt lab)


But during my seven years researching ions (five as a Ph.D. student, two as a postdoctoral researcher), I gained a lot of respect for this architecture that I've carried with me through my career. I think that ions made me really aware of the tiny effects that might not be a big deal now, but will be important to any quantum computing architecture as we scale up. After we made major improvements to our trapped-ion setup, we could suddenly measure the elevator movements in our building, for example! And though I cherish my time with ions and I love teasing my fellow transmon-qubit researchers about how easy ion qubits are to make, I'm also very happy I don't have to spend time fiddling with lasers and optics anymore.
 
Quantum technology is certainly further along than ever before, and given the attention, both ion traps and superconducting qubits are advancing quickly. And while there's certainly some business competition, many in the field have worked on both systems, and it's likely that the quantum ecosystem of the future will incorporate knowledge or hardware from both of these architectures.


Explained: Quantum Error Correction and Logical Qubits

By Antonio Córcoles and Maika Takita

It is a truth universally acknowledged, that a noisy quantum computer in possession of a good algorithm, must be in want of a fault-tolerant quantum error correcting code.
 
Quantum information is fragile. More so than its classical counterpart. Modern methods of error correction in computer science rely heavily on redundancy: by increasing the amount of data in a message in the form of check bits derived from the original data, the presence of errors can be detected (and possibly corrected) by the receiver. Among the many differences between quantum and classical information, two major aspects emerge when considering error correction protocols. First, quantum information cannot be duplicated due to the no-cloning theorem [1]. And second, quantum measurements collapse the information into a basis set of outcomes, thus destroying any superposition or entanglement exploited by quantum algorithms. These two aspects make the application of classical error correction methods to the quantum realm unfeasible. Additionally, whereas errors in classical information have only one form—0 flipping value to 1 or vice versa—there are two types of errors lurking within a quantum computation: bit-flips, |0> becoming |1> or vice versa, and phase-flips, \((|0> + |1>)/ \sqrt{2}\) becoming \((|0> - |1>)/\sqrt{2}\), for example.
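
A quick numerical illustration of the two error types (our sketch, using standard Pauli matrices): a bit-flip X swaps |0> and |1> but leaves |+> = (|0> + |1>)/√2 alone, while a phase-flip Z leaves |0> alone but flips |+> to |-> = (|0> - |1>)/√2:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit-flip
Z = np.array([[1, 0], [0, -1]])  # phase-flip

ket0 = np.array([1, 0])
plus = np.array([1, 1]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

print(X @ ket0)  # [0 1] -> |1>: the bit value flipped
print(Z @ ket0)  # [1 0] -> |0> is a Z eigenstate, nothing happens
print(Z @ plus)  # [0.707 -0.707] -> (|0> - |1>)/sqrt(2): phase flipped
```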
 
Despite all this, there exist ways to encode quantum information into larger spaces requiring neither exact replication of the information nor direct query of the data. The field of Quantum Error Correction (QEC) focuses on precisely this, providing the tools for building a reliable, essentially flawless qubit, the so-called logical qubit, by combining many faulty ones. Although there are other ways of protecting quantum information—as, for example, decoherence-free subspaces [2]—those can be thought of as quantum error suppression, rather than correction. We will focus uniquely on QEC in this post.
 
The first QEC codes were formulated independently by Shor [3] and Steane [4]. Further QEC theory was subsequently developed by Calderbank, Shor, Steane, Knill, Laflamme, and Bennett. Eventually, Gottesman [5] and Calderbank et al. [6] arrived at the very important concept of a stabilizer. Stabilizers are operations we apply to a set of qubits in order to obtain information about the qubits' state without disturbing it. Formally, a stabilizer group \(S\) is an abelian subgroup of the n-qubit Pauli group \(P^n\) (with \(P = \{I, X, Y, Z\}\) up to phases) that does not contain \(-I\). We can then define a codespace as the set of all states stabilized by the stabilizer group, which we use to encode the logical qubit states:

$$C = \{|\psi> \,:\, g|\psi> = |\psi> \;\; \forall g \in S\}$$
 
Consider arguably the simplest possible example, the codespace defined by a single codeword:

$$|\psi> = \frac{1}{\sqrt{2}}(|00> + |11>)$$

This codeword is stabilized by the operators \(XX\) and \(ZZ\), meaning that we can use these operators to learn about its parity (both in the Z- and in the X-basis) without disturbing it, and we can do this as many times as we want. If we were to prepare this codeword and subject it to noise (as done, for example, here [7]), we could identify errors by measuring the stabilizers mentioned above. These measurements yield the "error syndromes." Obviously, the above codespace is too small to host a logical qubit, but the principle is the same. In general, we can consider a logical codespace of \(2^k\) dimensions embedded into a physical space of \(2^n\) dimensions. The physical degrees of freedom offered by the n physical qubits are countered by the constraints imposed by the stabilizers, resulting in a reduced number of degrees of freedom that may serve as logical qubits. In standard notation, a \([[n,k,d]]\) QEC code is one that uses \(n\) physical qubits to encode \(k\) logical qubits and can correct up to \(\lfloor (d-1)/2 \rfloor\) errors.
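
As a sanity check on this example, here's a short numpy sketch (ours, not from the post's references) verifying that XX and ZZ stabilize the codeword above, and that a single bit-flip flips the ZZ syndrome to -1 while XX still reads +1:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
XX, ZZ = np.kron(X, X), np.kron(Z, Z)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

# Both stabilizers leave the codeword untouched (eigenvalue +1)
assert np.allclose(XX @ psi, psi) and np.allclose(ZZ @ psi, psi)

# Inject a bit-flip on the first qubit: (|10> + |01>)/sqrt(2)
err = np.kron(X, I) @ psi

print(err @ ZZ @ err)  # -1.0: the Z-parity syndrome flags the error
print(err @ XX @ err)  # +1.0: the X-parity is unaffected
```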

As for how this might work in hardware: we initialize a logical qubit, then start querying its constituent physical qubits with all of the stabilizers as fast as we can, receiving a series of 0s and 1s from the stabilizer measurements. 0s represent no error, while 1s represent errors in the code. We keep checking for these errors, either correcting them on the spot or keeping track of them and correcting at the end. Logical gates are performed with a set of operations that the stabilizers are blind to; by construction of the stabilizer group and the code, we have extra degrees of freedom available to operate the logical qubit without confusing gates for errors.

Fig. 1. Error propagation within a parity-check circuit

 

As an example, the paper in [8] implements the \([[4,2,2]]\) code, using four physical qubits to encode two logical qubits and performing parity checks on those four physical qubits to detect errors during the computation. This small code provides a good example of how errors can propagate within the code operations. The codespace comprises the following four physical states (omitting normalization): \(|0000> + |1111>, |1100> + |0011>, |1010> + |0101>\), and \(|0110> + |1001>\), corresponding to the logical states \(|0_p0_g>, |0_p1_g>, |1_p0_g>\), and \(|1_p1_g>\), respectively, where we have labeled our two logical qubits as ‘p’ and ‘g’. Now imagine we have initialized our data qubits in the state \(|\Psi_i> = |0000> + |1111>\) (corresponding to the \(|0_p0_g>\) logical state) and we run the circuit shown in Fig. 1. This circuit implements an X-check followed by a Z-check on the data qubits, meaning the two measurements on the syndrome qubit yield the \(XXXX\) and \(ZZZZ\) observables on the data qubits. Since \(XXXX\) and \(ZZZZ\) are stabilizers of this code, we should obtain the same state for the data qubits at the end of the circuit. Now let’s consider the scenario where the syndrome qubit undergoes a bit-flip error. This can happen anywhere in the circuit, but let’s focus on three possible locations, labeled \(A, B,\) and \(C\). Note that in none of the three cases is the error picked up by the \(X\)-check. Since a CNOT gate propagates bit-flip errors from control to target, it is easy to see that the error at \(A\) results in the final state \(|\Psi_f> = |1000> + |0111>\) and the error at \(C\) results in the final state \(|\Psi_f> = |0001> + |1110>\). In both of these scenarios the error is detected in the subsequent \(Z\)-check. However, consider the case where the bit-flip error happens at \(B\). In that case, not only does the Z-check yield the outcome 0 on the syndrome qubit, failing to detect that an error happened, but the final data-qubit state is \(|\Psi_f> = |0011> + |1100>\), which corresponds to a logical bit-flip on the second (g) logical qubit! This example highlights a critical aspect of QEC codes called fault tolerance. The [[4,2,2]] code is designed to be fault-tolerant for the ‘p’ (protected) logical qubit and not fault-tolerant for the ‘g’ (gauge) logical qubit.
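
The rule doing all the work in this example is the propagation identity CNOT·(X⊗I) = (X⊗X)·CNOT: a bit-flip on the control before a CNOT emerges as a bit-flip on both qubits after it, which is how the error at B spreads through the remaining CNOTs onto the data qubits. A two-line numpy check (our sketch):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
# CNOT with the first (most significant) qubit as control
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# X on the control before the CNOT == X on both qubits after it
print(np.allclose(CNOT @ np.kron(X, I), np.kron(X, X) @ CNOT))  # True
```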
 
We have seen how QEC codes can detect errors during a computation by querying some property of the data (for example, parity) without directly learning what the data is, which would collapse the quantum information. However, it is fair to ask to what degree we can use these codes to correct errors. How good do our physical qubits need to be? What is the physical overhead to pay for a reasonably sized, fault-free quantum computer? These are not trivial questions. Let’s start with the concept of fault tolerance introduced above. What is fault tolerance? Fault tolerance is a property of a circuit whereby it computes the correct result, with very little error, despite individual elements being faulty and unreliable. A fault-tolerant circuit is therefore not exactly infallible, but it will take several faults for it to yield the wrong result. Fault tolerance rarely occurs naturally and therefore has to be designed, and in designing it we need to bear in mind that it is critical that all the steps involved in the computation (encoding, syndrome extraction, logical operations, decoding…) be fault-tolerant. This has an immediate and profound consequence: having a code that encodes information into a logical qubit is not enough; we need fault-tolerant circuits as well. QEC doesn't always mean fault tolerance, and one has to be very careful in this regard when reading the literature.
 
Fault tolerance is a critical component of what goes into the concept of a ‘threshold’. The quantum threshold theorem states that when the probability of failure in a noisy device is low enough (below a certain threshold), we can attain arbitrarily low logical error rates by increasing the size of the encoding. This means that a fault-tolerant code severely limits the spread of errors within its circuits as long as such errors happen at a rate lower than a particular quantity called the threshold (*). Once our physical system's noise is below the threshold, we can make our computation arbitrarily better by making the code larger. Thus, the answers to ‘how many qubits do we need for one logical qubit?’ and ‘how good do our physical qubits need to be?’ are the same simple one: it depends. It depends on the code and it depends on our target logical error rate. But as long as the procedures are fault-tolerant, we are on the right path.
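
To see what "arbitrarily better" looks like, here's a toy calculation using the commonly quoted heuristic scaling p_L ≈ (p/p_th)^⌊(d+1)/2⌋ for a distance-d code (the exponent and the unit prefactor are illustrative assumptions, not exact values for any particular code):

```python
# Heuristic logical error rate versus code distance
p_th = 1e-2             # assumed threshold, roughly the surface code's ~1%
for p in (5e-3, 2e-2):  # one physical error rate below threshold, one above
    rates = {d: (p / p_th) ** ((d + 1) // 2) for d in (3, 5, 7)}
    print(f"p = {p}: {rates}")
# Below threshold, the logical rate shrinks as the code grows;
# above threshold, growing the code only makes things worse.
```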
 
With all these considerations, what goes into choosing a code? A lot of it has to do with the underlying hardware and the prevailing physical error sources. As a nice example, consider the surface code (SC) [9]. This is a very attractive code from an experimental point of view because it requires only nearest-neighbor connectivity and is relatively forgiving of physical error rates, with a threshold of almost 1%. Our team started exploring this code as our experimental systems became capable enough to attempt some small demonstrations. However, it soon became apparent that even the low connectivity required might be too much for the level of crosstalk in our systems [10]. These findings motivated a slight change of direction toward less-connected codes, and our theory team came up with a heavy-hexagonal (HH) code [11] that retained the essential advantages of the SC: very low connectivity requirements and a threshold that is not too low (**) (albeit a bit lower than that of the SC). The HH code provides a higher degree of protection against crosstalk than the SC simply by virtue of its topological design. It also shows how theory and experiment in QEC (and in quantum information in general) typically go hand in hand and propel each other forward. Finally, it also explains why many of the backends offered by IBM Quantum are built with a HH topology!
 
What lies ahead for logical qubits? With a number of small codes demonstrated on a variety of physical platforms in recent years, the next big step is to experimentally demonstrate fault-tolerant QEC. Following that, both theory and experiment will share the onus of advancing QEC, by building bigger and better systems and by developing codes that perform more efficiently and with lower overhead (the biggest part of which will be devoted to the resources needed to implement logical gates, of which we have said little in this post) at a given level of noise. Quantum computing technologies have shown spectacular progress in the last decade, but the really exciting journey toward truly powerful quantum computers is just starting now.
 
 

 
(*) Note that whereas the existence of a threshold guarantees fault tolerance, the opposite is not true; there are fault-tolerant codes that lack a threshold but have a pseudothreshold. A pseudothreshold is the physical error rate below which the logical error rate is lower than the physical error rate of the system for that particular logical qubit size only. The key difference is that trying to lower the logical error rate further by enlarging the logical qubit results in a different (lower) pseudothreshold.
 
(**) The HH code has a threshold for only one type of quantum error, but pseudothresholds exist for both types.
 
References:
 
[1] Wootters and Zurek, Nature 299, 802 (1982)
[2] Palma et al., Proc. R. Soc. Lond. A 452, 567 (1996)
[3] Shor, Phys. Rev. A 52, R2493 (1995)
[4] Steane, Phys. Rev. Lett. 77, 793 (1996)
[5] Gottesman, Phys. Rev. A 54, 1862 (1996)
[6] Calderbank et al., Phys. Rev. Lett. 78, 405 (1997)
[7] Córcoles et al., Nat. Commun. 6, 6979 (2015)
[8] Takita et al., Phys. Rev. Lett. 119, 180501 (2017)
[9] Bravyi and Kitaev, arXiv:quant-ph/9811052 (1998)
[10] Takita et al., Phys. Rev. Lett. 117, 210505 (2016)
[11] Chamberland et al., Phys. Rev. X 10, 011022 (2020)
 





How to Measure Errors on IBM Quantum Systems with Randomized Benchmarking


By David McKay, Seth Merkel, Doug McClure, Neereja Sundaresan and Isaac Lauer
 
A non-trivial issue when building a quantum computer is trying to answer a simple question: “how well does it work?” As with regular computers, measuring a quantum computer’s performance boils down to running a set of problems where we know the expected outputs. 
 
But the task doesn’t end there. Which problems should we run? How many? What does a wrong output mean about the likelihood of a wrong output in the future? These are complicated questions even for regular computers. However, in the quantum realm, the situation is even more difficult due to the complexities of superposition, entanglement and measurement. For example, due to the no-cloning theorem, we can’t determine the output of a quantum circuit from a single experimental instance; the experiment needs to be repeated exponentially more times as the number of qubits increases. Therefore, a number of quantum benchmarking strategies use the concept of random circuits–random programs of a similar type that, after enough trials, give an average “sense” of how well our devices work based on statistical measures.
 
These benchmarks operate at two scales: the qubit level and the overall device level. At the device level, there are several benchmarks, for example, the quantum volume [1, 2, 3, 5] (proposed by IBM) and the cross entropy [4]. These measures give a single number that is useful for getting a sense of overall device performance and improvements. However, these measures are not very predictive, i.e., users can’t use those numbers to predict the results of their own algorithms. That’s where the other scale of benchmarking comes in. Benchmarks at the qubit level tell us about one- and two-qubit gate performance; a gate is the fundamental operation that occurs in a quantum circuit to evolve the quantum state. Generally, one-qubit gates create superposition states of individual qubits and two-qubit gates generate entanglement. As quantum computers increase in complexity, new benchmarks will be added to this list to investigate operations such as reset, mid-circuit measurement and feed-forward, which are all elements required for fault tolerance.
 
If you’ve used an IBM system in Qiskit, you can view the gate errors by looking at the “properties” of a physical backend. By assigning an error number to each gate, we can then use these errors in simulators [5] to estimate the outputs of our circuits with noise. 
 
What are these errors and how are they measured? It is important to understand that these errors are averaged over all possible input states for a specific combination of gates. For example, the error of a gate on qubit 0 should be independent of the gates we run on qubit 2, but in practice there are small crosstalk effects. It would be exponentially expensive in time to measure the errors for all these scenarios, so instead only a subset are measured and reported. In general, we try to measure errors on IBM Quantum devices when all the neighboring qubits are idle. To tell which gate errors are measured together, one can look at the “date measured” value of the error. Errors with identical date/times were measured simultaneously. In short, the gate errors are estimates for the errors that will occur in any particular algorithm, but they aren’t perfect. 
 




Figure 1: Schematic of Randomized Benchmarking. Here we have decided to run circuits with {l_0=1, l_1=3, l_2=6} Cliffords. We also show what a typical interleaved RB circuit would look like. Each “C” gate is a Clifford gate that needs to be transpiled to the device.
 
To measure these errors, we use a specific random circuit program known as randomized benchmarking [7, 8]. Randomized benchmarking (RB for short) selects random gates from a certain class of gates – the Clifford group – and the last gate inverts the operation of all the previous gates. A special property of the Clifford group means that this inversion gate is efficient to calculate. Therefore, every RB sequence of gates should return the qubit(s) to the initial state. The basic premise of RB is shown in Figure 1 for a subset of 2 qubits. First, we decide we are going to run circuits with different numbers of Clifford gates {l_i} on a subset of n qubits. Then, we make a circuit with l_0 random Clifford gates plus the inversion. Next, we make a second circuit by adding l_1 - l_0 more gates and recalculating the inversion gate for the new sequence, and so on. We run all the circuits in this set and measure the population in the |0> state of each qubit (the ground state); due to the properties of RB, we can plot the population of any of the qubits’ |0> states and get the same answer. Next, we repeat this experiment and average the results. With enough averaging, the qubit |0> state population decays as Aα^l + B, where the average error per Clifford gate is given by ϵ_C = ((2^n - 1)/2^n)(1-α), with n the number of qubits in the Clifford group used for RB (n=1 if we are measuring one-qubit error, n=2 if we are measuring two-qubit error). For IBM Quantum systems, the typical Clifford length is a few thousand one-qubit Cliffords or a few hundred two-qubit Cliffords. A big benefit of this method is that errors in the preparation and readout of the state are mostly contained in the coefficients A and B, which are not used for measuring the error.
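
Here’s a compact sketch of the fitting step (made-up data; in practice Qiskit’s benchmarking modules handle this for you): fit the averaged ground-state populations to Aα^l + B and convert α into an error per Clifford:

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(l, A, alpha, B):
    return A * alpha**l + B

# Hypothetical averaged |0>-populations versus number of Cliffords
lengths = np.array([1, 10, 25, 50, 100, 200])
pop0 = np.array([0.99, 0.95, 0.90, 0.82, 0.70, 0.56])

(A, alpha, B), _ = curve_fit(rb_decay, lengths, pop0, p0=[0.5, 0.99, 0.5])

n = 2  # two-qubit RB
eps_C = (2**n - 1) / 2**n * (1 - alpha)  # average error per Clifford
print(f"alpha = {alpha:.4f}, error per Clifford = {eps_C:.2e}")
```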
 
Now there are a few important points. For one, we want to know the error per gate, not per Clifford. The Cliffords are particular gate operations, but they must be expressed in terms of the native gates of the device with a transpiler. When a Clifford is transpiled it may require several types of gates, and in the case of two-qubit Cliffords there will be a mix of one- and two-qubit gates. To measure single-qubit gate errors, we take the average number of single-qubit gates per Clifford, n_1C, and divide the error to get the error per gate, ϵ_1G = ϵ_1C/n_1C. To measure two-qubit gate errors, we take the average number of two-qubit gates per Clifford, n_2C, and divide the error to get the error per gate, ϵ_2G = ϵ_2C/n_2C. In this case the error is an upper bound, because we are neglecting the contribution of the single-qubit gates to the Clifford error. The red curve in Fig. 2 is an example of standard two-qubit RB.
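
Numerically, the conversion is just a division by the average number of native gates per Clifford; all numbers below are illustrative assumptions rather than measured values:

```python
# One-qubit case (a commonly used average is ~1.875 native gates/Clifford)
eps_1C, n_1C = 4.0e-4, 1.875
eps_1G = eps_1C / n_1C  # error per single-qubit gate

# Two-qubit case (~1.5 two-qubit gates per two-qubit Clifford on average);
# neglecting the one-qubit gates makes this an upper bound
eps_2C, n_2C = 2.0e-2, 1.5
eps_2G = eps_2C / n_2C
print(eps_1G, eps_2G)
```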
 
If we want a gate error that is not an upper bound, there is a protocol that uses RB to measure the error of a specific gate directly: interleaved RB [9]. A schematic is given in Fig. 1, and the blue curve in Fig. 2 is an example. In interleaved RB (IRB), we run an extra set of circuits with the specific gate interleaved between the random Clifford gates, as shown in the schematic of Fig. 1. The gate error is then given by ϵ_G = ((2^n - 1)/2^n)(1 - α_IRB/α_RB), i.e., the gate error estimate comes from the ratio of the decays of the two curves. We don’t use this method for reporting IBM Quantum backend errors, as it requires twice as much data, and because we are taking ratios, the systematic errors can be large [10]; subtle double exponential decays can lead to unphysical error rates. In the example plot shown in Fig. 2, the error from interleaved RB is 2.3e-3 and the error from the procedure used on IBM Quantum systems is 3e-3, which are reasonably close. However, there are times when the reference curve error is much higher, and in those cases the systematic errors mean that IRB results must be taken with caution.
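
The interleaved estimate in code form, with invented decay constants for illustration:

```python
# Interleaved RB: gate error from the ratio of the two decay constants
alpha_RB = 0.980   # reference curve decay (hypothetical)
alpha_IRB = 0.974  # interleaved curve decay (hypothetical)
n = 2
eps_G = (2**n - 1) / 2**n * (1 - alpha_IRB / alpha_RB)
print(f"{eps_G:.1e}")  # ~4.6e-3 for these made-up numbers
```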
 



Figure 2: Example of RB (red) and Interleaved RB (blue). From https://arxiv.org/abs/2011.07050, see details therein.
 
In conclusion, RB is a quick and effective way to measure gate errors on large devices. It allows us to report a complete set of gate errors, which can be used to monitor the health of devices, track improvements, and serve as input to simulations that give rough predictions of algorithmic performance. However, it’s important to understand the limitations of any benchmarking scheme; we’ve highlighted a few for RB (in particular the caution required when using IRB), and there is furthermore a deep body of literature on the more subtle issues surrounding RB (see, for example, refs. [11, 12, 13, 14]). We hope this blog post gives some insight into how gate errors are measured on IBM Quantum systems with randomized benchmarking and what these errors represent.
 
References
 
1.      Cross, Andrew W., et al. “Validating Quantum Computers Using Randomized Model Circuits.” ArXiv.org, 11 Oct. 2019, arxiv.org/abs/1811.12926.
2.      Mandelbaum, Ryan F. “What Is Quantum Volume, Anyway?” Qiskit Medium, 20 Aug. 2020, medium.com/qiskit/what-is-quantum-volume-anyway-a4dff801c36f.
3.      Jurcevic, Petar, et al. “Demonstration of Quantum Volume 64 on a Superconducting Quantum Computing System.” ArXiv.org, 4 Sept. 2020, arxiv.org/abs/2008.08571.
4.      Arute, Frank, et al. “Quantum Supremacy Using a Programmable Superconducting Processor.” Nature, vol. 574, no. 7779, 2019, pp. 505–510., doi:10.1038/s41586-019-1666-5
5.     “Quantum Volume.” Qiskit 0.23.1 Documentation, qiskit.org/documentation/tutorials/noise/5_quantum_volume.html.
6.      “Building Noise Models.” Qiskit 0.23.1 Documentation, qiskit.org/documentation/tutorials/simulators/3_building_noise_models.html.
7.      “Randomized Benchmarking.” Qiskit Textbook, 8 Dec. 2020, qiskit.org/textbook/ch-quantum-hardware/randomized-benchmarking.html.
8.     Magesan, E., Gambetta, J. M. & Emerson, J. Characterizing quantum gates via randomized benchmarking. Phys. Rev. A 85, 042311 (2012).
9.     Magesan, E. et al. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking. Phys. Rev. Lett. 109, 080505 (2012).
10.  Epstein, Jeffrey M., et al. “Investigating the Limits of Randomized Benchmarking Protocols.” ArXiv.org, 13 Aug. 2013, arxiv.org/abs/1308.2928.
11.  Proctor, Timothy, et al. “What Randomized Benchmarking Actually Measures.” Physical Review Letters, American Physical Society, 28 Sept. 2017, link.aps.org/doi/10.1103/PhysRevLett.119.130502.
12.  Wallman, Joel J. “Randomized Benchmarking with Gate-Dependent Noise.” Quantum, Verein Zur Förderung Des Open Access Publizierens in Den Quantenwissenschaften, 29 Jan. 2018, quantum-journal.org/papers/q-2018-01-29-47/.
13.  Merkel, Seth T., et al. “Randomized Benchmarking as Convolution: Fourier Analysis of Gate Dependent Errors.” ArXiv.org, 14 Aug. 2019, arxiv.org/abs/1804.05951.
14.  Helsen, Jonas, et al. “A General Framework for Randomized Benchmarking.” ArXiv.org, 15 Oct. 2020, arxiv.org/abs/2010.07