Explained: Quantum Error Correction and Logical Qubits

By Antonio Córcoles and Maika Takita

It is a truth universally acknowledged, that a noisy quantum computer in possession of a good algorithm, must be in want of a fault-tolerant quantum error correcting code.
 
Quantum information is fragile. More so than its classical counterpart. Modern methods of error correction in computer science rely heavily on redundancy: by increasing the amount of data in a message in the form of check bits derived from the original data, the presence of errors can be detected (and possibly corrected) by the receiver. Among the many differences between quantum and classical information, two major aspects emerge when considering error correction protocols. First, quantum information cannot be duplicated due to the no-cloning theorem [1]. And second, quantum measurements collapse the information into a basis set of outcomes, thus destroying any superposition or entanglement exploited by quantum algorithms. These two aspects make the application of classical error correction methods to the quantum realm unfeasible. Additionally, whereas errors in classical information have only one form—0 flipping value to 1 or vice versa—there are two types of errors lurking within a quantum computation: bit-flips, \(|0\rangle\) becoming \(|1\rangle\) or vice versa, and phase-flips, \((|0\rangle + |1\rangle)/\sqrt{2}\) becoming \((|0\rangle - |1\rangle)/\sqrt{2}\), for example.
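To make the two error types concrete, here is a quick numerical illustration (a plain-NumPy sketch of our own, not from any reference above): the bit-flip operator \(X\) exchanges the basis states, while the phase-flip operator \(Z\) flips the sign between them.

```python
import numpy as np

# Computational basis states and an equal superposition
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])   # bit-flip operator
Z = np.array([[1, 0], [0, -1]])  # phase-flip operator

print(X @ ket0)  # [0. 1.]         -> |0> became |1>: a bit-flip
print(Z @ plus)  # [ 0.707 -0.707] -> (|0> - |1>)/sqrt(2): a phase-flip
```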
 
Despite all this, there exist ways to encode quantum information into larger spaces requiring neither exact replication of the information nor direct query of the data. The field of Quantum Error Correction (QEC) focuses on precisely this, providing the tools for building a reliable, essentially flawless qubit, the so-called logical qubit, by combining many faulty ones. Although there are other ways of protecting quantum information—as, for example, decoherence-free subspaces [2]—those can be thought of as quantum error suppression, rather than correction. We will focus solely on QEC in this post.
 
The first QEC codes were formulated independently by Shor [3] and Steane [4]. The theory of QEC was subsequently developed by Calderbank, Shor, Steane, Knill, Laflamme, and Bennett. Eventually, Gottesman [5] and Calderbank et al. [6] arrived at the very important concept of a stabilizer. Stabilizers are operations we apply to a set of qubits in order to obtain information about the qubits' state without disturbing them. Formally, a stabilizer group \(S\) is an abelian subgroup of the \(n\)-qubit Pauli group \(P^n\) that does not contain \(-I\) (the single-qubit Pauli operators being \(I, X, Y, Z\)). Thus, we can define a "codespace" as the set of all states stabilized by the stabilizer group, which we use to encode the logical qubit states:

$$C = \{|\psi\rangle : g|\psi\rangle = |\psi\rangle \;\; \forall g \in S\}$$
 
Consider arguably the simplest possible example, the codespace defined by a single codeword:

$$|\psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$

This codeword is stabilized by the operators \(XX\) and \(ZZ\), meaning that we can use these operators to learn about its parity (both in the Z- and in the X-basis) without disturbing it, and we can do this as many times as we want. If we were to prepare this codeword and subject it to noise (as done, for example, here [7]), we could identify errors by measuring the stabilizers mentioned above. These measurements yield the "error syndromes." Obviously, the above codespace is too small to host a logical qubit, but the principle is the same. In general, we can consider a logical codespace of \(2^k\) dimensions embedded into a physical space of \(2^n\) dimensions. The physical degrees of freedom offered by the \(n\) physical qubits are countered by the constraints imposed by the stabilizers, resulting in a reduced number of degrees of freedom that may serve as logical qubits. In the standard notation, we define an \([[n,k,d]]\) QEC code as one that uses \(n\) physical qubits to encode \(k\) logical qubits and can correct up to \(\lfloor (d-1)/2 \rfloor\) errors, where \(d\) is known as the code distance.
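As a quick sanity check of these claims, the following NumPy sketch (ours, for illustration) verifies that \(XX\) and \(ZZ\) leave the codeword untouched, and that a single bit-flip error flips the \(ZZ\) syndrome to \(-1\) while remaining invisible to \(XX\):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# The codeword |psi> = (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

XX, ZZ = np.kron(X, X), np.kron(Z, Z)

# g|psi> = |psi> for every stabilizer g
assert np.allclose(XX @ psi, psi)
assert np.allclose(ZZ @ psi, psi)

# A bit-flip on one qubit anticommutes with ZZ: that syndrome flips to -1
err = np.kron(X, I) @ psi
print(np.vdot(err, ZZ @ err))  # -1.0 -> the Z-check detects the error
print(np.vdot(err, XX @ err))  # +1.0 -> the X-check sees nothing
```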

As for how this might work in hardware: we initialize a logical qubit, then start querying its constituent physical qubits with all of the stabilizers as fast as we can, receiving a series of 0s and 1s from the stabilizer measurements. 0s represent no error, while 1s flag errors in the code. We keep checking for these errors, either correcting them on the spot or keeping track of them and correcting them at the end. To perform a logical gate, we use a set of operations that the stabilizers are blind to. By construction of the stabilizer group and the code, we have extra degrees of freedom available to operate the logical qubit without confusing gates for errors.
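As a concrete sketch of a single such query, the Qiskit snippet below builds one \(ZZ\) parity check on a pair of data qubits through an ancilla (the layout and labels are our own illustration, not any particular device or code):

```python
from qiskit import QuantumCircuit

# Qubits 0 and 1 hold data; qubit 2 is the syndrome (ancilla) qubit.
qc = QuantumCircuit(3, 1)
qc.cx(0, 2)       # copy the Z-parity of data qubit 0 onto the ancilla
qc.cx(1, 2)       # ...and of data qubit 1: the ancilla now holds their parity
qc.measure(2, 0)  # 0 -> parity intact (no error); 1 -> a bit-flip occurred
print(qc.draw())

# In hardware, this round would be repeated as fast as possible, with the
# ancilla reset between rounds, producing the stream of syndrome bits above.
```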

Fig. 1. Error propagation within a parity-check circuit

 

As an example, the paper in [8] implements the \([[4,2,2]]\) code, using four physical qubits to encode two logical qubits and performing parity checks on those four physical qubits to detect errors during the computation. This small code provides a good example of how errors can propagate within the code operations. The codespace comprises the following four physical states (omitting normalization): \(|0000\rangle + |1111\rangle\), \(|1100\rangle + |0011\rangle\), \(|1010\rangle + |0101\rangle\), and \(|0110\rangle + |1001\rangle\), corresponding to the logical states \(|0_p 0_g\rangle\), \(|0_p 1_g\rangle\), \(|1_p 0_g\rangle\), and \(|1_p 1_g\rangle\), respectively, where we have labeled our two logical qubits as ‘p’ and ‘g’. Now imagine we have initialized our data qubits in the state \(|\Psi_i\rangle = |0000\rangle + |1111\rangle\) (corresponding to the \(|0_p 0_g\rangle\) logical state) and we run the circuit shown in Fig. 1. This circuit implements an X-check followed by a Z-check on the data qubits. This means that the two measurements on the syndrome qubit yield the \(XXXX\) and the \(ZZZZ\) observables on the data qubits. Since \(XXXX\) and \(ZZZZ\) are stabilizers of this code, we should obtain the same state for the data qubits at the end of the circuit. Now let’s consider the scenario where the syndrome qubit undergoes a bit-flip error. This can happen anywhere in the circuit, but let’s focus on three possible locations, labeled as \(A\), \(B\), and \(C\). Note that in none of the three cases is the error picked up by the X-check. Now, since a CNOT gate propagates bit-flip errors from control to target, it is easy to see that the error at \(A\) results in the final state \(|\Psi_f\rangle = |1000\rangle + |0111\rangle\) and the error at \(C\) results in the final state \(|\Psi_f\rangle = |0001\rangle + |1110\rangle\). In both of these scenarios the error is detected by the subsequent Z-check. However, consider the case where the bit-flip error happens at \(B\). In that case, not only does the Z-check yield the outcome 0 on the syndrome qubit, failing to detect that an error happened, but the final data-qubit state is \(|\Psi_f\rangle = |0011\rangle + |1100\rangle\)… which corresponds to a logical bit-flip of the second (labeled ‘g’) logical qubit! This example highlights a critical aspect of QEC codes called fault tolerance. The \([[4,2,2]]\) code is designed to be fault-tolerant for the ‘p’ (protected) logical qubit and not fault-tolerant for the ‘g’ (gauge) logical qubit.
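Because Fig. 1 itself cannot be reproduced here, the following Qiskit sketch emulates the ‘error at B’ scenario in simulation (our own qubit ordering and error placement, chosen to reproduce the outcome described above): an \(X\) on the syndrome qubit partway through the X-check spreads to two data qubits, leaves both checks satisfied, and lands the data in \(|0011\rangle + |1100\rangle\).

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Pauli, Statevector

qc = QuantumCircuit(5)  # qubits 0-3: data, qubit 4: syndrome
qc.h(0)                 # prepare the data in |0000> + |1111>
qc.cx(0, 1)
qc.cx(0, 2)
qc.cx(0, 3)
qc.h(4)                 # ancilla in |+> to start the X-check
for i in range(4):
    qc.cx(4, i)         # ancilla-controlled X onto each data qubit
    if i == 1:
        qc.x(4)         # inject the bit-flip on the syndrome qubit ("B")
qc.h(4)

sv = Statevector.from_instruction(qc)
print(sv.expectation_value(Pauli("ZIIII")))  # +1 -> X-check outcome is 0
print(sv.expectation_value(Pauli("IZZZZ")))  # +1 -> the Z-check sees nothing
print(sv.probabilities_dict())  # {'00011': 0.5, '01100': 0.5}: |0011> + |1100>
```

The two CNOTs that follow the injected error copy the bit-flip onto two data qubits at once, turning a single fault into an undetectable logical flip of the ‘g’ qubit.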
 
We have seen how QEC codes can detect errors during a computation by querying some property of the data (for example, parity) without directly learning what the data is, which would collapse the quantum information. However, it is fair to ask ourselves to what degree we can use these codes to correct errors. How good do our physical qubits need to be? What is the physical overhead to pay for a reasonably sized, fault-free quantum computer? These are not trivial questions. Let’s start with the concept of fault tolerance introduced above. What is fault tolerance? Fault tolerance is a property of a circuit whereby it computes the correct result—with very little error—despite individual elements being faulty and unreliable. A fault-tolerant circuit is therefore not exactly infallible, but it will take several faults for it to yield the wrong result. Fault tolerance rarely occurs naturally and therefore has to be designed in, and while doing that we need to bear in mind that it is critical for fault-tolerant QEC that all the steps involved in the computation (encoding, syndrome extraction, logical operations, decoding…) be fault tolerant. This has an immediate and profound consequence: having a code that encodes information into a logical qubit is not enough; we need fault-tolerant circuits as well. QEC doesn't always mean fault tolerance, and one has to be very careful in this regard when reading the literature.
 
Fault tolerance is a critical component of what goes into the concept of a ‘threshold’. The quantum threshold theorem states that when the probability of failure in a noisy device is low enough (below a certain threshold), we can attain arbitrarily low logical error rates by increasing the size of the encoding. This means that a fault-tolerant code severely limits the spread of errors within its circuits as long as such errors happen at a rate lower than a particular quantity called the threshold (*). Once our physical system's noise is below the threshold, we can make our computation arbitrarily better by making the code larger. Thus, the answer to ‘how many qubits do we need for one logical qubit’ and ‘how good do our physical qubits need to be’ is the same simple one: it depends. It depends on the code and it depends on our target logical error rate. But as long as the procedures are fault tolerant, we are on the right path.
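To put some illustrative numbers on ‘it depends’, here is a back-of-the-envelope sketch using the commonly quoted heuristic \(p_L \approx A\,(p/p_{th})^{(d+1)/2}\) for a distance-\(d\) code. The constant \(A\), the threshold value, and the exponent are rough assumptions in the spirit of surface-code estimates, not a statement about any specific code:

```python
def distance_needed(p_phys, p_target, p_th=1e-2, A=0.1):
    """Smallest (odd) code distance d with A * (p_phys/p_th)**((d+1)/2)
    at or below p_target, under the heuristic scaling above."""
    ratio = p_phys / p_th
    assert ratio < 1, "above threshold: enlarging the code makes things worse"
    d = 1
    while A * ratio ** ((d + 1) / 2) > p_target:
        d += 2  # code distances are usually odd
    return d

print(distance_needed(1e-3, 1e-12))  # modest physical error -> moderate code
print(distance_needed(5e-3, 1e-12))  # closer to threshold -> much larger code
```

The point of the exercise: the same target logical error rate costs vastly more physical qubits as the physical error rate creeps toward the threshold.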
 
With all these considerations, what goes into choosing a code? A lot of it has to do with the underlying hardware and the prevailing physical error sources. As a nice example, consider the surface code (SC) [9]. This is a very attractive code from an experimental point of view because it requires only nearest-neighbor connectivity and is relatively tolerant of physical error rates, with a threshold of almost 1%. Our team started exploring this code as our experimental systems became capable enough to access some small demonstrations. However, it soon became apparent that even the low connectivity required might be too much for the level of crosstalk in our systems [10]. These findings motivated a slight change of direction toward less-connected codes, and our theory team came up with a heavy-hexagonal (HH) code [11] that retained or improved on the key advantages of the SC: very low connectivity and a not-too-low threshold (**) (albeit a bit lower than that of the SC). The HH code provides a higher degree of protection against crosstalk than the SC simply by virtue of its topological design. It also shows how theory and experiment in QEC (and in quantum information in general) typically go hand-in-hand and propel each other forward. Finally, it also explains why many of the backends offered by IBM Quantum are built with a HH topology!
 
What lies ahead for logical qubits? With a number of small codes demonstrated in a variety of physical platforms in recent years, the next big step is to experimentally demonstrate fault-tolerant QEC. Following that, both theory and experiment will share the onus of advancing QEC by building bigger and better systems and developing codes that can perform more efficiently and with lower overhead (the biggest part of which will be devoted to the resources needed to implement logical gates, of which we have said little in this post) at a given level of noise. Quantum computing technologies have shown spectacular progress in the last decade, but the really exciting journey toward truly powerful quantum computers is just starting now.
 
 

 
(*) Note that whereas the existence of a threshold guarantees fault tolerance, the converse is not true: there are fault-tolerant codes that lack a threshold but have a pseudothreshold. A pseudothreshold is the physical error rate below which the logical error rate is lower than the physical error rate of the system, for that particular logical qubit size only. The key difference is that trying to lower the logical error rate further by enlarging the logical qubit results in a different (lower) pseudothreshold.
 
(**) The HH code has a threshold for only one type of quantum error, but pseudothresholds exist for both types.
 
References:
 
[1] Wootters and Zurek, Nature 299, 802 (1982)
[2] Palma et al., Proc. R. Soc. Lond. A 452, 567 (1996)
[3] Shor, Phys. Rev. A 52, R2493 (1995)
[4] Steane, Phys. Rev. Lett. 77, 793 (1996)
[5] Gottesman, Phys. Rev. A 54, 1862 (1996)
[6] Calderbank et al., Phys. Rev. Lett. 78, 405 (1997)
[7] Córcoles et al., Nat. Commun. 6, 6979 (2015)
[8] Takita et al., Phys. Rev. Lett. 119, 180501 (2017)
[9] Bravyi and Kitaev, arXiv:quant-ph/9811052 (1998)
[10] Takita et al., Phys. Rev. Lett. 117, 210505 (2016)
[11] Chamberland et al., Phys. Rev. X 10, 011022 (2020)
 




