Reaction author(s)

Carlos Sabín

Ramón y Cajal Researcher, Department of Theoretical Physics, UAM Madrid

A few years ago, Google's quantum computing team claimed to have achieved so-called "quantum supremacy" (solving a problem in a short time with a quantum computer that would take an intractable time for any imaginable classical computer) with a machine of about 50 quantum bits (qubits). The problem is that, in that case, the task was useless: it was specifically designed to be very difficult for a classical computer but feasible for a quantum one. The open question is whether quantum computers can outperform classical computers on problems that do have some utility.

In theory, we know that there are problems where a quantum computer would outperform a classical one, such as factoring a large number into primes (on whose difficulty current cryptography is based). But this would require many qubits and many operations (logic gates) to be performed on them. Although quantum computers can already perform each of these operations with few errors (with error probabilities below 1%), when so many operations have to be performed, the probability of making a mistake somewhere, and of the result being unreliable, becomes very high.
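The scale of the problem is easy to see with a back-of-the-envelope calculation. If each gate fails independently with probability p (a simplifying assumption; real device errors are more complicated), the chance that a whole circuit runs error-free shrinks exponentially with the number of gates:

```python
# Probability that a circuit runs with no error at all, assuming
# (hypothetically) independent errors with probability p per gate.
def success_probability(p: float, n_gates: int) -> float:
    return (1.0 - p) ** n_gates

# Even a per-gate error rate of 0.1% is fatal for long circuits:
print(success_probability(0.001, 100))     # ~0.90: short circuits are fine
print(success_probability(0.001, 10_000))  # ~0.00005: almost certainly corrupted
```

This is why per-gate error rates below 1% are still nowhere near enough for algorithms that need millions of gates.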

To avoid this, quantum computers should incorporate error correction mechanisms. Such mechanisms are known theoretically, but they greatly increase the number of qubits and operations required, so it would only be worthwhile to introduce them in a quantum computer whose per-operation error probability is far lower than today's. The challenge of building a quantum computer with so many qubits and such extremely low error probabilities is still well beyond current technological capabilities. But the question posed in the Nature paper by IBM's quantum computing team is: can we do anything useful with today's quantum computers, with a small number of qubits and relatively high error probabilities?

The authors' answer is yes, but it relies on a "trick" called "error mitigation". If we understand well the sources of error due to noise in an experiment, and how the results vary for different noise levels, we can infer the result we would have obtained with no noise at all. This requires running different experiments and correcting the results a posteriori, usually with a classical computer. It is in these corrected ("mitigated") results that the authors claim superiority over classical computers. They use a 127-qubit machine called Kyiv and run quantum circuits with 2,880 logic gates between pairs of qubits. These operations are not random, but serve to simulate the so-called Ising model, which was originally introduced to explain properties related to magnetism and has found many applications in physics over time. Classical computers use various approximations and methods to solve this model in many circumstances but, as shown in the article, with a number of particles as high as 127 and certain values of the physical parameters, the structure of the generated physical states can be so complex that these approximations fail and classical machines cannot predict results with sufficient reliability.
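One widely used mitigation technique of this kind is zero-noise extrapolation: run the same circuit at several deliberately amplified noise levels, fit the trend, and extrapolate back to zero noise. A minimal sketch, with made-up numbers standing in for measured expectation values:

```python
import numpy as np

# Hypothetical measurements of one expectation value at amplified noise levels.
# 1.0 is the hardware's native noise; larger values are artificially boosted.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])
measured    = np.array([0.80, 0.72, 0.65, 0.52])  # illustrative data, not real

# Fit a simple model (linear here; richer fits are used in practice)
# and extrapolate to zero noise.
coeffs = np.polyfit(noise_scale, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(zero_noise_estimate)  # the "mitigated" result, above every raw measurement
```

Note that the correction itself happens on a classical computer, after the quantum hardware has done its runs, exactly as described above.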

This is related to the famous quantum entanglement. A two-qubit system has four possible states: 00, 01, 10 and 11, but, in addition, the qubits may be in a quantum superposition that cannot be decomposed into states of each individual qubit (quantum entanglement). With three qubits there are eight possible states and their superpositions. With 127 there is a huge number of states (2^127) and their superpositions: classical computers do not have that much memory, but they can use approximations which assume that, of all those possible states, only some matter for the properties we are interested in, reducing the memory needed. The problem is that if the system we want to simulate is in a very complicated state, with a lot of entanglement, that assumption is no longer valid and classical computers cannot make accurate calculations. And this is where the usefulness of IBM's quantum computer comes in: in these situations, for certain values of the parameters of an Ising model, the authors show that their machine, after error mitigation, does provide reliable results when calculating the physical magnitudes of the system.
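To make the memory argument concrete: storing the full state of n qubits exactly means storing 2^n complex amplitudes. A rough estimate (16 bytes per double-precision complex number) shows why 127 qubits is out of reach for any exact classical simulation:

```python
# Rough estimate of the memory needed for an exact n-qubit state vector:
# 2**n complex amplitudes, 16 bytes each (double-precision real + imaginary).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(statevector_bytes(30) / 2**30)  # 16.0 GiB: doable on a workstation
print(statevector_bytes(127))         # 2**131 bytes: vastly more than all
                                      # storage on Earth combined
```

This is exactly why classical methods must truncate the state space, and why highly entangled states, which resist such truncation, are where the quantum machine can pull ahead.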

If these results are confirmed (e.g. by the competing team at Google), they would be a first step in proving the usefulness of today's relatively small and noisy quantum computers when aided by error mitigation. This particular calculation certainly has no direct practical application (since the parameter values where quantum superiority is shown probably do not correspond to real physical systems), but the Ising model at least has a physical inspiration, so there may be models of similar complexity with more immediate applications that can also be tackled by Kyiv-like machines and an approach based on error mitigation rather than error correction.
