Transistor-like Qubits Hit Key Benchmark



A team in Australia has recently demonstrated a key advance in metal-oxide-semiconductor-based (or MOS-based) quantum computers. They showed that their two-qubit gates—logical operations that involve more than one quantum bit, or qubit—perform without errors 99 percent of the time. This number is important because it is the baseline required for error correction, which is believed to be essential for building a large-scale quantum computer. What’s more, these MOS-based quantum computers are compatible with existing CMOS technology, which should make it more straightforward to manufacture a large number of qubits on a single chip than with other approaches.

“Getting over 99 percent is significant because that is considered by many to be the error correction threshold, in the sense that if your fidelity is lower than 99 percent, it doesn’t really matter what you’re going to do in error correction,” says Yuval Boger, CCO of the quantum computing company QuEra, who wasn’t involved in the work. “You’re never going to fix errors faster than they accumulate.”

There are many contending platforms in the race to build a useful quantum computer. IBM, Google and others are building their machines out of superconducting qubits. Quantinuum and IonQ use individual trapped ions. QuEra and Atom Computing use neutral atoms. Xanadu and PsiQuantum are betting on photons. The list goes on.

In the new result, a collaboration between the University of New South Wales (UNSW) and Sydney-based startup Diraq, with contributors from Japan, Germany, Canada, and the U.S., has taken yet another approach: trapping single electrons in MOS devices. “What we are trying to do is we are trying to make qubits that are as close to traditional transistors as they can be,” says Tuomo Tanttu, a research fellow at UNSW who led the effort.

Qubits That Act Like Transistors

These qubits are indeed very similar to a regular transistor, gated in such a way as to have only a single electron in the channel. The biggest advantage of this approach is that it can be manufactured using traditional CMOS technologies, making it theoretically possible to scale to millions of qubits on a single chip. Another advantage is that MOS qubits can be integrated on-chip with standard transistors for simplified input, output, and control, says Diraq CEO Andrew Dzurak.

The drawback of this approach, however, is that MOS qubits have historically suffered from device-to-device variability, causing significant noise on the qubits.

“The sensitivity in [MOS] qubits is going to be more than in transistors, because in transistors, you still have 20, 30, 40 electrons carrying the current. In a qubit device, you’re really down to a single electron,” says Ravi Pillarisetty, a senior device engineer for Intel quantum hardware who wasn’t involved in the work.

The team’s result not only demonstrated 99 percent fidelity for two-qubit gates on the test devices but also helped the researchers better understand the sources of device-to-device variability. The team tested three devices with three qubits each. In addition to measuring the error rate, they performed comprehensive studies to identify the underlying physical mechanisms that contribute to noise.

The researchers found that one of the sources of noise was isotopic impurities in the silicon layer, which, when controlled, greatly reduced the circuit complexity necessary to run the device. The next leading cause of noise was small variations in electric fields, likely due to imperfections in the oxide layer of the device. Tanttu says this is likely to improve by transitioning from a laboratory clean room to a foundry environment.

“It’s a great result and great progress. And I think it’s setting the right direction for the community in terms of thinking less about one individual device, or demonstrating something on an individual device, versus thinking more longer term about the scaling path,” Pillarisetty says.

Now, the challenge will be to scale up these devices to more qubits. One difficulty with scaling is the number of input/output channels required. The quantum team at Intel, which is pursuing a similar technology, has recently developed a chip it calls Pando Tree to try to address this issue. Pando Tree will be on the same plane as the quantum processor, enabling faster inputs and outputs to the qubits. The Intel team hopes to use it to scale to thousands of qubits. “A lot of our approach is thinking about, how do we make our qubit processor look more like a modern CPU?” says Pillarisetty.

Similarly, Diraq CEO Dzurak says his team plans to scale its technology to thousands of qubits in the near future through a recently announced partnership with Global Foundries. “With Global Foundries, we designed a chip that will have thousands of these [MOS qubits]. And these will be interconnected by using classical transistor circuitry that we designed. This is unprecedented in the quantum computing world,” Dzurak says.

IBM’s Big Bet on the Quantum-Centric Supercomputer



Back in June 2022, Oak Ridge National Laboratory debuted Frontier—the world’s most powerful supercomputer. Frontier can perform a billion billion calculations per second. And yet there are computational problems that Frontier may never be able to solve in a reasonable amount of time.

Some of these problems are as simple as factoring a large number into primes. Others are among the most important facing Earth today, like quickly modeling complex molecules for drugs to treat emerging diseases, and developing more efficient materials for carbon capture or batteries.

However, in the next decade, we expect a new form of supercomputing to emerge unlike anything prior. Not only could it potentially tackle these problems, but we hope it’ll do so with a fraction of the cost, footprint, time, and energy. This new supercomputing paradigm will incorporate an entirely new computing architecture, one that mirrors the strange behavior of matter at the atomic level—quantum computing.

For decades, quantum computers have struggled to reach commercial viability. The quantum behaviors that power these computers are extremely sensitive to environmental noise, and difficult to scale to large enough machines to do useful calculations. But several key advances have been made in the last decade, with improvements in hardware as well as theoretical advances in how to handle noise. These advances have allowed quantum computers to finally reach a performance level where their classical counterparts are struggling to keep up, at least for some specific calculations.

For the first time, we here at IBM can see a path toward useful quantum computers, and we can begin imagining what the future of computing will look like. We don’t expect quantum computing to replace classical computing. Instead, quantum computers and classical computers will work together to run computations beyond what’s possible on either alone. Several supercomputer facilities around the world are already planning to incorporate quantum-computing hardware into their systems, including Germany’s Jupiter, Japan’s Fugaku, and Poland’s PSNC. While it has previously been called hybrid quantum-classical computing, and may go by other names, we call this vision quantum-centric supercomputing.

A Tale of Bits and Qubits

At the heart of our vision for a quantum-centric supercomputer is the quantum hardware, which we call a quantum processing unit (QPU). The power of the QPU to perform better than classical processing units in certain tasks comes from an operating principle that’s fundamentally different, one rooted in the physics of quantum mechanics.

In the standard or “classical” model of computation, we can reduce all information to strings of binary digits, bits for short, which can take on values of either 0 or 1. We can process that information using simple logic gates, like AND, OR, NOT, and NAND, which act on one or two bits at a time. The “state” of a classical computer is determined by the states of all its bits. So, if you have N bits, then the computer can be in just one of 2^N states.



But a quantum computer has access to a much richer repertoire of states during computation. A quantum computer also has bits. But instead of just 0 and 1, its quantum bits, or qubits, can represent 0, 1, or a linear combination of both, via a quantum property known as superposition. While a digital computer can be in just one of those 2^N states, a quantum computer can be in many logical states at once during the computation. And the superpositions the different qubits are in can be correlated with one another in a fundamental way, thanks to another quantum property known as entanglement. At the end of the computation, each qubit assumes just one state, chosen based on probabilities generated during the running of the quantum algorithm.
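To make that concrete, here is a minimal sketch in Python (plain NumPy, no quantum software required) of how quantum states are described mathematically. The states and numbers are chosen purely for illustration.

```python
import numpy as np

# A single qubit's state is a length-2 vector of complex amplitudes over the
# basis states |0> and |1>. An equal superposition has both amplitudes equal
# to 1/sqrt(2); the squared magnitudes give the measurement probabilities.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
superposition = (ket0 + ket1) / np.sqrt(2)
print(np.abs(superposition) ** 2)  # [0.5, 0.5]: a 50/50 chance of reading 0 or 1

# N qubits require 2**N amplitudes. Three classical bits are always in one of
# 8 definite states; three qubits are described by all 8 amplitudes at once.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0        # start in |000>
print(len(state))     # 8 = 2**3 amplitudes
```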

It’s not obvious how this computing paradigm can outperform the classical one. But in 1994, Peter Shor, a mathematician at MIT, discovered an algorithm that, using the quantum-computing paradigm, could divide large numbers into their prime factors exponentially faster than the best classical algorithm. Two years later, Lov Grover discovered a quantum algorithm that could find a particular entry in a database much faster than a classical one could.

Perhaps most importantly, since quantum computers follow the laws of quantum mechanics, they are the right tool for simulating the fundamentally quantum phenomena of our world, such as molecular interactions for drug discovery or materials design.

The Quantum-Centric Supercomputer’s Center

Before we can build a quantum-centric supercomputer, we have to make sure it’s capable of doing something useful. Building a capable enough QPU relies on constructing hardware that can re-create counterintuitive quantum behaviors.

Here at IBM, the basic building block of a quantum computation—the qubit—is made out of superconducting components. Each physical qubit consists of two superconducting plates, which act as a capacitor, wired to components called Josephson junctions, which act as a special lossless, nonlinear inductor.

The current flowing across Josephson junctions is quantized—fixed to discrete values. The Josephson junctions ensure that only two of those values (or their superpositions) are realistically accessible. The qubit is encoded in two current levels, one representing a 0, the other a 1. But, as mentioned, the qubit can also exist in a superposition of the 0 and 1 states.

Because superconductors need frigid temperatures to maintain superconductivity, the qubits and some of their control circuitry are held inside a specialty liquid-helium fridge called a dilution refrigerator.

We change the qubit states and couple qubits together with quantum instructions, commonly known as gates. These are a series of specially crafted microwave waveforms. A QPU includes all of the hardware responsible for accepting a set of quantum instructions—called a quantum circuit—and returning a single output represented by a binary string. The QPU includes the qubits plus components that amplify signals, the control electronics, and the classical computation required for tasks such as holding the instructions in memory, accumulating and separating signals from noise, and creating single binary outputs. We etch components like qubits, resonators for readouts, output filters, and quantum buses into a superconducting layer deposited on top of a silicon chip.
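As an illustration of what a quantum circuit looks like in practice, here is a minimal sketch using Qiskit, IBM’s open-source quantum software (discussed later in this article), together with the qiskit-aer simulator, both assumed to be installed. On real hardware, the same gates would be compiled into microwave pulses, and each run of the circuit would return one binary string.

```python
# A minimal quantum circuit of the kind a QPU accepts, run here on a classical
# simulator. Each execution ("shot") returns a single binary string.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)           # put qubit 0 into a superposition of 0 and 1
qc.cx(0, 1)       # entangle qubit 1 with qubit 0
qc.measure_all()  # read both qubits out as classical bits

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)     # roughly half "00" and half "11": an entangled Bell pair
```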

But it’s a challenge trying to control qubits at the supersensitive quantum level. External noise, noise from the electronics, and cross talk between control signals for different qubits all destroy the fragile quantum properties of the qubits. Controlling these noise sources has been key in reaching the point where we can envision useful quantum-centric supercomputers.

Getting the Quantum Stuff up to Snuff

No one has yet conclusively demonstrated quantum advantage—that is, a quantum computer that outperforms the best classical one on a real-world relevant task. Demonstrating true quantum advantage would herald a new era of computing, where previously intractable tasks would now be within reach.

Before we can approach this grandiose goal, we have to set our sights a bit lower, to a target we call quantum utility. Quantum utility is the ability of quantum hardware to outperform brute-force classical calculations of a quantum circuit. In other words, it’s the point where quantum hardware is better at doing quantum computations than a traditional computer is.




This may sound underwhelming, but it is a necessary stepping-stone on the way to quantum advantage. In recent years, the quantum community has finally reached this threshold. Demonstrating quantum utility of our QPU, which we did in 2023, has convinced us that our quantum hardware is advanced enough to merit being built into a quantum-centric supercomputer. Achieving this milestone has taken a combination of advances, including both hardware and algorithmic improvements.

Since 2019, we’ve been incorporating advances in semiconductor fabrication to introduce 3D integration to our chips. This gave us access to qubits from a controller chip placed below the qubit plane to reduce the wiring on the chip, a potential source of noise. We also introduced readout multiplexing, which allows us to access the information from several qubits with a single wire, drastically reducing the amount of hardware we have to put in the dilution refrigerator.

In 2023, we implemented a new way to perform quantum gates—the steps of a program that change the value of the qubits—on our hardware, using components called tunable couplers. Previously, we prevented cross talk by fabricating qubits to respond at different frequencies, so that they wouldn’t react to microwave pulses meant for other qubits. But this made it too difficult for the qubits to perform the essential task of talking to one another, and it also made the processors slow. With tunable couplers, we don’t need the frequency-specific fabrication. Instead, we introduced a sort of “on-off” switch, using magnetic fields to decide whether or not a qubit should talk to another qubit. The result: We virtually eliminated cross-talk errors between qubits, allowing us to run much faster, more reliable gates.


As our hardware improved, we also demonstrated that we could deal with some noise using an error mitigation algorithm. Error mitigation can be done in many ways. In our case, we run quantum programs, analyze how the noise in our system changes the program outputs, and then create a noise model. Then we can use classical computing and our noise model to recover what a noise-free result would look like. The surrounding hardware and software of our quantum computer therefore includes classical computing capable of performing error mitigation, suppression, and eventually, error correction.
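IBM’s production error-mitigation methods are more sophisticated than what fits here, but the basic idea of learning how noise skews results and then correcting for it classically can be illustrated with one of the simplest techniques, zero-noise extrapolation. The sketch below uses invented numbers purely for illustration.

```python
import numpy as np

# Zero-noise extrapolation: measure the same observable while the noise is
# deliberately amplified by known factors, then fit a curve and read off the
# value at zero noise. The "measured" values below are made up for illustration.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])      # 1.0 = the hardware's native noise
measured    = np.array([0.84, 0.77, 0.70, 0.58])  # noisy estimates of the observable

# Fit a simple model (linear here) and extrapolate back to noise_scale = 0.
slope, intercept = np.polyfit(noise_scale, measured, deg=1)
print(f"mitigated estimate at zero noise: {intercept:.3f}")  # roughly 0.97
```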

Alongside these hardware advances, we teamed up with the University of California, Berkeley, to demonstrate in 2023 that a quantum computer running our 127-qubit quantum chip, Eagle, could run circuits beyond the ability of brute-force classical simulation (that is, methods in which the classical computer exactly simulates the quantum computer in order to run the circuit), thereby reaching quantum utility. And we did so for a real condensed-matter physics problem—namely, finding the value of a property called magnetization for a system of simplified atoms with a structure that looked like the layout of our processors’ qubits.


Left: A quantum processing unit is more than just a chip. It includes the interconnects, amplifiers, and signal filtering. It also requires the classical hardware, including the room-temperature classical computers needed to receive and apply instructions and return outputs. Right: At the heart of an IBM quantum computer is a multilayer semiconductor chip etched with superconducting circuits. These circuits comprise the qubits used to perform calculations. Chips are divided into a layer with the qubits, a layer with resonators for readout, and multiple layers of wiring for input and output.


Error Correction to the Rescue

We were able to demonstrate that our quantum hardware can outperform brute-force classical simulation without leveraging the most powerful area of quantum-computing theory: quantum error correction.

Unlike error mitigation, which deals with noise after a computation, quantum error correction can remove noise as it arises during the process. And it works for a more general kind of noise; you don’t need to figure out a specific noise model first. Plus, while error mitigation is limited in its ability to scale as the complexity of quantum circuits grows, error correction will continue to work at large scales.



But quantum error correction comes at a huge cost: It requires more qubits, more connectivity, and more gates. For every qubit you want to compute with, you may need many more to enable error correction. Recent advances in improving hardware and finding better error-correcting codes have allowed us to envision an error-corrected supercomputer that can make those costs worthwhile.

Quantum error-correcting schemes are a bit more involved than error correction in traditional binary computers. To work at all, these quantum schemes require that the hardware error rate is below a certain threshold. Since quantum error correction’s inception, theorists have devised new codes with more relaxed thresholds, while quantum-computer engineers have developed better-performing systems. But there hasn’t yet been a quantum computer capable of using error correction to perform large-scale calculations.
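The threshold idea has a simple classical analogue: a three-bit repetition code with majority-vote decoding only helps when the underlying error rate is low enough. Quantum codes are far more involved (they must also handle phase errors and cannot simply copy qubits), but the sketch below, with arbitrarily chosen error rates, illustrates why a hardware error rate below threshold is essential.

```python
# Classical analogy for the error-correction threshold: encode one bit as
# three copies and decode by majority vote. The encoded bit is wrong only if
# two or more copies flip, which helps only when the physical flip
# probability p is below a threshold (here, p < 0.5).
def logical_error_rate(p: float) -> float:
    """Probability that majority vote over 3 copies returns the wrong bit."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.3, 0.1, 0.01):
    print(f"physical error {p:>5}: logical error {logical_error_rate(p):.4f}")
# physical error   0.3: logical error 0.2160  (helps a little)
# physical error   0.1: logical error 0.0280  (helps a lot)
# physical error  0.01: logical error 0.0003  (helps enormously)
```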

Meanwhile, error-correction theory has continued to advance. One promising finding by Moscow State University physicists Pavel Panteleev and Gleb Kalachev inspired us to pursue a new kind of error-correcting code for our systems. Their 2021 paper demonstrated the theoretical existence of “good codes,” codes where the number of extra qubits required to perform error correction scales more favorably.



This led to an explosion of research into a family of codes called quantum low-density parity check codes, or qLDPC codes. Earlier this year, our team published a qLDPC code with an error threshold high enough that we could conceivably implement it on near-term quantum computers; the amount of required connectivity between qubits was only slightly beyond what our hardware already supplies. This code would need only a tenth the number of qubits as previous methods to achieve error correction at the same level.

These theoretical developments allow us to envision an error-corrected quantum computer at experimentally accessible scales, provided we can connect enough quantum processing power together, and leverage classical computing as much as possible.

Hybrid Classical-Quantum Computers for the Win

To take advantage of error correction, and to reach large enough scales to solve human-relevant problems with quantum computers, we need to build larger QPUs or connect multiple QPUs together. We also need to incorporate classical computing with the quantum system.


Quantum-centric supercomputers will include thousands of error-corrected qubits to unlock the full power of quantum computers. Here’s how we’ll get there.

2024

Heron

→ 156 qubits

→ 5K gates before errors set in

2025

Flamingo

→ Introduce l-couplers between chips

→ Connect 7 chips for 7 x 156 = 1,092 qubits

→ 5K gates before errors set in

2027

Flamingo

→ l-couplers between chips

→ 7 x 156 = 1,092 qubits

→ Improved hardware and error mitigation

→ 10K gates before errors set in

2029

Starling

→ 200 qubits

→ l-, m-, and c-couplers combined

→ Error correction

→ 100M gates

2033

Blue Jay

→ 2,000 qubits

→ Error correction

→ 1B gates


Last year, we released a machine we call the IBM Quantum System Two, which we can use to start prototyping error mitigation and error correction in a scalable quantum computing system. System Two relies on larger, modular cryostats, allowing us to place multiple quantum processors into a single refrigerator with short-range interconnects, and then combine multiple fridges into a bigger system, kind of like adding more racks to a traditional supercomputer.

Along with the System Two release, we also detailed a 10-year plan for realizing our vision. Much of the early hardware work on that road map has to do with interconnects. We’re still developing the interconnects required to connect quantum chips into larger chips like Lego blocks, which we call m-couplers. We’re also developing interconnects to transfer quantum information between more distant chips, called l-couplers. We hope to prototype both m- and l-couplers by the end of this year. We’re also developing on-chip couplers that link qubits on the same chip that are more distant than their nearest neighbors—a requirement of our newly developed error-correction code. We plan to deliver this c-coupler by the end of 2026. In the meantime, we’ll be improving error mitigation so that by 2028, we can run a quantum program across seven parallel quantum chips, each chip capable of performing up to 15,000 accurate gates before the errors set in, on 156 qubits.

We’re also continuing to advance error correction. Our theorists are always looking for codes that require fewer extra qubits for more error-correcting power and allow for higher error thresholds. We must also determine the best way to run operations on information that’s encoded into the error-correcting code, and then decode that information in real time. We hope to demonstrate those by the end of 2028. That way, in 2029, we can debut our first quantum computer incorporating both error mitigation and error correction that can run up to 100 million gates until the errors take hold, on 200 qubits. Further advances in error correction will allow us to run a billion gates on 2,000 qubits by 2033.

Knitting Together a Quantum-Centric Supercomputer

The ability to mitigate and correct errors removes a major roadblock in the way of full-scale quantum computing. But we still don’t think it’ll be enough to tackle the largest, most valuable problems. For that reason, we’ve also introduced a new way of running algorithms, where multiple quantum circuits and distributed classical computing are woven together into a quantum-centric supercomputer.

Many envision the “quantum computer” as a single QPU, working on its own to run programs with billions of operations on millions of physical qubits. Instead, we envision computers incorporating multiple QPUs, running quantum circuits in parallel with distributed classical computers.

Combining the strengths of quantum and classical


Quantum-centric supercomputing leverages quantum and classical resources in parallelized workloads to run computations larger than what was possible before. A quantum-centric supercomputer is a system optimized to orchestrate work across the quantum computers and advanced classical compute clusters in the same data center.


Recent work has demonstrated techniques that let us run quantum circuits much more efficiently by incorporating classical computing with quantum processing. These techniques, called circuit knitting, break down a single quantum-computing problem into multiple quantum-computing problems and then run them in parallel on quantum processors. A combination of quantum and classical computers then knits the circuit results together for the final answer.

Another technique uses the classical computer to run all but the core, intrinsically quantum part of the calculation. It is this last vision that we believe will realize quantum advantage first.

Therefore, a quantum computer doesn’t just include one quantum processor, its control electronics, and its dilution refrigerator—it also includes the classical processing required to perform error correction and error mitigation.

We haven’t realized a fully integrated quantum-centric supercomputer yet. But we’re laying the groundwork with System Two, and Qiskit, our full-stack quantum-computing software for running large quantum workloads. We are building middleware capable of managing circuit knitting, and of provisioning the appropriate computing resources when and where they’re required. The next step is to mature our hardware and software infrastructure so that quantum and classical can extend one another to do things beyond the capabilities of either.

Today’s quantum computers are now scientific tools capable of running programs beyond the brute-force ability of classical simulation, at least when simulating certain quantum systems. But we must continue improving both our quantum and classical infrastructure so that, combined, it’s capable of speeding up solutions for problems relevant to humanity. With that in mind, we hope that the broader computing community will continue researching new algorithms incorporating circuit knitting, parallelized quantum circuits, and error mitigation in order to find use cases that can benefit from quantum in the near term.

And we look forward to a day when the Top 500 list of most powerful supercomputers will include machines that have quantum processors at their hearts.

Atomically Thin Materials Significantly Shrink Qubits



Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

IBM has adopted a superconducting-qubit road map that targets a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have managed both to shrink the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT

In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects and are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
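For a rough sense of why a thin, low-loss dielectric shrinks the footprint, the ideal parallel-plate relation C = ε0 εr A / d is a useful back-of-the-envelope guide. The sketch below plugs in illustrative values (a target capacitance of roughly 100 femtofarads and an assumed hBN thickness and permittivity, none taken from the MIT paper) to compare plate sizes; real devices require careful electromagnetic modeling.

```python
# Rough, illustrative comparison of capacitor footprints using the ideal
# parallel-plate formula C = eps0 * eps_r * A / d. All numbers here are
# assumptions for illustration, not values from the MIT paper.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
C_TARGET = 100e-15    # ~100 fF, an assumed ballpark qubit shunt capacitance
EPS_R_HBN = 3.5       # assumed out-of-plane relative permittivity of hBN
D_HBN = 20e-9         # assumed hBN dielectric thickness, 20 nm

# Plate area needed for a stacked (sandwich) capacitor with a thin hBN dielectric.
area = C_TARGET * D_HBN / (EPS0 * EPS_R_HBN)
side_um = (area ** 0.5) * 1e6
print(f"sandwich capacitor: ~{side_um:.0f} um on a side")  # roughly 8 um

# Compare with the ~100 um x 100 um plates of a conventional coplanar design.
print("coplanar capacitor: ~100 um on a side")
```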

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
