A Match Made in Yorktown Heights



It pays to have friends in fascinating places. You need look no further than the cover of this issue and the article “IBM’s Big Bet on the Quantum-Centric Supercomputer” for evidence. The article, by Ryan Mandelbaum, Antonio D. Córcoles, and Jay Gambetta, came to us courtesy of its illustrator, the inimitable graphic artist Carl De Torres, a longtime IEEE Spectrum contributor as well as a design and communications consultant for IBM Research.

Story ideas typically originate with Spectrum’s editors or arrive as pitches from expert authors and freelance journalists. So we were intrigued when De Torres approached Spectrum about doing an article on IBM Research’s cutting-edge work on quantum-centric supercomputing.

De Torres has been collaborating with IBM in a variety of capacities since 2009, when, while at Wired magazine creating infographics, he was asked by the ad agency Ogilvy to work on Big Blue’s advertising campaign “Let’s build a Smarter Planet.” That project went so well that De Torres struck out on his own the next year. His relationship with IBM expanded, as did his engagements with other media, such as Spectrum, Fortune, and The New York Times. “My interest in IBM quickly grew beyond helping them in a marketing capacity,” says De Torres, who owns and leads the design studio Optics Lab in Berkeley, Calif. “What I really wanted to do is get to the source of some of the smartest work happening in technology, and that was IBM Research.”

Last year, while working on visualizations of a quantum-centric supercomputer with Jay Gambetta, vice president and lead scientist of IBM Quantum at the Thomas J. Watson Research Center in Yorktown Heights, N.Y., De Torres was inspired to contact Spectrum’s creative director, Mark Montgomery, with an idea.

“I really loved this process because I got to bring together two of my favorite clients to create something really special.” —Carl De Torres

“I thought, ‘You know, I think IEEE Spectrum would love to see this work,’” De Torres told me. “So with Jay’s permission, I gave Mark a 30-second pitch. Mark liked it and ran it by the editors, and they said that it sounded very promising.” De Torres, members of the IBM Quantum team, and Spectrum editors had a call to brainstorm what the article could be. “From there everything quickly fell into place, and I worked with Spectrum and the IBM Quantum team on a visual approach to the story,” De Torres says.

As for the text, we knew it would take a deft editorial hand to help the authors explain what amounts to the peanut butter and chocolate of advanced computing. Fortunately for us, and for you, dear reader, Associate Editor Dina Genkina has a doctorate in atomic physics, in the subfield of quantum simulation. As Genkina explained to me, that specialty is “adjacent to quantum computing, but not quite the same—it’s more like the analog version of QC that’s not computationally complete.”

Genkina was thrilled to work with De Torres to make the technical illustrations both accurate and edifying. Spectrum prides itself on its tech illustrations, which De Torres notes are increasingly rare in the space-constrained era of mobile-media consumption.

“Working with Carl was so exciting,” Genkina says. “It was really his vision that made the article happen, and the scope of his ambition for the story was at times a bit terrifying. But it’s the kind of story where the illustrations make it come to life.”

De Torres was happy with the collaboration, too. “I really loved this process because I got to bring together two of my favorite clients to create something really special.”

This article appears in the September 2024 print issue.

IBM’s Big Bet on the Quantum-Centric Supercomputer



Back in June 2022, Oak Ridge National Laboratory debuted Frontier—the world’s most powerful supercomputer. Frontier can perform a billion billion calculations per second. And yet there are computational problems that Frontier may never be able to solve in a reasonable amount of time.

Some of these problems are as simple to state as factoring a large number into primes. Others are among the most important facing the world today, like quickly modeling complex molecules for drugs to treat emerging diseases, and developing more efficient materials for carbon capture or batteries.

However, in the next decade, we expect a new form of supercomputing to emerge unlike anything prior. Not only could it potentially tackle these problems, but we hope it’ll do so with a fraction of the cost, footprint, time, and energy. This new supercomputing paradigm will incorporate an entirely new computing architecture, one that mirrors the strange behavior of matter at the atomic level—quantum computing.

For decades, quantum computers have struggled to reach commercial viability. The quantum behaviors that power these computers are extremely sensitive to environmental noise, and difficult to scale to large enough machines to do useful calculations. But several key advances have been made in the last decade, with improvements in hardware as well as theoretical advances in how to handle noise. These advances have allowed quantum computers to finally reach a performance level where their classical counterparts are struggling to keep up, at least for some specific calculations.

For the first time, we here at IBM can see a path toward useful quantum computers, and we can begin imagining what the future of computing will look like. We don’t expect quantum computing to replace classical computing. Instead, quantum computers and classical computers will work together to run computations beyond what’s possible on either alone. Several supercomputer facilities around the world are already planning to incorporate quantum-computing hardware into their systems, including Germany’s Jupiter, Japan’s Fugaku, and Poland’s PSNC. While it has previously been called hybrid quantum-classical computing, and may go by other names, we call this vision quantum-centric supercomputing.

A Tale of Bits and Qubits

At the heart of our vision for a quantum-centric supercomputer is the quantum hardware, which we call a quantum processing unit (QPU). The power of the QPU to perform better than classical processing units in certain tasks comes from an operating principle that’s fundamentally different, one rooted in the physics of quantum mechanics.

In the standard or “classical” model of computation, we can reduce all information to strings of binary digits, bits for short, which can take on values of either 0 or 1. We can process that information using simple logic gates, like AND, OR, NOT, and NAND, which act on one or two bits at a time. The “state” of a classical computer is determined by the states of all its bits. So, if you have N bits, then the computer can be in just one of 2^N states.



But a quantum computer has access to a much richer repertoire of states during computation. A quantum computer also has bits. But instead of just 0 and 1, its quantum bits—qubits—can represent 0, 1, or a linear combination of both, via a quantum property known as superposition. While a digital computer can be in just one of those 2^N states, a quantum computer can be in many logical states at once during the computation. And the superpositions of different qubits can be correlated with one another in a fundamental way, thanks to another quantum property known as entanglement. At the end of the computation, each qubit settles into just one state, chosen according to probabilities generated during the running of the quantum algorithm.
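To make these ideas concrete, here is a minimal sketch using Qiskit, IBM’s open-source quantum-computing toolkit (discussed later in this article). The two-qubit “Bell state” circuit below puts one qubit into superposition and entangles a second qubit with it:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Build a two-qubit "Bell state": a superposition of |00> and |11>.
qc = QuantumCircuit(2)
qc.h(0)       # Hadamard gate: puts qubit 0 into a superposition of 0 and 1
qc.cx(0, 1)   # CNOT gate: entangles qubit 1 with qubit 0

# Exact simulation of the final state and its measurement probabilities.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5}
```

Measuring this state returns 00 or 11 with equal probability, and never 01 or 10; that perfect correlation between the two outcomes is the entanglement described above.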

It’s not obvious how this computing paradigm can outperform the classical one. But in 1994, Peter Shor, a mathematician at MIT, discovered an algorithm that, using the quantum-computing paradigm, could divide large numbers into their prime factors exponentially faster than the best known classical algorithm. Two years later, Lov Grover discovered a quantum algorithm that could find a particular entry in an unstructured database quadratically faster than a classical search could.

Perhaps most importantly, since quantum computers follow the laws of quantum mechanics, they are the right tool for simulating the fundamentally quantum phenomena of our world, such as molecular interactions for drug discovery or materials design.

The Quantum-Centric Supercomputer’s Center

Before we can build a quantum-centric supercomputer, we have to make sure it’s capable of doing something useful. Building a capable enough QPU relies on constructing hardware that can re-create counterintuitive quantum behaviors.

Here at IBM, the basic building block of a quantum computation—the qubit—is made out of superconducting components. Each physical qubit consists of two superconducting plates, which act as a capacitor, wired to components called Josephson junctions, which act as special lossless, nonlinear inductors.

The current flowing across Josephson junctions is quantized—fixed to discrete values. The Josephson junctions ensure that only two of those values (or their superpositions) are realistically accessible. The qubit is encoded in two current levels, one representing a 0, the other a 1. But, as mentioned, the qubit can also exist in a superposition of the 0 and 1 states.
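In standard notation (a compact aside, not part of the original article), the state of such a qubit is written as

```latex
\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\]
```

where the two accessible current levels play the roles of |0⟩ and |1⟩, and measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².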

Because superconductors need frigid temperatures to maintain superconductivity, the qubits and some of their control circuitry are held inside a specialty liquid-helium fridge called a dilution refrigerator.

We change the qubit states and couple qubits together with quantum instructions, commonly known as gates. These are a series of specially crafted microwave waveforms. A QPU includes all of the hardware responsible for accepting a set of quantum instructions—called a quantum circuit—and returning a single output represented by a binary string. The QPU includes the qubits plus components that amplify signals, the control electronics, and the classical computation required for tasks such as holding the instructions in memory, accumulating and separating signals from noise, and creating single binary outputs. We etch components like qubits, resonators for readouts, output filters, and quantum buses into a superconducting layer deposited on top of a silicon chip.
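That end-to-end contract of a QPU—accept a quantum circuit, return binary strings—is mirrored in software. Here is a hedged sketch using Qiskit’s sampler primitive, run on a built-in simulator rather than real hardware; shot counts are illustrative:

```python
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler

# A quantum circuit: the set of instructions the QPU accepts.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Submit the circuit; each "shot" returns one binary output string.
job = StatevectorSampler().run([qc], shots=1024)
counts = job.result()[0].data.meas.get_counts()
print(counts)   # e.g. {'00': 506, '11': 518}
```

On real hardware, the same circuit would be compiled down to the microwave waveforms described above, but the interface—circuit in, bit strings out—stays the same.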

But controlling qubits at the supersensitive quantum level is a challenge. External noise, noise from the electronics, and cross talk between control signals for different qubits all destroy the fragile quantum properties of the qubits. Taming these noise sources has been key in reaching the point where we can envision useful quantum-centric supercomputers.

Getting the Quantum Stuff up to Snuff

No one has yet conclusively demonstrated quantum advantage—that is, a quantum computer that outperforms the best classical one on a relevant real-world task. Demonstrating true quantum advantage would herald a new era of computing, in which previously intractable tasks would be within reach.

Before we can approach this grandiose goal, we have to set our sights a bit lower, to a target we call quantum utility. Quantum utility is the ability of quantum hardware to outperform brute-force classical calculations of a quantum circuit. In other words, it’s the point where quantum hardware is better at doing quantum computations than a traditional computer is.
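To see why brute-force simulation hits a wall, consider the memory a classical computer needs just to store the full state of an N-qubit circuit: 2^N complex amplitudes. A back-of-the-envelope calculation (assuming 16 bytes per double-precision complex amplitude):

```python
# Memory needed to store the full statevector of an N-qubit system,
# assuming one double-precision complex amplitude (16 bytes) per basis state.
def statevector_bytes(n_qubits: int) -> float:
    return 16.0 * 2.0 ** n_qubits

for n in (30, 50, 127):
    print(f"{n:>3} qubits: {statevector_bytes(n):.2e} bytes")

# 30 qubits:  ~17 gigabytes (a laptop struggles)
# 50 qubits:  ~18 petabytes (beyond any single machine)
# 127 qubits: ~2.7e39 bytes (more memory than could plausibly ever be built)
```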




This may sound underwhelming, but it is a necessary stepping-stone on the way to quantum advantage. In recent years, the quantum community has finally reached this threshold. Demonstrating quantum utility of our QPU, which we did in 2023, has convinced us that our quantum hardware is advanced enough to merit being built into a quantum-centric supercomputer. Achieving this milestone has taken a combination of advances, including both hardware and algorithmic improvements.

Since 2019, we’ve been incorporating advances in semiconductor fabrication to introduce 3D integration to our chips. This lets us address the qubits from a control chip placed below the qubit plane, reducing the wiring on the qubit chip—a potential source of noise. We also introduced readout multiplexing, which allows us to access the information from several qubits with a single wire, drastically reducing the amount of hardware we have to put in the dilution refrigerator.

In 2023, we implemented a new way to perform quantum gates—the steps of a program that change the values of the qubits—on our hardware, using components called tunable couplers. Previously, we prevented cross talk by fabricating qubits that respond to different frequencies, so that they wouldn’t react to microwave pulses meant for other qubits. But this made it too difficult for the qubits to perform the essential task of talking to one another, and it also made the processors slow. With tunable couplers, we don’t need the frequency-specific fabrication. Instead, we introduced a sort of “on-off” switch, using magnetic fields to decide whether or not a qubit should talk to another qubit. The result: We virtually eliminated cross-talk errors between qubits, allowing us to run much faster, more reliable gates.


As our hardware improved, we also demonstrated that we could deal with some noise using an error mitigation algorithm. Error mitigation can be done in many ways. In our case, we run quantum programs, analyze how the noise in our system changes the program outputs, and then create a noise model. Then we can use classical computing and our noise model to recover what a noise-free result would look like. The surrounding hardware and software of our quantum computer therefore includes classical computing capable of performing error mitigation, suppression, and eventually, error correction.
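One widely used error-mitigation recipe, zero-noise extrapolation, illustrates the flavor of this classical post-processing. The sketch below is a toy with made-up numbers, not IBM’s production pipeline: run the same circuit at deliberately amplified noise levels, fit a model to how the answer degrades, and extrapolate back to zero noise.

```python
import numpy as np

# Hypothetical expectation values of the same observable, measured while
# stretching the noise by factors of 1.0x, 1.5x, and 2.0x.
noise_factors = np.array([1.0, 1.5, 2.0])
noisy_values = np.array([0.81, 0.73, 0.66])

# Fit <O>(lambda) ~ a*lambda + b; the intercept b is the zero-noise estimate.
a, b = np.polyfit(noise_factors, noisy_values, deg=1)
print(f"mitigated (zero-noise) estimate: {b:.3f}")   # ~0.96
```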

Alongside these hardware improvements, we teamed up with the University of California, Berkeley, to demonstrate in 2023 that a quantum computer running our 127-qubit quantum chip, Eagle, could run circuits beyond the reach of brute-force classical simulation—that is, methods where the classical computer exactly simulates the quantum computer in order to run the circuit—thereby reaching quantum utility. And we did so for a real condensed-matter physics problem: finding the value of a property called magnetization for a system of simplified atoms arranged to match the layout of our processors’ qubits.
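For a sense of the kind of circuit involved, here is a hedged sketch—not the published experiment itself—of one Trotter step of transverse-field Ising dynamics on a line of qubits; the 2023 utility demonstration ran deep circuits of essentially this structure on Eagle’s 127-qubit lattice and then measured magnetization:

```python
from qiskit import QuantumCircuit

def ising_trotter_step(n: int, theta_zz: float, theta_x: float) -> QuantumCircuit:
    """One Trotter step of kicked transverse-field Ising dynamics on a line."""
    qc = QuantumCircuit(n)
    for q in range(n):
        qc.rx(theta_x, q)               # transverse-field "kick" on every spin
    for start in (0, 1):                # even bonds, then odd bonds
        for q in range(start, n - 1, 2):
            qc.rzz(theta_zz, q, q + 1)  # ZZ interaction between neighbors
    return qc

step = ising_trotter_step(n=12, theta_zz=-0.5, theta_x=0.7)
print(step.depth())
```

Repeating such steps quickly produces circuits too deep and too wide for exact classical simulation, which is what made the experiment a utility milestone.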


Left: A quantum processing unit is more than just a chip. It includes the interconnects, amplifiers, and signal filtering. It also requires the classical hardware, including the room-temperature classical computers needed to receive and apply instructions and return outputs. Right: At the heart of an IBM quantum computer is a multilayer semiconductor chip etched with superconducting circuits. These circuits comprise the qubits used to perform calculations. Chips are divided into a layer with the qubits, a layer with resonators for readout, and multiple layers of wiring for input and output.


Error Correction to the Rescue

We were able to demonstrate that our quantum hardware can outperform brute-force classical simulation without leveraging the most powerful area of quantum-computing theory: quantum error correction.

Unlike error mitigation, which deals with noise after a computation, quantum error correction can remove noise as it arises during the process. And it works for a more general kind of noise; you don’t need to figure out a specific noise model first. Plus, while error mitigation is limited in its ability to scale as the complexity of quantum circuits grows, error correction will continue to work at large scales.



But quantum error correction comes at a huge cost: It requires more qubits, more connectivity, and more gates. For every qubit you want to compute with, you may need many more to enable error correction. Recent advances in improving hardware and finding better error-correcting codes have allowed us to envision an error-corrected supercomputer that can make those costs worthwhile.
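That qubit overhead is easiest to see in the textbook three-qubit repetition code, which protects one logical bit value against a single bit-flip by spending five physical qubits. This is a toy example; practical schemes such as surface codes or the qLDPC codes discussed below are far more elaborate:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)        # 3 data qubits + 2 syndrome qubits
qc.cx(0, 1)
qc.cx(0, 2)                      # encode: copy the bit value across 3 qubits
qc.x(1)                          # deliberately inject a bit-flip on qubit 1
qc.cx(0, 3)
qc.cx(1, 3)                      # syndrome bit 0 = parity of qubits 0 and 1
qc.cx(1, 4)
qc.cx(2, 4)                      # syndrome bit 1 = parity of qubits 1 and 2
qc.measure([3, 4], [0, 1])       # outcome '11' pinpoints the error on qubit 1
```

The syndrome measurement reveals where the error happened without ever measuring—and thus destroying—the encoded information; a classical decoder then decides which qubit to fix.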

Quantum error-correcting schemes are a bit more involved than error correction in traditional binary computers. To work at all, these quantum schemes require that the hardware error rate be below a certain threshold. Since quantum error correction’s inception, theorists have devised new codes with more relaxed thresholds, while quantum-computer engineers have developed better-performing systems. But there hasn’t yet been a quantum computer capable of using error correction to perform large-scale calculations.

Meanwhile, error-correction theory has continued to advance. One promising finding by Moscow State University physicists Pavel Panteleev and Gleb Kalachev inspired us to pursue a new kind of error-correcting code for our systems. Their 2021 paper demonstrated the theoretical existence of “good codes,” codes where the number of extra qubits required to perform error correction scales more favorably.
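In the standard notation for quantum codes (a brief aside in our own words), a code labeled [[n, k, d]] uses n physical qubits to protect k logical qubits against any error touching fewer than d/2 of them. “Good” codes are families in which both key ratios stay constant as the code grows:

```latex
\[
\frac{k}{n} = \Theta(1) \quad \text{and} \quad \frac{d}{n} = \Theta(1),
\]
```

so the overhead per logical qubit no longer balloons as the computer scales up.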



This led to an explosion of research into a family of codes called quantum low-density parity check codes, or qLDPC codes. Earlier this year, our team published a qLDPC code with an error threshold high enough that we could conceivably implement it on near-term quantum computers; the amount of required connectivity between qubits was only slightly beyond what our hardware already supplies. This code would need only about one-tenth as many qubits as previous methods to achieve the same level of error correction.

These theoretical developments allow us to envision an error-corrected quantum computer at experimentally accessible scales, provided we can connect enough quantum processing power together, and leverage classical computing as much as possible.

Hybrid Classical-Quantum Computers for the Win

To take advantage of error correction, and to reach large enough scales to solve human-relevant problems with quantum computers, we need to build larger QPUs or connect multiple QPUs together. We also need to incorporate classical computing with the quantum system.


Quantum-centric supercomputers will include thousands of error-corrected qubits to unlock the full power of quantum computers. Here’s how we’ll get there.

2024

Heron

→ 156 qubits

→ 5K gates before errors set in

2025

Flamingo

→ Introduce l-couplers between chips

→ Connect 7 chips for 7 x 156 = 1,092 qubits

→ 5K gates before errors set in

2027

Flamingo

→ l-couplers between chips

→ 7 x 156 = 1,092 qubits

→ Improved hardware and error mitigation

→ 10K gates before errors set in

2029

Starling

→ 200 qubits

→ l-, m-, and c-couplers combined

→ Error correction

→ 100M gates

2033

Blue Jay

→ 2,000 qubits

→ Error correction

→ 1B gates


Last year, we released a machine we call the IBM Quantum System Two, which we can use to start prototyping error mitigation and error correction in a scalable quantum computing system. System Two relies on larger, modular cryostats, allowing us to place multiple quantum processors into a single refrigerator with short-range interconnects, and then combine multiple fridges into a bigger system, kind of like adding more racks to a traditional supercomputer.

Along with the System Two release, we also detailed a 10-year plan for realizing our vision. Much of the early hardware work on that road map has to do with interconnects. We’re still developing the interconnects required to connect quantum chips into larger chips like Lego blocks, which we call m-couplers. We’re also developing interconnects to transfer quantum information between more distant chips, called l-couplers. We hope to prototype both m- and l-couplers by the end of this year. We’re also developing on-chip couplers that link qubits on the same chip that are more distant than their nearest neighbors—a requirement of our newly developed error-correction code. We plan to deliver this c-coupler by the end of 2026. In the meantime, we’ll be improving error mitigation so that by 2028, we can run a quantum program across seven parallel quantum chips, each chip capable of performing up to 15,000 accurate gates before the errors set in, on 156 qubits.

We’re also continuing to advance error correction. Our theorists are always looking for codes that require fewer extra qubits for more error-correcting power and allow for higher error thresholds. We must also determine the best way to run operations on information that’s encoded into the error-correcting code, and then decode that information in real time. We hope to demonstrate those capabilities by the end of 2028. That way, in 2029, we can debut our first quantum computer incorporating both error mitigation and error correction, able to run up to 100 million gates before errors take hold, on 200 qubits. Further advances in error correction will allow us to run a billion gates on 2,000 qubits by 2033.

Knitting Together a Quantum-Centric Supercomputer

The ability to mitigate and correct errors removes a major roadblock in the way of full-scale quantum computing. But we still don’t think it’ll be enough to tackle the largest, most valuable problems. For that reason, we’ve also introduced a new way of running algorithms, where multiple quantum circuits and distributed classical computing are woven together into a quantum-centric supercomputer.

Many envision the “quantum computer” as a single QPU, working on its own to run programs with billions of operations on millions of physical qubits. Instead, we envision computers incorporating multiple QPUs, running quantum circuits in parallel with distributed classical computers.

Combining the strengths of quantum and classical


Quantum-centric supercomputing leverages quantum and classical resources in parallelized workloads to run computations larger than what was possible before. A quantum-centric supercomputer is a system optimized to orchestrate work across the quantum computers and advanced classical compute clusters in the same data center.


Recent work has demonstrated techniques that let us run quantum circuits much more efficiently by incorporating classical computing with quantum processing. These techniques, called circuit knitting, break down a single quantum-computing problem into multiple smaller ones and run them in parallel on quantum processors. A combination of quantum and classical computers then knits the results together into the final answer.
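The simplest possible illustration of the knitting idea is a deliberately trivial sketch—real circuit cutting also handles entangling gates across the cut, at the price of extra sampling. If an observable factorizes across two halves of a circuit that never interact, each half can run as its own smaller job, and a classical computer multiplies the results back together:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp, Statevector

def expectation_z(theta: float) -> float:
    """<Z> for a single qubit prepared by RY(theta) -- one small quantum job."""
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    value = Statevector.from_instruction(qc).expectation_value(SparsePauliOp("Z"))
    return float(np.real(value))

# For a two-qubit circuit whose halves never interact, <Z x Z> of the whole
# equals the product of the two one-qubit results, "knit" together classically.
knit = expectation_z(0.3) * expectation_z(1.1)
print(knit)   # matches cos(0.3) * cos(1.1) from the uncut circuit
```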

Another technique uses the classical computer to run all but the core, intrinsically quantum part of the calculation. It is this last vision that we believe will realize quantum advantage first.

Therefore, a quantum computer doesn’t just include one quantum processor, its control electronics, and its dilution refrigerator—it also includes the classical processing required to perform error correction and error mitigation.

We haven’t realized a fully integrated quantum-centric supercomputer yet. But we’re laying the groundwork with System Two, and Qiskit, our full-stack quantum-computing software for running large quantum workloads. We are building middleware capable of managing circuit knitting, and of provisioning the appropriate computing resources when and where they’re required. The next step is to mature our hardware and software infrastructure so that quantum and classical can extend one another to do things beyond the capabilities of either.

Today’s quantum computers are now scientific tools capable of running programs beyond the brute-force ability of classical simulation, at least when simulating certain quantum systems. But we must continue improving both our quantum and classical infrastructure so that, combined, it’s capable of speeding up solutions for problems relevant to humanity. With that in mind, we hope that the broader computing community will continue researching new algorithms incorporating circuit knitting, parallelized quantum circuits, and error mitigation in order to find use cases that can benefit from quantum in the near term.

And we look forward to a day when the TOP500 list of the world’s most powerful supercomputers will include machines that have quantum processors at their hearts.
