
A Picture Is Worth 4.6 Terabits



Clark Johnson says he has wanted to be a scientist ever since he was 3. At age 8, he got bored with a telegraph-building kit he received as a gift and repurposed it into a telephone. By age 12, he set his sights on studying physics because he wanted to understand how things worked at the most basic level.

“I thought, mistakenly at the time, that physicists were attuned to the left ear of God,” Johnson says.

Clark Johnson

Employer: Wave Domain
Title: CFO
Member grade: Life Fellow

After graduating at age 19 with a bachelor’s degree in physics in 1950 from the University of Minnesota Twin Cities, he was planning to go to graduate school when he got a call from the head of the physics section at 3M’s R&D laboratory with a job offer. Tempted by the promise of doing things with his own hands, Johnson accepted the role of physicist at the company’s facility in St. Paul, Minn. Thus began his more than seven-decade-long career as an electrical engineer, inventor, and entrepreneur—which continues to this day.

Johnson, an IEEE Life Fellow, is an active member of the IEEE Magnetics Society and served as its 1983–1984 president.

He was on the science committee of the U.S. House of Representatives, and then was recruited by the Advanced Research Projects Agency (ARPA) and assigned to assist in MIT’s Research Program on Communications Policy, where he contributed to the development of HDTV.

He went on to help found Wave Domain in Monson, Mass. Johnson and his Wave Domain collaborators have been granted six patents for their latest invention, a standing-wave storage (SWS) system that houses archival data in a low-energy-use, tamper-proof way using antiquated photography technology.

3M, HDTV, and a career full of color

3M turned out to be fertile ground for Johnson’s creativity.

“You could spend 15 percent of your time working on things you liked,” he says. “The president of the company believed that new ideas sort of sprung out of nothing, and if you poked around, you might come across something that could be useful.”

Johnson’s poking around led him to contribute to developing an audio tape cartridge and Scotchlite, the reflective film seen on roads, signs, and more.

In 1989 he was tapped to be an IEEE Congressional Fellow. He chose to work with Rep. George Brown Jr., a Democrat representing the 42nd district in central California. Brown was a ranking member of the House Committee on Science, Space, and Technology, which oversees almost all non-defense and non-health-related research.

“It was probably the most exciting year of my entire life,” Johnson says.

While on the science committee, he met Richard Jay Solomon, who was associate director of MIT’s Research Program on Communications Policy, testifying for the committee on video and telecom issues. Solomon’s background is diverse. He studied physics and electrical engineering in the early 1960s at Brooklyn Polytechnic and general science at New York University. Before becoming a research associate at MIT in 1969, he held a variety of positions. He ran a magazine about scientific photography, and he founded a business that provided consulting on urban planning and transportation. He authored four textbooks on transportation planning, three of which were published by the American Society of Civil Engineers. At the magazine, Solomon gained insights into arcane, long-forgotten 19th-century photographic processes that turned out to be useful in future inventions.

Johnson and Solomon bonded over their shared interest in trains. Johnson’s refurbished Pullman car has traveled some 850,000 miles across the continental U.S. Clark Johnson

Johnson and Solomon clicked over a shared interest in trains. At the time they met, Johnson owned a railway car that was parked in the District of Columbia’s Union Station, and he used it to move throughout North America, traveling some 850,000 miles before selling the car in 2019. Johnson and Solomon shared many trips aboard the refurbished Pullman car.

Now they are collaborators on a new method to store big data in a tamperproof, zero-energy-cost medium.

Conventional storage devices such as solid-state drives and hard disks take energy to maintain, and they might degrade over time, but Johnson says the technique he, Solomon, and collaborators developed requires virtually no energy and can remain intact for centuries under most conditions.

Long before collaborating on their latest project, Johnson and Solomon teamed up on another high-profile endeavor: the development of HDTV. The project arose through their work on the congressional science committee.

In the late 1980s, engineers in Japan were working on developing an analog high-definition television system.

“My boss on the science committee said, ‘We really can’t let the Japanese do this. There’s all this digital technology and digital computers. We’ve got to do this digitally,’” Johnson says.

That spawned a collaborative project funded by NASA and ARPA (the predecessor of modern-day DARPA). After Johnson’s tenure on the science committee ended, he and Solomon joined a team at MIT that participated in the collaboration. As they developed what would become the dominant TV technology, Johnson and Solomon became experts in optics. Working with Polaroid, IBM, and Philips in 1992, the team demonstrated the world’s first digital, progressive-scanned, high-definition camera at the annual National Association of Broadcasters conference.

A serendipitous discovery

Around 2000, Johnson and Solomon, along with a new colleague, Eric Rosenthal, began working as independent consultants to NASA and the U.S. Department of Defense. Rosenthal had been a vice president of research and development at Walt Disney Imagineering and general manager of audiovisual systems engineering at ABC television prior to joining forces with Johnson and Solomon.

While working on one DARPA-funded project, Solomon stumbled upon a page in a century-old optics textbook that caught his eye. It described a method developed by noted physicist Gabriel Lippmann for producing color photographs. Instead of using film or dyes, Lippmann created photos by using a glass plate coated with a specially formulated silver halide emulsion.

When exposed to a bright, sunlit scene, the full spectrum of light reflected off a mercury-based mirror coating on the back of the glass. It created standing waves inside the emulsion layer of the colors detected. The silver grains in the brightest parts of the standing wave became oxidized, as if remembering the precise colors they saw. (It was in stark contrast to traditional color photographs and television, which store only red, green, and blue parts of the spectrum.) Then, chemical processing turned the oxidized silver halide grains black, leaving the light waves imprinted in the medium in a way that is nearly impossible to tamper with. Lippmann received the 1908 Nobel Prize in Physics for his work.
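A back-of-the-envelope way to see why the emulsion remembers exact colors (the numbers here are illustrative, not figures from Lippmann or Wave Domain): light of wavelength λ reflecting back on itself inside an emulsion of refractive index n forms a standing wave whose bright planes are spaced Δz = λ/(2n) apart. For green light at roughly 530 nanometers in gelatin with n ≈ 1.5, that spacing is about 175 nanometers, so the blackened silver layers record the wavelength itself rather than a red-green-blue approximation of it.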

Lippmann’s photography technique did not garner commercial success, because there was no practical way to duplicate the images or print them. And at the time, the emulsions needed the light to be extremely bright to be properly imprinted in the medium.

Nevertheless, Solomon was impressed with the durability of the resulting image. He explained the process to his colleagues, who recognized the possibility of using the technique to store information for archival purposes. Johnson saw Lippmann’s old photographs at the Museum for Photography, in Lausanne, Switzerland, where he noticed that the colors appeared clear and intense despite being more than a century old.

The silver halide method stuck with Solomon, and in 2013 he and Johnson returned to Lippmann’s emulsion photography technique.

“We got to talking about how we could take all this information we knew about color and use it for something,” Johnson says.

Data in space and on land

While Rosenthal was visiting the International Space Station headquarters in Huntsville, Ala., in 2013, a top scientist said, “‘The data stored on the station gets erased every 24 hours by cosmic rays,’” Rosenthal recalls. “‘And we have to keep rewriting the data over and over and over again.’” Cosmic rays and solar flares can damage electronic components, causing errors or outright erasures on hard disks and other traditional data storage systems.

Rosenthal, Johnson, and Solomon knew that properly processed silver halide photographs would be immune to such hazards, including electromagnetic pulses from nuclear explosions. The team examined Lippmann’s photographic emulsion anew.

Solomon’s son, Brian Solomon, a professional photographer and a specialist in making photographic emulsions, also was concerned about the durability of conventional dye-based color photographs, which tend to start fading after a few decades.

The team came up with an intriguing idea: Given how durable Lippmann’s photographs appeared to be, what if they could use a similar technique—not for making analog images but for storing digital data? Thus began their newest engineering endeavor: changing how archival data—data that doesn’t need to be overwritten but simply preserved and read occasionally—is stored.

The standing wave storage technique works by shining bright LEDs onto a specially formulated emulsion of silver grains in gelatin. The light reflects off the substrate layer (which could be air) and forms standing waves in the emulsion. Standing waves oxidize the silver grains at their peaks, and a chemical process turns the oxidized silver grains black, imprinting the pattern of colors into the medium. Wave Domain

Conventionally stored data sometimes is protected by making multiple copies or continuously rewriting it, Johnson says. The techniques require energy, though, and can be labor-intensive.

The amount of data that needs to be stored on land is also growing by leaps and bounds. The market for data centers and other artificial intelligence infrastructure is growing at an annual rate of 44 percent, according to Data Bridge Market Research. Commonly used hard drives and solid-state drives consume some power, even when they are not in use. The drives’ standby power consumption varies between 0.05 and 2.5 watts per drive. And data centers contain an enormous number of drives requiring tremendous amounts of electricity to keep running.
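To put those standby figures in rough perspective (the drive count below is a hypothetical chosen for illustration, not a number from the article): at 0.05 to 2.5 watts per idle drive, a facility holding 100,000 archival drives would draw between 5 and 250 kilowatts around the clock just to keep rarely read data on standby, on the order of 44 to 2,200 megawatt-hours per year.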

Johnson estimates that about 25 percent of the data held in today’s data centers is archival in nature, meaning it will not need to be overwritten.

The ‘write once, read forever’ technology

The technology Johnson, Solomon, and their collaborators have developed promises to overcome the energy requirements and vulnerabilities of traditional data storage for archival applications.

The design builds off of Lippmann’s idea. Instead of taking an analog photograph, the team divided the medium into pixels. With the help of emulsion specialist Yves Gentet, they worked to improve Lippmann’s emulsion chemistry, making it more sensitive and capable of storing multiple wavelengths at each pixel location. The final emulsion is a combination of silver halide and extremely hardened gelatin. Their technique now can store up to four distinct narrow-band, superimposed colors in each pixel.

The standing wave storage technique can store up to four colors out of a possible 32 at each pixel location. This adds up to an astounding storage capacity of 4.6 terabits (or roughly 300 movies) in the area of a single photograph. Wave Domain

“The textbooks say that’s impossible,” Solomon says, “but we did it, so the textbooks are wrong.”

For each pixel, they can choose four colors out of a possible 32 to store.

That amounts to more than 40,000 possibilities. Thus, the technique can store more than 40,000 bits (although the format need not be binary) in each 10-square-micrometer pixel, or 4.6 terabits in a 10.16-by-12.7-centimeter modified Lippmann plate. That’s more than 300 movies’ worth of data stored in a single picture.

To write on the SWS medium, the plate—coated with a thin layer of the specially formulated emulsion—is exposed to light from an array of powerful color LEDs.

That way, the entire plate is written simultaneously, greatly reducing the writing time per pixel.

The plate then gets developed through a chemical process that blackens the exposed silver grains, memorizing the waves of color it was exposed to.

Finally, a small charge-coupled-device (CCD) camera array, like those used in cellphones, reads out the information. The readout occurs for the entire plate at once, so the readout rate, like the writing rate, is fast.

“The data that we read is coming off the plate at such a high bandwidth,” Solomon says. “There is no computer on the planet that can absorb it without some buffering.”

The entire memory cell is a sandwich of the LED array, the photosensitive plate, and the CCD. All the elements use off-the-shelf parts.

“We took a long time to figure out how to make this in a very inexpensive, reproducible, quick way,” Johnson says. “The idea is to use readily available parts.” The entire storage medium, along with its read/write infrastructure, is relatively inexpensive and portable.

To test the durability of their storage method, the team sent their collaborators at NASA some 150 samples of their SWS devices to be hung by astronauts outside the International Space Station for nine months in 2019. They then tested the integrity of the stored data after the SWS plates were returned from space, compared with another 150 plates stored in Rosenthal’s lab on the ground.

“There was absolutely zero degradation from nine months of exposure to cosmic rays,” Solomon says. Meanwhile, the plates on Rosenthal’s desk were crawling with bacteria, while the ISS plates returned sterile. Because silver is a known bactericide, the colors on the ground plates were unaffected, Solomon says.

Their most recent patent, granted earlier this year, describes a method of storing data that requires no power to maintain when not actively reading or writing data. Team members say the technique is incorruptible: It is immune to moisture, solar flares, cosmic rays, and other kinds of radiation. So, they argue, it can be used both in space and on land as a durable, low-cost archival data solution.

Passing on the torch

The new invention has many potential applications. In addition to data centers and space applications, Johnson says, scientific enterprises such as the Rubin Observatory, under construction in Chile, will produce massive amounts of archival data that could benefit from SWS technology.

“It’s all reference data, and it’s an extraordinary amount of data that’s being generated every week that needs to be kept forever,” Johnson says.

Johnson says, however, that he and his team will not be the ones to bring the technology to market: “I’m 94 years old, and my two partners are in their 70s and 80s. We’re not about to start a company.”

He is ready to pass on the torch. The team is seeking a new chief executive to head up Wave Domain, which they hope will continue the development of SWS and bring it to mass adoption.

Johnson says he has learned that people rarely know which new technologies will eventually have the most impact. Perhaps, though few people are aware of it now, storing big data using old photographic technology will become an unexpected success.

Startups Squeeze Room-Size Optical Atomic Clocks Into a Briefcase



Walking into Jun Ye’s lab at the University of Colorado Boulder is a bit like walking into an electronic jungle. There are wires strung across the ceiling that hang down to the floor. Right in the middle of the room are four hefty steel tables with metal panels above them extending all the way to the ceiling. Slide one of the panels to the side and you’ll see a dense mesh of vacuum chambers, mirrors, magnetic coils, and laser light bouncing around in precisely orchestrated patterns.

This is one of the world’s most precise and accurate clocks, and it’s so accurate that you’d have to wait 40 billion years—or three times the age of the universe—for it to be off by one second.

What’s interesting about Ye’s atomic clock, part of a joint venture between the University of Colorado Boulder and the National Institute of Standards and Technology (NIST), is that it is optical, whereas most atomic clocks are microwave. The ticking heart of the clock is the strontium atom, and it beats at a frequency of 429 terahertz, or 429 trillion ticks per second. It’s the same frequency as light in the lower part of the red region of the visible spectrum, and that relatively high frequency is a pillar of the clock’s incredible precision. Commonly available atomic clocks beat at frequencies in the gigahertz range, or about 10 billion ticks per second. Going from the microwave to the optical makes it possible for Ye’s clock to be tens of thousands of times as precise.
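Both of those superlatives boil down to simple arithmetic (a rough, rounded calculation): losing at most 1 second over 40 billion years, or roughly 1.3 × 10^18 seconds, corresponds to a fractional frequency error of about 8 × 10^−19. And dividing each second into 429 trillion optical ticks rather than roughly 10 billion microwave ones slices time about 43,000 times more finely, which is where the tens-of-thousands figure comes from.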

The startup Vector Atomic uses a vapor of iodine molecules trapped in a small glass cell as the ticking heart of its optical atomic clock. Will Lunden

One of Ye’s former graduate students, Martin Boyd, cofounded a company called Vector Atomic, which has taken the idea behind Ye’s optical-clock technology and used it to make a clock small enough to fit in a box the size of a large briefcase. The precision of Vector Atomic’s clock is far from that of Ye’s—it might lose a second in 32 million years, says Jamil Abo-Shaeer, CEO of Vector Atomic. But it, too, operates at an optical frequency, and it matches or beats commercial alternatives.

In the past year, three separate companies have developed their own versions of compact optical atomic clocks—besides Vector Atomic, there’s also Infleqtion, in Boulder, Colo., and QuantX Labs, based in Adelaide, Australia. Freed from the laboratory, these new clocks promise greater resilience and a backup to GPS for military applications, as well as for data centers, financial institutions, and power grids. And they may enable a future of more-precise GPS, with centimeter-positioning resolution, exact enough to keep self-driving cars in their lanes, allow drones to drop deliveries onto balconies, and more.

And even more than all that, this is a story of invention at the frontiers of electronics and optics. Getting the technology from an unwieldy, lab-size behemoth to a reliable, portable product took a major shift in mind-set: The tech staff of these companies, mostly Ph.D. atomic physicists, had to go from focusing on precision at all costs to obsessing over compactness, robustness, and minimizing power consumption. They took an idea that pushed the boundaries of science and turned it into an invention that stretched the possibilities of technology.

How does an atomic clock work?

Like any scientist, Ye is motivated by understanding the deepest mysteries of the universe. He hopes his lab’s ultraprecise clocks will one day help glean the secrets of quantum gravity, or help understand the nature of dark matter. He also revels in the engineering complexity of his device.

“I love this job because everything you’re teaching in physics turns out to matter when you’re trying to measure things at such a high-precision level,” he says. For example, if someone walks into the lab, the minuscule thermal radiation emanating from their body will polarize the atoms in the lab ever so slightly, changing their ticking frequency. To maintain the clock’s precision, you need to bring that effect under control.

Inside the briefcase-size optical atomic clock: A laser (1) shines into a glass cell containing atomic vapor (2). The atoms absorb light at only a very precise frequency. A detector (3) measures the amount of absorption and uses that to stabilize the laser at the correct frequency. A frequency comb (4) gears down from the optical oscillation in the terahertz to the microwave range. The clock outputs an ultraprecise megahertz signal (5). Chris Philpot

In an atomic clock, the atoms act like an extremely picky Goldilocks, identifying when a frequency of electromagnetic radiation they are exposed to is too hot, too cold, or just right. The clock starts with a source of electromagnetic radiation, be it a microwave oscillator (like the current commercial atomic clocks) or a laser (like Ye’s clock). No matter how precisely the sources are engineered, they will always have some variation, some bandwidth, and some jitter, making their frequency irregular and unreliable.

Unlike these radiation sources, all atoms of a certain isotope of a species—rubidium, cesium, strontium, or any other—are exactly identical to one another. And any atom has a host of discrete energy levels that it can occupy. Each pair of energy levels has its own energy gap, corresponding to a frequency. If an atom is illuminated by radiation of the exact frequency of one such gap, the atom will absorb the radiation, and the electrons will hop to the higher energy level. Shortly after, the atom will re-emit radiation as those electrons hop back down to the lower energy levels.

During clock operation, a maximally stable (but inevitably still somewhat broadband and jittery) source illuminates the atoms. The electrons get excited and hop energy levels only when the source’s frequency is just right. A detector observes how much of the radiation the atoms absorb (or how much they later re-emit, depending on the architecture) and reports whether the incoming frequency is too high or too low. Then, active feedback stabilizes the source’s frequency to the atoms’ frequency of choice. This precise frequency feeds into a counter that can count the crests and troughs of the electromagnetic radiation—the ticks of the atomic clock. That stabilized count is an ultra-accurate frequency base—a clock, in other words.
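That feedback loop is easy to capture in a few lines of code. The sketch below is a toy model, not any team’s actual servo: it assumes a Lorentzian absorption line and a crude dither-and-correct controller, with every number chosen only so the toy converges.

```python
# Toy model of an atomic-clock feedback loop: a drifting laser is steered
# onto an atomic absorption line by probing either side of the line and
# correcting toward whichever side absorbs more. All values are illustrative.
import random

F_ATOM = 429e12       # assumed optical transition frequency, in hertz
LINEWIDTH = 2e3       # assumed absorption linewidth, in hertz
DITHER = 200.0        # probe offset (Hz) used to sample both sides of the line
GAIN = 1000.0         # feedback gain, tuned only so this toy converges

def absorption(freq):
    """Lorentzian line shape: absorption peaks when freq hits the atomic line."""
    detuning = (freq - F_ATOM) / (LINEWIDTH / 2)
    return 1.0 / (1.0 + detuning * detuning)

laser_freq = F_ATOM + 1e3                    # start 1 kHz off resonance
for _ in range(300):
    laser_freq += random.gauss(0.0, 20.0)    # uncontrolled drift and jitter
    # Error signal: which side of the current frequency absorbs more light?
    error = absorption(laser_freq + DITHER) - absorption(laser_freq - DITHER)
    laser_freq += GAIN * error               # steer toward the absorption peak

print(f"Residual offset from the atomic line: {laser_freq - F_ATOM:+.1f} Hz")
```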

A plethora of effects can degrade the precision of the clock. If the atoms are moving, the frequency of radiation from the atoms’ reference point is altered by the Doppler effect, causing different atoms to select for different frequencies according to their velocity. External electric or magnetic fields, or even heat radiating from a human, can tweak the atoms’ preferred frequency. A vibration can knock a source laser’s frequency so far off that the atoms stop responding altogether, breaking the feedback loop.

Ye chose one of the pickiest atoms of them all, one that would offer very high precision—strontium. To minimize the noisemaking effects of heat, Ye’s team uses more lasers to cool the atoms down to just shy of absolute zero. To better detect the atoms’ signal, they corral the atoms in a periodic lattice—a trap shaped like an egg carton and made by yet another laser. This configuration creates several separate groups of atoms that can all be compared against one another to get a more precise measurement. All in all, Ye’s lab uses seven lasers of different colors for cooling, trapping, preparing the clock state, and detection, all defined by the atoms’ particular needs.

The lasers enable the clock’s astounding precision, but they are also expensive, and they take up a lot of space. Aside from the light source itself, each laser requires a bevy of optical control elements to coax it to the right frequency and alignment—and they are easily misaligned or knocked slightly away from their target color.

“The laser is a weak link,” Ye says. “When you design a microwave oscillator, you put a waveguide around it, and they work forever. Lasers are still very much more gentle or fragile.” The lasers can be thrown out of alignment by someone lightly tapping on one of Ye’s massive tables. Waveguides, meanwhile, being enclosed and bolted down, are much less sensitive.

The lab is run by a team of graduate students and postdocs, bent on ensuring that the laser’s instabilities do not deter them from making the world’s most precise measurements. They have the luxury of pursuing the ultimate precision without concern for worldly practicalities.

The mind-set shift to a commercial product

While Ye and his team pursue perfection in timing, Vector Atomic, the first company to put an optical atomic clock on the market, is after an equally elusive objective: commercial impact.

“Our competition is not Jun Ye,” says Vector Atomic’s Abo-Shaeer. “Our competition is the clocks that are out there—it’s the commercial clocks. We’re trying to bring these more modern timekeeping techniques to bear.”

To be commercially viable, these clocks cannot be thrown off by the bodily heat of a nearby human, nor can they malfunction when someone knocks against the device. So Vector Atomic had to rethink the whole construction of its device from the ground up, and the most fragile part of the system became the company’s focus. “Instead of designing the system around the atom, we designed the system around the lasers,” Abo-Shaeer says.

First, they drastically reduced the number of lasers used in the design. That means no laser cooling—the clock has to work with atoms or molecules in their gaseous state, confined in a glass cell. And there is no periodic lattice to group the atoms into separate clumps and get multiple readings. Both of these choices come with hits to precision, but they were necessary to make robust, compact devices.

Then, to choose their lasers, Abo-Shaeer and his coworkers asked themselves which ones were the most robust, cheap, and well-engineered. The answer was clear—infrared lasers used in mature telecommunications and machining industries. Then they asked themselves which atom, or molecule, had a transition that could be stimulated by such a laser. The answer here was an iodine molecule, whose electrons have a transition at 532 nanometers—conveniently, exactly half the wavelength of a common industrial laser. Halving the wavelength could be achieved by means of a common optical device.
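The arithmetic behind that choice is simple (the 1,064-nanometer figure below is an inference about which common industrial laser is meant, not a number stated by Vector Atomic): because frequency and wavelength are tied by f = c/λ, passing a 1,064-nm infrared beam through a standard frequency-doubling crystal halves its wavelength to 532 nm and doubles its frequency, landing it squarely on the iodine transition.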

“We have all these Ph.D. atomic physicists, and it takes as much or more creativity to get all this into a box as it did when we were graduate students with the ultimate goal of writing Nature and Science papers,” Abo-Shaeer says.

Vector Atomic couldn’t get away with just one laser in its system. Having a box that outputs a very precise laser, oscillating at hundreds of terahertz, sounds cool but is completely useless. No electronics are capable of counting those ticks. To convert the optical signal into a friendly microwave one, while keeping the original signal’s precision, the team needed to incorporate a frequency comb.

Frequency combs are lasers that emit light in regularly spaced bursts in time. Their comblike nature becomes apparent if you look at the frequencies—or colors—of the light they emit, regularly spaced like the teeth of a comb. The subject of the 2005 Nobel Prize in Physics, these devices bridge the optical and microwave domains, allowing laser light to “gear down” to lower frequency range while preserving precision.
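In slightly more concrete terms, the textbook relation for a comb (not a detail specific to any of these products) is that tooth number n sits at the frequency f_n = f_ceo + n × f_rep, where f_rep is the pulse repetition rate and f_ceo is a fixed offset, both in the microwave range, and n is a large integer. Measuring which tooth a clock laser beats against lets its hundreds-of-terahertz frequency be expressed in terms of microwave frequencies that ordinary electronics can count.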

In the past decade, frequency combs underwent their own transformation, from lab-based devices to briefcase-size commercially available products (and even quarter-size prototypes). This development, as much as anything else, unleashed a wave of innovation that enabled the three optical atomic clocks and this nascent market today.

High time for optical time

Inventions often happen in a flurry, as if there were something in the air making conditions ripe for the new innovation. Alongside Vector Atomic’s Evergreen-30 clock, Infleqtion and QuantX Labs have both developed clocks of their own in short order. Infleqtion has made seven sales to date of their clock, Tiqker (yes, quantum-tech companies are morally obligated to put a q in every name). QuantX Labs, meanwhile, has sold the first two of their Tempo clocks, with delivery to customers scheduled before the end of this year, says Andre Luiten, cofounder and managing director of QuantX Labs. (A fourth company, Vescent, based in Golden, Colo., is also selling an optical atomic clock, although it is not integrated into a single box.)

Vector Atomic, QuantX Labs, and Infleqtion all have plans to send prototypes of their clocks into space. QuantX Labs has designed a 20-liter engineering model of their space clock [left]. QuantX Labs

Independently, all three companies have made surprisingly similar design choices. They all realized that lasers were the limiting factor, and so chose to use a glass cell filled with atomic vapor rather than a vacuum chamber and laser cooling and trapping. They all opted to double the frequency of a telecom laser. Unlike Vector Atomic, Infleqtion and QuantX Labs chose the rubidium atom. The energy gap in rubidium, around 780 nm, can be addressed by a frequency-doubled infrared laser at 1,560 nm. QuantX Labs stands out for using two such lasers, very close to each other in frequency, to probe through a clever two-tone scheme that requires less power. They all managed to fit their clock systems into a 30-liter box, roughly the size of a briefcase.

All three companies took great pains to ensure that their clocks will operate robustly in realistic environments. At the lower level of precision compared with lab-based optical clocks, the radiation coming from a nearby person is no longer an issue. However, by doing away with laser cooling, these companies have heightened the possibility that temperature and motion could affect the atoms’ internal ticking frequency.

“You’ve got to be smart about the way you make the atomic cell so that it’s not coupled to the environment,” says Luiten.

Optical clocks set sail and take flight

In mid-2022, to test the robustness of their designs, Vector Atomic and the partners in QuantX Labs’ venture, the University of Adelaide and Australia’s Defence Science and Technology Group, took their clocks out to sea. They brought their clocks to Pearl Harbor, in Hawaii, to participate in the Alternative Position, Navigation and Timing Challenge at Rim of the Pacific, a defense collaboration among the Five Eyes nations—Australia, Canada, New Zealand, the United Kingdom, and the United States. “They were playing touch rugby with the New Zealand sailors. So that was an awesome experience for atomic physicists,” Abo-Shaeer says.

After 20 days aboard a naval ship, Vector Atomic’s optical clocks maintained a performance that was very close to that of their measurements under lab conditions. “When it happened, I thought everyone should be standing up and shouting from the rooftops,” says Jonathan Hoffman, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA), which cofunded Vector Atomic’s work. “People have been working on these optical clocks for decades. And this was the first time an optical clock ran on its own without human interference, out in the real world.”

Vector Atomic and QuantX affiliate the University of Adelaide installed their optical atomic clocks on a ship [top] to test their robustness in a harsh environment. The performance of Vector Atomic’s clocks [bottom] remained basically unchanged despite the ship’s rocking, temperature swings, and water sprays. The University of Adelaide’s clock degraded somewhat, but the team used the trial to improve their design. Will Lunden

The University of Adelaide’s clock did suffer some degradation at sea, but a critical outcome of the trial was an understanding of why that happened. This has allowed the team to amend its design to avoid the leading causes of noise, says Luiten.

In May 2024, Infleqtion sent its Tiqker clock into flight, along with its atom-based navigation system. A short-haul flight from MoD Boscombe Down, a military aircraft testing site in the United Kingdom, carried the quantum tech along with the U.K.’s science minister, Andrew Griffith. The company is still analyzing data from the flight, but at a minimum the clock has outperformed all onboard references, according to Judith Olson, head of the atomic clock project at Infleqtion.

All three companies are working on yet more compact versions of their clocks. All are confident they will be able to get their briefcase-size boxes down from a volume of about 30 liters to 5 L, about the size of an old-school two-slice toaster, say Olson, Luiten, and Abo-Shaeer. “Mostly those boxes are still empty space,” Luiten says.

During the sea trials, Vector Atomic’s and the University of Adelaide’s clocks were exposed to the elements. Jon Roslund

Infleqtion also has designs for an even smaller, 100-mL version, which leverages integrated photonics to make such tight packaging possible. “At that point, you basically have a clock that can fit in your pocket,” says Olson. “It might make a very warm pocket after a while, because the power draw will still be high. But even with the large power draw, that’s something we perceive as being potentially extremely disruptive.”

All three companies also plan to send their designs into space, aboard satellites, in the next several years. Under their Kairos mission, QuantX will launch a component of their Tempo clock into space in 2025, with a full launch scheduled for 2026.

Precision timing today

So why would someone need the astounding precision of an optical atomic clock? The most likely immediate use cases will be in situations where GPS is unavailable.

When most people think of GPS, they picture that blue dot on a map on their smartphone. But behind that dot is a sophisticated network of remarkable timing devices. It starts with Coordinated Universal Time (UTC), the standard established by averaging together about 400 atomic clocks of various kinds all over the world.

“UTC is known to be some factor of 1 million more stable than any astronomical sense of time provided by Earth’s rotation,” says Jeffrey Sherman, a supervisory physicist at NIST who works on maintaining and improving UTC clocks.

UTC is transmitted to satellites in the GPS network twice a day. Each satellite carries an onboard clock of its own, a microwave atomic clock usually based on rubidium. These clocks are precise to about a nanosecond during that half-day they are left to their own devices, Sherman says. From there, satellites provide the time for all kinds of facilities here on Earth, including data centers, financial institutions, power grids, and cell towers.

Precise timing is what allows the satellites to locate that blue dot on a phone map, too. A phone must connect to at least four GPS satellites and receive a precise time signal from each of them. The signals arrive at slightly different times because they travel different distances from the satellites. Using those differences, and knowing the positions of the satellites, the phone triangulates its own position. So the precision of timing aboard the satellites directly relates to how precisely the location of any phone can be determined—currently about 2 meters in the nonmilitary version of the service.
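The conversion from timing error to position error is essentially just the speed of light (a rough rule of thumb that ignores satellite geometry): a range error is about c × Δt, so 1 nanosecond of timing error corresponds to roughly 30 centimeters, a handful of nanoseconds to the 2-meter civilian accuracy above, and a picosecond-level timing fabric to fractions of a millimeter, which is the promise behind the “GPS 2.0” idea discussed below.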

The precisely timed future

Optical atomic clocks can usefully inject themselves into multiple stages of this worldwide timing scheme. First, if they prove reliable enough over the long term, they can be used in defining the UTC standard alongside—and eventually instead of—other clocks. Currently, the majority of the clocks that make up the standard are hydrogen masers. Hydrogen masers have a precision similar to that of the new portable optical clocks, but they are far from portable: They are roughly the size of a kitchen refrigerator and require a room-size thermally and vibrationally controlled environment.

“I think everyone can agree the maser is probably at the end of its technological evolution,” Sherman says. “They’ve stopped really getting a lot better, while on day one, the first crop of optical clocks are comparable. There’s a hope that by encouraging development, they can take over, and they can become much better in the near future.”

The global timing infrastructure: A collection of precise clocks, including hydrogen masers and atomic clocks, is used to create Coordinated Universal Time (UTC). A network of satellites carries atomic clocks of their own, synced to UTC on a regular basis. The satellites then send precise timing to data centers, financial institutions, the power grid, cell towers, and more. Four or more satellites are used to determine your phone’s GPS position. An optical atomic clock can be included in UTC, sent aboard satellites, or used as backup in data centers, financial institutions, or cell towers. Chris Philpot

Second, optical clocks can come in handy in situations where GPS isn’t available. Although many people experience GPS as extremely reliable, jammed or spoofed GPS is very common in times of war or conflict. (To see a daily map of where GPS is unavailable due to interference, check out gpsjam.org.)

This is a big issue for the U.S. Department of Defense. Not having access to GPS-based time compromises military communications. “For the DOD, it’s very important that we can put this on many, many different platforms,” DARPA’s Hoffman says. “We want to put it on ships, we want to put it on aircraft, we want to put it on satellites and vehicles.”

It can also be an issue in financial markets, data centers, and 5G communications. All of these use cases require timing precise to about 1 microsecond to function properly and meet regulatory requirements. That means the source of timing for these applications must be at least an order of magnitude better, or roughly 100-nanosecond resolution. GPS provides this with room to spare, but if these industries rely solely on GPS, jamming or spoofing puts them at great risk.

A local microwave atomic clock can provide a backup, but these clocks lose several nanoseconds a day even in controlled-temperature environments. Optical atomic clocks can provide these industries with security, so that even if they lose access to GPS for extended periods of time, their operations will continue unimpeded.

“By having this headroom in performance, it means that we can trust how well our clocks are ticking hours and days or even months later,” says Infleqtion’s Olson. “The lower-performing clocks don’t have that.”

Finally, portable optical atomic clocks open up the possibility of a future where the entire timing fabric goes from nanosecond to picosecond resolution. That means sending these clocks into space to form their own version of a more-precise GPS. Among other things, this would enable location precision that’s several millimeters instead of 2 meters.

“We call it GPS 2.0,” says Vector Atomic’s Abo-Shaeer. He argues that millimeter-scale location resolution would allow autonomous vehicles to stay in their lanes, or make it possible for delivery drones to land on a New York City balcony.

Perhaps most exciting of all, this invention promises to open the possibility for many inventions in a variety of fields. Having the option of superior timing will open new applications that have not yet been envisioned. “A lot of applications are built around the current limitations of GPS. In other words, it’s sort of a catch-22,” says David Howe, emeritus and former leader of the time and frequency metrology group of NIST. “You get into this mode where you don’t ever cross over to something better because the applications are designed for what’s available. So, it’ll take a larger vision to say, ‘Let’s see what we can do with optical clocks.’”

This article appears in the November 2024 print issue as “Squeezing an Optical Atomic Clock Into a Briefcase.”

Startup Says It Can Make a 100x Faster CPU



In an era of fast-evolving AI accelerators, general purpose CPUs don’t get a lot of love. “If you look at the CPU generation by generation, you see incremental improvements,” says Timo Valtonen, CEO and co-founder of Finland-based Flow Computing.

Valtonen’s goal is to put CPUs back in their rightful, ‘central’ role. In order to do that, he and his team are proposing a new paradigm. Instead of trying to speed up computation by putting 16 identical CPU cores into, say, a laptop, a manufacturer could put 4 standard CPU cores and 64 of Flow Computing’s so-called parallel processing unit (PPU) cores into the same footprint, and achieve up to 100 times better performance. Valtonen and his collaborators laid out their case at the IEEE Hot Chips conference in August.

The PPU provides a speed-up in cases where the computing task is parallelizable, but a traditional CPU isn’t well equipped to take advantage of that parallelism, yet offloading to something like a GPU would be too costly.

“Typically, we say, ‘okay, parallelization is only worthwhile if we have a large workload,’ because otherwise the overhead kills a lot of our gains,” says Jörg Keller, professor and chair of parallelism and VLSI at FernUniversität in Hagen, Germany, who is not affiliated with Flow Computing. “And this now changes towards smaller workloads, which means that there are more places in the code where you can apply this parallelization.”

Computing tasks can roughly be broken up into two categories: sequential tasks, where each step depends on the outcome of a previous step, and parallel tasks, which can be done independently. Flow Computing CTO and co-founder Martti Forsell says a single architecture cannot be optimized for both types of tasks. So, the idea is to have separate units that are optimized for each type of task.
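A minimal illustration of the distinction, in plain Python rather than anything Flow-specific: the first loop below is inherently sequential because each step consumes the previous result, while the second is a parallel task because every element can be processed independently.

```python
# Sequential task: each iteration depends on the result of the one before it,
# so the steps cannot run at the same time.
def running_balance(transactions, start=0.0):
    balance = start
    history = []
    for amount in transactions:
        balance += amount          # needs the previous balance
        history.append(balance)
    return history

# Parallel task: every element is handled independently, so the work could be
# split across any number of cores (or, in Flow's proposal, PPU cores).
def brighten(pixels, factor=1.2):
    return [min(255, int(p * factor)) for p in pixels]

print(running_balance([100.0, -30.0, 45.5]))   # [100.0, 70.0, 115.5]
print(brighten([10, 200, 240]))                # [12, 240, 255]
```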

“When we have a sequential workload as part of the code, then the CPU part will execute it. And when it comes to parallel parts, then the CPU will assign that part to the PPU. Then we have the best of both worlds,” Forsell says.

According to Forsell, there are four main requirements for a computer architecture that’s optimized for parallelism: tolerating memory latency, which means finding ways to not just sit idle while the next piece of data is being loaded from memory; sufficient bandwidth for communication between so-called threads, chains of processor instructions that are running in parallel; efficient synchronization, which means making sure the parallel parts of the code execute in the correct order; and low-level parallelism, or the ability to use the multiple functional units that actually perform mathematical and logical operations simultaneously. For Flow Computing’s new approach, “we have redesigned, or started designing an architecture from scratch, from the beginning, for parallel computation,” Forsell says.

Any CPU can be potentially upgraded

To hide the latency of memory access, the PPU implements multi-threading: when each thread calls to memory, another thread can start running while the first thread waits for a response. To optimize bandwidth, the PPU is equipped with a flexible communication network, such that any functional unit can talk to any other one as needed, also allowing for low-level parallelism. To deal with synchronization delays, it utilizes a proprietary algorithm called wave synchronization that is claimed to be up to 10,000 times more efficient than traditional synchronization protocols.
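The latency-hiding idea is easiest to see with ordinary software threads. The sketch below uses Python’s thread pool purely as an analogy; the PPU does this in hardware with its own thread scheduling, not with operating-system threads.

```python
# Analogy for latency hiding: while one "thread" waits on a slow memory
# fetch, others make progress, so the waits overlap instead of adding up.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_and_process(i):
    time.sleep(0.1)        # stand-in for a slow memory access
    return i * i           # stand-in for the computation on the fetched data

start = time.time()
serial = [fetch_and_process(i) for i in range(8)]              # waits add up: ~0.8 s
serial_time = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    overlapped = list(pool.map(fetch_and_process, range(8)))   # waits overlap: ~0.1 s
overlapped_time = time.time() - start

print(f"serial: {serial_time:.2f} s, overlapped: {overlapped_time:.2f} s")
```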

To demonstrate the power of the PPU, Forsell and his collaborators built a proof-of-concept FPGA implementation of their design. The team says that the FPGA performed identically to their simulator, demonstrating that the PPU is functioning as expected. The team performed several comparison studies between their PPU design and existing CPUs. “Up to 100x [improvement] was reached in our preliminary performance comparisons assuming that there would be a silicon implementation of a Flow PPU running at the same speed as one of the compared commercial processors and using our microarchitecture,” Forsell says.

Now, the team is working on a compiler for their PPU, as well as looking for partners in the CPU production space. They are hoping that a large CPU manufacturer will be interested in their product, so that they could work on a co-design. Their PPU can be implemented with any instruction set architecture, so any CPU can be potentially upgraded.

“Now is really the time for this technology to go to market,” says Keller. “Because now we have the necessity of energy efficient computing in mobile devices, and at the same time, we have the need for high computational performance.”

Transistor-like Qubits Hit Key Benchmark



A team in Australia has recently demonstrated a key advance in metal-oxide-semiconductor-based (or MOS-based) quantum computers. They showed that their two-qubit gates—logical operations that involve more than one quantum bit, or qubit—perform without errors 99 percent of the time. This number is important, because it is the baseline necessary to perform error correction, which is believed to be necessary to build a large-scale quantum computer. What’s more, these MOS-based quantum computers are compatible with existing CMOS technology, which will make it more straightforward to manufacture a large number of qubits on a single chip than with other techniques.

“Getting over 99 percent is significant because that is considered by many to be the error correction threshold, in the sense that if your fidelity is lower than 99 percent, it doesn’t really matter what you’re going to do in error correction,” says Yuval Boger, CCO of quantum computing company QuEra, who wasn’t involved in the work. “You’re never going to fix errors faster than they accumulate.”
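The arithmetic behind that threshold intuition is straightforward (a rough illustration, not the formal fault-tolerance analysis): if each two-qubit gate fails with probability p, a circuit of N such gates succeeds with probability of roughly (1 − p)^N. At p = 1 percent, a 100-gate circuit already succeeds only about 37 percent of the time, since 0.99^100 ≈ 0.37, which is why gate errors need to be at or below about this level before adding error correction helps more than it hurts.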

There are many contending platforms in the race to build a useful quantum computer. IBM, Google, and others are building their machines out of superconducting qubits. Quantinuum and IonQ use individual trapped ions. QuEra and Atom Computing use neutral atoms. Xanadu and PsiQuantum are betting on photons. The list goes on.

In the new result, a collaboration between the University of New South Wales (UNSW) and Sydney-based startup Diraq, with contributors from Japan, Germany, Canada, and the U.S., has taken yet another approach: trapping single electrons in MOS devices. “What we are trying to do is we are trying to make qubits that are as close to traditional transistors as they can be,” says Tuomo Tanttu, a research fellow at UNSW who led the effort.

Qubits That Act Like Transistors

These qubits are indeed very similar to a regular transistor, gated in such a way as to have only a single electron in the channel. The biggest advantage of this approach is that it can be manufactured using traditional CMOS technologies, making it theoretically possible to scale to millions of qubits on a single chip. Another advantage is that MOS qubits can be integrated on-chip with standard transistors for simplified input, output, and control, says Diraq CEO Andrew Dzurak.

The drawback of this approach, however, is that MOS qubits have historically suffered from device-to-device variability, causing significant noise on the qubits.

“The sensitivity in [MOS] qubits is going to be more than in transistors, because in transistors, you still have 20, 30, 40 electrons carrying the current. In a qubit device, you’re really down to a single electron,” says Ravi Pillarisetty, a senior device engineer for Intel quantum hardware who wasn’t involved in the work.

The team’s result not only demonstrated 99 percent fidelity for two-qubit gates on the test devices but also helped better explain the sources of device-to-device variability. The team tested three devices with three qubits each. In addition to measuring the error rate, they also performed comprehensive studies to glean the underlying physical mechanisms that contribute to noise.

The researchers found that one of the sources of noise was isotopic impurities in the silicon layer, which, when controlled, greatly reduced the circuit complexity necessary to run the device. The next leading cause of noise was small variations in electric fields, likely due to imperfections in the oxide layer of the device. Tanttu says this is likely to improve by transitioning from a laboratory clean room to a foundry environment.

“It’s a great result and great progress. And I think it’s setting the right direction for the community in terms of thinking less about one individual device, or demonstrating something on an individual device, versus thinking more longer term about the scaling path,” Pillarisetty says.

Now, the challenge will be to scale up these devices to more qubits. One difficulty with scaling is the number of input/output channels required. The quantum team at Intel, which is pursuing a similar technology, has recently pioneered a chip it calls Pando Tree to try to address this issue. Pando Tree will be on the same plane as the quantum processor, enabling faster inputs and outputs to the qubits. The Intel team hopes to use it to scale to thousands of qubits. “A lot of our approach is thinking about, how do we make our qubit processor look more like a modern CPU?” says Pillarisetty.

Similarly, Diraq CEO Dzurak says his team plans to scale their technology to thousands of qubits in the near future through a recently announced partnership with Global Foundries. “With Global Foundries, we designed a chip that will have thousands of these [MOS qubits]. And these will be interconnected by using classical transistor circuitry that we designed. This is unprecedented in the quantum computing world,” Dzurak says.

NIST Announces Post-Quantum Cryptography Standards



Today, almost all data on the Internet, including bank transactions, medical records, and secure chats, is protected with an encryption scheme called RSA (named after its creators Rivest, Shamir, and Adleman). This scheme is based on a simple fact—it is virtually impossible to calculate the prime factors of a large number in a reasonable amount of time, even on the world’s most powerful supercomputer. Unfortunately, large quantum computers, if and when they are built, would find this task a breeze, thus undermining the security of the entire Internet.

Luckily, quantum computers are only better than classical ones at a select class of problems, and there are plenty of encryption schemes where quantum computers don’t offer any advantage. Today, the U.S. National Institute of Standards and Technology (NIST) announced the standardization of three post-quantum cryptography encryption schemes. With these standards in hand, NIST is encouraging computer system administrators to begin transitioning to post-quantum security as soon as possible.

“Now our task is to replace the protocol in every device, which is not an easy task.” —Lily Chen, NIST

These standards are likely to be a big element of the Internet’s future. NIST’s previous cryptography standards, developed in the 1970s, are used in almost all devices, including Internet routers, phones, and laptops, says Lily Chen, head of the cryptography group at NIST who led the standardization process. But adoption will not happen overnight.

“Today, public key cryptography is used everywhere in every device,” Chen says. “Now our task is to replace the protocol in every device, which is not an easy task.”

Why we need post-quantum cryptography now

Most experts believe large-scale quantum computers won’t be built for at least another decade. So why is NIST worried about this now? There are two main reasons.

First, many devices that use RSA security, like cars and some IoT devices, are expected to remain in use for at least a decade. So they need to be equipped with quantum-safe cryptography before they are released into the field.

“For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible.” —Richard Marty, LGT Financial Services

Second, a nefarious individual could potentially download and store encrypted data today, and decrypt it once a large enough quantum computer comes online. This concept is called “harvest now, decrypt later,” and by its nature, it poses a threat to sensitive data now, even if that data can only be cracked in the future.

Security experts in various industries are starting to take the threat of quantum computers seriously, says Joost Renes, principal security architect and cryptographer at NXP Semiconductors. “Back in 2017, 2018, people would ask ‘What’s a quantum computer?’” Renes says. “Now, they’re asking ‘When will the PQC standards come out and which one should we implement?’”

Richard Marty, chief technology officer at LGT Financial Services, agrees. “For us, it’s not an option to just wait and see what happens. We want to be ready and implement solutions as soon as possible, to avoid harvest now and decrypt later.”

NIST’s competition for the best quantum-safe algorithm

NIST announced a public competition for the best PQC algorithm back in 2016. They received a whopping 82 submissions from teams in 25 different countries. Since then, NIST has gone through 4 elimination rounds, finally whittling the pool down to four algorithms in 2022.

This lengthy process was a community-wide effort, with NIST taking input from the cryptographic research community, industry, and government stakeholders. “Industry has provided very valuable feedback,” says NIST’s Chen.

These four winning algorithms had intense-sounding names: CRYSTALS-Kyber, CRYSTALS-Dilithium, Sphincs+, and FALCON. Sadly, the names did not survive standardization: The algorithms are now known as Federal Information Processing Standard (FIPS) 203 through 206. FIPS 203, 204, and 205 are the focus of today’s announcement from NIST. FIPS 206, the algorithm previously known as FALCON, is expected to be standardized in late 2024.

The algorithms fall into two categories: general encryption, used to protect information transferred via a public network, and digital signature, used to authenticate individuals. Digital signatures are essential for preventing malware attacks, says Chen.

Every cryptography protocol is based on a math problem that’s hard to solve but easy to check once you have the correct answer. For RSA, it’s factoring large numbers into two primes—it’s hard to figure out what those two primes are (for a classical computer), but once you have one it’s straightforward to divide and get the other.
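As a toy illustration of that asymmetry (real RSA uses primes hundreds of digits long; the tiny numbers below exist only to make the mechanics visible):

```python
# Toy RSA with absurdly small numbers, to show why factoring is the hard part.
p, q = 61, 53                 # the secret primes
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 1234
ciphertext = pow(message, e, n)        # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)      # only the holder of d can decrypt
assert recovered == message

# Breaking it means factoring n back into p and q. Trivial at this size,
# infeasible classically when n has thousands of bits -- but easy for a
# large quantum computer running Shor's algorithm.
for candidate in range(2, int(n ** 0.5) + 1):
    if n % candidate == 0:
        print("factors found:", candidate, n // candidate)
        break
```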

“We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.” —Richard Marty, LGT Financial Services

Two out of the three schemes already standardized by NIST, FIPS 203 and FIPS 204 (as well as the upcoming FIPS 206), are based on another hard problem, the basis of what is called lattice cryptography. Lattice-based schemes rest on problems involving grids of points in many dimensions, such as finding the shortest nonzero vector in a huge lattice, or recovering a secret vector that has been obscured by small random errors. No efficient algorithm, classical or quantum, is known for solving these problems.
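For the mathematically curious, the flavor of problem underneath these lattice-based standards (this is the generic learning-with-errors setup, not the exact parameterization used in FIPS 203 or 204) looks like this: given a random matrix A and a vector b = A·s + e mod q, where e is a vector of small random errors, recover the secret vector s. Without the error term this is ordinary linear algebra; with it, no efficient classical or quantum method of recovering s is known.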

The third standardized scheme, FIPS 205, is based on hash functions—in other words, on functions that convert a message into a fixed-length digest in a way that is practically impossible to reverse.
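A minimal illustration of that one-way behavior, using Python’s standard hashlib module (the library call is real; the example shows the generic property, not FIPS 205’s actual signature construction):

```python
# A cryptographic hash is quick to compute but gives no practical way to
# recover the input, and any tiny change to the input scrambles the output.
import hashlib

digest1 = hashlib.sha256(b"pay Alice 10 dollars").hexdigest()
digest2 = hashlib.sha256(b"pay Alice 11 dollars").hexdigest()

print(digest1)
print(digest2)
print("identical?", digest1 == digest2)   # False: one character changed everything
```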

The standards include the encryption algorithms’ computer code, instructions for how to implement it, and intended uses. There are three levels of security for each protocol, designed to future-proof the standards in case some weaknesses or vulnerabilities are found in the algorithms.

Lattice cryptography survives alarms over vulnerabilities

Earlier this year, a pre-print published to the arXiv alarmed the PQC community. The paper, authored by Yilei Chen of Tsinghua University in Beijing, claimed to show that lattice-based cryptography, the basis of two out of the three NIST protocols, was not, in fact, immune to quantum attacks. On further inspection, Yilei Chen’s argument turned out to have a flaw—and lattice cryptography is still believed to be secure against quantum attacks.

On the one hand, this incident highlights the central problem at the heart of all cryptography schemes: There is no proof that any of the math problems the schemes are based on are actually “hard.” The only proof, even for the standard RSA algorithms, is that people have been trying to break the encryption for a long time, and have all failed. Since post-quantum cryptography standards, including lattice cryptography, are newer, there is less certainty that no one will find a way to break them.

That said, the failure of this latest attempt only builds on the algorithm’s credibility. The flaw in the paper’s argument was discovered within a week, signaling that there is an active community of experts working on this problem. “The result of that paper is not valid, that means the pedigree of the lattice-based cryptography is still secure,” says NIST’s Lily Chen (no relation to Tsinghua University’s Yilei Chen). “People have tried hard to break this algorithm. A lot of people are trying, they try very hard, and this actually gives us confidence.”

NIST’s announcement is exciting, but the work of transitioning all devices to the new standards has only just begun. It is going to take time, and money, to fully protect the world from the threat of future quantum computers.

“We’ve spent 18 months on the transition and spent about half a million dollars on it,” says Marty of LGT Financial Services. “We have a few instances of [PQC], but for a full transition, I couldn’t give you a number, but there’s a lot to do.”
