Zynga owes IBM $45M after using 1980s patented technology for hit games

17 September 2024 at 19:54

Zynga must pay IBM nearly $45 million in damages after a jury found that popular games in its FarmVille series, as well as individual hits like Harry Potter: Puzzles and Spells, infringed two early IBM patents.

In an SEC filing, Zynga reassured investors that "the patents at issue have expired and Zynga will not have to modify or stop operating any of the games at issue" as a result of the loss. But the substantial damages owed will likely have financial implications for Zynga parent company Take-Two Interactive Software, analysts said, unless Zynga is successful in its plans to overturn the verdict.

A Take-Two spokesperson told Ars: "We are disappointed in the verdict; however, believe we will prevail on appeal."


IBM makes developing for quantum computers easier with the Qiskit Functions Catalog

16 September 2024 at 16:00

IBM today launched the Qiskit Functions Catalog, a new set of services that aims to make programming quantum computers easier by abstracting away many of the complexities of working with these machines. “I do think it’s the next big transition since we put the quantum computer on the cloud,” Jay Gambetta, IBM’s VP in charge […]
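To give a flavor of that abstraction, here is a minimal Python sketch of what calling a catalog function could look like. The qiskit_ibm_catalog package, the QiskitFunctionsCatalog class, the "ibm/circuit-function" identifier, the run() signature, and the backend name are assumptions based on IBM's announcement rather than verified API details; only the circuit-building calls are stable core Qiskit.

from qiskit import QuantumCircuit
from qiskit_ibm_catalog import QiskitFunctionsCatalog  # assumed package and class

# Build a simple Bell-state circuit with the stable core Qiskit API.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Load a managed function from the catalog and hand it the abstract circuit;
# the service, not the user, handles transpilation, error suppression, and
# hardware details. All names and arguments here are illustrative assumptions.
catalog = QiskitFunctionsCatalog(token="YOUR_IBM_QUANTUM_TOKEN")
function = catalog.load("ibm/circuit-function")
job = function.run(pubs=[(qc,)], backend_name="ibm_fez")  # assumed signature
print(job.result())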


A Match Made in Yorktown Heights



It pays to have friends in fascinating places. You need look no further than the cover of this issue and the article “IBM’s Big Bet on the Quantum-Centric Supercomputer” for evidence. The article by Ryan Mandelbaum, Antonio D. Córcoles, and Jay Gambetta came to us courtesy of the article’s illustrator, the inimitable graphic artist Carl De Torres, a longtime IEEE Spectrum contributor as well as a design and communications consultant for IBM Research.

Story ideas typically originate with Spectrum’s editors and pitches from expert authors and freelance journalists. So we were intrigued when De Torres approached Spectrum about doing an article on IBM Research’s cutting-edge work on quantum-centric supercomputing.

De Torres has been collaborating with IBM in a variety of capacities since 2009, when, while at Wired magazine creating infographics, he was asked by the ad agency Ogilvy to work on Big Blue’s advertising campaign “Let’s build a Smarter Planet.” That project went so well that De Torres struck out on his own the next year. His relationship with IBM expanded, as did his engagements with other media, such as Spectrum, Fortune, and The New York Times. “My interest in IBM quickly grew beyond helping them in a marketing capacity,” says De Torres, who owns and leads the design studio Optics Lab in Berkeley, Calif. “What I really wanted to do is get to the source of some of the smartest work happening in technology, and that was IBM Research.”

Last year, while working on visualizations of a quantum-centric supercomputer with Jay Gambetta, vice president and lead scientist of IBM Quantum at the Thomas J. Watson Research Center in Yorktown Heights, N.Y., De Torres was inspired to contact Spectrum’s creative director, Mark Montgomery, with an idea.

“I really loved this process because I got to bring together two of my favorite clients to create something really special.” —Carl De Torres

“I thought, ‘You know, I think IEEE Spectrum would love to see this work,’” De Torres told me. “So with Jay’s permission, I gave Mark a 30-second pitch. Mark liked it and ran it by the editors, and they said that it sounded very promising.” De Torres, members of the IBM Quantum team, and Spectrum editors had a call to brainstorm what the article could be. “From there everything quickly fell into place, and I worked with Spectrum and the IBM Quantum team on a visual approach to the story,” De Torres says.

As for the text, we knew it would take a deft editorial hand to help the authors explain what amounts to the peanut butter and chocolate of advanced computing. Fortunately for us, and for you, dear reader, Associate Editor Dina Genkina has a doctorate in atomic physics, in the subfield of quantum simulation. As Genkina explained to me, that specialty is “adjacent to quantum computing, but not quite the same—it’s more like the analog version of QC that’s not computationally complete.”

Genkina was thrilled to work with De Torres to make the technical illustrations both accurate and edifying. Spectrum prides itself on its tech illustrations, which De Torres notes are increasingly rare in the space-constrained era of mobile-media consumption.

“Working with Carl was so exciting,” Genkina says. “It was really his vision that made the article happen, and the scope of his ambition for the story was at times a bit terrifying. But it’s the kind of story where the illustrations make it come to life.”

De Torres was happy with the collaboration, too. “I really loved this process because I got to bring together two of my favorite clients to create something really special.”

This article appears in the September 2024 print issue.

Inside the Three-Way Race to Create the Most Widely Used Laser



The semiconductor laser, invented more than 60 years ago, is the foundation of many of today’s technologies including barcode scanners, fiber-optic communications, medical imaging, and remote controls. The tiny, versatile device is now an IEEE Milestone.

The possibilities of laser technology had set the scientific world alight in 1960, when the laser, long described in theory, was first demonstrated. Three U.S. research centers unknowingly began racing each other to create the first semiconductor version of the technology. The three—General Electric, IBM’s Thomas J. Watson Research Center, and the MIT Lincoln Laboratory—independently reported the first demonstrations of a semiconductor laser, all within a matter of days in 1962.

The semiconductor laser was dedicated as an IEEE Milestone at three ceremonies, with a plaque marking the achievement installed at each facility. The Lincoln Lab event is available to watch on demand.

Invention of the laser spurs a three-way race

The core concept of the laser dates back to 1917, when Albert Einstein theorized about “stimulated emission.” Scientists already knew electrons could absorb and emit light spontaneously, but Einstein posited that electrons could be manipulated to emit at a particular wavelength. It took decades for engineers to turn his theory into reality.

In the late 1940s, physicists were working to improve the design of a vacuum tube the U.S. military had used during World War II to detect enemy planes by amplifying radar signals. Charles Townes, a researcher at Bell Labs in Murray Hill, N.J., was one of them. He proposed a more powerful amplifier that passed a beam of electromagnetic waves through a cavity containing gas molecules. The beam would stimulate the atoms in the gas to release their energy exactly in step with the beam’s waves, so that the beam left the cavity far more powerful than when it entered.

In 1954 Townes, then a physics professor at Columbia, created the device, which he called a “maser” (short for microwave amplification by stimulated emission of radiation). It would prove an important precursor to the laser.

Many theorists had told Townes his device couldn’t possibly work, according to an article published by the American Physical Society. Once it did work, the article says, other researchers quickly replicated it and began inventing variations.

Townes and other engineers figured that by harnessing higher-frequency energy, they could create an optical version of the maser that would generate beams of light. Such a device could potentially generate more powerful beams than were possible with microwaves, and it could also create beams of varied wavelengths, from the infrared to the visible. In 1958 Townes published a theoretical outline of the “laser.”

“It’s amazing what these … three organizations in the Northeast of the United States did 62 years ago to provide all this capability for us now and into the future.”

Several teams worked to fabricate such a device, and in May 1960 Theodore Maiman, a researcher at Hughes Research Lab, in Malibu, Calif., built the first working laser. Maiman’s paper, published in Nature three months later, described the invention as a high-power lamp that flashed light onto a ruby rod placed between two mirrorlike silver-coated surfaces. The optical cavity created by the surfaces oscillated the light produced by the ruby’s fluorescence, achieving Einstein’s stimulated emission.

The basic laser was now a reality. Engineers quickly began creating variations.

Many perhaps were most excited by the potential for a semiconductor laser. Semiconducting material can be manipulated to conduct electricity under the right conditions. By its nature, a laser made from semiconducting material could pack all the required elements of a laser—a source of light generation and amplification, lenses, and mirrors—into a micrometer-scale device.

“These desirable attributes attracted the imagination of scientists and engineers” across disciplines, according to the Engineering and Technology History Wiki.

A pair of researchers discovered in 1962 that an existing material was ideally suited to serve as the heart of a semiconductor laser: gallium arsenide.

Gallium arsenide was ideal for a semiconductor laser

On 9 July 1962, MIT Lincoln Laboratory researchers Robert Keyes and Theodore Quist told the audience at the Solid State Device Research Conference that they were developing an experimental semiconductor laser, IEEE Fellow Paul W. Juodawlkis said during his speech at the IEEE Milestone dedication ceremony at MIT. Juodawlkis is director of the MIT Lincoln Laboratory’s quantum information and integrated nanosystems group.

The laser wasn’t yet emitting a coherent beam, but the work was advancing quickly, Keyes said. And then Keyes and Quist shocked the audience: They said they could prove that nearly 100 percent of the electrical energy injected into a gallium-arsenide semiconductor could be converted into light.

MIT Lincoln Laboratory’s Robert Keyes, Theodore M. Quist, and Robert Rediker [from left] testing their laser on a TV set. Photo: MIT Lincoln Laboratory

No one had made such a claim before. The audience was incredulous—and vocally so.

“When Bob [Keyes] was done with his talk, one of the audience members stood up and said, ‘Uh, that violates the second law of thermodynamics,’” Juodawlkis said.

The audience erupted into laughter. But physicist Robert N. Hall—a semiconductor expert working at GE’s research laboratory in Schenectady, N.Y.—silenced them.

“Bob Hall stood up and explained why it didn’t violate the second law,” Juodawlkis said. “It created a real buzz.”

Several teams raced to develop a working semiconductor laser. The margin of victory ultimately came down to a few days.

A ‘striking coincidence’

A semiconductor laser is made with a tiny semiconductor crystal suspended inside a glass container filled with liquid nitrogen, which helps keep the device cool. Photo: General Electric Research and Development Center/AIP Emilio Segrè Visual Archives

Hall returned to GE, inspired by Keyes and Quist’s speech, certain that he could lead a team to build an efficient, effective gallium arsenide laser.

He had already spent years working with semiconductors and invented what is known as a “p-i-n” diode rectifier. Using a crystal made of purified germanium, a semiconducting material, the rectifier could convert AC to DC—a crucial development for solid-state semiconductors used in electrical transmission.

That experience helped accelerate the development of semiconductor lasers. Hall and his team used a setup similar to that of the “p-i-n” rectifier. They built a diode laser that generated coherent light from a gallium arsenide crystal one-third of a millimeter in size, sandwiched into a cavity between two mirrors so that the light bounced back and forth repeatedly. The news of the invention came out in the 1 November 1962 issue of Physical Review Letters.

As Hall and his team worked, so did researchers at the Watson Research Center, in Yorktown Heights, N.Y. In February 1962 Marshall I. Nathan, an IBM researcher who previously worked with gallium arsenide, received a mandate from his department director, according to ETHW: Create the first gallium arsenide laser.

Nathan led a team of researchers, including William P. Dumke, Gerald Burns, Frederick H. Dill, and Gordon Lasher, to develop the laser. They completed the task in October and hand-delivered a paper outlining their work to Applied Physics Letters, which published it on 1 November 1962.

Over at MIT’s Lincoln Laboratory, Quist, Keyes, and their colleague Robert Rediker published their findings in Applied Physics Letters on 1 December 1962.

It had all happened so quickly that a New York Times article marveled about the “striking coincidence,” noting that IBM officials didn’t know about GE’s success until GE sent invitations to a news conference. An MIT spokesperson told the Times that GE had achieved success “a couple days or a week” before its own team.

Both IBM and GE had applied for U.S. patents in October, and both applications were ultimately granted.

All three facilities now have been honored by IEEE for their work.

“Perhaps nowhere else has the semiconductor laser had greater impact than in communications,” according to an ETHW entry, “where every second, a semiconductor laser quietly encodes the sum of human knowledge into light, enabling it to be shared almost instantaneously across oceans and space.”

IBM Research’s semiconductor laser used a gallium arsenide p-n diode, patterned into a small optical cavity with an etched mesa structure. Photo: IBM

Juodawlkis, speaking at the Lincoln Lab ceremony, noted that semiconductor lasers are used “every time you make a cellphone call” or “Google silly cat videos.”

“If we look in the broader world,” he said, “semiconductor lasers are really one of the founding pedestals of the information age.”

He concluded his speech with a quote summing up a 1963 Time magazine article: “If the world is ever afflicted with a choice between thousands of different TV programs, a few diodes with their feeble beams of infrared light might carry them all at once.”

That was a “prescient foreshadowing of what semiconductor lasers have enabled,” Juodawlkis said. “It’s amazing what these … three organizations in the Northeast of the United States did 62 years ago to provide all this capability for us now and into the future.”

Plaques recognizing the technology are now displayed at GE, the Watson Research Center, and the Lincoln Laboratory. They read:

In the autumn of 1962, General Electric’s Schenectady and Syracuse facilities, IBM Thomas J. Watson Research Center, and MIT Lincoln Laboratory each independently reported the first demonstrations of the semiconductor laser. Smaller than a grain of rice, powered using direct current injection, and available at wavelengths spanning the ultraviolet to the infrared, the semiconductor laser became ubiquitous in modern communications, data storage, and precision measurement systems.

The IEEE Boston, New York, and Schenectady sections sponsored the nomination.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

Atomically Thin Materials Significantly Shrink Qubits



Quantum computing is a devilishly complex technology, with many technical hurdles impeding its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

IBM has adopted a superconducting-qubit road map that calls for a 1,121-qubit processor by 2023, which suggests that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches would require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach would work, the aim is to find a better path toward scalability.

Now researchers at MIT have managed both to shrink the qubits and to do so in a way that reduces interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added to a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, director of the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Like other capacitors, the capacitors in these superconducting circuits sandwich an insulating material between two metal plates. The big difference is that these circuits operate only at extremely low temperatures: less than 0.02 degrees above absolute zero (−273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Photo: Nathan Fiske/MIT

In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
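For a rough sense of the arithmetic, and of why a thin dielectric shrinks the footprint, consider the parallel-plate relation C = ε₀εᵣA/d. The short Python sketch below uses illustrative assumed values, not figures from the MIT work: a transmon-scale capacitance of about 100 femtofarads, an hBN relative permittivity of about 3, and a dielectric roughly 10 nanometers thick.

# Back-of-the-envelope sketch with assumed values (not from the MIT paper):
# compare the footprint of a parallel-plate capacitor with a thin hBN
# dielectric against the ~100 x 100 um coplanar plates described above.
EPS0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 3.0         # assumed out-of-plane relative permittivity of hBN
C_target = 100e-15  # assumed transmon shunt capacitance: 100 fF
d = 10e-9           # assumed hBN stack thickness: about 10 nm

# Parallel-plate formula C = eps0 * eps_r * A / d, solved for the area A.
area = C_target * d / (EPS0 * eps_r)  # required plate area, m^2
side_um = (area ** 0.5) * 1e6         # edge of a square plate, micrometers
print(f"hBN parallel plate: ~{side_um:.0f} um on a side")  # about 6 um

coplanar_area = (100e-6) ** 2         # one 100 um x 100 um coplanar plate
print(f"footprint reduction: ~{coplanar_area / area:.0f}x")  # a few hundred x

Under these assumptions the plate shrinks from 100 micrometers to roughly 6 micrometers on a side, a footprint reduction of a couple of orders of magnitude, consistent with the hundredfold density gain the MIT team reports.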

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates assembling the capacitor in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed,’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
