
Challengers Are Coming for Nvidia’s Crown



It’s hard to overstate Nvidia’s AI dominance. Founded in 1993, Nvidia first made its mark in the then-new field of graphics processing units (GPUs) for personal computers. But it’s the company’s AI chips, not PC graphics hardware, that vaulted Nvidia into the ranks of the world’s most valuable companies. It turns out that Nvidia’s GPUs are also excellent for AI. As a result, its stock is more than 15 times as valuable as it was at the start of 2020; revenues have ballooned from roughly US $12 billion in its 2019 fiscal year to $60 billion in 2024; and the AI powerhouse’s leading-edge chips are as scarce, and desired, as water in a desert.

Access to GPUs “has become so much of a worry for AI researchers, that the researchers think about this on a day-to-day basis. Because otherwise they can’t have fun, even if they have the best model,” says Jennifer Prendki, head of AI data at Google DeepMind. Prendki is less reliant on Nvidia than most, as Google has its own homespun AI infrastructure. But other tech giants, like Microsoft and Amazon, are among Nvidia’s biggest customers, and continue to buy its GPUs as quickly as they’re produced. Exactly who gets them and why is the subject of an antitrust investigation by the U.S. Department of Justice, according to press reports.

Nvidia’s AI dominance, like the explosion of machine learning itself, is a recent turn of events. But it’s rooted in the company’s decades-long effort to establish GPUs as general computing hardware that’s useful for many tasks besides rendering graphics. That effort spans not only the company’s GPU architecture, which evolved to include “tensor cores” adept at accelerating AI workloads, but also, critically, its software platform, called CUDA, to help developers take advantage of the hardware.

“They made sure every computer-science major coming out of university is trained up and knows how to program CUDA,” says Matt Kimball, principal data-center analyst at Moor Insights & Strategy. “They provide the tooling and the training, and they spend a lot of money on research.”

Released in 2006, CUDA helps developers use an Nvidia GPU’s many cores. That’s proved essential for accelerating highly parallelized compute tasks, including modern generative AI. Nvidia’s success in building the CUDA ecosystem makes its hardware the path of least resistance for AI development. Nvidia chips might be in short supply, but the only thing more difficult to find than AI hardware is experienced AI developers—and many are familiar with CUDA.
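To give a flavor of what that programming model looks like, here is a minimal sketch in Python using Numba’s CUDA bindings (an illustrative choice; it assumes the numba package and a CUDA-capable Nvidia GPU, and production AI code more often reaches CUDA through frameworks such as PyTorch). The kernel launches one lightweight GPU thread per array element, the kind of massive parallelism CUDA exposes.

```python
# A minimal sketch of the CUDA programming model from Python, via Numba's
# CUDA bindings. Assumes an Nvidia GPU and the CUDA toolkit are available.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # this thread's global index
    if i < out.size:          # guard against threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # thousands of GPU threads run in parallel

assert np.allclose(out, a + b)
```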

That gives Nvidia a deep, broad moat with which to defend its business, but that doesn’t mean it lacks competitors ready to storm the castle, and their tactics vary widely. While decades-old companies like Advanced Micro Devices (AMD) and Intel are looking to use their own GPUs to rival Nvidia, upstarts like Cerebras and SambaNova have developed radical chip architectures that drastically improve the efficiency of generative AI training and inference. These are the competitors most likely to challenge Nvidia.

Nvidia’s Armory

While Nvidia has several types of GPUs deployed, the big guns found in data centers are the H100 and H200. As soon as the end of 2024, they will be joined by the B200, which nearly quadruples the H100’s performance on a per-GPU basis. Sources: Nvidia, MLPerf inferencing v4.1 results for Llama2-70B

AMD: The other GPU maker

Pro: AMD GPUs are convincing Nvidia alternatives

Con: Software ecosystem can’t rival Nvidia’s CUDA

AMD has battled Nvidia in the graphics-chip arena for nearly two decades. It’s been, at times, a lopsided fight. When it comes to graphics, AMD’s GPUs have rarely beaten Nvidia’s in sales or mindshare. Still, AMD’s hardware has its strengths. The company’s broad GPU portfolio extends from integrated graphics for laptops to AI-focused data-center GPUs with over 150 billion transistors. The company was also an early supporter and adopter of high-bandwidth memory (HBM), a form of memory that’s now essential to the world’s most advanced GPUs.

“If you look at the hardware…it stacks up favorably” to Nvidia, says Kimball, referring to AMD’s Instinct MI325X, a competitor of Nvidia’s H100. “AMD did a fantastic job laying that chip out.”

The MI325X, slated to launch by the end of the year, has over 150 billion transistors and 288 gigabytes of high-bandwidth memory, though real-world results remain to be seen. The MI325X’s predecessor, the MI300X, earned praise from Microsoft, which deploys AMD hardware, including the MI300X, to handle some ChatGPT 3.5 and 4 services. Meta and Dell have also deployed the MI300X, and Meta used the chips in parts of the development of its latest large language model, Llama 3.1.

There’s still a hurdle for AMD to leap: software. AMD offers an open-source platform, ROCm, to help developers program its GPUs, but it’s less popular than CUDA. AMD is aware of this weakness, and in July 2024, it agreed to buy Europe’s largest private AI lab, Silo AI, which has experience doing large-scale AI training using ROCm and AMD hardware. AMD also plans to purchase ZT Systems, a company with expertise in data-center infrastructure, to help it serve customers looking to deploy its hardware at scale. Building a rival to CUDA is no small feat, but AMD is certainly trying.

Intel: Software success

Pro: Gaudi 3 AI accelerator shows strong performance

Con: Next big AI chip doesn’t arrive until late 2025

Intel’s challenge is the opposite of AMD’s.

While Intel lacks an exact match for Nvidia’s CUDA and AMD’s ROCm, it launched an open-source unified programming platform, OneAPI, in 2018. Unlike CUDA and ROCm, OneAPI spans multiple categories of hardware, including CPUs, GPUs, and FPGAs. So it can help developers accelerate AI tasks (and many others) on any Intel hardware. “Intel’s got a heck of a software ecosystem it can turn on pretty easily,” says Kimball.

Hardware, on the other hand, is a weakness, at least when compared to Nvidia and AMD. Intel’s Gaudi AI accelerators, the fruit of Intel’s 2019 acquisition of AI hardware startup Habana Labs, have made headway, and the latest, Gaudi 3, offers performance that’s competitive with Nvidia’s H100.

However, it’s unclear precisely what Intel’s next hardware release will look like, which has caused some concern. “Gaudi 3 is very capable,” says Patrick Moorhead, founder of Moor Insights & Strategy. But as of July 2024 “there is no Gaudi 4,” he says.

Intel instead plans to pivot to an ambitious chip, code-named Falcon Shores, with a tile-based modular architecture that combines Intel x86 CPU cores and Xe GPU cores; the latter are part of Intel’s recent push into graphics hardware. Intel has yet to reveal details about Falcon Shores’ architecture and performance, though, and it’s not slated for release until late 2025.

Cerebras: Bigger is better

Pro: Wafer-scale chips offer strong performance and memory per chip

Con: Applications are niche due to size and cost

Make no mistake: AMD and Intel are by far the most credible challengers to Nvidia. They share a history of designing successful chips and building programming platforms to go alongside them. But among the smaller, less proven players, one stands out: Cerebras.

The company, which specializes in AI for supercomputers, made waves in 2019 with the Wafer Scale Engine, a gigantic, wafer-size piece of silicon packed with 1.2 trillion transistors. The most recent iteration, Wafer Scale Engine 3, ups the ante to 4 trillion transistors. For comparison, Nvidia’s largest and newest GPU, the B200, has “just” 208 billion transistors. The computer built around this wafer-scale monster, Cerebras’s CS-3, is at the heart of the Condor Galaxy 3, which will be an 8-exaflop AI supercomputer made up of 64 CS-3s. G42, an Abu Dhabi–based conglomerate that hopes to train tomorrow’s leading-edge large language models, will own the system.

“It’s a little more niche, not as general purpose,” says Stacy Rasgon, senior analyst at Bernstein Research. “Not everyone is going to buy [these computers]. But they’ve got customers, like the [United States] Department of Defense, and [the Condor Galaxy 3] supercomputer.”

Cerebras’s WSE-3 isn’t going to challenge Nvidia, AMD, or Intel hardware in most situations; it’s too large, too costly, and too specialized. But it could give Cerebras a unique edge in supercomputers, because no other company designs chips on the scale of the WSE.

SambaNova: A transformer for transformers

Pro: Configurable architecture helps developers squeeze efficiency from AI models

Con: Hardware still has to prove relevance to mass market

SambaNova, founded in 2017, is another chip-design company tackling AI training with an unconventional chip architecture. Its flagship, the SN40L, has what the company calls a “reconfigurable dataflow architecture” composed of tiles of memory and compute resources. The links between these tiles can be altered on the fly to facilitate the quick movement of data for large neural networks.

Prendki believes such customizable silicon could prove useful for training large language models, because AI developers can optimize the hardware for different models. No other company offers that capability, she says.

SambaNova is also scoring wins with SambaFlow, the software stack used alongside the SN40L. “At the infrastructure level, SambaNova is doing a good job with the platform,” says Moorhead. SambaFlow can analyze machine learning models and help developers reconfigure the SN40L to accelerate the model’s performance. SambaNova still has a lot to prove, but its customers include SoftBank and Analog Devices.

Groq: Form for function

Pro: Excellent AI inference performance

Con: Application currently limited to inference

Yet another company with a unique spin on AI hardware is Groq. Groq’s approach is focused on tightly pairing memory and compute resources to accelerate the speed with which a large language model can respond to prompts.

“Their architecture is very memory based. The memory is tightly coupled to the processor. You need more nodes, but the price per token and the performance is nuts,” says Moorhead. The “token” is the basic unit of data a model processes; in an LLM, it’s typically a word or portion of a word. Groq’s performance is even more impressive, he says, given that its chip, called the Language Processing Unit Inference Engine, is made using GlobalFoundries’ 14-nanometer technology, several generations behind the TSMC technology that makes the Nvidia H100.

In July, Groq posted a demonstration of its chip’s inference speed, which can exceed 1,250 tokens per second running Meta’s 8-billion-parameter Llama 3 LLM. That beats even SambaNova’s demo, which can exceed 1,000 tokens per second.
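For readers new to the metric, tokens per second is simply the number of tokens in the generated text divided by the time taken to produce it. The sketch below is illustrative only: it uses the open-source tiktoken tokenizer as a stand-in (Llama 3 ships its own tokenizer), and the output text and timing are invented.

```python
# Illustrative only: how a tokens-per-second figure is computed.
# tiktoken is used as a stand-in tokenizer; Llama 3 uses its own.
import tiktoken

encoder = tiktoken.get_encoding("cl100k_base")

response_text = "The quick brown fox jumps over the lazy dog. " * 200  # stand-in LLM output
elapsed_seconds = 0.8                                                  # hypothetical timing

num_tokens = len(encoder.encode(response_text))
print(f"{num_tokens} tokens in {elapsed_seconds} s "
      f"= {num_tokens / elapsed_seconds:.0f} tokens per second")
```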

Qualcomm: Power is everything

Pro: Broad range of chips with AI capabilities

Con: Lacks large, leading-edge chips for AI training

Qualcomm, well known for the Snapdragon system-on-a-chip that powers popular Android phones like the Samsung Galaxy S24 Ultra and OnePlus 12, is a giant that can stand toe-to-toe with AMD, Intel, and Nvidia.

But unlike those peers, the company is focusing its AI strategy more on AI inference and energy efficiency for specific tasks. Anton Lokhmotov, a founding member of the AI benchmarking organization MLCommons and CEO of Krai, a company that specializes in AI optimization, says Qualcomm has significantly improved the inference performance of the Qualcomm Cloud AI 100 servers in an important benchmark test. The servers’ performance increased from 180 to 240 samples per watt on ResNet-50, an image-classification benchmark, using “essentially the same server hardware,” Lokhmotov notes.

Efficient AI inference is also a boon on devices that need to handle AI tasks locally without reaching out to the cloud, says Lokhmotov. Case in point: Microsoft’s Copilot Plus PCs. Microsoft and Qualcomm partnered with laptop makers, including Dell, HP, and Lenovo, and the first Copilot Plus laptops with Qualcomm chips hit store shelves in July. Qualcomm also has a strong presence in smartphones and tablets, where its Snapdragon chips power devices from Samsung, OnePlus, and Motorola, among others.

Qualcomm is an important player in AI for driver assist and self-driving platforms, too. In early 2024, Hyundai’s Mobis division announced a partnership to use the Snapdragon Ride platform, a rival to Nvidia’s Drive platform, for advanced driver-assist systems.

The Hyperscalers: Custom brains for brawn

Pro: Vertical integration focuses design

Con: Hyperscalers may prioritize their own needs and uses first

Hyperscalers—cloud-computing giants that deploy hardware at vast scales—are synonymous with Big Tech. Amazon, Apple, Google, Meta, and Microsoft all want to deploy AI hardware as quickly as possible, both for their own use and for their cloud-computing customers. To accelerate that, they’re all designing chips in-house.

Google began investing in AI processors much earlier than its competitors: The search giant’s Tensor Processing Units, first announced in 2015, now power most of its AI infrastructure. The sixth generation of TPUs, Trillium, was announced in May and is part of Google’s AI Hypercomputer, a cloud-based service for companies looking to handle AI tasks.

Prendki says Google’s TPUs give the company an advantage in pursuing AI opportunities. “I’m lucky that I don’t have to think too hard about where I get my chips,” she says. Access to TPUs doesn’t entirely eliminate the supply crunch, though, as different Google divisions still need to share resources.

And Google is no longer alone. Amazon has two in-house chips, Trainium and Inferentia, for training and inference, respectively. Microsoft has Maia, Meta has MTIA, and Apple is supposedly developing silicon to handle AI tasks in its cloud infrastructure.

None of these compete directly with Nvidia, as hyperscalers don’t sell hardware to customers. But they do sell access to their hardware through cloud services, like Google’s AI Hypercomputer, Amazon’s AWS, and Microsoft’s Azure. In many cases, hyperscalers offer services running on their own in-house hardware as an option right alongside services running on hardware from Nvidia, AMD, and Intel; Microsoft is thought to be Nvidia’s largest customer.

Chinese chips: An opaque future

Another category of competitor is born not of technical needs but of geopolitical realities. The United States has imposed restrictions on the export of AI hardware that prevent chipmakers from selling their latest, most capable chips to Chinese companies. In response, Chinese companies are designing homegrown AI chips.

Huawei is a leader. The company’s Ascend 910B AI accelerator, designed as an alternative to Nvidia’s H100, is in production at Semiconductor Manufacturing International Corp., a Shanghai-based foundry partially owned by the Chinese government. However, yield issues at SMIC have reportedly constrained supply. Huawei is also selling an “AI-in-a-box” solution, meant for Chinese companies looking to build their own AI infrastructure on-premises.

To get around the U.S. export control rules, Chinese industry could turn to alternative technologies. For example, Chinese researchers have made headway in photonic chips that use light, instead of electric charge, to perform calculations. “The advantage of a beam of light is you can cross one [beam with] another,” says Prendki. “So it reduces constraints you’d normally have on a silicon chip, where you can’t cross paths. You can make the circuits more complex, for less money.” It’s still very early days for photonic chips, but Chinese investment in the area could accelerate the technology’s development.

Room for more

It’s clear that Nvidia has no shortage of competitors. It’s equally clear that none of them will challenge—never mind defeat—Nvidia in the next few years. Everyone interviewed for this article agreed that Nvidia’s dominance is currently unparalleled, but that doesn’t mean it will crowd out competitors forever.

“Listen, the market wants choice,” says Moorhead. “I can’t imagine AMD not having 10 or 20 percent market share, Intel the same, if we go to 2026. Typically, the market likes three, and there we have three reasonable competitors.” Kimball says the hyperscalers, meanwhile, could challenge Nvidia as they transition more AI services to in-house hardware.

And then there are the wild cards. Cerebras, SambaNova, and Groq are the leaders in a very long list of startups looking to nibble away at Nvidia with novel solutions. They’re joined by dozens of others, including d-Matrix, Untether, Tenstorrent, and Etched, all pinning their hopes on new chip architectures optimized for generative AI. It’s likely many of these startups will falter, but perhaps the next Nvidia will emerge from the survivors.

In 1926, TV Was Mechanical



Scottish inventor John Logie Baird had a lot of ingenious ideas, not all of which caught on. His phonovision was an early attempt at video recording, with the signals preserved on phonograph records. His noctovision used infrared light to see objects in the dark, which some experts claim was a precursor to radar.

But Baird earned his spot in history with the televisor. On 26 January 1926, select members of the Royal Institution gathered at Baird’s lab in London’s Soho neighborhood to witness the broadcast of a small but clearly defined image of a ventriloquist dummy’s face, sent from the televisor’s electromechanical transmitter to its receiver. He also demonstrated the televisor with a human subject, whom observers could see speaking and moving on the screen. For this, Baird is often credited with the first public demonstration of television.

John Logie Baird [shown here] used the heads of ventriloquist dummies in early experiments because they didn’t mind the heat and bright lights of his televisor. Science History Images/Alamy

How the Nipkow Disk Led to Baird’s Televisor

To be clear, Baird didn’t invent television. Television is one of those inventions that benefited from many contributors, collaborators, and competitors. Baird’s starting point was an idea for an “electric telescope,” patented in 1885 by German engineer Paul Nipkow.

Nipkow’s apparatus captured a picture by dividing it into a vertical sequence of lines, using a spinning disk with holes perforated around the edge. The perforations were offset in a spiral so that each hole captured one slice of the image in turn—what are known today as scan lines. Each line would be encoded as an electrical signal. A receiving apparatus converted the signals back into light to reconstruct the image. Nipkow never commercialized his electric telescope, though, and after 15 years the patent expired.

The inset on the left shows how the televisor split an image (in this case, a person’s face) into vertical lines. Bettmann/Getty Images

The system that Baird demonstrated in 1926 used two Nipkow disks, one in the transmitting apparatus and the other in the receiving apparatus. Each disk had 30 holes. He fitted the disk with glass lenses that focused the reflected light onto a photoelectric cell. As the transmitting disk rotated, the photoelectric cell detected the change in brightness coming through the individual lenses and converted the light into an electrical signal.

This signal was then sent to the receiving system. (Part of the receiving apparatus, housed at the Science Museum in London, is shown at top.) There the process was reversed, with the electrical signal first being amplified and then modulating a neon gas–discharge lamp. The light passed through a rectangular slot to focus it onto the receiving Nipkow disk, which was turning at the same speed as the transmitter. The image could be seen on a ground glass plate.

Early experiments used a dummy because the many incandescent lights needed to provide sufficient illumination made it too hot and bright for a person. Each hole in the disk captured only a small bit of the overall image, but as long as the disk spun fast enough, the brain could piece together the complete image, a phenomenon known as persistence of vision. (In a 2022 Hands On column, Markus Mierse explains how to build a modern Nipkow-disk electromechanical TV using a 3D printer, an LED module, and an Arduino Mega microcontroller.)
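For readers who want to see the scanning idea without building the hardware, here is a toy software simulation (an analogy only, not a model of Baird’s actual apparatus): it slices a grayscale image into 30 vertical scan lines, serializes the brightness values into a single stream the way the photoelectric cell did, and reverses the process at the “receiver.”

```python
import numpy as np

# Toy "televisor": scan a grayscale image into 30 vertical lines,
# serialize the brightness values as a 1-D signal, then reconstruct it.
LINES = 30          # holes in the Nipkow disk = scan lines
SAMPLES = 40        # brightness samples per line (illustrative)

rng = np.random.default_rng(0)
image = rng.random((SAMPLES, LINES))   # stand-in for the dummy's face

# Transmitter: each hole sweeps one vertical line in turn,
# producing a single stream of brightness values.
signal = image.T.reshape(-1)           # line 0, then line 1, ...

# Receiver: a disk spinning at the same speed reverses the process.
reconstructed = signal.reshape(LINES, SAMPLES).T
assert np.allclose(image, reconstructed)
```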

John Logie Baird and “True Television”

Regular readers of this column know the challenge of documenting historical “firsts”—the first radio, the first telegraph, the first high-tech prosthetic arm. Baird’s claim to the first public broadcast of television is no different. To complicate matters, the actual first demonstration of his televisor wasn’t on 26 January 1926 in front of those esteemed members of the Royal Institution; rather, it occurred in March 1925 in front of curious shoppers at a Selfridges department store.

As Donald F. McLean recounts in his excellent June 2022 article “Before ‘True Television’: Investigating John Logie Baird’s 1925 Original Television Apparatus,” Baird used a similar device for the Selfridges demo, but it had only 16 holes, organized as two groups of eight, hence its nickname the Double-8. The resolution was about as far from high definition as you could get, showing shadowy silhouettes in motion. Baird didn’t consider this “true television,” as McLean notes in his Proceedings of the IEEE piece.

In 1926, Baird loaned part of the televisor he used in his Selfridges demo to the Science Museum in London. PA Images/Getty Images

Writing in December 1926 in Experimental Wireless & The Wireless Engineer, Baird defined true television as “the transmission of the image of an object with all gradations of light, shade, and detail, so that it is seen on the receiving screen as it appears to the eye of an actual observer.” Consider the Selfridges demo a beta test and the one for the Royal Institution the official unveiling. (In 2017, the IEEE chose to mark the latter and not the former with a Milestone.)

The 1926 demonstration was a turning point in Baird’s career. In 1927 he established the Baird Television Development Co., and a year later he made the first transatlantic television transmission, from London to Hartsdale, N.Y. In 1929, the BBC decided to give Baird’s system a try, performing some experimental broadcasts outside of normal hours. After that, mechanical television took off in Great Britain and a few other European countries.

But Wait, There’s More!

If you enjoyed this dip into the history of television, check out Spectrum’s new video collaboration with the YouTube channel Asianometry, which will offer a variety of perspectives on fascinating chapters in the history of technology. The first set of videos looks at the commercialization of color television.

Head over to Asianometry to see how Sony finally conquered the challenges of mass production of color TV sets with its Trinitron line. On Spectrum’s YouTube channel, you’ll find a video—written and narrated by yours truly—on how the eminent physicist Ernest O. Lawrence dabbled for a time in commercial TVs. Spoiler alert: Lawrence had much greater success with the cyclotron and government contracts than he ever did commercializing his Chromatron TV. Spectrum also has a video on the yearslong fight between CBS and RCA over the U.S. standard for color TV broadcasting. —A.M.

The BBC used various versions of Baird’s mechanical system from 1929 to 1937, starting with the 30-line system and upgrading to a 240-line system. But eventually the BBC switched to the all-electronic system developed by Marconi-EMI. Baird then switched to working on one of the earliest electronic color television systems, called the Telechrome. (Baird had already demonstrated a successful mechanical color television system in 1928, but it never caught on.) Meanwhile, in the United States, Columbia Broadcasting System (CBS) attempted to develop a mechanical color television system based on Baird’s original idea of a color wheel but finally ceded to an electronic standard in 1953.

Baird also experimented with stereoscopic or three-dimensional television and a 1,000-line display, similar to today’s high-definition television. Unfortunately, he died in 1946 before he could persuade anyone to take up that technology.

In a 1969 interview in TV Times, John’s widow, Margaret Baird, reflected on some of the developments in television that would have made her husband happy. He would enjoy the massive amounts of sports coverage available, she said. (Baird had done the first live broadcast of the Epsom Derby in 1931.) He would be thrilled with current affairs programs. And, my personal favorite, she thought he would love the annual broadcasting of the Eurovision song contest.

Other TV Inventors: Philo Farnsworth, Vladimir Zworykin

But as I said, television is an invention that’s had many contributors. Across the Atlantic, Philo Farnsworth was experimenting with an all-electrical system that he had first envisioned as a high school student in 1922. By 1926, Farnsworth had secured enough financial backing to work full time on his idea.

One of his main inventions was the image dissector, also known as a dissector tube. This video camera tube creates a temporary electron image that can be converted into an electrical signal. On 7 September 1927, Farnsworth and his team successfully transmitted a single black line, followed by other images of simple shapes. But the system could only handle silhouettes, not three-dimensional objects.

Meanwhile, Vladimir Zworykin was also experimenting with electronic television. In 1923, he applied for a patent for a video tube called the iconoscope. But it wasn’t until 1931, after he joined RCA, that his team developed a working version, which suspiciously came after Zworykin visited Farnsworth’s lab in California. The iconoscope overcame some of the dissector tube’s deficiencies, especially the storage capacity. It was also more sensitive and easier to manufacture. But one major drawback of both the image dissector and the iconoscope was that, like Baird’s original televisor, they required very bright lights.

Everyone was working to develop a better tube, but Farnsworth claimed that he’d invented both the concept of an electronic image moving through a vacuum tube as well as the idea of a storage-type camera tube. The iconoscope and any future improvements all depended on these progenitor patents. RCA knew this and offered to buy Farnsworth’s patents, but Farnsworth refused to sell. A multiyear patent-interference case ensued, finally finding for Farnsworth in 1935.

While the case was being litigated, Farnsworth made the first public demonstration of an all-electric television system on 25 August 1934 at the Franklin Institute in Philadelphia. And in 1939, RCA finally agreed to pay royalties to Farnsworth to use his patented technologies. But Farnsworth was never able to compete commercially with RCA and its all-electric television system, which went on to dominate the U.S. television market.

Eventually, Harold Law, Paul Weimer, and Russell Law developed a better tube at their Princeton labs, the image orthicon. Designed for TV-guided missiles for the U.S. military, it was 100 to 1,000 times as sensitive as the iconoscope. After World War II, RCA quickly adopted the tube for its TV cameras. The image orthicon became the industry standard by 1947, remaining so until 1968 and the move to color TV.

The Path to Television Was Not Obvious

My Greek teacher hated the word “television.” He considered it an abomination that combined the Greek prefix tele (far off) with a Latin base, videre (to see). But early television was a bit of an abomination—no one really knew what it was going to be. As Chris Horrocks lays out in his delightfully titled book, The Joy of Sets (2017), television was developed in relation to the media that came before—telegraph, telephone, radio, and film.

Was television going to be like a telegraph, with communication between two points and an image slowly reassembled? Was it going to be like a telephone, with direct and immediate dialog between both ends? Was it going to be like film, with prerecorded images played back to a wide audience? Or would it be more like radio, which at the time was largely live broadcasts? At the beginning, people didn’t even know they wanted a television; manufacturers had to convince them.

And technically, there were many competing visions—Baird’s, Farnsworth’s, Zworykin’s, and others. It’s no wonder that television took many years, with lots of false starts and dead ends, before it finally took hold.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the September 2024 print issue as “The Mechanical TV.”

References

In 1936, a fire destroyed the Crystal Palace, where Baird had workshops, a television studio, and a tube manufacturing plant. With it went lab notebooks, correspondence, and original artifacts, making it more difficult to know the full history of Baird and his contributions to television.

Donald McLean’s “Before ‘True Television’: Investigating John Logie Baird’s 1925 Original Television Apparatus,” which appeared in Proceedings of the IEEE in June 2022, is an excellent investigation into the double-8 apparatus that Baird used in the 1925 Selfridges demonstration.

For a detailed description of the apparatus used in the 1926 demonstration at Baird’s lab, see “John Logie Baird and the Secret in the Box: The Undiscovered Story Behind the World’s First Public Demonstration of Television,” in Proceedings of the IEEE, August 2020, by Brandon Inglis and Gary Couples.

For an overview on the history of television, check out Chris Horrocks’s The Joy of Sets: A Short History of the Television (Reaktion Books, 2017). Chapter 2 focuses on Baird and other early inventors. And if you want to learn more about Farnsworth’s and RCA’s battle, which doesn’t acknowledge Baird at all, see Evan Schwartz’s 2000 MIT Technology Review piece, “Who Really Invented Television?”

Amazon's Secret Weapon in Chip Design Is Amazon



Big-name makers of processors, especially those geared toward cloud-based AI, such as AMD and Nvidia, have been showing signs of wanting to own more of the business of computing, purchasing makers of software, interconnects, and servers. The hope is that control of the “full stack” will give them an edge in designing what their customers want.

Amazon Web Services (AWS) got there ahead of most of the competition when it purchased chip designer Annapurna Labs in 2015 and proceeded to design CPUs, AI accelerators, servers, and data centers as a vertically integrated operation. Ali Saidi, the technical lead for the Graviton series of CPUs, and Rami Sinno, director of engineering at Annapurna Labs, explained the advantages of vertically integrated design and Amazon scale, and showed IEEE Spectrum around the company’s hardware testing labs in Austin, Tex., on 27 August.

What brought you to Amazon Web Services, Rami?

Rami Sinno: Amazon is my first vertically integrated company. And that was on purpose. I was working at Arm, and I was looking for the next adventure, looking at where the industry is heading and what I want my legacy to be. I looked at two things:

One is vertically integrated companies, because this is where most of the innovation is—the interesting stuff is happening when you control the full hardware and software stack and deliver directly to customers.

And the second thing is, I realized that machine learning, AI in general, is going to be very, very big. I didn’t know exactly which direction it was going to take, but I knew that there is something that is going to be generational, and I wanted to be part of that. I already had that experience prior when I was part of the group that was building the chips that go into the Blackberries; that was a fundamental shift in the industry. That feeling was incredible, to be part of something so big, so fundamental. And I thought, “Okay, I have another chance to be part of something fundamental.”

Does working at a vertically integrated company require a different kind of chip design engineer?

Sinno: Absolutely. When I hire people, the interview process is going after people that have that mindset. Let me give you a specific example: Say I need a signal integrity engineer. (Signal integrity makes sure a signal going from point A to point B, wherever it is in the system, makes it there correctly.) Typically, you hire signal integrity engineers that have a lot of experience in analysis for signal integrity, that understand layout impacts, can do measurements in the lab. Well, this is not sufficient for our group, because we want our signal integrity engineers also to be coders. We want them to be able to take a workload or a test that will run at the system level and be able to modify it or build a new one from scratch in order to look at the signal integrity impact at the system level under workload. This is where being trained to be flexible, to think outside of the little box has paid off huge dividends in the way that we do development and the way we serve our customers.

“By the time that we get the silicon back, the software’s done” —Ali Saidi, Annapurna Labs

At the end of the day, our responsibility is to deliver complete servers in the data center directly for our customers. And if you think from that perspective, you’ll be able to optimize and innovate across the full stack. A design engineer or a test engineer should be able to look at the full picture because that’s his or her job: deliver the complete server to the data center and look where best to do optimization. It might not be at the transistor level or at the substrate level or at the board level. It could be something completely different. It could be purely software. And having that knowledge, having that visibility, will allow the engineers to be significantly more productive and deliver to the customer significantly faster. We’re not going to bang our head against the wall to optimize the transistor where three lines of code downstream will solve these problems, right?

Do you feel like people are trained in that way these days?

Sinno: We’ve had very good luck with recent college grads. Recent college grads, especially the past couple of years, have been absolutely phenomenal. I’m very, very pleased with the way that the education system is graduating the engineers and the computer scientists that are interested in the type of jobs that we have for them.

The other place that we have been super successful in finding the right people is at startups. They know what it takes, because at a startup, by definition, you have to do so many different things. People who’ve done startups before completely understand the culture and the mindset that we have at Amazon.

What brought you to AWS, Ali?

Ali Saidi: I’ve been here about seven and a half years. When I joined AWS, I joined a secret project at the time. I was told: “We’re going to build some Arm servers. Tell no one.”

We started with Graviton 1. Graviton 1 was really the vehicle for us to prove that we could offer the same experience in AWS with a different architecture.

The cloud gave us an ability for a customer to try it in a very low-cost, low-barrier-of-entry way and say, “Does it work for my workload?” So Graviton 1 was really just the vehicle to demonstrate that we could do this, and to start signaling to the world that we want software around Arm servers to grow and that they’re going to be more relevant.

Graviton 2—announced in 2019—was kind of our first… what we think is a market-leading device that’s targeting general-purpose workloads, web servers, and those types of things.

It’s done very well. We have people running databases, web servers, key-value stores, lots of applications... When customers adopt Graviton, they bring one workload, and they see the benefits of bringing that one workload. And then the next question they ask is, “Well, I want to bring some more workloads. What should I bring?” There were some where it wasn’t powerful enough effectively, particularly around things like media encoding, taking videos and encoding them or re-encoding them or encoding them to multiple streams. It’s a very math-heavy operation and required more [single-instruction multiple data] bandwidth. We need cores that could do more math.

We also wanted to enable the [high-performance computing] market. So we have an instance type called HPC 7G where we’ve got customers like Formula One. They do computational fluid dynamics of how this car is going to disturb the air and how that affects following cars. It’s really just expanding the portfolio of applications. We did the same thing when we went to Graviton 4, which has 96 cores versus Graviton 3’s 64.

How do you know what to improve from one generation to the next?

Saidi: Far and wide, most customers find great success when they adopt Graviton. Occasionally, they see performance that isn’t the same level as their other migrations. They might say “I moved these three apps, and I got 20 percent higher performance; that’s great. But I moved this app over here, and I didn’t get any performance improvement. Why?” It’s really great to see the 20 percent. But for me, in the kind of weird way I am, the 0 percent is actually more interesting, because it gives us something to go and explore with them.

Most of our customers are very open to those kinds of engagements. So we can understand what their application is and build some kind of proxy for it. Or if it’s an internal workload, then we could just use the original software. And then we can use that to kind of close the loop and work on what the next generation of Graviton will have and how we’re going to enable better performance there.

What’s different about designing chips at AWS?

Saidi: In chip design, there are many different competing optimization points. You have all of these conflicting requirements, you have cost, you have scheduling, you’ve got power consumption, you’ve got size, what DRAM technologies are available and when you’re going to intersect them… It ends up being this fun, multifaceted optimization problem to figure out what’s the best thing that you can build in a timeframe. And you need to get it right.

One thing that we’ve done very well is taken our initial silicon to production.

How?

Saidi: This might sound weird, but I’ve seen other places where the software and the hardware people effectively don’t talk. The hardware and software people in Annapurna and AWS work together from day one. The software people are writing the software that will ultimately be the production software and firmware while the hardware is being developed in cooperation with the hardware engineers. By working together, we’re closing that iteration loop. When you are carrying the piece of hardware over to the software engineer’s desk your iteration loop is years and years. Here, we are iterating constantly. We’re running virtual machines in our emulators before we have the silicon ready. We are taking an emulation of [a complete system] and running most of the software we’re going to run.

So by the time that we get the silicon back [from the foundry], the software’s done. And we’ve seen most of the software work at this point. So we have very high confidence that it’s going to work.

The other piece of it, I think, is just being absolutely laser-focused on what we are going to deliver. You get a lot of ideas, but your design resources are approximately fixed. No matter how many ideas I put in the bucket, I’m not going to be able to hire that many more people, and my budget’s probably fixed. So every idea I throw in the bucket is going to use some resources. And if that feature isn’t really important to the success of the project, I’m risking the rest of the project. And I think that’s a mistake that people frequently make.

Are those decisions easier in a vertically integrated situation?

Saidi: Certainly. We know we’re going to build a motherboard and a server and put it in a rack, and we know what that looks like… So we know the features we need. We’re not trying to build a superset product that could allow us to go into multiple markets. We’re laser-focused into one.

What else is unique about the AWS chip design environment?

Saidi: One thing that’s very interesting for AWS is that we’re the cloud and we’re also developing these chips in the cloud. We were the first company to really push on running [electronic design automation (EDA)] in the cloud. We changed the model from “I’ve got 80 servers and this is what I use for EDA” to “Today, I have 80 servers. If I want, tomorrow I can have 300. The next day, I can have 1,000.”

We can compress some of the time by varying the resources that we use. At the beginning of the project, we don’t need as many resources. We can turn a lot of stuff off and not pay for it effectively. As we get to the end of the project, now we need many more resources. And instead of saying, “Well, I can’t iterate this fast, because I’ve got this one machine, and it’s busy.” I can change that and instead say, “Well, I don’t want one machine; I’ll have 10 machines today.”

Instead of my iteration cycle being two days for a big design like this, instead of being even one day, with these 10 machines I can bring it down to three or four hours. That’s huge.

How important is Amazon.com as a customer?

Saidi: They have a wealth of workloads, and we obviously are the same company, so we have access to some of those workloads in ways that with third parties, we don’t. But we also have very close relationships with other external customers.

So last Prime Day, we said that 2,600 Amazon.com services were running on Graviton processors. This Prime Day, that number more than doubled to 5,800 services running on Graviton. And the retail side of Amazon used over 250,000 Graviton CPUs in support of the retail website and the services around that for Prime Day.

The AI accelerator team is colocated with the labs that test everything from chips through racks of servers. Why?

Sinno: So Annapurna Labs has multiple labs in multiple locations as well. This location here in Austin is one of the smaller labs. But what’s so interesting about the lab here in Austin is that you have all of the hardware and many software development engineers for machine learning servers and for Trainium and Inferentia [AWS’s AI chips] effectively co-located on this floor. For hardware developers and engineers, having the labs co-located on the same floor has been very, very effective. It speeds execution and iteration for delivery to the customers. This lab is set up to be self-sufficient with anything that we need to do, at the chip level, at the server level, at the board level. Because again, as I convey to our teams, our job is not the chip; our job is not the board; our job is the full server to the customer.

How does vertical integration help you design and test chips for data-center-scale deployment?

Sinno: It’s relatively easy to create a bar-raising server. Something that’s very high-performance, very low-power. If we create 10 of them, 100 of them, maybe 1,000 of them, it’s easy. You can cherry pick this, you can fix this, you can fix that. But the scale that the AWS is at is significantly higher. We need to train models that require 100,000 of these chips. 100,000! And for training, it’s not run in five minutes. It’s run in hours or days or weeks even. Those 100,000 chips have to be up for the duration. Everything that we do here is to get to that point.

We start from a “what are all the things that can go wrong?” mindset. And we implement all the things that we know. But when you were talking about cloud scale, there are always things that you have not thought of that come up. These are the 0.001-percent type issues.

In this case, we do the debug first in the fleet. And in certain cases, we have to do debugs in the lab to find the root cause. And if we can fix it immediately, we fix it immediately. Being vertically integrated, in many cases we can do a software fix for it. We use our agility to rush a fix while at the same time making sure that the next generation has it already figured out from the get go.

Conference To Spotlight Harm Caused by Online Platforms



This year’s IEEE Conference on Digital Platforms and Societal Harms is scheduled to be held on 14 and 15 October in a hybrid format, with both in-person and virtual keynote panel sessions. The in-person events are to take place at American University, in Washington, D.C.

The annual conference focuses on how social media and similar platforms amplify hate speech, extremism, exploitation, misinformation, and disinformation, as well as what measures are being taken to protect people.

With the popularity of social media and the rise of artificial intelligence, content can be more easily created and shared online by individuals and bots, says Andre Oboler, the general chair of IEEE DPSH. The IEEE senior member is CEO of the Online Hate Prevention Institute, which is based in Sydney. Oboler cautions that a lot of content online is fabricated, so some people are making economic, political, social, and health care decisions based on inaccurate information.

Misinformation (which is false) and disinformation (which is intentionally false) also can propagate hate speech, discrimination, violent extremism, and child sexual abuse, he says, and can create hostile online environments, damaging people’s confidence in information and endangering their lives.

To help prevent harm, he says, cutting-edge technical solutions and changes in public policy are needed. At the conference, academic researchers and leaders from industry, government, and not-for-profit organizations are gathering to discuss steps being taken to protect individuals online.

Experts to explore challenges and solutions

The event includes panel discussions and Q&A sessions with experts from a variety of technology fields and organizations. Scheduled speakers include Paul Giannasi from the U.K. National Police Chiefs’ Council; Skip Gilmour of the Global Internet Forum to Counter Terrorism; and Maike Luiken, chair of IEEE’s Planet Positive 2030 initiative.

“Addressing the creation, propagation, and engagement of harmful digital information is a complex problem,” Oboler says. “It requires broad collaboration among various stakeholders including technologists; lawmakers and policymakers; nonprofit organizations; private sectors; and end users.

“There is an emerging need for these stakeholders and researchers from multiple disciplines to have a joint forum to understand the challenges, exchange ideas, and explore possible solutions.”

To register for in-person and online conference attendance, visit the event’s website. Those who want to attend only the keynote panels can register for free access to the discussions. Attendees who register by 22 September and use the code 25off2we receive a 25 percent discount.

Check out highlights from the 2023 IEEE Conference on Digital Platforms and Societal Harms.

Ultrasonic Chips Could Cut Interference in the IoT



The proliferation of IoT technology has made chatterboxes out of everyday hardware and new gadgets too, but it comes with a downside: the more devices sharing the airwaves, the more trouble they have communicating. The nearly 30 billion connected devices expected by 2030 will be operating using different wireless standards while sharing the same frequency bands, potentially interfering with one another. To overcome this, researchers in Japan say they have developed a way to shrink the devices that filter out interfering signals. Instead of many individual filters, the technology would combine them onto single chips.

For smartphones to work with different communications standards and in different countries, they need dozens of filters to keep out unwanted signals. But these filters can be expensive and collectively take up a relatively large amount of real estate in the phone. With an increasingly crowded electromagnetic spectrum, engineers will have to cram even more filters into phones and other gadgets, meaning further miniaturization will be necessary. Researchers at Japanese telecom NTT and Okayama University say they’ve developed technology that could shrink all those filters down to a single device they describe as an ultrasonic circuit that can steer signals without unintentionally scattering them.

The ultrasonic circuit incorporates filters that are similar to surface acoustic wave (SAW) filters used in smartphones. SAW filters convert an electronic RF signal into a mechanical wave on the surface of a substrate and back again, filtering out particular frequencies in the process. Because the mechanical wave is thousands of times shorter than the RF wave that creates it, SAW filters can be compact.

Today’s filters screen out unwanted RF signals by converting them to ultrasonic signals and back again. New research could lead to a way to integrate many such filters onto a single chip. NTT Corporation
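Conceptually, what a SAW filter does to an RF signal is a band-pass operation: keep the wanted band, reject everything else. As a loose digital analogy only (it assumes NumPy and SciPy and says nothing about how NTT’s acoustic circuit is built), the sketch below passes a 1-kilohertz tone while suppressing a 3-kilohertz interferer.

```python
# A digital band-pass filter as a loose analogy for what a SAW filter does
# acoustically: keep one frequency band, reject the rest. Values are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 10_000                                   # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)

wanted = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz "wanted" signal
interference = np.sin(2 * np.pi * 3_000 * t)  # 3 kHz interferer
x = wanted + interference

# 4th-order Butterworth band-pass centered near 1 kHz
sos = butter(4, [800, 1_200], btype="bandpass", fs=fs, output="sos")
y = sosfilt(sos, x)                           # the interferer is strongly attenuated
```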

“In the future IoT society, communication bandwidth and methods will increase, so we will need hundreds of ultrasonic filters in smartphones, but we cannot allocate a large area to them,” because the battery, display, processor, and other components need room too, says Daiki Hatanaka, a senior research scientist in the Nanomechanics Research Group at NTT. “Our technology allows us to confine ultrasound in a very narrow channel on a micrometer scale, then guide the signal as we want. Based on this ultrasonic circuit, we can integrate many filters on just one chip.”

Valley Pseudospin-dependent Transport

Guiding ultrasonic waves along a path that changes direction can cause backscattering, degrading the signal quality. To counter this, Hatanaka and colleagues tapped Okayama University’s research into acoustic topological structures. Topology is the branch of mathematics concerned with how different shapes can be thought of as equivalent if they satisfy certain conditions—the classic example is a donut and a coffee mug being equivalent because they each have just one hole. But as highlighted by the 2016 Nobel Prize in Physics, it’s also used to explore exotic states of matter, including superconductivity.

In their experiments, the researchers in Japan fashioned a waveguide made up of arrays of periodic holes with three-fold rotational symmetry. Where two arrays with holes that were rotated 10 degrees apart from each other met, a topological property called valley pseudospin arises. At this edge, tiny ultrasonic vortexes “pseudospin” in opposite directions, generating a unique ultrasonic wave known as valley pseudospin-dependent transport. This propagates a 0.5 GHz signal in only one direction even if there is a sharp bend in the waveguide, according to NTT. So the signal can’t suffer backscattering.

“The direction of the polarization of the valley states of ultrasound automatically forces it to propagate in only one direction, and backscattering is prohibited,” says Hatanaka.

NTT says the gigahertz topological circuit is the first of its kind. The research team is now trying to fabricate a waveguide that connects 5 to 10 filters on a single chip. The initial chip will be about 1 square centimeter, but the researchers hope to shrink it to a few hundred square micrometers. In the second stage of research, they will try to dynamically control the ultrasound, amplify the signal, convert its frequency, and integrate these functions into one system.

The company will consider plans for commercialization as the research proceeds over the next two years. If the research becomes a commercial product the impact on future smartphones and IoT systems could be important, says Hatanaka. He estimates that future high-end smartphones could be equipped with up to around 20 ultrasonic circuits.

“We could use the space saved for a better user experience, so by using ultrasonic filters or other analog signal components we can improve the display or battery or other important components for the user experience,” he says.

From Punch Cards to Python



In today’s digital world, it’s easy for just about anyone to create a mobile app or write software, thanks to Java, JavaScript, Python, and other programming languages.

But that wasn’t always the case. Because the primary language of computers is binary code, early programmers used punch cards to instruct computers what tasks to complete. Each hole represented a single binary digit.

That changed in 1952 with the A-0 compiler, a series of specifications that automatically translates high-level languages such as English into machine-readable binary code.

The compiler, now an IEEE Milestone, was developed by Grace Hopper, who worked as a senior mathematician at the Eckert-Mauchly Computer Corp., now part of Unisys, in Philadelphia.

The IEEE Fellow’s innovation allowed programmers to write code faster and more easily using English commands. For her, however, the most important outcome was the influence it had on the development of modern programming languages, making writing code more accessible to everyone, according to a Penn Engineering Today article.

The dedication of the A-0 compiler as an IEEE Milestone was held in Philadelphia on 7 May at the University of Pennsylvania. That’s where the Eckert-Mauchly Computer Corp. got its start.

“This milestone celebrates the first step of applying computers to automate the tedious portions of their own programming,” André DeHon, professor of electrical and systems engineering and computer science, said at the dedication ceremony.

Eliminating the punch-card system

To program a computer, early technicians wrote out tasks in assembly language—a human-readable way to write machine code, which is made up of binary numbers. They then manually translated the assembly language into machine code and punched holes representing the binary digits into cards, according to a Medium article on the method. The cards were fed into a machine that read the holes and input the data into the computer.
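As a toy illustration of that workflow (the mnemonics, opcodes, and 8-bit card row below are invented for the example rather than taken from any real machine), the sketch hand-translates a three-line “assembly” program into binary and renders each instruction as a row of punch positions.

```python
# Toy illustration: hand-translating "assembly" mnemonics to binary machine
# code and punching each bit into a card row ('O' = hole, '.' = no hole).
# The opcodes and 8-bit format are invented for this example.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

program = [("LOAD", 5), ("ADD", 7), ("STORE", 9)]

for mnemonic, operand in program:
    word = (OPCODES[mnemonic] << 4) | operand          # 4-bit opcode + 4-bit operand
    bits = format(word, "08b")
    row = "".join("O" if b == "1" else "." for b in bits)
    print(f"{mnemonic:5} {operand}  ->  {bits}  card row: {row}")
```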

The punch-card system was laborious; it could take days to complete a task. Cards with even a slight defect, such as a bent corner, couldn’t be used. The method also had a high risk of human error.

After leading the development of the Electronic Numerical Integrator and Computer (ENIAC) at Penn, computer scientists J. Presper Eckert and John W. Mauchly set about creating a replacement for punch cards. ENIAC was built to improve the accuracy of U.S. artillery during World War II, but the two men wanted to develop computers for commercial applications, according to a Pennsylvania Center for the Book article.

The machine they designed was the first known large-scale electronic computer built for commercial use, the Universal Automatic Computer, or UNIVAC I. Hopper was on its development team.

UNIVAC I used 6,103 vacuum tubes and took up a 33-square-meter room. The machine had a memory unit. Instead of punch cards, the computer used magnetic tape to input data. The tapes, which could hold audio, video, and written data, were up to 457 meters long. Unlike previous computers, the UNIVAC I had a keyboard so an operator could input commands, according to the Pennsylvania Center for the Book article.

“This milestone celebrates the first step of applying computers to automate the tedious portions of their own programming.” —André DeHon

Technicians still had to manually feed instructions into the computer, however, to run any new program.

That time-consuming process led to errors because “programmers are lousy copyists,” Hopper said in a speech for the Association for Computing Machinery. “It was amazing how many times a 4 would turn into a delta, which was our space symbol, or into an A. Even B’s turned into 13s.”

According to a Hidden Heroes article, Hopper had an idea for simplifying programming: Have the computer translate English to machine code.

She was inspired by computer scientist Betty Holberton’s sort/merge generator and Mauchly’s Short Code. Holberton was one of the six women who programmed ENIAC to calculate artillery trajectories in seconds, and she worked alongside Hopper on the UNIVAC I. Her sort/merge program, invented in 1951 for the UNIVAC I, handled the large data files stored on magnetic tapes. Hopper described it as the first version of virtual memory because it made use of overlays automatically, without being directed to do so by the programmer, according to a Stanford presentation about programming languages. Short Code, developed in the 1940s, allowed technicians to write programs using brief sequences of English words that corresponded directly to machine-code instructions. It bridged the gap between human-readable code and machine-executable instructions.

“I think the first step to tell us that we could actually use a computer to write programs was the sort/merge generator,” Hopper said in the presentation. “And Short Code was the first step in moving toward something which gave a programmer the actual power to write a program in a language which bore no resemblance whatsoever to the original machine code.”

A photo of a woman standing in front of a large computer bank. IEEE Fellow Grace Hopper inputs call numbers into the Universal Automatic Computer (UNIVAC I) so that the machine can find the correct instructions to carry out. The A-0 compiler translated the English instructions into machine-readable binary code. Computer History Museum

Easier, faster, and more accurate programming

Hopper, who figured computers should speak human-like languages, rather than requiring humans to speak computer languages, began thinking about how to allow programmers to call up specific codes using English, according to an IT Professional profile.

But she needed a library of frequently used instructions for the computer to reference and a system to translate English to machine code. That way, the computer could understand what task to complete.

Such a library didn’t exist, so Hopper built her own. It included tapes that held frequently used sets of instructions, which she called subroutines. Each tape stored one subroutine and was assigned a three-number call sign so that the UNIVAC I could locate the correct tape. The three numbers pointed to three memory addresses: one for the location of the subroutine, another for the location of the data, and a third for the output location, according to the Stanford presentation.

“All I had to do was to write down a set of call numbers, let the computer find them on the tape, and do the additions,” she said in a Centre for Computing History article. “This was the first compiler.”
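
In modern terms, each call sign acted like a small lookup key: it told the machine which stored routine to fetch, where its data sat, and where to put the result. The Python sketch below is a loose, present-day analogy of that scheme, not a reconstruction of the UNIVAC I code; the routine, the addresses, and the values are all invented for illustration.

```python
# A toy model of Hopper's call-number scheme. Everything here is illustrative:
# the routine, the addresses, and the values are invented, not historical.

memory = {100: (2.0, 3.5), 200: None}               # pretend memory: a data block and an output cell
subroutines = {10: lambda data: data[0] + data[1]}  # "tape" 10 holds a stored addition routine

def run(call_sign):
    """A call sign is three numbers: the subroutine, data, and output addresses."""
    sub_addr, data_addr, out_addr = call_sign
    routine = subroutines[sub_addr]                 # locate the correct "tape"
    memory[out_addr] = routine(memory[data_addr])   # run it and store the result

program = [(10, 100, 200)]  # "write down a set of call numbers..."
for call in program:
    run(call)               # "...let the computer find them and do the additions"

print(memory[200])  # 5.5
```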

The system was dubbed the A-0 compiler because code was written in one language, which was then “compiled” into a machine language.

What previously had taken a month of manual coding could now be done in five minutes, according to a Cockroach Labs article.

Hopper presented the A-0 to Eckert-Mauchly Computer executives. Instead of being excited, though, they said they didn’t believe a computer could write its own programs, according to the article.

“I had a running compiler, and nobody would touch it, because they carefully told me computers could only do arithmetic; they could not do programs,” Hopper said. “It was a selling job to get people to try it. I think with any new idea, because people are allergic to change, you have to get out and sell the idea.”

It took two years for the company’s leadership to accept the A-0.

In 1954, Hopper was promoted to director of automatic programming for the UNIVAC division. She went on to create the first compiler-based programming languages, including Flow-Matic, the first English-language data-processing compiler. It was used to program UNIVAC I and II machines.

Hopper also was involved in developing COBOL, one of the earliest standardized computer languages. It enabled computers to respond to words in addition to numbers, and it is still used in business, finance, and administrative systems. Hopper’s Flow-Matic formed the foundation of COBOL, whose first specifications were made available in 1959.

A plaque recognizing the A-0 is now displayed at the University of Pennsylvania. It reads:

During 1951–1952, Grace Hopper invented the A-0 Compiler, a series of specifications that functioned as a linker/loader. It was a pioneering achievement of automatic programming as well as a pioneering utility program for the management of subroutines. The A-0 Compiler influenced the development of arithmetic and business programming languages. This led to COBOL (Common Business-Oriented Language), becoming the dominant high-level language for business applications.

The IEEE Philadelphia Section sponsored the nomination.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments worldwide.

About Grace Hopper


Hopper didn’t start as a computer programmer. She was a mathematician at heart, earning bachelor’s degrees in mathematics and physics in 1928 from Vassar College, in Poughkeepsie, N.Y. She then received master’s and doctoral degrees in mathematics and mathematical physics from Yale in 1930 and 1934, respectively.

She taught math at Vassar, but after the bombing of Pearl Harbor and the U.S. entry into World War II, Hopper joined the war effort. She took a leave of absence from Vassar to join the U.S. Naval Reserve (Women’s Reserve) in December 1943. She was assigned to the Bureau of Ships Computation Project at Harvard, where she worked for mathematician Howard Aiken. She was part of Aiken’s team that developed the Mark I, one of the earliest electromechanical computers. Hopper was the third person and the first woman to program the machine.

After the war ended, she became a research fellow at the Harvard Computation Laboratory. In 1949 she joined the Eckert-Mauchly Computer Corp., where she worked until her retirement in 1971. During 1959 she was an adjunct lecturer at Penn’s Moore School of Electrical Engineering.

Her work in programming earned her the nickname “Amazing Grace,” according to an entry about her on the Engineering and Technology History Wiki.

Hopper remained a member of the Naval Reserve and, in 1967, was recalled to active duty. She led the effort to standardize programming languages for the military, according to the ETHW entry. She was eventually promoted to rear admiral. When she retired from the Navy in 1986 at the age of 79, she was the oldest serving officer in the U.S. armed forces.

Among her many honors was the 1991 U.S. National Medal of Technology and Innovation “for her pioneering accomplishments in the development of computer programming languages that simplified computer technology and opened the door to a significantly larger universe of users.”

She received 40 honorary doctorates from universities, and the Navy named a warship in her honor.

Driving Middle East’s Innovation in Robotics and Future of Automation



This is a sponsored article brought to you by Khalifa University of Science and Technology.

Abu Dhabi-based Khalifa University of Science and Technology in the United Arab Emirates (UAE) will host the 36th edition of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) to highlight the Middle East and North Africa (MENA) region’s rapidly advancing capabilities in robotics and intelligent transport systems.

Logo for IROS 2024 robotics conference, featuring a line drawing of electrical devices and the words IROS 24 and Abu Dhabi.

Themed “Robotics for Sustainable Development,” IROS 2024 will be held from 14 to 18 October 2024 at the Abu Dhabi National Exhibition Center (ADNEC) in the UAE’s capital city. It will offer a platform for universities and research institutions to display their research and innovation activities and initiatives in robotics, gathering researchers, academics, leading companies, and industry professionals from around the globe.

A total of 13 forums, nine global-level competitions and challenges covering various aspects of robotics and AI, an IROS Expo, and an exclusive Career Fair will also be part of IROS 2024. The challenges and competitions will focus on the physical or athletic intelligence of robots, remote robot navigation, robot manipulation, underwater robotics, and perception and sensing.

Delegates will represent sectors including manufacturing, healthcare, logistics, agriculture, defense, security, and mining, with 60 percent of the talent pool having more than six years of experience in robotics. Major components of the conference will be the poster sessions, keynotes, panel discussions by researchers and scientists, and networking events.

A photo of two people in front of a red robot. Khalifa University will be hosting IROS 2024 to highlight the Middle East and North Africa (MENA) region’s rapidly advancing capabilities in robotics and intelligent transport systems. Khalifa University

Abu Dhabi ranks first out of 329 global cities on the 2024 list of the world’s safest cities compiled by the online database Numbeo, a title the emirate has held for eight consecutive years since 2017, reflecting its ongoing efforts to ensure a good quality of life for citizens and residents.

With a multicultural community, Abu Dhabi is home to people from more than 200 nationalities and draws a large number of tourists to some of the top art galleries in the city such as Louvre Abu Dhabi and the Guggenheim Abu Dhabi, as well as other destinations such as Ferrari World Abu Dhabi and Warner Bros. World Abu Dhabi.

The UAE and Abu Dhabi have increasingly become a center for creative skillsets, human capital and advanced technologies, attracting several international and regional events such as the global COP28 UAE climate summit, in which more than 160 countries participated.

Abu Dhabi city itself has hosted a number of association conventions such as the 34th International Nursing Research Congress and is set to host the UNCTAD World Investment Forum, the 13th World Trade Organization (WTO) Ministerial Conference (MC13), the 12th World Environment Education Congress in 2024, and the IUCN World Conservation Congress in 2025.

A photo of a man looking at a sensor. Khalifa University’s Center for Robotics and Autonomous Systems (KU-CARS) includes a vibrant multidisciplinary environment for conducting robotics and autonomous vehicle-related research and innovation.Khalifa University

Dr. Jorge Dias, IROS 2024 General Chair, said: “Khalifa University is delighted to bring the Intelligent Robots and Systems 2024 to Abu Dhabi in the UAE and highlight the innovations in line with the theme Robotics for Sustainable Development. As the region’s rapidly advancing capabilities in robotics and intelligent transport systems gain momentum, this event serves as a platform to incubate ideas, exchange knowledge, foster collaboration, and showcase our research and innovation activities. By hosting IROS 2024, Khalifa University aims to reaffirm the UAE’s status as a global innovation hub and destination for all industry stakeholders to collaborate on cutting-edge research and explore opportunities for growth within the UAE’s innovation ecosystem.”

“This event serves as a platform to incubate ideas, exchange knowledge, foster collaboration, and showcase our research and innovation activities” —Dr. Jorge Dias, IROS 2024 General Chair

Dr. Dias added: “The organizing committee of IROS 2024 has received over 4000 submissions representing 60 countries, with China leading with 1,029 papers, followed by the U.S. (777), Germany (302), and Japan (253), as well as the U.K. and South Korea (173 each). The UAE with a total of 68 papers comes atop the Arab region.”

Driving innovation at Khalifa University is the Center for Robotics and Autonomous Systems (KU-CARS) with around 50 researchers and state-of-the-art laboratory facilities, including a vibrant multidisciplinary environment for conducting robotics and autonomous vehicle-related research and innovation.

IROS 2024 is sponsored by IEEE Robotics and Automation Society, Abu Dhabi Convention and Exhibition Bureau, the Robotics Society of Japan (RSJ), the Society of Instrument and Control Engineers (SICE), the New Technology Foundation, and the IEEE Industrial Electronics Society (IES).

More information at https://iros2024-abudhabi.org/

Video Friday: Jumping Robot Leg, Walking Robot Table



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Researchers at the Max Planck Institute for Intelligent Systems and ETH Zurich have developed a robotic leg with artificial muscles. Inspired by living creatures, it jumps across different terrains in an agile and energy-efficient manner.

[ Nature ] via [ MPI ]

Thanks, Toshi!

ETH Zurich researchers have now developed a fast robotic printing process for earth-based materials that does not require cement. In what is known as “impact printing,” a robot shoots material from above, gradually building a wall. On impact, the parts bond together, and very minimal additives are required.

[ ETH Zurich ]

How could you not be excited to see this happen for real?

[ arXiv paper ]

Can we all agree that sanding, grinding, deburring, and polishing tasks are really best done by robots, for the most part?

[ Cohesive Robotics ]

Thanks, David!

Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control in manipulating the door panel and navigating through the confined doorway. To address this, we propose a learning-based controller for a legged manipulator to open and traverse through doors.

[ arXiv paper ]

Isaac is the first robot assistant that’s built for the home. And we’re shipping it in fall of 2025.

Fall of 2025 is a long enough time from now that I’m not even going to speculate about it.

[ Weave Robotics ]

By patterning liquid metal paste onto a soft sheet of silicone or acrylic foam tape, we developed stretchable versions of conventional rigid circuits (like Arduinos). Our soft circuits can be stretched to over 300% strain (over 4x their length) and are integrated into active soft robots.

[ Science Robotics ] via [ Yale ]

NASA’s Curiosity rover is exploring a scientifically exciting area on Mars, but communicating with the mission team on Earth has recently been a challenge due to both the current season and the surrounding terrain. In this Mars Report, Curiosity engineer Reidar Larsen takes you inside the uplink room where the team talks to the rover.

[ NASA ]

I love this and want to burn it with fire.

[ Carpentopod ]

Very often, people ask us what Reachy 2 is capable of, which is why we’re showing you the manipulation possibilities (through teleoperation) of our technology. The robot shown in this video is the Beta version of Reachy 2, our new robot coming very soon!

[ Pollen Robotics ]

The Scalable Autonomous Robots (ScalAR) Lab is an interdisciplinary lab focused on fundamental research problems in robotics that lie at the intersection of robotics, nonlinear dynamical systems theory, and uncertainty.

[ ScalAR Lab ]

Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, so it allows for experimentation and the possible addition of parts. With its design and programming, it replicates the actions of #KawasakiRobotics industrial robots, giving students the necessary skills for future work.

[ Astorino ]

I guess fish-fillet-shaping robots need to exist because otherwise customers will freak out if all their fish fillets are not identical, or something?

[ Flexiv ]

Watch the second episode of the ExoMars Rosalind Franklin rover mission—Europe’s ambitious exploration journey to search for past and present signs of life on Mars. The rover will dig, collect, and investigate the chemical composition of material collected by a drill. Rosalind Franklin will be the first rover to reach a depth of up to two meters below the surface, acquiring samples that have been protected from surface radiation and extreme temperatures.

[ ESA ]

The Next Frontier for EV Batteries: Nanoscale Coatings



Over the past 25 years, the longest driving range of an electric vehicle on a single charge has gone from about 260 kilometers to slightly over 800 km. Increasingly, these advanced battery packs have also begun storing energy from the grid or renewable sources to power homes or businesses. No wonder, then, that the global automotive battery market has surpassed US $50 billion a year and there is increasing pressure to produce greater numbers of even better batteries.

Now, several companies are applying a well-established chemical technique called atomic layer deposition (ALD) to coat battery electrodes with metal oxides or nitrides, which they claim improves both the energy capacity and the lifespan of lithium-ion batteries. The companies include Thornton, Colo.–based Forge Nano, Picosun (a wholly-owned subsidiary of Santa Clara, Calif.–based Applied Materials), and Beneq, in Espoo, Finland; they are leveraging the technique, which was originally developed in the 1960s. After years of refining their respective processes, these companies now hope to gain a toehold in markets for EV and smartphone batteries dominated by such giants as CATL, Panasonic, and Samsung.

Of the three, Forge Nano appears to have the most developed technology. It recently announced that its subsidiary, Forge Battery, has begun sending samples of a prototype battery cell made with ALD-coated materials to customers for testing. The company says its proprietary ALD formulation, which it calls Atomic Armor, makes batteries’ electrodes better at storing energy and helps them last longer.

What Goes Into a Lithium-Ion Battery?

The batteries found in today’s electric vehicles and smartphones consist of three main components. The anode, or negative electrode, usually made of graphite, is where lithium ions are stored during the charging process. The cathode (positive electrode) is made of a lithium-metal oxide such as lithium cobalt oxide or lithium-iron phosphate. Then there’s the electrolyte, which is a lithium salt dissolved in an organic solvent that allows lithium ions to move between the anode and cathode. Also important is the separator, a semi-porous material that allows the movement of ions between the cathode and anode during charging and discharging but blocks the flow of electrons directly between the two, which would quickly short out the battery.

a light gray and dark gray line on a black bar A cathode coating is deposited for R&D battery cells by Forge Nano.Forge Nano

Coating the materials that make up the anode, cathode, and separator at the molecular level, these companies say, boosts batteries’ performance and durability without an appreciable increase in their weight or volume.

In ALD, films are formed by a chemical reaction between two gaseous precursor substances, which are introduced to the substrate in alternation. The first precursor reacts with the substrate surface at active sites, the points on the precursor molecules and on the surface of the substrate where the two materials chemically bond. Then, after all the unreacted precursor gas is pumped away, the second precursor is introduced and bonds with the first at their respective active sites. The ALD reaction is self-terminating, meaning that when all active sites are filled, the reaction stops. The film forms one atomic layer at a time, so its thickness can be set with precision as fine as a few tenths of a nanometer simply by cutting off the substrate’s exposure to the precursors once the desired coating thickness is reached.
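
Because each cycle is self-limiting, planning a coating reduces to simple arithmetic: divide the target thickness by the growth per cycle and run that many cycles. The numbers in the sketch below (roughly 0.1 nanometer of growth per cycle and a 2-nanometer target) are illustrative assumptions, not figures from any of the companies mentioned.

```python
import math

# Back-of-the-envelope ALD planning: because each self-limiting cycle adds a
# roughly fixed increment of material, film thickness is set by the cycle count.
# Both numbers below are illustrative assumptions, not vendor specifications.
growth_per_cycle_nm = 0.1    # assumed growth per ALD cycle, on the order of an atomic layer
target_thickness_nm = 2.0    # hypothetical protective coating on an electrode material

cycles = math.ceil(target_thickness_nm / growth_per_cycle_nm)
achieved_nm = cycles * growth_per_cycle_nm

print(f"Run {cycles} cycles for about {achieved_nm:.1f} nm of coating")  # 20 cycles -> ~2.0 nm
```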

In a conventional lithium-ion battery with a graphite anode, silicon (and sometimes other materials) is added to the graphite to improve the anode’s ability to store ions. The practice boosts energy density, but silicon is much more prone than graphite to side reactions with the electrolyte and to expansion and contraction during charging and discharging, which weakens the electrode. Eventually, the mechanical degradation diminishes the battery’s storage capacity. ALD, by coating the anode material with a protective layer, enables a higher proportion of silicon in the anode while also inhibiting the expansion-contraction cycles and therefore slowing the mechanical degradation. The result is a lighter, more energy-dense battery that is more durable than conventional lithium-ion batteries.

Picosun says its ALD technology has been used to create coated nickel oxide anodes with more than twice the energy storage capacity and three times the energy density of those relying on traditional graphite.

How big is the benefit? Forge Nano says that although third-party testing and validation are underway, it’s too soon to make definitive statements about the coating-enhanced batteries’ lifespans. But a company spokesperson told IEEE Spectrum that the data it has received thus far indicates specific energy is improved by 15 percent compared with comparable batteries currently on the market.

The company has made a big bet that players all along the battery production chain—from fabricators of anodes and cathodes to Tier 1 battery suppliers, and even electric vehicle manufacturers—will view its take on ALD as a must-have step in battery manufacturing. Forge Battery is building a 25,700-square-meter gigafactory in North Carolina that it says will turn out 1 gigawatt-hour of its Atomic Armor–enhanced lithium-ion cells and finished batteries when it becomes operational in 2026.

Transistor-like Qubits Hit Key Benchmark



A team in Australia has recently demonstrated a key advance in metal-oxide-semiconductor-based (or MOS-based) quantum computers. They showed that their two-qubit gates—logical operations that involve more than one quantum bit, or qubit—perform without errors 99 percent of the time. This number is important, because it is the baseline necessary to perform error correction, which is believed to be necessary to build a large-scale quantum computer. What’s more, these MOS-based quantum computers are compatible with existing CMOS technology, which will make it more straightforward to manufacture a large number of qubits on a single chip than with other techniques.

“Getting over 99 percent is significant because that is considered by many to be the error correction threshold, in the sense that if your fidelity is lower than 99 percent, it doesn’t really matter what you’re going to do in error correction,” says Yuval Boger, chief commercial officer of the quantum computing company QuEra, who wasn’t involved in the work. “You’re never going to fix errors faster than they accumulate.”

There are many contending platforms in the race to build a useful quantum computer. IBM, Google, and others are building their machines out of superconducting qubits. Quantinuum and IonQ use individual trapped ions. QuEra and Atom Computing use neutral atoms. Xanadu and PsiQuantum are betting on photons. The list goes on.

In the new result, a collaboration between the University of New South Wales (UNSW) and Sydney-based startup Diraq, with contributors from Japan, Germany, Canada, and the U.S., has taken yet another approach: trapping single electrons in MOS devices. “What we are trying to do is we are trying to make qubits that are as close to traditional transistors as they can be,” says Tuomo Tanttu, a research fellow at UNSW who led the effort.

Qubits That Act Like Transistors

These qubits are indeed very similar to a regular transistor, gated in such a way as to have only a single electron in the channel. The biggest advantage of this approach is that it can be manufactured using traditional CMOS technologies, making it theoretically possible to scale to millions of qubits on a single chip. Another advantage is that MOS qubits can be integrated on-chip with standard transistors for simplified input, output, and control, says Diraq CEO Andrew Dzurak.

The drawback of this approach, however, is that MOS qubits have historically suffered from device-to-device variability, causing significant noise on the qubits.

“The sensitivity in [MOS] qubits is going to be more than in transistors, because in transistors, you still have 20, 30, 40 electrons carrying the current. In a qubit device, you’re really down to a single electron,” says Ravi Pillarisetty, a senior device engineer for Intel quantum hardware who wasn’t involved in the work.

The team’s result not only demonstrated 99 percent fidelity for two-qubit gates on the test devices but also helped the researchers better understand the sources of device-to-device variability. The team tested three devices with three qubits each. In addition to measuring the error rate, they performed comprehensive studies to glean the underlying physical mechanisms that contribute to noise.

The researchers found that one of the sources of noise was isotopic impurities in the silicon layer, which, when controlled, greatly reduced the circuit complexity necessary to run the device. The next leading cause of noise was small variations in electric fields, likely due to imperfections in the oxide layer of the device. Tanttu says this is likely to improve by transitioning from a laboratory clean room to a foundry environment.

“It’s a great result and great progress. And I think it’s setting the right direction for the community in terms of thinking less about one individual device, or demonstrating something on an individual device, versus thinking more longer term about the scaling path,” Pillarisetty says.

Now, the challenge will be to scale up these devices to more qubits. One difficulty with scaling is the number of input/output channels required. The quantum team at Intel, which is pursuing a similar technology, has recently pioneered a chip it calls Pando Tree to try to address this issue. Pando Tree will sit on the same plane as the quantum processor, enabling faster inputs and outputs to the qubits. The Intel team hopes to use it to scale to thousands of qubits. “A lot of our approach is thinking about, how do we make our qubit processor look more like a modern CPU?” says Pillarisetty.

Similarly, Diraq CEO Dzurak says his team plans to scale its technology to thousands of qubits in the near future through a recently announced partnership with Global Foundries. “With Global Foundries, we designed a chip that will have thousands of these [MOS qubits]. And these will be interconnected by using classical transistor circuitry that we designed. This is unprecedented in the quantum computing world,” Dzurak says.

Will the "AI Scientist" Bring Anything to Science?



When an international team of researchers set out to create an “AI scientist” to handle the whole scientific process, they didn’t know how far they’d get. Would the system they created really be capable of generating interesting hypotheses, running experiments, evaluating the results, and writing up papers?

What they ended up with, says researcher Cong Lu, was an AI tool that they judged equivalent to an early Ph.D. student. It had “some surprisingly creative ideas,” he says, but those good ideas were vastly outnumbered by bad ones. It struggled to write up its results coherently, and sometimes misunderstood its results: “It’s not that far from a Ph.D. student taking a wild guess at why something worked,” Lu says. And, perhaps like an early Ph.D. student who doesn’t yet understand ethics, it sometimes made things up in its papers, despite the researchers’ best efforts to keep it honest.

Lu, a postdoctoral research fellow at the University of British Columbia, collaborated on the project with several other academics, as well as with researchers from the buzzy Tokyo-based startup Sakana AI. The team recently posted a preprint about the work on the arXiv server. And while the preprint includes a discussion of limitations and ethical considerations, it also contains some rather grandiose language, billing the AI scientist as “the beginning of a new era in scientific discovery,” and “the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models (LLMs) to perform research independently and communicate their findings.”

The AI scientist seems to capture the zeitgeist. It’s riding the wave of enthusiasm for AI for science, but some critics think that wave will toss nothing of value onto the beach.

The “AI for Science” Craze

This research is part of a broader trend of AI for science. Google DeepMind arguably started the craze back in 2020 when it unveiled AlphaFold, an AI system that amazed biologists by predicting the 3D structures of proteins with unprecedented accuracy. Since generative AI came on the scene, many more big corporate players have gotten involved. Tarek Besold, a SonyAI senior research scientist who leads the company’s AI for scientific discovery program, says that AI for science is “a goal behind which the AI community can rally in an effort to advance the underlying technology but—even more importantly—also to help humanity in addressing some of the most pressing issues of our times.”

Yet the movement has its critics. Shortly after a 2023 Google DeepMind paper came out claiming the discovery of 2.2 million new crystal structures (“equivalent to nearly 800 years’ worth of knowledge”), two materials scientists analyzed a random sampling of the proposed structures and said that they found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In other words, AI can generate a lot of results quickly, but those results may not actually be useful.

How the AI Scientist Works

In the case of the AI scientist, Lu and his collaborators tested their system only on computer science, asking it to investigate topics relating to large language models, which power chatbots like ChatGPT and also the AI scientist itself, and the diffusion models that power image generators like DALL-E.

The AI scientist’s first step is hypothesis generation. Given the code for the model it’s investigating, it freely generates ideas for experiments it could run to improve the model’s performance, and scores each idea on interestingness, novelty, and feasibility. It can iterate at this step, generating variations on the ideas with the highest scores. Then it runs a check in Semantic Scholar to see if its proposals are too similar to existing work. It next uses a coding assistant called Aider to run its code and take notes on the results in the format of an experiment journal. It can use those results to generate ideas for follow-up experiments.

different colored boxes with arrows and black text against a white background The AI scientist is an end-to-end scientific discovery tool powered by large language models. University of British Columbia

The next step is for the AI scientist to write up its results in a paper using a template based on conference guidelines. But, says Lu, the system has difficulty writing a coherent nine-page paper that explains its results—”the writing stage may be just as hard to get right as the experiment stage,” he says. So the researchers broke the process down into many steps: The AI scientist wrote one section at a time, and checked each section against the others to weed out both duplicated and contradictory information. It also goes through Semantic Scholar again to find citations and build a bibliography.

But then there’s the problem of hallucinations—the technical term for an AI making stuff up. Lu says that although they instructed the AI scientist to only use numbers from its experimental journal, “sometimes it still will disobey.” Lu says the model disobeyed less than 10 percent of the time, but “we think 10 percent is probably unacceptable.” He says they’re investigating a solution, such as instructing the system to link each number in its paper to the place it appeared in the experimental log. But the system also made less obvious errors of reasoning and comprehension, which seem harder to fix.

And in a twist that you may not have seen coming, the AI scientist even contains a peer review module to evaluate the papers it has produced. “We always knew that we wanted some kind of automated [evaluation] just so we wouldn’t have to pore over all the manuscripts for hours,” Lu says. And while he notes that “there was always the concern that we’re grading our own homework,” he says they modeled their evaluator after the reviewer guidelines for the leading AI conference NeurIPS and found it to be harsher overall than human evaluators. Theoretically, the peer review function could be used to guide the next round of experiments.
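
Taken together, those stages form a loop: propose ideas, screen them for novelty, run experiments, write the paper section by section, and review the result. The Python sketch below is a condensed reading of that workflow; every function is a stand-in stub, and none of the names correspond to the project's actual code, which calls LLMs, Semantic Scholar, and the Aider assistant at these points.

```python
# A condensed, runnable sketch of the AI-scientist loop described above.
# Every function is a placeholder stub, not the project's real code.

def generate_ideas(model_code, n):
    return [{"title": f"idea-{k}", "score": n - k} for k in range(n)]  # stub: LLM brainstorming

def is_novel(idea):
    return True  # stub: similarity check against Semantic Scholar results

def run_experiment(idea, variant):
    return {"variant": variant, "metric": 0.9}  # stub: coding assistant edits and runs the code

def write_paper(idea, journal):
    # The real system drafts one section at a time and cross-checks sections
    # against one another to weed out duplicated or contradictory claims.
    return f"Paper on {idea['title']}, citing {len(journal)} experiments"

def peer_review(paper):
    return {"score": 4, "decision": "reject"}  # stub: reviewer modeled on conference guidelines

def ai_scientist(model_code, n_ideas=3):
    ideas = [i for i in generate_ideas(model_code, n_ideas) if is_novel(i)]
    ideas.sort(key=lambda i: i["score"], reverse=True)  # rank by interestingness, novelty, feasibility
    output = []
    for idea in ideas:
        journal = [run_experiment(idea, v) for v in range(2)]  # results kept as an experiment journal
        paper = write_paper(idea, journal)
        output.append((paper, peer_review(paper)))  # reviews could seed follow-up experiments
    return output

print(ai_scientist("model.py"))
```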

Critiques of the AI Scientist

While the researchers confined their AI scientist to machine learning experiments, Lu says the team has had a few interesting conversations with scientists in other fields. In theory, he says, the AI scientist could help in any field where experiments can be run in simulation. “Some biologists have said there’s a lot of things that they can do in silico,” he says, also mentioning quantum computing and materials science as possible fields of endeavor.

Some critics of the AI for science movement might take issue with that broad optimism. Earlier this year, Jennifer Listgarten, a professor of computational biology at UC Berkeley, published a paper in Nature Biotechnology arguing that AI is not about to produce breakthroughs in multiple scientific domains. Unlike the AI fields of natural language processing and computer vision, she wrote, most scientific fields don’t have the vast quantities of publicly available data required to train models.

Two other researchers who study the practice of science, anthropologist Lisa Messeri of Yale University and psychologist M.J. Crockett of Princeton University, published a 2024 paper in Nature that sought to puncture the hype surrounding AI for science. When asked for a comment about this AI scientist, the two reiterated their concerns over treating “AI products as autonomous researchers.” They argue that doing so risks narrowing the scope of research to questions that are suited for AI, and losing out on the diversity of perspectives that fuels real innovation. “While the productivity promised by ‘the AI Scientist’ may sound appealing to some,” they tell IEEE Spectrum, “producing papers and producing knowledge are not the same, and forgetting this distinction risks that we produce more while understanding less.”

But others see the AI scientist as a step in the right direction. SonyAI’s Besold says he believes it’s a great example of how today’s AI can support scientific research when applied to the right domain and tasks. “This may become one of a handful of early prototypes that can help people conceptualize what is possible when AI is applied to the world of scientific discovery,” he says.

What’s Next for the AI Scientist

Lu says that the team plans to keep developing the AI scientist, and he says there’s plenty of low-hanging fruit as they seek to improve its performance. As for whether such AI tools will end up playing an important role in the scientific process, “I think time will tell what these models are good for,” Lu says. It might be, he says, that such tools are useful for the early scoping stages of a research project, when an investigator is trying to get a sense of the many possible research directions—although critics add that we’ll have to wait for future studies to see if these tools are really comprehensive and unbiased enough to be helpful.

Or, Lu says, if the models can be improved to the point that they match the performance of “a solid third-year Ph.D. student,” they could be a force multiplier for anyone trying to pursue an idea (at least, as long as the idea is in an AI-suitable domain). “At that point, anyone can be a professor and carry out a research agenda,” says Lu. “That’s the exciting prospect that I’m looking forward to.”

Greener Steel Production Requires More Electrochemical Engineers



In the 1800s, aluminum was considered more valuable than gold or silver because it was so expensive to produce the metal in any quantity. That changed with the Hall-Héroult smelting process, which pioneered the electrochemical reduction of aluminum oxide in 1886. The advance made aluminum far more available and affordable, rapidly transforming it into a core material used in the manufacturing of aircraft, power lines, food-storage containers, and more.

As society mobilizes against the pressing climate crisis we face today, we find ourselves seeking transformative solutions to tackle environmental challenges. Much as electrochemistry modernized aluminum production, science holds the key to revolutionizing steel and iron manufacturing.

Electrochemistry can help save the planet

As the world embraces clean energy solutions such as wind turbines, electric vehicles, and solar panels to address the climate crisis, changing how we approach manufacturing becomes critical. Traditional steel production—which requires burning fossil fuels at temperatures exceeding 1,600 °C to convert ore into iron—currently accounts for about 10 percent of the planet’s annual CO2 emissions. Continuing with conventional methods risks undermining progress toward environmental goals.

Scientists are already applying electrochemistry—which provides direct electrical control of oxidation-reduction reactions—to convert ore into iron. The conversion is an essential step in steel production and its most emissions-intensive part. Electrochemical engineers can drive the shift toward a cleaner steel and iron industry by rethinking which optimizations to prioritize.

When I first studied engineering thermodynamics in 1998, electricity—which was five times the price per joule of heat—was considered a premium form of energy to be used only when absolutely required.

Since then the price of electricity has steadily decreased. But emissions are now known to be much more harmful and costly.

Engineers today need to adjust currently accepted practices to develop new solutions that prioritize mass efficiency over energy efficiency.

In addition to electrochemical engineers working toward a cleaner steel and iron industry, advancements in technology and cheaper renewables have put us in an “electrochemical moment” that promises change across multiple sectors.

The plummeting cost of photovoltaic panels and wind turbines, for example, has led to more affordable renewable electricity. And electrical distribution systems designed for electric vehicles can be repurposed for modular electrochemical reactors.

Electrochemistry holds the potential to support the development of clean, green infrastructure beyond batteries, electrolyzers, and fuel cells. Electrochemical processes and methods can be scaled to produce metals, ceramics, composites, and even polymers at scales previously reserved for thermochemical processes. With enough effort and thought, electrochemical production can lead to billions of tons of metal, concrete, and plastic. And because electrochemistry directly accesses the electron transfer fundamental to chemistry, the same materials can be recycled using renewable energy.

As renewables are expected to account for more than 90 percent of global electricity expansion during the next five years, scientists and engineers focused on electrochemistry must figure out how best to utilize low-cost wind and solar energy.

The core components of electrochemical systems, including complex oxides, corrosion-resistant metals, and high-power precision power converters, are now an exciting set of tools for the next evolution of electrochemical engineering.

The scientists who came before have created a stable set of building blocks; the next generation of electrochemical engineers needs to use them to create elegant, reliable reactors and other systems to produce the processes of the future.

Three decades ago, electrochemical engineering courses were, for the most part, graduate-level electives. Now almost every top-ranked institutional R&D center has full tracks of electrochemical engineering. Students interested in the field should take classes in electroanalytical chemistry and electrochemical methods, as well as coursework in electrochemical energy storage and materials processing.

Although scaled electrochemical production is possible, it is not inevitable. It will require the combined efforts of the next generation of engineers to reach its potential scale.

Just as scientists found a way to unlock the potential of the abundant, once-unattainable aluminum, engineers now have the opportunity to shape a cleaner, more sustainable future. Electrochemistry has the power to flip the switch to clean energy, paving the way for a world in which environmental harmony and industrial progress go hand in hand.

Get to Know the IEEE Board of Directors



The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity.

This article features IEEE Board of Directors members A. Matt Francis, Tom Murad, and Christopher Root.

IEEE Senior Member A. Matt Francis

Director, IEEE Region 5: Southwestern U.S.

A photo of a smiling man in a sweater. Moriah Hargrove Anders

Francis’s primary technology focus is extreme environment and high-temperature integrated circuits. His groundbreaking work has pushed the boundaries of electronics, leading to computers operating in low Earth orbit for more than a year on the International Space Station and on jet engines. Francis and his team have designed and built some of the world’s most rugged semiconductors and systems.

He is currently helping explore new computing frontiers in supersonic and hypersonic flight, geothermal energy exploration, and molten salt reactors. Well versed in shifting technology from idea to commercial application, Francis has secured and led projects with the U.S. Air Force, DARPA, NASA, the National Science Foundation, the U.S. Department of Energy, and private-sector customers.

Francis’s influence extends beyond his own ventures. He is a member of the IEEE Aerospace and Electronic Systems, IEEE Computer, and IEEE Electronics Packaging societies, demonstrating his commitment to industry and continuous learning.

He attended the University of Arkansas in Fayetteville for both his undergraduate and graduate degrees. He joined IEEE while at the university and was president of the IEEE–Eta Kappa Nu honor society’s Gamma Phi chapter. Francis’s other past volunteer roles include serving as chair of the IEEE Ozark Section, which covers Northwest Arkansas, and also as a member of the IEEE-USA Entrepreneurship Policy Innovation Committee.

His deep-rooted belief in the power of collaboration is evident in his willingness to share knowledge and support aspiring entrepreneurs. Francis is proud to have helped found a robotics club (an IEEE MGA Local Group) in his rural Elkins, Ark., community and to have served on steering committees for programs including IEEE TryEngineering and IEEE-USA’s Innovation, Research, and Workforce Conferences. He serves as an elected city council member for his town, and has cofounded two non-profits, supporting his community and the state of Arkansas.

Francis’s journey from entrepreneur to industry leader is a testament to his determination and innovative mindset. He has received numerous awards including the IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit, IEEE Region 5 Directors Award, and IEEE Region 5 Outstanding Individual Member Achievement Award.

IEEE Senior Member Tom Murad

Director, IEEE Region 7: Canada

A photo of a smiling man in a suit. Siemens Canada

Murad is a respected technology leader, award-winning educator, and distinguished speaker on engineering, skills development, and education. Recently retired, he has 40 years of experience in professional engineering and technical operations executive management, including more than 10 years of academic and R&D work in industrial controls and automation.

He received his Ph.D. in power electronics and industrial controls from Loughborough University of Technology, in the U.K.

Murad has held high-level positions in several international engineering and industrial organizations, and he has contributed to many global industrial projects. His work on projects in the power utility, nuclear power, oil and gas, mining, automotive, and infrastructure industries has directly impacted society and positively contributed to the economy. He is a strong advocate of innovation and creativity, particularly in the areas of digitalization, smart infrastructure, and Industry 4.0. He continues his academic career as an adjunct professor at the University of Guelph in Ontario, Canada.

His dedication to enhancing the capabilities of new generations of engineers is a source of hope and optimism. His work in significantly improving the quality and relevance of engineering and technical education in Canada is a testament to his commitment to the future of the engineering profession and community. In recognition of that work, the Ontario government appointed him to the board of directors of the Postsecondary Education Quality Assessment Board (PEQAB).

Murad is a member of the IEEE Technology and Engineering Management, IEEE Education, IEEE Intelligent Transportation Systems, and IEEE Vehicular Technology societies as well as the IEEE-Eta Kappa Nu honor society, and he chairs the editorial advisory board of IEEE Canadian Review magazine. His accomplishments show his passion for the engineering profession and community.

He is a member of the Order of Honour of Professional Engineers Ontario, a Fellow of Engineers Canada, and a Fellow of the Engineering Institute of Canada (EIC), and he received the IEEE Canada J.M. Ham Outstanding Engineering Educator Award, among other recognitions highlighting his impact on the field.

IEEE Senior Member Christopher Root

Director, Division VII

A photo of a smiling man in a suit. Vermont Electric Power Company and Shana Louiselle

Root has been in the electric utility industry for more than 40 years and is an expert in power system operations, engineering, and emergency response. He has vast experience in the operations, construction, and maintenance of transmission and distribution utilities, including all phases of the engineering and design of power systems. He has shared his expertise through numerous technical presentations on utility topics worldwide.

Currently an industry advisor and consultant, Root focuses on the crucial task of decarbonizing electricity production. He is engaged in addressing the challenge of balancing a growing electricity market and increasing dependence on renewable energy with the need to provide low-cost, reliable electricity on demand.

Root’s journey with IEEE began in 1983 when he attended his first meeting as a graduate student at Rensselaer Polytechnic Institute, in Troy, N.Y. Since then, he has served in leadership roles such as treasurer, secretary, and member-at-large of the IEEE Power & Energy Society (PES). His commitment to the IEEE mission and vision is evident in his efforts to revitalize the dormant IEEE PES Boston Chapter in 2007 and his instrumental role in establishing the IEEE PES Green Mountain Section in Vermont in 2015. He also is a member of the editorial board of the IEEE Power & Energy Magazine and the IEEE–Eta Kappa Nu honor society.

Root’s contributions and leadership in the electric utility industry have been recognized with the IEEE PES Leadership in Power Award and the PES Meritorious Service Award.

Video Friday: HAND to Take on Robotic Hands



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

The National Science Foundation Human AugmentatioN via Dexterity Engineering Research Center (HAND ERC) was announced in August 2024. Funded for up to 10 years and $52 million, the HAND ERC is led by Northwestern University, with core members Texas A&M, Florida A&M, Carnegie Mellon, and MIT, and support from Wisconsin-Madison, Syracuse, and an innovation ecosystem consisting of companies, national labs, and civic and advocacy organizations. HAND will develop versatile, easy-to-use dexterous robot end effectors (hands).

[ HAND ]

The Environmental Robotics Lab at ETH Zurich, in partnership with Wilderness International (and some help from DJI and Audi), is using drones to sample DNA from the tops of trees in the Peruvian rainforest. Somehow, the treetops are where 60 to 90 percent of biodiversity is found, and these drones can help researchers determine what the heck is going on up there.

[ ERL ]

Thanks, Steffen!

1X introduces NEO Beta, “the pre-production build of our home humanoid.”

“Our priority is safety,” said Bernt Børnich, CEO at 1X. “Safety is the cornerstone that allows us to confidently introduce NEO Beta into homes, where it will gather essential feedback and demonstrate its capabilities in real-world settings. This year, we are deploying a limited number of NEO units in selected homes for research and development purposes. Doing so means we are taking another step toward achieving our mission.”

[ 1X ]

We love MangDang’s fun and affordable approach to robotics with Mini Pupper. The next generation of the little legged robot has just launched on Kickstarter, featuring new and updated robots that make it easy to explore embodied AI.

The Kickstarter is already fully funded after just a day or two, but there are still plenty of robots up for grabs.

[ Kickstarter ]

Quadrupeds in space can use their legs to reorient themselves. Or, if you throw one off a roof, it can learn to land on its feet.

To be presented at CoRL 2024.

[ ARL ]

HEBI Robotics, which apparently was once headquartered inside a Pittsburgh public bus, has imbued a table with actuators and a mind of its own.

[ HEBI Robotics ]

Carcinization is a concept in evolutionary biology where a crustacean that isn’t a crab eventually becomes a crab. So why not do the same thing with robots? Crab robots solve all problems!

[ KAIST ]

Waymo is smart, but also humans are really, really dumb sometimes.

[ Waymo ]

The Robotics Department of the University of Michigan created an interactive community art project. The group that led the creation believed that while roboticists typically take on critical and impactful problems in transportation, medicine, mobility, logistics, and manufacturing, there are many opportunities to find play and amusement. The final piece is a grid of art boxes, produced by different members of our robotics community, which offer an eight-inch-square view into their own work with robotics.

[ Michigan Robotics ]

I appreciate that UBTECH’s humanoid is doing an actual job, but why would you use a humanoid for this?

[ UBTECH ]

I’m sure most actuators go through some form of life-cycle testing. But if you really want to test an electric motor, put it into a BattleBot and see what happens.

[ Hardcore Robotics ]

Yes, but have you tried fighting a BattleBot?

[ AgileX ]

In this video, we present collaboration aerial grasping and transportation using multiple quadrotors with cable-suspended payloads. Grasping using a suspended gripper requires accurate tracking of the electromagnet to ensure a successful grasp while switching between different slack and taut modes. In this work, we grasp the payload using a hybrid control approach that switches between a quadrotor position control and a payload position control based on cable slackness. Finally, we use two quadrotors with suspended electromagnet systems to collaboratively grasp and pick up a larger payload for transportation.

[ Hybrid Robotics ]

I had not realized that the floretizing of broccoli was so violent.

[ Oxipital ]

While the RoboCup was held over a month ago, we still wanted to make a small summary of our results, the most memorable moments, and of course an homage to everyone who is involved with the B-Human team: the team members, the sponsors, and the fans at home. Thank you so much for making B-Human the team it is!

[ B-Human ]

A Match Made in Yorktown Heights



It pays to have friends in fascinating places. You need look no further than the cover of this issue and the article “IBM’s Big Bet on the Quantum-Centric Supercomputer” for evidence. The article by Ryan Mandelbaum, Antonio D. Córcoles, and Jay Gambetta came to us courtesy of the article’s illustrator, the inimitable graphic artist Carl De Torres, a longtime IEEE Spectrum contributor as well as a design and communications consultant for IBM Research.

Story ideas typically originate with Spectrum’s editors and pitches from expert authors and freelance journalists. So we were intrigued when De Torres approached Spectrum about doing an article on IBM Research’s cutting-edge work on quantum-centric supercomputing.

De Torres has been collaborating with IBM in a variety of capacities since 2009, when, while at Wired magazine creating infographics, he was asked by the ad agency Ogilvy to work on Big Blue’s advertising campaign “Let’s build a Smarter Planet.” That project went so well that De Torres struck out on his own the next year. His relationship with IBM expanded, as did his engagements with other media, such as Spectrum, Fortune, and The New York Times. “My interest in IBM quickly grew beyond helping them in a marketing capacity,” says De Torres, who owns and leads the design studio Optics Lab in Berkeley, Calif. “What I really wanted to do is get to the source of some of the smartest work happening in technology, and that was IBM Research.”

Last year, while working on visualizations of a quantum-centric supercomputer with Jay Gambetta, vice president and lead scientist of IBM Quantum at the Thomas J. Watson Research Center in Yorktown Heights, N.Y., De Torres was inspired to contact Spectrum’s creative director, Mark Montgomery, with an idea.

“I really loved this process because I got to bring together two of my favorite clients to create something really special.” —Carl De Torres

“I thought, ‘You know, I think IEEE Spectrum would love to see this work,’” De Torres told me. “So with Jay’s permission, I gave Mark a 30-second pitch. Mark liked it and ran it by the editors, and they said that it sounded very promising.” De Torres, members of the IBM Quantum team, and Spectrum editors had a call to brainstorm what the article could be. “From there everything quickly fell into place, and I worked with Spectrum and the IBM Quantum team on a visual approach to the story,” De Torres says.

As for the text, we knew it would take a deft editorial hand to help the authors explain what amounts to the peanut butter and chocolate of advanced computing. Fortunately for us, and for you, dear reader, Associate Editor Dina Genkina has a doctorate in atomic physics, in the subfield of quantum simulation. As Genkina explained to me, that specialty is “adjacent to quantum computing, but not quite the same—it’s more like the analog version of QC that’s not computationally complete.”

Genkina was thrilled to work with De Torres to make the technical illustrations both accurate and edifying. Spectrum prides itself on its tech illustrations, which De Torres notes are increasingly rare in the space-constrained era of mobile-media consumption.

“Working with Carl was so exciting,” Genkina says. “It was really his vision that made the article happen, and the scope of his ambition for the story was at times a bit terrifying. But it’s the kind of story where the illustrations make it come to life.”

De Torres was happy with the collaboration, too. “I really loved this process because I got to bring together two of my favorite clients to create something really special.”

This article appears in the September 2024 print issue.

Erika Cruz Keeps Whirlpool’s Machines Spinning



Few devices are as crucial to people’s everyday lives as their household appliances. Electrical engineer Erika Cruz says it’s her mission to make sure they operate smoothly.

Cruz helps design washing machines and dryers for Whirlpool, the multinational appliance manufacturer.

Erika Cruz

Employer: Whirlpool

Occupation: Associate electrical engineer

Education: Bachelor’s degree in electronics engineering, Industrial University of Santander, in Bucaramanga, Colombia

As a member of the electromechanical components team at Whirlpool’s research and engineering center in Benton Harbor, Mich., she oversees the development of timers, lid locks, humidity sensors, and other components.

More engineering goes into the machines than is obvious. Because the appliances are sold around the world, she says, they must comply with different technical and safety standards and environmental conditions. And reliability is key.

“If the washer’s door lock gets stuck and your clothes are inside, your whole day is going to be a mess,” she says.

While appliances can be taken for granted, Cruz loves that her work contributes in its own small way to the quality of life of so many.

“I love knowing that every time I’m working on a new design, the lives of millions of people will be improved by using it,” she says.

From Industrial Design to Electrical Engineering

Cruz grew up in Bucaramanga, Colombia, where her father worked as an electrical engineer, designing control systems for poultry processing plants. Her childhood home was full of electronics, and Cruz says her father taught her about technology. He paid her to organize his resistors, for example, and asked her to create short videos for work presentations about items he was designing. He also took Cruz and her sister along with him to the processing plants.

“We would go and see how the big machines worked,” she says. “It was very impressive because of their complexity and impact. That’s how I got interested in technology.”

In 2010, Cruz enrolled in Colombia’s Industrial University of Santander, in Bucaramanga, to study industrial design. But she quickly became disenchanted with the course’s focus on designing objects like fancy tables and ergonomic chairs.

“I wanted to design huge machines like my father did,” she says.

A teacher suggested that she study mechanical engineering instead. But her father was concerned about discrimination she might face in that career.

“He told me it would be difficult to get a job in the industry because mechanical engineers work with heavy machinery, and they saw women as being fragile,” Cruz says.

Her father thought electrical engineers would be more receptive to women, so she switched fields.

“I am very glad I ended up studying electronics because you can apply it to so many different fields,” Cruz says. She received a bachelor’s degree in electronics engineering in 2019.

The Road to America

While at university, Cruz signed up for a program that allowed Colombian students to work summer jobs in the United States. She held a variety of summer positions in Galveston, Texas, from 2017 to 2019, including cashier, housekeeper, and hostess.

In 2018, she met her future husband, an American working at the same amusement park. When she returned the following summer, they started dating, and that September they married. Since she had already received her degree, he was eager for her to move to the States permanently, but she made the difficult decision to return to Colombia.

“With the language barrier and my lack of engineering experience, I knew if I stayed in the United States, I would have to continue working jobs like housekeeping forever,” she says. “So I told my husband he had to wait for me because I was going back home to get some engineering experience.”

“I love knowing that every time I’m working on a new design, the lives of millions of people will be improved by using it.”

Cruz applied for engineering jobs in neighboring Brazil, which had more opportunities than Colombia did. In 2021, she joined Whirlpool as an electrical engineer at its R&D site in Joinville, Brazil, where she brought sensors and actuators from new suppliers into mass production.

Meanwhile, she applied for a U.S. Green Card, which would allow her to work and live permanently in the country. She received it six months after starting her job. Cruz asked her manager about transferring to one of Whirlpool’s U.S. facilities, not expecting to have any luck. Her manager set up a phone call with the manager of the components team at the company’s Benton Harbor site to discuss the request. Cruz didn’t realize that the call was actually a job interview. She was offered a position there as an electrical engineer and moved to Michigan later that year.

Designing Appliances Is Complex

Designing a new washing machine or dryer is a complex process, Cruz says. First, feedback from customers about desirable features is used to develop a high-level design. Then the product design work is divided among small teams of engineers, each responsible for a particular subsystem, such as hardware, software, materials, or components.

Part of Cruz’s job is to test components from different suppliers to make sure they meet safety, reliability, and performance requirements. She also writes the documentation that explains the components’ function and design to other engineers.

Cruz then helps select the groups of components to be used in a particular application—combining, say, three temperature sensors with two humidity sensors in an optimized location to create a system that finds the best time to stop the dryer.
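To make that idea concrete, here is a minimal, purely hypothetical Python sketch of how readings from several temperature and humidity sensors might be combined into a stop decision. The sensor counts, thresholds, and stopping rule are invented for illustration and are not Whirlpool’s actual control logic.

```python
# Purely illustrative sketch of the kind of sensor fusion described above.
# The thresholds, sensor counts, and stop rule are invented for illustration
# and are not Whirlpool's actual control logic.

def should_stop_dryer(temps_c, humidities_pct,
                      target_temp_c=55.0, target_humidity_pct=12.0):
    """Decide whether to end the drying cycle.

    temps_c: readings from three (hypothetical) temperature sensors, in deg C
    humidities_pct: readings from two (hypothetical) humidity sensors, in percent
    """
    avg_temp = sum(temps_c) / len(temps_c)
    avg_humidity = sum(humidities_pct) / len(humidities_pct)
    # Stop when the exhaust air is warm enough and dry enough,
    # a rough proxy for "the clothes are dry."
    return avg_temp >= target_temp_c and avg_humidity <= target_humidity_pct

# Example: warm, dry exhaust air -> stop the cycle
print(should_stop_dryer([56.2, 55.8, 57.1], [10.5, 11.2]))  # True
```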

Building a Supportive Environment

Cruz loves her job, but her father’s fears about her entering a male-dominated field weren’t unfounded. Discrimination was worse in Colombia, she says, where she regularly experienced inappropriate comments and behavior from university classmates and teachers.

Even in the United States, she points out, “As a female engineer, you have to actually show you are able to do your job, because occasionally at the beginning of a project men are not convinced.”

In both Brazil and Michigan, Cruz says, she’s been fortunate to often end up on teams with a majority of women, who created a supportive environment. That support was particularly important when she had her first child and struggled to balance work and home life.

“It’s easier to talk to women about these struggles,” she says. “They know how it feels because they have been through it too.”

Update Your Knowledge

Working in the consumer electronics industry is rewarding, Cruz says. She loves going into a store or visiting someone’s home and seeing the machines that she’s helped build in action.

A degree in electronics engineering is a must for the field, Cruz says, but she’s also a big advocate of developing project management and critical-thinking skills. She holds the Certified Associate in Project Management credential from the Project Management Institute and has been trained in tools that facilitate critical thinking. She says the project management program taught her how to solve problems in a more systematic way and helped her stand out in interviews.

It’s also important to constantly update your knowledge, Cruz says, “because electronics is a discipline that doesn’t stand still. Keep learning. Electronics is a science that is constantly growing.”

NASCAR Unveils Electric Race Car Prototype



NASCAR, the stock car racing sanctioning body known for its high-octane events across the United States, is taking a significant step toward a greener future. In July, during the Chicago Street Race event, NASCAR unveiled a prototype battery-powered race car that marks the beginning of its push to decarbonize motorsports. This move is part of NASCAR’s broader strategy to achieve net-zero emissions by 2035.

The electric prototype represents a collaborative effort between NASCAR and its traditional original equipment manufacturer (OEM) partners—Chevrolet, Ford, and Toyota—along with ABB, a global technology leader. Built by NASCAR engineers, the car features three six-phase motors from Stohl Advanced Research and Development, an Austrian specialist in electric vehicle powertrains. Together, the motors produce 1,000 kilowatts at peak power, equivalent to approximately 1,300 horsepower. The energy is supplied by a 78-kilowatt-hour liquid-cooled lithium-ion battery operating at 756 volts, though the specific battery chemistry remains a closely guarded secret.

C.J. Tobin, Senior Engineer of Vehicle Systems at NASCAR and the lead engineer on the EV prototype project, explained the motivation behind the development. He told IEEE Spectrum that “The push for electric vehicles is continuing to grow, and when we started this project one and a half years ago, that growth was rapid. We wanted to showcase our ability to put an electric stock car on the track in collaboration with our OEM partners. Our racing series have always been a platform for OEMs to showcase their stock cars, and this is just another tool for them to demonstrate what they can offer to the public.”

Eleftheria Kontou, a professor of civil and environmental engineering at the University of Illinois Urbana-Champaign whose primary research focus is transportation engineering, said in an interview that “It was an excellent introduction of the new technology to NASCAR fans, and I hope that the fans will be open to seeing more innovations in that space.”

A man talking while pointing under the hood of an open car. John Probst, NASCAR’s SVP of Innovation and Racing Development, speaks during the unveiling of the new EV prototype. Jared C. Tilton/Getty Images


The electric race car is not just about speed; it’s also about sustainability. The car’s body panels are made from ampliTex, a sustainable flax-based composite supplied by Bcomp, a Swiss manufacturer specializing in composites made from natural fibers. AmpliTex is lighter, more moldable, and more durable than traditional materials like steel or aluminum, making the car more efficient and aerodynamic.

Regenerative braking is another key feature of the electric race car. As the car slows down, it converts some of its kinetic energy into electric charge that feeds back into the battery. This feature is most advantageous on road courses like the one in Chicago and on short oval tracks like Martinsville Speedway in Virginia.
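For a rough sense of scale, the sketch below estimates how much energy a single braking event might return to the prototype’s 78-kilowatt-hour battery. The vehicle mass, braking speeds, and recovery efficiency are assumptions made for illustration, not figures published by NASCAR.

```python
# Back-of-the-envelope estimate of energy recovered by regenerative braking.
# Assumed values (not published by NASCAR): vehicle mass and recovery efficiency.

MASS_KG = 1500.0                     # assumed curb mass of the EV prototype
V_IN_KMH, V_OUT_KMH = 160.0, 60.0    # braking from 160 km/h down to 60 km/h
RECOVERY_EFF = 0.65                  # assumed fraction of kinetic energy returned

def kinetic_energy_j(mass_kg, speed_kmh):
    v = speed_kmh / 3.6              # km/h -> m/s
    return 0.5 * mass_kg * v ** 2

delta_ke_j = kinetic_energy_j(MASS_KG, V_IN_KMH) - kinetic_energy_j(MASS_KG, V_OUT_KMH)
recovered_kwh = RECOVERY_EFF * delta_ke_j / 3.6e6   # J -> kWh

print(f"Energy recovered per braking event: {recovered_kwh:.2f} kWh")
print(f"Share of the 78-kWh pack: {recovered_kwh / 78 * 100:.2f} %")
```

Under these assumptions each stop returns only a fraction of a kilowatt-hour, but that accumulates over the many braking zones of a street course or short oval, which is why the feature matters most on those layouts.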

“The Chicago Street Race was a great introduction for the EV prototype because it happens in a real-world setup where electric vehicles tend to thrive,” says Kontou, who also serves on the Steering Committee of the Illinois Alliance for Clean Transportation. “[It was a good venue for the car’s unveiling] because navigating the course requires more braking than is typical at many speedway tracks.”

Though the electric prototype is part of a larger NASCAR sustainability initiative, “There are no plans to use the electric vehicle in competition at this time,” a spokesman said. “The internal combustion engine plays an important role in NASCAR and there are no plans to move away from that.” So, die-hard stock-car racing fans can still anticipate the sounds and smells of V-8 engines burning gasoline as they hurtle around tracks and through street courses.

“The Chicago Street Race was a great introduction for the EV prototype because it happens in a real-world setup where electric vehicles tend to thrive.” —Eleftheria Kontou, University of Illinois

In its sustainability efforts, NASCAR lags well behind Formula One, its biggest rival atop the world’s motorsports hierarchy. Since 2014, the Fédération Internationale de l’Automobile (FIA), the governing body that also sanctions Formula One, has had an all-electric racing series, called Formula E. For the current season, which began in July, the ABB FIA Formula E World Championship boasts 11 teams competing in 17 races. This year’s races feature the league’s third generation of electric race cars, and a fourth generation is planned for introduction in 2026.

Asked how NASCAR plans to follow through on its pledge to make its core operations net-zero emissions by its self-imposed target date, the spokesman pointed to changes that would counterbalance the output of traditional stock cars, which are notorious for their poor fuel efficiency and high carbon emissions. Those include 100 percent renewable electricity at NASCAR-owned racetracks and facilities, as well as measures such as recycling and on-site charging stations for use by fans with EVs.

The spokesman also noted that NASCAR and its OEM partners are developing a more sustainable racing fuel, in light of the fact that stock cars consume, on average, about 47 liters of fuel for every 100 km they drive (5 miles per gallon). For comparison, U.S. federal regulators announced in June that they would begin enforcing an industry-wide fleet average of approximately 4.7 liters per 100 kilometers (50.4 miles per gallon) for model year 2031 and beyond. Fortunately for NASCAR, race cars are exempt from fuel-efficiency and tailpipe-emissions rules.
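Because the article mixes U.S. and metric fuel-economy units, a quick conversion helper shows how those figures relate; the constant 235.215 is the liters-per-100-kilometer equivalent of 1 mile per gallon.

```python
# Convert between U.S. miles per gallon and liters per 100 kilometers.
# 1 mile = 1.609344 km and 1 US gallon = 3.785411784 L, so the conversion
# constant is 100 * 3.785411784 / 1.609344 ≈ 235.215.

CONST = 235.215

def mpg_to_l_per_100km(mpg):
    return CONST / mpg

def l_per_100km_to_mpg(l_per_100km):
    return CONST / l_per_100km

print(f"{l_per_100km_to_mpg(47):.1f} mpg")        # ~5.0 mpg for a stock car
print(f"{mpg_to_l_per_100km(50.4):.1f} L/100 km")  # ~4.7 L/100 km for the 2031 fleet target
```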

While some may be tempted to compare NASCAR’s prototype racer with the cars featured in the ABB FIA Formula E World Championship, Tobin emphasized that NASCAR’s approach in designing the prototype was distinct. “Outside of us seeing that there was a series out there racing electric vehicles and seeing how things were run with Formula E, we leaned heavily on our OEMs and went with what they wanted to see at that time,” he said.

The apparently slow transition to electric vehicles in NASCAR is seen by some in the organization as both a response to environmental concerns and a proactive move to stay ahead of potential legislation that could threaten the future of motorsports. “NASCAR and our OEM partners want to be in the driver’s seat, no matter where we’re going,” says Tobin. “With the development of [the NextGen EV prototype], we wanted to showcase the modularity of the chassis and what powertrains we can build upon it—whether that be alternative fuels, battery electric power, or something unforeseen in the future…We want to continue to push the envelope.”

Seaport Electrification Could Slash Emissions Worldwide



According to the International Maritime Organization, shipping was responsible for over 1 billion tonnes of carbon dioxide emissions in 2018. A significant share of those emissions came from seaport activities, including ship berthing, cargo handling, and transportation within port areas. In response, governments, NGOs, and environmental watchdog groups are sounding alarms and advocating for urgent measures to mitigate pollution at the world’s ports.

One of the most promising solutions for the decarbonization of port operations involves electrifying these facilities. This plan envisions ships plugging into dockside electric power rather than running their diesel-powered auxiliary generators for lighting, cargo handling, heating and cooling, accommodation, and onboard electronics. It would also call for replacing diesel-powered cranes, forklifts, and trucks that move massive shipping containers from ship to shore with battery-powered alternatives.

To delve deeper into this transformative approach, IEEE Spectrum recently spoke with John Prousalidis, a leading advocate for seaport electrification. Prousalidis, a professor of marine electrical engineering at the National Technical University of Athens, has played a pivotal role in developing standards for seaport electrification through his involvement with the IEEE, the International Electrotechnical Commission (IEC), and the International Organization for Standardization (ISO). As vice-chair of the IEEE Marine Power Systems Coordinating Committee, he has been instrumental in advancing these ideas. Last year, Prousalidis co-authored a key paper, “Holistic Energy Transformation of Ports: The Proteus Plan,” in IEEE Electrification Magazine. In the paper, Prousalidis and his co-authors outlined their comprehensive vision for the future of port operations. The main points of the Proteus plan have been integrated into the policy document on Smart and Sustainable Ports coordinated by Prousalidis within the European Public Policy Committee Working Group on Energy; the policy document was approved in July 2024 by the IEEE Global Policy Committee.

A portrait of a man with glasses, wearing a suit and tie, looking at the camera. Professor John Prousalidis. John Prousalidis

What exactly is “cold ironing?”

John Prousalidis: Cold ironing involves shutting down a ship’s propulsion and auxiliary engines while at port and instead using electricity from shore to power onboard systems like air conditioning, cargo-handling equipment, kitchens, and lighting. This reduces emissions because electricity from the grid, especially from renewable sources, is more environmentally friendly than burning diesel fuel on site. The technical challenges include matching the ship’s voltage and frequency with those of the local grid, which vary from country to country, while also tackling grounding issues to protect against short circuits.

IEEE, along with the IEC and ISO, has developed a joint standard, 80005, which is a series of three standards covering high-voltage and low-voltage connections. It is perhaps (along with Wi-Fi, the standard for wireless communication) the “hottest” standard, because governmental bodies tend to make laws stipulating that this is the standard all ports need to follow to supply power to ships.
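To illustrate the matching problem Prousalidis describes, here is a minimal sketch that checks whether a ship’s electrical requirements line up with what a berth supplies and flags when a frequency converter or transformer would be needed. The voltages and frequencies are illustrative examples only; actual connection parameters are specified in the IEC/IEEE 80005 series.

```python
# Minimal sketch of the ship/shore compatibility check behind cold ironing.
# Voltages and frequencies below are illustrative examples only; real
# requirements come from the IEC/IEEE 80005 series of standards.

from dataclasses import dataclass

@dataclass
class PowerSpec:
    voltage_kv: float
    frequency_hz: float

def connection_needs(ship: PowerSpec, shore: PowerSpec) -> list[str]:
    """Return the conversion equipment needed to connect a ship to shore power."""
    needs = []
    if ship.frequency_hz != shore.frequency_hz:
        needs.append("frequency converter")   # e.g., a 50 Hz grid feeding a 60 Hz ship
    if ship.voltage_kv != shore.voltage_kv:
        needs.append("transformer")           # step the berth voltage up or down
    return needs

# Example: a 60 Hz, 6.6 kV vessel calling at a berth fed by a 50 Hz, 11 kV supply
ship = PowerSpec(voltage_kv=6.6, frequency_hz=60.0)
berth = PowerSpec(voltage_kv=11.0, frequency_hz=50.0)
print(connection_needs(ship, berth))  # ['frequency converter', 'transformer']
```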

How broad has adoption of this standard been?

Prousalidis: The European Union has mandated full compliance by January 1, 2030. In the United States, California led the way with similar measures in 2010. This aggressive remediation via electrification is now being adopted globally, with support from the International Maritime Organization.

Let’s talk about another interesting idea that’s part of the plan: regenerative braking on cranes. How does that work?

Prousalidis: When lowering shipping containers, cranes in regenerative braking mode convert the kinetic energy into electric charge instead of wasting it as heat. Just like when an electric vehicle is coming to a stop, the energy can be fed back into the crane’s battery, potentially saving up to 50 percent in energy costs—though a conservative estimate would be around 20 percent.
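For a sense of the numbers behind that claim, the sketch below estimates how much electricity one lowering cycle might return, based on the gravitational potential energy of the descending container. The container mass, lowering height, and recovery efficiency are assumptions chosen for illustration.

```python
# Rough estimate of energy recovered when a crane lowers one container.
# Mass, height, and efficiency are illustrative assumptions.

G = 9.81                 # m/s^2
CONTAINER_KG = 30_000    # assumed loaded container mass (~30 tonnes)
LOWERING_M = 30.0        # assumed lowering height from spreader to deck or quay
RECOVERY_EFF = 0.6       # assumed fraction of potential energy returned as electricity

potential_energy_j = CONTAINER_KG * G * LOWERING_M
recovered_kwh = RECOVERY_EFF * potential_energy_j / 3.6e6   # J -> kWh

print(f"Recovered per lowering cycle: {recovered_kwh:.2f} kWh")
# Over thousands of container moves per day, these kilowatt-hours add up,
# which is where savings on the order of 20 to 50 percent could come from.
```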

What are the estimated upfront costs for implementing cold ironing at, say, the Port of Los Angeles, which is the largest port in the United States?

Prousalidis: The cost for a turnkey solution is approximately US $1.7 million per megawatt, covering grid upgrades, infrastructure, and equipment. A rough estimate using some established rules of thumb would be about $300 million. The electrification process at that port has already begun. There are, as far as I know, about 60 or more electrical connection points for ships at berths there.
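The arithmetic behind that rough estimate is simple enough to check; the sketch below backs out the shore-power capacity implied by the two figures Prousalidis gives. That capacity is an inference from his round numbers, not a published specification for the Port of Los Angeles.

```python
# Back out the shore-power capacity implied by Prousalidis's rule of thumb.
# These are his round numbers; the implied capacity is an inference, not a
# published figure for the Port of Los Angeles.

COST_PER_MW_USD = 1.7e6        # turnkey cold-ironing cost per megawatt
TOTAL_ESTIMATE_USD = 300e6     # rough total for the Port of Los Angeles

implied_capacity_mw = TOTAL_ESTIMATE_USD / COST_PER_MW_USD
print(f"Implied shore-power capacity: {implied_capacity_mw:.0f} MW")  # ~176 MW
```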

How significant would the carbon reduction from present levels be if there were complete electrification with renewable energy at the world’s 10 biggest and busiest ports?


Prousalidis: If ports fully electrify using renewable energy, the European Union’s policy could achieve a 100-percent reduction in ship emissions in the port areas. According to the IMO’s approach, which considers the energy mix of each country, it could lead to a 60-percent reduction. This significant emission reduction means lower emissions of CO2, nitrogen oxides, sulfur oxides, and particulate matter, thus reducing shipping’s contribution to global warming and lowering health risks in nearby population centers.

If all goes according to plan, and every country with port operations goes full bore toward electrification, how long do you think it will realistically take to completely decarbonize that aspect of shipping?

Prousalidis: As I said, the European Union is targeting full port electrification by 1 January 2030. However, with around 600 to 700 ports in Europe alone, and the need for grid upgrades, delays are possible. Despite this, we should focus on meeting the 2030 deadline rather than anticipating extensions. This recalls the words of Mercury and Apollo astronaut Alan Shepard, who explained the difference between a test pilot and an ordinary professional pilot: “Suppose each of them had 10 seconds before crashing. The conventional pilot would think, In 10 seconds I’m going to die. The test pilot would say to himself, I’ve got 10 seconds to save myself and save the craft.” The point is that, in a critical situation like the fight against global warming, we should focus on the time we have to solve the problem, not on what happens after time runs out. But humanity doesn’t have an eject button to press if we don’t make every effort to avoid the detrimental consequences that will come with failure of the “save the planet” projects.

Sydney’s Tech Super-Cluster Propels Australia’s AI Industry Forward



This is a sponsored article brought to you by BESydney.

Australia has experienced a remarkable surge in AI enterprise during the past decade. Significant AI research and commercialization concentrated in Sydney drives the sector’s development nationwide and influences AI trends globally. The city’s cutting-edge AI sector sees academia, business and government converge to foster groundbreaking advancements, positioning Australia as a key player on the international stage.

Sydney – home to half of Australia’s AI companies

Sydney has been pinpointed as one of four urban super-clusters in Australia, featuring the highest number of tech firms and the most substantial research in the country.

The Geography of Australia’s Digital Industries report, commissioned by the national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Tech Council of Australia, found Sydney is home to 119,636 digital professionals and 81 digital technology companies listed on the Australian Securities Exchange, with a combined worth of A$52 billion.

AI is infusing all areas of this tech landscape. According to CSIRO, more than 200 active AI companies operate across Greater Sydney, representing almost half of the country’s 544 AI companies.

“Sydney is the capital of AI startups for Australia and this part of Australasia”
—Toby Walsh, UNSW Sydney

With this extensive AI commercialization and collaboration in progress across Sydney, AI startups are flourishing.

“Sydney is the capital of AI startups for Australia and this part of Australasia,” according to Toby Walsh, Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at the University of New South Wales (UNSW Sydney).

He cites robotics, AI in medicine and fintech as three areas where Sydney leads the world in AI innovation.

“As a whole, Australia punches well above its weight in the AI sector,” Professor Walsh says. “We’re easily in the top 10, and by some metrics, we’re in the top five in the world. For a country of just 25 million people, that is quite remarkable.”

Sydney’s universities at the forefront of AI research

A key to Sydney’s success in the sector is the strength of its universities, which are producing outstanding research.

In 2021, the University of Sydney (USYD), the University of New South Wales (UNSW Sydney), and the University of Technology Sydney (UTS) collectively produced more than 1,000 peer-reviewed publications in artificial intelligence, contributing significantly to the field’s development.

According to CSIRO, Australia’s research and development sector has higher rates of AI adoption than global averages, with Sydney presenting the highest AI publishing intensity among Australian universities and research institutes.

Professor Aaron Quigley, Science Director and Deputy Director of CSIRO’s Data61 and Head of the School of Computer Science and Engineering at UNSW Sydney, says Sydney’s AI prowess is supported by a robust educational pipeline that supplies skilled graduates to a wide range of industries that are rapidly adopting AI technologies.

“Sydney’s AI sector is backed up by the fact that you have such a large educational environment with universities like UTS, USYD and UNSW Sydney,” he says. “They rank in the top five of AI locations in Australia.”

UNSW Sydney is a heavy hitter, with more than 300 researchers applying AI across various critical fields such as hydrogen fuel catalysis, coastal monitoring, safe mining, medical diagnostics, epidemiology and stress management.

A photo of a smiling man next to a device. UNSW Sydney has more than 300 researchers applying AI across various critical fields such as hydrogen fuel catalysis, coastal monitoring, safe mining, medical diagnostics, epidemiology, and stress management. UNSW

UNSW Sydney’s AI Institute also has the largest concentration of academics working in AI in the country, adds Professor Walsh.

“One of the main reasons the AI Institute exists at UNSW Sydney is to be a front door to industry and government, to help translate the technology out of the laboratory and into practice,” he says.

Likewise, the Sydney Artificial Intelligence Centre at the University of Sydney, the Australian Artificial Intelligence Institute at UTS, and Macquarie University’s Centre for Applied Artificial Intelligence are producing world-leading research in collaboration with industry.

Alongside the universities, the Australian Government’s National AI Centre, based in Sydney, aims to support and accelerate Australia’s AI industry.

Synergies in Sydney: where tech titans converge

Sydney’s vortex of tech talent has meant exciting connections and collaborations are happening at lightning speed, allowing simultaneous growth of several high-value industries.

The intersection of quantum computing and AI came into sharper focus with the April 2024 announcement of a new Australian Centre for Quantum Growth at the University of Sydney. Funded under the Australian Government’s National Quantum Strategy, the centre aims to build strategic, lasting relationships that drive innovation, promote the quantum industry, and enhance Australia’s global competitiveness in the field.

“There’s nowhere else in the world that you’re going to get a quantum company, a games company, and a cybersecurity company in such close proximity across this super-cluster arc located in Sydney”
—Aaron Quigley, UNSW Sydney

“There’s a huge amount of experience in the quantum space in Sydney,” says Professor Quigley. “Then you have a large number of companies and researchers working in cybersecurity, so you have the cybersecurity-AI nexus as well. Then you’ve got a large number of media companies and gaming companies in Sydney, so you’ve got the interconnection between gaming and creative technologies and AI.”

“So it’s a confluence of different industry spaces, and if you come here, you can tap into these different specialisms,” he adds. “There’s nowhere else in the world that you’re going to get a quantum company, a games company, and a cybersecurity company in such close proximity across this super-cluster arc located in Sydney.”

A global hub for AI innovation and collaboration

In addition to its research and industry achievements in the AI sector, Sydney is also a leading destination for AI conferences and events. The Women in AI Asia Pacific Conference is held in the city each year, adding much-needed diversity to the mix.

Additionally, the prestigious International Joint Conference on Artificial Intelligence was held in Sydney in 1991.

Overall, Sydney’s integrated approach to AI development, characterized by strong academic output, supportive government policies, and vibrant commercial activity, firmly establishes it as a leader in the global AI landscape.

To discover more about how Sydney is shaping the future of AI, download the latest eBook on Sydney’s Science & Engineering industry at besydney.com.au
