Nvidia, Oracle, Google, Dell and 13 other companies reported how long it takes their computers to train the key neural networks in use today. Among those results were the first glimpse of Nvidia’s next generation GPU, the B200, and Google’s upcoming accelerator, called Trillium. The B200 posted a doubling of performance on some tests versus today’s workhorse Nvidia chip, the H100. And Trillium delivered nearly a four-fold boost over the chip Google tested in 2023.
The benchmark tests, called MLPerf v4.1, consist of six tasks: recommendation, the pre-training of the large language models (LLMs) GPT-3 and BERT-large, the fine-tuning of the Llama 2 70B large language model, object detection, graph node classification, and image generation.
Training GPT-3 is such a mammoth task that it would be impractical to do the whole thing just to deliver a benchmark. Instead, the test is to train it to a point that experts have determined means it is likely to reach the goal if you kept going. For Llama 2 70B, the goal is not to train the LLM from scratch but to take an already trained model and fine-tune it so it’s specialized in a particular expertise—in this case, government documents. Graph node classification is a type of machine learning used in fraud detection and drug discovery.
As what’s important in AI has evolved, mostly toward using generative AI, the set of tests has changed. This latest version of MLPerf marks a complete changeover in what’s being tested since the benchmark effort began. “At this point all of the original benchmarks have been phased out,” says David Kanter, who leads the benchmark effort at MLCommons. In the previous round, some of the benchmarks were taking mere seconds to perform.
Performance of the best machine learning systems on various benchmarks has outpaced what would be expected if gains were solely from Moore’s Law [blue line]. Solid lines represent current benchmarks. Dashed lines represent benchmarks that have now been retired because they are no longer industrially relevant. MLCommons
According to MLPerf’s calculations, AI training on the new suite of benchmarks is improving at about twice the rate one would expect from Moore’s Law. As the years have gone on, results have plateaued more quickly than they did at the start of MLPerf’s reign. Kanter attributes this mostly to the fact that companies have figured out how to do the benchmark tests on very large systems. Over time, Nvidia, Google, and others have developed software and network technology that allows for near linear scaling—doubling the processors cuts training time roughly in half.
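That near-linear behavior can be summarized with a simple scaling-efficiency ratio. The run times below are hypothetical illustrations, not actual MLPerf submissions:

```python
def scaling_efficiency(n1: int, t1: float, n2: int, t2: float) -> float:
    """Ratio of achieved speedup to ideal linear speedup when
    scaling a training run from n1 to n2 processors."""
    achieved = t1 / t2  # how much faster the larger system finished
    ideal = n2 / n1     # perfectly linear scaling
    return achieved / ideal

# Hypothetical runs: doubling 4,096 GPUs to 8,192 cuts training time
# from 20.0 to 10.4 minutes -- about 96 percent scaling efficiency.
print(round(scaling_efficiency(4096, 20.0, 8192, 10.4), 2))
```

An efficiency near 1.0 means doubling the hardware really does halve the time; values well below 1.0 signal communication overhead eating into the gains.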
This round marked the first training tests for Nvidia’s next GPU architecture, called Blackwell. For the GPT-3 training and LLM fine-tuning, the Blackwell (B200) roughly doubled the performance of the H100 on a per-GPU basis. The gains were a little less robust but still substantial for recommender systems and image generation—64 percent and 62 percent, respectively.
The Blackwell architecture, embodied in the Nvidia B200 GPU, continues an ongoing trend toward using less and less precise numbers to speed up AI. For certain parts of transformer neural networks such as ChatGPT, Llama2, and Stable Diffusion, the Nvidia H100 and H200 use 8-bit floating point numbers. The B200 brings that down to just 4 bits.
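Fewer bits means far fewer representable values. As a rough illustration (not Nvidia’s actual implementation), an E2M1-style 4-bit float encodes only eight magnitudes, so every number must be snapped to the nearest one after scaling:

```python
# Positive magnitudes representable in an E2M1-style 4-bit float;
# a sign bit doubles the set. Real hardware pairs the format with
# per-block scale factors -- the single scale here is a simplification.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_VALUES = sorted({s * m for m in FP4_MAGNITUDES for s in (1.0, -1.0)})

def quantize_fp4(x: float, scale: float = 1.0) -> float:
    """Snap x to the nearest representable FP4 value after scaling."""
    return scale * min(FP4_VALUES, key=lambda v: abs(v - x / scale))

print(quantize_fp4(0.9))        # -> 1.0
print(quantize_fp4(5.2))        # -> 6.0
print(quantize_fp4(100.0, 32))  # -> 96.0
```

The coarseness is the point: 4-bit operands halve memory traffic again versus 8-bit, and careful scaling keeps the rounding error tolerable for parts of the network.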
Google debuts 6th gen hardware
Google showed the first results for its 6th generation of TPU, called Trillium—which it unveiled only last month—and a second round of results for its 5th generation variant, the Cloud TPU v5p. In the 2023 edition, the search giant entered a different variant of the 5th generation TPU, v5e, designed more for efficiency than performance. Versus that v5e entry, Trillium delivers as much as a 3.8-fold performance boost on the GPT-3 training task.
But versus everyone’s arch-rival Nvidia, things weren’t as rosy. A system made up of 6,144 TPU v5ps reached the GPT-3 training checkpoint in 11.77 minutes, placing a distant second to an 11,616-Nvidia H100 system, which accomplished the task in about 3.44 minutes. That top TPU system was only about 25 seconds faster than an H100 computer half its size.
In the closest head-to-head comparison between v5p and Trillium, with each system made up of 2,048 TPUs, the upcoming Trillium shaved a solid 2 minutes off the GPT-3 training time, nearly a 7 percent improvement on v5p’s 29.6 minutes. Another difference between the Trillium and v5p entries is that Trillium is paired with AMD Epyc CPUs instead of the v5p’s Intel Xeons.
Google also trained the image generator, Stable Diffusion, with the Cloud TPU v5p. At 2.6 billion parameters, Stable Diffusion is a light enough lift that MLPerf contestants are asked to train it to convergence instead of just to a checkpoint, as with GPT-3. A 1024 TPU system ranked second, finishing the job in 2 minutes 26 seconds, about a minute behind the same size system made up of Nvidia H100s.
The steep energy cost of training neural networks has long been a source of concern. MLPerf is only beginning to measure this. Dell Technologies was the sole entrant in the energy category, with an eight-server system containing 64 Nvidia H100 GPUs and 16 Intel Xeon Platinum CPUs. The only measurement made was in the LLM fine-tuning task (Llama2 70B). The system consumed 16.4 megajoules during its 5-minute run, for an average power draw of about 55 kilowatts. That works out to about 75 cents of electricity at the average cost in the United States.
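The arithmetic behind those figures is straightforward; the electricity rate below is an assumed U.S. average of roughly 16.5 cents per kilowatt-hour:

```python
energy_j = 16.4e6   # 16.4 megajoules, as reported
run_s = 5 * 60      # 5-minute fine-tuning run

avg_power_kw = energy_j / run_s / 1e3  # average draw over the run
energy_kwh = energy_j / 3.6e6          # joules -> kilowatt-hours
cost_usd = energy_kwh * 0.165          # assumed ~16.5 cents/kWh U.S. average

print(round(avg_power_kw, 1), round(energy_kwh, 2), round(cost_usd, 2))
```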
While it doesn’t say much on its own, the result does potentially provide a ballpark for the power consumption of similar systems. Oracle, for example, reported a close performance result—4 minutes 45 seconds—using the same number and types of CPUs and GPUs.
At 8:30 p.m. on 16 May 1916, John J. Carty banged his gavel at the Engineering Societies Building in New York City to call to order a meeting of the American Institute of Electrical Engineers. This was no ordinary gathering. The AIEE had decided to conduct a live national meeting connecting more than 5,000 attendees in eight cities across four time zones. More than a century before Zoom made virtual meetings a pedestrian experience, telephone lines linked auditoriums from coast to coast. AIEE members and guests in Atlanta, Boston, Chicago, Denver, New York, Philadelphia, Salt Lake City, and San Francisco had telephone receivers at their seats so they could listen in.
The AIEE, a predecessor to the IEEE, orchestrated this event to commemorate recent achievements in communications, transportation, light, and power. The meeting was a triumph of engineering, covered in newspapers in many of the host cities. The Atlanta Constitution heralded it as “a feat never before accomplished in the history of the world.” According to the Philadelphia Evening Ledger, the telephone connections traversed about 6,500 kilometers (about 4,000 miles) across 20 states, strung along more than 150,000 poles and running through 5,000 switches. It’s worth noting that the first transcontinental phone call had been achieved only a year earlier.
Carty, president of the AIEE, led the meeting from New York, while section chairmen directed the proceedings in the other cities. First up: roll call. Each city read off the number of members and guests in attendance—from 40 in Denver, the newest section of the institute, to 1,100 at AIEE headquarters in New York. In all, more than 5,100 members attended.
Due to limited seating in New York and Philadelphia, members were allowed only a single admission ticket, and ladies were explicitly not invited. (Boo.) In Atlanta, Boston, and Chicago, members received two tickets each, and in San Francisco members received three; women were allowed to attend in all of these cities. (The AIEE didn’t admit its first woman until 1922, and only as an associate member; Edith Clarke was the first woman to publish a paper in an AIEE journal, in 1926.)
These six cities were the only ones officially participating in the meeting. But because the telephone lines ran directly through both Denver and Salt Lake City, AIEE sections in those cities opted to listen in, although they were kept muted; during the meeting, they sent telegrams to headquarters with their attendance and greetings. In a modern-day Zoom call, these notes would have been posted in the chat.
The first virtual meeting had breakout sessions
Once everyone had checked in and confirmed that they all could hear, Carty read a telegram from U.S. President Woodrow Wilson, congratulating the members on this unique meeting: “a most interesting evidence of the inventive genius and engineering ability represented by the Institute.”
Alexander Graham Bell then gave a few words in greeting and remarked that he was glad to see how far the telephone had gone beyond his initial idea. Theodore Vail, first president of AT&T and one of the men who was instrumental in establishing telephone service as a public utility, offered his own congratulations. Charles Le Maistre, a British engineer who happened to be in New York to attend the AIEE Standards Committee, spoke on behalf of his country’s engineering societies. Finally, Thomas Watson, who as Bell’s assistant was the first person to hear words spoken over a telephone, welcomed all of the electrical engineers scattered across the country.
At precisely 9:00 p.m., the telephone portion of the meeting was suspended for 30 minutes so that each city could have its own local address by an invited guest. Let’s call them breakout sessions. These speakers reflected on the work and accomplishments of engineers. Overall, they conveyed an unrelentingly positive attitude toward engineering progress, with a few nuances.
In Boston, Lawrence Lowell, president of Harvard University, said the discovery and harnessing of electricity was the greatest single advancement in human history. However, he admonished engineers for failing to foresee the subordination of the individual to the factory system.
In Philadelphia, Edgar Smith, provost of the University of Pennsylvania, noted that World War I was limiting the availability of certain materials and supplies, and he urged more investment in developing the United States’ natural resources.
Charles Ferris, dean of engineering at the University of Tennessee, praised the development of long-distance power distribution and the positive effects it had on rural life, but worried about the use of fossil fuels. His chief concern was running out of coal, gas, and oil, not their negative impacts on the environment.
On the West Coast, Ray Wilbur, president of Stanford, argued for the value of dissatisfaction, struggle, and unrest on campus as spurs to growth and innovation. I suspect many university presidents then and now would disagree, but student protests remain a force for change.
After the city breakout sessions, everyone reconnected by telephone, and the host cities took turns calling out their greetings, along with some engineering boasts.
“Atlanta, located in the Piedmont section of the southern Appalachians, among their racing rivers and roaring falls, whose energy has been dragged forth and laid at her doors through high-tension transmission and in whose phenomenal development no factor has been more potent than the electrical engineers, sends greetings.”
“Boston sends warmest greetings to her sister cities. The telephone was born here and here it first spoke, but its sound has gone out into all lands and its words unto the ends of the world.”
“San Francisco hails its fellow members of the Institute…. California has by the pioneer spirit of domination created needs which the world has followed—the snow-crowned Sierras opened up the path of gold to the path of energy, which tonight makes it possible for us on the western rim of the continent of peace to be in instant touch with men who have harnessed rivers, bridled precipices, drawn from the ether that silent and unseen energy that has leveled distance and created force to move the world along lines of greater civilization by closer contacts.”
That last sentence, my editor notes, is 86 words long, but we included it for its sheer exuberance.
Maybe all tech meetings should have musical interludes
The meeting then paused for a musical interlude. I find this idea delightfully weird, like the ballet dream sequence in the middle of the Broadway musical Oklahoma! Each city played a song of its choosing on a phonograph, to be transmitted through the telephone. From the South came strains of “Dixie,” countered by “Yankee Doodle” in New England. New York and San Francisco opted for two variations on the patriotic symbolism of Columbia: “Hail Columbia” and “Columbia the Gem of the Ocean,” respectively. Philadelphia offered up the “Star-Spangled Banner,” and although it wasn’t yet the national anthem, audience members in all auditoriums stood while it played.
For the record, the AIEE in those days took entertainment very seriously. Almost all of their conferences included a formal dinner dance, less-formal smokers, sporting competitions, and inspection field trips to local sites of engineering interest. There were even women’s committees to organize events specifically for the ladies.
After the music, Michael Pupin delivered an address on “The Engineering Profession,” a topic that was commonly discussed in the Proceedings of the AIEE in those days. Remember that electrical engineering was still a fairly new academic discipline, only a few decades old, and working engineers were looking to more established professions, such as medical doctors, to see how they might fit into society. Pupin had made a number of advancements in the efficiency of transmission over long-distance telephone, and in 1925 he served as the president of the AIEE.
The meeting concluded with resolutions, amendments, acceptances, and seconding, following Robert’s Rules of Order. (IEEE meetings still adhere to the rules.) In the last resolution, the participants patted themselves on the back for hosting this first-of-its-kind meeting and acknowledged the collective genius that made it possible.
The Proceedings of the AIEE covered the meeting in great detail. Local press accounts offered less detail. I’ve found no evidence that they ever tried to replicate the meeting. They did try another experiment in which a member read the same paper at meetings in three different cities so that there could be a joint discussion about the contents. But it seems they returned to their normal schedule of annual and section meetings with technical paper sessions and discussion.
And nowhere have I found answers to some of the basic questions that I, as a historian 100 years later, have about the 1916 event. First, how much did this meeting cost in long-distance fees and who paid for it? Second, what receivers did the audience members use and did they work? And finally, what did the members and guests think of this grand experiment? (My editor would also like to know why no one took a photo of the event.)
But in the moment, rarely do people think about what later historians may want to know. And I suspect no one in attendance would have predicted that in the 21st century, people groan at the thought of another virtual meeting.
The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity.
This article features IEEE Board of Directors members ChunChe “Lance” Fung, Eric Grigorian, and Christina Schober.
IEEE Senior Member ChunChe “Lance” Fung
Director, Region 10: Asia Pacific
Fung has worked in academia and provided industry consultancy services for more than 40 years. His research interests include applying artificial intelligence, machine learning, computational intelligence, and other techniques to solve practical problems. He has authored more than 400 publications in the disciplines of AI, computational intelligence, and related applications. Fung currently works on the ethical applications and social impacts of AI.
As chair of the IEEE New Initiatives Committee, he established and promoted the US $1 Million Challenge Call for New Initiatives, which supports potential IEEE programs, services, or products that will significantly benefit members, the public, the technical community, or customers and could have a lasting impact on IEEE or its business processes.
Fung has left an indelible mark as a dedicated educator at Singapore Polytechnic, Curtin University, and Murdoch University. He was appointed in 2015 as professor emeritus at Murdoch, and he takes pride in training the next generation of volunteers, leaders, teachers, and researchers in the Western Australian community. Fung received the IEEE Third Millennium Medal and the IEEE Region 10 Outstanding Volunteer Award.
IEEE Senior Member Eric Grigorian
Director, Region 3: Southern U.S. & Jamaica
Grigorian has extensive experience leading international cross-domain teams that support the commercial and defense industries. His current research focuses on implementing model-based systems engineering, creating models that depict system behavior, interfaces, and architecture. His work has led to streamlined processes, reduced costs, and faster design and implementation of capabilities due to efficient modeling and verification. Grigorian holds two U.S. utility patents.
Grigorian has been an active volunteer with IEEE since his time as a student member at the University of Alabama in Huntsville (UAH). He saw it as an excellent way to network and get to know people. He found his personality was suited for working within the organization and building leadership skills. During the past 43 years as an IEEE member, he has been affiliated with the IEEE Aerospace and Electronic Systems (AESS), IEEE Computer, and IEEE Communications societies.
As Grigorian’s career has evolved, his involvement with IEEE has also increased. He has been the IEEE Huntsville Section student activities chair, as well as vice chair and chair. He also was the section’s AESS chair. He served as IEEE SoutheastCon chair in 2008 and 2019, and served on the IEEE Region 3 executive committee as area chair and conference committee chair, enhancing IEEE members’ benefits, engagement, and career advancement. He has significantly contributed to initiatives within IEEE, including promoting preuniversity science, technology, engineering, and mathematics efforts in Alabama.
Schober is an innovative engineer with a diverse design and manufacturing engineering background. With more than 40 years of experience, her career has spanned the research, design, and manufacture of sensors for space, commercial, and military aircraft navigation and tactical guidance systems. She was responsible for the successful transition from design to production for groundbreaking programs including an integrated flight management system, the Stinger missile’s roll frequency sensor, and the design of three phases of the DARPA atomic clock. She holds 17 U.S. patents and 24 other patents in the aerospace and navigation fields.
Schober started her career in the 1980s, at a time when female engineers were not widely accepted. The prevailing attitude required her to “stay tough,” she says, and she credits IEEE for giving her technical and professional support. Because of her experiences, she became dedicated to making diversity and inclusion systemic in IEEE.
In physical books, yellowing pages are usually a sign of age. But users of Amazon’s brand-new Kindle Colorsoft, the tech giant’s first color e-reader, are already noticing yellow hues appearing at the bottoms of their displays.
Since the complaints began to trickle in, Amazon has reportedly suspended shipments and announced that it is working to fix the issue. (As of publication of this article, the US $280 Kindle had an average 2.6 star rating on Amazon.) It’s not yet clear what is causing the discoloration. But while the issue is new—and unexpected—the technology is not, says Jason Heikenfeld, an IEEE Fellow and engineering professor at the University of Cincinnati. The Kindle Colorsoft, which became available on 30 October, uses “a very old approach,” says Heikenfeld, who previously worked to develop the ultimate e-paper technology. “It was the first approach everybody tried.”
Amazon’s e-reader uses reflective display technology developed by E Ink, a company that started in the 1990s as an MIT Media Lab spin-off before developing its now-dominant electronic paper displays. E Ink is used in Kindles, as well as top e-readers from Kobo, reMarkable, Onyx, and more. E Ink first introduced Kaleido—the basis of the Colorsoft’s display—five years ago, though the road to full-color e-paper started well before.
How E-Readers Work
Monochromatic Kindles work by applying voltages to electrodes in the screen that bring black or white pigment to the top of each pixel. Those pixels then reflect ambient light, creating a paperlike display. To create a full-color display, companies like E Ink added an array of filters just above the ink. This approach didn’t work well at first because the filters lost too much light, making the displays dark and low resolution. But with a few adjustments, Kaleido was ready for consumer products in 2019. (Other approaches—like adding colored pigments to the ink—have been developed, but these come with their own drawbacks, including a higher price tag.)
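A rough model shows why filtered reflective color is dim: each subpixel’s filter passes only about a third of the ambient spectrum, and light crosses the filter twice (on the way in and on the way out). The numbers below are illustrative assumptions, not E Ink specifications:

```python
base_reflectance = 0.44  # assumed white-state reflectance of monochrome e-paper
band_fraction = 1 / 3    # each RGB subpixel reflects only its own spectral band
in_band_pass = 0.9       # assumed in-band filter transmittance, per pass

# Ambient light passes through the filter twice: entering and exiting.
white_reflectance = base_reflectance * band_fraction * in_band_pass**2
print(round(white_reflectance, 3))  # roughly a quarter of the mono "white"
```

Under these assumptions, the filtered display reflects only about 12 percent of ambient light in its white state, which is why early filter-based color e-paper looked so dark.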
Given this design, it initially seemed to Heikenfeld that the issue would have stemmed from the software, which determines the voltages applied to each electrode. This aligned with reports from some users that the issue appeared after a software update.
But industry analyst Ming-Chi Kuo suggested in a post on X that the issue is due to the e-reader’s hardware: Amazon switched the optically clear adhesive (OCA) used in the Colorsoft to a material that may not be so optically clear. In its announcement of the Colorsoft, the company boasted “custom formulated coatings” that would enhance the color display as one of the new e-reader’s innovations.
In terms of resolving the issue, Kuo’s post also stated that “While component suppliers have developed several hardware solutions, Amazon seems to be leaning toward a software-based fix.” Heikenfeld is not sure how a software fix would work, apart from blacking out the bottom of the screen.
Amazon did not reply to IEEE Spectrum’s request for comment. In an email to IEEE Spectrum, E Ink stated, “While we cannot comment on any individual partner or product, we are committed to supporting our partners in understanding and addressing any issues that arise.”
The Future of E-Readers
It took a long time for color Kindles to arrive, and the future of reflective e-reader displays isn’t likely to improve much, according to Heikenfeld. “I used to work a lot in this field, and it just really slowed down at some point, because it’s a tough nut to crack,” Heikenfeld says.
There are inherent limitations and inefficiencies to working with filter-based color displays that rely on ambient light, and there’s no Moore’s Law for these displays. Instead, their improvement is asymptotic—and we may already be close to the limit. Meanwhile, displays that emit light, like LCD and OLED, continue to improve. “An iPad does a pretty damn good job with battery life now,” says Heikenfeld.
At the same time, he believes there will always be a place for reflective displays, which remain a more natural experience for our eyes. “We live in a world of reflective color,” Heikenfeld says.
This story was updated on 12 November 2024 to correct that Jason Heikenfeld is an IEEE Fellow.
Waiting for each part of a 3D-printed project to finish, taking it out of the printer, and then installing it on location can be tedious for multi-part projects. What if there was a way for your printer to print its creation exactly where you needed it? That’s the promise of MobiPrint, a new 3D printing robot that can move around a room, printing designs directly onto the floor.
MobiPrint, designed by Daniel Campos Zamora at the University of Washington, consists of a modified off-the-shelf 3D printer atop a home vacuum robot. First it autonomously maps its space—be it a room, a hallway, or an entire floor of a house. Users can then choose from a prebuilt library or upload their own design to be printed anywhere in the mapped area. The robot then traverses the room and prints the design.
It’s “a new system that combines robotics and 3D printing that could actually go and print in the real world,” Campos Zamora says. He presented MobiPrint on 15 October at the ACM Symposium on User Interface Software and Technology.
Campos Zamora and his team started with a Roborock S5 vacuum robot and installed firmware that allowed it to communicate with the open source program Valetudo. Valetudo disconnects personal robots from their manufacturer’s cloud, connecting them to a local server instead. Data collected by the robot, such as environmental mapping, movement tracking, and path planning, can all be observed locally, enabling users to see the robot’s LIDAR-created map.
Campos Zamora built a layer of software that connects the robot’s perception of its environment to the 3D printer’s print commands. The printer, a modified Prusa Mini+, can print on carpet, hardwood, and vinyl, with maximum printing dimensions of 180 by 180 by 65 millimeters. The robot has printed pet food bowls, signage, and accessibility markers as sample objects.
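One core job of that software layer is translating a target point on the room map into the printer’s bed coordinates once the robot has parked. A minimal sketch of that transform follows; the function and frame names are illustrative, not MobiPrint’s actual API:

```python
import math

BED_MM = (180, 180)  # Prusa Mini+ print area per the article (x, y)

def to_bed_coords(target_mm, robot_mm, robot_theta, bed_origin_mm=(90, 90)):
    """Transform a target point from the robot's map frame into
    printer-bed coordinates, given the robot's parked pose.
    Illustrative sketch -- not MobiPrint's real interface."""
    dx = target_mm[0] - robot_mm[0]
    dy = target_mm[1] - robot_mm[1]
    # Rotate the world-frame offset into the robot/bed frame.
    c, s = math.cos(-robot_theta), math.sin(-robot_theta)
    bx = bed_origin_mm[0] + c * dx - s * dy
    by = bed_origin_mm[1] + s * dx + c * dy
    if not (0 <= bx <= BED_MM[0] and 0 <= by <= BED_MM[1]):
        raise ValueError("target outside printable area; re-park the robot")
    return bx, by
```

The bounds check reflects the “park and print” limitation: anything that falls outside the 180-by-180-millimeter bed requires moving the robot to a new parking pose.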
Currently, MobiPrint can only “park and print.” The robot base cannot move during printing to make large objects, like a mobility ramp. Printing designs larger than the robot is one of Campos Zamora’s goals in the future. To learn more about the team’s vision for MobiPrint, Campos Zamora answered a few questions from IEEE Spectrum.
What was the inspiration for creating your mobile 3D printer?
Daniel Campos Zamora: My lab is focused on building systems with an eye towards accessibility. One of the things that really inspired this project was looking at the tactile surface indicators that help blind and low vision users find their way around a space. And so we were like, what if we made something that could automatically go and deploy these things? Especially in indoor environments, which are generally a little trickier and change more frequently over time.
We had to step back and build this entirely different thing, using the environment as a design element. We asked: how do you integrate the real world environment into the design process, and then what kind of things can you print out in the world? That’s how this printer was born.
What were some surprising moments in your design process?
Campos Zamora: When I was testing the robot on different surfaces, I was not expecting the 3D printed designs to stick extremely well to the carpet. It stuck way too well. Like, you know, just completely bonded down there.
I think there’s also just a lot of joy in seeing this printer move. When I was doing a demonstration of it at this conference last week, it almost seemed like the robot had a personality. A vacuum robot can seem to have a personality, but this printer can actually make objects in my environment, so I feel a different relationship to the machine.
Where do you hope to take MobiPrint in the future?
Campos Zamora: There’s several directions I think we could go. Instead of controlling the robot remotely, we could have it follow someone around and print accessibility markers along a path they walk. Or we could integrate an AI system that recommends objects be printed in different locations. I also want to explore having the robot remove and recycle the objects it prints.
Finished chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are particularly extensive and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?
Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out the subset of tests that are really needed and those that they could safely do without. The NXP engineers described the process at the IEEE International Test Conference in San Diego last week.
NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.
“We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual production and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”
A Test Recommender System
Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier and instead of the items that a consumer would purchase, we have a list of failing tests.”
The NXP algorithm then discovered which tests fail together. Of course, what’s at stake for whether a purchaser of bread will want to buy butter is quite different from whether a test of an automotive part at a particular temperature means other tests don’t need to be done. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
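In simplified form, the idea can be sketched as a co-failure analysis over a parts-by-tests failure matrix: a test becomes a removal candidate only if every part that fails it also fails some retained test. This toy version deliberately omits the statistical rigor (and the engineering review) that NXP applies:

```python
def redundant_tests(fail: list[list[bool]]) -> list[int]:
    """fail[p][t] is True when part p failed test t.
    Return indices of tests whose failures are always accompanied
    by a failure of some retained test."""
    n_tests = len(fail[0])
    failing = [{p for p, row in enumerate(fail) if row[t]} for t in range(n_tests)]
    redundant: list[int] = []
    for t in range(n_tests):
        for u in range(n_tests):
            if u == t or u in redundant or not failing[t]:
                continue
            if failing[t] <= failing[u]:  # t's failures implied by u's
                redundant.append(t)
                break
    return redundant

# Three parts, three tests: every part that fails test 0 or test 2
# also fails test 1, so tests 0 and 2 are candidates for removal.
fails = [[True, True, False],
         [False, True, True],
         [False, True, False]]
print(redundant_tests(fails))  # -> [0, 2]
```

A candidate is only dropped after an engineer confirms the co-failure makes physical sense, mirroring the caution Shroff describes below.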
As rigorous as the results are, Shroff says that they shouldn’t be relied upon on their own. You have to “make sure it makes sense from engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”
Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.
The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.
“Any novel solution that helps in test-time savings without any quality hit is valuable,” says Sriharsha Vinjamury, a principal engineer at Arm. “Reducing test time is essential, as it reduces costs.” He suggests that the NXP algorithm could be integrated with a system that adjusts the order of tests, so that failures could be spotted earlier.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Researchers at Meta FAIR are releasing several new research artifacts that advance robotics and support our goal of reaching advanced machine intelligence (AMI). These include Meta Sparsh, the first general-purpose encoder for vision-based tactile sensing that works across many tactile sensors and many tasks; Meta Digit 360, an artificial fingertip-based tactile sensor that delivers detailed touch data with human-level precision and touch-sensing; and Meta Digit Plexus, a standardized platform for robotic sensor connections and interactions that enables seamless data collection, control and analysis over a single cable.
The first bimanual Torso created at Clone includes an actuated elbow, cervical spine (neck), and anthropomorphic shoulders with the sternoclavicular, acromioclavicular, scapulothoracic and glenohumeral joints. The valve matrix fits compactly inside the ribcage. Bimanual manipulation training is in progress.
Equipped with a new behavior architecture, Nadia navigates and traverses many types of doors autonomously. Nadia also demonstrates robustness to failed grasps and door opening attempts by automatically retrying and continuing. We present the robot with pull and push doors, four types of opening mechanisms, and even spring-loaded door closers. A deep neural network and door plane estimator allow Nadia to identify and track the doors.
In this study, we integrate the musculoskeletal humanoid Musashi with the wire-driven robot CubiX, capable of connecting to the environment, to form CubiXMusashi. This combination addresses the shortcomings of traditional musculoskeletal humanoids and enables movements beyond the capabilities of other humanoids. CubiXMusashi connects to the environment with wires and drives by winding them, successfully achieving movements such as pull-up, rising from a lying pose, and mid-air kicking, which are difficult for Musashi alone.
[ CubiXMusashi, JSK Robotics Laboratory, University of Tokyo ]
Thanks, Shintaro!
An old boardwalk seems like a nightmare for any robot with flat feet.
This paper presents a novel learning-based control framework that uses keyframing to incorporate high-level objectives in natural locomotion for legged robots. These high-level objectives are specified as a variable number of partial or complete pose targets that are spaced arbitrarily in time. Our proposed framework utilizes a multi-critic reinforcement learning algorithm to effectively handle the mixture of dense and sparse rewards. In the experiments, the multi-critic method significantly reduces the effort of hyperparameter tuning compared to the standard single-critic alternative. Moreover, the proposed transformer-based architecture enables robots to anticipate future goals, which results in quantitative improvements in their ability to reach their targets.
We present the first static-obstacle avoidance method for quadrotors using just an onboard, monocular event camera. Quadrotors are capable of fast and agile flight in cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult in part due to the sensor limitations of traditional onboard cameras. Event cameras, however, promise nearly zero motion blur and high dynamic range, but produce a large volume of events under significant ego-motion and further lack a continuous-time sensor model in simulation, making direct sim-to-real transfer not possible.
[ Paper, University of Pennsylvania and University of Zurich ]
Cross-embodiment imitation learning enables policies trained on specific embodiments to transfer across different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This paper presents LEGATO, a cross-embodiment imitation learning framework for visuomotor skill transfer across varied kinematic morphologies. We introduce a handheld gripper that unifies action and observation spaces, allowing tasks to be defined consistently across robots.
The 2024 Xi’an Marathon has kicked off! STAR1, the general-purpose humanoid robot from Robot Era, joins runners in this ancient yet modern city for an exciting start!
In robotics, there are valuable lessons for students and mentors alike. Watch how the CyberKnights, a FIRST robotics team champion sponsored by RTX, with the encouragement of their RTX mentor, faced challenges after a poor performance and scrapped its robot to build a new one in just nine days.
In this special video, PAL Robotics takes you behind the scenes of our 20th-anniversary celebration, a memorable gathering with industry leaders and visionaries from across robotics and technology. From inspiring speeches to milestone highlights, the event was a testament to our journey and the incredible partnerships that have shaped our path.
In 6G telecom research today, a crucial portion of wireless spectrum has been neglected: the Frequency Range 3, or FR3, band. The shortcoming is partly due to a lack of viable software and hardware platforms for studying this region of spectrum, ranging from approximately 6 to 24 gigahertz. But a new, open-source wireless research kit is changing that equation. And research conducted using that kit, presented last week at a leading industry conference, offers proof of viability of this spectrum band for future 6G networks.
In fact, the results arguably signal a moment of telecom industry re-evaluation. The high-bandwidth 6G future, according to these researchers, may not be entirely centered on difficult millimeter-wave technologies. Instead, 6G may leave plenty of room for higher-bandwidth microwave spectrum tech that is ultimately more familiar and accessible.
The FR3 band is a region of microwave spectrum just shy of millimeter-wave frequencies (30 to 300 GHz). FR3 is also already very popular today for satellite Internet and military communications. For future 5G and 6G networks to share the FR3 band with incumbent players would require telecom networks nimble enough to perform regular, rapid-response spectrum-hopping.
Yet spectrum-hopping might still be an easier problem to solve than those posed by the inherent physical shortcomings of some portions of millimeter-wave spectrum—shortcomings that include limited range, poor penetration, line-of-sight operations, higher power requirements, and susceptibility to weather.
Pi-Radio’s New Face
Earlier this year, the Brooklyn, N.Y.-based startup Pi-Radio—a spinoff from New York University’s Tandon School of Engineering—released a wireless spectrum hardware and software kit for telecom research and development. Pi-Radio’s FR-3 is a software-defined radio system developed for the FR3 band specifically, says company co-founder Sundeep Rangan.
“Software-defined radio is basically a programmable platform to experiment and build any type of wireless technology,” says Rangan, who is also the associate director of NYU Wireless. “In the early stages when developing systems, all researchers need these.”
According to Pi-Radio co-founder Marco Mezzavilla, who’s also an associate professor at the Polytechnic University of Milan, the early-stage FR3 research that the team presented at Asilomar will enable researchers “to capture [signal] propagation in these frequencies and will allow us to characterize it, understand it, and model it... And this is the first stepping stone towards designing future wireless systems at these frequencies.”
There’s a good reason researchers have recently rediscovered FR3, says Paolo Testolina, postdoctoral research fellow at Northeastern University’s Institute for the Wireless Internet of Things, who is unaffiliated with the current research effort. “The current scarcity of spectrum for communications is driving operators and researchers to look in this band, where they believe it is possible to coexist with the current incumbents,” he says. “Spectrum sharing will be key in this band.”
Rangan notes that the work on which Pi-Radio was built was published earlier this year, covering both the foundational aspects of building networks in the FR3 band and the specific implementation of Pi-Radio’s unique, frequency-hopping research platform for future wireless networks. (Both papers were published in IEEE journals.)
“If you have frequency hopping, that means you can get systems that are resilient to blockage,” Rangan says. “But even, potentially, if it was attacked or compromised in any other way, this could actually open up a new type of dimension that we typically haven’t had in the cellular infrastructure.” The frequency-hopping that FR3 requires for wireless communications, in other words, could introduce a layer of hack-proofing that might potentially strengthen the overall network.
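The resilience Rangan describes comes from both ends of a link deriving the same pseudorandom channel schedule, so a blocked or jammed frequency costs at most one time slot. Here is a minimal sketch of the idea; the ten evenly spaced FR3 channels and the shared seed (standing in for a real key exchange) are illustrative assumptions, not details of Pi-Radio’s system:

```python
import random

# FR3 spans roughly 6 to 24 GHz; assume ten illustrative channels.
CHANNELS_GHZ = [round(6.0 + 2.0 * i, 1) for i in range(10)]

def hop_sequence(shared_seed, n_slots):
    """Pseudorandom channel schedule derived from a seed both ends share.
    A link that hops this way never dwells on one frequency, so a blocked
    or jammed channel disrupts at most a single slot before the next hop."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_GHZ) for _ in range(n_slots)]

tx_schedule = hop_sequence("link-42", 8)
rx_schedule = hop_sequence("link-42", 8)
assert tx_schedule == rx_schedule  # both ends derive the identical schedule
```

An eavesdropper or jammer without the seed cannot predict the next channel, which is the “new type of dimension” of resilience the quote alludes to.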
Complement, Not Replacement
The Pi-Radio team stresses, however, that FR3 would not supplant or supersede other new segments of wireless spectrum. There are, for instance, millimeter-wave 5G deployments already underway today that will no doubt expand in scope and performance into the 6G future. That said, how FR3 will expand future 5G and 6G spectrum usage is an entirely unwritten chapter: Whether FR3 as a wireless spectrum band fizzles, takes off, or finds a comfortable place somewhere in between depends in part on how it’s researched and developed now, the Pi-Radio team says.
“We’re at this tipping point where researchers and academics actually are empowered by the combination of this cutting-edge hardware with open-source software,” Mezzavilla says. “And that will enable the testing of new features for communications in these new frequency bands.” (Mezzavilla credits the National Telecommunications and Information Administration for recognizing the potential of FR3, and for funding the group’s research.)
By contrast, millimeter-wave 5G and 6G research has to date been bolstered, the team says, by the presence of a wide range of millimeter-wave software-defined radio (SDR) systems and other research platforms.
“Companies like Qualcomm, Samsung, Nokia, they actually had excellent millimeter wave development platforms,” Rangan says. “But they were in-house. And the effort it took to build one—an SDR at a university lab—was sort of insurmountable.”
So releasing an inexpensive, open-source SDR in the FR3 band, Mezzavilla says, could jump-start a whole new wave of 6G research.
“This is just the starting point,” Mezzavilla says. “From now on we’re going to build new features—new reference signals, new radio resource control signals, near-field operations... We’re ready to ship these yellow boxes to other academics around the world to test new features and test them quickly, before 6G is even remotely near us.”
This story was updated on 7 November 2024 to include detail about funding from the National Telecommunications and Information Administration.
Azerbaijan next week will garner much of the attention of the climate tech world, and not just because it will host COP29, the United Nations’ giant annual climate change conference. The country is promoting a grand, multi-nation plan to generate renewable electricity in the Caucasus region and send it thousands of kilometers west, under the Black Sea, and into energy-hungry Europe.
The transcontinental connection would start with wind, solar, and hydropower generated in Azerbaijan and Georgia, and off-shore wind power generated in the Caspian Sea. Long-distance lines would carry up to 1.5 gigawatts of clean electricity to Anaklia, Georgia, at the east end of the Black Sea. An undersea cable would move the electricity across the Black Sea and deliver it to Constanta, Romania, where it could be distributed further into Europe.
The scheme’s proponents say this Caspian-Black Sea energy corridor will help decrease global carbon emissions, provide dependable power to Europe, modernize developing economies at Europe’s periphery, and stabilize a region shaken by war. Organizers hope to build the undersea cable within the next six years at an estimated cost of €3.5 billion (US $3.8 billion).
To accomplish this, the governments of the involved countries must quickly overcome a series of technical, financial, and political obstacles. “It’s a huge project,” says Zviad Gachechiladze, a director at Georgian State Electrosystem, the agency that operates the country’s electrical grid, and one of the architects of the Caucasus green-energy corridor. “To put it in operation [by 2030]—that’s quite ambitious, even optimistic,” he says.
Black Sea Cable to Link Caucasus and Europe
The technical linchpin of the plan is the successful construction of a high-voltage direct-current (HVDC) submarine cable in the Black Sea. It’s a formidable task, considering that the cable would stretch across nearly 1,200 kilometers of water, most of it more than 2 km deep and, since Russia’s invasion of Ukraine, littered with floating mines. By contrast, the longest existing submarine power cable—the North Sea Link—carries 1.4 GW across 720 km between England and Norway, at depths of up to 700 meters.
As ambitious as Azerbaijan’s plans sound, longer undersea connections have been proposed. The Australia-Asia PowerLink project aims to produce 6 GW at a vast solar farm in Northern Australia and send about a third of it to Singapore via a 4,300-km undersea cable. The Morocco-U.K. Power Project would send 3.6 GW over 3,800 km from Morocco to England. A similar attempt by Desertec to send electricity from North Africa to Europe ultimately failed.
Building such cables involves laying and stitching together lengths of heavy submarine power cables from specialized ships—the expertise for which lies with just two companies in the world. In an assessment of the Black Sea project’s feasibility, the Milan-based consulting and engineering firm CESI determined that the undersea cable could indeed be built, and estimated that it could carry up to 1.5 GW—enough to supply over 2 million European households.
But to fill that pipe, countries in the Caucasus region would have to generate much more green electricity. For Georgia, that will mostly come from hydropower, which already generates over 80 percent of the nation’s electricity. “We are a hydro country. We have a lot of untapped hydro potential,” says Gachechiladze.
Azerbaijan and Georgia Plan Green Energy Corridor
Generating hydropower can also generate opposition, because of the way dams alter rivers and landscapes. “There were some cases when investors were not able to construct power plants because of opposition of locals or green parties” in Georgia, says Salome Janelidze, a board member at the Energy Training Center, a Georgian government agency that promotes and educates around the country’s energy sector.
“It was definitely a problem and it has not been totally solved,” says Janelidze. But “to me it seems it is doable,” she says. “You can procure and construct if you work closely with the local population and see them as allies rather than adversaries.”
CESI is currently running a second study to gauge the practicality of the full breadth of the proposed energy corridor—from the Caspian Sea to Europe—with a transmission capacity of 4 to 6 GW. But that beefier interconnection will likely remain out of reach in the near term. “By 2030, we can’t claim our region will provide 4 GW or 6 GW,” says Gachechiladze. “1.3 is realistic.”
COP29: Azerbaijan’s Renewable Energy Push
Signs of political support have surfaced. In September, Azerbaijan, Georgia, Romania, and Hungary created a joint venture, based in Romania, to shepherd the project. Those four countries in 2022 inked a memorandum of understanding with the European Union to develop the energy corridor.
The involved countries are in the process of applying for the cable to be selected as an EU “project of mutual interest,” making it an infrastructure priority for connecting the union with its neighbors. If selected, “the project could qualify for 50 percent grant financing,” says Gachechiladze. “It’s a huge budget. It will improve drastically the financial condition of the project.” The commissioner responsible for EU enlargement policy projected that the union would pay an estimated €2.3 billion ($2.5 billion) toward building the cable.
Whether next week’s COP29, held in Baku, Azerbaijan, will help move the plan forward remains to be seen. In preparation for the conference, advocates of the energy corridor have been taking international journalists on tours of the country’s energy infrastructure.
Looming over the project are security issues that threaten to thwart it. Shipping routes in the Black Sea have become less dependable and safe since Russia’s invasion of Ukraine. To the south, tensions between Armenia and Azerbaijan remain after the recent war and ethnic violence.
In order to improve relations, many advocates of the energy corridor would like to include Armenia. “The cable project is in the interests of Georgia, it’s in the interests of Armenia, it’s in the interests of Azerbaijan,” says Agha Bayramov, an energy geopolitics researcher at the University of Groningen, in the Netherlands. “It might increase the chance of them living peacefully together. Maybe they’ll say, ‘We’re responsible for European energy. Let’s put our egos aside.’”
EPICS in IEEE, a service learning program for university students supported by IEEE Educational Activities, offers students opportunities to engage with engineering professionals and mentors, local organizations, and technological innovation to address community-based issues.
The following two environmentally focused projects demonstrate the value of teamwork and direct involvement with project stakeholders. One uses smart biodigesters to better manage waste in Colombia’s rural areas. The other is focused on helping Turkish olive farmers protect their trees from climate change effects by providing them with a warning system that can identify growing problems.
No time to waste in rural Colombia
Proper waste management is critical to a community’s living conditions. In rural La Vega, Colombia, the lack of an effective system has led to contaminated soil and water, an especially concerning issue because the town’s economy relies heavily on agriculture.
Vivian Estefanía Beltrán, a Ph.D. student at the Universidad del Rosario in Bogotá, addressed the problem by building a low-cost anaerobic digester, in which microorganisms break down biodegradable material, outfitted with an instrumentation system to monitor the process. It reduces the amount of solid waste, and the digesters can produce biogas, which can be used to generate electricity.
“Anaerobic digestion is a natural biological process that converts organic matter into two valuable products: biogas and nutrient-rich soil amendments in the form of digestate,” Beltrán says. “As a by-product of our digester’s operation, digestate is organic matter that can’t be transferred into biogas but can be used as a soil amendment for our farmers’ crops, such as coffee.
“While it may sound easy, the process is influenced by a lot of variables. The support we’ve received from EPICS in IEEE is important because it enables us to measure these variables, such as pH levels, temperature of the reactor, and biogas composition [methane and hydrogen sulfide]. The system allows us to make informed decisions that enhance the safety, quality, and efficiency of the process for the benefit of the community.”
“It’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community.” —Vivian Estefanía Beltrán
Beltrán worked closely with eight undergraduate students and three instructors—Maria Fernanda Gómez, Andrés Pérez Gordillo (the instrumentation group leader), and Carlos Felipe Vergara-Ramirez—as well as IEEE Graduate Student Member Nicolás Castiblanco (the instrumentation group coordinator).
The team constructed and installed their anaerobic digester system in an experimental station in La Vega, a town located roughly 53 kilometers northwest of Bogotá.
“This digester is an important innovation for the residents of La Vega, as it will hopefully offer a productive way to utilize the residual biomass they produce to improve quality of life and boost the economy,” Beltrán says. Soon, she adds, the system will be expanded to incorporate high-tech sensors that automatically monitor biogas production and the digestion process.
“For our students and team members, it’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community. It enables all of us to apply our classroom skills to reality,” she says. “The funding we’ve received from EPICS in IEEE has been crucial to designing, proving, and installing the system.”
The project also aims to support the development of a circular economy, which reuses materials to enhance the community’s sustainability and self-sufficiency.
Protecting olive groves in Türkiye
Türkiye is one of the world’s leading producers of olives, but the industry has been challenged in recent years by unprecedented floods, droughts, and other destructive forces of nature resulting from climate change. To help farmers in the western part of the country monitor the health of their olive trees, a team of students from Istanbul Technical University developed an early-warning system to identify irregularities including abnormal growth.
“Our system will give farmers feedback from each tree so that actions can be taken in advance to improve the yield,” says Akgül, an IEEE senior member and a professor in the university’s electronics and communication engineering department.
“We’re developing deep-learning techniques to detect changes in olive trees and their fruit so that farmers and landowners can take all necessary measures to avoid a low or damaged harvest,” says project coordinator Melike Girgin, a Ph.D. student at the university and an IEEE graduate student member.
Using drones outfitted with 360-degree optical and thermal cameras, the team collects optical, thermal, and hyperspectral imaging data through aerial methods. The information is fed into a cloud-based, open-source database system.
Akgül leads the project and teaches the team skills including signal and image processing and data collection. He says regular communication with community-based stakeholders has been critical to the project’s success.
“There are several farmers in the village who have helped us direct our drone activities to the right locations,” he says. “Their involvement in the project has been instrumental in helping us refine our process for greater effectiveness.
“For students, classroom instruction is straightforward, then they take an exam at the end. But through our EPICS project, students are continuously interacting with farmers in a hands-on, practical way and can see the results of their efforts in real time.”
Looking ahead, the team is excited about expanding the project to encompass other fruits besides olives. The team also intends to apply for a travel grant from IEEE in hopes of presenting its work at a conference.
“We’re so grateful to EPICS in IEEE for this opportunity,” Girgin says. “Our project and some of the technology we required wouldn’t have been possible without the funding we received.”
“Technical projects play a crucial role in advancing innovation and ensuring interoperability across various industries,” says Munir Mohammed, IEEE SA senior manager of product development and market engagement. “These projects not only align with our technical standards but also drive technological progress, enhance global collaboration, and ultimately improve the quality of life for communities worldwide.”
For more information on the program or to participate in service-learning projects, visit EPICS in IEEE.
On 7 November, this article was updated from an earlier version.
Last week the organization tasked with running the biggest chunk of the U.S. CHIPS Act’s US $13 billion R&D program made some significant strides: The National Semiconductor Technology Center (NSTC) selected the sites of two of its three planned facilities and released a new strategic plan. The locations of the two sites—a “design and collaboration” center in Sunnyvale, Calif., and a lab devoted to advancing the leading edge of chipmaking, in Albany, N.Y.—build on an existing ecosystem at each location, experts say. The location of the third planned center—a chip prototyping and packaging site that could be especially critical for speeding semiconductor startups—is still a matter of speculation.
“The NSTC represents a once-in-a-generation opportunity for the U.S. to accelerate the pace of innovation in semiconductor technology,” Deirdre Hanford, CEO of Natcast, the nonprofit that runs the NSTC centers, said in a statement. According to the strategic plan, which covers 2025 to 2027, the NSTC is meant to accomplish three goals: extend U.S. technology leadership, reduce the time and cost to prototype, and build and sustain a semiconductor workforce development ecosystem. The three centers are meant to do a mix of all three.
New York gets extreme ultraviolet lithography
NSTC plans to direct $825 million into the Albany project. The site will be dedicated to extreme ultraviolet lithography, a technology that’s essential to making the most advanced logic chips. The Albany Nanotech Complex, which has already seen more than $25 billion in investments from the state and industry partners over two decades, will form the heart of the future NSTC center. It already has an EUV lithography machine on site and has begun an expansion to install a next-generation version, called high-NA EUV, which promises to produce even finer chip features. Working with a tool recently installed in Europe, IBM, a long-time tenant of the Albany research facility, reported record yields of copper interconnects built every 21 nanometers, a pitch several nanometers tighter than possible with ordinary EUV.
“It’s fulfilling to see that this ecosystem can be taken to the national and global level through CHIPS Act funding,” said Mukesh Khare, general manager of IBM’s semiconductors division, speaking from the future site of the NSTC EUV center. “It’s the right time, and we have all the ingredients.”
While only a few companies are capable of manufacturing cutting-edge logic using EUV, the impact of the NSTC center will be much broader, Khare argues. It will extend down as far as early-stage startups with ideas or materials for improving the chipmaking process. “An EUV R&D center doesn’t mean just one machine,” says Khare. “It needs so many machines around it… It’s a very large ecosystem.”
Silicon Valley lands the design center
The design center is tasked with conducting advanced research in chip design, electronic design automation (EDA), chip and system architectures, and hardware security. It will also host the NSTC’s design enablement gateway—a program that provides NSTC members with secure, cloud-based access to design tools, reference processes and designs, and shared data sets, with the goal of reducing the time and cost of design. Additionally, it will house workforce development, member convening, and administration functions.
Situating the design center in Silicon Valley, with its concentration of research universities, venture capital, and workforce, seems like the obvious choice to many experts. “I can’t think of a better place,” says Patrick Soheili, co-founder of interconnect technology startup Eliyan, which is based in Santa Clara, Calif.
Abhijeet Chakraborty, vice president of engineering in the technology and product group at Silicon Valley-based Synopsys, a leading maker of EDA software, sees Silicon Valley’s expansive tech ecosystem as one of its main advantages in landing the NSTC’s design center. The region concentrates companies and researchers involved in the whole spectrum of the industry from semiconductor process technology to cloud software.
Access to such a broad range of industries is increasingly important for chip design startups, he says. “To design a chip or component these days you need to go from concept to design to validation in an environment that takes care of the entire stack,” he says. It’s prohibitively expensive for a startup to do that alone, so one of Chakraborty’s hopes for the design center is that it will help startups access the design kits and other data needed to operate in this new environment.
Packaging and prototyping still to come
A third promised center for prototyping and packaging is still to come. “The big question is where does the packaging and prototyping go?” says Mark Granahan, cofounder and CEO of Pennsylvania-based power semiconductor startup Ideal Semiconductor. “To me that’s a great opportunity.” He points out that because there is so little packaging technology infrastructure in the United States, any ambitious state or region should have a shot at hosting such a center. One of the original intentions of the act, after all, was to expand the number of regions of the country that are involved in the semiconductor industry.
But that hasn’t stopped some already tech-heavy regions from wanting it. “Oregon offers the strongest ecosystem for such a facility,” said a spokesperson for Intel, whose technology development is done there. “The state is uniquely positioned to contribute to the success of the NSTC and help drive technological advancements in the U.S. semiconductor industry.”
As NSTC makes progress, Granahan’s concern is that bureaucracy will expand with it and slow efforts to boost the U.S. chip industry. Already the layers of control are multiplying. The CHIPS Office at the National Institute of Standards and Technology executes the act. The NSTC is administered by the nonprofit Natcast, which directs the EUV center, which is in a facility run by another nonprofit, NY CREATES. “We want these things to be agile and make local decisions,” Granahan says.
Imagine pulling up to a refueling station and filling your vehicle’s tank with liquid hydrogen, as safe and convenient to handle as gasoline or diesel, without the need for high-pressure tanks or cryogenic storage. This vision of a sustainable future could become a reality if a Calgary, Canada–based company, Ayrton Energy, can scale up its innovative method of hydrogen storage and distribution. Ayrton’s technology could make hydrogen a viable, one-to-one replacement for fossil fuels in existing infrastructure like pipelines, fuel tankers, rail cars, and trucks.
The company’s approach is to use liquid organic hydrogen carriers (LOHCs) to make it easier to transport and store hydrogen. The method chemically bonds hydrogen to carrier molecules, which absorb hydrogen molecules and make them more stable—kind of like hydrogenating cooking oil to produce margarine.
A researcher pours a sample of Ayrton’s LOHC fluid into a vial. Ayrton Energy
The approach would allow liquid hydrogen to be transported and stored in ambient conditions, rather than in the high-pressure, cryogenic tanks (to hold it at temperatures below -252 ºC) currently required for keeping hydrogen in liquid form. It would also be a big improvement on gaseous hydrogen, which is highly volatile and difficult to keep contained.
Founded in 2021, Ayrton is one of several companies across the globe developing LOHCs, including Japan’s Chiyoda and Mitsubishi, Germany’s Covalion, and China’s Hynertech. But toxicity, energy density, and input energy issues have limited LOHCs as contenders for making liquid hydrogen feasible. Ayrton says its formulation eliminates these trade-offs.
Safe, Efficient Hydrogen Fuel for Vehicles
Conventional LOHC technologies used by most of the aforementioned companies rely on substances such as toluene, which forms methylcyclohexane when hydrogenated. These carriers pose safety risks due to their flammability and volatility. Hydrogenious LOHC Technologies in Erlangen, Germany, and other hydrogen fuel companies have shifted toward dibenzyltoluene, a more stable carrier that holds more hydrogen per unit volume than methylcyclohexane, though it requires higher temperatures (and thus more energy) to bind and release the hydrogen. Dibenzyltoluene hydrogenation occurs at 3 to 10 megapascals (30 to 100 bar) and 200–300 °C, compared with 10 MPa (100 bar) and just under 200 °C for methylcyclohexane.
Ayrton’s proprietary oil-based hydrogen carrier not only captures and releases hydrogen with less input energy than is required for other LOHCs, but also stores more hydrogen than methylcyclohexane can—55 kilograms per cubic meter compared with methylcyclohexane’s 50 kg/m³. Dibenzyltoluene holds more hydrogen per unit volume (up to 65 kg/m³), but Ayrton’s approach to infusing the carrier with hydrogen atoms promises to cost less. Hydrogenation or dehydrogenation with Ayrton’s carrier fluid occurs at 0.1 megapascal (1 bar) and about 100 °C, says founder and CEO Natasha Kostenuk. And as with the other LOHCs, after hydrogenation it can be transported and stored at ambient temperatures and pressures.
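The volumetric figures above can be made concrete with a back-of-the-envelope script. The 30 m³ tanker volume and hydrogen’s lower heating value of roughly 120 MJ/kg are outside assumptions added for illustration; the densities are the article’s:

```python
# Compare hydrogen delivered per tanker load for the three carriers
# discussed in the article. Tanker volume and heating value are
# illustrative assumptions, not figures from Ayrton.
LHV_H2_MJ_PER_KG = 120.0  # approximate lower heating value of hydrogen
TANKER_M3 = 30.0          # hypothetical road-tanker volume

carriers = {
    "methylcyclohexane": 50.0,  # kg H2 per m^3 (article figure)
    "Ayrton carrier":    55.0,  # article figure
    "dibenzyltoluene":   65.0,  # article figure
}

for name, density in carriers.items():
    h2_kg = density * TANKER_M3                 # hydrogen mass per load
    energy_gj = h2_kg * LHV_H2_MJ_PER_KG / 1e3  # energy content per load
    print(f"{name}: {h2_kg:.0f} kg H2 per load, ~{energy_gj:.0f} GJ")
```

By this rough measure, Ayrton’s carrier gives up some payload per load to dibenzyltoluene in exchange for far milder hydrogenation conditions.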
Judges described [Ayrton's approach] as “a critical technology for the deployment of hydrogen at large scale.” —Katie Richardson, National Renewable Energy Lab
Ayrton’s LOHC fluid is as safe to handle as margarine, but it’s still a chemical, says Kostenuk. “I wouldn’t drink it. If you did, you wouldn’t feel very good. But it’s not lethal,” she says.
Kostenuk and fellow Ayrton cofounder Brandy Kinkead (who serves as the company’s chief technical officer) were originally trying to bring hydrogen generators to market to fill gaps in the electrical grid. “We were looking for fuel cells and hydrogen storage. Fuel cells were easy to find, but we couldn’t find a hydrogen storage method or medium that would be safe and easy to transport to fuel our vision of what we were trying to do with hydrogen generators,” Kostenuk says. During the search, they came across LOHC technology but weren’t satisfied with the trade-offs demanded by existing liquid hydrogen carriers. “We had the idea that we could do it better,” she says. The duo pivoted, adjusting their focus from hydrogen generators to hydrogen storage solutions.
“Everybody gets excited about hydrogen production and hydrogen end use, but they forget that you have to store and manage the hydrogen,” Kostenuk says. Incompatibility with current storage and distribution has been a barrier to adoption, she says. “We’re really excited about being able to reuse existing infrastructure that’s in place all over the world.” Ayrton’s hydrogenated liquid has fuel-cell-grade (99.999 percent) hydrogen purity, so there’s no advantage in using pure liquid hydrogen with its need for subzero temperatures, according to the company.
The main challenge the company faces is the set of issues that come along with any technology scaling up from pilot-stage production to commercial manufacturing, says Kostenuk. “A crucial part of that is aligning ourselves with the right manufacturing partners along the way,” she notes.
Asked about how Ayrton is dealing with some other challenges common to LOHCs, Kostenuk says Ayrton has managed to sidestep them. “We stayed away from materials that are expensive and hard to procure, which will help us avoid any supply chain issues,” she says. By performing the reactions at such low temperatures, Ayrton can get its carrier fluid to withstand 1,000 hydrogenation-dehydrogenation cycles before it no longer holds enough hydrogen to be useful. Conventional LOHCs are limited to a couple of hundred cycles before the high temperatures required for bonding and releasing the hydrogen break down the fluid and diminish its storage capacity, Kostenuk says.
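Those cycle counts translate into lifetime throughput per unit of carrier fluid. A rough sketch, idealizing every cycle as delivering the carrier’s full rated density (and using 200 as a stand-in for “a couple of hundred”):

```python
# Lifetime hydrogen throughput per cubic meter of carrier fluid,
# using the densities and cycle counts quoted in the article.
AYRTON_KG_PER_M3 = 55.0        # article figure
CONVENTIONAL_KG_PER_M3 = 65.0  # dibenzyltoluene, article figure
AYRTON_CYCLES = 1000           # article figure
CONVENTIONAL_CYCLES = 200      # stand-in for "a couple of hundred"

ayrton_lifetime = AYRTON_KG_PER_M3 * AYRTON_CYCLES              # kg H2 per m^3
conventional_lifetime = CONVENTIONAL_KG_PER_M3 * CONVENTIONAL_CYCLES

print(f"Ayrton: {ayrton_lifetime:.0f} kg H2 per m^3 of fluid over its life")
print(f"conventional: {conventional_lifetime:.0f} kg H2 per m^3")
print(f"ratio: {ayrton_lifetime / conventional_lifetime:.1f}x")
```

Even giving the conventional carrier its higher per-cycle density, the longer cycle life works out to roughly four times as much hydrogen moved per unit of fluid before replacement.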
Breakthrough in Hydrogen Storage Technology
In acknowledgement of what Ayrton’s nontoxic, oil-based carrier fluid could mean for the energy and transportation sectors, the U.S. National Renewable Energy Lab (NREL) at its annual Industry Growth Forum in May named Ayrton an “outstanding early-stage venture.” A selection committee of more than 180 climate tech and cleantech investors and industry experts chose Ayrton from a pool of more than 200 initial applicants, says Katie Richardson, group manager of NREL’s Innovation and Entrepreneurship Center, which organized the forum. The committee based its decision on the company’s innovation, market positioning, business model, team, next steps for funding, technology, capital use, and quality of pitch presentation. “Judges described Ayrton’s approach as a critical technology for the deployment of hydrogen at large scale,” Richardson says.
As a next step toward enabling hydrogen to push gasoline and diesel aside, “we’re talking with hydrogen producers who are right now offering their customers cryogenic and compressed hydrogen,” says Kostenuk. “If they offered LOHC, it would enable them to deliver across longer distances, in larger volumes, in a multimodal way.” The company is also talking to some industrial site owners who could use the hydrogenated LOHC for buffer storage to hold onto some of the energy they’re getting from clean, intermittent sources like solar and wind. Another natural fit, she says, is energy service providers that are looking for a reliable method of seasonal storage beyond what batteries can offer. The goal is to eventually scale up enough to become the go-to alternative (or perhaps the standard) fuel for cars, trucks, trains, and ships.
Along the country road that leads to ATL4, a giant data center going up east of Atlanta, dozens of parked cars and pickups lean tenuously on the narrow dirt shoulders. The many out-of-state plates are typical of the phalanx of tradespeople who muster for these massive construction jobs. With tech giants, utilities, and governments budgeting upwards of US $1 trillion for capital expansion to join the global battle for AI dominance, data centers are the bunkers, factories, and skunkworks—and concrete and electricity are the fuel and ammunition.
To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for
DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.
Concrete is not just a major ingredient in data centers and the power plants being built to energize them. As the world’s most widely manufactured material, concrete—and especially the cement within it—is also a major contributor to climate change, accounting for around
6 percent of global greenhouse gas emissions. Data centers use so much concrete that the construction boom is wrecking tech giants’ commitments to eliminate their carbon emissions. Even though Google, Meta, and Microsoft have touted goals to be carbon neutral or negative by 2030, and Amazon by 2040, the industry is now moving in the wrong direction.
Last year, Microsoft’s carbon emissions jumped by
over 30 percent, primarily due to the materials in its new data centers. Google’s greenhouse emissions are up by nearly 50 percent over the past five years. As data centers proliferate worldwide, Morgan Stanley projects that data centers will release about 2.5 billion tonnes of CO2 each year by 2030—or about 40 percent of what the United States currently emits from all sources.
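The Morgan Stanley comparison can be sanity-checked in two lines: if 2.5 billion tonnes is about 40 percent of current U.S. emissions, the implied U.S. total is consistent with commonly cited figures of roughly 6 billion tonnes of CO2 per year.

```python
# Back out the U.S. emissions total implied by the article's comparison.
data_center_gt = 2.5  # projected annual data-center CO2 by 2030, Gt
share_of_us = 0.40    # "about 40 percent" of current U.S. emissions

implied_us_gt = data_center_gt / share_of_us
print(f"implied U.S. total: {implied_us_gt:.2f} Gt CO2 per year")
```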
But even as innovations in AI and the big-data construction boom are boosting emissions for the tech industry’s hyperscalers, the reinvention of concrete could also play a big part in solving the problem. Over the last decade, there’s been a wave of innovation, some of it profit-driven, some of it from academic labs, aimed at fixing concrete’s carbon problem. Pilot plants are being fielded to capture CO2 from cement plants and sock it safely away. Other projects are cooking up climate-friendlier recipes for cements. And AI and other computational tools are illuminating ways to drastically cut carbon by using less cement in concrete and less concrete in data centers, power plants, and other structures.
Demand for green concrete is clearly growing. Amazon, Google, Meta, and Microsoft recently joined an initiative led by the
Open Compute Project Foundation to accelerate testing and deployment of low-carbon concrete in data centers, for example. Supply is increasing, too—though it’s still minuscule compared to humanity’s enormous appetite for moldable rock. But if the green goals of big tech can jump-start innovation in low-carbon concrete and create a robust market for it as well, the boom in big data could eventually become a boon for the planet.
Hyperscaler Data Centers: So Much Concrete
At the construction site for ATL4, I’m met by
Tony Qorri, the company’s big, friendly, straight-talking head of construction. He says that this giant building and four others DataBank has recently built or is planning in the Atlanta area will together add 133,000 square meters (1.44 million square feet) of floor space.
They all follow a universal template that Qorri developed to optimize the construction of the company’s ever-larger centers. At each site, trucks haul in more than a thousand prefabricated concrete pieces: wall panels, columns, and other structural elements. Workers quickly assemble the precision-measured parts. Hundreds of electricians swarm the building to wire it up in just a few days. Speed is crucial when construction delays can mean losing ground in the AI battle.
The ATL4 data center outside Atlanta is one of five being built by DataBank. Together they will add over 130,000 square meters of floor space.DataBank
That battle can be measured in new data centers and floor space. The United States is home to
more than 5,000 data centers today, and the Department of Commerce forecasts that number to grow by around 450 a year through 2030. Worldwide, the number of data centers now exceeds 10,000, and analysts project another 26.5 million m2 of floor space over the next five years. Here in metro Atlanta, developers broke ground last year on projects that will triple the region’s data-center capacity. Microsoft, for instance, is planning a 186,000-m2 complex; big enough to house around 100,000 rack-mounted servers, it will consume 324 megawatts of electricity.
The velocity of the data-center boom means that no one is pausing to await greener cement. For now, the industry’s mantra is “Build, baby, build.”
“There’s no good substitute for concrete in these projects,” says Aaron Grubbs, a structural engineer at ATL4. The latest processors going on the racks are bigger, heavier, hotter, and far more power hungry than previous generations. As a result, “you add a lot of columns,” Grubbs says.
1,000 Companies Working on Green Concrete
Concrete may not seem an obvious star in the story of how electricity and electronics have permeated modern life. Other materials—copper and silicon, aluminum and lithium—get higher billing. But concrete provides the literal, indispensable foundation for the world’s electrical workings. It is the solid, stable, durable, fire-resistant stuff that makes power generation and distribution possible. It undergirds nearly all advanced manufacturing and telecommunications. What was true in the rapid build-out of the power industry a century ago remains true today for the data industry: Technological progress begets more growth—and more concrete. Although each generation of processor and memory squeezes more computing onto each chip, and
advances in superconducting microcircuitry raise the tantalizing prospect of slashing the data center’s footprint, Qorri doesn’t think his buildings will shrink to the size of a shoebox anytime soon. “I’ve been through that kind of change before, and it seems the need for space just grows with it,” he says.
By weight, concrete is not a particularly carbon-intensive material. Creating a
kilogram of steel, for instance, releases about 2.4 times as much CO2 as a kilogram of cement does. But the global construction industry consumes about 35 billion tonnes of concrete a year. That’s about 4 tonnes for every person on the planet and twice as much as all other building materials combined. It’s that massive scale—and the associated cost and sheer number of producers—that creates both a threat to the climate and inertia that resists change.
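The per-capita claim is easy to verify against the quoted annual tonnage. The world-population figure of about 8 billion is an outside assumption added for the check:

```python
# Check the "about 4 tonnes for every person" figure against the
# 35-billion-tonne annual total quoted in the article.
concrete_tonnes_per_year = 35e9
world_population = 8e9  # approximate, outside assumption

per_capita = concrete_tonnes_per_year / world_population
print(f"{per_capita:.2f} tonnes of concrete per person per year")
```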
At its Edmonton, Alberta, plant [above], Heidelberg Materials is adding systems to capture carbon dioxide produced by the manufacture of Portland cement.Heidelberg Materials North America
Yet change is afoot. When I visited the innovation center operated by the Swiss materials giant
Holcim, in Lyon, France, research executives told me about the database they’ve assembled of nearly 1,000 companies working to decarbonize cement and concrete. None yet has enough traction to measurably reduce global concrete emissions. But the innovators hope that the boom in data centers—and in associated infrastructure such as new
nuclear reactors and offshore wind farms, where each turbine foundation can use up to 7,500 cubic meters of concrete—may finally push green cement and concrete beyond labs, startups, and pilot plants.
Why cement production emits so much carbon
Though the terms “cement” and “concrete” are often conflated, they are not the same thing. A popular analogy in the industry is that cement is the egg in the concrete cake. Here’s the basic recipe: Blend cement with larger amounts of sand and other aggregates. Then add water, to trigger a chemical reaction with the cement. Wait a while for the cement to form a matrix that pulls all the components together. Let sit as it cures into a rock-solid mass.
Portland cement, the key binder in most of the world’s concrete, was serendipitously invented in England by William Aspdin, while he was tinkering with earlier mortars that his father, Joseph, had patented in 1824. More than a century of science has revealed the essential chemistry of how cement works in concrete, but new findings are still leading to important innovations, as well as insights into how concrete absorbs atmospheric carbon as it ages.
As in the Aspdins’ day, the process to make Portland cement still begins with limestone, a sedimentary mineral made from crystalline forms of calcium carbonate. Most of the limestone quarried for cement originated hundreds of millions of years ago, when ocean creatures
mineralized calcium and carbonate in seawater to make shells, bones, corals, and other hard bits.
Cement producers often build their large plants next to limestone quarries that can supply decades’ worth of stone. The stone is crushed and then heated in stages as it is combined with lesser amounts of other minerals that typically include calcium, silicon, aluminum, and iron. What emerges from the mixing and cooking are small, hard nodules called clinker. A bit more processing, grinding, and mixing turns those pellets into powdered Portland cement, which accounts for
about 90 percent of the CO2 emitted by the production of conventional concrete [see infographic, “Roads to Cleaner Concrete”].
Karen Scrivener, shown in her lab at EPFL, has developed concrete recipes that reduce emissions by 30 to 40 percent.Stefan Wermuth/Bloomberg/Getty Images
Decarbonizing Portland cement is often called heavy industry’s “hard problem” because of two processes fundamental to its manufacture. The first process is combustion: To coax limestone’s chemical transformation into clinker, large heaters and kilns must sustain temperatures around 1,500 °C. Currently that means burning coal, coke, fuel oil, or natural gas, often along with waste plastics and tires. The exhaust from those fires generates 35 to 50 percent of the cement industry’s emissions. Most of the remaining emissions result from gaseous CO2 liberated by the chemical transformation of the calcium carbonate (CaCO3) into calcium oxide (CaO), a process called calcination. That gas also usually heads straight into the atmosphere.
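The calcination reaction puts a hard chemical floor under those process emissions, independent of what fuel heats the kiln. Working it out from standard molar masses:

```python
# Stoichiometry of calcination: CaCO3 -> CaO + CO2.
# Standard molar masses in g/mol.
M_CACO3 = 100.09  # calcium carbonate
M_CAO = 56.08     # calcium oxide (lime)
M_CO2 = 44.01     # carbon dioxide

# Every tonne of limestone decomposed releases ~0.44 t of CO2;
# equivalently, every tonne of lime produced releases ~0.78 t.
co2_per_tonne_limestone = M_CO2 / M_CACO3
co2_per_tonne_cao = M_CO2 / M_CAO

print(f"{co2_per_tonne_limestone:.2f} t CO2 per t CaCO3 calcined")
print(f"{co2_per_tonne_cao:.2f} t CO2 per t CaO produced")
```

These figures are chemistry, not process inefficiency, which is why calcination emissions can only be avoided by capturing the gas or by changing the feedstock.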
Concrete production, in contrast, is mainly a business of mixing cement powder with other ingredients and then delivering the slurry speedily to its destination before it sets. Most concrete in the United States is prepared to order at batch plants—souped-up materials depots where the ingredients are combined, dosed out from hoppers into special mixer trucks, and then driven to job sites. Because concrete grows too stiff to work after about 90 minutes, concrete production is highly local. There are more ready-mix batch plants in the United States than there are Burger King restaurants.
Batch plants can offer thousands of potential mixes, customized to fit the demands of different jobs. Concrete in a hundred-story building differs from that in a swimming pool. With flexibility to vary the quality of sand and the size of the stone—and to add a wide variety of chemicals—batch plants have more tricks for lowering carbon emissions than any cement plant does.
Cement plants that capture carbon
China accounts for more than half of the concrete produced and used in the world, but companies there are hard to track. Outside of China, the top three multinational cement producers—Holcim, Heidelberg Materials in Germany, and Cemex in Mexico—have launched pilot programs to snare CO2 emissions before they escape and then bury the waste deep underground. To do that, they’re taking carbon capture and storage (CCS) technology already used in the oil and gas industry and bolting it onto their cement plants.
These pilot programs will need to scale up without eating profits—something that eluded the coal industry when it tried CCS decades ago. Tough questions also remain about where exactly to store billions of tonnes of CO2 safely, year after year.
The appeal of CCS for cement producers is that they can continue using existing plants while still making progress toward carbon neutrality, which trade associations have
committed to reach by 2050. But with well over 3,000 plants around the world, adding CCS to all of them would take enormous investment. Currently less than 1 percent of the global supply is low-emission cement. Accenture, a consultancy, estimates that outfitting the whole industry for carbon capture could cost up to $900 billion.
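The per-plant arithmetic behind that estimate is simple but sobering. Taking the “up to $900 billion” figure and the 3,000-plant lower bound at face value:

```python
# Rough per-plant retrofit cost implied by the article's figures.
total_cost_usd = 900e9  # Accenture's upper estimate for the industry
plants = 3000           # lower bound on cement plants worldwide

per_plant_millions = total_cost_usd / plants / 1e6
print(f"~${per_plant_millions:.0f} million per plant")
```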
“The economics of carbon capture is a monster,” says
Rick Chalaturnyk, a professor of geotechnical engineering at the University of Alberta, in Edmonton, Canada, who studies carbon capture in the petroleum and power industries. He sees incentives for the early movers on CCS, however. “If Heidelberg, for example, wins the race to the lowest carbon, it will be the first [cement] company able to supply those customers that demand low-carbon products”—customers such as hyperscalers.
Though cement companies seem unlikely to invest their own billions in CCS, generous government subsidies have enticed several to begin pilot projects. Heidelberg has
announced plans to start capturing CO2 from its Edmonton operations in late 2026, transforming it into what the company claims would be “the world’s first full-scale net-zero cement plant.” Exhaust gas will run through stations that purify the CO2 and compress it into a liquid, which will then be transported to chemical plants to turn it into products or to depleted oil and gas reservoirs for injection underground, where hopefully it will stay put for an epoch or two.
Chalaturnyk says that the scale of the Edmonton plant, which aims to capture
a million tonnes of CO2 a year, is big enough to give CCS technology a reasonable test. Proving the economics is another matter. Half the $1 billion cost for the Edmonton project is being paid by the governments of Canada and Alberta.
ROADS TO CLEANER CONCRETE
As the big-data construction boom boosts the tech industry’s emissions, the reinvention of concrete could play a major role in solving the problem.
• CONCRETE TODAY Most of the greenhouse emissions from concrete come from the production of Portland cement, which requires high heat and releases carbon dioxide (CO2) directly into the air.
• CONCRETE TOMORROW At each stage of cement and concrete production, advances in ingredients, energy supplies, and uses of concrete promise to reduce waste and pollution.
The U.S. Department of Energy has similarly offered Heidelberg
up to $500 million to help cover the cost of attaching CCS to its Mitchell, Ind., plant and burying up to 2 million tonnes of CO2 per year below the plant. And the European Union has gone even bigger, allocating nearly €1.5 billion ($1.6 billion) from its Innovation Fund to support carbon capture at cement plants in seven of its member nations.
These tests are encouraging, but they are all happening in rich countries, where demand for concrete peaked decades ago. Even in China, concrete production has started to flatten. All the growth in global demand through 2040 is expected to come from less-affluent countries, where populations are still growing and quickly urbanizing. According to
projections by the Rhodium Group, cement production in those regions is likely to rise from around 30 percent of the world’s supply today to 50 percent by 2050 and 80 percent before the end of the century.
So will rich-world CCS technology translate to the rest of the world? I asked Juan Esteban Calle Restrepo, the CEO of
Cementos Argos, the leading cement producer in Colombia, about that when I sat down with him recently at his office in Medellín. He was frank. “Carbon capture may work for the U.S. or Europe, but countries like ours cannot afford that,” he said.
Better cement through chemistry
As long as cement plants run limestone through fossil-fueled kilns, they will generate excessive amounts of carbon dioxide. But there may be ways to ditch the limestone—and the kilns. Labs and startups have been finding replacements for limestone, such as calcined kaolin clay and fly ash, that don’t release CO2 when heated. Kaolin clays are abundant around the world and have been used for centuries in Chinese porcelain and more recently in cosmetics and paper. Fly ash—a messy, toxic by-product of coal-fired power plants—is cheap and still widely available, even as coal power dwindles in many regions.
At the Swiss Federal Institute of Technology Lausanne (EPFL),
Karen Scrivener and colleagues developed cements that blend calcined kaolin clay and ground limestone with a small portion of clinker. Calcining clay can be done at temperatures low enough that electricity from renewable sources can do the job. Various studies have found that the blend, known as LC3, can reduce overall emissions by 30 to 40 percent compared to those of Portland cement.
LC3 is also cheaper to make than Portland cement and performs as well for nearly all common uses. As a result, calcined clay plants have popped up across Africa, Europe, and Latin America. In Colombia, Cementos Argos is already producing
more than 2 million tonnes of the stuff annually. The World Economic Forum’s Centre for Energy and Materials counts LC3 among the best hopes for the decarbonization of concrete. Wide adoption by the cement industry, the centre reckons, “can help prevent up to 500 million tonnes of CO2 emissions by 2030.”
In a win-win for the environment, fly ash can also be used as a building block for low- and even zero-emission concrete, and the high heat of processing neutralizes many of the toxins it contains. Ancient Romans used
volcanic ash to make slow-setting but durable concrete: The Pantheon, built nearly two millennia ago with ash-based cement, is still in great shape.
Coal fly ash is a cost-effective ingredient that has reactive properties similar to those of Roman cement and Portland cement. Many concrete plants already add fresh fly ash to their concrete mixes, replacing
15 to 35 percent of the cement. The ash improves the workability of the concrete, and though the resulting concrete is not as strong for the first few months, it grows stronger than regular concrete as it ages, like the Pantheon.
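Combining two of the article’s figures, cement’s roughly 90 percent share of conventional concrete’s CO2 and a 15 to 35 percent cement replacement, gives a rough sense of the payoff. Treating the ash itself as emissions-free is an idealization:

```python
# Estimate the cut in a concrete mix's overall CO2 footprint from
# substituting fly ash for part of the cement, assuming the ash
# contributes negligible emissions of its own.
CEMENT_SHARE_OF_CO2 = 0.90  # cement's share of concrete's CO2 (article)

for replacement in (0.15, 0.35):   # fly-ash replacement range (article)
    cut = CEMENT_SHARE_OF_CO2 * replacement
    print(f"{replacement:.0%} fly ash -> ~{cut:.1%} lower footprint")
```

So even at the low end of the substitution range, the whole mix’s footprint drops by more than a tenth.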
University labs have tested concretes made entirely with fly ash and found that some actually outperform the standard variety. More than 15 years ago, researchers at Montana State University used concrete made with
100 percent fly ash in the floors and walls of a credit union and a transportation research center. But performance depends greatly on the chemical makeup of the ash, which varies from one coal plant to the next, and on following a tricky recipe. The decommissioning of coal-fired plants has also been making fresh fly ash scarcer and more expensive.
At Sublime Systems’ pilot plant in Massachusetts, the company is using electrochemistry instead of heat to produce lime silicate cements that can replace Portland cement.Tony Luong
That has spurred new methods to treat and use fly ash that’s been buried in landfills or dumped into ponds. Such industrial burial grounds hold enough fly ash to make concrete for decades, even after every coal plant shuts down. Utah-based
Eco Material Technologies is now producing cements that include both fresh and recovered fly ash as ingredients. The company claims it can replace up to 60 percent of the Portland cement in concrete—and that a new variety, suitable for 3D printing, can substitute entirely for Portland cement.
Hive 3D Builders, a Houston-based startup, has been feeding that low-emissions concrete into robots that are printing houses in several Texas developments. “We are 100 percent Portland cement–free,” says Timothy Lankau, Hive 3D’s CEO. “We want our homes to last 1,000 years.”
Sublime Systems, a startup spun out of MIT by battery scientists, uses electrochemistry rather than heat to make low-carbon cement from rocks that don’t contain carbon. Similar to a battery, Sublime’s process uses a voltage between an anode and a cathode to create a pH gradient that isolates silicates and reactive calcium, in the form of lime (CaO). The company mixes those ingredients together to make a cement with no fugitive carbon, no kilns or furnaces, and binding power comparable to that of Portland cement. With the help of $87 million from the U.S. Department of Energy, Sublime is building a plant in Holyoke, Mass., that will be powered almost entirely by hydroelectricity. Recently the company was tapped to provide concrete for a major offshore wind farm planned off the coast of Martha’s Vineyard.
Software takes on the hard problem of concrete
It is unlikely that any one innovation will allow the cement industry to hit its target of carbon neutrality before 2050. New technologies take time to mature, scale up, and become cost-competitive. In the meantime, says
Philippe Block, a structural engineer at ETH Zurich, smart engineering can reduce carbon emissions through the leaner use of materials.
His
research group has developed digital design tools that make clever use of geometry to maximize the strength of concrete structures while minimizing their mass. The team’s designs start with the soaring architectural elements of ancient temples, cathedrals, and mosques—in particular, vaults and arches—which they miniaturize and flatten and then 3D print or mold inside concrete floors and ceilings. The lightweight slabs, suitable for the upper stories of apartment and office buildings, use much less concrete and steel reinforcement and have a CO2 footprint that’s reduced by 80 percent.
There’s hidden magic in such lean design. In multistory buildings, much of the mass of concrete is needed just to hold the weight of the material above it. The carbon savings of Block’s lighter slabs thus compound, because the size, cost, and emissions of a building’s conventional-concrete elements are slashed.
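The compounding effect can be seen in a toy model in which each story’s columns are sized in proportion to the load they carry from above. The 10-story height and 5 percent sizing factor are illustrative assumptions, not figures from Block’s group:

```python
# Toy structural model: walk from the roof down, sizing each story's
# columns in proportion to the accumulated load above them (slabs plus
# the columns themselves). Lighter slabs then shrink every column below.
def building_concrete(slab_mass, stories=10, column_factor=0.05):
    load = 0.0                 # cumulative load carried at each level
    slabs = columns = 0.0
    for _ in range(stories):
        load += slab_mass
        col = column_factor * load  # column sized for the load above
        load += col                 # columns add their own weight
        slabs += slab_mass
        columns += col
    return slabs, columns

s1, c1 = building_concrete(slab_mass=1.0)   # conventional slabs
s2, c2 = building_concrete(slab_mass=0.2)   # 80 percent lighter slabs
print(f"column concrete shrinks by {1 - c2 / c1:.0%} "
      "without being redesigned directly")
```

Because the columns exist mostly to carry the slabs, the slab savings propagate all the way down to the foundations.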
Vaulted, a Swiss startup, uses digital design tools to minimize the concrete in floors and ceilings, cutting their CO2 footprint by 80 percent.Vaulted
In Dübendorf, Switzerland, a
wildly shaped experimental building has floors, roofs, and ceilings created by Block’s structural system. Vaulted, a startup spun out of ETH, is engineering and fabricating the lighter floors of a 10-story office building under construction in Zug, Switzerland.
That country has also been a leader in smart ways to recycle and reuse concrete, rather than simply landfilling demolition rubble. This is easier said than done—concrete is tough stuff, riddled with rebar. But there’s an economic incentive: Raw materials such as sand and limestone are becoming scarcer and more costly. Some jurisdictions in Europe now require that new buildings be made from recycled and reused materials. The
new addition to the Kunsthaus Zürich museum, a showcase of exquisite Modernist architecture, uses recycled material for all but 2 percent of its concrete.
As new policies goose demand for recycled materials and threaten to restrict future use of Portland cement across Europe, Holcim has begun building recycling plants that can reclaim cement clinker from old concrete. It recently turned the demolition rubble from some 1960s apartment buildings outside Paris into part of a 220-unit housing complex—touted as the first building made from
100 percent recycled concrete. The company says it plans to build concrete recycling centers in every major metro area in Europe and, by 2030, to include 30 percent recycled material in all of its cement.
Further innovations in low-carbon concrete are certain to come, particularly as the powers of machine learning are applied to the problem. Over the past decade, the number of research papers reporting on computational tools to explore the vast space of possible concrete mixes has
grown exponentially. Much as AI is being used to accelerate drug discovery, the tools learn from huge databases of proven cement mixes and then apply their inferences to evaluate untested mixes.
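The screening loop those tools perform can be sketched in a few lines: fit a predictor on known mixes, then search candidates for the least cement that still clears a strength target. The tiny dataset, the linear model, and every number below are hypothetical stand-ins for the large proprietary databases such projects actually use:

```python
# Minimal sketch of data-driven mix screening: learn strength as a
# function of cement fraction from known mixes, then pick the candidate
# with the least cement that still meets a target strength.
def fit_linear(xs, ys):
    # ordinary least squares for y = a*x + b, one feature
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# hypothetical training data: cement fraction of binder -> 28-day MPa
cement_frac = [1.00, 0.85, 0.70, 0.55]
strength =    [40.0, 38.0, 35.0, 30.0]
a, b = fit_linear(cement_frac, strength)

target_mpa = 32.0
candidates = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75]
feasible = [f for f in candidates if a * f + b >= target_mpa]
best = min(feasible)  # least cement that still clears the target
print(f"lowest-cement feasible mix: {best:.2f} cement fraction")
```

Real systems replace the linear fit with models trained on thousands of mixes and add constraints for workability, cost, and ingredient availability, but the search structure is the same.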
Researchers from the University of Illinois and Chicago-based
Ozinga, one of the largest private concrete producers in the United States, recently worked with Meta to feed 1,030 known concrete mixes into an AI. The project yielded a novel mix that will be used for sections of a data-center complex in DeKalb, Ill. The AI-derived concrete has a carbon footprint 40 percent lower than the conventional concrete used on the rest of the site. Ryan Cialdella, Ozinga’s vice president of innovation, smiles as he notes the virtuous circle: AI systems that live in data centers can now help cut emissions from the concrete that houses them.
A sustainable foundation for the information age
Cheap, durable, and abundant yet unsustainable, concrete made with Portland cement has been one of modern technology’s Faustian bargains. The built world is on track to double in floor space by 2060, adding 230,000 km², or more than half the area of California. Much of that will house the 2 billion more people we are likely to add to our numbers. As global transportation, telecom, energy, and computing networks grow, their new appendages will rest upon concrete. But if concrete doesn’t change, we will perversely be forced to produce even more concrete to protect ourselves from the coming climate chaos, with its rising seas, fires, and extreme weather.
The AI-driven boom in data centers is a strange bargain of its own. In the future, AI may help us live even more prosperously, or it may undermine our freedoms, civilities, employment opportunities, and environment. But solutions to the bad climate bargain that AI’s data centers foist on the planet are at hand, if there’s a will to deploy them. Hyperscalers and governments are among the few organizations with the clout to rapidly change what kinds of cement and concrete the world uses, and how those are made. With a pivot to sustainability, concrete’s unique scale makes it one of the few materials that could do most to protect the world’s natural systems. We can’t live without concrete—but with some ambitious reinvention, we can thrive with it.
There’s good earnings news for U.S. members: Salaries are rising. Base salaries increased by about 5 percent from 2022 to 2023, according to the IEEE-USA 2024 Salary and Benefits Survey Report.
Last year’s report showed that inflation had outpaced earnings growth, but that’s not the case this year.
In current dollars, the median income of U.S. engineers and other tech professionals who are IEEE members was US $174,161 last year, up about 5 percent from $169,000 in 2022, excluding overtime pay, profit sharing, and other supplemental earnings. Unemployment fell to 1.2 percent in this year’s survey, down from 1.4 percent in the previous year.
As with prior surveys, earned income is measured for the year preceding the survey’s date of record—so the 2024 survey reports income earned in 2023.
To calculate the median salary, IEEE-USA considered only respondents who were tech professionals working full time in their primary area of competence—a sample of 4,192 people.
Circuits and device engineers earn the most
Those specializing in circuits and devices earned the highest median income, $196,614, followed by those working in communications ($190,000) and computers/software technology ($181,000).
Specific lucrative subspecialties include broadcast technology ($226,000), image/video ($219,015), and hardware design or hardware support ($215,000).
Engineers in the energy and power engineering field earned the lowest salary: $155,000.
Higher education affects how well one is paid. On average, those with a Ph.D. earned the highest median income: $193,636. Members with a master’s degree in electrical engineering or computer engineering reported a salary of $182,500. Those with a bachelor’s degree in electrical engineering or computer engineering earned a median income of $159,000.
Earning potential also depends on geography within the United States. Respondents in IEEE Region 6 (Western U.S.) fared substantially better than those in Region 4 (Central U.S.), earning nearly $48,500 more on average. However, the report notes, the cost of living in the western part of the country is significantly higher than elsewhere.
The top earners live in California, Maryland, and Oregon, while those earning the least live in Arkansas, Nebraska, and South Carolina.
Academics are among the lowest earners
Full professors earned an average salary of $190,000, associate professors earned $118,000, and assistant professors earned $104,500.
Almost 38 percent of the academics surveyed are full professors, 16.6 percent are associate professors, and 11.6 percent are assistant professors. About 10 percent of respondents hold a nonteaching research appointment. Nearly half (46.8 percent) are tenured, and 10.7 percent are on a tenure track.
Gender and ethnic gaps widen
The gap between women’s and men’s salaries increased. Even considering experience levels, women earned $30,515 less than their male counterparts.
The median primary income is highest among Asian/Pacific Islander technical professionals, at $178,500, followed by White engineers ($176,500), Hispanic engineers ($152,178), African-American engineers ($150,000), and Native American/Alaskan Native engineers ($148,000). The gap between Black engineers’ median salary and the overall median widened by $3,500 compared with last year’s report.
Asians and Pacific Islanders are the largest minority group, at 14.4 percent. Only 5 percent of members are Hispanic, 2.6 percent are African Americans, and American Indians/Alaskan Natives account for 0.9 percent of the respondents.
More job satisfaction
According to the report, overall job satisfaction is higher than at any time in the past 10 years. Members reported that their work was technically challenging and meaningful to their company. On the whole, they weren’t satisfied with advancement opportunities or their current compensation, however.
The 60-page report is available for purchase at the member price of US $125. Nonmembers pay $225.
The program is an immersive experience designed for students ages 13 to 17. It offers hands-on projects, interactive workshops, field trips, and insights into the profession from practicing engineers. Participants get to stay on a college campus, providing them with a preview of university life.
Student turned instructor
One future innovator is Natalie Ghannad, who participated in the program as a student in 2022 and was a member of this year’s instructional team in Houston at Rice University. Ghannad is in her second year as an electrical engineering student at the University of San Francisco. University students join forces with science and engineering teachers at each TESI location to serve as instructors.
For many years, Ghannad wanted to follow in her mother’s footsteps and become a pediatric neurosurgeon. As a high school junior in Houston in 2022, however, she had a change of heart and decided to pursue engineering after participating in the TESI at Rice. She received a full scholarship from the IEEE Foundation TESI Scholarship Fund, supported by IEEE societies and councils.
“I really liked that it was hands-on,” Ghannad says. “From the get-go, we were introduced to 3D printers and laser cutters.”
The benefit of participating in the program, she says, was “having the opportunity to not just do the academic side of STEM but also to really get to play around, get your hands dirty, and figure out what you’re doing.”
“Looking back,” she adds, “there are so many parallels between what I’ve actually had to do as a college student, and having that knowledge from the Summer Institute has really been great.”
She was inspired to volunteer as a teaching assistant because, she says, “I know I definitely want to teach, have the opportunity to interact with kids, and also be part of the future of STEM.”
More than 90 students attended the program at Rice. They visited Space Center Houston, where former astronauts talked to them about the history of space exploration.
Participants also were treated to presentations by guest speakers including IEEE Senior Member Phil Bautista, the founder of Bull Creek Data, a consulting company that provides technical solutions; IEEE Senior Member Christopher Sanderson, chair of the IEEE Region 5 Houston Section; and James Burroughs, a standards manager for Siemens in Atlanta. Burroughs, who spoke at all three TESI events this year, provided insight into overcoming barriers to do the important work of an engineer.
Learning about transit systems and careers
The University of Pennsylvania, in Philadelphia, hosted the East Coast TESI event this year. Students were treated to a field trip to the Southeastern Pennsylvania Transportation Authority (SEPTA), one of the largest transit systems in the country. Engineers from AECOM, a global infrastructure consulting firm with offices in Philadelphia that worked closely with SEPTA on its most recent station renovation, collaborated with IEEE to host the trip.
Participants also heard from guest speakers including Api Appulingam, chief development officer of the Philadelphia International Airport, who told the students the inspiring story of her career.
Guest speakers from Google and Meta
Students who attended the TESI camp at the University of San Diego visited Qualcomm. Hosted by the IEEE Region 6 director, Senior Member Kathy Herring Hayashi, they learned about cutting-edge technology and toured the Qualcomm Museum.
Students also heard from guest speakers including IEEE Member Andrew Saad, an engineer at Google; Gautam Deryanni, a silicon validation engineer at Meta; Kathleen Kramer, 2025 IEEE president and a professor of electrical engineering at the University of San Diego; as well as Burroughs.
“I enjoyed the opportunity to meet new, like-minded people and enjoy fun activities in the city, as well as get a sense of the dorm and college life,” one participant said.
Hands-on projects
In addition to field trips and guest speakers, participants at each location worked on several hands-on projects highlighting the engineering design process. In the toxic popcorn challenge, the students designed a process to safely remove harmful kernels. Students tackling the bridge challenge designed and built a span out of balsa wood and glue, then tested its strength by gradually adding weight until it failed. The glider challenge gave participants the tools and knowledge to build and test their aircraft designs.
One participant applauded the hands-on activities, saying, “All of them gave me a lot of experience and helped me have a better idea of what engineering field I want to go in. I love that we got to participate in challenges and not just listen to lectures—which can be boring.”
The students also worked on a weeklong sparking solutions challenge. Small teams identified a societal problem, such as a lack of clean water or limited mobility for senior citizens, then designed a solution to address it. On the last day of camp, they pitched their prototypes to a team of IEEE members that judged the projects based on their originality and feasibility. Each student on the winning teams at each location was awarded a programmable Mech-5 robot.
Thanks to their ability to adjust the system’s output accurately and quickly without detailed knowledge about its dynamics, PID control loops stand as a powerful and widely used tool for maintaining a stable and predictable output in a variety of applications. In this paper, we review the fundamental principles and characteristics of these control systems, providing insight into their functioning, tuning strategies, advantages, and trade-offs.
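The behavior described above can be illustrated with a minimal discrete-time PID loop. This is a generic sketch, not Zurich Instruments’ implementation; the gains, time step, and first-order plant model are all assumptions chosen for illustration.

```python
# Minimal discrete-time PID controller (hypothetical example).
# Output u = Kp*e + Ki*integral(e) + Kd*d(e)/dt, with e = setpoint - measurement.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt  # slope for the D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive an assumed first-order plant (dy/dt = u - y) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)
    y += (u - y) * 0.01
print(round(y, 3))
```

Note how the integral term drives the steady-state error to zero without the controller needing a model of the plant’s dynamics; tuning the three gains trades off response speed, overshoot, and noise sensitivity.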
As a result of their integrated architecture, Zurich Instruments’ lock-in amplifiers allow users to make the most of all the advantages of digital PID control loops, so that their operation can be adapted to match the needs of different use cases.
“Cover, bring to a boil, then reduce heat. Simmer for 20 minutes.” These directions seem simple enough, and yet I have messed up many, many pots of rice over the years. My sympathies to anyone who’s ever had to boil rice on a stovetop, cook it in a clay pot over a kerosene or charcoal burner, or prepare it in a cast-iron cauldron. All hail the 1955 invention of the automatic rice cooker!
How the automatic rice cooker was invented
It isn’t often that housewives get credit in the annals of invention, but in the story of the automatic rice cooker, a woman takes center stage. That happened only after the first attempts at electrifying rice cooking, starting in the 1920s, turned out to be utter failures. Matsushita, Mitsubishi, and Sony all experimented with variations of placing electric heating coils inside wooden tubs or aluminum pots, but none of these cookers automatically switched off when the rice was done. The human cook—almost always a wife or daughter—still had to pay attention to avoid burning the rice. These electric rice cookers didn’t save any real time or effort, and they sold poorly.
But Shogo Yamada, the energetic development manager of the electric appliance division for Toshiba, became convinced that his company could do better. In post–World War II Japan, he was demonstrating and selling electric washing machines all over the country. When he took a break from his sales pitch and actually talked to women about their daily household labors, he discovered that cooking rice—not laundry—was their most challenging chore. Rice was a mainstay of the Japanese diet, and women had to prepare it up to three times a day. It took hours of work, starting with getting up by 5:00 am to fan the flames of a kamado, a traditional earthenware stove fueled by charcoal or wood on which the rice pot was heated. The inability to properly mind the flame could earn a woman the label of “failed housewife.”
In 1951, Yamada became the cheerleader of the rice cooker within Toshiba, which was understandably skittish given the past failures of other companies. To develop the product, he turned to Yoshitada Minami, the manager of a small family factory that produced electric water heaters for Toshiba. The water-heater business wasn’t great, and the factory was on the brink of bankruptcy.
How Sources Influence the Telling of History
As someone who does a lot of research online, I often come across websites that tell very interesting histories, but without any citations. It takes only a little bit of digging before I find entire passages copied and pasted from one site to another, and so I spend a tremendous amount of time trying to track down the original source. Accounts of popular consumer products, such as the rice cooker, are particularly prone to this problem. That’s not to say that popular accounts are necessarily wrong; plus they are often much more engaging than boring academic pieces. This is just me offering a note of caution because every story offers a different perspective depending on its sources.
For example, many popular blogs sing the praises of Fumiko Minami and her tireless contributions to the development of the rice maker. But in my research, I found no mention of Minami before Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” which itself was based on episode 42 of the Project X: Challengers documentary series that was produced by NHK and aired in 2002.
If instead I had relied solely on the description of the rice cooker’s early development provided by the Toshiba Science Museum (here’s an archived page from 2007), this month’s column would have offered a detailed technical description of how uncooked rice has a crystalline structure, but as it cooks, it becomes a gelatinized starch. The museum’s website notes that few engineers had ever considered the nature of cooking rice before the rice-cooker project, and it refers simply to the “project team” that discovered the process. There’s no mention of Fumiko.
Both stories are factually correct, but they emphasize different details. Sometimes it’s worth asking who is part of the “project team” because the answer might surprise you. —A.M.
Although Minami understood the basic technical principles for an electric rice cooker, he didn’t know or appreciate the finer details of preparing perfect rice. And so Minami turned to his wife, Fumiko.
Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.
But how would an automatic rice cooker know when the 20 minutes was up? A suggestion came from Toshiba engineers. A working model based on a double boiler (a pot within a pot for indirect heating) used evaporation to mark time. While the rice cooked in the inset pot, a bimetallic switch measured the temperature in the external pot. Boiling water would hold at a constant 100 °C, but once it had evaporated, the temperature would soar. When the internal temperature of the double boiler surpassed 100 °C, the switch would bend and cut the circuit. One cup of boiling water in the external pot took 20 minutes to evaporate. The same basic principle is still used in modern cookers.
Yamada wanted to ensure that the rice cooker worked in all climates, so Fumiko tested various prototypes in extreme conditions: on her rooftop in cold winters and scorching summers and near steamy bathrooms to mimic high humidity. When Fumiko became ill from testing outside, her children pitched in to help. None of the aluminum and glass prototypes, it turned out, could maintain their internal temperature in cold weather. The final design drew inspiration from the Hokkaidō region, Japan’s northernmost prefecture. Yamada had seen insulated cooking pots there, so the Minami family tried covering the rice cooker with a triple-layered iron exterior. It worked.
How Toshiba sold its automatic rice cooker
Toshiba’s automatic rice cooker went on sale on 10 December 1955, but initially, sales were slow. It didn’t help that the rice cooker was priced at 3,200 yen, about a third of the average Japanese monthly salary. It took some salesmanship to convince women they needed the new appliance. This was Yamada’s time to shine. He demonstrated using the rice cooker to prepare takikomi gohan, a rice dish seasoned with dashi, soy sauce, and a selection of meats and vegetables. When the dish was cooked in a traditional kamado, the soy sauce often burned, making the rather simple dish difficult to master. Women who saw Yamada’s demo were impressed with the ease offered by the rice cooker.
Another clever sales technique was to get electricity companies to serve as Toshiba distributors. At the time, Japan was facing a national power surplus stemming from the widespread replacement of carbon-filament lightbulbs with more efficient tungsten ones. The energy savings were so remarkable that operations at half of the country’s power plants had to be curtailed. But with utilities distributing Toshiba rice cookers, increased demand for electricity was baked in.
Within a year, Toshiba was selling more than 200,000 rice cookers a month. Many of them came from the Minamis’ factory, which was rescued from near-bankruptcy in the process.
How the automatic rice cooker conquered the world
From there, the story becomes an international one with complex localization issues. Japanese sushi rice is not the same as Thai sticky rice, which is not the same as Persian tahdig, Indian basmati, Italian risotto, or Spanish paella. You see where I’m going with this. Every culture that has a unique rice dish almost always uses its own regional rice with its own preparation preferences. And so countries wanted their own type of automatic electric rice cooker (although some rejected automation in favor of traditional cooking methods).
Yoshiko Nakano, a professor at the University of Hong Kong, wrote a book in 2009 about the localized/globalized nature of rice cookers. Where There Are Asians, There Are Rice Cookers traces the popularization of the rice cooker from Japan to China and then the world by way of Hong Kong. One of the key differences between the Japanese and Chinese rice cooker is that the latter has a glass lid, which Chinese cooks demanded so they could see when to add sausage. More innovation and diversification followed. Modern rice cookers have a setting to give Iranians crispy rice at the bottom of the pot, one to let Thai customers cook noodles, one for perfect rice porridge, and one for steel-cut oats.
My friend Hyungsub Choi, in his 2022 article “Before Localization: The Story of the Electric Rice Cooker in South Korea,” pushes back a bit on Nakano’s argument that countries were insistent on tailoring cookers to their tastes. From 1965, when the first domestic rice cooker appeared in South Korea, to the early 1990s, Korean manufacturers engaged in “conscious copying,” Choi argues. That is, they didn’t bother with either innovation or adaptation. As a result, most Koreans had to put up with inferior domestic models. Even after the Korean government made it a national goal to build a better rice cooker, manufacturers failed to deliver one, perhaps because none of the engineers involved knew how to cook rice. It’s a good reminder that the history of technology is not always the story of innovation and progress.
Eventually, the Asian diaspora brought the rice cooker to all parts of the globe, including South Carolina, where I now live and which coincidentally has a long history of rice cultivation. I bought my first rice cooker on a whim, but not for its rice-cooking ability. I was intrigued by the yogurt-making function. Similar to rice, yogurt requires a constant temperature over a specific length of time. Although successful, my yogurt experiment was fleeting—store-bought was just too convenient. But the rice cooking blew my mind. Perfect rice. Every. Single. Time. I am never going back to overflowing pots of starchy water.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the November 2024 print issue as “The Automatic Rice Cooker’s Unlikely Inventor.”
Yoshiko Nakano’s book Where There Are Asians, There Are Rice Cookers (Hong Kong University Press, 2009) takes the story much further with her focus on the National (Panasonic) rice cooker and its adaptation and adoption around the world.
The Toshiba Science Museum, in Kawasaki, Japan, where we sourced our main image of the original ER-4, closed to the public in June. I do not know what the future holds for its collections, but luckily some of its Web pages have been archived to continue to help researchers like me.
In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.
Overview
This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.
What Attendees will Learn
How to design interleaved multiband antenna systems using the latest capabilities in HFSS
How to extract Network Key Performance Indicators
How to run and extract RF channels for dynamic environments
Who Should Attend
This webinar is valuable to anyone involved in antenna design, R&D, product design, and wireless networks.