The Unlikely Inventor of the Automatic Rice Cooker



“Cover, bring to a boil, then reduce heat. Simmer for 20 minutes.” These directions seem simple enough, and yet I have messed up many, many pots of rice over the years. My sympathies to anyone who’s ever had to boil rice on a stovetop, cook it in a clay pot over a kerosene or charcoal burner, or prepare it in a cast-iron cauldron. All hail the 1955 invention of the automatic rice cooker!

How the automatic rice cooker was invented

It isn’t often that housewives get credit in the annals of invention, but in the story of the automatic rice cooker, a woman takes center stage. That happened only after the first attempts at electrifying rice cooking, starting in the 1920s, turned out to be utter failures. Matsushita, Mitsubishi, and Sony all experimented with variations of placing electric heating coils inside wooden tubs or aluminum pots, but none of these cookers automatically switched off when the rice was done. The human cook—almost always a wife or daughter—still had to pay attention to avoid burning the rice. These electric rice cookers didn’t save any real time or effort, and they sold poorly.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

But Shogo Yamada, the energetic development manager of the electric appliance division for Toshiba, became convinced that his company could do better. In post–World War II Japan, he was demonstrating and selling electric washing machines all over the country. When he took a break from his sales pitch and actually talked to women about their daily household labors, he discovered that cooking rice—not laundry—was their most challenging chore. Rice was a mainstay of the Japanese diet, and women had to prepare it up to three times a day. It took hours of work, starting with getting up by 5:00 am to fan the flames of a kamado, a traditional earthenware stove fueled by charcoal or wood on which the rice pot was heated. The inability to properly mind the flame could earn a woman the label of “failed housewife.”

In 1951, Yamada became the cheerleader of the rice cooker within Toshiba, which was understandably skittish given the past failures of other companies. To develop the product, he turned to Yoshitada Minami, the manager of a small family factory that produced electric water heaters for Toshiba. The water-heater business wasn’t great, and the factory was on the brink of bankruptcy.

How Sources Influence the Telling of History


As someone who does a lot of research online, I often come across websites that tell very interesting histories, but without any citations. It takes only a little bit of digging before I find entire passages copied and pasted from one site to another, and so I spend a tremendous amount of time trying to track down the original source. Accounts of popular consumer products, such as the rice cooker, are particularly prone to this problem. That’s not to say that popular accounts are necessarily wrong, and they are often much more engaging than dry academic pieces. This is just me offering a note of caution: every story offers a different perspective depending on its sources.

For example, many popular blogs sing the praises of Fumiko Minami and her tireless contributions to the development of the rice maker. But in my research, I found no mention of Minami before Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” which itself was based on episode 42 of the Project X: Challengers documentary series that was produced by NHK and aired in 2002.

If instead I had relied solely on the description of the rice cooker’s early development provided by the Toshiba Science Museum (here’s an archived page from 2007), this month’s column would have offered a detailed technical description of how uncooked rice has a crystalline structure, but as it cooks, it becomes a gelatinized starch. The museum’s website notes that few engineers had ever considered the nature of cooking rice before the rice-cooker project, and it refers simply to the “project team” that discovered the process. There’s no mention of Fumiko.

Both stories are factually correct, but they emphasize different details. Sometimes it’s worth asking who is part of the “project team” because the answer might surprise you. —A.M.


Although Minami understood the basic technical principles for an electric rice cooker, he didn’t know or appreciate the finer details of preparing perfect rice. And so Minami turned to his wife, Fumiko.

Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

But how would an automatic rice cooker know when the 20 minutes was up? A suggestion came from Toshiba engineers. A working model based on a double boiler (a pot within a pot for indirect heating) used evaporation to mark time. While the rice cooked in the inset pot, a bimetallic switch measured the temperature in the external pot. Boiling water would hold at a constant 100 °C, but once it had evaporated, the temperature would soar. When the temperature in the external pot climbed past 100 °C, the switch would bend and cut the circuit. One cup of boiling water in the external pot took 20 minutes to evaporate. The same basic principle is still used in modern cookers.
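To make the cutoff mechanism easier to picture, here is a minimal sketch of that control logic in Python. It is illustrative only: the temperature curve, the 105 °C trip point, and the one-minute time step are assumptions made for this example, not Toshiba’s actual figures.

```python
# A minimal, illustrative sketch of the double-boiler cutoff logic described
# above (not Toshiba's actual design). The temperature model, trip point, and
# one-minute time step are assumptions made up for this example.

def outer_pot_temperature(minutes_elapsed, water_minutes=20):
    """Toy model: the outer pot holds near 100 deg C while water remains,
    then overheats quickly once the water has boiled away."""
    if minutes_elapsed < water_minutes:
        return 100.0
    return 100.0 + 15.0 * (minutes_elapsed - water_minutes)

def switch_is_open(temp_c, trip_point_c=105.0):
    """The bimetallic strip bends and breaks the circuit past the trip point."""
    return temp_c > trip_point_c

def run_cooker():
    minute = 0
    while not switch_is_open(outer_pot_temperature(minute)):
        minute += 1  # heater stays on while the switch is closed
    print(f"Heater cut off after about {minute} minutes")

if __name__ == "__main__":
    run_cooker()  # prints: Heater cut off after about 21 minutes
```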


[Photo: the rice cooker’s outer container, inner metal pot, and lid.]


Yamada wanted to ensure that the rice cooker worked in all climates, so Fumiko tested various prototypes in extreme conditions: on her rooftop in cold winters and scorching summers and near steamy bathrooms to mimic high humidity. When Fumiko became ill from testing outside, her children pitched in to help. None of the aluminum and glass prototypes, it turned out, could maintain their internal temperature in cold weather. The final design drew inspiration from the Hokkaidō region, Japan’s northernmost prefecture. Yamada had seen insulated cooking pots there, so the Minami family tried covering the rice cooker with a triple-layered iron exterior. It worked.

How Toshiba sold its automatic rice cooker

Toshiba’s automatic rice cooker went on sale on 10 December 1955, but initially, sales were slow. It didn’t help that the rice cooker was priced at 3,200 yen, about a third of the average Japanese monthly salary. It took some salesmanship to convince women they needed the new appliance. This was Yamada’s time to shine. He demonstrated using the rice cooker to prepare takikomi gohan, a rice dish seasoned with dashi, soy sauce, and a selection of meats and vegetables. When the dish was cooked in a traditional kamado, the soy sauce often burned, making the rather simple dish difficult to master. Women who saw Yamada’s demo were impressed with the ease offered by the rice cooker.

Another clever sales technique was to get electricity companies to serve as Toshiba distributors. At the time, Japan was facing a national power surplus stemming from the widespread replacement of carbon-filament lightbulbs with more efficient tungsten ones. The energy savings were so remarkable that operations at half of the country’s power plants had to be curtailed. But with utilities distributing Toshiba rice cookers, increased demand for electricity was baked in.

Within a year, Toshiba was selling more than 200,000 rice cookers a month. Many of them came from the Minamis’ factory, which was rescued from near-bankruptcy in the process.

How the automatic rice cooker conquered the world

From there, the story becomes an international one with complex localization issues. Japanese sushi rice is not the same as Thai sticky rice, which is not the same as Persian tahdig, Indian basmati, Italian risotto, or Spanish paella. You see where I’m going with this. Every culture that has a unique rice dish almost always uses its own regional rice with its own preparation preferences. And so countries wanted their own type of automatic electric rice cooker (although some rejected automation in favor of traditional cooking methods).

Yoshiko Nakano, a professor at the University of Hong Kong, wrote a book in 2009 about the localized/globalized nature of rice cookers. Where There Are Asians, There Are Rice Cookers traces the popularization of the rice cooker from Japan to China and then the world by way of Hong Kong. One of the key differences between the Japanese and Chinese rice cooker is that the latter has a glass lid, which Chinese cooks demanded so they could see when to add sausage. More innovation and diversification followed. Modern rice cookers have a setting to give Iranians crispy rice at the bottom of the pot, another to let Thai customers cook noodles, one for perfect rice porridge, and one for steel-cut oats.


[Photo: a customer examines several shelves of round white rice cookers.]


My friend Hyungsub Choi, in his 2022 article “Before Localization: The Story of the Electric Rice Cooker in South Korea,” pushes back a bit on Nakano’s argument that countries were insistent on tailoring cookers to their tastes. From 1965, when the first domestic rice cooker appeared in South Korea, to the early 1990s, Korean manufacturers engaged in “conscious copying,” Choi argues. That is, they didn’t bother with either innovation or adaptation. As a result, most Koreans had to put up with inferior domestic models. Even after the Korean government made it a national goal to build a better rice cooker, manufacturers failed to deliver one, perhaps because none of the engineers involved knew how to cook rice. It’s a good reminder that the history of technology is not always the story of innovation and progress.

Eventually, the Asian diaspora brought the rice cooker to all parts of the globe, including South Carolina, where I now live and which coincidentally has a long history of rice cultivation. I bought my first rice cooker on a whim, but not for its rice-cooking ability. I was intrigued by the yogurt-making function. Similar to rice, yogurt requires a constant temperature over a specific length of time. Although successful, my yogurt experiment was fleeting—store-bought was just too convenient. But the rice cooking blew my mind. Perfect rice. Every. Single. Time. I am never going back to overflowing pots of starchy water.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the November 2024 print issue as “The Automatic Rice Cooker’s Unlikely Inventor.”

References


Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” was a great resource in understanding the development of the Toshiba ER-4. The chapter appeared in The Historical Consumer: Consumption and Everyday Life in Japan, 1850-2000, edited by Penelope Francks and Janet Hunter (Palgrave Macmillan).

Yoshiko Nakano’s book Where There are Asians, There are Rice Cookers (Hong Kong University Press, 2009) takes the story much further with her focus on the National (Panasonic) rice cooker and its adaptation and adoption around the world.

The Toshiba Science Museum, in Kawasaki, Japan, where we sourced our main image of the original ER-4, closed to the public in June. I do not know what the future holds for its collections, but luckily some of its Web pages have been archived to continue to help researchers like me.

A Patent Engineer’s Advice for First-time Inventors



Lesley-Ann Knee credits her father for introducing her to the world of patents. He’s an engineer who specializes in application-specific integrated circuits (ASICs) and holds several patents on technologies he developed while working for Hewlett-Packard and Microsoft.

“I would hear stories of his experiences through the patent prosecution processes,” Knee says, which taught her about different kinds of patents, the importance of documentation, and using detailed language. She remembers one litigation battle over a patent that went on for years, which her father’s company lost because someone forgot to delete information in a patent claim.

Lesley-Ann Knee


Employer: Husch Blackwell

Occupation: Patent engineer

Education: Bachelor’s degree in electrical engineering, Colorado State University, in Fort Collins

Knee, an electrical engineer, now works as a patent engineer in the patent prosecution department at the law office of Husch Blackwell, headquartered in Chicago. Under the supervision of patent attorneys, Knee helps with writing, filing, and managing patent applications with the U.S. Patent and Trademark Office (USPTO).

She is currently studying for the patent bar exam, which would qualify her to be a licensed patent agent, registered with the USPTO to help prepare and prosecute patent applications. Assuming she passes, she then intends to go to law school to become a patent attorney.

How to Become a Patent Engineer

Knee initially didn’t know what she wanted to study in college. Eventually she decided that an engineering degree offered diverse career opportunities, so she enrolled at Colorado State University, in her hometown of Fort Collins. She followed in her father’s footsteps, specializing in ASIC design, but also studied power systems and semiconductor physics and minored in mathematics. In 2022 she worked as an intern in the engine research division of Honda Research and Development, in Raymond, Ohio, where she developed a data analysis tool to help with testing heat distribution in vehicles.

After graduating from Colorado State in 2022, she decided to get a job related to patents. From January to May, she worked part-time as a patent technical intern at the law firm of Dorsey & Whitney, in Denver. “After learning about patents from the other side, I fell in love with the industry,” she says. Knee joined Husch Blackwell in June 2023.

She found that patent law has its quirky sides. One day her supervisor walked into one of the partner’s offices and saw the attorney “ripping apart a stuffed animal, guts everywhere,” she says. “[My] boss asked if the partner was okay. She explained that she had been pulled into a litigation case that depended entirely on the type of stitching used inside the stuffed animals.”

What Can Be Patented?

Here is Knee’s primer on U.S. patents and her advice for first-time inventors filing patents with the USPTO. This information isn’t intended to provide legal advice, she notes, and every country has its own patent system, with different rules and regulations. For specifics or guidance about legal matters, she recommends contacting a patent practitioner.

Knee’s first piece of advice? Don’t be afraid of filing a patent application. Two out of three patent applications get approved by the USPTO, she says.

To receive a patent, an invention must have utility—that is, it has to be useful for some purpose—and novelty, meaning that it’s not an obvious variation of what already exists, she says. It could be a machine, a manufacturing process, or a composition of matter (that is, a novel combination of natural elements that are mixed mechanically or chemically).

Some things that can’t be patented, she says, are atomic weapons, devices for illegal pursuits, methods of administering business, mathematical discoveries, and scientific principles—with the exception of devices and methods that make use of those principles.

The USPTO has recognized a growing interest in artificial intelligence over the past few years, and in 2024 it released examples to give inventors guidance on the patentability of AI. “From my understanding, AI itself is not patentable,” Knee says. But using AI to invent something doesn’t necessarily make the invention unpatentable, she says.

An Overview of the Patent Process

The USPTO uses the “first to file” system for patent applications. “Whoever files an application first will have the best chance to patent an invention. Otherwise, you’re out of luck,” she says.

The patent filing process can vary widely in terms of cost and complexity, she says. Costs include filing fees and attorney fees. Smaller companies and individual inventors may qualify for discounts on USPTO fees. Costs may be higher for patent filings that require extensive modifications and lengthy communication with the patent office. Complexity depends on how much research USPTO examiners must do to determine the difference between existing inventions and the one in the filing.

For inventors interested in pursuing a patent for the first time, “I would highly recommend seeking out a patent practitioner—a patent attorney or patent agent—who offers free consultations to determine patentability, a possible action plan, cost, and a timeline for filing,” Knee says. Also, some universities have intellectual property legal offices that can advise professors and students on the patent process.

For someone who wants to file a patent themselves, here are some general steps:

  1. File a provisional application when you have a proof of concept or prototype. This type of application isn’t examined by the USPTO but instead holds your place in line for a patent. Provisional applications expire after one year.
  2. To follow up, file a nonprovisional application within one year of the first filing. This application is examined by the patent office and receives the filing date of the provisional application.
  3. Promptly answer and respond to any USPTO rejections (called office actions), which explain the reasons your invention can’t be patented. Knee says it’s quite common to get a rejection. You can typically respond within three months at no cost or pay a fee for an extension of up to six months. If you don’t respond, the application will be considered abandoned.
  4. If you receive a notice of allowance (NOA), celebrate! Your application is eligible to become a patent. Upon payment of some fees, you’ll receive an issue notification document showing the date when the patent will be officially granted, giving you the right to exclude others from using or selling your invention in the United States.
  5. If you receive a notice called a final office action, you have two options. You can abandon the application, or you can file a request for continued examination, which requires you to pay for another round of prosecution and explain further why your invention deserves a patent.

The Value of Intellectual Property

Be careful disclosing information about your invention or selling it before filing a patent application, Knee says. “If you disclose your invention publicly and do not file an application within one year, you could be barred from receiving a patent on that exact invention,” she says. “Because of the ‘first to file’ system, if someone steals your idea by filing first, this can be hard and very expensive to reverse.” She also advises people to be careful about disclosing their inventions through social-media platforms or other communication methods.

In today’s intellectual property market, patents are currency. Knee has seen companies use patents as collateral for a loan, even when the patent application hasn’t been approved yet.

And other inventors use patents to launch their dream startup. “I have seen people use patents for help securing investors,” Knee says. But it’s not a one-and-done situation, she says. “The key is having one patent and filing additional applications that piggyback off of it. This process can be pricey but has a huge impact on stopping competitors from manufacturing similar products in a new field and protecting inventors in litigation battles.”

The Incredible Story Behind the First Transistor Radio



Imagine if your boss called a meeting in May to announce that he’s committing 10 percent of the company’s revenue to the development of a brand-new mass-market consumer product, made with a not-yet-ready-for-mass-production component. Oh, and he wants it on store shelves in less than six months, in time for the holiday shopping season. Ambitious, yes. Kind of nuts, also yes.

But that’s pretty much what Pat Haggerty, vice president of Texas Instruments, did in 1954. The result was the Regency TR-1, the world’s first commercial transistor radio, which debuted 70 years ago this month. The engineers delivered on Haggerty’s audacious goal, and I certainly hope they received a substantial year-end bonus.

Why did Texas Instruments make the Regency TR-1 transistor radio?

But how did Texas Instruments come to make a transistor radio in the first place? TI traces its roots to a company called Geophysical Service Inc. (GSI), which made seismic instrumentation for the oil industry as well as electronics for the military. In 1945, GSI hired Patrick E. Haggerty as the general manager of its laboratory and manufacturing division and its electronics work. By 1951, Haggerty’s division was significantly outpacing GSI’s geophysical division, and so the Dallas-based company reorganized as Texas Instruments to focus on electronics.

Meanwhile, on 30 June 1948, Bell Labs announced John Bardeen and Walter Brattain’s game-changing invention of the transistor. No longer would electronics be dependent on large, hot vacuum tubes. The U.S. government chose not to classify the technology because of its potentially broad applications. In 1951, Bell Labs began licensing the transistor for US $25,000 through the Western Electric Co.; Haggerty bought a license for TI the following year.

TI was still a small company, with not much in the way of R&D capacity. But Haggerty and the other founders wanted it to become a big and profitable company. And so they established research labs to focus on semiconductor materials and a project-engineering group to develop marketable products.

[Photo: The TR-1 was the first transistor radio, and it ignited a desire for portable gadgets that continues to this day. Bettmann/Getty Images]

Haggerty made a good investment when he hired Gordon Teal, a 22-year veteran of Bell Labs. Although Teal wasn’t part of the team that invented the germanium transistor, he recognized that it could be improved by using a single grown crystal, such as silicon. Haggerty was familiar with Teal’s work from a 1951 Bell Labs symposium on transistor technology. Teal happened to be homesick for his native Texas, so when TI advertised for a research director in the New York Times, he applied, and Haggerty offered him the job of assistant vice president instead. Teal started at TI on 1 January 1953.

Fifteen months later, Teal gave Haggerty a demonstration of the first silicon transistor, and he presented his findings three and a half weeks later at the Institute of Radio Engineers’ National Conference on Airborne Electronics, in Dayton, Ohio. His innocuously titled paper, “Some Recent Developments in Silicon and Germanium Materials and Devices,” completely understated the magnitude of the announcement. The audience was astounded to hear that TI had not just one but three types of silicon transistors already in production, as Michael Riordan recounts in his excellent article “The Lost History of the Transistor” (IEEE Spectrum, October 2004).

And fun fact: The TR-1 shown at top once belonged to Willis Adcock, a physical chemist hired by Teal to perfect TI’s silicon transistors as well as transistors for the TR-1. (The radio is now in the collections of the Smithsonian’s National Museum of American History.)

The TR-1 became a product in less than six months

This advancement in silicon put TI on the map as a major player in the transistor industry, but Haggerty was impatient. He wanted a transistorized commercial product now, even if that meant using germanium transistors. On 21 May 1954, Haggerty challenged a research group at TI to have a working prototype of a transistor radio by the following week; four days later, the team came through, with a breadboard containing eight transistors. Haggerty decided that was good enough to commit $2 million—just under 10 percent of TI’s revenue—to commercializing the radio.

Of course, a working prototype is not the same as a mass-production product, and Haggerty knew TI needed a partner to help manufacture the radio. That partner turned out to be Industrial Development Engineering Associates (IDEA), a small company out of Indianapolis that specialized in antenna boosters and other electronic goods. They signed an agreement in June 1954 with the goal of announcing the new radio in October. TI would provide the components, and IDEA would manufacture the radio under its Regency brand.

Germanium transistors at the time cost $10 to $15 apiece. With eight transistors, the radio was too expensive to be marketed at the desired price point of $50 (more than $580 today, which is coincidentally about what it’ll cost you to buy one in good condition on eBay). Vacuum-tube radios were selling for less, but TI and IDEA figured early adopters would pay that much to try out a new technology. Part of Haggerty’s strategy was to increase the volume of transistor production to eventually lower the per-transistor cost, which he managed to push down to about $2.50.

By the time TI met with IDEA, the breadboard was down to six transistors. It was IDEA’s challenge to figure out how to make the transistorized radio at a profit. According to an oral history with Richard Koch, IDEA’s chief engineer on the project, TI’s real goal was to make transistors, and the radio was simply the gimmick to get there. In fact, part of the TI–IDEA agreement was that any patents that came out of the project would be in the public domain so that TI was free to sell more transistors to other buyers.

At the initial meeting, Koch, who had never seen a transistor before in real life, suggested substituting a germanium diode for the detector (which extracted the audio signal from the desired radio frequency), bringing the transistor count down to five. After thinking about the configuration a bit more, Koch eliminated another transistor by using a single transistor for the oscillator/mixer circuit.

[Photo: TI’s original prototype used eight germanium transistors, which engineers reduced to six and, ultimately, four for the production model. Division of Work and Industry/National Museum of American History/Smithsonian Institution]

The final design was four transistors set in a superheterodyne design, a type of receiver that combines two frequencies to produce an intermediate frequency that can be easily amplified, thereby boosting a weak signal and decreasing the required antenna size. The TR-1 had two transistors as intermediate-frequency amplifiers and one as an audio amplifier, plus the oscillator/mixer. Koch applied for a patent for the circuitry the following year.
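To see the mixing principle at work, here is a short NumPy sketch. The station frequency and the 455 kHz intermediate frequency are generic AM-broadcast values chosen for illustration; they are not the TR-1’s actual design figures.

```python
# An illustrative sketch of superheterodyne mixing: multiplying the incoming
# RF signal by a local-oscillator tone yields sum and difference frequencies,
# and the fixed difference (the intermediate frequency) is what gets amplified.
import numpy as np

fs = 10_000_000                   # sample rate, Hz
t = np.arange(0, 0.001, 1 / fs)   # 1 millisecond of signal

f_rf = 1_000_000                  # an AM station at 1,000 kHz (assumed)
f_lo = 1_455_000                  # local oscillator tuned 455 kHz above it
mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

below_band = freqs < 1_000_000    # look below the broadcast band
if_freq = freqs[below_band][np.argmax(spectrum[below_band])]
print(f"Strongest low-frequency component: {if_freq / 1000:.0f} kHz")  # ~455
```

The mixer output also contains the sum product (2,455 kHz in this sketch); the intermediate-frequency stages that follow amplify only the difference.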

The radio ran on a 22.5-volt battery, which offered a playing life of 20 to 30 hours and cost about $1.25. (Such batteries were also used in the external power and electronics pack for hearing aids, the only other consumer product to use transistors up until this point.)

While IDEA’s team was working on the circuitry, they outsourced the design of the TR-1’s packaging to the Chicago firm of Painter, Teague, and Petertil. Their first design didn’t work because the components didn’t fit. Would their second design be better? As Koch later recalled, IDEA’s purchasing agent, Floyd Hayhurst, picked up the molding dies for the radio cases in Chicago and rushed them back to Indianapolis. He arrived at 2:00 in the morning, and the team got to work. Fortunately, everything fit this time. The plastic case was a little warped, but that was simple to fix: They slapped a wooden piece on each case as it came off the line so it wouldn’t twist as it cooled.

[Video: how each radio was assembled by hand.]

On 18 October 1954, Texas Instruments announced the first commercial transistorized radio. It would be available in select outlets in New York and Los Angeles beginning 1 November, with wider distribution once production ramped up. The Regency TR-1 Transistor Pocket Radio initially came in black, gray, red, and ivory. They later added green and mahogany, as well as a run of pearlescents and translucents: lavender, pearl white, meridian blue, powder pink, and lime.

The TR-1 got so-so reviews, faced competition

Consumer Reports was not enthusiastic about the Regency TR-1. In its April 1955 review, it found that transmission of speech was “adequate” under good conditions, but music transmission was unsatisfactory under any conditions, especially on a noisy street or crowded beach. The magazine used words such as whistle, squeal, thin, tinny, and high-pitched to describe various sounds—not exactly high praise for a radio. It also found fault with the on/off switch. Its recommendation: Wait for further refinement before buying one.

[Photo: Newspaper ad for a $49.95 radio touted as “the first transistor radio ever built!” More than 100,000 TR-1s were sold in its first year, but the radio was never very profitable. Archive PL/Alamy]

The engineers at TI and IDEA didn’t necessarily disagree. They knew they were making a sound-quality trade-off by going with just four transistors. They also had quality-control problems with the transistors and other components, with initial failure rates up to 50 percent. Eventually, IDEA got the failure rate down to 12 to 15 percent.

Unbeknownst to TI or IDEA, Raytheon was also working on a transistorized radio—a tabletop model rather than a pocket-size one. That gave them the space to use six transistors, which significantly upped the sound quality. Raytheon’s radio came out in February 1955. Priced at $79.95, it weighed 2 kilograms and ran on four D-cell batteries. That August, a small Japanese company called Tokyo Telecommunications Engineering Corp. released its first transistor radio, the TR-55. A few years later, the company changed its name to Sony and went on to dominate the world’s consumer radio market.

The legacy of the Regency TR-1

The Regency TR-1 was a success by many measures: It sold 100,000 in its first year, and it helped jump-start the transistor market. But the radio was never very profitable. Within a few years, both Texas Instruments and IDEA left the commercial AM radio business, TI to focus on semiconductors, and IDEA to concentrate on citizens band radios. Yet Pat Haggerty estimated that this little pocket radio pushed the market in transistorized consumer goods ahead by two years. It was a leap of faith that worked out, thanks to some hardworking engineers with a vision.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the October 2024 print issue as “The First Transistor Radio.”

References


In 1984, Michael Wolff conducted oral histories with IDEA’s lead engineer Richard Koch and purchasing agent Floyd Hayhurst. Wolff drew on them the following year in his IEEE Spectrum article “The Secret Six-Month Project,” which includes some great references at the end.

Robert J. Simcoe wrote “The Revolution in Your Pocket” for the fall 2004 issue of Invention and Technology to commemorate the 50th anniversary of the Regency TR-1.

As with many collectibles, the Regency TR-1 has its champions who have gathered together many primary sources. For example, Steve Reyer, a professor of electrical engineering at the Milwaukee School of Engineering before he passed away in 2018, organized his efforts in a webpage that’s now hosted by https://www.collectornet.net.

What It Takes To Let People Play With the Past



The Media Archaeology Lab is one of the largest public collections in the world of obsolete, yet functional, technology. Located on the University of Colorado Boulder campus, the MAL is where you can watch a magic lantern show, play Star Castle on a Vectrex games console, or check out the weather on an Atari 800 via Fujinet. IEEE Spectrum spoke to managing director Libi Rose Striegl about the MAL’s mission and her role in keeping all that obsolete tech functional, so that people of today can experience the media of the past.

Libi Rose


Libi Rose Striegl is the managing director for the Media Archaeology Lab at the University of Colorado Boulder.

How is the MAL different from other collections of historical and vintage technology?

Libi Rose: Our major difference is that we treat ourselves as a lab and an experimental space for hands-on use, as opposed to a museum-type collection. We’re very much focused on the humanistic side of computer use. We’re interested in unexpected juxtapositions of technologies and ways that we can get people of all ages and all backgrounds to use these things, in either the expected ways or in unexpected ways.

What’s your role at the lab?

Rose: I do all the day-to-day admin work, managing our volunteer group, working with professors on campus to do course integration. Doing off-site events, doing repair work myself or coordinating it. [Recording a new addition] myself or coordinating it. Coordinating donations. Social-media accounts. Kind of a whole crew of people’s worth of work in one job! My office is also the repair space.

What’s the hardest part about keeping old systems running?

Rose: We don’t have a huge amount of trouble with old computer systems other than not having time. It’s other things that are hard to keep running. Our older things, our mechanical things, the information is gone. The people who did that work in the past have passed away. And so we’re kind of re-creating the wheel when we want to do something like repair a mechanical calculator, or figure out how to make a phonograph that stopped working start working again. For newer stuff, the hardest part of a lot of it is that the hardware itself exists, but maybe server-side infrastructure is [gone]. So older cellphones are very hard to work with, because while we can turn them on, we can’t do much else with them unless you start getting into building your own analog cell network, which we’ve talked about. Missing infrastructure is why we end up doing a lot of things. We run our little analog TV station in-house.

An analog TV station?

Rose: Yes, otherwise you can’t really see what broadcast TV would have looked like on those old analog televisions!

How do visitors respond?

Rose: It sort of depends on age and familiarity with things. Young kids are often brought in by their parents to be introduced to stuff. And my favorite reactions are from 7- and 8-year-olds who are like, “Oh, my God. I’m so sorry for you old people who had to do this.” College-age students have either their own nostalgia or sort of residual nostalgia from their parents or grandparents. They’re really interested in interacting with something that they saw on television or that their parents told them about. Older folks tend to jump right onto the nostalgia train. We get a lot of good conversation around that and where technology goes when it dies, what that all means.

This article appears in the October 2024 print issue as “5 Questions for Libi Rose.”

In 1926, TV Was Mechanical



Scottish inventor John Logie Baird had a lot of ingenious ideas, not all of which caught on. His phonovision was an early attempt at video recording, with the signals preserved on phonograph records. His noctovision used infrared light to see objects in the dark, which some experts claim was a precursor to radar.

But Baird earned his spot in history with the televisor. On 26 January 1926, select members of the Royal Institution gathered at Baird’s lab in London’s Soho neighborhood to witness the broadcast of a small but clearly defined image of a ventriloquist dummy’s face, sent from the televisor’s electromechanical transmitter to its receiver. He also demonstrated the televisor with a human subject, whom observers could see speaking and moving on the screen. For this, Baird is often credited with the first public demonstration of television.

[Photo: John Logie Baird used the heads of ventriloquist dummies in early experiments because they didn’t mind the heat and bright lights of his televisor. Science History Images/Alamy]

How the Nipkow Disk Led to Baird’s Televisor

To be clear, Baird didn’t invent television. Television is one of those inventions that benefited from many contributors, collaborators, and competitors. Baird’s starting point was an idea for an “electric telescope,” patented in 1885 by German engineer Paul Nipkow.

Nipkow’s apparatus captured a picture by dividing it into a vertical sequence of lines, using a spinning disk perforated with holes around its edge. The holes were offset in a spiral so that each one captured one slice of the image in turn—known today as scan lines. Each line would be encoded as an electrical signal. A receiving apparatus converted the signals into light, to reconstruct the image. Nipkow never commercialized his electric telescope, though, and after 15 years the patent expired.
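To illustrate the scanning geometry Nipkow described, here is a toy Python sketch. The hole count, radii, and sample counts are arbitrary assumptions, and it models only the geometry of the spiral, not the photoelectric or signal side.

```python
# A toy sketch of Nipkow-disk scanning: each of the N holes is equally spaced
# in angle but sits at a slightly smaller radius than the last, so one disk
# revolution sweeps N adjacent slices of the image past the aperture.
# All numbers here are illustrative assumptions.
import math

def hole_positions(n_holes=30, outer_radius=100.0, line_spacing=1.0):
    """(angle, radius) of each hole; the radius shrinks hole by hole (a spiral)."""
    return [(2 * math.pi * i / n_holes, outer_radius - i * line_spacing)
            for i in range(n_holes)]

def scan(image_sampler, n_holes=30, outer_radius=100.0, line_spacing=1.0,
         samples_per_line=40):
    """Simulate one revolution: each hole yields one scan line of brightness."""
    frame = []
    for _angle, radius in hole_positions(n_holes, outer_radius, line_spacing):
        line_pos = (outer_radius - radius) / (n_holes * line_spacing)  # 0.0 to ~1.0
        row = [image_sampler(s / samples_per_line, line_pos)
               for s in range(samples_per_line)]  # brightness along the sweep
        frame.append(row)  # each row would become an electrical signal
    return frame

# Example "image": bright on one side, fading to dark on the other.
frame = scan(lambda sweep, line: 1.0 - sweep)
print(len(frame), "scan lines of", len(frame[0]), "samples each")  # 30 lines of 40
```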

[Photo: The inset on the left shows how the televisor split an image (in this case, a person’s face) into vertical lines. Bettmann/Getty Images]

The system that Baird demonstrated in 1926 used two Nipkow disks, one in the transmitting apparatus and the other in the receiving apparatus. Each disk had 30 holes. He fitted the disk with glass lenses that focused the reflected light onto a photoelectric cell. As the transmitting disk rotated, the photoelectric cell detected the change in brightness coming through the individual lenses and converted the light into an electrical signal.

This signal was then sent to the receiving system. (Part of the receiving apparatus, housed at the Science Museum in London, is shown at top.) There the process was reversed, with the electrical signal first being amplified and then modulating a neon gas–discharge lamp. The light passed through a rectangular slot to focus it onto the receiving Nipkow disk, which was turning at the same speed as the transmitter. The image could be seen on a ground glass plate.

Early experiments used a dummy because the many incandescent lights needed to provide sufficient illumination made it too hot and bright for a person. Each hole in the disk captured only a small bit of the overall image, but as long as the disk spun fast enough, the brain could piece together the complete image, a phenomenon known as persistence of vision. (In a 2022 Hands On column, Markus Mierse explains how to build a modern Nipkow-disk electromechanical TV using a 3D printer, an LED module, and an Arduino Mega microcontroller.)

John Logie Baird and “True Television”

Regular readers of this column know the challenge of documenting historical “firsts”—the first radio, the first telegraph, the first high-tech prosthetic arm. Baird’s claim to the first public broadcast of television is no different. To complicate matters, the actual first demonstration of his televisor wasn’t on 26 January 1926 in front of those esteemed members of the Royal Institution; rather, it occurred in March 1925 in front of curious shoppers at a Selfridges department store.

As Donald F. McLean recounts in his excellent June 2022 article “Before ‘True Television’: Investigating John Logie Baird’s 1925 Original Television Apparatus,” Baird used a similar device for the Selfridges demo, but it had only 16 holes, organized as two groups of eight, hence its nickname the Double-8. The resolution was about as far from high definition as you could get, showing shadowy silhouettes in motion. Baird didn’t consider this “true television,” as McLean notes in his Proceedings of the IEEE piece.

[Photo: In 1926, Baird loaned part of the televisor he used in his Selfridges demo to the Science Museum in London. PA Images/Getty Images]

Writing in December 1926 in Experimental Wireless & The Wireless Engineer, Baird defined true television as “the transmission of the image of an object with all gradations of light, shade, and detail, so that it is seen on the receiving screen as it appears to the eye of an actual observer.” Consider the Selfridges demo a beta test and the one for the Royal Institution the official unveiling. (In 2017, the IEEE chose to mark the latter and not the former with a Milestone.)

The 1926 demonstration was a turning point in Baird’s career. In 1927 he established the Baird Television Development Co., and a year later he made the first transatlantic television transmission, from London to Hartsdale, N.Y. In 1929, the BBC decided to give Baird’s system a try, performing some experimental broadcasts outside of normal hours. After that, mechanical television took off in Great Britain and a few other European countries.

But Wait There’s More!

If you enjoyed this dip into the history of television, check out Spectrum’s new video collaboration with the YouTube channel Asianometry, which will offer a variety of perspectives on fascinating chapters in the history of technology. The first set of videos looks at the commercialization of color television.

Head over to Asianometry to see how Sony finally conquered the challenges of mass production of color TV sets with its Trinitron line. On Spectrum’s YouTube channel, you’ll find a video—written and narrated by yours truly—on how the eminent physicist Ernest O. Lawrence dabbled for a time in commercial TVs. Spoiler alert: Lawrence had much greater success with the cyclotron and government contracts than he ever did commercializing his Chromatron TV. Spectrum also has a video on the yearslong fight between CBS and RCA over the U.S. standard for color TV broadcasting. —A.M.

The BBC used various versions of Baird’s mechanical system from 1929 to 1937, starting with the 30-line system and upgrading to a 240-line system. But eventually the BBC switched to the all-electronic system developed by Marconi-EMI. Baird then switched to working on one of the earliest electronic color television systems, called the Telechrome. (Baird had already demonstrated a successful mechanical color television system in 1928, but it never caught on.) Meanwhile, in the United States, Columbia Broadcasting System (CBS) attempted to develop a mechanical color television system based on Baird’s original idea of a color wheel but finally ceded to an electronic standard in 1953.

Baird also experimented with stereoscopic or three-dimensional television and a 1,000-line display, similar to today’s high-definition television. Unfortunately, he died in 1946 before he could persuade anyone to take up that technology.

In a 1969 interview in TV Times, John’s widow, Margaret Baird, reflected on some of the developments in television that would have made her husband happy. He would enjoy the massive amounts of sports coverage available, she said. (Baird had done the first live broadcast of the Epsom Derby in 1931.) He would be thrilled with current affairs programs. And, my personal favorite, she thought he would love the annual broadcasting of the Eurovision song contest.

Other TV Inventors: Philo Farnsworth, Vladimir Zworykin

But as I said, television is an invention that’s had many contributors. Across the Atlantic, Philo Farnsworth was experimenting with an all-electrical system that he had first envisioned as a high school student in 1922. By 1926, Farnsworth had secured enough financial backing to work full time on his idea.

One of his main inventions was the image dissector, also known as a dissector tube. This video camera tube creates a temporary electron image that can be converted into an electrical signal. On 7 September 1927, Farnsworth and his team successfully transmitted a single black line, followed by other images of simple shapes. But the system could only handle silhouettes, not three-dimensional objects.

Meanwhile, Vladimir Zworykin was also experimenting with electronic television. In 1923, he applied for a patent for a video tube called the iconoscope. But it wasn’t until 1931, after he joined RCA, that his team developed a working version, which suspiciously came after Zworykin visited Farnsworth’s lab in California. The iconoscope overcame some of the dissector tube’s deficiencies, especially its lack of storage capacity. It was also more sensitive and easier to manufacture. But one major drawback of both the image dissector and the iconoscope was that, like Baird’s original televisor, they required very bright lights.

Everyone was working to develop a better tube, but Farnsworth claimed that he’d invented both the concept of an electronic image moving through a vacuum tube as well as the idea of a storage-type camera tube. The iconoscope and any future improvements all depended on these progenitor patents. RCA knew this and offered to buy Farnsworth’s patents, but Farnsworth refused to sell. A multiyear patent-interference case ensued, finally finding for Farnsworth in 1935.

While the case was being litigated, Farnsworth made the first public demonstration of an all-electric television system on 25 August 1934 at the Franklin Institute in Philadelphia. And in 1939, RCA finally agreed to pay royalties to Farnsworth to use his patented technologies. But Farnsworth was never able to compete commercially with RCA and its all-electric television system, which went on to dominate the U.S. television market.

Eventually, Harold Law, Paul Weimer, and Russell Law, working at RCA’s labs in Princeton, developed a better tube, the image orthicon. Designed for TV-guided missiles for the U.S. military, it was 100 to 1,000 times as sensitive as the iconoscope. After World War II, RCA quickly adopted the tube for its TV cameras. The image orthicon became the industry standard by 1947, remaining so until 1968 and the move to color TV.

The Path to Television Was Not Obvious

My Greek teacher hated the word “television.” He considered it an abomination that combined the Greek prefix tele (far off) with a Latin base, videre (to see). But early television was a bit of an abomination—no one really knew what it was going to be. As Chris Horrocks lays out in his delightfully titled book, The Joy of Sets (2017), television was developed in relation to the media that came before—telegraph, telephone, radio, and film.

Was television going to be like a telegraph, with communication between two points and an image slowly reassembled? Was it going to be like a telephone, with direct and immediate dialog between both ends? Was it going to be like film, with prerecorded images played back to a wide audience? Or would it be more like radio, which at the time was largely live broadcasts? At the beginning, people didn’t even know they wanted a television; manufacturers had to convince them.

And technically, there were many competing visions—Baird’s, Farnsworth’s, Zworykin’s, and others. It’s no wonder that television took many years, with lots of false starts and dead ends, before it finally took hold.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the September 2024 print issue as “The Mechanical TV.”

References

In 1936, a fire destroyed the Crystal Palace, where Baird had workshops, a television studio, and a tube manufacturing plant. With it went lab notebooks, correspondence, and original artifacts, making it more difficult to know the full history of Baird and his contributions to television.

Donald McLean’s “Before ‘True Television’: Investigating John Logie Baird’s 1925 Original Television Apparatus,” which appeared in Proceedings of the IEEE in June 2022, is an excellent investigation into the double-8 apparatus that Baird used in the 1925 Selfridges demonstration.

For a detailed description of the apparatus used in the 1926 demonstration at Baird’s lab, see “John Logie Baird and the Secret in the Box: The Undiscovered Story Behind the World’s First Public Demonstration of Television,” in Proceedings of the IEEE, August 2020, by Brandon Inglis and Gary Couples.

For an overview on the history of television, check out Chris Horrocks’s The Joy of Sets: A Short History of the Television (Reaktion Books, 2017). Chapter 2 focuses on Baird and other early inventors. And if you want to learn more about the battle between Farnsworth and RCA, see Evan Schwartz’s 2000 MIT Technology Review piece, “Who Really Invented Television?” (which doesn’t acknowledge Baird at all).

Erika Cruz Keeps Whirlpool’s Machines Spinning



Few devices are as crucial to people’s everyday lives as their household appliances. Electrical engineer Erika Cruz says it’s her mission to make sure they operate smoothly.

Cruz helps design washing machines and dryers for Whirlpool, the multinational appliance manufacturer.

Erika Cruz


Employer: Whirlpool

Occupation: Associate electrical engineer

Education: Bachelor’s degree in electronics engineering, Industrial University of Santander, in Bucaramanga, Colombia

As a member of the electromechanical components team at Whirlpool’s research and engineering center in Benton Harbor, Mich., she oversees the development of timers, lid locks, humidity sensors, and other components.

More engineering goes into the machines than is obvious. Because the appliances are sold around the world, she says, they must comply with different technical and safety standards and environmental conditions. And reliability is key.

“If the washer’s door lock gets stuck and your clothes are inside, your whole day is going to be a mess,” she says.

While appliances can be taken for granted, Cruz loves that her work contributes in its own small way to the quality of life of so many.

“I love knowing that every time I’m working on a new design, the lives of millions of people will be improved by using it,” she says.

From Industrial Design to Electrical Engineering

Cruz grew up in Bucaramanga, Colombia, where her father worked as an electrical engineer, designing control systems for poultry processing plants. Her childhood home was full of electronics, and Cruz says her father taught her about technology. He paid her to organize his resistors, for example, and asked her to create short videos for work presentations about items he was designing. He also took Cruz and her sister along with him to the processing plants.

“We would go and see how the big machines worked,” she says. “It was very impressive because of their complexity and impact. That’s how I got interested in technology.”

In 2010, Cruz enrolled in Colombia’s Industrial University of Santander, in Bucaramanga, to study industrial design. But she quickly became disenchanted with the course’s focus on designing objects like fancy tables and ergonomic chairs.

“I wanted to design huge machines like my father did,” she says.

A teacher suggested that she study mechanical engineering instead. But her father was concerned about discrimination she might face in that career.

“He told me it would be difficult to get a job in the industry because mechanical engineers work with heavy machinery, and they saw women as being fragile,” Cruz says.

Her father thought electrical engineers would be more receptive to women, so she switched fields.

“I am very glad I ended up studying electronics because you can apply it to so many different fields,” Cruz says. She received a bachelor’s degree in electronics engineering in 2019.

The Road to America

While at university, Cruz signed up for a program that allowed Colombian students to work summer jobs in the United States. She held a variety of summer positions in Galveston, Texas, from 2017 to 2019, including cashier, housekeeper, and hostess.

In 2018, she met her future husband, an American working at the same amusement park as she did. When she returned the following summer, they started dating, and that September they married. Since she had already received her degree, he was eager for her to move to the States permanently, but she made the difficult decision to return to Colombia.

“With the language barrier and my lack of engineering experience, I knew if I stayed in the United States, I would have to continue working jobs like housekeeping forever,” she says. “So I told my husband he had to wait for me because I was going back home to get some engineering experience.”

Cruz applied for engineering jobs in neighboring Brazil, which had more opportunities than Colombia did. In 2021, she joined Whirlpool as an electrical engineer at its R&D site in Joinville, Brazil. There, she introduced sensors and actuators from new suppliers into mass production.

Meanwhile, she applied for a U.S. Green Card, which would allow her to work and live permanently in the country. She received it six months after starting her job. Cruz asked her manager about transferring to one of Whirlpool’s U.S. facilities, not expecting to have any luck. Her manager set up a phone call with the manager of the components team at the company’s Benton Harbor site to discuss the request. Cruz didn’t realize that the call was actually a job interview. She was offered a position there as an electrical engineer and moved to Michigan later that year.

Designing Appliances Is Complex

Designing a new washing machine or dryer is a complex process, Cruz says. First, feedback from customers about desirable features is used to develop a high-level design. Then the product design work is divided among small teams of engineers, each responsible for a given subsystem, including hardware, software, materials, and components.

Part of Cruz’s job is to test components from different suppliers to make sure they meet safety, reliability, and performance requirements. She also writes the documentation that explains the components’ function and design to other engineers.

Cruz then helps select the groups of components to be used in a particular application—combining, say, three temperature sensors with two humidity sensors in an optimized location to create a system that finds the best time to stop the dryer.
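As a purely hypothetical illustration of the kind of decision such a combination of components might feed, here is a brief Python sketch. The sensor counts follow the example in the text, but the thresholds and the stopping rule are invented for this sketch and are not Whirlpool’s algorithm.

```python
# A hypothetical sketch of the sensor-combination idea described above; it is
# not Whirlpool's algorithm. Sensor counts follow the example in the text,
# but the thresholds and the stopping rule are invented for illustration.

def average(readings):
    return sum(readings) / len(readings)

def should_stop_dryer(temps_c, humidities_pct,
                      min_exhaust_temp_c=55.0, dry_humidity_pct=12.0):
    """Stop once the exhaust air is hot enough and residual humidity is low."""
    return (average(temps_c) >= min_exhaust_temp_c
            and average(humidities_pct) <= dry_humidity_pct)

# Three temperature sensors and two humidity sensors, as in the example above.
print(should_stop_dryer([57.0, 58.5, 56.2], [10.5, 11.0]))  # True: end the cycle
```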

Building a Supportive Environment

Cruz loves her job, but her father’s fears about her entering a male-dominated field weren’t unfounded. Discrimination was worse in Colombia, she says, where she regularly experienced inappropriate comments and behavior from university classmates and teachers.

Even in the United States, she points out, “As a female engineer, you have to actually show you are able to do your job, because occasionally at the beginning of a project men are not convinced.”

In both Brazil and Michigan, Cruz says, she’s been fortunate to often end up on teams with a majority of women, who created a supportive environment. That support was particularly important when she had her first child and struggled to balance work and home life.

“It’s easier to talk to women about these struggles,” she says. “They know how it feels because they have been through it too.”

Update Your Knowledge

Working in the consumer electronics industry is rewarding, Cruz says. She loves going into a store or visiting someone’s home and seeing the machines that she’s helped build in action.

A degree in electronics engineering is a must for the field, Cruz says, but she’s also a big advocate of developing project management and critical thinking skills. She is a certified associate in project management, granted by the Project Management Institute, and has been trained in using tools that facilitate critical thinking. She says the project management program taught her how to solve problems in a more systematic way and helped her stand out in interviews.

It’s also important to constantly update your knowledge, Cruz says, “because electronics is a discipline that doesn’t stand still. Keep learning. Electronics is a science that is constantly growing.”

The Saga of AD-X2, the Battery Additive That Roiled the NBS



Senate hearings, a post office ban, the resignation of the director of the National Bureau of Standards, and his reinstatement after more than 400 scientists threatened to resign. Who knew a little box of salt could stir up such drama?

What was AD-X2?

It all started in 1947 when a bulldozer operator with a sixth-grade education, Jess M. Ritchie, teamed up with UC Berkeley chemistry professor Merle Randall to promote AD-X2, an additive to extend the life of lead-acid batteries. The problem of these rechargeable batteries’ dwindling capacity was well known. If AD-X2 worked as advertised, millions of car owners would save money.

Jess M. Ritchie demonstrates his AD-X2 battery additive before the Senate Select Committee on Small Business. National Institute of Standards and Technology Digital Collections

A basic lead-acid battery has two electrodes, one of lead and the other of lead dioxide, immersed in dilute sulfuric acid. When power is drawn from the battery, the chemical reaction consumes the acid and deposits lead sulfate on both electrodes. When the battery is charged, the chemical process reverses, returning the electrodes to their original state—almost. Each time the cell is discharged, some of the lead sulfate “hardens” into crystals that resist being converted back. Over time, it flakes off, and the battery loses capacity until it’s dead.
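In textbook form, the overall cell reaction reads left to right on discharge and right to left on charge:

\[
\mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4} \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O}
\]

Nothing in that equation changes if a salt is stirred into the electrolyte, which is essentially the point NBS kept making.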

By the 1930s, so many companies had come up with battery additives that the U.S. National Bureau of Standards stepped in. Its lab tests revealed that most were variations of salt mixtures, such as sodium and magnesium sulfates. Although the additives might help the battery charge faster, they didn’t extend battery life. In May 1931, NBS (now the National Institute of Standards and Technology, or NIST) summarized its findings in Letter Circular No. 302: “No case has been found in which this fundamental reaction is materially altered by the use of these battery compounds and solutions.”

Of course, innovation never stops. Entrepreneurs kept bringing new battery additives to market, and the NBS kept testing them and finding them ineffective.

Do battery additives work?

After World War II, the National Better Business Bureau decided to update its own publication on battery additives, “Battery Compounds and Solutions.” The publication included a March 1949 letter from NBS director Edward Condon, reiterating the NBS position on additives. Prior to heading NBS, Condon, a physicist, had been associate director of research at Westinghouse Electric in Pittsburgh and a consultant to the National Defense Research Committee. He helped set up MIT’s Radiation Laboratory, and he was also briefly part of the Manhattan Project. Needless to say, Condon was familiar with standard practices for research and testing.

Meanwhile, Ritchie claimed that AD-X2’s secret formula set it apart from the hundreds of other additives on the market. He convinced his senator, William Knowland, a Republican from Oakland, Calif., to write to NBS and request that AD-X2 be tested. NBS declined, not out of any prejudice or ill will, but because it tested products only at the request of other government agencies. The bureau also had a longstanding policy of not naming the brands it tested and not allowing its findings to be used in advertisements.

AD-X2 consisted mainly of Epsom salt and Glauber’s salt. National Institute of Standards and Technology Digital Collections

Ritchie cried foul, claiming that NBS was keeping new businesses from entering the marketplace. Merle Randall launched an aggressive correspondence with Condon and George W. Vinal, chief of NBS’s electrochemistry section, extolling AD-X2 and the testimonials of many users. In its responses, NBS patiently pointed out the difference between anecdotal evidence and rigorous lab testing.

Enter the Federal Trade Commission. The FTC had received a complaint from the National Better Business Bureau, which suspected that Pioneers, Inc.—Randall and Ritchie’s distribution company—was making false advertising claims. On 22 March 1950, the FTC formally asked NBS to test AD-X2.

By then, NBS had already extensively tested the additive. A chemical analysis revealed that it was 46.6 percent magnesium sulfate (Epsom salt) and 49.2 percent sodium sulfate (Glauber’s salt, a horse laxative) with the remainder being water of hydration (water bound up in the salts’ crystal structure). That is, AD-X2 was similar in composition to every other additive on the market. But, because of its policy of not disclosing which brands it tested, NBS didn’t immediately announce what it had learned.

The David and Goliath of battery additives

NBS then did something unusual: It agreed to ignore its own policy and let the National Better Business Bureau include the results of its AD-X2 tests in a public statement, which was published in August 1950. The NBBB allowed Pioneers to include a dissenting comment: “These tests were not run in accordance with our specification and therefore did not indicate the value to be derived from our product.”

Far from being cowed by the NBBB’s statement, Ritchie was energized, and his story was taken up by the mainstream media. Newsweek’s coverage pitted an up-by-his-bootstraps David against an overreaching governmental Goliath. Trade publications, such as Western Construction News and Batteryman, also published flattering stories about Pioneers. AD-X2 sales soared.

Then, in January 1951, NBS released its updated pamphlet on battery additives, Circular 504. Once again, tests by the NBS found no difference in performance between batteries treated with additives and the untreated control group. The Government Printing Office sold the circular for 15 cents, and it was one of NBS’s most popular publications. AD-X2 sales plummeted.

Ritchie needed a new arena in which to challenge NBS. He turned to politics. He called on all of his distributors to write to their senators. Between July and December 1951, 28 U.S. senators and one U.S. representative wrote to NBS on behalf of Pioneers.

Condon was losing his ability to effectively represent the Bureau. Although the Senate had confirmed Condon’s nomination as director without opposition in 1945, he was under investigation by the House Committee on Un-American Activities for several years. FBI Director J. Edgar Hoover suspected Condon to be a Soviet spy. (To be fair, Hoover suspected the same of many people.) Condon was repeatedly cleared and had the public backing of many prominent scientists.

But Condon felt the investigations were becoming too much of a distraction, and so he resigned on 10 August 1951. Allen V. Astin became acting director, and then permanent director the following year. And he inherited the AD-X2 mess.

Astin had been with NBS since 1930. Originally working in the electronics division, he developed radio telemetry techniques, and he designed instruments to study dielectric materials and measurements. During World War II, he shifted to military R&D, most notably development of the proximity fuse, which detonates an explosive device as it approaches a target. I don’t think that work prepared him for the political bombs that Ritchie and his supporters kept lobbing at him.

Mr. Ritchie almost goes to Washington

On 6 September 1951, another government agency entered the fray. C.C. Garner, chief inspector of the U.S. Post Office Department, wrote to Astin requesting yet another test of AD-X2. NBS dutifully submitted a report that the additive had “no beneficial effects on the performance of lead acid batteries.” The post office then charged Pioneers with mail fraud, and Ritchie was ordered to appear at a hearing in Washington, D.C., on 6 April 1952. More tests were ordered, and the hearing was delayed for months.

Back in March 1950, Ritchie had lost his biggest champion when Merle Randall died. In preparation for the hearing, Ritchie hired another scientist: Keith J. Laidler, an assistant professor of chemistry at the Catholic University of America. Laidler wrote a critique of Circular 504, questioning NBS’s objectivity and testing protocols.

Ritchie also got Harold Weber, a professor of chemical engineering at MIT, to agree to test AD-X2 and to work as an unpaid consultant to the Senate Select Committee on Small Business.

Life was about to get more complicated for Astin and NBS.

Why did the NBS Director resign?

Trying to put an end to the Pioneers affair, Astin agreed in the spring of 1952 that NBS would conduct a public test of AD-X2 according to terms set by Ritchie. Once again, the bureau concluded that the battery additive had no beneficial effect.

However, NBS deviated slightly from the agreed-upon parameters for the test. Although the bureau had a good scientific reason for the minor change, Ritchie had a predictably overblown reaction—NBS cheated!

Then, on 18 December 1952, the Senate Select Committee on Small Business—for which Ritchie’s ally Harold Weber was consulting—issued a press release summarizing the results from the MIT tests: AD-X2 worked! The results “demonstrate beyond a reasonable doubt that this material is in fact valuable, and give complete support to the claims of the manufacturer.” NBS was “simply psychologically incapable of giving Battery AD-X2 a fair trial.”

The National Bureau of Standards’ regular tests of battery additives found that the products did not work as claimed. National Institute of Standards and Technology Digital Collections

But the press release distorted the MIT results. The MIT tests had focused on diluted solutions and slow charging rates, not the normal use conditions for automobiles, and even then AD-X2’s impact was marginal. Once NBS scientists got their hands on the report, they identified the flaws in the testing.

How did the AD-X2 controversy end?

The post office finally got around to holding its mail fraud hearing in the fall of 1952. Ritchie failed to attend in person and didn’t realize his reports would not be read into the record without him, which meant the hearing was decidedly one-sided in favor of NBS. On 27 February 1953, the Post Office Department issued a mail fraud alert. All of Pioneers’ mail would be stopped and returned to sender stamped “fraudulent.” If this charge stuck, Ritchie’s business would crumble.

But something else happened during the fall of 1952: Dwight D. Eisenhower, running on a pro-business platform, was elected U.S. president in a landslide.

Ritchie found a sympathetic ear in Eisenhower’s newly appointed Secretary of Commerce Sinclair Weeks, who acted decisively. The mail fraud alert had been issued on a Friday. Over the weekend, Weeks had a letter hand-delivered to Postmaster General Arthur Summerfield, another Eisenhower appointee. By Monday, the fraud alert had been suspended.

What’s more, Weeks found that Astin was “not sufficiently objective” and lacked a “business point of view,” and so he asked for Astin’s resignation on 24 March 1953. Astin complied. Perhaps Weeks thought this would be a mundane dismissal, just one of the thousands of political appointments that change hands with every new administration. That was not the case.

More than 400 NBS scientists—over 10 percent of the bureau’s technical staff—threatened to resign in protest. The American Association for the Advancement of Science also backed Astin and NBS. In an editorial published in Science, the AAAS called the battery additive controversy itself “minor.” “The important issue is the fact that the independence of the scientist in his findings has been challenged, that a gross injustice has been done, and that scientific work in the government has been placed in jeopardy,” the editorial stated.

National Bureau of Standards director Edward Condon [left] resigned in 1951 because investigations into his political beliefs were impeding his ability to represent the bureau. Incoming director Allen V. Astin [right] inherited the AD-X2 controversy, which eventually led to Astin’s dismissal and then his reinstatement after a large-scale protest by NBS researchers and others. National Institute of Standards and Technology Digital Collections

Clearly, AD-X2’s effectiveness was no longer the central issue. The controversy was a stand-in for a larger debate concerning the role of government in supporting small business, the use of science in making policy decisions, and the independence of researchers. Over the previous few years, highly respected scientists, including Edward Condon and J. Robert Oppenheimer, had been repeatedly investigated for their political beliefs. The request for Astin’s resignation was yet another government incursion into scientific freedom.

Weeks, realizing his mistake, temporarily reinstated Astin on 17 April 1953, the day the resignation was supposed to take effect. He also asked the National Academy of Sciences to test AD-X2 in both the lab and the field. By the time the academy’s report came out in October 1953, Weeks had permanently reinstated Astin. The report, unsurprisingly, concluded that NBS was correct: AD-X2 had no merit. Science had won.

NIST makes a movie

On 9 December 2023, NIST released the 20-minute docudrama The AD-X2 Controversy. The film won the Best True Story Narrative and Best of Festival at the 2023 NewsFest Film Festival. I recommend taking the time to watch it.

The AD-X2 Controversy www.youtube.com

Many of the actors are NIST staff and scientists, and they really get into their roles. Much of the dialogue comes verbatim from primary sources, including congressional hearings and contemporary newspaper accounts.

Despite being an in-house production, NIST’s film has a Hollywood connection. The film features brief interviews with actors John and Sean Astin (of The Lord of the Rings and Stranger Things fame)—NBS director Astin’s son and grandson.

The AD-X2 controversy is just as relevant today as it was 70 years ago. Scientific research, business interests, and politics remain deeply entangled. If the public is to have faith in science, it must have faith in the integrity of scientists and the scientific method. I have no objection to science being challenged—that’s how science moves forward—but we have to make sure that neither profit nor politics is tipping the scales.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2024 print issue as “The AD-X2 Affair.”

References


I first heard about AD-X2 after my IEEE Spectrum editor sent me a notice about NIST’s short docudrama The AD-X2 Controversy, which you can, and should, stream online. NIST held a colloquium on 31 July 2018 with John Astin and his brother Alexander (Sandy), where they recalled what it was like to be college students when their father’s reputation was on the line. The agency has also compiled a wonderful list of resources, including many of the primary source government documents.

The AD-X2 controversy played out in the popular media, and I read dozens of articles following the almost daily twists and turns in the case in the New York Times, Washington Post, and Science.

I found Elio Passaglia’s A Unique Institution: The National Bureau of Standards 1950-1969 to be particularly helpful. The AD-X2 controversy is covered in detail in Chapter 2: Testing Can Be Troublesome.

A number of graduate theses have been written about AD-X2. One I consulted was Samuel Lawrence’s 1958 thesis “The Battery AD-X2 Controversy: A Study of Federal Regulation of Deceptive Business Practices.” Lawrence also published the 1962 book The Battery Additive Controversy.


Build a Radar Cat Detector



You have a closed box. There may be a live cat inside, but you won’t know until you open the box. For most people, this situation is a theoretical conundrum that probes the foundations of quantum mechanics. For me, however, it’s a pressing practical problem, not least because physics completely skates over the vital issue of how annoyed the cat will be when the box is opened. But fortunately, engineering comes to the rescue, in the form of a new US $50 maker-friendly pulsed coherent radar sensor from SparkFun.

Perhaps I should back up a little bit. Working from home during the pandemic, my wife and I discovered a colony of feral cats living in the backyards of our block in New York City. We reversed the colony’s growth by doing trap-neuter-return (TNR) on as many of its members as we could, and we purchased three Feralvilla outdoor shelters to see our furry neighbors through the harsh New York winters. These roughly cube-shaped insulated shelters allow the cats to enter via an opening in a raised floor. A removable lid on top allows us to replace straw bedding every few months. It’s impossible to see inside the shelter without removing the lid, meaning you run the risk of surprising a clawed predator that, just moments before, had been enjoying a quiet snooze.

The enclosure for the radar [left column] is made of basswood (adding cat ears on top is optional). A microcontroller [top row, middle column] processes the results from the radar module [top row, right column] and illuminates the LEDs [right column, second from top] accordingly. A battery and on/off switch [bottom row, left to right] make up the power supply. James Provost

Feral cats respond to humans differently than socialized pet cats do. They see us as threats rather than bumbling servants. Even after years of daily feeding, most of the cats in our block’s colony will not let us approach closer than a meter or two, let alone suffer being touched. They have claws that have never seen a clipper. And they don’t like being surprised or feeling hemmed in. So I wanted a way to find out if a shelter was occupied before I popped open its lid for maintenance. And that’s where radar comes in.

SparkFun’s pulsed coherent radar module is based on Acconeer’s low-cost A121 sensor. Smaller than a fingernail, the sensor operates at 60 gigahertz, which means its signal can penetrate many common materials. As the signal passes through a material, some of it is reflected back to the sensor, allowing you to determine distances to multiple surfaces with millimeter-level precision. The radar can be put into a “presence detector” mode—intended to flag whether or not a human is present—in which it looks for changes in the distance of reflections to identify motion.
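For reference, the ranging rests on the usual pulsed-radar relation: a reflection that returns after a delay Δt sits at a distance d = cΔt/2, and at 60 GHz the wavelength is only about 5 millimeters, which is what makes millimeter-level, through-material measurements plausible. (Acconeer’s coherent processing adds phase information on top of this, but the basic geometry is the same.)

\[
d = \frac{c\,\Delta t}{2}, \qquad \lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \text{m/s}}{60\times 10^{9}\ \text{Hz}} = 5\ \text{mm}
\]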

As soon as I saw the announcement for SparkFun’s module, the wheels began turning. If the radar could detect a human, why not a feline? Sure, I could have solved my is-there-a-cat-in-the-box problem with less sophisticated technology, by, say, putting a pressure sensor inside the shelter. But that would have required a permanent setup complete with weatherproofing, power, and some way of getting data out. Plus I’d have to perform three installations, one for each shelter. For information I needed only once every few months, that seemed a bit much. So I ordered the radar module, along with a $30 IoT RedBoard microcontroller. The RedBoard operates at the same 3.3 volts as the radar and can configure the module and parse its output.

If the radar could detect a human, why not a feline?

Connecting the radar to the RedBoard was a breeze, as they both have Qwiic 4-wire interfaces, which provide power along with an I2C serial connection to peripherals. SparkFun’s Arduino libraries and example code let me quickly test the idea’s feasibility by connecting the microcontroller to a host computer via USB, and I could view the results from the radar via a serial monitor. Experiments with our indoor cats (two defections from the colony) showed that the motion of their breathing was enough to trigger the presence detector, even when they were sound asleep. Further testing showed the radar could penetrate the wooden walls of the shelters and the insulated lining.

The next step was to make the thing portable. I added a small $11 lithium battery and spliced an on/off switch into its power lead. I hooked up two gumdrop LEDs to the RedBoard’s input/output pins and modified SparkFun’s sample scripts to illuminate the LEDs based on the output of the presence detector: a green LED for “no cat” and red for “cat.” I built an enclosure out of basswood, mounted the circuit boards and battery, and cut a hole in the back as a window for the radar module. (Side note: Along with tending feral cats, another thing I tried during the pandemic was 3D-printing plastic enclosures for projects. But I discovered that cutting, drilling, and gluing wood was faster, sturdier, and much more forgiving when making one-offs or prototypes.)
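In outline, the modified loop amounts to something like the sketch below. The radar call is a stub standing in for SparkFun’s library (whose exact function names I won’t reproduce here), and the pin numbers are illustrative rather than the ones I actually used:

#include <Arduino.h>
#include <Wire.h>

const int GREEN_LED_PIN = 16;  // "no cat" indicator (illustrative pin)
const int RED_LED_PIN   = 17;  // "cat" indicator (illustrative pin)

// Stub standing in for the SparkFun library call that queries the
// radar module's presence-detector result over I2C.
bool readRadarPresence() {
    return false;  // placeholder so the sketch is self-contained
}

void setup() {
    Wire.begin();                    // I2C bus shared over the Qwiic connector
    pinMode(GREEN_LED_PIN, OUTPUT);
    pinMode(RED_LED_PIN, OUTPUT);
}

void loop() {
    bool catDetected = readRadarPresence();
    digitalWrite(GREEN_LED_PIN, catDetected ? LOW : HIGH);  // green = safe to open
    digitalWrite(RED_LED_PIN,   catDetected ? HIGH : LOW);  // red = occupied
    delay(250);                      // poll a few times per second
}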

The radar sensor sends out 60-gigahertz pulses through the walls and lining of the shelter. As the pulses penetrate the layers, some radiation is reflected back to the sensor, which uses those reflections to determine distances. Some materials will reflect the pulse more strongly than others, depending on their electrical permittivity. James Provost

I also modified the scripts to adjust the range over which the presence detector scans. When I hold the detector against the wall of a shelter, it looks only at reflections coming from the space between that wall and the opposite side, a distance of about 50 centimeters. As all the cats in the colony are adults, they take up enough of a shelter’s volume to intersect any such radar beam, as long as I don’t place the detector near a corner.
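Concretely, the only addition to the sketch above is pinning the detector’s start and end range before measurements begin. These numbers are my reading of the geometry described here, not SparkFun defaults, and the function that applies them is again a stand-in for the library’s own configuration call:

#include <stdint.h>

// Detection window, in millimeters, spanning the shelter interior.
const uint32_t RANGE_START_MM = 60;   // just beyond the wall the unit rests against
const uint32_t RANGE_END_MM   = 500;  // roughly at the far wall, about 50 cm away

// Stand-in for the library call that writes these limits to the radar
// module before the presence detector starts.
void applyRadarRange(uint32_t startMm, uint32_t endMm) {
    (void)startMm;
    (void)endMm;
}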

I performed in-shelter tests of the portable detector with one of our indoor cats, bribed with treats to sit in the open box for several seconds at a time. The detector did successfully spot him whenever he was inside, although it is prone to false positives. I will be trying to reduce these errors by adjusting the plethora of available configuration settings for the radar. But in the meantime, false positives are much more desirable than false negatives: A “no cat” light means it’s definitely safe to open the shelter lid, and my nerves (and the cats’) are the better for it.

The Engineer Who Pins Down the Particles at the LHC



The Large Hadron Collider has transformed our understanding of physics since it began operating in 2008, enabling researchers to investigate the fundamental building blocks of the universe. Some 100 meters below the border between France and Switzerland, particles accelerate along the LHC’s 27-kilometer circumference, nearly reaching the speed of light before smashing together.

The LHC is often described as the biggest machine ever built. And while the physicists who carry out experiments at the facility tend to garner most of the attention, it takes hundreds of engineers and technicians to keep the LHC running. One such engineer is Irene Degl’Innocenti, who works in digital electronics at the European Organization for Nuclear Research (CERN), which operates the LHC. As a member of CERN’s beam instrumentation group, Degl’Innocenti creates custom electronics that measure the position of the particle beams as they travel.

Irene Degl’Innocenti

Employer: CERN

Occupation: Digital electronics engineer

Education: Bachelor’s and master’s degrees in electrical engineering; Ph.D. in electrical, electronics, and communications engineering, University of Pisa, in Italy

“It’s a huge machine that does very challenging things, so the amount of expertise needed is vast,” Degl’Innocenti says.

The electronics she works on make up only a tiny part of the overall operation, something Degl’Innocenti is keenly aware of when she descends into the LHC’s cavernous tunnels to install or test her equipment. But she gets great satisfaction from working on such an important endeavor.

“You’re part of something that is very huge,” she says. “You feel part of this big community trying to understand what is actually going on in the universe, and that is very fascinating.”

Opportunities to Work in High-energy Physics

Growing up in Italy, Degl’Innocenti wanted to be a novelist. Throughout high school she leaned toward the humanities, but she had a natural affinity for math, thanks in part to her mother, who is a science teacher.

“I’m a very analytical person, and that has always been part of my mind-set, but I just didn’t find math charming when I was little,” Degl’Innocenti says. “It took a while to realize the opportunities it could open up.”

She started exploring electronics around age 17 because it seemed like the most direct way to translate her logical, mathematical way of thinking into a career. In 2011, she enrolled in the University of Pisa, in Italy, earning a bachelor’s degree in electrical engineering in 2014 and staying on to earn a master’s degree in the same subject.

At the time, Degl’Innocenti had no idea there were opportunities for engineers to work in high-energy physics. But she learned that a fellow student had attended a summer internship at Fermilab, the particle physics and accelerator laboratory in Batavia, Ill. So she applied for and won an internship there in 2015. Since Fermilab and CERN closely collaborate, she was able to help design a data-processing board for the LHC’s Compact Muon Solenoid experiment.

Next she looked for an internship closer to home and discovered CERN’s technical student program, which allows students to work on a project over the course of a year. Working in the beam-instrumentation group, Degl’Innocenti designed a digital-acquisition system that became the basis for her master’s thesis.

Measuring the Position of Particle Beams

After receiving her master’s in 2017, Degl’Innocenti went on to pursue a Ph.D., also at the University of Pisa. She conducted her research at CERN’s beam-position section, which builds equipment to measure the position of particle beams within CERN’s accelerator complex. The LHC has roughly 1,000 monitors spaced around the accelerator ring. Each monitor typically consists of two pairs of sensors positioned on opposite sides of the accelerator pipe, and it is possible to measure the beam’s horizontal and vertical positions by comparing the strength of the signal at each sensor.
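For one plane, the standard first-order way to turn the two opposing signals into a position is a difference-over-sum ratio (the exact calibration CERN applies isn’t spelled out here):

\[
x \;\approx\; k_x\,\frac{A - B}{A + B}
\]

where A and B are the signal amplitudes from the two opposing sensors and k_x is a scale factor set by the pickup geometry; the vertical position comes the same way from the other pair.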

The underlying concept is simple, Degl’Innocenti says, but these measurements must be precise. Bunches of particles pass through the monitors every 25 nanoseconds, and their position must be tracked to within 50 micrometers.
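Those two figures set the pace of the electronics: a bunch arriving every 25 nanoseconds means the pickups see signals at a rate of up to

\[
f = \frac{1}{25\ \text{ns}} = 40\ \text{MHz},
\]

with each position estimate expected to be good to 50 micrometers.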

“We start developing a system years in advance, and then it has to work for a couple of decades.”

Most of the signal processing is normally done in analog, but during her Ph.D., she focused on shifting as much of this work as possible to the digital domain because analog circuits are finicky, she says. They need to be precisely calibrated, and their accuracy tends to drift over time or when temperatures fluctuate.

“It’s complex to maintain,” she says. “It becomes particularly tricky when you have 1,000 monitors, and they are located in an accelerator 100 meters underground.”

Information is lost when analog is converted to digital, however, so Degl’Innocenti analyzed the performance of the latest analog-to-digital converters (ADCs) and investigated their effect on position measurements.
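One common yardstick for that trade-off (not necessarily the one Degl’Innocenti used) is the quantization-limited signal-to-noise ratio of an ideal N-bit converter driven by a full-scale sine wave:

\[
\mathrm{SNR}_{\text{ideal}} \;\approx\; 6.02\,N + 1.76\ \text{dB}
\]

Real ADCs fall short of this ideal, which is why their effective resolution, not just their nominal bit count, determines how finely the beam position can be resolved.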

Designing Beam-Monitor Electronics

After completing her Ph.D. in electrical, electronics, and communications engineering in 2021, Degl’Innocenti joined CERN as a senior postdoctoral fellow. Two years later, she became a full-time employee there, applying the results of her research to developing new hardware. She’s currently designing a new beam-position monitor for the High-Luminosity upgrade to the LHC, expected to be completed in 2028. This new system will likely use a system-on-chip to house most of the electronics, including several ADCs and a field-programmable gate array (FPGA) that Degl’Innocenti will program to run a new digital signal-processing algorithm.

She’s part of a team of just 15 who handle design, implementation, and ongoing maintenance of CERN’s beam-position monitors. So she works closely with the engineers who design sensors and software for those instruments and the physicists who operate the accelerator and set the instruments’ requirements.

“We start developing a system years in advance, and then it has to work for a couple of decades,” Degl’Innocenti says.

Opportunities in High-Energy Physics

High-energy physics has a variety of interesting opportunities for engineers, Degl’Innocenti says, including high-precision electronics, vacuum systems, and cryogenics.

“The machines are very large and very complex, but we are looking at very small things,” she says. “There are a lot of big numbers involved both at the large scale and also when it comes to precision on the small scale.”

FPGA design skills are in high demand at all kinds of research facilities, and embedded systems are also becoming more important, Degl’Innocenti says. The key is keeping an open mind about where to apply your engineering knowledge, she says. She never thought there would be opportunities for people with her skill set at CERN.

“Always check what technologies are being used,” she advises. “Don’t limit yourself by assuming that working somewhere would not be possible.”

This article appears in the August 2024 print issue as “Irene Degl’Innocenti.”
