
Due to AI fakes, the “deep doubt” era is here

Credit: Memento | Aurich Lawson

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.


From Punch Cards to Python



In today’s digital world, it’s easy for just about anyone to create a mobile app or write software, thanks to Java, JavaScript, Python, and other programming languages.

But that wasn’t always the case. Because the primary language of computers is binary code, early programmers used punch cards to instruct computers what tasks to complete. Each hole represented a single binary digit.

That changed in 1952 with the A-0 compiler, a series of specifications that automatically translated higher-level, English-like instructions into machine-readable binary code.

The compiler, now an IEEE Milestone, was developed by Grace Hopper, who worked as a senior mathematician at the Eckert-Mauchly Computer Corp., now part of Unisys, in Philadelphia.

The IEEE Fellow’s innovation allowed programmers to write code faster and more easily using English commands. For her, however, the most important outcome was the influence it had on the development of modern programming languages, making writing code more accessible to everyone, according to a Penn Engineering Today article.

The dedication of the A-0 compiler as an IEEE Milestone was held in Philadelphia on 7 May at the University of Pennsylvania. That’s where the Eckert-Mauchly Computer Corp. got its start.

“This milestone celebrates the first step of applying computers to automate the tedious portions of their own programming,” André DeHon, professor of electrical and systems engineering and computer science, said at the dedication ceremony.

Eliminating the punch-card system

To program a computer, early technicians wrote out tasks in assembly language—a human-readable way to write machine code, which is made up of binary numbers. They then manually translated the assembly language into machine code and punched holes representing the binary digits into cards, according to a Medium article on the method. The cards were fed into a machine that read the holes and input the data into the computer.

The punch-card system was laborious; it could take days to complete a task. The cards couldn’t be used with even a slight defect such as a bent corner. The method also had a high risk of human error.

After leading the development of the Electronic Numerical Integrator and Computer (ENIAC) at Penn, computer scientists J. Presper Eckert and John W. Mauchly set about creating a replacement for punch cards. ENIAC was built to improve the accuracy of U.S. artillery during World War II, but the two men wanted to develop computers for commercial applications, according to a Pennsylvania Center for the Book article.

The machine they designed, the Universal Automatic Computer, or UNIVAC I, was the first large-scale commercial electronic computer produced in the United States. Hopper was on its development team.

UNIVAC I used 6,103 vacuum tubes and took up a 33-square-meter room. The machine had a memory unit. Instead of punch cards, the computer used magnetic tape to input data. The tapes, which could hold audio, video, and written data, were up to 457 meters long. Unlike previous computers, the UNIVAC I had a keyboard so an operator could input commands, according to the Pennsylvania Center for the Book article.

“This milestone celebrates the first step of applying computers to automate the tedious portions of their own programming.” —André DeHon

Technicians still had to manually feed instructions into the computer, however, to run any new program.

That time-consuming process led to errors because “programmers are lousy copyists,” Hopper said in a speech for the Association for Computing Machinery. “It was amazing how many times a 4 would turn into a delta, which was our space symbol, or into an A. Even B’s turned into 13s.”

According to a Hidden Heroes article, Hopper had an idea for simplifying programming: Have the computer translate English to machine code.

She was inspired by computer scientist Betty Holberton’s sort/merge generator and Mauchly’s Short Code. Holberton was one of the six women who programmed the ENIAC to calculate artillery trajectories in seconds, and she worked alongside Hopper on the UNIVAC I. Her sort/merge program, invented in 1951 for the UNIVAC I, handled the large data files stored on magnetic tapes. Hopper described the sort/merge program as the first version of virtual memory because it made use of overlays automatically, without being directed to do so by the programmer, according to a Stanford presentation about programming languages. Short Code, which was developed in the 1940s, allowed technicians to write programs using brief sequences of English words corresponding directly to machine-code instructions. It bridged the gap between human-readable code and machine-executable instructions.

“I think the first step to tell us that we could actually use a computer to write programs was the sort/merge generator,” Hopper said in the presentation. “And Short Code was the first step in moving toward something which gave a programmer the actual power to write a program in a language which bore no resemblance whatsoever to the original machine code.”

IEEE Fellow Grace Hopper inputting call numbers into the Universal Automatic Computer (UNIVAC I), which allowed the computer to find the correct instructions to complete a task. The A-0 compiler translated the English instructions into machine-readable binary code. Credit: Computer History Museum

Easier, faster, and more accurate programming

Hopper, who figured computers should speak human-like languages, rather than requiring humans to speak computer languages, began thinking about how to allow programmers to call up specific codes using English, according to an IT Professional profile.

But she needed a library of frequently used instructions for the computer to reference and a system to translate English to machine code. That way, the computer could understand what task to complete.

Such a library didn’t exist, so Hopper built her own. It included tapes that held frequently used instructions for tasks that she called subroutines. Each tape stored one subroutine, which was assigned a three-number call sign so that the UNIVAC I could locate the correct tape. The numbers represented sets of three memory addresses: one for the memory location of the subroutine, another for the memory location of the data, and the third for the output location, according to the Stanford presentation.
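Hopper’s scheme maps neatly onto a lookup table. Below is a minimal, purely illustrative Python sketch of the idea, with invented subroutine numbers and a dictionary standing in for the tape library; it is not how the A-0 was actually implemented on the UNIVAC I.

```python
# Hypothetical sketch of the A-0 idea (names and data structures are invented
# for illustration; the real compiler located subroutines on UNIVAC I magnetic
# tape). A program is a list of call signs, each carrying three "addresses":
# where the subroutine lives, where its data lives, and where the result goes.

SUBROUTINE_LIBRARY = {
    101: lambda operands: operands[0] + operands[1],   # addition subroutine
    102: lambda operands: operands[0] * operands[1],   # multiplication subroutine
}

def compile_and_run(call_sheet, memory):
    """Look up each subroutine by its call number, feed it the data at the
    given address, and store the result at the output address."""
    for subroutine_no, data_addr, output_addr in call_sheet:
        memory[output_addr] = SUBROUTINE_LIBRARY[subroutine_no](memory[data_addr])
    return memory

memory = {10: (2.0, 3.0), 11: None, 12: None}
program = [(101, 10, 11),   # memory[11] = 2.0 + 3.0
           (102, 10, 12)]   # memory[12] = 2.0 * 3.0
print(compile_and_run(program, memory))   # {10: (2.0, 3.0), 11: 5.0, 12: 6.0}
```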

“All I had to do was to write down a set of call numbers, let the computer find them on the tape, and do the additions,” she said in a Centre for Computing History article. “This was the first compiler.”

The system was dubbed the A-0 compiler because code was written in one language, which was then “compiled” into a machine language.

What previously had taken a month of manual coding could now be done in five minutes, according to a Cockroach Labs article.

Hopper presented the A-0 to Eckert-Mauchly Computer executives. Instead of being excited, though, they said they didn’t believe a computer could write its own programs, according to the article.

“I had a running compiler, and nobody would touch it, because they carefully told me computers could only do arithmetic; they could not do programs,” Hopper said. “It was a selling job to get people to try it. I think with any new idea, because people are allergic to change, you have to get out and sell the idea.”

It took two years for the company’s leadership to accept the A-0.

In 1954, Hopper was promoted to director of automatic programming for the UNIVAC division. She went on to create the first compiler-based programming languages, including Flow-Matic, the first English-language data-processing compiler. It was used to program UNIVAC I and II machines.

Hopper also was involved in developing COBOL, one of the earliest standardized computer languages. It enabled computers to respond to words in addition to numbers, and it is still used in business, finance, and administrative systems. Hopper’s Flow-Matic formed the foundation of COBOL, whose first specifications were made available in 1959.

A plaque recognizing the A-0 is now displayed at the University of Pennsylvania. It reads:

During 1951–1952, Grace Hopper invented the A-0 Compiler, a series of specifications that functioned as a linker/loader. It was a pioneering achievement of automatic programming as well as a pioneering utility program for the management of subroutines. The A-0 Compiler influenced the development of arithmetic and business programming languages. This led to COBOL (Common Business-Oriented Language), becoming the dominant high-level language for business applications.

The IEEE Philadelphia Section sponsored the nomination.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments worldwide.

About Grace Hopper


Hopper didn’t start as a computer programmer. She was a mathematician at heart, earning bachelor’s degrees in mathematics and physics in 1928 from Vassar College, in Poughkeepsie, N.Y. She then received master’s and doctoral degrees in mathematics and mathematical physics from Yale in 1930 and 1934, respectively.

She taught math at Vassar, but after the bombing of Pearl Harbor and the U.S. entry into World War II, Hopper joined the war effort. She took a leave of absence from Vassar to join the U.S. Naval Reserve (Women’s Reserve) in December 1943. She was assigned to the Bureau of Ships Computation Project at Harvard, where she worked for mathematician Howard Aiken. She was part of Aiken’s team that developed the Mark I, one of the earliest electromechanical computers. Hopper was the third person and the first woman to program the machine.

After the war ended, she became a research fellow at the Harvard Computation Laboratory. In 1949 she joined the Eckert-Mauchly Computer Corp., remaining with the company and its successors until her retirement in 1971. During 1959 she was an adjunct lecturer at Penn’s Moore School of Electrical Engineering.

Her work in programming earned her the nickname “Amazing Grace,” according to an entry about her on the Engineering and Technology History Wiki.

Hopper remained a member of the Naval Reserve and, in 1967, was recalled to active duty. She led the effort to standardize programming languages for the military, according to the ETHW entry. She was eventually promoted to rear admiral. When she retired from the Navy in 1986 at the age of 79, she was the oldest serving officer in all the U.S. armed forces.

Among her many honors was the 1991 U.S. National Medal of Technology and Innovation “for her pioneering accomplishments in the development of computer programming languages that simplified computer technology and opened the door to a significantly larger universe of users.”

She received 40 honorary doctorates from universities, and the Navy named a warship in her honor.

Canon R5 Mk ii Drops Pixel Shift High Res. – Is Canon Missing the AI Big Picture?

Introduction

Sometimes, companies make what seem, on the surface, to be technically poor decisions. I consider this the case with Canon’s new R5 Mark ii (and R1) dropping support for sensor Pixel Shifting High Resolution (what Canon calls IBIS High Res). Canon removed the IBIS High Res mode, which captures (as I will demonstrate) more real information, and seemingly replaced it with in-camera AI upscaling that creates fake information. AI upscaling, if desired, can be done better and more conveniently on a computer, but Pixel Shift/IBIS High Res cannot.

The historical reason for pixel shift is to give higher resolution in certain situations. Still, because the cameras combine the images “in-camera” with the camera’s limited processing and memory resources plus simple firmware algorithms, they can’t deal with either camera or subject motion. Additionally, while the Canon R5 can take 20 frames per second (the R5 Mark ii can take 30 frames per second), taking the nine frames takes about half a second, but then it takes another ~8 seconds for the camera to process them. Rather than putting more restrictions on shooting, it would have been much easier and faster to save the raw frames (with original sensor subpixels) to the flash drive for processing later by a much more capable computer using better algorithms that can constantly be improved.

Canon’s competitors, Sony and Nikon, are already saving raw files with their pixel-shift modes. I hoped Canon would see the light with the new R5 mark ii (R5m2) and support IBIS HR by saving the raw frames. Instead, Canon went in the wrong direction; they dropped IBIS High Res altogether and added an in-camera “AI upscaling.” The first-generation R5 didn’t have IBIS High Res at launch, but a later firmware release added the capability. I’m hoping the same will happen with the R5 Mark ii, only this time saving the RAW frames rather than creating an in-camera JPEG.

Features Versus Capabilities

I want to distinguish between a “feature” and a “capability.” Take, for example, high dynamic range. The classic photography problem is taking a picture in a room with a window with a view; you can expose for the inside of the room, in which case the view out the window will be blown out, or you can expose for the view out the window, in which case the room will look nearly black. The Canon R5 has an “HDR Mode” that takes multiple frames at different exposure settings and lets you save either just the single processed image or the processed image along with all the source frames. The “feature” was making a single HDR image; the “capability” was rapidly taking multiple frames with different exposures and saving those frames.

The Canon R5 made IBIS High Res a feature, not a capability, by offering only a single JPEG output without the ability to save the individual frames taken with the sensor shifted by sub-pixel amounts. By saving raw frames, software could combine the frames better. Additionally, the software could deal with camera and subject motion, which cause artifacts that are unfixable in an IBIS High Res JPEG. As it is, when I use IBIS High Res, I typically take three pictures just in case, as one of the pictures often has unfixable problems that can only be seen once viewed on a computer monitor. It would also be desirable to select how many frames to save; for example, saving more than one cycle of frames would help deal with subject or camera motion.

Cameras today support some aspects of “computational photography.” Saving multiple images can be used for panoramic stitching, high dynamic range, focus stacking (to support a larger depth of field than is possible with a single picture), and astrophotography image stacking (using interval timers to take many shots that are added together). Many cameras, like the R5, have even added modes to support taking multiple pictures for focus stacking, high dynamic range, and interval timers. So for the R5 mk. ii to have dropped sensor pixel shifting seems like a step backward in the evolution of photography.

This Blog’s Use of Pixel Shifting for Higher Resolution

In the past year, I started using my “personal camera,” the Canon R5 (45MP “full frame” 35mm), to take pictures of VR/Passthrough-AR and optical AR glasses (where possible). I also use my older Olympus D5 Mark iii (20MP Micro 4/3rd) because it is a smaller camera with smaller lenses, which lets it get into the optimum optical location in smaller form factor AR glasses.

Both cameras have “In-Body Image Stabilization” (IBIS) that normally moves the camera sensor based on motion detection to reduce camera/lens motion blur. Both also support a high-resolution mode where, instead of using the IBIS mechanism for stabilization, they use it to shift the sensor by a fraction of a pixel between exposures to build a higher-resolution image. Canon calls this capability “IBIS High Res.” The R5 combines nine images in-camera, each shifted by 1/3rd of a pixel, to make a 405MP JPEG image. The D5 combines four images, each shifted by half a pixel.
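To see why sub-pixel shifts capture real detail rather than inventing it, here is a minimal Python/NumPy sketch (emphatically not Canon’s in-camera algorithm): nine frames, each sampled with the sensor offset by 1/3 of a pixel, interleave exactly onto a 3×-finer grid. Real firmware also has to handle the Bayer mosaic, noise, and motion, which this toy example ignores.

```python
import numpy as np

def simulate_capture(scene_hr, dy, dx, factor=3):
    """Sample a high-res scene at every `factor`-th pixel, offset by (dy, dx),
    mimicking one exposure with the sensor shifted by a fraction of a pixel."""
    return scene_hr[dy::factor, dx::factor]

def combine_pixel_shift(frames, factor=3):
    """Interleave the shifted low-resolution frames back onto the fine grid."""
    h, w = frames[(0, 0)].shape
    out = np.zeros((h * factor, w * factor))
    for (dy, dx), frame in frames.items():
        out[dy::factor, dx::factor] = frame
    return out

rng = np.random.default_rng(0)
scene = rng.random((30, 30))                      # stand-in for a finely detailed scene
frames = {(dy, dx): simulate_capture(scene, dy, dx)
          for dy in range(3) for dx in range(3)}  # nine 1/3-pixel-shifted exposures
print(np.allclose(combine_pixel_shift(frames), scene))  # True: the shifts recover real detail
```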

The cameras and lenses I use most are shown on the right, except for the large RF15-35mm lens on the R5 camera, which is shown for comparison. To take pictures through the optics and get inside the eye box/pupil, the lens has to be physically close to the image sensor in the camera, which limits lens selection. Thus, while the RF15-35mm lens is “better” than the fixed-focal-length 28mm and 16mm lenses, it won’t work for taking a headset picture. The RF28mm and RF16mm lenses are the only full-frame Canon lenses I have found to work. Cell phones with small lenses “work,” but they don’t have the resolution of a dedicated camera or the aperture and shutter-speed control necessary to get good pictures through headsets.

Moiré

Moiré example, via Bigscreen Beyond

In addition to photography being my hobby, I take tens of thousands of pictures a year through the optics of AR and VR headsets, which pose particular challenges for this blog. Because I’m shooting displays that have a regular pattern of pixels with a camera that has its own regular pattern of pixels, there is a constant chance of moiré due to beat frequencies between the pixels and color subpixels of the camera and the display device, as magnified by the camera and headset optics (left). To stay within the eye box/pupil of the headset, I am limited to simpler lenses that are physically short, to keep the distance from the headset optics to the camera short; this limits the focal lengths, and thus the magnification available, to combat moiré. In-camera pixel shifting has proven to be a way to not only improve resolution but also greatly reduce moiré effects.

Issues with moiré are not limited to taking pictures through AR and VR headsets; it is a problem in real-world pictures that include things like patterns in clothing, fences (famously, when seen from a distance where they form a fine pattern), and other objects with a regular pattern (see typical photographic moiré problems below).

Anti-Aliasing

Those who know signal theory know that a low-pass cutoff filter reduces/avoids aliasing (moiré is a form of aliasing). Cameras have also used “anti-aliasing” filters, which very slightly blur the image to reduce aliasing, but this comes at the expense of resolution. In the past, with lower-resolution sensors, the chance of encountering real-world things in a picture that would cause aliasing was more likely, and the anti-aliasing filters were more necessary.
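Here is a tiny NumPy illustration (made-up numbers, not camera data) of that trade-off: sampling a fine pattern too coarsely produces a false low-frequency pattern, while blurring (low-pass filtering) first suppresses the false pattern at the cost of fine detail.

```python
import numpy as np

n = 1000
x = np.arange(n)
fine_pattern = np.sin(2 * np.pi * 0.44 * x)   # detail near the sampling limit

step = 10                                     # coarse sampling, like large sensor pixels
aliased = fine_pattern[::step]                # a false, lower-frequency pattern appears

kernel = np.ones(step) / step                 # crude low-pass (anti-aliasing) filter
filtered = np.convolve(fine_pattern, kernel, mode="same")[::step]

print(np.std(aliased), np.std(filtered))      # aliased keeps large false swings; filtered does not
```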

As sensor resolution has increased, it has become less likely that something in focus in a typical picture will be fine enough to alias, and better algorithms can detect and reduce the effect of moiré. Still, while moiré can sometimes be fixed in post-processing, in critical or difficult situations it would be better if additional frames were stored to clue the software into treating it as aliasing/moiré rather than as “real” information.

Camera Pixels and Bayer Filter (and misunderstanding)

Most cameras today (including Canon’s) use a Bayer filter pattern (below right) with two green-filtered pixels for each red or blue pixel. When producing an image for a person to view, the camera or a computer’s RAW conversion software performs a process often called “debayering” or “demosaicing,” generating a full-color pixel by combining the information from many (8 or more) surrounding single-color pixels, with the total number of full-color pixels equaling the number of photosites.
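As a rough illustration of what demosaicing does, here is a simple bilinear sketch in Python/NumPy (real cameras and RAW converters use far more sophisticated, edge-aware algorithms): each missing color at a photosite is estimated from neighboring photosites of that color, yielding one full-color pixel per photosite.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic: every photosite holds one
    color sample; the two missing colors are interpolated from neighbors."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate([r_mask, g_mask, b_mask]):
        samples = np.where(mask, mosaic, 0.0)
        weights = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., ch] = convolve(samples, kernel, mode="mirror") / weights
    return rgb   # one full-color pixel per photosite

mosaic = np.random.default_rng(1).random((8, 8))  # stand-in RAW mosaic
print(demosaic_bilinear(mosaic).shape)            # (8, 8, 3)
```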

Camera makers count every photosite as a pixel even though the camera only captured “one color” at that photosite. Some people, somewhat mistakenly, think the resolution is one-quarter of what is claimed since only one-quarter of the photosites are red and one-quarter are blue. After all, with a color monitor, we don’t count the red, green, and blue subpixels as 3 pixels but just one. However, Microsoft’s ClearType does gain some resolution from the color subpixels to render text better.

It turns out that except for extreme cases, including special test patterns, the effective camera resolution is close to the number of photosites (and not 1/4th or 1/2). There are several reasons why this is true. First, note the red, green, and blue filters’ spectral responses for a color camera sensor (above left, taken from a Sony sensor for which the data was available). Notice how their spectra are wide and overlapping. The wide spectral nature of these filters is necessary to capture the continuous spectrum of colors in the real world (everything we call “red” does not have the same wavelength). If the filters were very narrow and only captured a single wavelength, then any colors that were not that wavelength would appear black. Each photosite captures intensity information for all colors, but the filtering biases it toward a band of colors.

Almost everything (other than spectral lines from plasmas, lasers, and some test patterns) that can be seen in the real world is not a single wavelength but a mix of wavelengths. There is even the unusual case of magenta, which does not have a wavelength (and thus, many claim it is not a color) but is a mix of blue and red. With a typical photo, we have wide-spectrum filters capturing wide-spectrum colors.

It turns out that humans sense resolution mostly in intensity and not color. This fact was exploited to reduce the bandwidth of early color television and is used to reduce data in video and image compression algorithms. Thanks to the overlap of the camera’s color filters, there is considerable intensity information in the various color pixels.

Human Vision and Color

If the camera sensor’s Bayer pattern and overlapping color filter spectra seem bad, consider the human retina. On average, humans have 7 million cones in the retina, of which ~64% are long-wavelength (L, “red”), ~32% medium (M, “green”), and ~2% short (S, “blue”). However, these percentages vary widely from person to person, particularly the percentage of short/blue cones. The cones, which sense color and support high-resolution vision, are concentrated in the center of vision.

Notice the spectral response of the so-called red, green, and blue cones (below left) and compare it to the camera sensor filters’ response above. Note how much the “red” and “green” responses overlap. On the right is a typical distribution of cones near the fovea (center) of vision; note that there are zero “blue”/short cones in the very center of the fovea. It makes the Bayer pattern look great 😁.

Acuity of the Eye

Next, we have the fact that the cones are concentrated in the center of vision and that visual acuity falls off rapidly. The charts below show the distribution of rods and cones in the eye (left) and the sharp fall-off in visual acuity away from the center of vision.

Saccadic Eye Movement – The Eyes’ “Pixel Shifting”

Looking at the distribution of cones and the lack of visual acuity outside the fovea, you might wonder how humans see anything in detail. The eye constantly moves in a mix of large and small steps known as saccades. The eye tends to blank while it moves and then takes a metaphorical snapshot. The visual cortex takes the saccade’s “snapshots” and forms a composite image. In effect, the human visual system is doing “pixel shifting.”

My Use of Pixel Shifting (IBIS High-Res)

I am a regular user of IBIS High Resolution on this blog. Taking pictures of displays, with their regular patterns, is particularly prone to moiré. Plus, with the limited lenses I can use, which are all wide-angle (and thus low magnification), it helps to get some more resolution. A single 405MP (24,576 by 16,384 pixels) IBIS High-Resolution image can capture a ~100-degree-wide FOV and still resolve the individual pixels of a 4K display device.

The feature seems a bit of an afterthought on the R5, with its JPEG-only output. Even with the camera on a tripod, it sometimes screws up, so I usually take three shots just in case; I will only know later, when I look at the results blown up on a monitor, whether one of them messed up. The close-in crops (right) are from two back-to-back shots with IBIS High Res. In the bad shot, you can see how the edges look feathered/jagged (particularly comparing vertical elements like the “l” in Arial). I would much rather have had IBIS HR output the 9 RAW images.

IBIS High-Res Comparison to Native Resolution

IBIS High Res helps provide higher resolution and can significantly reduce moiré. Even when the IBIS High Res image is reduced to the size of a “native”-resolution picture, it has much less moiré and is a bit sharper, as shown below.

The crops below show the IBIS High Res image at full resolution and the native resolution scaled up to match, along with insets of the IBIS High Res picture scaled down to match the native resolution.

The Image below was taken in IBIS High Resolution and then scaled down by 33.33% for publication on this blog (from the article AWE 2024 VR – Hypervision, Sony XR, Big Screen, Apple, Meta, & LightPolymers).

The crops below compare the IBIS High Res at full resolution to a native image upscaled by 300%. Notice how the IBIS High Res has better color detail. If you look at the white tower on a diagonal in the center of the picture (pointed to by the red arrow), you can see the red (on the left) and blue chroma aberrations caused by the headset’s optics, but these and other color details are lost in the native shot.

Conclusions

While my specific needs are a little special, I think Canon is missing out on a wealth of computational photography options by not supporting IBIS High Res with RAW output. The obvious benefits are helping with moiré and getting higher-resolution still lifes. By storing RAW frames, there is also the opportunity to deal with movement in the scene and even with hand-held shooting. It would be great to have the option to control the shift amount (shifts of 1/3 and 1/2 pixel would be good options) and the number of pictures. For example, it would be good to capture more than one “cycle” to help deal with motion.

Smartphones are cleaning up against dedicated cameras by using “computational photography” to make small sensors with mediocre optics look very good. Imagine what could be done with better lenses and cameras. Sony, a leader in cell phone sensors, knows this and offers pixel shift with RAW output. I don’t understand why Canon is ceding pixel shift to Sony and Nikon. Hopefully, it will come back as a firmware update, as IBIS High Res did on the original R5. Only this time, please save the RAW/cRAW files.

In related news, I’m working on an article about Texas Instruments’ renewed thrust into AR with DLP. TI DLP has been working with poLight to support pixel shift (link to video with poLight) for resolution enhancement with AR glasses (see also Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6)).

The Saga of AD-X2, the Battery Additive That Roiled the NBS



Senate hearings, a post office ban, the resignation of the director of the National Bureau of Standards, and his reinstatement after more than 400 scientists threatened to resign. Who knew a little box of salt could stir up such drama?

What was AD-X2?

It all started in 1947 when a bulldozer operator with a 6th grade education, Jess M. Ritchie, teamed up with UC Berkeley chemistry professor Merle Randall to promote AD-X2, an additive to extend the life of lead-acid batteries. The problem of these rechargeable batteries’ dwindling capacity was well known. If AD-X2 worked as advertised, millions of car owners would save money.

Jess M. Ritchie demonstrates his AD-X2 battery additive before the Senate Select Committee on Small Business. Credit: National Institute of Standards and Technology Digital Collections

A basic lead-acid battery has two electrodes, one of lead and the other of lead dioxide, immersed in dilute sulfuric acid. When power is drawn from the battery, the chemical reaction consumes the acid, and lead sulfate is deposited on the electrodes. When the battery is charged, the chemical process reverses, returning the electrodes to their original state—almost. Each time the cell is discharged, the lead sulfate “hardens,” and less of it can dissolve back into the sulfuric acid. Over time, it flakes off, and the battery loses capacity until it’s dead.
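For reference, the chemistry described above is the standard overall lead-acid cell reaction (a textbook equation, not anything specific to the AD-X2 case); discharge reads left to right, and charging runs it in reverse:

```latex
\mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4}
  \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O}
```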

By the 1930s, so many companies had come up with battery additives that the U.S. National Bureau of Standards stepped in. Its lab tests revealed that most were variations of salt mixtures, such as sodium and magnesium sulfates. Although the additives might help the battery charge faster, they didn’t extend battery life. In May 1931, NBS (now the National Institute of Standards and Technology, or NIST) summarized its findings in Letter Circular No. 302: “No case has been found in which this fundamental reaction is materially altered by the use of these battery compounds and solutions.”

Of course, innovation never stops. Entrepreneurs kept bringing new battery additives to market, and the NBS kept testing them and finding them ineffective.

Do battery additives work?

After World War II, the National Better Business Bureau decided to update its own publication on battery additives, “Battery Compounds and Solutions.” The publication included a March 1949 letter from NBS director Edward Condon, reiterating the NBS position on additives. Prior to heading NBS, Condon, a physicist, had been associate director of research at Westinghouse Electric in Pittsburgh and a consultant to the National Defense Research Committee. He helped set up MIT’s Radiation Laboratory, and he was also briefly part of the Manhattan Project. Needless to say, Condon was familiar with standard practices for research and testing.

Meanwhile, Ritchie claimed that AD-X2’s secret formula set it apart from the hundreds of other additives on the market. He convinced his senator, William Knowland, a Republican from Oakland, Calif., to write to NBS and request that AD-X2 be tested. NBS declined, not out of any prejudice or ill will, but because it tested products only at the request of other government agencies. The bureau also had a longstanding policy of not naming the brands it tested and not allowing its findings to be used in advertisements.

AD-X2 consisted mainly of Epsom salt and Glauber’s salt. Credit: National Institute of Standards and Technology Digital Collections

Ritchie cried foul, claiming that NBS was keeping new businesses from entering the marketplace. Merle Randall launched an aggressive correspondence with Condon and George W. Vinal, chief of NBS’s electrochemistry section, extolling AD-X2 and the testimonials of many users. In its responses, NBS patiently pointed out the difference between anecdotal evidence and rigorous lab testing.

Enter the Federal Trade Commission. The FTC had received a complaint from the National Better Business Bureau, which suspected that Pioneers, Inc.—Randall and Ritchie’s distribution company—was making false advertising claims. On 22 March 1950, the FTC formally asked NBS to test AD-X2.

By then, NBS had already extensively tested the additive. A chemical analysis revealed that it was 46.6 percent magnesium sulfate (Epsom salt) and 49.2 percent sodium sulfate (Glauber’s salt, a horse laxative), with the remainder being water of hydration (water chemically bound up in the salts). That is, AD-X2 was similar in composition to every other additive on the market. But because of its policy of not disclosing which brands it tested, NBS didn’t immediately announce what it had learned.

The David and Goliath of battery additives

NBS then did something unusual: It agreed to ignore its own policy and let the National Better Business Bureau include the results of its AD-X2 tests in a public statement, which was published in August 1950. The NBBB allowed Pioneers to include a dissenting comment: “These tests were not run in accordance with our specification and therefore did not indicate the value to be derived from our product.”

Far from being cowed by the NBBB’s statement, Ritchie was energized, and his story was taken up by the mainstream media. Newsweek’s coverage pitted an up-from-your-bootstraps David against an overreaching governmental Goliath. Trade publications, such as Western Construction News and Batteryman, also published flattering stories about Pioneers. AD-X2 sales soared.

Then, in January 1951, NBS released its updated pamphlet on battery additives, Circular 504. Once again, tests by the NBS found no difference in performance between batteries treated with additives and the untreated control group. The Government Printing Office sold the circular for 15 cents, and it was one of NBS’s most popular publications. AD-X2 sales plummeted.

Ritchie needed a new arena in which to challenge NBS. He turned to politics. He called on all of his distributors to write to their senators. Between July and December 1951, 28 U.S. senators and one U.S. representative wrote to NBS on behalf of Pioneers.

Condon was losing his ability to effectively represent the Bureau. Although the Senate had confirmed Condon’s nomination as director without opposition in 1945, he was under investigation by the House Committee on Un-American Activities for several years. FBI Director J. Edgar Hoover suspected Condon to be a Soviet spy. (To be fair, Hoover suspected the same of many people.) Condon was repeatedly cleared and had the public backing of many prominent scientists.

But Condon felt the investigations were becoming too much of a distraction, and so he resigned on 10 August 1951. Allen V. Astin became acting director, and then permanent director the following year. And he inherited the AD-X2 mess.

Astin had been with NBS since 1930. Originally working in the electronics division, he developed radio telemetry techniques, and he designed instruments to study dielectric materials and measurements. During World War II, he shifted to military R&D, most notably development of the proximity fuse, which detonates an explosive device as it approaches a target. I don’t think that work prepared him for the political bombs that Ritchie and his supporters kept lobbing at him.

Mr. Ritchie almost goes to Washington

On 6 September 1951, another government agency entered the fray. C.C. Garner, chief inspector of the U.S. Post Office Department, wrote to Astin requesting yet another test of AD-X2. NBS dutifully submitted a report that the additive had “no beneficial effects on the performance of lead acid batteries.” The post office then charged Pioneers with mail fraud, and Ritchie was ordered to appear at a hearing in Washington, D.C., on 6 April 1952. More tests were ordered, and the hearing was delayed for months.

Back in March 1950, Ritchie had lost his biggest champion when Merle Randall died. In preparation for the hearing, Ritchie hired another scientist: Keith J. Laidler, an assistant professor of chemistry at the Catholic University of America. Laidler wrote a critique of Circular 504, questioning NBS’s objectivity and testing protocols.

Ritchie also got Harold Weber, a professor of chemical engineering at MIT, to agree to test AD-X2 and to work as an unpaid consultant to the Senate Select Committee on Small Business.

Life was about to get more complicated for Astin and NBS.

Why did the NBS Director resign?

Trying to put an end to the Pioneers affair, Astin agreed in the spring of 1952 that NBS would conduct a public test of AD-X2 according to terms set by Ritchie. Once again, the bureau concluded that the battery additive had no beneficial effect.

However, NBS deviated slightly from the agreed-upon parameters for the test. Although the bureau had a good scientific reason for the minor change, Ritchie had a predictably overblown reaction—NBS cheated!

Then, on 18 December 1952, the Senate Select Committee on Small Business—for which Ritchie’s ally Harold Weber was consulting—issued a press release summarizing the results from the MIT tests: AD-X2 worked! The results “demonstrate beyond a reasonable doubt that this material is in fact valuable, and give complete support to the claims of the manufacturer.” NBS was “simply psychologically incapable of giving Battery AD-X2 a fair trial.”

The National Bureau of Standards’ regular tests of battery additives found that the products did not work as claimed. Credit: National Institute of Standards and Technology Digital Collections

But the press release distorted the MIT results. The MIT tests had focused on diluted solutions and slow charging rates, not the normal use conditions for automobiles, and even then AD-X2’s impact was marginal. Once NBS scientists got their hands on the report, they identified the flaws in the testing.

How did the AD-X2 controversy end?

The post office finally got around to holding its mail fraud hearing in the fall of 1952. Ritchie failed to attend in person and didn’t realize his reports would not be read into the record without him, which meant the hearing was decidedly one-sided in favor of NBS. On 27 February 1953, the Post Office Department issued a mail fraud alert. All of Pioneers’ mail would be stopped and returned to sender stamped “fraudulent.” If this charge stuck, Ritchie’s business would crumble.

But something else happened during the fall of 1952: Dwight D. Eisenhower, running on a pro-business platform, was elected U.S. president in a landslide.

Ritchie found a sympathetic ear in Eisenhower’s newly appointed Secretary of Commerce Sinclair Weeks, who acted decisively. The mail fraud alert had been issued on a Friday. Over the weekend, Weeks had a letter hand-delivered to Postmaster General Arthur Summerfield, another Eisenhower appointee. By Monday, the fraud alert had been suspended.

What’s more, Weeks found that Astin was “not sufficiently objective” and lacked a “business point of view,” and so he asked for Astin’s resignation on 24 March 1953. Astin complied. Perhaps Weeks thought this would be a mundane dismissal, just one of the thousands of political appointments that change hands with every new administration. That was not the case.

More than 400 NBS scientists—over 10 percent of the bureau’s technical staff—threatened to resign in protest. The American Association for the Advancement of Science also backed Astin and NBS. In an editorial published in Science, the AAAS called the battery additive controversy itself “minor.” “The important issue is the fact that the independence of the scientist in his findings has been challenged, that a gross injustice has been done, and that scientific work in the government has been placed in jeopardy,” the editorial stated.

National Bureau of Standards director Edward Condon [left] resigned in 1951 because investigations into his political beliefs were impeding his ability to represent the bureau. Incoming director Allen V. Astin [right] inherited the AD-X2 controversy, which eventually led to Astin’s dismissal and then his reinstatement after a large-scale protest by NBS researchers and others. Credit: National Institute of Standards and Technology Digital Collections

Clearly, AD-X2’s effectiveness was no longer the central issue. The controversy was a stand-in for a larger debate concerning the role of government in supporting small business, the use of science in making policy decisions, and the independence of researchers. Over the previous few years, highly respected scientists, including Edward Condon and J. Robert Oppenheimer, had been repeatedly investigated for their political beliefs. The request for Astin’s resignation was yet another government incursion into scientific freedom.

Weeks, realizing his mistake, temporarily reinstated Astin on 17 April 1953, the day the resignation was supposed to take effect. He also asked the National Academy of Sciences to test AD-X2 in both the lab and the field. By the time the academy’s report came out in October 1953, Weeks had permanently reinstated Astin. The report, unsurprisingly, concluded that NBS was correct: AD-X2 had no merit. Science had won.

NIST makes a movie

On 9 December 2023, NIST released the 20-minute docudrama The AD-X2 Controversy. The film won the Best True Story Narrative and Best of Festival at the 2023 NewsFest Film Festival. I recommend taking the time to watch it.

Video: “The AD-X2 Controversy,” available on YouTube (www.youtube.com)

Many of the actors are NIST staff and scientists, and they really get into their roles. Much of the dialogue comes verbatim from primary sources, including congressional hearings and contemporary newspaper accounts.

Despite being an in-house production, NIST’s film has a Hollywood connection. The film features brief interviews with actors John and Sean Astin (of Lord of The Rings and Stranger Things fame)—NBS director Astin’s son and grandson.

The AD-X2 controversy is just as relevant today as it was 70 years ago. Scientific research, business interests, and politics remain deeply entangled. If the public is to have faith in science, it must have faith in the integrity of scientists and the scientific method. I have no objection to science being challenged—that’s how science moves forward—but we have to make sure that neither profit nor politics is tipping the scales.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2024 print issue as “The AD-X2 Affair.”

References


I first heard about AD-X2 after my IEEE Spectrum editor sent me a notice about NIST’s short docudrama The AD-X2 Controversy, which you can, and should, stream online. NIST held a colloquium on 31 July 2018 with John Astin and his brother Alexander (Sandy), where they recalled what it was like to be college students when their father’s reputation was on the line. The agency has also compiled a wonderful list of resources, including many of the primary source government documents.

The AD-X2 controversy played out in the popular media, and I read dozens of articles following the almost daily twists and turns in the case in the New York Times, Washington Post, and Science.

I found Elio Passaglia’s A Unique Institution: The National Bureau of Standards 1950-1969 to be particularly helpful. The AD-X2 controversy is covered in detail in Chapter 2: Testing Can Be Troublesome.

A number of graduate theses have been written about AD-X2. One I consulted was Samuel Lawrence’s 1958 thesis “The Battery AD-X2 Controversy: A Study of Federal Regulation of Deceptive Business Practices.” Lawrence also published the 1962 book The Battery Additive Controversy.


Inside the Three-Way Race to Create the Most Widely Used Laser



The semiconductor laser, invented more than 60 years ago, is the foundation of many of today’s technologies including barcode scanners, fiber-optic communications, medical imaging, and remote controls. The tiny, versatile device is now an IEEE Milestone.

The possibilities of laser technology had set the scientific world alight in 1960, when the laser, long described in theory, was first demonstrated. Three U.S. research centers unknowingly began racing each other to create the first semiconductor version of the technology. The three—General Electric, IBM’s Thomas J. Watson Research Center, and the MIT Lincoln Laboratory—independently reported the first demonstrations of a semiconductor laser, all within a matter of days in 1962.

The semiconductor laser was dedicated as an IEEE Milestone at three ceremonies, with a plaque marking the achievement installed at each facility. The Lincoln Lab event is available to watch on demand.

Invention of the laser spurs a three-way race

The core concept of the laser dates back to 1917, when Albert Einstein theorized about “stimulated emission.” Scientists already knew electrons could absorb and emit light spontaneously, but Einstein posited that electrons could be manipulated to emit at a particular wavelength. It took decades for engineers to turn his theory into reality.

In the late 1940s, physicists were working to improve the design of a vacuum tube used by the U.S. military in World War II to detect enemy planes by amplifying their signals. Charles Townes, a researcher at Bell Labs in Murray Hill, N.J., was one of them. He proposed creating a more powerful amplifier that passed a beam of electromagnetic waves through a cavity containing gas molecules. The beam would stimulate the atoms in the gas to release their energy exactly in step with the beam’s waves, allowing the beam to exit the cavity greatly amplified.

In 1954 Townes, then a physics professor at Columbia, created the device, which he called a “maser” (short for microwave amplification by stimulated emission of radiation). It would prove an important precursor to the laser.

Many theorists had told Townes his device couldn’t possibly work, according to an article published by the American Physical Society. Once it did work, the article says, other researchers quickly replicated it and began inventing variations.

Townes and other engineers figured that by harnessing higher-frequency energy, they could create an optical version of the maser that would generate beams of light. Such a device potentially could generate more powerful beams than were possible with microwaves, but it also could create beams of varied wavelengths, from the infrared to the visible. In 1958 Townes published a theoretical outline of the “laser.”

“It’s amazing what these … three organizations in the Northeast of the United States did 62 years ago to provide all this capability for us now and into the future.”

Several teams worked to fabricate such a device, and in May 1960 Theodore Maiman, a researcher at Hughes Research Lab, in Malibu, Calif., built the first working laser. Maiman’s paper, published in Nature three months later, described the invention as a high-power lamp that flashed light onto a ruby rod placed between two mirrorlike silver-coated surfaces. The optical cavity created by the surfaces oscillated the light produced by the ruby’s fluorescence, achieving Einstein’s stimulated emission.

The basic laser was now a reality. Engineers quickly began creating variations.

Many perhaps were most excited by the potential for a semiconductor laser. Semiconducting material can be manipulated to conduct electricity under the right conditions. By its nature, a laser made from semiconducting material could pack all the required elements of a laser—a source of light generation and amplification, lenses, and mirrors—into a micrometer-scale device.

“These desirable attributes attracted the imagination of scientists and engineers” across disciplines, according to the Engineering and Technology History Wiki.

A pair of researchers discovered in 1962 that an existing material was a great laser semiconductor: gallium arsenide.

Gallium arsenide was ideal for a semiconductor laser

On 9 July 1962, MIT Lincoln Laboratory researchers Robert Keyes and Theodore Quist told the audience at the Solid State Device Research Conference that they were developing an experimental semiconductor laser, IEEE Fellow Paul W. Juodawlkis said during his speech at the IEEE Milestone dedication ceremony at MIT. Juodawlkis is director of the MIT Lincoln Laboratory’s quantum information and integrated nanosystems group.

The laser wasn’t yet emitting a coherent beam, but the work was advancing quickly, Keyes said. And then Keyes and Quist shocked the audience: They said they could prove that nearly 100 percent of the electrical energy injected into a gallium-arsenide semiconductor could be converted into light.

MIT Lincoln Laboratory’s [from left] Robert Keyes, Theodore M. Quist, and Robert Rediker testing their laser on a TV set. Credit: MIT Lincoln Laboratory

No one had made such a claim before. The audience was incredulous—and vocally so.

“When Bob [Keyes] was done with his talk, one of the audience members stood up and said, ‘Uh, that violates the second law of thermodynamics,’” Juodawlkis said.

The audience erupted into laughter. But physicist Robert N. Hall—a semiconductor expert working at GE’s research laboratory in Schenectady, N.Y.—silenced them.

“Bob Hall stood up and explained why it didn’t violate the second law,” Juodawlkis said. “It created a real buzz.”

Several teams raced to develop a working semiconductor laser. The margin of victory ultimately came down to a few days.

A ‘striking coincidence’

A semiconductor laser is made with a tiny semiconductor crystal that is suspended inside a glass container filled with liquid nitrogen, which helps keep the device cool. Credit: General Electric Research and Development Center/AIP Emilio Segrè Visual Archives

Hall returned to GE, inspired by Keyes and Quist’s speech, certain that he could lead a team to build an efficient, effective gallium arsenide laser.

He had already spent years working with semiconductors and invented what is known as a “p-i-n” diode rectifier. Using a crystal made of purified germanium, a semiconducting material, the rectifier could convert AC to DC—a crucial development for solid-state semiconductors used in electrical transmission.

That experience helped accelerate the development of semiconductor lasers. Hall and his team used a setup similar to the “p-i-n” rectifier. They built a diode laser that generated coherent light from a gallium arsenide crystal one-third of a millimeter in size, sandwiched into a cavity between two mirrors so the light bounced back and forth repeatedly. The news of the invention came out in the 1 November 1962 issue of Physical Review Letters.

As Hall and his team worked, so did researchers at the Watson Research Center, in Yorktown Heights, N.Y. In February 1962 Marshall I. Nathan, an IBM researcher who previously worked with gallium arsenide, received a mandate from his department director, according to ETHW: Create the first gallium arsenide laser.

Nathan led a team of researchers including William P. Dumke, Gerald Burns, Frederick H. Dill, and Gordon Lasher, to develop the laser. They completed the task in October and hand-delivered a paper outlining their work to Applied Physics Letters, which published it on 1 November 1962.

Over at MIT’s Lincoln Laboratory, Quist, Keyes, and their colleague Robert Rediker published their findings in Applied Physics Letters on 1 December 1962.

It had all happened so quickly that a New York Times article marveled about the “striking coincidence,” noting that IBM officials didn’t know about GE’s success until GE sent invitations to a news conference. An MIT spokesperson told the Times that GE had achieved success “a couple days or a week” before its own team.

Both IBM and GE had applied for U.S. patents in October, and both patents were ultimately granted.

All three facilities now have been honored by IEEE for their work.

“Perhaps nowhere else has the semiconductor laser had greater impact than in communications,” according to an ETHW entry, “where every second, a semiconductor laser quietly encodes the sum of human knowledge into light, enabling it to be shared almost instantaneously across oceans and space.”

IBM Research’s semiconductor laser used a gallium arsenide p-n diode, which was patterned into a small optical cavity with an etched mesa structure. Credit: IBM

Juodawlkis, speaking at the Lincoln Lab ceremony, noted that semiconductor lasers are used “every time you make a cellphone call” or “Google silly cat videos.”

“If we look in the broader world,” he said, “semiconductor lasers are really one of the founding pedestals of the information age.”

He concluded his speech with a quote summing up a 1963 Time magazine article: “If the world is ever afflicted with a choice between thousands of different TV programs, a few diodes with their feeble beams of infrared light might carry them all at once.”

That was a “prescient foreshadowing of what semiconductor lasers have enabled,” Juodawlkis said. “It’s amazing what these … three organizations in the Northeast of the United States did 62 years ago to provide all this capability for us now and into the future.”

Plaques recognizing the technology are now displayed at GE, the Watson Research Center, and the Lincoln Laboratory. They read:

In the autumn of 1962, General Electric’s Schenectady and Syracuse facilities, IBM Thomas J. Watson Research Center, and MIT Lincoln Laboratory each independently reported the first demonstrations of the semiconductor laser. Smaller than a grain of rice, powered using direct current injection, and available at wavelengths spanning the ultraviolet to the infrared, the semiconductor laser became ubiquitous in modern communications, data storage, and precision measurement systems.

The IEEE Boston, New York, and Schenectady sections sponsored the nomination.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

Edith Clarke: Architect of Modern Power Distribution



Edith Clarke was a powerhouse in practically every sense of the word. From the start of her career at General Electric in 1922, she was determined to develop stable, more reliable power grids.

And Clarke succeeded, playing a critical role in the rapid expansion of the North American electric grid during the 1920s and ’30s.

During her first years at GE she invented what came to be known as the Clarke calculator. The slide rule let engineers solve equations involving electric current, voltage, and impedance 10 times faster than by hand.

Her calculator and the power distribution methods she developed paved the way for modern grids. She also worked on hydroelectric power plant designs, according to a 2022 profile in Hydro Review.

She broke down barriers during her life. In 1919 she became the first woman to earn a master’s degree in electrical engineering from MIT. Three years later, she became the first woman in the United States to work as an electrical engineer.

Her life is chronicled in Edith Clarke: Trailblazer in Electrical Engineering. Written by Paul Lief Rosengren, the book is part of IEEE-USA’s Famous Women Engineers in History series.

Becoming the first female electrical engineer

Clarke was born in 1883 in the small farming community of Ellicott City, Md. At the time, few women attended college, and those who did tended to be barred from taking engineering classes. She was orphaned at 12, according to Sandy Levins’s Wednesday’s Women website. After high school, Clarke used a small inheritance from her parents to attend Vassar, a women’s college in Poughkeepsie, N.Y., where she earned a bachelor’s degree in mathematics and astronomy in 1908. Those subjects were the closest equivalents to an engineering degree available to Vassar students at the time.

In 1912 Clarke was hired by AT&T in New York City as a computing assistant. She worked on calculations for transmission lines and electric circuits. During the next few years, she developed a passion for power engineering. She enrolled at MIT in 1918 to further her career, according to her Engineering and Technology History Wiki biography.

After graduating, though, she had a tough time finding a job in the male-dominated field. After months of applying with no luck, she landed a job at GE in Boston, where she did more or less the same work as in her previous role at AT&T, except now as a supervisor. Clarke led a team of computers—employees (mainly women) who performed long, tedious calculations by hand before computing machines became widely available.

The Clarke calculator let engineers solve equations involving electric current, voltage, and impedance 10 times faster than by hand. Clarke was granted a U.S. patent for the slide rule in 1925. Credit: Science History Images/Alamy

While at GE she developed her calculator, eventually earning a patent for it in 1925.

In 1921 Clarke left GE to become a full-time physics professor at Constantinople Women’s College, in what is now Istanbul, according to a profile by the Edison Tech Center. But she returned to GE a year later when it offered her a salaried electrical engineering position in its Central Station Engineering department in Boston.

Although Clarke didn’t earn the same pay or enjoy the same prestige as her male colleagues, the new job launched her career.

U.S. power grid pioneer

According to Rosengren’s book, during Clarke’s time at GE, transmission lines were getting longer, and larger power loads were increasing the chances of instability. Mathematical models for assessing grid reliability at the time were better suited to smaller systems.

To model systems and power behavior, Clarke created a technique using symmetrical components—a method of converting three-phase unbalanced systems into two sets of balanced phasors and a set of single-phase phasors. The method allowed engineers to analyze the reliability of larger systems.
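To make the method concrete, here is a small numeric sketch in Python/NumPy of the standard symmetrical-component decomposition the article refers to. The matrix is the general textbook form, and the phasor values are invented for illustration; this is not reproduced from Clarke’s 1925 paper.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
A = (1 / 3) * np.array([[1, 1,    1   ],   # zero-sequence row
                        [1, a,    a**2],   # positive-sequence row
                        [1, a**2, a   ]])  # negative-sequence row

# An unbalanced set of three-phase voltage phasors (per unit), phases a, b, c:
v_abc = np.array([1.0 + 0j,
                  0.8 * np.exp(-1j * np.deg2rad(125)),
                  1.1 * np.exp(+1j * np.deg2rad(118))])

v_zero, v_pos, v_neg = A @ v_abc
print(abs(v_zero), abs(v_pos), abs(v_neg))  # magnitudes of the balanced sequence sets
```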

Vivien Kellems [left] and Clarke, two of the first women to become full voting members of the American Institute of Electrical Engineers, meeting for the first time in GE’s laboratories in Schenectady, N.Y. Credit: Bettmann/Getty Images

Clarke described the technique in “Steady-State Stability in Transmission Systems,” which was published in 1925 in A.I.E.E. Transactions, a journal of the American Institute of Electrical Engineers, one of IEEE’s predecessors. Clarke had scored another first: the first woman to have her work appear in the journal.

In the 1930s, Clarke designed the turbine system for the Hoover Dam, a hydroelectric power plant on the Colorado River between Nevada and Arizona. The dam’s electricity was produced by massive GE generators. Clarke’s pioneering system later was installed in similar power plants throughout the western United States.

Clarke retired in 1945 and bought a farm in Maryland. She came out of retirement two years later and became the first female electrical engineering professor in the United States when she joined the University of Texas, Austin. She retired for good in 1956 and returned to Maryland, where she died in 1959.

First female IEEE Fellow

Clarke’s pioneering work earned her several recognitions never before bestowed on a woman. She was the first woman to become a full voting member of the AIEE and its first female Fellow, in 1948.

She received the 1954 Society of Women Engineers Achievement Award “in recognition of her many original contributions to stability theory and circuit analysis.” She was posthumously elected in 2015 to the National Inventors Hall of Fame.

Apple Vision Pro (AVP), It Begins and iFixit’s “Extreme Unboxing”

Introduction

Today, I picked up my Apple Vision Pro (AVP) at the Apple Store. I won’t bother you with yet another unboxing video. When you pick it up at the store, they give you a nice custom-made shopping bag for the AVP’s box (left). They give you about a 30-minute guided tour with a store-owned demo headset, and when you are all done with the tour, they give you yours in a sealed box.

iFixit asked if I would help identify some of the optics during their AVP “Extreme Unboxing” (it is Apple; we need a better word for “teardown”). I have helped iFixit in the past with their similar efforts on the Magic Leap One and Meta Quest Pro and readily agreed to help in any way that I could.

iFixit’s “Extreme Unboxing”

As is iFixit’s usual habit, they took the unboxing of a new product to the extreme. They published the first of several videos of their extreme unboxing of the AVP today (Feb. 3rd, 2024). You can expect more videos to follow.

Perhaps the most unexpected thing iFixit showed in the first video is that the Eyesight (front display) has more than a single lenticular lens in front of the Eyesight’s OLED display. There is a second lens-like element and/or a brightness enhancement film (BEF). A BEF is a film with a series of triangular refraction elements that act in one direction, similar to a lenticular lens.

iFixit also showed a glimpse of the AVP’s pancake optics and the OLED microdisplay used for each eye toward the end of the video. The AVP uses pancake optics as described in Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis.

Closing

That’s it for today. I mostly wanted to let everyone know about the iFixit extreme unboxing. I have a lot of work to do to analyze the Apple Vision Pro.
