Meta Orion (Pt. 3 Response to Meta CTO on Eye Glow and Transparency)

Introduction: Challenge from Bosworth Accepted

Several people pointed me to an interesting Instagram video AMA (ask me anything) by Meta CTO Andrew Bosworth on October 21, 2024, that appeared to challenge my October 6th article, Meta Orion AR Glasses (Pt. 1 Waveguides), which discussed both transparency and “Eye Glow” (what Bosworth referred to as “Blue Glow”) — Challenge Accepted.

On the right is a Google Search for “Meta” [and] “Orion” [and] “Eye Glow” OR “Blue Glow” from Sept 7th (Orion announced) through Oct 28, 2024. Everything pertinent to the issue was from this blog or was citing this blog. A Google Search for “Meta Orion” and “blue glow” returns nothing.

As far as I can find, this blog (and a few other sites citing this blog) has been the only one reporting on Meta Orion’s transparency or Eye Glow. So when Bosworth said, “Another thing that was kind of funny about the reviews is people were like, oh, you know you can see the blue glow well,” who else could he be referring to?

Housekeeping – Parts 2 and 3 of the Snap and Orion Roundtable are to be released soon.

The rest of the two-hour roundtable discussion about Snap Spectacles and Meta Orion should be released soon. Part 2 will focus on Meta Orion. Part 3 will discuss more applications and market issues, along with some scuttlebutt about Meta’s EMG wristband controller.

Bosworth’s Statement on Transparency and Eye Glow in Instagram AMA Video – Indirect Shout Out to this Blog

Below is a computer transcription (with minor edits to clean up the speech-to-text and add punctuation and capitalization) of Bosworth’s October 21, 2024, AMA on Instagram, starting at about 14:22 into the video.

14:22 Question: What % of light does Orion block from your view of the world, how much is it darkened?

I don’t know exactly. So, all glass limits transmission to some degree. So, even if you have completely clear glasses, you know, maybe they take you from 100% transmission up your eyes like 97% um, and normal sunglasses that you have are much darker than you think they’re like 17% transmissive is like a standard for sunglasses.  Orion is clear. It’s closer [to clear], I don’t know what the exact number is, but it’s closer to regular prescription glasses than any kind of glasses [in context, he sounds like he is referring to other AR glasses]. There’s no tint on it [Orion]. We did put tint on a couple of demo units so we could see what that looked like, but that’s not how they [Orion] work.

I won’t get into the electrochromic and that kind of stuff.  Some people were theorizing that they were tinted to increase contrast. This is not uncommon [for AR] glasses. We’re actually quite proud that these were not. If I was wearing them, and you’re looking at my eyes, you would just see my eyes.

Note that Bosworth mentioned electrochromic [dimming] but “won’t get into it.” As I stated in Orion Part 1, I believe Orion has electrochromic (electrically controlled) dimming. While not asked, Bosworth gratuitously discusses “Blue Glow,” which in context can only mean “Eye Glow.”

Another thing that was kind of funny about the reviews is people were like, oh, you know you can see the blue glow well. What we noticed was so funny was the photographers from the press who were taking pictures of the glasses would work hard to get this one angle, which is like 15 degrees down and to the side where you do see the blue glow.  That’s what we’re actually shunting the light to. If you’re standing in front of me looking at my eyes, you don’t see the glow, you just see my eyes. We really worked hard on that we’re very proud of it.

But of course, if you’re the person who’s assigned by some journalist outfit to take pictures of these new AR glasses, you want to have pictures that look like you can see something special or different about them. It was so funny as every Outlet included that one angle. And if you look at them all now, you’ll see that they’re all taken from this one specific down and to the side angle.

As far as I can find (it’s difficult to search), this blog is the only place that has discussed the transparency percentage of Orion’s glasses (see: Light Transmission (Dimming?)). Also, as discussed in the introduction, this blog is the only one discussing eye glow (see Eye Glow) in the same article. Then, consider how asking about the percentage of light blockage caused Bosworth to discuss blue [eye] glow — a big coincidence?

But what caused me to write this article is the factually incorrect statement that the only place [the glow] is visible is from “15 degrees down and to the side.” He doth protest too much, methinks.

Orion’s Glow is in pictures taken from more than “this one specific down and to the side angle”

To begin with, the image I showed in Meta Orion AR Glasses (Pt. 1 Waveguides) is a more or less straight-on shot from a video by The Verge (right). It is definitely not shot from a “down and to the side angle.”

In fact, I was able to find images with Bosworth in which the camera was roughly straight on, from down and to the side, and even looking down on the Orion glasses in Bosworth’s Sept. 25, 2024, Instagram video and in Adam Savage’s Tested video (far right below).

In the same The Verge video, there is eye glow with Mark Zuckerberg looking almost straight on into the camera and from about eye level to the side.

The eye glow was even captured by the person wearing another Orion headset when playing a pong-like game. The images below are composites of the Orion camera and what was shown in the glasses; thus, they are simulated views (and NOT through the Orion’s waveguide). The stills are from The Verge (left) and CNBC (right).

Below are more views of the eye-glow (mostly blue in this case) from the same The Verge video.

The eye glow still frames below were captured from a CNBC video.

Here are a few more examples of eye glow that were taken while playing the pong-like game from roughly the same location as the CNBC frames above right. They were taken from about even with the glasses but off to the side.

In summary, there is plenty of evidence that the eye glow from Meta’s Orion can be seen from many different angles, not just from the “down and to the side” angle that Bosworth describes.

Meta Orion’s Transparency and Electrochromic Dimming

Bosworth’s deflection on the question of Orion’s light transmission

Bosworth started by correctly saying that nothing manmade is completely transparent. A typical (uncoated) glass reflects about 8% of the light. Eyeglasses with good antireflective coatings reflect about 0.5%. The ANSI/ISEA Z87.1 safety glasses standard specifies “clear” as >85% transmission. Bosworth appears to catch himself, knowing that there is a definition for clear, and says that Orion is “closer to clear” than sunglasses at about 17%.

Bosworth then says there is “no tint” in Orion, but respectfully, that was NOT the question. He then says, “I won’t get into the electrochromic and that kind of stuff,” even though electrochromic dimming is likely a major factor in the light transmission. Any dimming technology I know of is going to block much more light than a typical waveguide. The transparency of Orion is a function of the waveguide, dimming layer, other optics layers, and inner and outer protection covers.

Since Bosworth evaded answering the question, I will work through it and try to get an answer. The process will include trying to figure out what kind of dimming I think Orion uses.

What type of electrochromic dimming is Orion Using?

First, I want to put in context what my first article was discussing regarding Orion’s Light Transmission (Dimming?). I was well aware that diffractive waveguides, even glass ones, alone are typically about 85-90% transmissive. From various photographs, I’m pretty sure Orion has some form of electrochromic dimming, as I stated in the first article. I could see the dimming change in one video, and in a view of the exploded parts, there appeared to be a dimming device. In that figure, the dimming device seems fairly transparent, on the order of the waveguides and other flat optics. What I was trying to figure out was whether they were using the more common polarization-based dimming or a non-polarization-based technology. The picture is inconclusive as to the type of dimming used, as the dimmer identified (by me) might be only the liquid crystal part of the shutter, with the polarizers, if there are any, in the cover glass or not shown.

The Magic Leap 2 uses polarization-based dimming (see: Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users). Polarization-based dimming is fast and gives a very wide range of dimming (from 10:1 to >100:1), but it requires the real-world light first to be polarized, and when everything is considered, it blocks more than 70% of the light. It’s also possible to get somewhat better transmission by using semi-polarizing polarizers, but that gives up a lot of dimming range to gain some transmission. Polarization also causes issues when looking at LCDs, such as computer monitors and some cell phones.
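
Below is a rough, illustrative loss budget showing why a polarization-based dimmer ends up blocking so much light in its clearest state. The individual numbers are my assumptions for a typical LC shutter stack, not measurements of any particular product:

```python
# Illustrative loss budget for a polarization-based dimmer in its clearest state.
# All numbers are rough assumptions for a typical LC shutter stack, not measurements.
polarizer = 0.42          # a real-world polarizer passes well under the ideal 50%
lc_cell_and_films = 0.90  # ITO, alignment layers, analyzer leakage, coatings, etc.
waveguide = 0.85          # typical diffractive waveguide transmission

total = polarizer * lc_cell_and_films * waveguide  # losses in series multiply
print(f"Approximate best-case transmission: {total:.0%}")  # ~32%, i.e. ~70% blocked
```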

Non-polarization dimming (see, for example, CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs) blocks less light in its most transmissive state but has less of a dimming range. For example, FlexEnable has a dimming cell that ranges from ~87% transmissive down to 35%, less than a 3:1 dimming range. Snap Spectacles 5 uses (based on a LinkedIn post that has since been removed) non-polarization-based electrochromic dimming by AlphaMicron, which they call e-Tint. Both AlphaMicron’s e-Tint and FlexEnable’s dimming use what is known as Guest-Host LC, which absorbs light rather than changing polarization.

Assuming Orion uses non-polarization dimming, I would assume that the waveguide and related optical surfaces have about 85-90% transmissivity and the non-polarization dimming about 70% to 80%. Since the two effects are multiplicative, that would put Orion in the 90% x 80% = 72% to 85% x 70% ≈ 60% range.
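
For those who want to check the arithmetic, below is a small sketch of that calculation; the transmission ranges are my assumptions from above, not Meta’s numbers:

```python
# Rough estimate of Orion's overall transmission, assuming the component
# transmissions discussed above (my guesses, not Meta's figures).
waveguide_and_optics = (0.85, 0.90)   # assumed range for waveguide + other flat optics
non_polarizing_dimmer = (0.70, 0.80)  # assumed clear-state range for a Guest-Host dimmer

# Losses in series multiply, so the overall transmission is the product.
low = waveguide_and_optics[0] * non_polarizing_dimmer[0]
high = waveguide_and_optics[1] * non_polarizing_dimmer[1]
print(f"Estimated overall transmission: {low:.0%} to {high:.0%}")  # roughly 60% to 72%
```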

Orion’s Dimming

Below are a series of images from videos by CNET, The Verge, and Bloomberg. Notice that CNET’s image appears to be much more transmissive. On both CNET and The Verge, I included eye glow pictures from a few frames later in the videos to prove both glasses were turned on. CNET’s Orion glasses are significantly more transparent than in any other Orion video I have seen (from over 10 I have looked at to date), even when looking at the same demos as in the other videos. I missed this big difference when preparing my original article and only discovered it when preparing this article.

Below are some more frame captures on the top row. On the bottom row, there are pictures of the Lumus Maximus (the most transparent waveguide I have seen), the WaveOptics Titan, the Magic Leap One (with no tint), and circular polarizing glasses for comparison. The circular polarizing glasses are approximately what I would expect if the Orion glasses were using polarizing dimming.

Snap Spectacles 5, which uses non-polarization dimming, is shown on the left. It compares reasonably well to the CNET image. Based on the available evidence, it appears that Orion must also be using a non-polarization-based electrochromic dimming technology. Per my prior estimate, this would put Orion’s best-case (CNET) transparency in the range of 60-70%.

What I don’t know is why CNET was so much more transparent than the others, even when they appear to be in similar lighting. My best guess is that the dimming feature was adjusted differently or disabled for the CNET video.

Why is Orion Using Electronic Dimming Indoors?

All the Orion videos I have seen indicate that Orion is adding electrochromic dimming when indoors. Even bright indoor lighting is much less bright than sunlight. Unlike the Snap Spectacles 5 (which also has electronic dimming) demos, Meta didn’t demo the unit outdoors. There can be several reasons, including:

  • The most obvious reason is the lack of display brightness.
  • For colors to “pop,” they need to be at least 8x brighter than the surroundings. Bright white objects in a well-lit room could be more than 50 nits, which would require a display of over 400 nits; maybe they couldn’t, or didn’t want to, go that bright for power/heat reasons.
  • Reduced heat of the MicroLEDs
  • Saves on battery life

Thinking about this issue made me notice that the walls in the demo room are painted a fairly dark color. Maybe it was a designer’s decision, but it also goes to my saying, “Demos Are a Magic Show,” and darker walls would make the AR display look better.

When this is added up, it suggests that the displays in the demos were likely outputting about 200 nits (just an educated guess). While ~200 nits would be a bright computer monitor, colors would be washed out in a well-lit room when viewed against a non-black background (monitors “bring their own black background”). Simply based on how they were demoed, I suspect that Snap Spectacles 5 is four to eight times brighter than Orion, with its dimming used to make it work outdoors (rather than indoors).
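
Below is a back-of-the-envelope sketch of that reasoning; the display brightness, wall brightness, and dimmer setting are all my guesses, consistent with the numbers discussed above:

```python
# Back-of-the-envelope check on whether a ~200-nit display "pops" indoors
# once the real world is dimmed. All values are my rough assumptions.
display_nits = 200          # guessed display brightness
white_wall_nits = 50        # bright white object in a well-lit room (from the bullet above)
dimmer_transmission = 0.35  # assumed darkened state of the dimmer

undimmed_ratio = display_nits / white_wall_nits
dimmed_ratio = display_nits / (white_wall_nits * dimmer_transmission)
print(f"Without dimming: {undimmed_ratio:.1f}x over the background (want ~8x to pop)")
print(f"With dimming:    {dimmed_ratio:.1f}x over the background")
```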

Conclusion and Comments

When I first watched Bosworth’s video, his argument that the eye glow could only be seen from one angle seemed persuasive. But then I went back to check and could easily see that what he stated was provably false. I’m left to speculate as to why he brought up the eye glow issue (as it was not the original question) and proceeded to give erroneous information. It did motivate me to understand Orion better😁.

Based on what I saw in the CNET picture and what is a reasonable assumption for the waveguide, non-polarizing dimmer, and other optics (with transparency being multiplicative and not additive), it pegs Orion in the 60% transparency range plus or minus about 5%.

Bosworth’s answer on transparency was evasive, saying there was no “tint,” which was a non-answer. He mentioned electrochromic dimming but didn’t say for sure that Orion was using it. In the end, he said Orion was closer to prescription glasses (which are about 90% transmissive uncoated and ~99.5% with anti-reflective coatings) than sunglasses at 17%. If we take uncoated glasses at 90% and sunglasses at 17%, then the midpoint between them would be about 53%, so Orion may be, at best, only slightly closer to uncoated eyeglasses than to sunglasses. There are waveguide-based AR glasses that are more transparent (but without dimming) than Orion.

Bosworth gave more of an off-the-cuff AMA and not a formal presentation for a broad audience, and some level of generalization and goofs are to be expected. While he danced around the transparency issue a bit, it was the “glow” statement and its specificity that I have more of an issue with.

Even though Bosworth is the CTO and head of Meta’s Reality Labs, his background is in software, not optics, so he may have been ill-informed rather than deliberately misleading. I generally find him likable in the videos, and he shares a lot of information (while I have met many people from Meta’s Reality Labs, I have not met Bosworth). At the same time, it sounds to my ear that when he discusses optics, he is parroting things he has been told, sometimes without fully understanding what he is saying. This is in sharp contrast to, say, Hololens’ former leader, Alex Kipman, who I believe out and out lied repeatedly.

Working on this article caused me to reexamine what Snap Spectacles was using for dimming. In my earlier look at AlphaMicron, I missed that AlphaMicron’s “e-Tint®” was a Guest Host dimming technology rather than a polarization-based one.

From the start, I was pretty sure Orion was using electrochromic dimming, but I was not sure whether it was polarization or non-polarization-based. In working through this article, I’m now reasonably certain it is a non-polarization-based dimming technology.

Working through this article, I realized that the available evidence also suggests that Orion’s display is not very bright. I would guess less than 200 nits, or at least they didn’t want to drive it brighter than that for very long.

Appendix: Determining the light blocking from videos is tricky

Human vision has a huge dynamic range and automatically adjusts as light varies. As Bosworth stated, typical sunglasses block more than 75% of the light. Human perception of brightness is roughly binary-logarithmic (it responds to factors of two rather than linear steps). If there is plenty of available light, most people will barely notice a 50% dimming.

When wearing AR glasses, a large percentage (for some AR headsets, nearly all) of the light needed to view the eye will pass through the AR lens optics twice (in and back out). Because light blocking in series is multiplicative, this can cause the eyes to look much darker than what the person perceives when looking through them.

I set up a simple test using a Wave Optics waveguide, which is ~85% transmissive, circular polarizing glasses (for 3-D movies) that were 33% transmissive, and a Magic Leap One waveguide (out of the frame) that was 70% transmissive. In the upper right, I have shown a few examples where I held a piece of white paper far enough away from the lens that the lens did not affect the illumination of the paper. On the lower right, I moved the paper up against the lens so the paper was primarily illuminated via the lens to demonstrate the light-blocking-squared effect.
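
Below is a small sketch of that double-pass effect using the same lens transmissions as in my test, with the simplifying assumption that all of the light illuminating the eye passes through the lens twice:

```python
# How dark the eye looks from outside vs. how dark the world looks to the wearer,
# assuming all of the light illuminating the eye passes through the lens twice.
lenses = {
    "WaveOptics waveguide": 0.85,
    "Circular polarizer":   0.33,
    "Magic Leap One":       0.70,
}

for name, t in lenses.items():
    # Wearer's view: one pass. Outside view of the eye: in and back out, so t squared.
    print(f"{name}: wearer sees {t:.0%} of the light, "
          f"but the eye appears lit at only ~{t * t:.0%}")
```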

Orion’s Silicon Carbide (SiC) is not significantly more transparent than glass. Most of the light blocking in a diffraction waveguide comes from the diffraction gratings, optical coatings, and the number of layers. Considering that Orion is a “hero prototype” with $5B in R&D expenses for only 1,000 units, its waveguides are probably more transparent than typical, but only by about 5%.

When looking at open glasses like Orion (unlike, say, Magic Leap or Hololens), the lenses block only part of the eye’s illumination, so you get something less than the square law effect. So, in judging the amount of light blocking, you also have to estimate how much light is getting around the lenses and frames.

Meta Orion AR (Pt. 2 Orion vs Wave Optics/Snap and Magic Leap Waveguides)

Update (Oct. 19th, 2024)

While the general premise of this article, that Meta Orion is using waveguide technology similar to that of Snap (Wave Optics) and Magic Leap 2, is correct, it turns out that a number of assumptions about the specifics of what the various companies actually used in their products were incorrect. One of my readers (who wishes to remain anonymous), with deep knowledge of waveguides, responded to my request for more information on the various waveguides. This person has both theoretical knowledge of waveguides and knowledge of what Meta Orion, Wave Optics (now Snap), Magic Leap Two, and Hololens 2 used.

My main error about the nature of waveguide “grating” structures was a bias toward linear gratings, with which I was more familiar. I overlooked the possibility that Wave Optics was using a set of “pillar” gratings that act like a 2D set of linear gratings.

A summary of the corrections:

  1. Hololens 2 had a two-sided waveguide. The left and right expansion gratings are on opposite sides of the waveguide.
  2. Prior Wave Optics (Snap) waveguides use a pillar-type 2-D diffraction grating on one side. There is a single waveguide for full color. The new Snap Spectacles 5 is likely (not 100% sure) using linear diffraction gratings on both sides of a single full-color waveguide, as shown in this article.
  3. Magic Leap Two uses linear diffraction gratings on both sides of the waveguide. It does use three waveguides.

The above corrections indicate that Meta Orion, Snap Spectacles 5 (Wave Optics), and Magic Leap all have overlapping linear gratings on both sides. Meta Orion and Snap likely use a single waveguide for full color, whereas the Magic Leap 2 has separate waveguides for the three primary colors.

I’m working on an article that will go into more detail and should appear soon, but I wanted to get this update out quickly.

Introduction and Background

After my last article, Meta Orion AR Glasses (Pt. 1 Waveguides), I got to thinking that the only other diffractive grating waveguide I have seen with 2-D (X-Y) expansion and exit gratings, as used in Meta’s Orion, was from Wave Optics (purchased by Snap in May 2021).

The unique look of Wave Optics waveguides is how I easily identified that Snap was using them before it was announced that Snap had bought Wave Optics in 2021 (see Exclusive: Snap Spectacles Appears to Be Using WaveOptics and [an LCOS] a DLP Display).

I then wondered what Magic Leap Two (ML2) did to achieve its 70-degree FOV and uncovered some more interesting information about Meta’s Orion. The more I researched ML2, the more similarities I found with Meta’s Orion. What started as a short observation that Meta Orion’s waveguide appears to share commonality with Snap (Wave Optics) waveguides ballooned up when I discovered/rediscovered the ML2 information.

Included in this article is some “background” information from prior articles to help compare and contrast what has been done before with what Meta’s Orion, Snap/Wave Optics, and Magic Leap Two are doing.

Diffractive Waveguide Background

I hadn’t previously looked in any detail at how Wave Optics’ diffraction gratings worked differently. All other diffraction (I don’t know about holographic) grating waveguides I had seen before used three (or four) separate gratings on the same surface of the glass. There was an Entrance Grating, a first expansion and turning grating, and then a second expansion and exit grating. The location and whether the first expansion grating was horizontal or vertical varied with different waveguides.

Hololens 2 had a variation with left and right horizontal expansion and turning gratings and a single exit grating to increase the field of view. Still, all the gratings were on the same side of the waveguide.

Diffraction gratings bend light based on wavelength, similar to a prism. But unlike a prism, a grating will bend the light into a series of “orders.” With a diffractive waveguide, only the light from one of these orders is used, and the rest of the light is not only wasted but can cause problems, including “eye glow” and reduced contrast in the overall system.

Because diffraction is wavelength-based, it bends different colors/wavelengths in different amounts. This causes issues when sending more than one color through a single waveguide/diffraction grating. These problems are compounded as the size of the exit grating and FOV increases. Several diffraction waveguide companies have one (full color), or two (red+blue and blue+green) waveguides for smaller FOVs and then use three waveguides for wider FOVs.
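
To illustrate the wavelength dependence, below is a small sketch using the grating equation for first-order in-coupling at normal incidence; the pitch, index, and wavelengths are illustrative numbers I picked, not Orion’s actual design values:

```python
import math

# Illustration of why one grating treats colors differently: the first-order
# in-coupling angle inside the substrate for normal-incidence light.
# Grating equation for normal incidence: n * sin(theta) = wavelength / pitch.
pitch_nm = 380          # assumed grating pitch (illustrative)
n = 1.8                 # assumed substrate index (high-index glass, illustrative)
wavelengths_nm = {"blue": 460, "green": 530, "red": 630}

critical = math.degrees(math.asin(1 / n))
print(f"TIR critical angle for n={n}: {critical:.1f} deg")

for color, wl in wavelengths_nm.items():
    angle = math.degrees(math.asin(wl / (n * pitch_nm)))
    print(f"{color}: {angle:.1f} deg inside the substrate")
# The colors end up at very different angles for the same grating, which is
# part of why wide-FOV designs often split colors across multiple waveguides.
```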

For more information, see Quick Background on Diffraction Waveguides, MicroLEDs and Waveguides: Millions of Nits-In to Thousands of Nits-Out with Waveguides, and Magic Leap, HoloLens, and Lumus Resolution “Shootout” (ML1 review part 3).

Meta Orion’s and Wave Optics Waveguides

I want to start with a quick summary of Orion’s waveguide, as the information and figures will be helpful in comparing it to that of Wave Optics (owned by Snap and in Snap’s Spectacles AR Glasses) and the ML2.

Summary of Orion’s waveguide from the last article

Orion’s waveguide appears to be using a waveguide substrate with one entrance grating per primary color and then two expansion and exit/output gratings. The two (crossed) output gratings are on opposite sides of the Silicon Carbide (SiC) substrate, whereas most diffractive waveguides use glass, and all the gratings are on one side.

Another interesting feature shown in the patents and discussed by Meta CTO Bosworth in some of his video interviews about Orion is “Disparity Correction,” which has an extra grating used by other optics and circuitry to detect if the waveguides are misaligned. This feature is not supported in Orion, but Bosworth says it will be included in future iterations that will move the input grating to the “eye side” of the waveguide. As shown in the figure below, and apparently in Orion, light enters the waveguide from the opposite side of the eyes. Since the projectors are on the eye side (in the temples), they require some extra optics, which, according to Bosworth, make the Orion frames thicker.

Wave Optics (Snap) Dual-Sided 2D Expanding Waveguide

Wave Optics US patent application 2018/0210205 is based on the first Wave Optics patent from the international application WO/2016/020643, first filed in 2014. FIG 3 (below) shows a 3-D representation of diffraction grating with an input grating (H0) and cross gratings (H1 and H2) on opposite sides of a single waveguide substrate.

The patent also shows that the cross gratings (H1 and H2) are on opposite sides of a single waveguide (FIG. 15B above) or one side of two waveguides (FIG. 15A above). I don’t know if Wave Optics (Snap) uses single- or double-sided waveguides in its current designs, but I would suspect it is double-sided.

While on the subject of Wave Optics waveguide design, I happen to have a picture of a Wave Optics 300mm glass wafer with 24 waveguides (right). I took the picture in the Schott booth at AR/VR/MR 2020. In the inset, I added Meta’s picture of the Orion 100mm SiC wafer, roughly to scale, with just four waveguides.

By the way, in my May 2021 article Exclusive: Snap Spectacles Appears to Be Using WaveOptics and [an LCOS] a DLP Display, I assumed that Spectacles would be using LCOS in 2021 since WaveOptics was in the process of moving to LCOS when they were acquired. I was a bit premature, as it took until 2024 for Spectacles to use LCOS.

In my hurry to put together information and dig for connections, it looked to me like WaveOptics would be using an LCOS microdisplay. As I pointed out, WaveOptics had been moving away from DLP to LCOS with their newer designs. Subsequent information suggests that WaveOptics was still using their much older DLP design. It is still likely that future versions will use LCOS, but the current version apparently does not.

Magic Leap

Magic Leap One (ML1) “Typical” Three Grating Waveguide

This blog’s first significant article about Magic Leap was in November 2016 (Magic Leap: “A Riddle Wrapped in an Enigma”). Since then, Magic Leap has been discussed in about 90 articles. Most other waveguide companies coaxially input all colors from a single projector. However, even though the ML1 had a single field sequential color LCOS device and projector, the LED illumination sources are spatially arranged so that the image from each color output is sent to a separate input grating. ML1 had six waveguides, three for each of the two focus planes, resulting in 6 LEDs (two sets of R, G, & B) and six entrance gratings (see: Magic Leap House of Cards – FSD, Waveguides, and Focus Planes).

Below is a diagram that iFixit developed jointly with this blog. It shows a side view of the ML1 optical path. The inset picture in the lower right shows the six entrance gratings of the six stacked waveguides.

Below left is a picture of the (stack of six) ML1 waveguides showing the six entrance gratings, the large expansion and turning gratings, and the exit gratings. Other than having spatially separate entrance gratings, the general design of the waveguides is the same as most other diffractive gratings, including the Hololens 1 shown in the introduction. The expansion gratings are mostly hidden in the ML1’s upper body (below right). The large expansion and turning grating can be seen as a major problem in fitting a “typical” diffractive waveguide into an eyeglass form factor, which is what drove Meta to find an alternative that goes beyond the ML1’s 50-degree FOV.

Figure 18 from US application 2018/0052276 diagrams the ML1’s construction. This diagram is very close to the ML1’s construction down to the shape of the waveguide and even the various diffraction grating shapes.

Magic Leap Two (ML2)

The ML1 failed so badly that very few were interested in the ML2 compared to the ML1. There is much less public information about the second-generation device, and I didn’t buy an ML2 for testing. I have covered many of the technical aspects of ML2, but I haven’t studied the waveguide before. With the ML2 having a 70-degree FOV compared to the ML1’s 50-degree FOV, I became curious about how they got it to fit.

To start with, the ML2 eliminated the ML1’s support for two focus planes. This cut the waveguides in half and meant that the exit grating of the waveguide didn’t need to change the focus of the virtual image (for more on this subject, see: Single Waveguide Set with Front and Back “Lens Assemblies”).

Looking through the Magic Leap patent applications, I turned up US 2018/0052276 to Magic Leap, which shows a 2-D combined exit grating. US 2018/0052276 is what is commonly referred to in the patent field as an “omnibus patent application,” which combines a massive number of concepts (the application has 272 pages) in a single application. The application starts with concepts in the ML1 (including the just prior FIG 18) and goes on to concepts in the ML2.

This application, loosely speaking, shows how to take the Wave Optics concept of two crossed diffraction gratings on different sides of a waveguide and integrate them onto the same side of the waveguide.

Magic Leap patent application 2020/0158942 describes in detail how the two crossed output gratings are made. It shows the “prior art” (Wave Optics and Meta Orion-like) method of two gratings on opposite sides of a waveguide in FIG. 1 (below). The application then shows how the two crossed gratings can be integrated into a single grating structure. The patent even includes scanning electron microscope photos of the structures Magic Leap had made (ex., FIG 5), which demonstrates that Magic Leap had gone far beyond the concept stage by the time of the application’s filing in Nov. 2018.

I then went back to pictures I took of Magic Leap’s 2022 AR/VR/MR conference presentation (see also Magic Leap 2 at SPIE AR/VR/MR 2022) on the ML2. I realized that the concept of a 2D OPE+EPE (crossed diffraction gratings) was hiding in plain sight as part of another figure, thus confirming that ML2 was using the concept. The main topic of this figure is “Online display calibration,” which appears to be the same concept as Orion’s “disparity correction” shown earlier.

The next issue is whether the ML2 used a single input grating for all colors and whether it used more than one waveguide. It turns out that these are both answered in another figure from Magic Leap’s 2022 AR/VR/MR presentation, shown below. Magic Leap developed a very compact projector engine that illuminates an LCOS panel through the (clear) part of the waveguides. Like the ML1, the red, green, and blue illumination LEDs are spatially separated, which, in turn, causes the light out of the projector lens to be spatially separated. There are then three spatially separate input gratings on three waveguides, as shown.

Based on the ML2’s three waveguides, I assumed it was too difficult or impossible to support the “crossed” diffraction grating effect while supporting full color in a single wide FOV waveguide.

Summary: Orion, ML2, & Wave Optics Waveguide Concepts

Orion, ML2, and Wave Optics have some form of two-dimensional pupil expansion using overlapping diffraction gratings. By overlapping gratings, they reduce the size of the waveguide considerably over the more conventional approach, with three diffraction gratings spatially separate on a single surface.

To summarize:

  • Meta Orion – “Crossed” diffraction gratings on both sides of a single SiC waveguide for full color.
  • Snap/Wave Optics – “Crossed” diffraction gratings on both sides of a single glass waveguide for full color. Alternatively, “crossed” diffraction waveguides on two glass waveguides for full color (I just put a request into Snap to try and clarify).
  • Magic Leap Two – A single diffraction grating that acts like a crossed diffraction grating on high index (~2.0) glass with three waveguides (one per primary color).

The above is based on the currently available public information. If you have additional information or analysis, please share it in the comments, or if you don’t want to share it publicly, you can send a private email to newsinfo@kgontech.com. To be clear, I don’t want stolen information or any violation of NDAs, but I am sure there are waveguide experts who know more about this subject.

What about Meta Orion’s Image Quality?

I have not had the opportunity to look through Meta’s Orion or Snap Spectacles 5 and have only seen ML2 in a canned demo. Unfortunately, I was not invited to demo Meta’s Orion, much less have access to one for evaluation (if you can help me gain (legal) access, contact me at newsinfo@kgontech.com).

I have tried the ML2 a few times. However, I have never had the opportunity to take pictures through the optics or use my test patterns. From my limited experience, while the ML2 is much better in terms of image quality than the ML1 (which was abysmal – see Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings), it still has significant issues with color uniformity like other wide (>40-degree) FOV diffractive waveguides. If someone has an ML2 that I can borrow for evaluation, please get in touch with me at newsinfo@kgontech.com.

I have been following Wave Optics (now Snap) for many years and have a 2020-era Titan DLP-based 40-degree FOV Wave Optics evaluation unit (through-the-optics picture below). I would consider the Wave Optics Titan a “middle of the pack” diffractive waveguide for its time (I had seen better and worse). I have seen what seem to be better diffractive waveguides before and since, but it is hard to compare them objectively as they have different FOVs, and I was not able to use my own content, only curated demo content. Wave Optics seemed to be showing better waveguides at shows before being acquired by Snap in 2021, but once again, that was with their demo content and short views at shows. I am working on getting a Spectacles 5 to do a more in-depth evaluation and see how it has improved.

Without the ability to test, compare, and contrast, I can only speculate about Meta Orion’s image quality based on my experience with diffractive waveguides. The higher index of refraction of SiC helps, as there are fewer TIR bounces (which degrade image quality), but it is far from a volume-production-ready technology. I’m concerned about image uniformity with a large FOV, and even more so with a single set of diffraction gratings, as diffraction is based on wavelength (color).

Lumus Reflective Waveguide Rumors

In Meta Orion AR Glasses: The first DEEP DIVE into the optical architecture, it is stated:

There were rumors before that Meta would launch new glasses with a 2D reflective (array) waveguide optical solution and LCoS optical engine in 2024-2025. With the announcement of Orion, I personally think this possibility has not disappeared and still exists.

The “reflective waveguide” would most likely be a reference to Lumus’s reflective waveguides. I have seen a few “Lumus clone” reflective waveguides from Chinese companies, but their image quality is very poor compared to Lumus. In the comment section of my last article, Ding, on October 8, 2024, wrote:

There’s indeed rumor that Meta is planning an actual product in 2025 based on LCOS and Lumus waveguide. 

Lumus has demonstrated impressive image quality in a glasses-like form factor (see my 2021 article: Exclusive: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures). Since the 2021 Maximus, they have been shrinking the form factor and improving support for prescription lens integration with their new “Z-lens” technology. Lumus claims its Z-Lens technology should be able to support greater than a 70-degree FoV in glass. Lumus also says because their waveguides support a larger input pupil, they should have a 5x to 10x efficiency advantage.

The market question about Lumus is whether they can make their waveguide cost-effectively in mass production. In the past, I have asked their manufacturing partner, Schott, who says they can make it, but I have yet to see a consumer product built around the Z-Lens. It would be interesting to see what would happen if a company like Meta put the kind of money they invested in complex Silicon Carbide waveguides into reflective waveguides.

While diffractive waveguides are not inexpensive, they are considered less expensive at present (except, of course, for Meta Orion’s SiC waveguides). Perhaps an attractive proposition to researchers and to companies wanting proprietary designs is that diffraction waveguides can be customized more easily (at least on glass).

Not Addressing Who Invented What First

I want to be clear: this article does not in any way make assertions about who invented what first or whether anyone is infringing on anyone else’s invention. Making that determination would require a massive amount of work, lawyers, and the courts. The reason I cite patents and patent applications is that they are public records that are easily searched and often document technical details that are missing from published presentations and articles.

Conclusions

There seems to be a surprising amount of commonality between Meta’s Orion, the Snap/Wave Optics, and the Magic Leap Two waveguides. They all avoided the “conventional” three diffraction gratings on one side of a waveguide to support a wider FOV in an eyeglass form factor. Rediscovering that the ML2 supported “disparity correction,” as Meta refers to it, was a bit of a bonus.

As I wrote last time, Meta’s Orion seems like a strange mix of technology to make a big deal about at Meta Connect. They combined a ridiculously expensive waveguide with a very low-resolution display. The two-sided diffraction grating Silicon Carbide waveguides seem to be more than a decade away from practical volume production. It’s not clear to me that, even if they could be made cost-effectively, they would have as good a view out of them and as good image quality as reflective waveguides, particularly at wider FOVs.

Meta could have put together a headset with technology that was within three years of being ready for production. As it is, it seemed like more of a stunt in response to the Apple Vision Pro. In that regard, the stunt seems to have worked in the sense that some reviewers were reminded that seeing the real world directly with optical AR/MR beats looking at it through a camera and display.

Meta Orion AR Glasses (Pt. 1 Waveguides)

Introduction

While the Orion prototype AR glasses Meta announced at Meta Connect made big news, there were few technical details beyond it having a 70-degree field of view (FOV) and using Silicon Carbide waveguides. While they demoed to the more general technical press and “influencers,” they didn’t seem to invite the more AR- and VR-centric people who might be more analytical. Via some Meta patents, a Reddit post, and studying videos and articles, I was able to tease out some information.

This first article will concentrate on Orion’s Silicon Carbide diffractive waveguide. I have a lot of other thoughts on the mismatch of features and human factors that I will discuss in upcoming articles.

Wild Enthusiasm Stage and Lack of Technical Reviews

In the words of Yogi Berra, “It’s like deja vu all over again.” We went through this with the Apple Vision Pro, which went from being the second coming of the smartphone to almost disappearing earlier this year. This time, a more limited group of media people has been given access. There is virtually no critical analysis of the display’s image quality or the effect on the real world. I may be skeptical, but I have seen dozens of different diffractive waveguide designs, and there must be some issues, yet nothing has been reported. I expect there are problems with color uniformity and diffraction artifacts, but nothing was mentioned in any article or video. Heck, I have yet to see anyone mention the obvious eye glow problem (more on this in a bit).

The Vergecast podcast video discusses some of the utility issues, and their related video, Exclusive: We tried Meta’s AR glasses with Mark Zuckerberg, gives some more information about the experience. Thankfully, unlike Meta and others showing (simulated) through-the-optics videos, The Verge clearly marked its videos as “Simulated” (screen capture on the right).

As far as I can tell, there are no true “through-the-optics” videos or pictures (likely at Meta’s request). All the images and videos I found that may look like they could have been taken through the optics have been “simulated.”

Another informative video was by Norm Chan of Adam Savage’s Tested, particularly the last two-thirds of the video after his interview with Meta CTO Andrew Bosworth. Norm discussed that the demo was “on rails,” with limited demos in a controlled room environment. I’m going to quote Bosworth a few times in this article because he added information; while he may have been giving some level of marketing spin, he seems to be generally truthful, unlike former Hololens 2 leader Alex Kipman, who was repeatedly dishonest in his Hololens 2 presentation. I documented Kipman’s claims in several articles, including Hololens 2 and why the resolution math fails, Alex Kipman Fibbing about the field of view, Alex Kipman’s problems at Microsoft (with references to other places where Kipman was “fibbing”), and Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1), or input “Kipman” into this blog’s search feature.

I’m not against companies making technology demos in general. However, making a big deal about a “prototype” and not a “product” at Meta Connect rather than at a technical conference like Siggraph indicates AR’s importance to Meta. It invites comparisons to the Apple Vision Pro, which Meta probably intended.

It is a little disappointing that they also only share the demos with selected “invited media” that, for the most part, lack deep expertise in display technology and are easily manipulated by a “good” demo (see Appendix: “Escape from a Lab” and “Demos Are a Magic Show”). They will naturally tend to pull punches to keep access to new product announcements from Meta and other major companies. As a result, there is no information about the image quality of the virtual display or any reported issues looking through the waveguides (which there must be).

Eye Glow

I’ve watched hours of videos and read multiple articles, and I have yet to hear anyone mention the obvious issue of “eye glow” (front projection). They will talk about the social acceptance of them looking like glasses and being able to see the person’s eyes, but then they won’t mention the glaring problem of the person’s eyes glowing. It stuck out to me that they didn’t mention the eye glow issue, which is evident in all the videos and many photos.

Eye glow is an issue that diffractive waveguide designers have been trying to reduce/eliminate for years. Then there are Lumus reflective waveguides with inherently little eye glow. Vuzix, Digilens, and Dispelix make big points about how they have reduced the problem with diffractive waveguides (see Front Projection (“Eye Glow”) and Pantoscopic Tilt to Eliminate “Eye Glow”). However, these diffractive waveguide designs with greatly reduced eye glow issues have relatively small (25-35 degree) FOVs. The Orion design supports a very wide 70-degree FOV while trying to fit the size of a “typical” (if bulky) glasses frame; I suspect that the design methods to meet the size and FOV requirements meant that the issue of “eye glow” could not be addressed.

Light Transmission (Dimming?)

The transmissivity seems to vary in the many images and videos of people wearing Orions. It’s hard to tell, but it seems to change. On the right, two frames switch back and forth, and the glasses darken as the person puts them on (from the video Orion AR Glasses: Apple’s Last Days).

Because I’m judging from videos and pictures with uncontrolled lighting, it’s impossible to know the transmissivity, but I can compare it to other AR glasses. Below are the highly transmissive Lumus Maximus glasses with greater than 80% transmissivity and the Hololens 2 with ~40% compared to the two dimming levels of the Orion glasses.

Below is a still frame from a Meta video showing some of the individual parts of the Orion glasses. They appear to show unusually dark cover glass, a dimming shutter (possibly liquid crystal) with a drive circuit attached, and a stack of flat optics including the waveguide with electronics connected to it. In his video, Norm Chan stated, “My understanding is the frontmost layer can be like a polarized layer.” This seems consistent with the cover “glass” (which could be plastic) looking so dark compared to the dimming shutter (LC is nearly transparent, as it only changes the polarization of light).

If it does use a polarization-based dimming structure, this will cause problems when viewing polarization-based displays (such as LCD-based computer monitors and smartphones).

Orion’s Unusual Diffractive Waveguides

Axel Wong’s analysis of Meta Orion’s waveguide, which was translated and published on Reddit as Meta Orion AR Glasses: The first DEEP DIVE into the optical architecture, served as a starting point for my study of the Meta Orion optics, and I largely agree with his findings. Based on the figures he showed, his analysis was based on Meta Platforms’ (a patent-holding company of Meta) US patent application 2024/0179284. Three figures from that application are shown below.

[10-08-2024 – Corrected the order of the Red, Green, and Blue inputs in Fig 10 below]

Overlapping Diffraction Gratings

It appears that Orion uses waveguides with diffraction gratings on both sides of the substrate (see FIG. 12A above). In Figure 10, the first and second “output gratings” overlap, which suggests that these gratings are on different surfaces. Based on FIGs 12A and 7C above, the gratings are on opposite sides of the same substrate. I have not seen this before with other waveguides and suspect it is a complicated/expensive process.

Hololens 1

As Axel Wong pointed out in his analysis, supporting such a wide FOV in a glasses form factor necessitated that the two large gratings overlap. Below (upper left) is shown the Hololens 1 waveguide, typical of most other diffractive waveguides. It consists of a small input grating, an (often) trapezoidal-shaped expansion grating, and a more rectangular second expansion and output/exit grating. In the Orion (upper right), the two larger gratings effectively overlap so that the waveguide fits in the eyeglasses form factor. I have roughly positioned the Hololens 1 and Orion waveguides at the same vertical location relative to the eye.

Also shown in the figure above (lower left) is Orion’s waveguide wafer, which I used to generate the outlines of the gratings, and a picture (lower right) showing the two diffraction gratings in the eye glow from Orion.

It should be noted that while the Hololens 1 has only about half the FOV of the Orion, the size of the exit gratings is similar. The size of the Hololens 1 exit grating is due to the Hololens 1 having enough eye relief to support most people wearing glasses. The farther away the eye is from the grating, the bigger the grating needs to be for a given FOV.
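
Below is a rough geometric sketch of that relationship; the eyebox and eye-relief numbers are illustrative assumptions, not measurements of either headset:

```python
import math

# Rough geometry for how eye relief drives exit-grating size for a given FOV:
# the grating must span the eyebox plus the cone of rays from the FOV.
def grating_width_mm(fov_deg, eye_relief_mm, eyebox_mm=10):
    half_fov = math.radians(fov_deg / 2)
    return eyebox_mm + 2 * eye_relief_mm * math.tan(half_fov)

# Illustrative numbers only: a Hololens-1-like case (smaller FOV, more eye relief)
# ends up with a grating about as wide as an Orion-like case (bigger FOV, less relief).
print(f"35-deg FOV at 20 mm eye relief: ~{grating_width_mm(35, 20):.0f} mm wide")
print(f"70-deg FOV at 12 mm eye relief: ~{grating_width_mm(70, 12):.0f} mm wide")
```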

Light Entering From the “wrong side” of the waveguide

The patent application figures 12A and 7C are curious because the projector is on the opposite side of the waveguide from the eye/output. This would suggest that the projectors are outside the glasses rather than hidden in the temples on the same side of the waveguide as the eye.

Meta’s Bosworth in The WILDEST Tech I’ve Ever Tried – Meta Orion at 9:55 stated, “And so, this stack right here [pointing to the corner of the glasses of the clear plastic prototype] gets much thinner, actually, about half as thick. ‘Cause the [projector] comes in from the back at that point.”

Based on Bosworth’s statement, some optics route the light from the projectors in the temples to the front of the waveguides, necessitating thicker frames. Bosworth said that the next generation’s waveguides will accept light from the rear side of the waveguide. I assume that making the waveguides work this way is more difficult, or they would have already done it rather than having thicker frames on Orion.

However, Bosworth said, “There’s no bubbles. Like you throw this thing in a fish tank, you’re not gonna see anything.” This implies that everything is densely packed into the glasses, so other than saving the volume of the extra optics, there may not be a major size reduction possible. (Bosworth referenced the story of Steve Jobs dropping an iPod prototype in water to prove that it could be made smaller due to the air bubbles that escaped.)

Disparity Correction (Shown in Patent Application but not in Orion)

Meta’s application 2024/0179284, while showing many other details of the waveguide, is directed to “disparity correction.” Bosworth discusses in several interviews (including here) that Orion does not have disparity correction but that they intend to put it in future designs. As Bosworth describes it, the disparity correction is intended to correct for any flexing of the frames (or other alignment issues) that would cause the waveguides (and their images relative to the eyes) to move. He seems to suggest that this would allow Meta to use frames that would be thinner and that might have some flex to them.

Half Circular Entrance Gratings

Wong, in the Reddit article, also noticed that small input/entrance gratings visible on the wafer looked to be cut-off circles and commented:

However, if the coupling grating is indeed half-moon shaped, the light spot output by the light engine is also likely to be this shape. I personally guess that this design is mainly to reduce a common problem with SRG at the coupling point, that is, the secondary diffraction of the coupled light by the coupling grating.

Before the light spot of the light engine embarks on the great journey of total reflection and then entering the human eye after entering the coupling grating, a considerable part of the light will unfortunately be diffracted directly out by hitting the coupling grating again. This part of the light will cause a great energy loss, and it is also possible to hit the glass surface of the screen and then return to the grating to form ghost images.

Single Waveguide for all three colors?

Magic Leap Application Showing Three Stacked Waveguides

The patent application seems to suggest that there is a single (double-sided) waveguide for all three colors (red, green, and blue). Most larger-FOV, full-color diffractive AR glasses will stack three waveguides (red, green, and blue; examples: Hololens 1 and Magic Leap 1 & 2) or two waveguides (red+blue and blue+green; example: Hololens 2). Dispelix has single-layer, full-color diffractive waveguides that go up to 50 degrees FOV.

Diffraction gratings have a line spacing based on the wavelengths of light they are meant to diffract. Supporting full color with such a wide FOV in a single waveguide would typically cause issues with image quality, including light fall-off in some colors and contrast losses. Unfortunately, there are no “through the optics” pictures or even subjective evaluations by an independent expert as to the image quality of Orion.

Silicon Carbide Waveguide Substrate

The idea of using silicon carbide for waveguides is not unique to Meta. Below is an image from GETTING THE BIG PICTURE IN AR/VR, which discusses the advantages of using high-index materials like Lithium Niobate and Silicon Carbide to make waveguides. It is well known that going to a higher-index-of-refraction substrate supports wider FOVs, as shown in the figure below. The problem, as Bosworth points out, is that growing silicon carbide wafers is very expensive. The wafers are also much smaller, yielding fewer waveguides per wafer. From the pictures of Meta’s wafers, they only get four waveguides per wafer, whereas a dozen or more diffractive waveguides can be made on larger and much less expensive glass wafers.
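
Below is a simplified sketch of why a higher index helps: it only looks at the band of in-glass angles between the TIR critical angle and an assumed practical grazing limit, and the indices are approximate values I picked, not vendor specifications:

```python
import math

# Simplified look at why higher-index substrates support wider FOVs: the image
# must TIR inside the waveguide, so usable in-glass angles sit between the
# critical angle and a practical grazing limit (assumed 75 degrees here).
MAX_PRACTICAL_DEG = 75  # assumed upper bound before rays graze the surfaces

substrates = [("standard glass", 1.5), ("high-index glass", 1.9),
              ("lithium niobate (approx.)", 2.3), ("silicon carbide (approx.)", 2.6)]

for name, n in substrates:
    critical = math.degrees(math.asin(1 / n))
    band = MAX_PRACTICAL_DEG - critical
    print(f"{name} (n={n}): critical angle {critical:.1f} deg, usable band ~{band:.0f} deg")
# The usable angular band grows with index, which (roughly) maps to a wider FOV in air.
```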

Bosworth Says “Nearly Artifact Free” with Low “Rainbow” Capture

Examples of “Rainbow Artifacts” from Diffractive Waveguides

A common issue with diffractive waveguides is that the diffraction gratings will capture light in the real world and then spread it out by wavelength like a prism, which creates a rainbow-like effect.

In Adam Savage’s Tested interview (@~5:10), Bosworth said, “The waveguide itself is nano etched into silicon carbide, which is a novel material with a super high index of refraction, which allows us to minimize the Lost photons and minimize the number of photons we capture from the world, so it minimizes things like ghosting and Haze and rainbow all these artifacts while giving you that field of view that you want. Well it’s not artifact free, it’s very close to artifact-free.” I appreciate that while Bosworth tried to give the advantages of their waveguide technology, he immediately corrected himself when he had overstated his case (unlike Hololens’ Kipman as cited in the Introduction). I would feel even better if they let some independent experts study it and give their opinions.

What Bosworth says about rainbows and other diffractive artifacts may be true, but I would like to see it evaluated by independent experts. Norm said in the same video, “It was a very on-rails demo with many guard rails. They walked me through this very evenly diffused lit room, so no bright lights.” I appreciate that Norm recognized he was getting at least a bit of a “magic show” demo (see appendix).

Strange Mix of a Wide FOV and Low Resolution

There was also little to no discussion in the reviews of Orion’s very low angular resolution of only 13 pixels per degree (PPD) spread over a 70-degree FOV (a topic for my next article on Orion). This works out to about a 720 by 540 pixel display resolution.
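
As a rough check of that figure, below is the arithmetic assuming the 70-degree FOV is measured on the diagonal of a 4:3 image (my assumption, not a Meta specification):

```python
import math

# Rough check of the ~720 x 540 figure, assuming the 70-degree FOV is a diagonal
# measurement over a 4:3 image. Splitting degrees linearly across the diagonal
# is only approximate, but it is close enough for this sanity check.
ppd = 13
diag_fov_deg = 70
aspect_w, aspect_h = 4, 3

diag_units = math.hypot(aspect_w, aspect_h)        # 5 for a 4:3 aspect ratio
h_fov = diag_fov_deg * aspect_w / diag_units       # ~56 degrees horizontal
v_fov = diag_fov_deg * aspect_h / diag_units       # ~42 degrees vertical
print(f"~{h_fov * ppd:.0f} x {v_fov * ppd:.0f} pixels")  # ~728 x 546
```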

Several people reported seeing a 26 PPD demo, but it was unclear if this was in the glasses form factor or a lab-bench demo. Even 26 PPD is a fairly low angular resolution.

Optical versus Passthrough AR – Orion vs Vision Pro

Meta’s Orion demonstration is a declaration that optical AR (e.g., Orion), and not camera passthrough AR such as the Apple Vision Pro, is the long-term prize device. It makes the point that no passthrough camera and display combination can come close to competing with the real-world view in terms of dynamic range, resolution, binocular stereo, and an infinite number of focus depths.

As I have repeatedly pointed out in writing and presentations, optical AR prioritizes the view of the real world, while camera passthrough AR prioritizes the virtual image view. I think there is very little overlap in their applications. I can’t imagine anyone allowing someone out on a factory floor or onto the streets of a city in a future Apple Vision Pro-type device, but one could imagine it with something like the Meta Orion. And I think this is the point that Meta wanted to make.

Conclusions

I understand that Meta was demonstrating, in a way, “If money was not an obstacle, what could we do?” I think they were too fixated on the very wide FOV issue. I am concerned that the diffractive Silicon Carbide waveguides are not the right solution in the near or long term. They certainly can’t have a volume/consumer product with a significant “eye glow” problem.

This is a subject I have discussed many times, including in Small FOV Optical AR Discussion with Thad Starner and FOV Obsession. They have the worst of all worlds in some ways: with a very large FOV and a relatively low-resolution display, they block most of the real world for a given amount of content. With the same money, I think they could have made a more impressive demo without exotic waveguide materials that seem so far off in the future. I intend to get more into the human factors and display utility in this series on Meta Orion.

Appendix: “Demos Are a Magic Show”

Seeing the way Meta introduced Orion and hearing of the crafted demos they gave reminded me of one of my earliest blog articles from 2012, called Cynics Guide to CES – Glossary of Terms, which gave warnings about seeing demos.

Escaped From the Lab

Orion seems to fit the definition of an “escaped from the lab” demo. Quoting from the 2012 article:

“Escaped from the lab” – This is the demonstration of a product concept that is highly impractical for any of a number of reasons including cost, lifetime/reliability, size, unrealistic setting (for example requires a special room that few could afford), and dangerous without skilled supervision.  Sometimes demos “escape from the lab” because a company’s management has sunk a lot of money into a project and a public demo is an attempt to prove to management that the concepts will at least one day appeal to consumers.

I have used this phrase a few times over the years, including for the Hololens 2 (Hololens 2 Video with Microvision “Easter Egg” Plus Some Hololens and Magic Leap Rumors), which was officially discontinued this month, although it has long been seen as a failed product. I also commented (in Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings – see my Sept. 27, 2019 comment) that the Magic Leap One was “even more of a lab project.”

Why make such a big deal about Orion, a prototype with a strange mix of features and impractically expensive components? Someone(s) is trying to prove that the product concept was worth continued investment.

Magic Show

I also warned that demos are “a magic show.”

A Wizard of Oz (visual) – Carefully controlling the lighting, image size, viewing location and/or visual content in order to hide what would be obvious defects.   Sometimes you are seeing a “magic show” that has little relationship to real world use.

I went into further detail on this subject in my early coverage of the Hololens 2, in the section, “Demos are a Magic Show and why are there no other reports of problems?“:

I constantly try to remind people that “demos are a magic show.” Most people get wowed by the show or by being one of the special people to try on a new device. Many in the media may be great at writing, but they are not experts at evaluating displays. The imperfections and problems go unnoticed in a well-crafted demo by someone who is not trained to “look behind the curtain.”

The demo content is often picked to best show off a device and avoid content that might show flaws. For example, content that is busy with lots of visual “noise” will hide problems like image uniformity and dead pixels. Usually, the toughest test patterns are the simplest, as one will immediately be able to tell if something is wrong. I typically like patterns with a mostly white screen to check for uniformity and a mostly black screen to check for contrast, with some details in the patterns to show resolution and some large spots to check for unwanted reflections. For example, see my test patterns, which are free to download. When trying on a headset that supports a web browser, I will navigate to my test pattern page and select one of the test patterns.
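
As a rough illustration of the kind of simple patterns described above, here is a minimal sketch using numpy and Pillow; the sizes and details are arbitrary placeholders, not my actual downloadable test patterns.

```python
import numpy as np
from PIL import Image

W, H = 1920, 1080

# Mostly white pattern: checks uniformity; thin black grid lines reveal
# resolution/scaling problems; dark spots reveal unwanted reflections.
white = np.full((H, W), 255, dtype=np.uint8)
white[::108, :] = 0          # horizontal grid lines
white[:, ::192] = 0          # vertical grid lines
for cy, cx in [(H // 2, W // 2), (50, 50), (50, W - 50), (H - 50, 50), (H - 50, W - 50)]:
    white[cy - 10:cy + 10, cx - 10:cx + 10] = 0   # dark spots in center and corners

# Mostly black pattern: checks contrast/black level; a small white detail
# reveals blooming, glow, and stray reflections.
black = np.zeros((H, W), dtype=np.uint8)
black[H // 2 - 10:H // 2 + 10, W // 2 - 10:W // 2 + 10] = 255

Image.fromarray(white).save("mostly_white_test.png")
Image.fromarray(black).save("mostly_black_test.png")
```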

Most of the companies that are getting early devices will have a special relationship with the manufacturer. They have a vested interest in seeing that the product succeeds either for their internal program or because they hope to develop software for the device. They certainly won’t want to be seen as causing Microsoft problems. They tend to direct their negative opinions to the manufacturer, not public forums.

Only with independent testing by people with display experience using their own test content will we understand the image quality of the Hololens 2.

Cogni Trax & Why Hard Edge Occlusion Is Still Impossible (Behind the Magic Trick)

Introduction

As I wrote in 2012’s Cynics Guide to CES—Glossary of Terms, when you see a demo at a conference, “sometimes you are seeing a “magic show” that has little relationship to real-world use.” I saw the Cogni Trax hard edge occlusion demo last week at SID Display Week 2024, and it epitomized the concept of being a “magic show.” I have been aware of Congi Trax for at least three years (and commented about the concept on Reddit), and I discovered they quoted me (I think a bit out of context) on its website (more on this later in the Appendix).

Cogni Trax has reportedly raised $7.1 million in 3 funding rounds over the last ~7 years, which I plan to show is unwarranted. I contacted Cogni Trax’s CEO (and former Apple optical designer on the Apple Vision Pro), Sajjad Khan, who was very generous in answering questions despite his knowing my skepticism about the concept.

Soft- Versus Hard-Edge Occlusion

Soft Edge Occlusion

In many ways, this article follows up on my 2021 Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users, which detailed why putting an LCD in the glasses results in very “soft” occlusion.

Nobody will notice a pixel-sized (angularly) dot on a person’s glasses. If they did, every dust particle on a person’s glasses would be noticeable and distracting. That is because a dot only a few millimeters from the eye is highly out of focus, and light rays from the real world will go around the dot before they are focused by the eye’s lens. That pixel-sized dot will insignificantly dim several thousand pixels in the virtual image. As discussed in the Magic Leap soft occlusion article, the Magic Leap 2’s dimming pixel will cover ~2,100 pixels (angularly) in the virtual image and have a dimming effect on hundreds of thousands of pixels.
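
A back-of-the-envelope calculation shows why. The sketch below uses illustrative assumed numbers (a 1mm dimming pixel 20mm from a 4mm eye pupil, with a 30 PPD virtual image), not Magic Leap 2 specs.

```python
import math

# Why a dimming pixel a few millimeters from the eye cannot produce hard edges.
# All numbers below are illustrative assumptions, NOT Magic Leap 2 specs:
s_mm = 1.0    # width of the dimming pixel (occluder)
d_mm = 20.0   # distance from the eye's pupil to the dimming layer
p_mm = 4.0    # eye pupil diameter
ppd  = 30     # angular resolution of the virtual image (pixels per degree)

# Angular size of the dot itself, mapped onto virtual-image pixels.
dot_deg    = math.degrees(2 * math.atan(s_mm / (2 * d_mm)))
dot_pixels = dot_deg * ppd

# With the eye focused far away, a far-field direction is affected whenever the
# occluder overlaps that direction's ray bundle (a pupil-sized disc at the
# dimming layer), so the partially dimmed zone spans roughly (s + p) / d radians.
blur_deg    = math.degrees(2 * math.atan((s_mm + p_mm) / (2 * d_mm)))
blur_pixels = blur_deg * ppd

# The deepest dimming of any one direction is only the fraction of pupil area blocked.
max_dimming = (s_mm / p_mm) ** 2

print(f"dot itself covers ~{dot_pixels:.0f} pixels across ({dot_deg:.1f} deg)")
print(f"partial dimming spreads over ~{blur_pixels:.0f} pixels across ({blur_deg:.1f} deg)")
print(f"peak dimming of any one direction: ~{max_dimming:.0%}")
```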

Hard Edge Occlusion (Optical and Camera Passthrough)

“Hard Edge Occlusion” means precise, pixel-by-pixel light blocking. With camera passthrough AR (such as Apple Vision Pro), hard edge occlusion is trivial; one or more camera pixels are replaced by one or more pixels in the virtual image. Even though masking pixels is trivial with camera passthrough, there is still a non-trivial problem in getting the hard edge masking perfectly aligned to the real world. With passthrough mixed reality, the passthrough camera, with its autofocus, has already focused the real world so it can be precisely masked.

With optical mixed reality hard edge occlusion, the real world must also be brought into focus before it can be precisely masked. Rather than going to a camera, the real world’s light goes to a reflective masking spatial light modulator (SLM), typically LCOS, before being combined optically with the virtual image.

In Hard Edge (Pixel) Occlusion—Everyone Forgets About Focus, I discuss the University of Arizona’s (UA) optical solution for hard edge occlusion. Their solution has a set of optics that focuses the real world onto an SLM for masking. Then, a polarizing beam-splitting cube combines the result (with a change in polarization via two passes through a quarter waveplate, not shown) after masking with a micro-display. While the UA patent mentions using a polarizing beam splitter to combine the images, the patent fails to show or mention the quarter waveplate needed between the SLM and the beam splitter for this to work. One of the inventors, Hong Hua, was a University of Arizona professor and a consultant to Magic Leap, and the patent was licensed to Magic Leap.

Other than being big and bulky, the optical problems with the UA hard edge occlusion design include:

  • It only works to hard edge occlude at a distance set by the focusing.
  • The real world is “flattened” to be at the same focus as the virtual world.
  • Polarization dims the real world by at least 50%. Additionally, viewing a polarized display device (like a typical LCD monitor or phone display) will be at least partially blocked by an amount that will vary with orientation relative to the optics.
  • The real world is dimmed by at least 2x via the polarizing beam splitter.
  • As the eye moves, the real world will move differently than it would with the eye looking directly. You are looking at the real world through two sets of optics with a much longer light path.

While Cogni Trax uses the same principle for masking the real world, it is configured differently and is much smaller and lighter. Both devices block a lot of light. Cogni Trax’s design blocks about 77% of the light, and they claim their next generation will block 50%. However, note that this is likely on top of any other light losses in the optical system.
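
As a rough sanity check of what blocking ~77% of the light implies, the sketch below compounds a few plausible losses. The per-element transmission and element count are my illustrative assumptions, not Cogni Trax numbers; only the ~50% polarization loss is fundamental.

```python
# Stacked losses compound multiplicatively. Starting from the unavoidable ~50%
# polarization loss, it only takes a handful of ordinary optical elements at an
# assumed ~90% transmission each to end up blocking roughly 77% of the light.
polarizer = 0.50      # fundamental loss from polarizing unpolarized real-world light
per_element = 0.90    # assumed transmission of each additional element (illustrative)
n_elements = 8        # assumed: PBS (2 passes), focus optics (2), relay (2), QWP/mirror, cleanup polarizer

throughput = polarizer * (per_element ** n_elements)
print(f"transmitted: {throughput:.0%}, blocked: {1 - throughput:.0%}")
# -> transmitted: 22%, blocked: 78% (close to the ~77% blocking figure)
```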

Cogni Trax SID Display Week 2024 Demo

On the surface, the Cogni Trax demo makes it look like the concept works. The demo had a smartphone camera looking through the Cogni Trax optical device. If you look carefully, you will see that they block light from four areas of the real world (see arrow in the inset picture below): a Nike swoosh on top of the shoe, a QR code, the Coke in the bottle (with moving bubbles), and a partial darkening of the wall to the right to create a shadow of the bottle.

They don’t have a microdisplay with a virtual image; thus, they can only block or darken the real world and not replace anything. Since you are looking at the image on a cell phone and not with your own eyes, you have no sense of the loss of depth and parallax issues.

When I took the picture above, I was not planning on writing an article and missed capturing the whole setup. Fortunately, Robert Scoble posted a video on X that showed most of the rig used to align the masking to the real world. The rig supports aligning the camera and Cogni Trax device with six degrees of freedom. This demo will only work if all the objects in the scene are in a precise location relative to the camera/device. This is the epitome of a canned demo.

One could hand wave that developing SLAM, eye tracking, and 3-D scaling technology to eliminate the need for the rig is a “small matter of hardware and software” (to put it lightly). However, requiring a rig is not the biggest hidden trick in these demos; it is the basic optical concept and its limitations. The “device” shown (lower right inset) is only the LCOS device and part of the optics.

Cogni Trax Gen 1 Optics – How it works

Below is a figure from Cogni Trax’s patent that will be used to diagram the light path. I have added some colorization to help you follow the diagram. The dashed-line parts in the patent for combining the virtual image are not implemented in Cogni Trax’s current design.

The view of the real world follows a fairly tortuous path. First, it goes through a polarizer, where at least 50% of the light is lost (in theory, this polarizer is redundant given the polarizing beam splitter that follows, but it is likely used to reduce ghosting). It then bounces off the polarizing beam splitter and through a focusing element to bring the real world into focus on an LCOS SLM. The LCOS device changes the polarization of anything NOT masked so that, on the return trip through the focusing element, it will pass through the polarizing beam splitter. The light then passes through the “relay optics,” then a quarter waveplate (QWP), off a mirror, and back through the quarter waveplate and relay optics. The two passes through the “relay optics” have to undo everything done to the light by the two passes through the focusing element. The two passes through the QWP rotate the polarization of the light so that the light will bounce off the beam splitter and be directed at the eye via a cleanup polarizer. Optionally, as shown, the light can be combined with a virtual image from a microdisplay.
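
The key trick in that path is the double pass through the quarter waveplate, which acts like a half-wave plate and rotates the linear polarization 90 degrees so the returning light reflects off the beam splitter toward the eye. Below is a minimal numerical check with Jones matrices; it is my own simplified model (ignoring the mirror’s coordinate-frame flip and all losses), not anything from Cogni Trax.

```python
import numpy as np

# Jones matrix of a quarter waveplate with its fast axis at 45 degrees
# (global phase factors omitted, since they do not matter here).
qwp45 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                        [1 - 1j, 1 + 1j]])

horizontal = np.array([1, 0])          # linearly polarized light, "H"

# Two passes through the QWP (out to the mirror and back) act like a
# half-wave plate: H comes back as V, so it now reflects off the PBS.
out = qwp45 @ qwp45 @ horizontal
print(np.round(np.abs(out), 3))        # -> [0. 1.]  i.e., vertical polarization
```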

I find it hard to believe that real-world light can go through all of that and behave as if nothing other than the polarization losses had happened to it.

Cogni Trax provided a set of diagrams showing the light path of what they call “Alpha Pix.” I edited several of their diagrams together and added some annotations in red. As stated earlier, the current prototype does not have a microdisplay for providing a virtual image. If the virtual display device were implemented, its optics and combiner would be on top of everything else shown.

I don’t see this as a practical solution to hard-edge occlusion. While much less bulky than the UA design, it still requires polarizing the incoming light and sending it through a tortuous path that will further degrade/distort the real-world light. And this is before they deal with adding a virtual image. There is still the issue that the hard edge occlusion only works if everything being occluded is at approximately the same focus distance. If the virtual display is implemented, it would seem that the virtual image would need to be at approximately the same focus distance for it to be occluded correctly. Then, hardware and software are required to get everything between the virtual and real world aligned with the eye. Even if the software and eye tracking were excellent, there will still be a lag with any rapid head movement.

Cogni Trax Waveguide Design / Gen 2

Cogni Trax’s website and video discuss a “waveguide” solution for Gen 2. I found a patent (with excerpts right and below) from Cogni Trax for a waveguide approach to hard-edge occlusion that appears to agree with the diagrams in the video and on the website for their “waveguide.” I have outlined the path for the real world (in green) and the virtual image (in red).

Rather than using polarization, this method uses time-sequential modulation via a single Texas Instruments DLP/DMD. The DLP is used during part of the time to block/pass light from the real world and during the rest of the time as the virtual image display. I have included Figure 1(a), which gives the overall light path; Figures 1(c) and 1(d), which show the time multiplexing; Figure 6(a), with a front view of the design; and Figures 10(a) and (b), which show a side view of the waveguide with the real-world and virtual light paths, respectively.

Other than not being polarized, the light follows an even more tortuous path that includes a “fixed DMD” to correct for the micro-tilts imparted by the time-multiplexed displaying and masking DMD. In addition to all the problems I had with the Gen 1 design, I find putting the relatively small mirror (120 in Figure 1a) in the middle of the view very problematic, as the view over or below the mirror will look very different from the view in the mirror with all the additional optics. While it can theoretically give more light throughput and not require polarization of the real world, it can only do so by keeping the virtual display times short, which will mean more potential field-sequential color breakup and lower color bit depth from the DLP.

Overall, I see Cogni Trax’s “waveguide” design as trading one set of problems for another set of probably worse image problems.

Conclusion

Perhaps my calling hard-edge occlusion a “Holy Grail” did not fully convey its impossibility. The more I have learned, examined, and observed this problem and its proposed solutions, the more impossible it seems. Yes, someone can craft a demo that works for a tightly controlled setup where everything being occluded is at about the same distance, but it is a magic show.

The Cogni Trax demo is not a particularly good magic show, as it uses a massive 6-axis control rig to position a camera rather than letting the user put on a headset. Furthermore, the demo does not support a virtual display.

Cogni Trax’s promise of a future “waveguide” design appears to me to be at least as fundamentally flawed. According to the publicly available records, Cogni Trax has been trying to solve this problem for 7 years, and a highly contrived setup is the best they have demonstrated, at least publicly. This is more of a university lab project than something that should be developed commercially.

Based on his history with Apple and Texas Instruments, the CEO, Sajjad Khan, is capable, but I can’t understand why he is pursuing this fool’s errand. I don’t understand why over $7M has been invested, other than people blindly investing in former Apple designers without proper technical due diligence. I understand that high-risk, high-reward concepts can be worth some investment, but in my opinion, this does not fall into that category.

Appendix – Quoting Out of Context

Cogni Trax has quoted me in the video on their website as saying, “The Holy Grail of AR Displays.” The video does not make clear that A) I was referring to hard edge occlusion (and not Cogni Trax), and B) I went on to say, “But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics.” The audio of me in the Cogni Trax video, which is rather garbled, comes from the March 30, 2021, AR Show episode, “KARL GUTTAG (KGONTECH) ON MAPPING AR DISPLAYS TO SUITABLE OPTICS (PART 2),” at ~48:55 into the video (the occlusion issue is only briefly discussed).

Below, I have cited (with new highlighting in yellow) the section discussing hard edge occlusion from my November 20, 2019, blog article, which is where Cogni Trax got my “Holy Grail” quote. This section of the article discusses the UA design. That article also discussed using a transmissive LCD for soft edge occlusion about three years before Magic Leap announced the Magic Leap 2 with such a method in July 2022.

Hard Edge (Pixel) Occlusion – Everyone Forgets About Focus

“Hard Edge Occlusion” is the concept of being able to block the real world with sharply defined edges, preferably to the pixel level. It is one of the “Holy Grails” of optical AR. Not having hard edge occlusion is why optical AR images are translucent. Hard Edge Occlusion is likely impossible to solve optically for all practical purposes. The critical thing most “solutions” miss (including US 20190324274) is that the mask itself must be in focus for it to sharply block light. Also, to properly block the real world, the focusing effect required depends on the distance of everything in the real world (i.e., it is infinitely complex).

The most common hard edge occlusion idea suggested is to put a transmissive LCD screen in the glasses to form “opacity pixels,” but this does not work. The fundamental problem is that the screen is so close to the eye that the light-blocking elements are out of focus. An individual opacity pixel will have a little darkening effect, with most of the light from a real-world point in space going around it and into the eye. A large group of opacity pixels will darken as a blurry blob.

Hard edge occlusion is trivial to do with pass-through AR by essentially substituting pixels. But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics. The difficulty of supporting even the flat-world special case is demonstrated by some researchers at the University of Arizona, now assigned to Magic Leap (the PDF at this link can be downloaded for free) shown below. Note all the optics required to bring the real world into focus onto “SLM2” (in the patent 9,547,174 figure) so it can mask the real world and solve the case for everything being masked being at roughly the same distance. None of this is even hinted at in the Apple application.

I also referred to hard edge occlusion as one of the “Holy Grails” of AR in a comment to a Magic Leap article in 2018 citing the ASU design and discussing some of the issues. Below is the comment, with added highlighting in yellow.

One of the “Holy Grails” of AR, is what is known as “hard edge occlusion” where you block light in-focus with the image. This is trivial to do with pass-through AR and next to impossible to do realistically with see-through optics. You can do special cases if all the real world is nearly flat. This is shown by some researchers at the University of Arizona with technology that is Licensed to Magic Leap (the PDF at this link can be downloaded for free: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-24-30539#Abstract). What you see is a lot of bulky optics just to support a real world with the depth of a bookshelf (essentially everything in the real world is nearly flat).

FM: Magic Leap One – Instant Analysis in the Comment Section by Karl Guttag (KarlG) JANUARY 3, 2018 / 8:59 AM

DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)

Introduction – Contrast in Approaches and Technologies

This article will compare and contrast the Vuzix Ultralite, Lumus Z-Lens, and DigiLens Argo waveguide-based AR prototypes I saw at CES 2023. I discussed these three prototypes with SadlyItsBradley in our CES 2023 video. It will also briefly discuss Avegant’s AR/VR/MR 2022 and 2023 presentations about their new smaller LCOS projection engine and the Magic Leap 2’s LCOS design to show some other projection engine options.

It will go a bit deeper into some of the human factors of the DigiLens Argo. This is not to pick on the Argo, but because it has more features, it demonstrates some common traits and issues of trying to support a rich feature set in a glasses-like form factor.

When I quote various specs below, they are all manufacturer’s claims unless otherwise stated. Some of these claims will be based on where the companies expect the product to be in production. No one has checked the claims’ veracity, and most companies typically round up, sometimes very generously, on brightness (nits) and field of view (FOV) specs.

This is a somewhat long article, and the key topics discussed include:

  • MicroLED versus LCOS Optical engine sizes
  • The image quality of MicroLED vs. LCOS and Reflective (Lumus) vs. Diffractive waveguides
  • The efficiency of Reflective vs. Diffractive waveguides with MicroLEDs
  • The efficiency of MicroLED vs. LCOS
  • Glasses form factor (using Digilens Argo as an example)

Overview of the prototypes

Vuzix Ultralite and Oppo Air Glass 2

The Vuzix Ultralite and Oppo Air Glass 2 (top two on the right) each have a 640 by 480 pixel, green-only Jade Bird Display (JBD) MicroLED per eye and were discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7).

They are each about 38 grams in weight, including frames, processing, wireless communication, and batteries. Both are self-contained, with integrated wireless, battery, and processing.

Vuzix developed their own glass diffractive waveguides and optical engines for the Ultralite. They claim a 30-degree FOV with 3,000 nits.

Oppo uses resin plastic waveguides and a MicroLED optical engine developed jointly with Meta Bounds. I had previously seen prototype resin plastic waveguides from other companies for several years, but this is the first time I have seen them in a product getting ready for production. The glasses (described in a 1.5-minute YouTube/CNET video) include microphones and speakers for applications, including voice-to-text and phone calls. They also plan on supporting vision correction with lenses built into the frames. Oppo claims the Air Glass 2 has a 27-degree FOV and outputs 1,400 nits.

Lumus Z-Lens

Lumus’s Z-Lens (third from the top right) supports up to a 2K by 2K full/true color LCOS display with a 50-degree FOV. Its FOV covers 3 to 4 times the area of the other three headsets, so it must output more than 3 to 4 times the total light to achieve the same brightness (nits). It supports about 4.5x the number of pixels of the DigiLens Argo and over 13x the pixels of the Vuzix Ultralite and Oppo Air Glass 2.
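
A quick check of those ratios is sketched below; I approximate FOV “area” as the square of the FOV ratio and take “2K by 2K” as 2048 × 2048 — both my assumptions, not manufacturer statements.

```python
# Rough comparison of the Lumus Z-Lens against the other three prototypes.
z_lens     = {"fov_deg": 50, "pixels": 2048 * 2048}   # "2K by 2K" taken as 2048^2
argo       = {"fov_deg": 30, "pixels": 1280 * 720}
ultralite  = {"fov_deg": 30, "pixels": 640 * 480}
air_glass2 = {"fov_deg": 27, "pixels": 640 * 480}

for name, other in [("Argo", argo), ("Ultralite", ultralite), ("Air Glass 2", air_glass2)]:
    area_ratio = (z_lens["fov_deg"] / other["fov_deg"]) ** 2   # FOV area scales ~ with the square
    pixel_ratio = z_lens["pixels"] / other["pixels"]
    print(f"vs {name}: ~{area_ratio:.1f}x the FOV area, ~{pixel_ratio:.1f}x the pixels")
# vs Argo:        ~2.8x area, ~4.6x pixels
# vs Ultralite:   ~2.8x area, ~13.7x pixels
# vs Air Glass 2: ~3.4x area, ~13.7x pixels
```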

The Z-Lens prototype is a demonstration of display capability and, unlike the other three, is not self-contained and has no battery or processing. A cable provides the display signal and power for each eye. Lumus is an optics waveguide and projector engine company and leaves it to its customers to make full-up products.

Digilens Argo

The DigiLens Argo (bottom, above right) uses a 1280 by 720 full/true color LCOS display. The Argo has many more features than the other devices, with integrated SLAM cameras, GNSS (GPS, etc.), Wi-Fi, Bluetooth, a 48MP color camera (with 4×4 pixel “binning,” like the iPhone 14), voice recognition, batteries, and a more advanced CPU (Qualcomm Snapdragon 2). DigiLens intends to sell the Argo for enterprise applications, perhaps with partners, while continuing to sell waveguides and optical engines as components for higher-volume applications. As the Argo has a much more complete feature set, I will discuss some of the pros and cons of the human factors of the Argo design later in this article.

Through the Lens Images

Below is a composite image from four photographs taken with the same camera (OM-D E-M5 Mark III) and lens (fixed 17mm). The pictures were taken at conferences, handheld, and not perfectly aligned for optimum image quality. The projected display and the room/outdoor lighting have a wide range of brightness between the pictures. None of the pictures have been resized, so the relative FoVs have been maintained, and you get an idea of the image content.

The Lumus Z-lens reflective waveguide has a much bigger FOV, significantly more resolution, and exhibits much better color uniformity with the same or higher brightness (nits). It also appears that reflective waveguides have a significant efficiency advantage with both MicroLEDs (and LCOS), as discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7). It should also be noted that the Lumus Z-lens prototype has only the display with optics and has no integrated processing, communication or battery. In contrast, the others are closer to full products.

A more complex issue is that of power consumption versus brightness. LCOS engines today are much more efficient (by 10x or more) for full-screen bright images than MicroLEDs with similar waveguides. MicroLED’s big power advantage occurs when the content is sparse, as its power consumption is roughly proportional to the average pixel value, whereas, with LCOS, the whole display is illuminated regardless of the content.
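
A toy model of that trade-off is sketched below; the efficiency numbers are illustrative assumptions only, chosen to reflect the ~10x full-screen difference mentioned above, not measured values for any product.

```python
# Toy model: display power vs. content "sparsity" (average pixel value, 0..1).
# Assumed numbers for illustration only:
LCOS_FULL_SCREEN_W = 0.10      # LCOS illumination power, independent of content
MICROLED_FULL_SCREEN_W = 1.0   # MicroLED power at full-screen white (~10x worse)

def lcos_power(avg_pixel_value: float) -> float:
    # The whole panel is illuminated regardless of content.
    return LCOS_FULL_SCREEN_W

def microled_power(avg_pixel_value: float) -> float:
    # Emissive: power scales roughly with the average pixel value.
    return MICROLED_FULL_SCREEN_W * avg_pixel_value

for avg in [1.0, 0.5, 0.1, 0.05, 0.01]:   # 1.0 = full white, 0.01 = sparse arrows/text
    print(f"avg pixel {avg:>4}: LCOS {lcos_power(avg):.3f} W, MicroLED {microled_power(avg):.3f} W")
# In this toy model, MicroLED only wins below an average pixel value of ~0.1 (sparse content).
```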

If and when MicroLEDs support full color, the efficiency of nits-per-Watt will be significantly lower than monochrome green. Whatever method produces full color will detract from the overall electrical and optical efficiency. Additionally, color balancing for white requires adding blue and red light with lower nits-per-Watt.

Some caveats:

  • The Lumus Z-Lens is a prototype and does not have all the anti-reflective and other coatings of a production waveguide. Lumus uses an LCOS device with ~3-micron pixels, which fits 1440 by 1440 pixels within the ~50-degree FOV supported by the optics. Lumus is working with at least one LCOS maker to get a ~2-micron pixel size to support 2K by 2K resolution with the same size display. The image is cut off on the right-hand side by the camera, which was rotated into portrait mode to fit inside the glasses.
  • The Digilens through the lens image is from Photonics West in 2022 (about one year old). Digilens has continued to improve its waveguide since this picture was taken.
  • The Vuzix picture was taken via Vuzix Shield, which uses the same waveguide and optics as the Vuzix Ultralight.
  • The Oppo image was taken at the AR/VR/MR 2023 conference.

Optical Engine Sizes

Vuzix has an impressively small optical engine driving Vuzix’s diffractive waveguides. Seen below left is a comparison of Vuzix’s older full-color DLP engine with an in-development color X-Cube engine and the green MicroLED engine used in the Vuzix Ultralite™ and Shield. In the center below is an exploded view of the Oppo and Meta Bounds glasses (a joint design, as they describe it) with their MicroLED engine, shown in their short CNET YouTube video. As seen in the still from the Oppo video, they have plans to support vision correction built into the glasses.

Below right is the DigiLens LCOS engine, which uses a fairly conventional LCOS design (using Omnivision’s LCOS device, with the driver ASIC showing). The dotted line indicates where the engine blocks off the upper part of the waveguide. This blocked-off area carries over to the Argo design.

The DigiLens Argo, with its more “conventional” LCOS engine, requires a large “brow” above the eye to hide it (more on this issue later). All the other companies have designed their engines to avoid this level of intrusion into the front area of the glasses.

Lumus had developed their 1-D pupil-expanding reflective waveguides for nearly two decades, and these needed a relatively wide optical engine. With the 2-D Maximus waveguide in 2021 (see: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures), Lumus demonstrated their ability to shrink the optical engine. This year, Lumus further reduced the size of the optical engine and its intrusion into the front lens area with their new Z-Lens design (compare the two right pictures below of the Maximus to the Z-Lens).

Shown below are frontal views of the four lenses and their optical engines. The Oppo Air Glass 2 “disguises” the engine within the industrial design of a wider frame (and wider waveguide). The Lumus Z-Lens, with full color and about 3.5 times the FOV area of the others, has about the same frontal intrusion as the green-only MicroLED engines. The Argo (below right) stands out with the large brow above the eye (the rough location of the optical engine is shown with the red dotted line).

Lumus Removes the Need for Air Gaps with the Z-Lens

Another significant improvement with Lumus’s Z-Lens is that unlike Lumus’s prior waveguides and all diffractive waveguides, it does not require an air gap between the waveguide’s surface and any encapsulating plastics. This could prove to be a big advantage in supporting integrated prescription vision correction or simple protection. Supporting air gaps with waveguides has numerous design, cost, and optical problems.

A full-color diffractive waveguide typically has two or three waveguide plates sandwiched together, with air gaps between them plus an air gap on each side of the sandwich. Everywhere there is an air gap, there is also a need for antireflective coatings to remove reflections and improve efficiency.
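
To see why every air-gap surface matters, below is a minimal sketch using the normal-incidence Fresnel formula for uncoated surfaces with an assumed refractive index of 1.5. Real waveguides have coatings, higher-index glass, and the gratings themselves, so this is only meant to show how the surface count compounds.

```python
# Normal-incidence Fresnel reflection at an uncoated surface: R = ((n-1)/(n+1))^2.
n = 1.5
R = ((n - 1) / (n + 1)) ** 2          # ~4% reflected per glass-air surface

# A three-plate diffractive waveguide sandwich with air gaps between the plates
# and on both outer faces has 6 glass-air surfaces in the see-through path.
surfaces = 6
transmission = (1 - R) ** surfaces
print(f"~{R:.1%} reflected per surface, ~{transmission:.0%} transmitted through {surfaces} surfaces")
# -> ~4.0% per surface, ~78% transmitted (before any anti-reflective coatings)
```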

Avegant and Magic Leap Small LCOS Projector Engines

Older LCOS projection engines have historically had size problems. We are seeing new LCOS designs, such as the Lumus Z-lens (above), and designs from Avegant and Magic Leap that are much smaller and no more intrusive into the lens area than the MicroLED engines. My AR/VR/MR 2022 coverage included the article Magic Leap 2 at SPIE AR/VR/MR 2022, which discusses the small LCOS engines from both Magic Leap and Avegant. In our AWE 2022 video with SadlyItsBradley, I discuss the smaller LCOS engines by Avegant, Lumus (Maximus), and Magic Leap.

Below is what Avegant demonstrated at AR/VR/MR 2022 with their small “L” shaped optical engines. These engines have very little intrusion into the front lenses, but they run down the temple of the glasses, which inhibits folding the temple for storage like normal glasses.

At AR/VR/MR 2023, Avegant showed a newer optical design that reduced the footprint of their optics by 65%, including shortening them to the point that the temples can be folded, similar to conventional glasses (below left). It should be noted that what is called a “waveguide” in the Avegant diagram is very different from the waveguides used to show the image in AR glasses; Avegant’s waveguide is used to illuminate the LCOS device. Avegant, in their presentation, also discussed various drive modes of the LEDs to give higher brightness and efficiency with green-only and black-and-white modes. The 13-minute video of Avegant’s presentation is available at the SPIE site (behind SPIE’s paywall). According to Avegant’s presentation, the optics are 15.6mm long by 12.4mm wide, support a 30-degree FOV with 34 pixels/degree, and output 2 lumens in full color and up to 6 lumens in limited-color outdoor mode. According to the presentation, they expect about 1,500 nits with typical diffractive waveguides in the full-color mode, which would roughly double in the outdoor mode.
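
For reference, the waveguide coupling efficiency those figures imply is just a division of the quoted numbers; the outdoor figures are “up to”/“roughly,” so treat this as a ballpark only.

```python
# Implied "nits per lumen" coupling efficiency from Avegant's quoted figures.
full_color_npl = 1500 / 2        # ~750 nits per lumen (about 1,500 nits from 2 lumens)
outdoor_npl    = (2 * 1500) / 6  # ~500 nits per lumen ("roughly double" the nits from up to 6 lumens)
print(f"full color: ~{full_color_npl:.0f} nits/lumen, outdoor: ~{outdoor_npl:.0f} nits/lumen")
```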

The Magic Leap 2 (ML2) takes reducing the optics one step further and puts the illumination LEDs and LCOS on opposite sides of the display’s waveguide (below and described in Magic Leap 2 at SPIE AR/VR/MR 2022). The ML2 claims to have 2,000 nits with a much larger 70-degree FOV.

Transparency (vs. Birdbath) and “Eye Glow”

Transparency

As seen in the pictures above, all the waveguide-based glasses have transparency on the order of 80-90%. This is a far cry from the common birdbath optics, with typically only 25% transparency (see Nreal Teardown: Part 1, Clones and Birdbath Basics). The former Osterhout Design Group (ODG) made birdbath AR glasses popular, first with their R6 and then with the R8 and R9 models (see my 2017 article ODG R-8 and R-9 Optic with OLED Microdisplays), which served as the models for designs such as Nreal and Lenovo’s A3.

ODG Legacy and Progress

Several former ODG designers have ended up at Lenovo, the design firm Pulsar, Digilens, and elsewhere in the AR community. I found pictures of Digilens VP Nima Shams wearing the ODG R9 in 2017 and the Digilens Argo at CES. When I showed the pictures to Nima, he pointed out the progress that had been made. The 2023 Argo is lighter, sticks out less far, has more eye relief, is much more transparent, has a brighter image to the eye, and is much more power efficient. At the same time, it adds features and processing not found on the ODG R8 and R9.

Front Projection (“Eye Glow”)

Another social aspect of AR glasses is front projection, known as “Eye Glow.” Most famously, the Hololens 1 and 2 and the Magic Leap 1 and 2 project much of the light forward. The birdbath optics-based glasses also have front projection issues, but these are often hidden behind additional dark sunglasses.

When looking at the “eye glow” pictures below, I want to caution you that these are random pictures and not controlled tests. The glasses’ displays were at radically different brightness settings, and the ambient light was very different. Also, front projection is typically highly directional, so the camera angle has a major effect (and there was no attempt to search for the worst-case angle).

In our AWE 2022 video with SadlyItsBradley, I discussed how several companies, including Dispelix and DigiLens, are working to reduce front projection. Lumus’s reflective approach has inherent advantages in terms of front projection. The DigiLens Argo (pictures 2 and 3 from the right) has greatly reduced its eye glow. The Vuzix Shield (with the same optics as the Ultralite) has some front projection (and some on my cheek), as seen in the picture below (4th from the left). Oppo appears to have fairly pronounced front projection, as seen in two short videos (video 1 and video 2).

DigiLens Argo Deeper Look

DigiLens has primarily been a maker of diffractive waveguides, but it has made several near-product demonstrations through the years. A few years ago, they went through a major management change (see my 2021 article, DigiLens Visit), and with the new management came changes in direction.

Argo’s Business Model

I’m always curious when a “component company” develops an end product. I asked DigiLens to help clarify their business approaches and received the following information (with my edits):

  1. Optical Solutions Licensing – where we provide solutions to our licensees to build their own waveguides using our scalable printing/contactless copy process. Our licensees can design their own waveguides, which DigiLens’ software tools enable. This business is aimed at higher-volume applications from larger companies, mostly focused on, but not limited to, the consumer side of the head-worn market.
  2. Enterprise/Industrial Products – ARGO is the first product from DigiLens that targets the enterprise and industrial market as a full solution. It will be built to scale and meet its target market’s compliance and reliability needs. It uses DigiLens optical technology in the waveguides and projector and is built by a team with experience shipping thousands of enterprise and industrial glasses from Daqri, ODG, and RealWear.

Image Quality

As I was already familiar with DigiLens’ image quality, I didn’t check it out that much with the ARGO; rather, I was interested in the overall product concept. Over the last several years, I have seen improved image quality, including better uniformity, and progress in addressing the “eye glow” issue (discussed earlier).

For the types of applications in the “enterprise market” that the ARGO is trying to serve, absolute image quality may not be nearly as important as other factors. As I have often said, “Hololens 2 proves that image quality is not that important for the customers that use it” (see this set of articles discussing the Hololens 2’s poor image quality). For many AR markets, the display information consists of simple indicators such as arrows, a few numbers, and lines. In terms of color, it may be good enough if only a few key colors are easily distinguishable.

Overall, DigiLens has similar issues with color uniformity across the field of view as all the other diffractive waveguides I have seen. In the last few years, they have gone from having poor color uniformity to being among the better diffractive waveguides I have seen. I don’t think any diffractive waveguide would be widely considered good enough for movies and good photographs, but they are good enough to show lines, arrows, and text. But let me add a key caveat: what all companies demonstrate are almost certainly cherry-picked samples.

Field of View (FOV)

While the Argo’s 30-degree FOV is considered too small for immersive games, for many “enterprise applications,” it should be more than sufficient. I discussed why very large FOVs are often unnecessary in AR in this blog’s 2019 article FOV Obsession. Many have conflated VR immersion with AR applications that need to support key information with high transparency, light weight, and hands-free operation. As Professor and decades-long AR advocate Thad Starner pointed out, requiring the eye to move too much causes discomfort. I make this point because a very large FOV comes at the expense of weight, power, and cost.

Key Feature Set

The diagram below is from DigiLens on the ARGO and outlines the key features. I won’t review all the features, but I want to discuss some of their design choices. Also, I can’t comment on the quality of their various features (SLAM, WiFi, GPS, etc.) as A) I haven’t extensively tried them, and B) I don’t have the equipment or expertise. But at least on the surface, in terms of feature set, the Argo compares favorably to the Hololens 1 and 2, albeit with a smaller FOV than the Hololens 2 but with much better image quality.

Audio Input for True Hands-Free Operation

As stated above, DigiLens’ management team includes experience from RealWear. RealWear acquired a lot of technology from Kopin’s Golden-i. Like the ARGO, Golden-i was a system-product outgrowth from a display component maker (Kopin), with a legacy going back to before 2011, when I first saw Golden-i. Even though Kopin was a display device company, Golden-i emphasized voice recognition with high accuracy, even in noisy environments. Note the inclusion of five microphones on the ARGO.

Most realistic enterprise-use models for AR headsets include significant, if not exclusive, hands-free operation. The basic idea of mounting a display on the user’s head is so they can keep their hands free. You can’t be working with your hands while holding a controller.

While hand-tracking cameras remove the need for a physical controller, they do not free up the hands, as the hands are busy making gestures rather than performing the task. In the implementations I have tried thus far, gestures are even worse than physical controllers in terms of distraction, as they force the user to focus on the gestures to make them (barely, sometimes) work. One of the most awful experiences I have had in AR was trying to type in a long WiFi password (with the characters hidden by asterisks as I typed) using gestures on a Hololens 1 (my hands hurt just thinking about it – it was a beyond-terrible user experience).

Similarly, as I discussed with SadlyItsBradley about Meta’s BCI wristband, using nerve and/or muscle-detecting wristbands still does not free up the hands. The user still has their hands and mental focus slaved to making the wristband work.

Voice control seems to have big advantages for hands-free operation if it can work accurately in a noisy environment. There is a delicate balance between not recognizing words and phrases, false recognition or activation, and becoming too burdensome with the need for verification.

Skull-Gripping “Glasses” vs. Headband or Open Helmet

In what I see as a futile attempt to sort of look like glasses (big ugly ones at that), many companies have resorted to skull-gripping features. Looking at the skull profile (right), there really isn’t much that will stop the forward rotation of front-heavy AR glasses unless they wrap around the lower part of the occipital bone at the back of the head.

Both the ARGO (below left) and Panasonic’s (Shiftall division) VR headsets (right two images below) take the concept of skull-grabbing glasses to almost comic proportions. Panasonic includes a loop for a headband, and some models also include a forehead pad. The Panasonic Shiftall uses pads pressed against the front of the head to support the front, while the ARGO uses an oversized nose bridge, as found on many other AR “glasses.”

The ARGO supports a headband option, but it requires the skull-grabbing ends of the temples to be removed and replaced by the headband.

As anyone who knows anything about human factors with glasses knows, the ears and the nose cannot support much weight, and the ears and nose will get sore if much weight is supported for a long time.

Large soft nose pads are not an answer. There is still too much weight on the nose, and the variety of nose shapes makes them not work well for everyone. In the case of the Argo, the large nose pads also interfere with wearing glasses; the nose pads are located almost precisely where the nose pads for glasses would go.

Bustle/Bun on the Back Weight Distribution – Liberating the Design

As was pointed out by Microsoft with the Hololens 2 (HL2), weight distribution is also very important. I don’t know if they were the first with what I call “the bustle on the back” approach, but it was a massive improvement, as I discussed in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! Several others have used a similar approach, most notably the Meta Quest Pro VR (which has very poor passthrough AR, as I discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough). Another feature of the HL2 ergonomics is that the forehead pad takes the weight off the nose and frees up that area to support ordinary prescription glasses.

The problem with the sort-of-glasses form factor so common in most AR headsets today is that it locks the design into other poor decisions, not the least of which is putting too much weight too far forward. Once it is realized that these are not really glasses, it frees up other design features for improvement. Weight can be taken out of the front and moved to the back for better weight distribution.

ARGO’s Eye-Relief Missed Opportunity for Supporting Normal Glasses

Perhaps the best ergonomic/user feature of the Hololens 1 & 2 over most other AR headsets is that they have enough eye relief (distance from the waveguide to the eye) and space to support most normal eyeglasses. The ARGO’s waveguide and optical design have enough eye relief to support wearing most normal glasses, but they still require specialized inserts.

You might notice some “eye glow” in the CNET picture (above right). I think this is not from the waveguide itself but is a reflection off of the prescription inserts (likely, they don’t have good anti-reflective coatings).

A big part of the problem with supporting eyeglasses goes back to trying to maintain the fiction of a “glasses form factor.” The nose bridge support will get in the way of the user’s glasses, yet it is required to support the headset. Additionally, hardware in the “brow” over the eyes, which may also interfere with glasses, could have been moved elsewhere.

Another technical issue is the location and shape of their optical engine. As discussed earlier, the Digilens engine shape causes issues with jutting into the front of glasses, resulting in a large brow over the eyes. This brow, in turn, may interfere with various eyeglasses.

It looks like the Argo started with the premise of looking like glasses, putting form ahead of function. As it turns out, they have what is, for me, an unhappy compromise that neither looks like glasses nor has the Hololens 2’s advantage of working with most normal glasses. Starting with comfort and functionality as primary would also have led to a different form factor for the optical engine.

Conclusions

While MicroLEDs may hold many long-term advantages, they are not ready to go head-to-head with LCOS engines regarding image quality and color. Multiple companies are showing LCOS engines that are more than competitive in size and shape with the small MicroLED engines. The LCOS engines also support much higher resolutions and larger FOVs.

Lumus, with their Z-Lens 2-D reflective waveguides, seems to have a big advantage in image quality and efficiency over the many diffractive waveguides. Allowing the Z-lens to be encased without an air gap adds another significant advantage.

Yet today, most waveguide-based AR glasses use diffractive waveguides. The reasons include that there are many sources of diffractive waveguides and that companies can make their own custom designs. In contrast, Lumus controls its reflective waveguide I.P. Additionally, Lumus has only recently developed 2-D reflective waveguides, which dramatically reduce the size of the projection engine driving their waveguides. But the biggest reason for using diffractive waveguides is that Lumus waveguides are thought to be more expensive; however, Lumus and their new manufacturing partner Schott claim that they will be able to make waveguides at competitive or better costs.

A combination of cost, color, and image quality will likely limit MicroLEDs to use in ultra-small and light glasses with low amounts of visual content, known as “data snacking” (think arrows and simple text, not web browsing and movies). This market could be attractive in enterprise applications. I’m doubtful that consumers will be very accepting of monochrome displays. I’m reminded of a quote from an IBM executive in the 1980s who, when asked whether resolution or color was more important, said: “Color is the least necessary and most desired feature in a display.”

Not to pick on the Argo, but it demonstrates many of the issues with making a full-featured device in a glasses form factor; with SLAM (using multiple spatially separated cameras), processing, communication, batteries, etc., the overall design strays away from looking like glasses. As I wrote in my 2019 article, it Starts with Ray-Ban®, Ends Up Like Hololens.


Meta Quest Pro (Part 2) – Block Diagrams & Teardown of Headset and Controller

Introduction

Limas Lin (LinkedIn Contact) is a reader of my blog and wanted to share his block diagram and teardown of the Meta Quest Pro. He sent me these diagrams over a month ago, but I have been busy with videos and blog articles about the companies I met with at CES and AR/VR/MR 2023.

I have more to write about both the Meta Quest Pro and the recent company meetings, but I wanted to get out what I think are excellent diagrams of the Meta Quest Pro. The diagrams show the component locations, and the teardown photos have most, if not all, of the key components identified.

I don’t have anything more to say about these diagrams, nor do I have a conclusion, as I think the images speak for themselves. I know it is a major effort to find all the components and present them in such a clear and concise manner, and I want to thank Limas Lin for sharing. You can click on each diagram for a higher-resolution image.

Meta Quest Pro Headset Diagrams

Meta Quest Pro Controller Diagrams


Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6)

[March 4th, 2023 Corrections/Updates – poLight informed me of some corrections, better figures, and new information that I have added to the section on poLight. Cambridge Mechatronics informed me about their voltage and current requirements for pixel-shifting (aka wobulation).]

Introduction

For this next entry in my series on companies I met with at CES or Photonics West’s (PW) AR/VR/MR show in 2023, I will be covering two different approaches to what I call “optics micromovement.” Cambridge Mechatronics (CML) uses shape memory alloy (SMA) wires to move optics and devices (including haptics). poLight uses piezoelectric actuators to bend thin glass over their flexible optical polymer. I met with both companies at CES 2023, and they both provided me with some of their presentation material for use in this article.

I would also like to point out that one alternative to moving lenses for focusing is electrically controlled LC lenses. In prior articles, I discussed implementations of LC lenses by Flexenable (CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs); Meta (Facebook) with some on DeepOptics (Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC? and Meta’s Cambria (Part 2): Is It Half Dome 3?); and Magic Leap with some on DeepOptics (Magic Leap 2 (Pt. 2): Possible Answers from Patent Applications); and DeepOptics (CES 2018 (Part 1 – AR Overview).

After discussing the technologies from CML and poLight, I will get into some of their new uses within AR and VR.

Beyond Camera Focusing and Optical Image Stabilization Uses of Optics Micromovement in AR and VR

Both poLight and CML have cell phone customers using their technology for camera autofocus and optical image stabilization (OIS). This type of technology will also be used in the various cameras found on AR and VR headsets. poLight’s TLens is known to be used in the Magic Leap 2 (as reported by Yole Développement) and in Sharp’s CES 2023 VR prototype (as reported by SadlyItsBradley).

While the potential use of their technology in AR and VR camera optics is obvious, both companies are looking at other ways their technologies could support Augmented and Virtual Reality.

Cambridge Mechatronics (CML) – How it works

Cambridge Mechatronics is an engineering firm that makes custom designs for miniature machines using shape memory alloy (SMA). Their business is in engineering these machines for their customers. The machines can move optics or other objects. The SMA wires contract when heated by electricity moving through them (below left) and act on spring structures to cause movement as the wires contract or relax. Using multiple wires in various structures can produce more complex movement. Another characteristic of SMA wire is that as it heats and contracts, the wire becomes thicker and shorter, which reduces its resistance. CML uses this change in resistance as feedback for closed-loop control (below right).
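
Below is a minimal sketch of what using the wire’s resistance as position feedback might look like conceptually; the plant model, gains, and resistance values are all made-up illustrative numbers of my own, not CML’s controller.

```python
# Toy closed-loop position control of an SMA-wire actuator using the wire's own
# resistance as the feedback signal (resistance drops as the wire contracts).
# All constants are illustrative, not CML values.
R_COLD, R_HOT = 10.0, 9.0   # wire resistance (ohms): fully relaxed vs. fully contracted

def resistance_to_position(r_ohms: float) -> float:
    """Estimate stroke position (0 = relaxed, 1 = contracted) from measured resistance."""
    return (R_COLD - r_ohms) / (R_COLD - R_HOT)

def simulate(target: float, steps: int = 200, kp: float = 10.0, dt: float = 0.01) -> float:
    position = 0.0                                         # actual stroke (not directly visible)
    for _ in range(steps):
        r_measured = R_COLD - position * (R_COLD - R_HOT)  # what the drive electronics sense
        error = target - resistance_to_position(r_measured)
        heat = max(0.0, kp * error)                        # heating drive (can heat, not cool)
        # Crude plant model: heating pulls toward contraction; passive cooling relaxes.
        position += dt * (5.0 * heat - position)
        position = min(max(position, 0.0), 1.0)
    return position

print(f"target 0.60 -> settled at ~{simulate(0.60):.2f} of full stroke")
# A proportional-only loop leaves a small steady-state offset; a real controller
# would presumably add an integral term and a better thermal model.
```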

Shown (below right) is a 4-wire actuator that can move horizontally, vertically, or rotate (arrows pointing at the relaxed wires). The SMA wires enable a very thin structure. Below is a still from a CML video showing this type of actuator’s motion.

Below is an 8-wire (2 crossed wires on four sides) mechanism for moving a lens in X, Y, and Z and Pitch and Yaw to control focusing and optical image stabilization (OIS). Below are five still frames from a CML video on how the 8-wire mechanism works.

CML is developing some new SMA technology called “Zero Hold Power.” With this technology, they only need to apply power when moving the optics. They suggest this technology would be useful in AR headsets to adjust for temperature variations in optics and to help address vergence accommodation conflict.

CML’s SMA wire method makes miniature motors and machines that may or may not include optics. With various configurations of wires, springs, levers, ratcheting mechanisms, etc., all kinds of different motions are possible. The issue becomes the mass of the “payload” and how fast the SMA wires can respond.

CML expects that when continuously pixel shifting, they will take less than 3.2V at ~20mA.

poLight – How It Works

poLight’s TLens uses piezoelectric actuators to bend a thin glass membrane over poLight’s special optically clear, index-matched polymer (see below). This bending changes the lens’s focal point, similar to how the human eye works. The TLens can also be combined with other optics (below right) to support OIS and autofocus.

The GIF animation (right) shows how the piezo actuators can bend the top glass membrane to change the lens in the center for autofocus, tilt the lens to shift the image for OIS, or perform both autofocus and OIS at once.

poLight also proposes supporting “supra” resolution (pixel shifting) for MicroLEDs by tilting flat glass with poLight’s polymer using piezo actuators to shift pixels optically.

One concern is that poLight’s actuators require up to 50 Volts. Generating higher voltages typically comes with some power loss and more components. [Corrected – March 3, 2023] poLight’s companion driver ASIC (PD50) has built-in EMI reduction that minimizes external components (it only requires ext. capacitive load) and power/current consumption is kept very low (TLens® being an optical device, consumes virtually no power, majority of <6mW total power is consumed by our driver ASIC – see table below).

poLight says that the TLens is about 94% transparent. The front aperture diameter of the TLens, while large enough for small sensor (like a smartphone) cameras, seems small at just over 2mm. The tunable wedge concept could have a much wider aperture as it does not need to form a lens. While the poLight method may result in a more compact design, the range of optics would seem to be limited in both the size of the aperture and how much the optics change.

Uses for Optics Micromovement in AR and VR beyond cameras

Going beyond the established camera uses, including autofocus and OIS, outlined below are some of the uses for these devices in AR and VR:

  • Variable focus, including addressing vergence accommodation conflict (VAC)
  • Super-resolution – shifting the display device or the optic to improve the effective resolution
  • Aiming and moving cameras:
    • When doing VR with camera-passthrough, there are human factor advantages to having the cameras positioned and aimed the same as the person’s eyes.
    • For SLAM and tracking cameras, more area could be covered with higher precision if the cameras rotate.
  • I discussed several uses for MicroLED pixel shifting in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology (see the sketch after this list):
    • Shifting several LEDs to the same location to average their brightness and correct for any dead or weak pixels should greatly improve yields.
    • Shifting spatial color subpixels (red, green, and blue) to the same location for a full-color pixel. This would be a way to reduce the effective size of a pixel and “cheat” the etendue issue caused by a larger spatial color pixel.
    • Improve resolution, as the MicroLED emission area is typically much smaller than the pitch between pixels. There might be no overlap when shifting, thus giving the full resolution advantage. This technique could allow using fewer physical pixels with fewer connections, but there will be a tradeoff in the maximum brightness that can be achieved.
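
Below is a minimal sketch of the brightness-averaging/dead-pixel idea in the first of those pixel-shifting bullets; it is my own toy simulation with made-up nonuniformity numbers, not Porotech’s or anyone else’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MicroLED panel: 10% random per-LED brightness variation plus one dead pixel.
panel = 1.0 + 0.1 * rng.standard_normal((64, 64))
panel[20, 30] = 0.0                      # dead LED

def render_with_shifts(panel, shifts):
    """Average the frames produced by shifting the panel so several different
    physical LEDs land on the same angular position over time."""
    frames = [np.roll(panel, shift, axis=(0, 1)) for shift in shifts]
    return np.mean(frames, axis=0)

shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]          # 4-position pixel shift
averaged = render_with_shifts(panel, shifts)

print(f"brightness std: {panel.std():.3f} -> {averaged.std():.3f}")   # nonuniformity reduced
print(f"worst pixel:    {panel.min():.2f} -> {averaged.min():.2f}")   # dead pixel partly hidden
```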

Conclusions

It seems clear that future AR and VR systems will require changing optics at a minimum for autofocusing. There is also the obvious need to support focus-changing optics for VAC. Moving/changing optics will find many other uses in future AR and VR systems.

Between poLight and Cambridge Mechatronics (CML), it seems clear that CML’s technology is adaptable to a much wider range of types and sizes of motion. For example, CML could handle the bigger lenses required for VAC in VR. poLight appears to have an advantage in size for small cameras.

The post Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6) first appeared on KGOnTech.


CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs

Introduction – Combining 2023’s CES and AR/VR/MR

As I wrote last time, I met with over 30 companies between CES and SPIE’s AR/VR/MR conferences, about 10 of them twice. Also, since I started publishing articles and videos with SadlyItsBradley on CES, I have received information about other companies, corrections, and updates.

FlexEnable is developing technology that could affect AR, VR, and MR. FlexEnable offers an alternative to Meta Materials’ (not to be confused with Meta/Facebook) electronic dimming technology. Soon after publishing CES 2023 (Part 1) – Meta Materials’ Breakthrough Dimming Technology, I learned about FlexEnable. So, to a degree, this article is an update on the Meta Materials article.

FlexEnable also has a liquid crystal electronic lens technology. This blog has discussed Meta/Facebook’s interest in electronically switchable lens technology in Imagine Optix Bought By Meta – Half Dome 3’s Varifocal Tech – Meta, Valve, and Apple on Collision Course? and Meta’s Cambria (Part 2): Is It Half Dome 3?

FlexEnable is also working on biaxially curved LCD technology. In addition to direct display uses, the ability to curve a display as needed will find uses in AR and VR. Curved LCDs could be particularly useful in very wide FOV systems. I briefly discussed Red 6’s helmet having a curved LCD in our AR/VR/MR 2022 video with SadlyItsBradley.

FlexEnable – Flexible/Moldable LC for Dimming, Electronic Lenses, Embedded Circuitry/Transistors, and Curved LCD

FlexEnable has many device structures for making interesting optical technologies that combine custom liquid crystal (LC), tri-acetyl cellulose (TAC) clear sheets, polymer transistors, and electronic circuitry. While FlexEnable has labs to produce prototypes, its business model is to design, develop, and adapt its technologies to its customers’ requirements for transfer to a manufacturing facility.

TAC films are often used in polarizers because they have high light transmittance and low birefringence (variable retardation and, thus, change in polarization). Unlike most plastics, TAC can retain its low birefringence when flexed or heated to its glass point (becomes rubbery but not melted) and molded to a biaxial curve. By biaxially curving, they can match the curvature of lenses or other product features.

FlexEnable’s Biaxially Curvable Dimming

Below is the FlexEnable approach to dimming, which is similar to how a traditional glass LC device is made. The difference is that they use TAC films to enclose the LC instead of glass. FlexEnable has formulated a non-polarization-based LC that can either darken or lighten when an electric field is applied (the undriven state can be transparent or dark). For AR, a transparent state, when undriven, would normally be preferred.

To form a biaxially curved dimmer, the TAC material is heated to its glass point (around 150°C) for molding. Below is the cell structure and an example of a curved dimmer in its transparent and dark state.

FlexEnable biaxially shapeable electronic dimming

The Need for Dimming Technology

As discussed in CES 2023 (Part 1) – Meta Materials’ Breakthrough Dimming Technology, there is a massive need in optical AR to support electronically controlled dimming that A) does not require light to be polarized, and B) has a highly transparent state when not dimming. Electronic dimming supports AR headset use in various ambient light conditions, from outside in the day to darker environments. It will make the virtual content easier to see without blasting very bright light to the eyes. Not only will it reduce system power, but it will also be easier on the eyes.

Magic Leap has demonstrated the usefulness of electronic dimming with and without segmentation (also known as soft edge occlusion or pixelated dimming) with its Magic Leap 2 (discussed with SadlyItsBradley). Segmented dimming allows the light blocking to be selective and more or less track the virtual content, making it look more solid. Because the segmented dimming is out of focus, it can only perform “soft edge occlusion,” where it dims general areas. “Hard-edge occlusion,” which would selectively dim the real world for each pixel of the virtual image, appears impossible with optical AR (but is trivial in VR with camera passthrough).

The biggest problem with the ML2 approach is that it uses polarization-based dimming, which blocks about 65% of the light in its most transparent state (and ~80% after the waveguide). I discussed this problem in Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users. The desire (I would say need, as discussed in the Meta Materials article here) for light blocking in AR is undeniable, but blocking 80% of the light in the most transparent state is unacceptable in most applications. Still, Magic Leap has been demonstrating that soft edge occlusion improves the virtual image.
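As a rough illustration of how those losses compound (the waveguide number below is my assumption for illustration, not a Magic Leap specification), a dimmer that passes ~35% of real-world light in its clearest state combined with optics passing ~60% leaves only about 20% of the real world reaching the eye, consistent with the ~80% blockage figure:

```python
# Rough, illustrative compounding of real-world light losses in optical AR with a
# polarization-based dimmer. The waveguide/optics figure is assumed, not measured.

dimmer_clear_state = 0.35       # ~65% of light blocked by the dimmer at its clearest
waveguide_and_optics = 0.60     # assumed transmission of the waveguide stack

total = dimmer_clear_state * waveguide_and_optics
print(f"Real-world light reaching the eye: {total:.0%}")   # ~21%, i.e., ~80% blocked
```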

Some of the possible dimming ranges

Dimming Range and Speed

Two main factors affect the darkening range and switching speed: the LC formulation and the cell gap thickness. For a given LC formula, the thicker the gap, the more light it will block in both the transmissive and the dark state.

As with most LC materials, the switching speed is roughly inversely proportional to the square of the cell gap thickness. For example, if the cell gap is half as thick, the LC will switch about four times faster. FlexEnable is not yet ready to specify switching speeds.

The chart on the right shows some currently possible dimming ranges with different LC formulas and cell thicknesses.

Segmented/Pixelated Dimming

Fast switching speeds become particularly important for supporting segmented dimming (ex., Magic Leap 2) because the dimming needs to switch about as fast as the display. Stacking two thin cells in series could give both faster switching and a larger dynamic range, since the light blocking of the stack would be roughly the square of a single cell’s (see the sketch below).
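Below is a minimal sketch of that stacked-cell tradeoff with made-up numbers (not FlexEnable data): two half-gap cells each switch roughly four times faster, and the stack’s transmission in any state is roughly the product (square) of a single cell’s, which deepens the dark state but also costs some clear-state transparency.

```python
# Illustrative tradeoff of stacking two half-gap LC dimming cells in series.
# Assumptions: switching time scales with the square of the cell gap, and the
# stack's transmission is the product of the cells' transmissions. Numbers are
# hypothetical, not FlexEnable specifications.

def switching_time(gap_um, k=1.0):
    """Toy model: LC switching time proportional to the square of the cell gap."""
    return k * gap_um ** 2

single_gap_um = 10.0                        # hypothetical single-cell gap
print(switching_time(single_gap_um) / switching_time(single_gap_um / 2))  # 4.0x faster

clear, dark = 0.80, 0.05                    # hypothetical single-cell transmissions
print(f"{clear**2:.2f} clear, {dark**2:.4f} dark for the two-cell stack")
```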

FlexEnable supports passive and active (transistor) circuitry to segment/pixelate and control the dimming.

Electronically Controlled Lenses

FlexEnable is also developing what are known as GRIN (gradient index) LC lenses. With this type of LC, the electric field changes the LC’s index of refraction to create a switchable lens effect. The index-changing effect is polarization specific, so controlling unpolarized light requires a two-layer sandwich (see below left). As evidenced by patent applications, Meta (Facebook) has been studying GRIN and Pancharatnam–Berry Phase (PBP) electronically switchable lenses (for more on the difference between GRIN and PBP switchable lenses, see the Appendix). Meta application 2020/0348528 (Figs. 2 and 12, right) shows a GRIN-type lens with a Fresnel electrode pattern (what they call a segmented phase profile or SPP). The same application also discusses PBP lenses.

FlexEnable (left) and Meta Patent Application 2020/0348528 Figs. 2 and 12 (right)

The switchable optical power of the GRIN lens can be increased by making the cell gap thicker, but as stated earlier, the LC switching speed falls roughly with the square of the cell gap thickness. So instead, a Fresnel-like approach can be used, as seen diagrammatically in the Meta patent application figure (above right). This results in a thinner, faster-switching lens, but with Fresnel-type optical issues.
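One way to see why the Fresnel-style segmentation helps is to wrap the ideal lens phase profile so it never exceeds the small phase a thin LC cell can provide. Below is a generic thin-lens sketch of that idea (my illustration with assumed numbers, not Meta’s or FlexEnable’s actual electrode design):

```python
# Sketch of why a Fresnel/segmented electrode pattern lets a thin GRIN LC cell
# mimic a thicker lens: wrap the ideal parabolic phase profile modulo the maximum
# phase the thin cell can impart. Generic illustration with assumed values.

import numpy as np

wavelength_um = 0.55                  # green light
focal_mm = 500.0                      # ~2 diopter lens, hypothetical
aperture_mm = 20.0
max_cell_phase = 2 * np.pi            # assumed maximum phase from the thin cell

r_um = np.linspace(0, aperture_mm / 2, 1000) * 1000
phi_ideal = -np.pi * r_um ** 2 / (wavelength_um * focal_mm * 1000)  # thin-lens phase
phi_wrapped = np.mod(phi_ideal, max_cell_phase)   # Fresnel-style segmented profile

print(f"Ideal profile spans about {abs(phi_ideal).max() / (2 * np.pi):.0f} waves of phase")
print(f"Wrapped profile needs only {phi_wrapped.max() / (2 * np.pi):.1f} wave from the cell")
```

The wrapped profile trades a thick, slow cell for many phase-reset boundaries, which is where the Fresnel-type image artifacts come from.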

When used in VR (ex., Meta’s Half Dome 3), the light can be polarized, so only one layer is required per switchable lens.

There is a lot of research in the field of electronically switchable optics. DeepOptics is a company that this blog has referenced a few times since I saw them at CES 2018.

Miniature Electromechanical Focusing – Cambridge Mechatronics and poLight

At CES, I met with Cambridge Mechatronics (CML) and poLight, which make miniature electromechanical focusing and optical image stabilization devices used in cell phone and AR cameras. CML uses shape memory alloy wire to move conventional optics for focusing and stabilization. poLight uses piezoelectric actuators to bend a clear deformable membrane over a clear but soft optical material to form a variable lens. They can also tilt a rigid surface against the soft optical material for optical image stabilization and pixel shifting (often called wobulation). I plan to cover both technologies in more detail in a future article, but I wanted to mention them here as alternatives to LC-controlled variable focus.

Polymer Transistors and Circuitry

FlexEnable has also developed polymer semiconductors that they claim perform better than amorphous silicon transistors (typically used in flat panel displays). Higher performance translates into smaller transistors. These transistors can be used in an active matrix to control higher-resolution devices.

Biaxially Curved LCD

Combining FlexEnable’s technologies, including its curvable LC cells, circuitry, and polymer semiconductors, enables it to make biaxially curved LCD prototype displays (right).

Curved displays and Very Wide FOV

Curved displays become advantageous in making very wide FOV displays. At AWE 2022, Red 6 gave private demonstrations (discussed briefly in my video with SadlyItsBradley) of a 100-degree FOV military AR headset with no pupil swim (image distortion as the eye moves) that incorporates a curved LCD. Pulsar, an optical design consulting company, developed the concept of using a curved display and the optics for the new Red 6 prototype. To be clear, Red 6/Pulsar used a curved glass LCD, not one from FlexEnable, but it demonstrates the advantage of curved displays.

Conclusions

In the near term, I find the non-polarized electronic dimming capabilities most interesting for AR. While FlexEnable doesn’t claim to have the light-to-dark range of Meta Materials, they appear to have enough range, particularly on the transparent end, for some AR applications. We must wait to see if the switching speeds are fast enough to support segmented dimming.

Having electronic dimming in a film that can be biaxially curved into a design will be seen by many as an advantage over Meta Materials’ rigid, lens-like dimming technology. Currently, at least on the specs, Meta Materials has demonstrated a much wider dynamic range from the transparent to the dark state. I expect FlexEnable’s LC characteristics to continue to improve.

Electronically changeable lenses are seen as a way to address vergence accommodation conflict (VAC) in VR (such as with Meta’s Half-Dome 3). They would be combined with eye tracking or other methods to move the focus based on where the user is looking. Supporting VAC in optical AR would be much more complex: to keep the focus of the real world from changing, a pre-compensation switchable lens would have to cancel out the effect on the real-world view. This complexity will likely prevent switchable lenses from being used for VAC in optical AR anytime soon.

Biaxially curved LCDs would seem to offer optical advantages in very wide FOV applications.

Appendix: GRIN vs. Pancharatnam-Berry phase lenses

Simply put, with a GRIN lens, the LC itself acts as the lens; the voltage across the LC and the LC’s thickness determine how the lens behaves. Pancharatnam-Berry phase (PBP) lenses instead use a uniform LC shutter to change the polarization of the light, which controls whether a film with the lens function recorded in it acts on the light or not. As stated earlier, Meta has been considering both GRIN and PBP lenses (for example, both are shown in Meta application 2020/0348528).

For more on how GRIN lenses work, see Electrically tunable gradient-index lenses via nematic liquid crystals with a method of spatially extended phase distribution.

For more on PBP lenses, see Augmented reality near-eye display using Pancharatnam-Berry phase lenses and my article discussing Meta’s use of them in the Half-Dome 3.

GRIN lenses don’t require light to be first polarized, but they require a sandwich of two cells. PBP in an AR application would require the real-world light to be polarized, which would lose more than 50% of the light and cause issues with looking at polarized light displays such as LCDs.

The PBP method would likely allow more complex lens functions to be recorded in the films. The Meta Half-Dome 3 used a series of PBP lenses with binary-weighted lens functions (see below and the short sketch that follows).
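To illustrate the binary-weighted idea in general terms (my sketch of the concept, not Meta’s exact Half-Dome 3 design), suppose each switchable PBP element contributes plus or minus its optical power depending on the polarization entering it; a few binary-weighted elements then give many evenly spaced focal settings:

```python
# Generic illustration of a binary-weighted switchable lens stack. Assumption:
# each PBP element contributes +P or -P diopters depending on the incoming
# polarization; the element powers here are hypothetical.

from itertools import product

element_powers = [0.25, 0.5, 1.0]   # hypothetical binary-weighted powers in diopters

totals = sorted({sum(sign * p for sign, p in zip(signs, element_powers))
                 for signs in product((+1, -1), repeat=len(element_powers))})
print(totals)   # [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75] -> 8 focus steps
```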

Meta patent application showing the use of multiple PBP lenses (link to article)

The post CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs appeared first on KGOnTech.

CES 2023 SadlyIsBradley and KGOnTech Joint Review Video Series (Part 1)

New Video Series on CES 2023

Brad Lynch of the SadlyItsBradley YouTube Channel and I sat down for over 2 hours a week after CES and recorded our discussion of more than 20 companies one or both of us met with at CES 2023. Today, Jan. 26, 2023, Brad released a 23-minute part 1 of the series. Brad is doing all the editing while I did much of the talking.

Brad primarily covers VR, while this blog mostly covers optical AR/MR. Our two subjects meet when we discuss “Mixed Reality,” where the virtual and the real world merge.

Brad’s title for part 1 is “XR at CES: Deep Dives #1 (Magic Leap 2, OpenBCI, Meta Materials).” While Brad describes the series as a “Deep Dive,” I, as an engineer, consider it more of an “overview.” It will take many more days to complete my blog series on CES 2023. This video series briefly discusses many of the same companies I plan to write about in more detail on this blog, so consider it a look ahead at some future articles.

Brad’s description of Part 1 of the series:

There have been many AR/VR CES videos from my channel and others, and while they gave a good overview of the things that could be seen on the show floor and in private demoes, many don’t have a technical background to go into how each thing may work or not work

Therefore I decided to team up with retired Electrical Engineer and AR skeptic, Karl Guttag, to go over all things XR at CES. This first part will talk about things such as the Magic Leap 2, Open BCI’s Project Galea, Meta Materials and a few bits more!

Brad also has broken the video into chapters by subject:

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs. VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

That’s it for today. Brad expects to publish about 2 to 3 videos in the next week. I will try to post a brief note as Brad publishes each video.

The post CES 2023 SadlyIsBradley and KGOnTech Joint Review Video Series (Part 1) appeared first on KGOnTech.
