AWE 2024 Panel: The Current State and Future Direction of AR Glasses

29 June 2024 at 23:22

Introduction

At AWE 2024, I was on a panel discussion titled “The Current State and Future Direction of AR Glasses.” Jeri Ellsworth, CEO of Tilt Five, Ed Tang, CEO of Avegant, Adi Robertson, Senior Reporter at The Verge, and I were on the panel, with Jason McDowall of The AR Show moderating. Jason did an excellent job of moderating and keeping the discussion moving. Still, with only 55 minutes, including questions from the audience, we could only cover a fraction of the topics we had considered discussing. I’m hoping to reconvene this panel sometime. I also want to thank Dean Johnson, Associate Professor at Western Michigan University, who originated the idea and helped me organize this panel. AWE’s video of our panel is available on YouTube.

First, I will outline what was discussed in the panel. Then, I want to follow up on small FOV optical AR glasses and some back-and-forth discussions with AWE Legend Thad Starner.

Outline of the Panel Discussion

The panel covered many topics, and below, I have provided a link to each part of our discussion and added additional information and details for some of the topics.

  • 0:00 Introductions
  • 2:19 Apple Vision Pro (AVP) and why it has stalled. It has been widely reported that AVP sales have stalled. Just before the conference, The Information reported that Apple had suspended Vision Pro 2 development and is now focused on a lower-cost version. I want to point out that a 1984 128K Mac would cost over $7,000 adjusted for inflation, and the original 1977 Apple II 4K computer (without a monitor or floppy drive) would cost about $6,700 in today’s dollars. I contend that utility, not price, is the key problem with AVP sales volume and that Apple is thus drawing the wrong conclusion.
  • 7:20 Optical versus Passthrough AR. The panel discusses why their requirements are so different.
  • 11:30 Mentioned Thad Starner and the desire for smaller-FOV optical AR headsets. It turns out that Thad Starner attended our panel, but as I later found out, he arrived late and missed my mentioning him. Thad later questioned the panel. In 2019, I wrote the article FOV Obsession, which discussed Thad’s SPIE AR/VR/MR presentation about smaller FOVs. Thad is a Georgia Institute of Technology professor and a part-time Staff Researcher at Google (including on Google Glass). He has continuously worn AR devices since his research work at MIT’s Media Lab in the 1990s.
  • 13:50 Does “tethering make sense” with cables or wirelessly?
  • 20:40 Does an AR device have to work outside (in daylight)?
  • 26:49 The need to add displays to today’s Audio-AI glasses (e.g., Meta Ray-Ban Wayfarer).
  • 31:45 Making AR glasses less creepy?
  • 35:10 Does it have to be a glasses form factor?
  • 35:55 Monocular versus Biocular
  • 37:25 What did Apple Vision Pro get right (and wrong) regarding user interaction?
  • 40:00 I make the point that eye tracking and gesture recognition on the “Apple Vision Pro is magical until it is not,” paraphrasing Adi Robertson, and I then added, “and then it is damn frustrating.” I also discuss that “it’s not truly hands-free if you have to make gestures with your hands.”
  • 41:48 Waiting for the Superman [savior] company. And do big companies help or crush innovation?
  • 44:20 Vertical integration (Apple’s big advantage)
  • 46:13 Audience Question: When will AR glasses replace a smartphone (enterprise and consumer)
  • 49:05 What is the first use case to break 1 million users in Consumer AR?
  • 49:45 Thad Starner – “Bold Prediction” that the first large application will be with small FOV (~20 degrees), monocular, and not centered in the user’s vision (off to the ear side by ~8 to 20 degrees), and monochrome would be OK. A smartphone is only about 9 by 15 degrees FOV [or ~20 degrees diagonally when a phone is held at a typical distance].
  • 52:10 Audience Question: Why aren’t more companies going after OSHA (safety) certification?

Small FOV Optical AR Discussion with Thad Starner

As stated in the outline above, Thad Starner arrived late and missed my discussion of smaller FOVs that mentioned Thad, as I learned after the panel. Thad, who has been continuously wearing AR glasses and researching them since the mid-1990s, brings an interesting perspective. Since I first saw and met him in 2019, he has strongly advocated for AR headsets having a smaller FOV.

Thad also states that the AR headset should have a monocular (single-eye) display positioned 8 to 20 degrees to the ear side of the user’s straight-ahead vision. He also suggests that monochrome is fine for most purposes. Thad stated that his team will soon publish papers backing up these contentions.
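
To put the smartphone comparison that comes up later in the panel (~9 by 15-16 degrees) in concrete terms, here is a minimal sketch of the geometry. The phone dimensions and viewing distance are my own assumptions for illustration, not numbers from the panel:

```python
import math

def angular_size_deg(size_m: float, distance_m: float) -> float:
    """Full angle subtended by an object of size `size_m` at `distance_m`."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

# Assumed values: a ~6-inch phone display (~7.0 x 13.5 cm) held ~45 cm away.
width_m, height_m, distance_m = 0.070, 0.135, 0.45

print(f"width:    {angular_size_deg(width_m, distance_m):.1f} deg")   # ~8.9
print(f"height:   {angular_size_deg(height_m, distance_m):.1f} deg")  # ~17.1
print(f"diagonal: {angular_size_deg(math.hypot(width_m, height_m), distance_m):.1f} deg")  # ~19.2
# In the ballpark of the ~9 x 15-16 degree and ~20-degree-diagonal figures
# quoted in the panel; the exact numbers depend on phone size and distance.
```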

In the sections below, I worked from the YouTube transcript and did some light editing to make what was said more readable.

My discussion from earlier in the panel:

11:30 Karl Guttag – I think a lot of the AR or optical see-through gets confabulated with what was going on in VR, because with VR it was cheap and easy to make a wide field of view by sticking a cell phone with some cheap optics in front of your face. You get a wide field of view, and people went crazy about that. I made this point years ago on my blog [2019 article FOV Obsession]. Thad Starner makes this point; he’s one of our Legends at AWE, and I took it to heart many years ago at SPIE AR/VR/MR 2019.

The problem is that as soon as you go beyond about a 30-degree field of view, even projecting forward [with technology advancements], you’re in a helmet, something looking like Magic Leap. And Magic Leap ended up in Nowheresville. [Magic Leap] ended up with 25 to 30% see-through, so it’s not really that good a see-through, and yet it hasn’t got the image quality that you would get from an older display shot right into your eyes. You could get a better image on an Xreal or something like that.

People are confabulating too many different specs, so they want a wide field of view. The problem is that as soon as you say 50 degrees, and then you say, yeah, I need spatial recognition, I want to do SLAM, I want to do this and that, you’ve now spiraled into the helmet. Meta was saying on another panel the other day that they’re looking at about 50 grams [for the Meta Ray-Bans], and my glasses are 23 grams. As soon as you say 50-degree field of view, you’re over 100 grams and heading to the Moon as you add more and more cameras and all this other stuff. So I think that’s one of our bigger problems with AR, really optical AR.

We’re going to see this experiment played out because many companies are working on adding displays to so-called AI audio glasses. We’re going to see if that works, as companies are getting ready to make glasses with a 20- to 30-degree field of view tied into AI and audio features.

Thad Starner’s comments and the follow-up discussion during the Q&A at the end of the panel:

AWE Legend Thad Starner Wearing Vuzix’s Ultralight Glasses – After the Panel

49:46 Hi, my name is Thad Starner. I’m a professor at Georgia Tech. I’m going to make a bold prediction here that the future, at least the first system to sell over a million units, will be a small field of view, monocular, non-line-of-sight display, and monochrome is okay. Now, the reason I say that is, number one, I’ve done different user studies in my lab that we’ll be publishing soon on this subject. The other thing is that our phones, which are the most popular interface out there, are only 9 degrees by 16 degrees field of view. Putting something outside of the line of sight means that it doesn’t interrupt you while you’re crossing the street or driving or flying a plane, right? We know these numbers: between 8 and 20 degrees towards the ear and plus or minus 8 degrees. I’m looking at Karl [Guttag] here so he can digest all these things.

Karl – I wrote a whole article about it [FOV Obsession]

Thad – And not having a pixel in line of sight, so now feel free to pick me apart and disagree with me.

Jeri – I want to know a price point.

Thad – I think the first market will be captioning for the hard of hearing, not for the deaf. Also, possibly transcription, not translation. At that price point, you’re talking about making the equivalent of reading glasses for people instead of hearing aids. There’s a lot of pushback against hearing aids, but people tend to accept reading glasses, so I’d say you’re probably in the $200 to $300 range.

Ed – I think your prediction is spot on, minus the color green. That’s the only thing I think is not going to fly.

Thad – I said monochrome is okay.

Ed – I think the monocular display is going to be an entry-level product. I think you will see products that fit that category, with roughly that field of view and roughly that offset angle [not in the center of view], in the beginning. I agree with that, but I think that’s only the first step. I think you will see a lot of products after that that do a lot more than monocular, monochrome, offset displays, moving to larger fields of view and binocular displays. I think that will happen pretty quickly.

Adi – It does feel like somebody tries to do that every 18 months, though. Intel tried to make a pair of glasses that did that, and it’s a little bit of what North did. I guess it’s just a matter of throwing the idea at the wall until it takes, because I think it’s a good one.

I was a little taken aback to have Thad call me out as if I had disagreed with him when I had made the point about the advantages of a smaller FOV earlier. Only after the presentation did I find out that he had arrived late. I’m not sure what comment I made that made Thad think I was advocating for a larger FOV in AR glasses.

I want to add that there can be big differences between what consumers and experts will accept in a product. I’m reminded of a story I read in the early 1980s, during a big debate between very high-resolution monochrome and lower-resolution color (back then, you could only have one or the other with CRTs), in which the head of IBM’s monitor division said, “Color is the least necessary and most desired feature in a monitor.” All the research suggested that resolution was more important for the tasks people did on a computer at the time, but people still insisted on color monitors. Another example is the 1985 New Coke fiasco, in which Coke’s taste studies proved that people liked New Coke better, but it still failed as a product.

In my experience, a big factor is whether the person is being trained to use the device for enterprise or military use versus buying it for their own enjoyment. The military has used monochrome displays on devices, including night vision and heads-up displays, for decades. I like to point out that the requirements can change depending on whether the user “is paid to use versus is paying to use” the device. Enterprises and the military care about whether the product gets the job done, and they pay someone to use the device. The consumer has different criteria. I will also agree that there are cases where the user is motivated to be trained, such as Thad’s hard-of-hearing example.

Conclusion on Small FOV Optical AR

First, I agree with Thad’s comments about the smaller FOV and have stated such before. There are also cases outside of enterprise and industrial use where the user is motivated to be trained, such as Thad’s hard-of-hearing example. But while I can’t disagree with Thad or his studies that show having a monocular monochrome image located outside the line of sight is technically better, I think consumers will have a tougher time accepting a monocular monochrome display. What you can train someone to use differs from what they would buy for themselves.

Thad makes a good point that having a biocular display directly in the line of sight can be problematic and even dangerous. At the same time, untrained people don’t like monocular displays outside the line of sight. It becomes (as Ed Tang said in the panel) a point of high friction to adoption.

Based on the many designs I have seen for AR glasses, we will see this all played out. Multiple companies are developing optical see-through AR glasses in a glasses form factor with monocular green MicroLEDs, color X-cube-based MicroLEDs, and LCOS-based displays using waveguide optics (both diffractive and reflective).

Brilliant Labs Frame AR with AI Glasses & a Little More on the Apple Vision Pro

10 May 2024 at 04:29

Introduction

A notice in my LinkedIn feed mentioned that Brilliant Labs has started shipping its new Frame AR glasses. I briefly met with Brilliant Labs CEO Bobak Tavangar at AWE 2023 (right) and got a short demonstration of its “Monocle” prototype. So, I investigated what Brilliant Labs is doing with its new “Frame.”

This started as a very short article, but as I put it together, I thought it would be an interesting example of making design decisions and trade-offs. So it became longer. Looking at the Frames more closely, I found issues that concerned me. I don’t mean to pick on Brilliant Labs here. Any hardware device like the Frames is a massive effort, and they seem genuinely concerned about their customers; I am only pointing out the complexities of supporting AI with AR for a wide audience.

While looking at how the Frame glasses work, I came across some information related to the Apple Vision Pro’s brightness (in nits), discussed last time in Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall. In the same way that the Apple Vision Pro’s brightness is being misstated as “5,000 nits,” the Brilliant Labs Frame’s brightness has been misreported as 3,000 nits. In both cases, the nits are the “potential” output of the display device and not the nits “to the eye” after the optics.

I’m also repeating the announcement that I will be at SID’s DisplayWeek next week and AWE next month. If you want to meet, please email meet@kgontech.com.

DisplayWeek (next week) and AWE (next month)

I will be at SID DisplayWeek in May and AWE in June. If you want to meet with me at either event, please email meet@kgontech.com. I usually spend most of my time on the exhibition floor where I can see the technology.


AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara, and it is about one month later than last year. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

At AWE, I will be on the PANEL: Current State and Future Direction of AR Glasses on Wednesday, June 19th, from 11:30 AM to 12:25 PM.

There is an AWE speaker discount code – SPKR24D – which provides a 20% discount and can be combined with Early Bird pricing (which ends May 9th, 2024 – today, as I post this). You can register for AWE here.

Brilliant Labs Monocle & Frame “Simplistic” Optical Designs

Brilliant Labs’ Monocle and Frame use the same basic optical architecture, but it is better hidden in the Frame design. I will start with the Monocle, as it is easier to see the elements and the light path. I was a little surprised that both designs use a very simplistic, non-polarized 50/50 beam splitter, with its drawbacks.

Below (left) is a picture of the Monocle with the light path (in green). The Monocle (and Frame) both use a non-polarizing 50/50 beamsplitter. The splitter projects 50% of the display’s light forward and 50% downward to the (mostly) spherical mirror, which magnifies the image and moves the apparent focus. After reflecting from the mirror, the light is split in half again, and ~25% of the light goes to the eye. The front-projected image will be a mirrored, unmagnified view of the display that will be fairly bright. Front projection, or “eye glow,” is generally considered undesirable in social situations and is something most companies try to reduce or eliminate in their optical designs.
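
As a sanity check on that light budget, here is a minimal sketch of the arithmetic (my own simplification, ignoring coating, lens, and absorption losses):

```python
# Non-polarized 50/50 birdbath, as described above: the display's light meets
# the beamsplitter twice -- once on the way to the curved mirror and once on
# the way back to the eye -- and half is lost at each encounter.
display_nits = 3000.0                     # reported Sony Micro-OLED brightness

toward_mirror = display_nits * 0.50       # half continues to the curved mirror
front_projected = display_nits * 0.50     # other half exits forward ("eye glow")
to_eye = toward_mirror * 0.50             # split again on the return pass

print(f"to the eye:      {to_eye:.0f} nits max")          # 750
print(f"front-projected: {front_projected:.0f} nits max")  # 1500 (mirrored image)
```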

The middle picture above is one I took of the Monocle from the outside; you can see the light from the beam splitter projecting forward. Figures 5A and 6 (above right) from Brilliant Labs’ patent application illustrate the construction of the optics. The Monocle is made with two solid optical parts, with the bottom part forming part of the beam splitter and the bottom surface shaped and mirror-coated to form the curved mirror. An issue with the 2-piece Monocle construction is that the beam splitter and mirror are below eye level, which requires the user to look down to see the image or to position the whole device higher, which results in the user looking through the mirror.

The Frame optics are functionally identical, but the size and spacing differ. The optics are formed with three parts, which enables Brilliant to position the beam splitter and mirror nearer the center of the user’s line of sight. But as Brilliant Labs’ documentation shows (right), the new Frame glasses still have the virtual (apparent) image below the line of sight.

Having the image below the line of sight reduces the distortion and artifacts of the real world seen through the beam splitter when looking forward, but it does not eliminate all issues. The top seam of the beam splitter will likely be visible as an out-of-focus line.

The image below shows part of the construction process from a Brilliant Labs YouTube video. Note that the two parts that form the beamsplitter with its 50/50 semi-mirror coating have already been assembled to form the “Top.”

The picture above left, taken by Forbes’ author Ben Sin, shows a Frame prototype from his article Frame Is The Most ‘Normal’ Looking AI Glasses I’ve Worn Yet. In this picture, the 50/50 beam splitter is evident.

Two Types of Birdbath

As discussed in Nreal Teardown: Part 1, Clones and Birdbath Basics and its Appendix: Second Type of Birdbath, there are two types of “birdbaths” used in AR. The birdbath comprises a curved mirror (or semi-mirror) and a beamsplitter; it gets its name from the concave mirror that the light reflects out of, which resembles a birdbath. The beamsplitter can be polarized or unpolarized (more on this later). Birdbath elements are often buried in the design, such as in the Lumus optical design (below left) with its curved mirror and beam splitter.

From 2023 AR/VR/MR Lumus Paper – A “birdbath” is one element of the optics

Many AR glasses today use the birdbath to change the focus and act as the combiner. The most common of these designs is where the user looks through a 50/50 birdbath mirror to see the real world (see the Nreal/Xreal example below right). In this design, a polarized beam splitter is usually used with a quarter waveplate to “switch” the polarization after the reflection from the curved semi-mirror, causing the light to go through the beam splitter on its second pass (see Nreal Teardown: Part 1, Clones and Birdbath Basics for a more detailed explanation). This design is what I refer to as a “look-through-the-mirror” type of birdbath.

Brilliant Labs uses a “look-through-the-beamsplitter” type of birdbath. Google Glass is perhaps the most famous product with this birdbath type (below left). This birdbath type has also appeared in Samsung patents that were much discussed in the electronic trade press in 2019 (see my 2019 Samsung AR Design Patent—What’s Inside).

LCOS maker RaonTech started showing a look-through-the-beamsplitter reference design in 2018 (below right). The various segments of its optics are labeled below. This design uses a polarizing beam splitter and a quarter waveplate.

Brilliant Labs’ Thin Beam Splitter Causes View Issues

If you look at the RaonTech or Google Glass optics, you will see that the beam splitter is the full height of the optics. However, in the Frame and Monocle designs (right), the top and bottom beam splitter seams, the 50/50 mirror coating, and the curved mirror are in the middle of the optics and will be visible to the user as out-of-focus blurs.

Pros and Cons of Look-Through-Mirror versus Look-Through-Beamsplitter

The look-through-mirror birdbaths typically use a thin flat/plate beam splitter, and the curved semi-mirror is also thin and “encased in air.” This results in them being relatively light and inexpensive. They also don’t have to deal with the “birefringence” (polarization-changing) issues associated with thick optical materials (particularly plastic). The big disadvantage of the look-through-mirror approach is that to see the real world, the user must look through both the beamsplitter and the 50/50 mirror; thus, the real world is dimmed by at least 75%.

The look-through-beamsplitter designs encase the entire optical path in either glass or plastic, with multiple glued-together surfaces that are coated or laminated with films. The need to encase the design in a solid means the designs tend to be thicker and more expensive. Worse yet, typical injection-molded plastics are birefringent and can’t be used with polarized optics (beamsplitters and quarter waveplates). Either heavy glass or higher-cost resin-molded plastics must be used with polarized elements. Supporting a wider FOV becomes increasingly difficult because a linear change in FOV results in a cubic increase in the volume of material (either plastic or glass) and, thus, the weight (see the sketch below). Bigger optics are also more expensive to make. There are also optical problems when looking through very thick solid optics. You can see in the RaonTech design above how thick the optics get to support a ~50-degree FOV. This approach “only” requires the user to look through the beam splitter, so the view of the real world is dimmed by 50% (twice as much light gets through as with the look-through-mirror method).
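
As a rough, first-order illustration of that cubic scaling (my own arithmetic, treating the optic’s linear dimensions as growing with the tangent of the half-FOV):

```python
import math

def volume_scale(fov_from_deg: float, fov_to_deg: float) -> float:
    """Approximate volume growth of a solid optic when the FOV increases."""
    linear = math.tan(math.radians(fov_to_deg / 2.0)) / math.tan(math.radians(fov_from_deg / 2.0))
    return linear ** 3

print(f"20 -> 30 degrees: ~{volume_scale(20, 30):.1f}x the material volume")  # ~3.5x
print(f"30 -> 50 degrees: ~{volume_scale(30, 50):.1f}x the material volume")  # ~5.3x
```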

Pros and Cons of Polarized Beam Splitter Birdbaths

Most companies with look-through-mirror and look-through-beamsplitter designs, but not Brilliant Labs, have gone with polarizing beam splitters and then use quarter waveplates to “switch” the polarization when the light reflects off the mirror. Either method requires the display’s light to make one reflective and one transmissive pass via the beam splitter. With a non-polarized 50/50 beam splitter, this means multiplicative 50% losses, or only 25% of the light getting through. With a polarized beam splitter, once the light is polarized (a 50% loss), proper use of quarter waveplates means there are no further significant losses at the beamsplitter.
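
Here is the same comparison as arithmetic (idealized; real coatings and surfaces add further losses):

```python
# Non-polarized 50/50 splitter: the ~50% losses multiply on the two passes.
non_polarized_efficiency = 0.50 * 0.50     # = 25% of the display's light

# Polarized splitter + quarter waveplate: ~50% is lost polarizing unpolarized
# display light once; the waveplate then "switches" polarization so the
# reflective and transmissive passes are nearly lossless.
polarized_efficiency = 0.50 * 1.0 * 1.0    # ~= 50%

print(f"non-polarized birdbath: {non_polarized_efficiency:.0%}")  # 25%
print(f"polarized birdbath:     {polarized_efficiency:.0%}")      # 50%
```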

Another advantage of the polarized optics approach is that front projection can be mostly eliminated (there will be only a little due to scatter). With the look-through-mirror method, this can be accomplished (as discussed in Nreal Teardown: Part 1, Clones and Birdbath Basics) with a second quarter waveplate and a front polarizer. With the look-through-beamsplitter method, a polarizer before the beamsplitter will block the light that would otherwise project forward off the polarized beamsplitter.

As mentioned earlier, using polarized optics becomes much more difficult with the thicker solid optics associated with the look-through-beamsplitter method.

Brilliant Labs Frame Design Decision Options

It seems that at every turn in the decision process for the Frame and Monocle optics, Brilliant Labs chose the simplest and most economical design possible. By not using polarized optics, they gave up brightness and will have significant front projection. Still, they can use much less expensive injection-molded plastic optics that do not require polarizers and quarter waveplates. They avoided using more expensive waveguides, which would be thinner but require LCOS or MicroLED (inorganic LED) projection engines, which may be heavier and larger. However, the latest LCOS and MicroLED engines are getting to be pretty small and light, particularly for a >30-degree FOV (see DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)).

Frame’s Brightness to the Eye – Likely <25% of 3,000 Nits – Same Problem as Apple Vision Pro Reporting

As discussed in the last article on the Apple Vision Pro (AVP), in the Appendix: Rumor Mill’s 5,000 Nits Apple Vision Pro, reporters/authors constantly make erroneous comparisons between the “display-out nits” of one device and the nits to the eye of another. Also, as stated last time, the companies appear to want this confusion: they avoid specifying the nits to the eye because they benefit when reporters and others use display-device values.

I could not find an official Brilliant Labs value anywhere, but the company seems to have told reporters that “the display is 3,000 nits,” which may not be a lie but is misleading. Most articles dutifully give the “display number” but fail to say that these are display-device nits and not what the user will see, leaving readers to make the mistake, while other reporters make the error themselves.

Digital Trends:

The display on Frame is monocular, meaning the text and graphics are displayed over the right eye only. It’s fairly bright (3,000 nits), though, so readability should be good even outdoors in sunlit areas.

Wearable:

As with the Brilliant Labs Monocle – the clip-on, open-source device that came before Frame – information is displayed in just one eye, with overlays being pumped out at around 3,000 nits brightness.

Android Central, in These AI glasses are being backed by the Pokemon Go CEO, at least made it clear that these were display-device numbers, but I still think most readers wouldn’t know what to do with the number. They added the tidbit that the panels are made by Sony, and they discussed pulse width modulation (also known as duty cycle). Interestingly, they talk about a short on-time duty cycle causing problems for people sensitive to flicker. In contrast, VR game fans favor a very short on-time duty cycle (what Brad Lynch of SadlyItsBradly refers to as low persistence) to reduce blurring.

androidcentral’s These AI glasses are being backed by the Pokemon Go CEO

A 0.23-inch Sony MicroOLED display can be found inside one of the lenses, emitting 3,000 nits of brightness. Brilliant Labs tells me it doesn’t use PWM dimming on the display, either, meaning PWM-sensitive folks should have no trouble using it.

Below is a summary of Sony OLED microdisplays aimed at the AR and VR market. On it, the 0.23-type device is listed with a maximum luminance of 3,000 nits. However, from the earlier analysis, we know that at most 25% of the light can get through the Brilliant Labs Frame’s birdbath optics, or at most 750 nits (likely less due to other optical losses). This number assumes that the device is driven at full brightness and that Brilliant Labs is not buying derated devices at a lower price.

I can’t blame Brilliant Labs because almost every company does the same in terms of hiding the ball on to-the-eye brightness. Only companies with comparatively high nits-to-the-eye values (such as Lumus) publish this spec.

Sony Specifications related to the Apple Vision Pro

The Sony specifications list a 3.5K by 4K device. The common industry understanding is that Apple designed a custom backplane for the AVP but then used Sony’s OLED process. Notice the spec of 1,000 cd/m2 (candelas per meter squared = nits) at a 20% duty ratio. While favorable for VR gamers wanting less motion blur, the low on-time duty cycle is also a lifetime issue: the display device probably can’t handle the heat from being driven for a high percentage of the time.

It would be reasonable to assume that Apple is similarly restricted to about a 20% on-time duty cycle. As I reported last time in the Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall, I have measured the on-time duty cycle of the AVP to be about 18.4%, close to Sony’s 20% spec for its own device.

The 5,000 nits cited by MIT Tech Review are the raw display nits before the optics, whereas the nits for the MQ2 were those going to the eye. The AVP’s (and all other) pancake optics transmit about 11% (or less) of the light from an OLED in the center of the image. With pancake optics, there is the polarization of the OLED light (>50% loss), plus a transmissive pass and a reflective pass through a 50/50 mirror, which starts the design at no more than 12.5% (50% cubed) before considering all the other losses from the optics. Then, there is the on-time duty cycle of the AVP, which I have measured to be about 18.4%. VR devices want the on-time duty cycle to be low to reduce motion blur with rapid head motion and 3-D games. The MQ3 has only a 10.3% on-time duty cycle (shorter duty cycles are easier with LED-illuminated LCDs). So, while the AVP display devices likely can emit about 5,000 nits, the nits reaching the eye are approximately 5,000 nits × 11% × 18.4% ≈ 100 nits.
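
Spelled out as a quick calculation with the numbers above:

```python
display_peak_nits = 5000.0    # raw display-device nits cited for the AVP
pancake_transmission = 0.11   # ~11% through the pancake optics (center)
on_time_duty_cycle = 0.184    # my measured ~18.4% on-time duty cycle

to_eye_nits = display_peak_nits * pancake_transmission * on_time_duty_cycle
print(f"approximate nits to the eye: {to_eye_nits:.0f}")  # ~101, i.e., ~100 nits
```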

View Into the Frame Glasses

I don’t want to say that Brilliant Labs is doing anything wrong, or that other companies don’t often do the same. Companies often take pictures and videos of new products using non-functional prototypes because the working versions aren’t ready when shooting or because they look better on camera. Still, I want to point out something I noticed in the pictures of CEO Bobak Tavangar (right) that were published in many of the articles on the Frame glasses: I didn’t see the curved mirror or the 50/50 beam splitter.

In a high-resolution version of the picture, I could see the split in the optics (below left) but not the darkened rectangle of the 50/50 mirror. So far, I have found only one picture of someone wearing the Frame glasses, from Bobak Tavangar’s post on X. It is of a person wearing what appears to be a functional Frame in a clear prototype body (below right). In the dotted-line box, you can see the dark rectangle of the 50/50 mirror and a glint from the bottom curved mirror.

I don’t think Brilliant Labs is trying to hide anything, as I can find several pictures of what appear to be functional Frames, such as the picture from another Tavangar post on X showing trays full of Frame devices being produced (right) or the Forbes picture (earlier in the optical section).

What was I hoping to show?

I’m trying to show what the Frame looks like when worn, to get an idea of the social impact of wearing the glasses. I was looking for a video of someone wearing them with the display turned on, but unfortunately, none have surfaced. From the design analysis above, I know they will front-project a small but fairly bright mirror image of the display off the 50/50 mirror, but I have not found an image showing the working device from the outside looking in.

Exploded View of the Frame Glasses

The figure below is taken from Brilliant Labs’ online manual for the Frame glasses (I edited it to reduce space and inverted the image to make it easier to view). By AR glasses standards, the Frame design is about as simple as possible. The choice of two nose bridge inserts is not shown in the figure.

There is only one size of glasses, which Brilliant Labs described in their AMA as being between a “medium and large” type frame. They say that the temples are flexible to accommodate many head widths. Because the Frames are monocular, IPD is not the problem it would be with a biocular headset.

AddOptics is making custom prescription lenses for the Frames glasses

Brilliant Labs is partnering with AddOptics to make prescription lenses that can be ‘Precision Bonded’ to Frames using a unique optical lens casting process. For more on AddOptics, see CES 2023 (Part 3) – AddOptics Custom Optics and my short follow-up in Mixed Reality at CES & AR/VR/MR 2024 (Part 2 Mostly Optics).

Bonding to the Frames will make for a cleaner and more compact solution than the more common insert solution, but it will likely be permanent and thus a problem for people whose prescriptions change. In their YouTube AMA, Brilliant Labs said they are working with AddOptics to increase the range of prescription values and support for astigmatism.

They didn’t say anything about bifocal or progressive lens support, which is even more complicated (and may require post-mold grinding). As the virtual image is below the centerline of vision, it would typically be where bifocal and progressive lenses would be designed for reading distance (near vision). In contrast, most AR and VR glasses aim to put the virtual image at 2 meters, considered “far vision.”

The Frame’s basic specs

Below, I have collected the basic specs on the Frame glasses and added my estimate for the nits to the eye. Also shown below is their somewhat comical charging adapter (“Mister Charger”). None of these specs are out of the ordinary and are generally at the low end for the display and camera.

  • Monocular 640×400 resolution OLED Microdisplay
  • ~750 nits to the eye (based on reports of a 3,000-nit Sony Micro-OLED display device)
    • (90% on-time duty cycle)
  • 20-Degree FOV
  • Weight ~40 grams
  • 1280×720 camera
  • Microphone
  • 6 axis IMU
  • Battery 222mAh (plus 149mAh top-up from the charging adapter)
    • (80mA typical power consumption when operating; 0.580mA on standby)
  • CPU nRF52840 Cortex M4F (Nordic ARM)
  • Bluetooth 5.3
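
From the battery and power numbers above, a rough runtime estimate (my arithmetic; actual runtime will vary with display on-time, radio use, and battery derating):

```python
battery_mah = 222.0          # built-in battery
adapter_topup_mah = 149.0    # extra charge from the "Mister Charger" adapter
operating_ma = 80.0          # typical operating current from the spec list

print(f"glasses alone:       {battery_mah / operating_ma:.1f} hours")                       # ~2.8
print(f"with adapter top-up: {(battery_mah + adapter_topup_mah) / operating_ma:.1f} hours")  # ~4.6
```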

Everything in AR Today is “AI”

Brilliant Labs is marketing the Frames as “AI glasses.” The “AI” comes from Brilliant Labs’ Noa ChatGPT client application running on a smartphone. Brilliant Labs says the hardware is “open source” and can be used by other companies’ applications.

I’m assuming the “AI” primarily runs on the Noa cell phone application, which then connects to the cloud for the heavy-lifting AI. According to a Brilliant Labs video, while on the Monocle the CPU only controls the display and peripherals, they plan to move some processing onto the Frame’s more capable CPU. Like other “AI” wearables, I expect simple questions will get immediate responses while complex questions will wait on the cloud.

Conclusions

To be fair, designing glasses and wearable AR products for the mass market is difficult. I didn’t intend to pick on Brilliant Labs’ Frames; instead, I am using them as an example.

With a monocular, 20-degree FOV display below the center of the person’s view, the Frames are a “data snacking” type of AR device. They will be competing with products like the Humane AI projector (which is a joke — see: Humane AI – Pico Laser Projection – $230M AI Twist on an Old Scam), the Rabbit R1, Meta’s (display-less) Ray-Ban Wayfarer, other “AI” audio glasses, and the many AR-AI glasses similar to the Frame that are in development.

This blog normally concentrates on display and optics, and on this score, the Frame’s optics are a “minimal effort” to support low cost and weight. As such, they have a lot of problems, including:

  • Small 20-degree FOV that is set below the eyes and not centered (unless you are lucky with the right IPD)
  • Due to the way the 50/50 beam splitter cuts through the optics, it will have a visible seam. I don’t think this will be pleasant to look through when the display is off (but I have not tried them yet). You could argue that you only put them on “when you need them,” but that negates most use cases.
  • The support for vision correction appears to lock the glasses to a single (current) prescription.
  • Regardless of flexibility, the single-size frame will make the glasses unwearable for many people.
  • The brightness to the eye, probably less than 750 nits, is not enough for general outdoor use in daylight. It might be marginal if combined with clip-on sunglasses or if used in the shade.

As a consumer, I hate the charger adapter concept. Why they couldn’t just put a USB-C connector on the glasses is beyond me, and it is a friction point for every user. Users typically have dozens of USB-C power cables today, but your device is dead if you forget or lose the adapter. Since these are supposed to be prescription glasses, the idea of needing to take them off to charge them is also problematic.

While I can see the future use model for AI prescription glasses, I think a display, even one with a small FOV, will add significant value. I think Brilliant Labs’ Frames are for early adopters who will accept many faults and difficulties. At least they are reasonably priced at $349 by today’s standards, and they don’t require a subscription for basic services, as long as there are not too many complex AI queries requiring the cloud.

Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)

20 April 2024 at 14:59

Update 2/21/24: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.

Introduction

In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show and I recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections, with a companion blog article with added information.

Outline of the Video and Additional Information

The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.

00:10 Lighting Silicon (Formerly Kopin Micro-OLED)

Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “lift-off” LCD process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a high-resolution ferroelectric (reflective) LCOS maker. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for their Shiftall headset (see also: Pancake Optics Kopin/Panasonic).

At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember, this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the displays, singulating them, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K-resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm fabs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, which will enable Lighting Silicon to offer ~4K devices.

Related info on Kopin’s history in microdisplays and micro-OLEDs:

02:55 RaonTech

RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs and prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1 & 2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR conference. I first reported on RaonTech at CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch, field sequential color device with “embedded spatial interpolation done by the pixel circuit itself,” so (as I understand it) the 4K image is generated from 2K data sent to the display and interpolated by the pixel circuits.

In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.

04:01 May Display (LCOS)

May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7,680 x 4,320) device with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.

May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).

May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.

04:16 Kopin’s Forth Dimension Displays (LCOS)

As discussed earlier with Lighting Silicon, Kopin acquired ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as MicroPix in 1988 as part of CRL-Opto, was renamed CRLO in 2004, and finally became Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.

I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their own backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display to be their first device, which they made for many years.

FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.

05:34 Texas Instruments (TI) DLP®

Around 2015, DLP and LCOS displays seemed to be used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, MicroLEDs in new designs. Among the reasons companies gave for switching from DLP to LCOS were pixel size (and thus a smaller device for a given resolution), lower power consumption of the display plus ASIC, more choice in device resolutions and form factors, and cost.

While DLP does not require polarized light, which is a significant efficiency advantage in room/theater projector applications that project hundreds or thousands of lumens, the power of the display device and control logic/ASICs are much more of a factor in near-eye displays that require less than 1 to at most a few lumens since the light is directly aimed into the eye rather than illuminating the whole room. Additionally, many near-eye optical designs employ one or more reflective optics requiring polarized light.

Another issue with DLP is drive algorithm control. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm, which was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because of the need to control its unique light field driving method directly. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.

Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.

Color Breakup On Hololens 1 using a low color sequential field rate

Added 2/21/24: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.

I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field sequence rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s much higher new 240Hz and 480Hz frame rates, the DLP would have 720 or 1,440 color fields per second. Some older LCOS ran as low as 60 frames/180 fields per second (I think this was used on Hololens 1 – right), and many, if not most, LCOS devices today use 120 frames/360 fields per second. A few LCOS devices I have seen can go as high as 180 frames/540 fields per second. So, the newer DLP devices would have an advantage in that area.
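
The field-rate arithmetic behind those numbers (nominally three color fields per frame):

```python
# Frames per second -> color fields per second for field sequential color.
for name, frames in [("older LCOS", 60), ("typical LCOS today", 120),
                     ("fastest LCOS I have seen", 180),
                     ("new DLP", 240), ("new DLP, high rate", 480)]:
    print(f"{name:26s}: {frames * 3:5d} color fields/sec")
```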

The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).

My Background at Texas Instruments:

I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.

I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (second generation), the TMS320C80 (the first image processor with four DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.

After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But then, when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector because of its advantages in that application. I like to think of myself as product-focused: I want to use whichever technology works best for the given application, and I see pros and cons in all the display technologies.

07:25 VueReal MicroLED

VueReal is a Canada-based startup developing MicroLEDs. Their initial focus was on making single-color-per-device microdisplays (below left).

However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either a passive (wire) or active (e.g., thin-film transistors on glass or plastic) backplane. They claim extremely high yields with this process. With it, they can make full-color rectangular displays (above right), transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.

I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, provided he kept his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.

08:26 MojoVision MicroLED

MojoVision is pivoting from a “contact lens display company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s display group. MojoVision started saying (in private) that it was putting more emphasis on being a MicroLED component company around 2021. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.

To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Some third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and explained why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun thing to watch).

So, in my book, MojoVision starts with a major credibility problem. Still, they are now under new leadership and focusing on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” MojoVision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum-dot conversion to get green and red.

I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. Playnitride demonstrated quantum dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).

Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.

Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:

  • Blue LEDs are the most efficient color
  • You only have to make a single type of LED crystal structure in a single layer.
  • It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay-size MicroLEDs.
  • Quantum-dot-converted green and red (from blue) are much more efficient than native green and red LEDs.
  • Native red LEDs are inefficient in GaN crystalline structures that are moderately compatible with native green and blue LEDs.
  • Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking light from lower layers causes efficiency issues.
  • Single emitters with multiple-color LEDs (e.g., see my article on Porotech) have efficiency issues, particularly in red, which are further exacerbated by the need to time-sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.

Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):

  • MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, the light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
    • MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers
  • There must be some structure/isolation to prevent the blue light from adjacent cells from activating the quantum dots of a given cell, which would desaturate the colors. Eliminating color crosstalk/desaturation is another advantage of having thinner quantum dot layers.
  • The lifetime and potential for color shifting of quantum dots, particularly if they are driven hard. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum-dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs.
  • While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch, assuming MojoVision keeps using two reds to get enough red brightness (see the sketch below). While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron pitch OLED, a smaller pixel, such as with a single-emitter-with-multiple-colors approach (e.g., Porotech), would be better (more efficient due to étendue; see: MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)) for semi-collimating the light with microlenses as needed by waveguides.
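
The pixel-pitch arithmetic from the last bullet, assuming (my assumption for illustration) that the two red, one green, and one blue subpixels sit in a 2x2 arrangement:

```python
led_pitch_um = 1.87                # MojoVision's reported LED (subpixel) pitch
subpixels_per_side = 2             # 2x2 arrangement: R, R, G, B
pixel_pitch_um = led_pitch_um * subpixels_per_side

print(f"full-color pixel pitch: {pixel_pitch_um:.2f} microns")         # 3.74
print(f"Apple Vision Pro OLED:  ~{7.5 / pixel_pitch_um:.1f}x larger")  # ~2.0x
```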

10:20 Porotech MicroLED

I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.

While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel since only one LED per pixel is required. It also requires only two electrical connections (power and ground) to the LED of each pixel.

However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.

Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.

12:55 Brilliance Color Laser Combiner

Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.

While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.

14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)

Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter if they shoot from the temple onto a hologram directly into the eye like North Focals or use a waveguide like TriLite; the fatal weak link is using an LBS display device.

It has reached the point that when I see a device with an LBS display, I’m pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by confabulating “scan lines” with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled onto a distorted and non-uniform scanning process.

Like Brilliance above, TriLite’s core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they are at slightly different angles. This causes the various colors to diverge by different amounts in the scanning process. TriLite uses its “Trajectory Control Module” (TCM) to compute how to resample the image to align the red, green, and blue.

TriLite then compounds its problems with LBS by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with Lissajous scanning, which is also used by Oqmented (TriLite uses Infineon’s scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.
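
To illustrate why Lissajous scan lines are non-uniform, here is a small sketch (my own illustration with made-up mirror frequencies, not TriLite’s actual parameters). The slow axis of a Lissajous scan is sinusoidal rather than linear, so successive fast-axis sweeps crowd together near the edges of the image:

```python
import numpy as np

fx, fy = 22_000.0, 1_000.0      # hypothetical fast/slow mirror frequencies (Hz)
sweep_dt = 1.0 / (2.0 * fx)     # one horizontal sweep per half fast-axis period
n_sweeps = round((1.0 / (4.0 * fy)) / sweep_dt)  # quarter slow period: center to edge
t = np.arange(n_sweeps) * sweep_dt
line_y = np.sin(2.0 * np.pi * fy * t)            # normalized vertical sweep positions

spacing = np.abs(np.diff(line_y))
print(f"{n_sweeps} sweeps; spacing near center = {spacing[0]:.3f}, "
      f"near edge = {spacing[-1]:.3f}")
# Here the sweep spacing near the center is several times that near the edge,
# so every pixel must be resampled onto curved, unevenly spaced trajectories.
```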

The information and images below have been collected from TriLite’s website.

As far as I have seen, it is a myth that LBS has any advantage in size, cost, or power over LCOS for the same image resolution and FOV. As discussed in part 1, Avegant generated the comparison below, comparing North’s Focals LBS glasses, with a ~12-degree FOV and roughly 320×240 resolution, to Avegant’s 720 x 720, 30-degree, LCOS-based glasses.

Below is a selection (from dozens) of related articles I have written on various LBS display devices:

Next Time

I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason's and my discussion of the Apple Vision Pro.

Mixed Reality at CES & AR/VR/MR 2024 (Part 2 Mostly Optics)

12 April 2024 at 02:48

Introduction

In part 1, I wrote that I planned to cover the optics and display companies from CES and the SPIE AR/VR/MR conferences in 2024 in this article, based on part 2 of the video I made with Jason McDowall. However, as I started filling in extra information on the various companies, the article was getting long, so I broke the optics and displays into two separate articles.

In addition to optics companies, I will also touch on eye tracking with Tobii, which is doing both optics and eye tracking, and with Zinn Labs.

Subscription Options Coming to KGOnTech

Many companies, including other news outlets and individuals, benefit from this blog indirectly through education or directly via the exposure it gives to large and small companies. Many, if not most, MR industry insiders read this blog worldwide based on my conference interactions. I want to keep the main blog free and not filled with advertising while still reporting on large and small companies. To make financial sense of all this and pay some people to help me, I’m in the process of setting up subscription services for companies and planning on (paid) webinars for individuals. If you or your company might be interested, please email subscriptions@kgontech.com.

Outline of the Video and Additional Information

Below is an outline of the second hour of the video, as well as additional comments and links to more information. The times in blue on the left of each subsection below link to the time in the YouTube video discussing a given company.

0:00 Waveguides and Slim Optics

0:03 Schott and Lumus

Schott AG is one of the world’s biggest makers of precision glass. In 2020, Schott entered into a strategic partnership with Lumus, and at AR/VR/MR 2024 and 2023, Lumus was prominently featured in the Schott booth. While Schott also makes the glass for diffractive waveguides, the diffraction gratings are usually left to another company. In the case of the Lumus Reflective waveguides, Schott makes the glass and has developed high-volume waveguide manufacturing processes.

Lumus waveguides consistently have significantly higher optical efficiency (for a given FOV), better color uniformity, better transparency, higher resolution, and less front projection ("eye glow") than any diffractive waveguide. Originally, Lumus had 1-D pupil-expanding waveguides, whereas diffractive waveguides were 2-D pupil-expanding. The 1-D expanding waveguides required a large projection engine in the non-expanding direction, making the projection optics bigger and heavier. In 2021, Lumus first demonstrated their 2-D expanding Maximus prototype waveguides with excellent image quality, 2K by 2K resolution, and a 50° FOV. With 2-D expansion, the projection optics could be much smaller. Lumus has continued to advance its reflective 2-D expanding waveguide technology with the "Z-Lens." Lumus says that variants of this technology could support more than a 70-degree FOV.

Waveguides depend on "total internal reflection" (TIR). For TIR to work, diffractive waveguides and earlier Lumus waveguides require an "air gap" between the waveguide surface and any other surfaces, including the "push-pull" lenses used for moving the waveguide's apparent focus distance and for vision correction. These air gaps can be hard to maintain and are a source of unwanted reflections. The Lumus Z-Lens can be embedded in optics with no air gap (the first waveguide to make this claim) due to the shallower angles of its TIR reflections.
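
To give a rough feel for why embedding usually breaks TIR, below is a minimal sketch computing the critical angle for TIR. The index values are my illustrative assumptions, not Schott or Lumus specifications.

```python
import math

def critical_angle_deg(n_core, n_outside):
    """Incidence angle (from the surface normal) beyond which TIR occurs."""
    return math.degrees(math.asin(n_outside / n_core))

n_glass = 1.8  # assumed high-index waveguide glass (illustrative)
print(critical_angle_deg(n_glass, 1.0))  # against air:          ~33.7 degrees
print(critical_angle_deg(n_glass, 1.5))  # against optical resin: ~56.4 degrees

# Embedding the waveguide in resin instead of air raises the critical angle,
# so only rays at more grazing angles (farther from the surface normal) stay
# guided -- which is why waveguides normally need an air gap, and why the
# Z-Lens's shallower TIR geometry reportedly tolerates embedding.
```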

While Lumus waveguides are better than any diffractive waveguide in almost every image quality and performance metric, their big questions have always revolved around volume manufacturing and cost. Schott thinks that the Lumus waveguides can be manufactured in high volume at a reasonable cost.

Over the last ten years, I have seen significant improvements in almost every aspect of diffractive waveguides from many companies (for example, my articles on DigiLens and Dispelix). Diffractive waveguides are easier to make, less expensive, and much easier to customize. Multiple companies have diffractive waveguide design tools, and there are multiple fabrication companies.

As I point out in the video, many MR applications don't need the highest image quality or resolution; they need "good enough" for the application. Many MR applications only need simple graphics and small amounts of text. Many applications require only limited colors, such as red=bad, green=good, yellow=caution, and white or cyan for everything else, while others can get away with monochrome (say, green-only). For example, many military displays, including night vision, are often monochrome (green or white), and most aviation HUDs are green-only.

I often say there is a difference between being "paid to use" and "paying for" a headset. By this, I mean that someone is paid to use the headset to help them be more effective in their job, whereas a consumer would be paying for the headset.

For more on Lumus’s 2-D expanding waveguides:

For more on Schott and Lumus’s newer Z-Lens at AR/VR/MR 2023:

For more on green-only (MicroLED headsets) and full-color MicroLEDs through diffractive and Lumus reflective waveguides, see:

4:58 Fourier (Metasurface)

Fourier is developing metasurface technology to reflect and redirect light from a projector in the temple area of AR glasses to the eye. If a simple mirror-type coating were placed on the lens, projected light from the temple would bounce off at an angle that would miss the eye.

Multiple companies have previously created Holographic Optical Elements (HOEs) for a similar optical function. Luminit developed the HOE used with North Focals, and TruLife Optics has developed similar elements (both Luminit and TruLife's HOEs are discussed in my AWE 2022 video with Brad Lynch).

Fourier’s metasurface (and HOEs) can act not only as a tilted flat mirror but also as a tilted curved mirror with “optical power” to change magnification and focus. At least in theory (I have not seen it, and Fourier is still in development), the single metasurface would be simpler, compact, and have better optical efficiency than birdbath optics (e.g., Xreal and many others) and lower cost and with much better optical efficiency than waveguides. But while the potential benefits are large, I have yet to see a HOE (or metasurface) with great image quality. Will there, for example, be color uniformity, stray light capture, and front projection (“eye glow”) issues as seen with diffractive waveguides?

Laser beam scanning with direct temple projection, such as North Focals (see below left), uses a hologram embedded in or on the surface of a lens to redirect the light. This has been a common configuration for the lower-resolution, small-FOV, very-small-eyebox Laser Beam Scanning (LBS) glasses shown by many companies, including North, Intel, and Bosch. Alternatively, LCOS, DLP, MicroLED, and laser beam scanning projectors have used waveguides to redirect the light and increase the eyebox size (the eyebox is the range of movement of the eye relative to the glasses within which the whole image can be seen).

Avegant (above right), Lumus, Vuzix, DigiLens, Oppo, and many others have demonstrated waveguides with DLP, LCOS, and MicroLEDs in very small form factors comparable to HOEs and metasurfaces (see DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)). Still, waveguides are much lower in efficiency, so much so that using MicroOLED displays with waveguides is impractical. In contrast, MicroOLED displays are possible with HOEs and Fourier's metalenses. There are also potential differences in how prescription lenses could be supported.

As discussed above, holographic mirrors can also be used to form the equivalent of a curved mirror that is also tilted. The large CREAL3D prototype (below left) shows the two spherical semi-mirrors. CREAL3D plans to replace these physical mirrors with a flat HOE (below right).

Fourier's metalens would perform the same optical function as the HOE. We will have to wait and see the image quality and whether there are significant drawbacks with either HOEs or metalenses. My expectation is that both metalenses and HOEs will have issues similar to those of diffraction gratings.

Some related articles and videos on small form factor optics:

6:23 Morphotonics

Morphotonics has developed methods for making waveguides and similar diffractive structures on large sheets of glass. They can make many small diffractive waveguides at a time or fewer large optical devices. In addition to waveguides, Morphotonics makes the light guide structure for the Leia Lightfield monitor and tablet.

Morphotonics presentation at AR/VR/MR 2023 can be found here: Video of Morphotonics AR/VR/MR 2023 presentation.

From Morphotonics' 2023 AR/VR/MR Presentation

10:33 Cellid (Waveguides)

Cellid is a relatively new entrant in waveguide making, and I have seen their devices for several years. As discussed in the video, Cellid has been continually improving its waveguides. However, at least at present, it still seems to be behind the leading diffractive waveguide companies in terms of color uniformity, FOV, and front projection ("eye glow").

11:47 LetinAR

Several companies are using LetinAR's PinTilt optics in their AR glasses. At CES, JorJin was showing their J8L prototypes in the LetinAR booth. Nimo (as discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies)) was showing their LetinAR-based glasses in their own booth. Sharp featured their LetinAR glasses in their booth but didn't mention they were based on LetinAR optics.

LetinAR’s optics were also used in an AT&T football helmet display application for the deaf (upper left below).

LetinAR originally developed “pin mirror” optics, which I first covered in 2018 (see CES 2018 in the listings below). The pin-mirror technology has evolved into their current “PinTilt” technology.

While LetinAR has several variations of the PinTilt, the “B-Type” (right) is the one I see being used. They use an OLED microdisplay as the display device. The image light from the OLED makes a TIR (total internal reflection) bounce off the outside surface into a collimating/focusing mirror and then back up through a series of pupil-replicating slats. The pupil replication slats enable the eye to move around and support a larger FOV.

As I discussed in the video, the image quality is much better than with the Pin-Mirrors, but gaps can be seen if your eye is not perfectly placed relative to the slats. Additionally, with the display off, the view can be slightly distorted, which can likely be improved in the manufacturing process. LetinAR also let me know that they are working on other improvements.

LetinAR’s PinTilt is much more optically efficient than diffractive or even Lumus-type reflective waveguides, as evidenced by its use of micro-OLEDs rather than much brighter LCOS, DLP, or micro-LEDs. At the same time, they offer a form factor that is close to waveguides.

Some other articles and videos covering LetinAR:

13:57 Tooz

Tooz was originally spun out of Zeiss Group in 2018, but in March 2023, they returned to become part of Zeiss. Zeiss is an optical giant founded in 1846 but is probably most famous to Americans as the company making the inserts for the Apple Vision Pro.

Tooz’s “Curved Waveguide” works differently than diffractive and Lumus-type reflective waveguides, which require the image to be collimated, use many more TIR light bounces, and have pupil replication. Strictly speaking, none of these are”waveguides,” but the diffractive and Lumus-type devices are what most people in the industry call waveguides.

The Tooz device molds optics and a focusing mirror to move the focus of the display device, which currently can be either a Micro-OLED or, more recently, (green only) Micro-LED. The image light then makes a few TIR bounces before hitting a Fresnel semi-mirror, which directs the light toward the user’s eye (above right). The location of the Fresnel semi-mirror, and thus the image, is not centered in the user’s field of view but slightly off to one side. It is made for a monocular (single-eye) display. The FOV is relatively small with 11- and 15-degree designs.

Tooz’s Curved Waveguide is aimed at data snacking. It has a small FOV and a Monocular display off the side. The company emphasizes the integration of prescription optics and the small and lightweight design, which is optically much more efficient than other waveguides.

Tooz jointly announced just before the AR/VR/MR conference that they were working with North Ocean Photonics to develop push-pull optics to go with waveguides. Tooz, in their AR/VR/MR 2024 presentation, discussed how they were trying to be the prescription optics provider for both their curved waveguides and what they call planar waveguides. One of their slides demonstrated the thickness issue with putting a push/pull set of lenses around a flat waveguide. The lenses need to be thicker to “inscribe” the waveguide due to their curvature (below right).

19:08 Oorym

Oorym is a small startup founded by Yaakov Amitai, a founder and former CTO of Lumus. Oorym has a "waveguide" with many more TIR bounces than Tooz's design but many fewer than diffractive and Lumus waveguides. They use a Fresnel light-redirecting element. It does not require collimated light and is much more efficient than other waveguides. They can support more than a 50-degree FOV. It is thicker than diffractive and Lumus waveguides, on the same order of thickness as LetinAR's optics. Oorym is also developing a non-head-mounted Heads-Up Display (HUD) device.

Oorym

21:57 Gixel

Gixel’s technology has to be among the most “different” I have seen in a long time. The concept is to have a MicroLED “bar” display with only a single or a few rows of pixels in one direction and with the full horizontal resolution in the other. The “rows” may have full-color pixels or a series of 3 single-color row arrays. Then, a series of pupil-replicating slats rotate to scan the bar/row image vertically synchronously with a time-sequential change of the row display. In this way, the slats scan row display forms a whole image to the eye (and combines the colors if there are separate displays for each color).

They didn’t have a full working prototype, but they did have the rotating slats working.

My first impression is that it has a Steampunk feel to the design. I can see a lot of issues with the rotating slats, their speed and vibration, the time-sequential display, and a myriad of other potential issues. But still, it wins for the sheer audacity of the approach.

23:42 Meta Research (Time Sequential Fixed Foveated Display) & Varjo

From a 2017 Varjo Article

Meta Research presented the concept of a time-sequential fixed-foveated display using a single set of pancake optics. Pancake optics work by making two passes through some of the refractive and mirror optics, which magnifies the display. In a normal pancake, quarter waveplates change the light's polarization to control the two passes. A (pixel-less) liquid crystal shutter can act as a switchable quarter waveplate so that the display light makes either one or two passes through part of the optics, resulting in two different magnifications. By sequencing the display content with the LC shutter's switching, the eye sees both a larger image with lower angular resolution and a smaller "foveated" image with higher angular resolution.

This achieves with a single display and a single set of optics what Varjo was doing with its "fixed foveated display," which used two display devices, two sets of optics, and a combining beam splitter.
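
A quick worked example shows what the two time-sequenced magnifications buy. The numbers below are purely illustrative and not from Meta's presentation.

```python
# Illustrative numbers only -- not from Meta Research's presentation.
pixels_across = 2000  # display pixels along one axis

wide_fov_deg = 100    # higher-magnification pass: big image, lower angular resolution
inset_fov_deg = 30    # lower-magnification pass: small "foveated" inset, higher angular resolution

print(pixels_across / wide_fov_deg)   # 20 pixels/degree across the full FOV
print(pixels_across / inset_fov_deg)  # ~67 pixels/degree in the central inset
```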

I like to warn people that when a research group from a big company presents a concept like this to all their competitors at a conference like AR/VR/MR, it is definitely NOT what they are doing in a product.

Fixed (and Eye Tracking) Foveated Displays

In 2017, Varjo was focused on its foveated display technology. Their first prototype had a “fixed foveated display,” meaning the central high-resolution region didn’t move. Varjo claimed they would soon have the foveated display tracking the eye, but as far as I know, they never solved the problem.

It turns out that tracking the eye and moving the display is a seemingly impossible problem to solve because of the eye's saccadic movement, even with exceptional eye tracking. As I like to say, "While eye tracking may know where the eye is pointing, you don't know what the eye has seen." Originally, researchers thought that human vision fully blanks during saccadic movement, but later research suggests that vision is only partially suppressed during movement. Combined with the fact that what a human "sees" is basically a composite of multiple eye positions, making a foveated display that tracks the eye is exceedingly difficult, if not impossible. Artifacts tied to eye movement, such as field-sequential color breakup, tend to appear as distracting flashes.
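
A simple calculation shows the scale of the problem. The saccade speed is in line with commonly published figures, but the latency is my assumption for illustration.

```python
# Why eye-tracked (moving) foveated displays are so hard (illustrative figures).
saccade_peak_deg_per_s = 500.0  # large saccades can peak at several hundred deg/s
system_latency_s = 0.010        # assumed 10 ms from eye tracker to photons

error_deg = saccade_peak_deg_per_s * system_latency_s
print(f"gaze error during a saccade: {error_deg:.0f} degrees")  # ~5 degrees

# The high-acuity fovea covers only about 2 degrees, so mid-saccade the
# high-resolution inset can easily end up nowhere near where the eye lands.
```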

It has been seven years since Varjo told me they were close to solving the eye-tracked foveated display. Varjo figured out that about 90% of the benefit of a moving foveated display could be realized with a fixed foveated display near the center of the FOV. They may also have realized that solving the problems with a moving foveated display was more difficult than they thought. Regardless, Varjo has pivoted from being a "foveated display company" to a "high-resolution VR/MR company" aimed primarily at enterprise applications. Pixel sizes and resolutions of display devices have improved to the point where it is now better to use a single higher-resolution display than to combine two displays optically.

Eyeway Vision Foveated Display (and Meta)

In 2021, I visited Eyeway Vision, which also worked on foveated displays using dual laser scanning displays per eye. Eyeway Vision had a fixed foveated display and sophisticated eye tracking, but after an acquisition by Meta fell through, it went bankrupt before solving the moving foveated display.

Eyeway Vision’s founder, Boris Greenburg, has recently joined VoxelSensors, and VoxelSensors is looking at using their technology for eye/gaze tracking and SLAM (see Zinn Labs later)

Foveated Display (ex., Varjo) vs. Foveated Rendering (ex., Apple Vision Pro)

I want to distinguish foveated rendering, where the display is fixed and only the level of detail in the rendering changes based on eye tracking, from a foveated display, where a high-resolution sub-display is inset within a lower-resolution display. Foveated rendering, as in the Apple Vision Pro or Meta Quest Pro, is possible, although today's implementations have problems. However, it may be impossible to make a successful eye-tracked foveated display.
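
Foveated rendering is comparatively simple in principle: the renderer lowers detail with angular distance from the gaze point. Below is a minimal sketch of the idea; it is my own illustration with made-up thresholds, not Apple's or Meta's implementation.

```python
import math

def shading_rate(pixel_xy, gaze_xy, ppd):
    """Pick a render detail level from angular distance to the gaze point.
    Illustrative thresholds only -- real systems tune these carefully."""
    dx = pixel_xy[0] - gaze_xy[0]
    dy = pixel_xy[1] - gaze_xy[1]
    eccentricity_deg = math.hypot(dx, dy) / ppd  # rough angular distance
    if eccentricity_deg < 5:
        return 1   # full resolution near the gaze point
    elif eccentricity_deg < 15:
        return 2   # shade every 2x2 pixel block
    else:
        return 4   # shade every 4x4 block in the periphery

print(shading_rate((960, 540), (1000, 500), ppd=40))  # near gaze -> 1
print(shading_rate((100, 100), (1000, 500), ppd=40))  # periphery -> 4
```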

For more on this blog’s coverage of Foveated Displays, see:

32:05 Magic Leap (Mostly Human Factors)

At AR/VR/MR 2024, Magic Leap gave a presentation that mostly discussed human factors. They discussed some issues they encountered when developing the Magic Leap One, including fitting a headset to a range of human faces (below right). I thought the presentation should have been titled “Why the Apple Vision Pro is having so many problems with fitting.”

In 2016, This Blog Caught Magic Leap’s Misleading Video

In showing Magic Leap's history, they showed a prototype headset that used birdbath optics (above left). Back in 2016, Magic Leap released a video that stated, "Shot directly through Magic Leap technology . . . without the use of special effects or compositing." I noted at the time that this left a lot of legal wiggle room and that it might not be the same "technology" they would use in the final product, and this turned out to be the case. I surmised that the video used OLED technology. It's also clear from the video that it was not shot through a waveguide. It appears likely that the video was shot using an OLED through birdbath optics, not with the waveguide optics and LCOS display that the Magic Leap One eventually used.

In 2019, Magic Leap sued (and lost to) Nreal (now Xreal), which developed an AR headset using birdbath optics and an OLED display. Below are links to the 2016 article analyzing the Magic Leap deceptive video and my 2020 follow-up article:

36:45 NewSight Reality (Not Really “Transparent” MicroLED)

Sorry for being so blunt, but NewSight Reality's "transparent" MicroLED concept does not and will not ever work. The basic concept is to put optics over small arrays of LEDs so that, similar to pupil replication, the person will see an image. It is the same "physics" as MojoVision's contact lens display (which I consider a scam). In fact, NewSight's prototype has nine MojoVision displays on a substrate (below center).

The fundamental problem is that, to get a display of any resolution plus the optics, the "little dots" are so big that, combined with diffraction, they appear as a blurry set of gray dots in a person's vision. Additionally, the pupil-replication effect results in a series of circles within which you can see the image.

38:55 Other Optics and Eye Tracking

The next section is on other optics and eye tracking. Thanks to Tobii being involved in both, they sort of tie this section together.

39:01 AddOptics

AddOptics developed a 3-D-printed optical mold process. It was founded by former Luxexcel employees (Luxexcel was subsequently acquired by Meta in 2022).

I covered AddOptics last year in CES 2023 (Part 3)—AddOptics Custom Optics. The big addition in 2024 was that they showed their ability to make push-pull optics for sandwiching a waveguide, whether or not the waveguide requires an air gap. As far as I am aware, most, if not all, diffractive waveguides require an air gap. The only waveguide I know of that claims not to need an air gap is the newer Lumus reflective waveguide (discussed in a previous article). Still, I have not heard whether AddOptics is working with Lumus or one of Lumus's customers.

Luxexcel had developed a process to directly 3-D print optics without the need for any resurfacing. This means printing very fine layers very precisely, lens by lens. While that allows each lens to be custom fit, it also seems to be an expensive process compared to the way prescription lenses are made today. By making "low-run" 3-D printed molds (something that Luxexcel could also do), AddOptics gets a lower cost per unit and a faster process. It requires keeping a stock of molds, but not a prohibitive number of molds to support most combinations of diopter and cylinder (astigmatism) correction.

42:12 Tobii

Tobii, founded in 2001, has long been known for its eye-tracking technology. Tobii was looking to embed LED illuminators in lenses and was working with Interglass. When Interglass (founded in 2004) went bankrupt in 2020, Tobii hired the key technical team members from Interglass. Meta Materials (not to be confused with Meta, formerly Facebook) acquired the assets of Interglass and is also making a similar technology.

The Interglass/Tobii/Meta-Materials process uses many glass molds to support variations of diopter and cylinder adjustments for prescriptions. The glass molds are injected with UV-cured plastic resin, which, after curing, forms lens blanks/rounds. When molding, the molds can be rotated to set the cylinder angle. The round lens blanks can then be cut by conventional lens fitting equipment.

At 2023’s AR/VR/MR, Tobii demonstrated (left two pictures below) how their lenses were non-birefringent, which is important when working with polarized light-based optics (e.g., Pancake Optics, which Tobii says they can make) and displays (LCDs and LCOS). Tobii has videos on its website that show the lens-making and electronic integrating process (below right).

43:44 Zinn (and VoxelSensors)

Zinn Labs uses a Prophesee event-based camera sensor (Zinn and Prophesee announcement). The Prophesee event camera sensor was jointly developed with Sony. Zinn uses Prophesee’s 320×320 6.3μm pixel BSI (BackSide Illuminated) event-based sensor in a 1/5” optical format.

Event camera pixels work like the human eye in detecting changes rather than the absolute value of each pixel. The pixels are much more complex than a conventional camera sensor, with photodiodes and comparators integrated into each pixel using Sony’s BSI process. Rather than scanning out the pixel value at a frame rate, each pixel reports when it changes significantly (more details can be found in the Prophesee white paper – free, but you have to give an email address). The advantage of the event camera in image recognition is that it tends to filter out/ignore everything that is not changing.
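
As a way to build intuition for event pixels, here is a toy model of the change-detection behavior described above; it is a simplification for illustration, not Prophesee's actual circuit or API.

```python
import math

class EventPixel:
    """Toy model: fire an event when log-intensity changes beyond a threshold."""
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.ref_log_i = None

    def sample(self, intensity):
        log_i = math.log(intensity)
        if self.ref_log_i is None:
            self.ref_log_i = log_i
            return None
        if abs(log_i - self.ref_log_i) > self.threshold:
            polarity = +1 if log_i > self.ref_log_i else -1
            self.ref_log_i = log_i
            return polarity          # event: brighter (+1) or darker (-1)
        return None                  # unchanged pixels stay silent

px = EventPixel()
for i in [100, 101, 140, 140, 90]:
    print(px.sample(i))  # None, None, 1, None, -1
```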

Zinn Labs has developed algorithms that then take the output from the event camera and turn it into where the eye is gazing (for more information, see here).

VoxelSensors (and Zinn Labs)

VoxelSensors has a very different type of event sensor called a “SPAES (Single Photon Active Event Sensor)” that could be used for eye/gaze tracking. Quoting from VoxelSensors:

VoxelSensors leverages its distinctive SPAES (Single Photon Active Event Sensor) technology, allowing the integration of multimodal perception sensors, such as innovative hand and gaze tracking and SLAM, with high precision, low power consumption, and low latency. Fusing these key modalities will enable the development of next-gen XR systems.

As discussed earlier, VoxelSensors also recently hired Eyeway Vision founder Boris Greenberg, who has extensive experience in eye/gaze tracking.

VoxelSensors’s SPAES uses a laser scanner to scan the area of interest in a narrow-band infrared laser (where the Prophesee event camera would use IR LED flood illumination) and then detect the laser scanner’s return to the area of interest. With narrow-band filtering to filter out all but the laser’s wavelength, the SPAES is designed to be extremely sensitive (they claim as little as a single photon) to the laser’s return. Like the Prophesee event camera, the VoxelSensors’s SPAES returns the pixel location when an event occurs.

While the VoxelSensors pixel is more complex than a traditional sensor's, it seems simpler than Prophesee's event camera pixel; on the other hand, VoxelSensors requires scanning lasers rather than LED illumination. Both use event sensors to reduce the computational load. I have no idea at this point which will be better at eye tracking.

With one or more sets of laser scanners and sensors, VoxelSensors can detect in three dimensions, which is obviously useful for SLAM but might also have advantages for eye tracking.

For more on VoxelSensors, see my 2023 CES article: CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate.

44:13 Lumotive (LCOS-Based Laser Scanning for LiDAR)

Lumotive has a technology that uses LCOS devices to scan a laser beam. Today, LiDAR systems use a motor-driven rotating prism or a MEMs mirror to scan a laser beam, resulting in a fixed scanning process. The Lumotive method will let them dynamically adjust and change the scanning pattern.

46:03 GreenLight Optics

I’ve known Green Light Optics since its founding in 2009 and have worked with them to help me with several optical designs over the years. Greenlight can design and manufacture optics and is located in Cincinnati, Ohio. I ran into GreenLight at the Photonics West exhibit following the AR/VR/MR conference. I thought it would be helpful for other companies that might need optical design and manufacturing to mention them.

Quoting Greenlight's website:

"Greenlight Optics is an optical systems engineering and manufacturing company specializing in projection displays, LED and laser illumination, imaging systems, plastic optics, and the integration of optics with electrical and mechanical systems."

Next Time – Display Devices and Test and Measurement Companies

In the next part of this series on CES and AR/VR/MR 2024, I plan to cover display devices and a few test and measurement companies.

Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies)

1 April 2024 at 15:29

Update 4/2/2024: Everysight corrected a comment I made about the size of their eyebox.

Introduction

This blog has covered mixed reality (MR) headsets, displays, and optics at CES since 2017 and SPIE’s AR/VR/MR conference since 2019. Both conferences occur in January each year. With this blog’s worldwide reputation (about half of the readers are from outside the U.S.), many companies want to meet. This year, I met with over 50 companies in just one month. Then Apple released the Apple Vision Pro on Feb. 2nd.

As this blog is a one-person operation, I can't possibly write in detail about all the companies I have met with, yet I want to let people know about them. Last year, in addition to articles on some companies, Brad Lynch of the SadlyItsBradley YouTube channel and I made videos about many companies I met at CES 2023. Then, for AR/VR/MR 2023, I wrote an eight (8) part series of articles. For CES 2024, I wrote a three (3) part series covering many companies.

However, with my Apple Vision Pro (AVP) coverage plus other commitments, I couldn’t see how to cover the over 50 companies I met with in January. While the AVP is such a major product in mixed reality and is important for a broad audience, I don’t want the other companies working on MR headsets, displays, and optics to be forgotten. So, I asked Jason McDowall of The AR Show to moderate a video presentation of the over 50 companies, with each company getting one slide.

Jason and I recorded for about 4 hours (before editing), split over two days, which works out to less than 5 minutes per company. This first hour of the video covers primarily headset companies. I made an exception for Avegant's prototype, which used a Dispelix waveguide, as it seemed to fit with the headsets.

In editing the video, I realized my presentation was a little "thin" regarding details on some companies, so I'm adding some supplementary information and links in this article. I also moved a few companies around in the editing process and re-recorded a couple of sections, so the slide numbers don't always go in order.

Subscription Options Coming to KGOnTech

Between travel expenses and buying an Apple Vision Pro (AVP) with a MacBook for testing the AVP, I spent about $12,000 out of pocket in January and early February alone. Nobody has ever paid to be included (or excluded) in this blog. This blog, which started as a part-time hobby, has become expensive in terms of money and a full-time job. What makes it onto the blog is the tip of the iceberg of time spent on interviews, research, photographing and editing pictures and videos, and travel.

As noted earlier, to make financial sense of all this and pay some people to help me, I'm in the process of setting up subscription services for companies and planning (paid) webinars for individuals. If you or your company might be interested, please email subscriptions@kgontech.com.

Outline of the Video and Additional Information

Below is an outline of the first hour of the video, along with some additional comments and links to more information. The times in blue on the left of each subsection below are the times in the YouTube video discussing a given company.

0:00 Jason McDowall of the AR Show and Karl Guttag of KGOnTech introductions.

Jason and I briefly introduced ourselves.

2:59 Mixed Reality Major Design Challenges

My AR/MR design challenge list started with 11 items in a December 2015 guest article in Display Daily, Sorry, but there is no Santa Claus – Display Daily. Since then, the list has grown to 23 items.

The key point is that improving any of these items will negatively affect multiple other items. For example, having a wider field of view (FOV) will make the optics bigger, heavier, and more expensive. It will also require a higher resolution display to support the same or better angular resolution, which, in turn, means more pixels requiring more processing, which will need more power, which means bigger batteries and more thermal management. All these factors combine to hurt cost and weight.
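
To make the coupling concrete, here is a toy calculation of how a single spec change ripples through the design. The pixels-per-degree target and aspect ratio are my illustrative assumptions.

```python
# How widening the FOV ripples through the rest of the design (toy numbers).
target_ppd = 40                        # pixels per degree to hold angular resolution

for fov in (30, 50):                   # horizontal FOV in degrees
    h_pixels = fov * target_ppd
    v_pixels = int(h_pixels * 9 / 16)  # assume a 16:9 aspect for illustration
    pixels = h_pixels * v_pixels
    print(f"{fov} deg FOV -> {h_pixels} x {v_pixels} = {pixels / 1e6:.1f} MP/eye")
# 30 deg -> 1200 x 675  = 0.8 MP/eye
# 50 deg -> 2000 x 1125 = 2.2 MP/eye (~2.8x the pixels to process and illuminate)
```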

6:34 Xreal (Formerly Nreal)

I’ve followed Nreal (now Xreal) since its first big splash in the U.S. at CES 2019 (wow, five years ago). Xreal claims to have shipped 300,000 units last year, making it by far the largest unit volume shipper of optical AR headsets.

At CES 2024, Xreal demonstrated a future design that goes beyond their current headsets and adds cameras for image recognition and SLAM-type features.

BMW invited me to a demo of their proof-of-concept glasses-based heads-up display. The demo used Xreal glasses as the display device. BMW had added a head-tracking device under the rearview mirror to lock the displayed image relative to the car.

But even at CES 2019, Nreal was a case of déjà vu, as it looked so much like a cost-reduced version of the Osterhout Design Group (ODG) R-9 that I first saw at CES 2017 and started covering and discussing in 2016. The ODG R-9 and the original Nreal had similar birdbath designs and used a Sony 1920×1080 Micro-OLED display. According to a friend of this blog and former ODG R-9 designer, now CEO of the design firm PulsAR, David Bonelli, there are still some optical advantages of the ODG R-9 that others have yet to copy.

Below is a link to my recent article on CES, which discusses Xreal and my ride wearing the BMW AR demo. I have also included some links to my 2021 teardown of the Nreal birdbath optics and 2016 and 2017 articles about the ODG-R9.

11:48 Vuzix

Vuzix was founded in 1997, before making see-through AR devices, let alone waveguides, became practical. It now has a wide range of products aimed at different applications. Vuzix founder and CEO Paul Travers has emphasized the need for rugged, all-day wearable AR glasses.

Vuzix historically has primarily had small, lightweight designs, with most later products having a glasses-like form factor. Vuzix originally licensed waveguide technology from Nokia, the same technology Microsoft licensed and later acquired for its Hololens 1. Vuzix says its current waveguide designs are very different from what it licensed from Nokia.

Vuzix’s current waveguide-based products include the monocular BLADE and the biocular SHIELD, which use Texas Instruments DLP displays.  Vuzix ‘s latest products are the Ultralight and Ultralight-S, which use Jade Bird Display MicroLEDs driving a waveguide. The current monocular designs use a green-only Jade Bird Display (JBD) with a 640 by 480 resolution and weigh only 34 grams. Vuzix has also announced plans to partner with the French startup Atomistic to develop full-color on a single device, MicroLEDs.

Multiple companies use Vuzix glasses as the headset platform to add other hardware and software layers to make application AR headsets. Xander was at CES with their AI voice-to-text glasses (discussed later). The company 3D2Cut has AI software that shows unskilled workers where to prune wine grape vines based on inputs from vine pruning experts. At last year’s CES, I met with 360world and their ThermalGlass prototype, which added thermal cameras to a Vuzix headset.

Below are links to my 2024 CES article that included Vuzix, plus a collection of other articles about Vuzix from prior years:

17:13 Digilens

I’ve met with Digilens many times through the years. This year was primarily an update and improvements on this major announcement of their Argo headset from last year (see 2023 article and video via the links below).

Digilens said that in response to my comments last year, they designed an Argo headband variant with a rigid headband that does not rest on the nose and can be flipped up out of view. This new design supports wearing ordinary glasses and is more comfortable for long-term wear. Digilens said many of their customers like this new design variation. A major problem I see with the Apple Vision Pro is the way it is uncomfortably clamped to the face and that it does not flip up like, say, the Lynx MR headset (see also video with Brad Lynch) and Sony MR Headset announced at CES 2024 (which looks very much like the Lynx headset).

Digilens also showed examples of their one-, two-, and three-layer waveguides, which trade off weight and cost against image quality. They also showed examples of moving the exit grating to different locations in the waveguide.

As I have covered Digilens so much in the past (see links below for some more recent articles), this year’s video was just an update:

20:00 Avegant

Avegant has become a technology development company. They are currently focused on designing small LCOS engines for AR glasses. They presented an update at the AR/VR/MR 2024 conference. Right before the conference, Avegant announced its development of “Spotlight™” to improve contrast by selective illumination of the LCOS panel, similar to LED array LCD TVs with local dimming.

Avegant has shown a very small 30-degree FOV, LCOS-based, 1280×720 pixel light engine supporting a glasses-like form factor. Avegant's glasses designs support higher resolution, a larger FOV, and a smaller form factor than laser beam scanning or X-Cube-based MicroLEDs (see TCL below). They also got over 1 million nits out of their 30-degree FOV engines. While Avegant designed and built the projector engine and prototype glasses, they used Dispelix waveguides (discussed next).

Below are links to blog articles about Avegant’s small LCOS engines:

24:46 Dispelix (and Avegant)

Dispelix is a waveguide design company, not a headset maker. Avegant, among others, was using Dispelix waveguides (and why they were discussed at this point in the video).

Dispelix presented at the AR/VR/MR conference, where they discussed their roadmap to improve efficiency, reduce “eye glow,” and reduce “rainbow artifacts” caused by diffraction grating light capture.

Dispelix claims to have a roadmap to improve light throughput by a factor of ~4.5 over its current Selva design.

Dispelix, like several other diffractive waveguide companies, including Vuzix and Digilens, uses pantoscopic (front-to-back) tilt to reduce the eye glow effect that is common with most other diffractive waveguides (most famously, Hololens). It turns out that for every one degree of tilt, the "glow" is tilted down by two degrees, so with just a few degrees of tilt, the glow is projected well below most people's view. Dispelix has said that a combination of grating designs and optical coatings can nearly eliminate the glow in future designs.
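
The two-for-one relationship follows from basic reflection geometry: rotating a reflecting surface by an angle deflects the reflected ray by twice that angle. A trivial sketch of the rule of thumb:

```python
# Rotating a reflecting surface by theta deflects the reflected ray by 2*theta,
# the basis of the "one degree of tilt, two degrees of glow" rule of thumb.
for tilt_deg in (1, 3, 5):
    print(f"{tilt_deg} degree(s) of pantoscopic tilt -> glow shifted ~{2 * tilt_deg} degrees down")
```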

Another problem (not discussed in the video) that has plagued diffractive waveguides is the "rainbow artifact" caused by external light, particularly overhead light from in front of or behind the waveguide, being directed to the eye by the diffraction gratings. Because the grating's effect is wavelength-dependent, the light is broken into multiple colors (like a rainbow). Dispelix says they are developing designs that will direct the unwanted external light away from the eye.

(2024) CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, EverySight, Mojie, TCL color µLED

30:50 Tilt-Five (and CEO Jeri Ellsworth)

I met with Jeri Ellsworth, the CEO of Tilt-Five, at CES. In addition to getting an update on Tilt-Five (with nothing I can’t talk about), Jeri and I discussed our various histories working on video game hardware, graphics co-processors, and augmented reality.

BTW, Jeri Ellsworth, Jason McDowall, Adi Robertson (editor at The Verge), Ed Tang (CEO of Avegant), and I are slated to be on a panel discussion at AWE 2024.

Below are some links to my prior reporting on Tilt-Five.

36:05 Sightful Spacetop

Sightful’s Spacetop is essentially a laptop-like keyboard and computer with Xreal-type birdbath optics using 1920×1080 OLED microdisplays with a 52-degree FOV. Under the keyboard are the processing system (Qualcomm Snapdragon XR2 Kryo 585TM 8-core 64-bit CPU and AdrenoTM 650 GP), memory (8GB), flash (128GB), and battery (5 hours of typical use). The system runs a “highly modified” Android operating system.

I saw Sightful at the Show Stoppers media event at CES, and they were nice enough to bring custom prescription inserts for me to the AR/VR/MR conference. Sightful's software environment supports multiple virtual monitors/windows of various sizes, which are clipped to the glasses' 1920×1080, 52-degree view. I believe the system uses the inertial sensors in the headset to make the virtual monitors appear stationary, as opposed to the more advanced SLAM (simultaneous localization and mapping) used by many larger headsets.
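
Below is a minimal sketch of the kind of 3-DoF window stabilization I believe such systems use; it is my own illustration of the principle, not Sightful's code. Each window is pinned at a fixed direction in the world, and the head's rotation is subtracted before drawing.

```python
# Minimal 3-DoF "world-locked window" sketch (my illustration, not Sightful's code).
def window_screen_angle(window_world_yaw_deg, head_yaw_deg):
    """Where a virtual monitor appears in the glasses' FOV for a given head yaw."""
    offset = window_world_yaw_deg - head_yaw_deg  # subtract the head rotation
    return (offset + 180) % 360 - 180             # wrap to [-180, 180)

# A window pinned 30 degrees to the user's right:
for head_yaw in (0, 15, 30):
    angle = window_screen_angle(30, head_yaw)
    visible = abs(angle) <= 26                    # half of a 52-degree FOV
    print(f"head at {head_yaw:>2} deg -> window at {angle:+.0f} deg, visible={visible}")
```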

As a side note, my first near-eye-display work in 1998 was on a monocular headset to be used with laptops as a private display when traveling. I designed the 1024×768 (high resolution for a 1998 microdisplay) LCOS display device and its controller. The monocular headset used color sequential LED illumination with birdbath mirror optics. Given the efficiency and brightness of LEDs of the day, it was all we could do to make a non-see-through monocular device. Unfortunately, the dot-com bust happened in 1999, which took out many high-tech startups.

I wrote about Sightful in my 2024 CES coverage:

36:05 Nimo

Nimo’s “Spatial Computing” approach is slightly different from Sightful’s. Instead of combining the computing hardware with the keyboard like Sightful, Nimo has a small computing and battery module that works as a 3-D spatial mouse with a trackpad (on top). Nimo has a USB-C connection for AR glasses, WiFi 6, and Bluetooth 5.1 for communication with an (optional) wireless keyboard.

The computing specs resemble Sightful’s, with a Qualcomm® XR2 8-core CPU, 8GB RAM, and 128GB Storage. Nimo supports working with Rokid, Xreal, and its own LetinAR-Optics-based 1920×1080 OLED AR glasses via its USB-C port, which provides display information and power.

Like Sightful, Nimo has a modified Android Operating system that supports multiple virtual monitors/windows. It uses the various glasses’ internal sensors to detect head movement to keep the monitors stationary in 3-D space as the user’s head moves.

I wrote about Nimo Planet in my 2024 CES coverage:

38:59 .Lumen (headset for the blind)

Lumen is a headset for blind people that incorporates lidar, cameras, and other sensors. Rather than outputting a display image, it provides haptic and audible feedback to the user. I don’t know how to judge this technology, but it seems like an interesting case where today’s technology could help people.

40:07 Ocutrx Oculenz

Ocutrx’s OcuLenz was initially aimed at helping people with macular degeneration and other forms of low vision. However, at the Ocutrx booth on its website at the CES ShowStoppers event, Ocutrx emphasized that the headset could be used for more than low vision, including gamers, surgeons, and military personnel. The optical design was done by an old friend, David Kessler, whom I ran into at the Ocutrx booth at CES and the AR/VR/MR conference.

The Oculenz uses larger-than-typical birdbath optics to support a 72-degree (diagonal) FOV. It uses 2560 x 1440 pixels per eye, so they will have a similar angular resolution but wider FOV than the more common 1920×1080 birdbath glasses (e.g., Xreal), which typically have 45- to ~50-degree FOVs. Unlike the typical birdbath glasses, which have separate processing, the Oculenz integrates a Qualcomm Snapdragon® XR2 processor, Wi-Fi, and cellular connectivity. This headset was originally aimed at people with low vision as a stand-alone device.

I wrote about Ocutrx and some of the issues of funding low-vision glasses in my earlier report on CES 2024, linked below:

44:22 Everysight

Everysight has AR glasses in a glasses-like form factor. They are designed to be self-contained, weigh only 47 grams, and have no external wiring. They use a 640×400 pixel full-color OLED display and can achieve >1000 nits to the eye.

Everysight uses a “Pre-Compensated Off-Axis” optical design, which tends to get more than double the light from the display to the eye while enabling more than three times the real-world light to pass through the display area compared to birdbath (e.g., Xreal) designs. With this design, the pre-compensation optics pre-correct for hitting the curved semi-mirror combiner off-axis. Typically, this mirror will be 50% or less reflective and only has to be applied over where the display is to be seen.

However, the Everysight glasses support only a rather small 22-degree FOV, and the eyebox is rather small. While Everysight has reduced the pantoscopic tilt of the lenses over prior models, the latest Maverick models still tilt toward the user's cheeks more than most common glasses.

UPDATE 4/2/2024: Everysight responded to my original eyebox comment: "With respect to the eyebox, we take care of that with different sizes (Maverick today has two sizes – Medium and Large). The important part is that once you have the correct size, glass or eye movements won't take you out of the eyebox. We believe that this is a much better tradeoff than a one-size-fits-all [with] low optical efficiency and enables you to use OLEDs in sunny days outdoors, even with clear visors."

Thus far, Everysight seems to be marketing its glasses more to the sports market, which needs small, lightweight headsets with bright displays for outdoor use.

If vision correction is not required, the lenses can be easily swapped out for various types of tint. More recently, Everysight has been able to support prescription lenses. For prescriptions, the inner curved mirror corrects for the virtual image, and a corrective lens on the outside corrects for the real world, including correcting for the curvature of the inner surface with the semi-mirror.

Everysight spun out of the large military company ELBIT, which perfected the pre-compensated off-axis design for larger headsets. This optical design is famously used in the F35 helmet and, more recently, in the civilian aircraft Skylens head-wearable HUD display, which has received FAA approval for use in multiple civilian aircraft, including recently the 737ng family.

Everysight was discussed in my CES 2024 coverage linked to below:

48:42 TCL RayNeo X2 and X2 Lite

At CES 2024, TCL showed their RayNeo X2 and their newer X2 Lite. I have worked with 3-chip LCOS projectors using an X-Cube in the past, and I was curious to see the image quality, as I know from experience that aligning displays to X-Cubes is non-trivial, particularly with the smaller sizes of the Jade Bird Display red, green, and blue MicroLED displays.

Overall, the newer X2 Lite using the Applied Materials (AMAT) waveguides looked much better than the earlier RayNeo X2 (non-Lite). Even the AMAT waveguides had significant front projection, but as discussed with respect to Dispelix above, this problem can be managed, at least for smaller FOVs (the RayNeo X2s have a ~30-degree diagonal FOV).

I covered the TCL color µLED in more detail in my CES 2024 coverage (link below). I have also included links to articles discussing the Jade Bird Display MicroLEDs and their use of an X-Cube as a color combiner:

55:54 Mojie/Meta Bounds

Mojie/Meta Bounds showed 640×480 green-only MicroLED-based glasses claiming 3,000 nits (to the eye), 90% transparency (without tinting), a 28-degree FOV, and a weight of only 38 grams. These were also wireless and, to a first approximation, very similar to the Vuzix Ultralite. One thing that makes them stand out is that they use a waveguide made of plastic resin (most use glass).

Many companies are experimenting with plastic waveguides to reduce weight and improve safety. So far, the color uniformity with full-color displays has been worse than with glass-based waveguides. However, the uniformity issues are less noticeable with a monochrome (green) display. Mitsui Chemicals and Mitsubishi Chemicals, both of Japan, are suppliers of resin plastic substrate material for waveguides.

Below is a link to my article on Mojie/Meta Bounds in my CES 2024 coverage:

57:59 Canon Mixed Reality

Canon had a fun demo based on their 100+ camera Free Viewpoint Video System. Basically, you could sit around a table and see a basketball game (I think it was the 2022 NBA All-Star Game) played on that table from any angle. Canon has been working on this technology for a decade or more, with demos for both basketball and soccer (football). While it's an interesting technology demo, I don't see how this would be a great way to watch a complete game. Even with over 100 cameras, and with the players being relatively small (virtually far away), one could see gaps that the cameras couldn't cover.

Canon also showed a very small passthrough AR camera and lens setup. While it was small, the FOV and video quality were not impressive. Brad Lynch of SadlyItsBradley found it to be pointless.

I have personally purchased a lot of Canon camera equipment over the last 25 years (including my Canon R5, which I take pictures with for this blog), so I am not in any way against Canon. However, as I discussed with Brad Lynch about Canon’s booth at CES 2023 (YouTube Link), I can’t see where Canon is going or what message they are trying to send in terms of mixed reality despite their very large and expensive booth. On the surface, Canon seems to be dabbling in various MR technologies, but it is not moving in a clear direction.

59:54 Solos (and Audio Glasses)

Solos makes audio-only glasses similar to the Meta Ray-Ban Wayfarer (but without cameras). These glasses emphasize modular construction, with all the expensive "smarts" in the temples so that the front part with the lenses can be easily swapped.

Like several others, Solos uses cellular communication to connect to ChatGPT to do on-the-fly translations. What makes Solos more interesting is that its chairman is John Fan, who is also the chairman of Lightning Silicon Technology (a spinoff of Kopin Displays), a maker of OLED microdisplays. At Lightning Silicon's CES 2024 suite, John Fan discussed that incorporating the displays into the Solos glasses is an obvious future step.

CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, EverySight, Mojie, TCL color µLED

1:01:16 Xander

While I saw Xander in the AARP-sponsored AgeTech Summit booth at CES 2024, I didn't get to meet with them. Xander hits on a couple of issues I feel are important. First, they show how AR technology can be used to help people. Second, they show what I expect to be a growing trend of adding basic visual information to augment audio.

While I (Karl) missed Xander at CES 2024, it turns out that Jason McDowall’s The AR Show (with guest host Kaden Pierce) recently interviewed Xander CEO Alex Westner on The AR Show.

Next Time – Optics and Display Devices

The video’s next part will discuss optical and display device companies.

DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)

27 March 2023 at 19:46

Introduction – Contrast in Approaches and Technologies

This article will compare and contrast the Vuzix Ultralite, Lumus Z-Lens, and DigiLens Argo waveguide-based AR prototypes I saw at CES 2023. I discussed these three prototypes with SadlyItsBradley in our CES 2023 video. It will also briefly discuss Avegant's AR/VR/MR 2022 and 2023 presentations about their new smaller LCOS projection engine, and the Magic Leap 2's LCOS design, to show some other projection engine options.

It will go a bit deeper into some of the human factors of the DigiLens Argo; not to pick on the Argo, but because it has more features and demonstrates some common traits and issues of trying to support a rich feature set in a glasses-like form factor.

When I quote various specs below, they are all manufacturer’s claims unless otherwise stated. Some of these claims will be based on where the companies expect the product to be in production. No one has checked the claims’ veracity, and most companies typically round up, sometimes very generously, on brightness (nits) and field of view (FOV) specs.

This is a somewhat long article, and the key topics discussed include:

  • MicroLED versus LCOS Optical engine sizes
  • The image quality of MicroLED vs. LCOS and Reflective (Lumus) vs. Diffractive waveguides
  • The efficiency of Reflective vs. Diffractive waveguides with MicroLEDs
  • The efficiency of MicroLED vs. LCOS
  • Glasses form factor (using Digilens Argo as an example)

Overview of the prototypes

Vuzix Ultralite and Oppo Air Glass 2

The Vuzix Ultralite and Oppo Air Glass 2 (top two on the right) each have a 640 by 480 pixel green-only Jade Bird Display (JBD) per eye and were discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7).

They each weigh about 38 grams, including frames, processing, wireless communication, and batteries. Both are self-contained, with integrated wireless, battery, and processing, and support about a 30-degree FOV.

Vuzix developed their own glass diffractive waveguides and optical engines for the Ultralite. They claim a 30-degree FOV with 3,000 nits.

Oppo uses resin plastic waveguides and a MicroLED optical engine developed jointly with Meta Bounds. I had previously seen prototype resin plastic waveguides from other companies for several years, but this is the first time I have seen them in a product getting ready for production. The glasses (described in a 1.5-minute YouTube/CNET video) include microphones and speakers for applications, including voice-to-text and phone calls. They also plan on supporting vision correction with lenses built into the frames. Oppo claims the Air Glass 2 has a 27-degree FOV and outputs 1,400 nits.

Lumus Z-Lens

Lumus’s Z-Lens (third from the top right) supports up to a 2K by 2K full/true color LCOS display with a 50-degree FOV. Its FoV is 3 to 4 times the area of the other three headsets, so it must output more than 3 to 4 times the total light. It supports about 4.5x the number of pixels of the DigiLens Argo and over 13x the pixels of the Vuzix Ultralite and Oppo Air Glass 2.

The Z-Lens prototype is a demonstration of display capability and, unlike the other three, is not self-contained and has no battery or processing. A cable provides the display signal and power for each eye. Lumus is an optics waveguide and projector engine company and leaves it to its customers to make full-up products.

Digilens Argo

The DigiLens Argo (bottom, above right) uses a 1280 by 720 full/true-color LCOS display. The Argo has many more features than the other devices, with integrated SLAM cameras, GNSS (GPS, etc.), Wi-Fi, Bluetooth, a 48MP color camera (with 4×4 pixel "binning" like the iPhone 14), voice recognition, batteries, and a more advanced CPU (Qualcomm Snapdragon 2). Digilens intends to sell the Argo for enterprise applications, perhaps with partners, while continuing to sell waveguides and optical engines as components for higher-volume applications. As the Argo has a much more complete feature set, I will discuss some of the pros and cons of the human factors of the Argo design later in this article.

Through the Lens Images

Below is a composite image from four photographs taken with the same camera (OM-D E-M5 Mark III) and lens (fixed 17mm). The pictures were taken at conferences, handheld, and not perfectly aligned for optimum image quality. The projected display and the room/outdoor lighting have a wide range of brightness between the pictures. None of the pictures have been resized, so the relative FoVs have been maintained, and you get an idea of the image content.

The Lumus Z-Lens reflective waveguide has a much bigger FOV, significantly more resolution, and exhibits much better color uniformity with the same or higher brightness (nits). It also appears that reflective waveguides have a significant efficiency advantage with both MicroLEDs and LCOS, as discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7). It should also be noted that the Lumus Z-Lens prototype has only the display with optics and no integrated processing, communication, or battery, whereas the others are closer to full products.

A more complex issue is that of power consumption versus brightness. LCOS engines today are much more efficient (by 10x or more) for full-screen bright images than MicroLEDs with similar waveguides. MicroLED's big power advantage occurs when the content is sparse, as its power consumption is roughly proportional to the average pixel value, whereas with LCOS, the whole display is illuminated regardless of the content.
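As a toy model of this trade-off (the wattages are illustrative placeholders, not measurements; only the 10x full-screen ratio comes from the text above):

```python
# Toy power model: LCOS illuminates the whole panel regardless of content,
# while MicroLED power scales roughly with average pixel level (APL).
LCOS_W          = 0.10   # assumed LCOS engine power at any content
MICROLED_FULL_W = 1.00   # assumed MicroLED power at 100% APL (~10x worse)

for apl in (0.01, 0.05, 0.10, 0.50, 1.00):
    microled_w = MICROLED_FULL_W * apl
    winner = "MicroLED" if microled_w < LCOS_W else "LCOS"
    print(f"APL {apl:4.0%}: MicroLED {microled_w:.2f} W "
          f"vs LCOS {LCOS_W:.2f} W -> {winner}")
```

With these placeholder numbers, the crossover is at a 10% average pixel level; sparse notification-style content typically sits well below that.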

If and when MicroLEDs support full color, their nits-per-Watt efficiency will be significantly lower than monochrome green. Whatever method produces full color will detract from the overall electrical and optical efficiency. Additionally, color balancing for white requires adding blue and red light, which deliver fewer nits per Watt.
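A rough illustration of the white-balance penalty, using standard Rec. 709 luminance weights; the per-color nits-per-Watt values are invented for illustration (red and blue are assumed to deliver far fewer nits per Watt than green):

```python
# Why a white-balanced display delivers fewer nits per Watt than green-only.
LUMA = {"R": 0.2126, "G": 0.7152, "B": 0.0722}   # Rec. 709 luminance weights

# Hypothetical per-color efficiencies (nits per Watt), illustrative only.
NITS_PER_W = {"R": 100.0, "G": 400.0, "B": 50.0}

# Watts needed per nit of white = sum of (luminance share / color efficacy).
watts_per_white_nit = sum(LUMA[c] / NITS_PER_W[c] for c in LUMA)
print(f"White: ~{1 / watts_per_white_nit:.0f} nits/W "
      f"vs green-only: {NITS_PER_W['G']:.0f} nits/W")
```

Even though blue contributes little luminance, paying for red and blue drops the white-point efficiency to less than half of green-only with these assumed numbers.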

Some caveats:

  • The Lumus Z-Lens is a prototype and does not have all the anti-reflective and other coatings of a production waveguide. Lumus uses an LCOS device with ~3-micron pixels, which fits 1440 by 1440 pixels within the ~50-degree FOV supported by the optics. Lumus is working with at least one LCOS maker on a ~2-micron pixel to support 2K by 2K resolution with the same size display. The image is cut off on the right-hand side by the camera, which was rotated into portrait mode to fit inside the glasses.
  • The DigiLens through-the-lens image is from Photonics West 2022 (about one year old). DigiLens has continued to improve its waveguide since this picture was taken.
  • The Vuzix picture was taken through the Vuzix Shield, which uses the same waveguide and optics as the Vuzix Ultralite.
  • The Oppo image was taken at the AR/VR/MR 2023 conference.

Optical Engine Sizes

Vuzix has an impressively small optical engine driving its diffractive waveguides. Below left is a comparison of Vuzix's older full-color DLP engine with an in-development color X-Cube engine and the green MicroLED engine used in the Vuzix Ultralite™ and Shield. In the center below is an exploded view of the Oppo and Meta Bounds glasses (a joint design, as they describe it) with their MicroLED engine, shown in their short CNET YouTube video. As seen in the still from the Oppo video, they plan to support vision correction built into the glasses.

Below right is the DigiLens LCOS engine, which uses a fairly conventional LCOS design (using Omnivision's LCOS device, with its driver ASIC showing). The dotted line indicates where the engine blocks off the upper part of the waveguide. This blocked-off area carries over to the Argo design.

The DigiLens Argo, with its more "conventional" LCOS engine, requires a large "brow" above the eye to hide it (more on this issue later). All the other companies have designed their engines to avoid this level of intrusion into the front area of the glasses.

Lumus developed their 1-D pupil-expanding reflective waveguides over nearly two decades, and these needed a relatively wide optical engine. With the 2-D Maximus waveguide in 2021 (see: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures), Lumus demonstrated their ability to shrink the optical engine. This year, Lumus further reduced the size of the optical engine and its intrusion into the front lens area with their new Z-Lens design (compare the two right pictures below of Maximus to Z-Lens).

Shown below are frontal views of the four lenses and their optical engines. The Oppo Air Glass 2 "disguises" the engine within the industrial design of a wider frame (and wider waveguide). The Lumus Z-Lens, with full color and about 3.5 times the FOV area of the others, has about the same frontal intrusion as the green-only MicroLED engines. The Argo (below right) stands out with the large brow above the eye (the rough location of the optical engine is shown with the red dotted line).

Lumus Removes the Need for Air Gaps with the Z-Lens

Another significant improvement with Lumus's Z-Lens is that, unlike Lumus's prior waveguides and all diffractive waveguides, it does not require an air gap between the waveguide's surface and any encapsulating plastics. This could prove to be a big advantage in supporting integrated prescription vision correction or simple protection. Supporting air gaps with waveguides has numerous design, cost, and optical problems.

A full-color diffractive waveguide typically has two or three waveguide plates sandwiched together, with air gaps between them plus an air gap on each side of the sandwich. Everywhere there is an air gap, antireflective coatings are also desirable to reduce reflections and improve efficiency.
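The optical cost of uncoated air-gap surfaces can be estimated with the normal-incidence Fresnel reflection formula, R = ((n1 - n2)/(n1 + n2))², about 4% per glass-air surface. A back-of-envelope sketch (assuming n ≈ 1.5 glass and ignoring coatings and multiple-reflection ghosts):

```python
# Back-of-envelope Fresnel losses through a stack of waveguide plates with
# air gaps (normal incidence, no AR coatings, ghost reflections ignored).
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

R = fresnel_reflectance(1.5, 1.0)       # ~4% per glass-air surface

for plates in (1, 2, 3):                # e.g., 3 plates for full color
    surfaces = 2 * plates               # two glass-air surfaces per plate
    transmitted = (1 - R) ** surfaces
    print(f"{plates} plate(s), {surfaces} surfaces: ~{transmitted:.0%} "
          f"transmitted ({R:.1%} reflected per surface)")
```

Each of those ~4% reflections is also a potential ghost image, which is why antireflective coatings are wanted at every gap and why eliminating air gaps entirely is attractive.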

Avegant and Magic Leap Small LCOS Projector Engines

LCOS projection engines have historically had size problems. We are now seeing new LCOS designs, such as the Lumus Z-Lens (above) and those from Avegant and Magic Leap, that are much smaller and no more intrusive into the lens area than the MicroLED engines. My AR/VR/MR 2022 coverage included the article Magic Leap 2 at SPIE AR/VR/MR 2022, which discusses the small LCOS engines from both Magic Leap and Avegant. In our AWE 2022 video with SadlyItsBradley, I discuss the smaller LCOS engines by Avegant, Lumus (Maximus), and Magic Leap.

Below is what Avegant demonstrated at AR/VR/MR 2022 with their small “L” shaped optical engines. These engines have very little intrusion into the front lenses, but they run down the temple of the glasses, which inhibits folding the temple for storage like normal glasses.

At AR/VR/MR 2023, Avegant showed a newer optical design that reduced the footprint of their optics by 65%, including shortening them to the point that the temples can be folded, similar to conventional glasses (below left). It should be noted that what is called a "waveguide" in the Avegant diagram is very different from the waveguides used to show the image in AR glasses; Avegant's waveguide is used to illuminate the LCOS device. Avegant, in their presentation, also discussed various LED drive modes that give higher brightness and efficiency in green-only and black-and-white modes. The 13-minute video of Avegant's presentation is available at the SPIE site (behind SPIE's paywall). According to Avegant's presentation, the optics are 15.6mm long by 12.4mm wide and support a 30-degree FOV with 34 pixels/degree, outputting 2 lumens in full color and up to 6 lumens in the limited-color outdoor mode. According to the presentation, they expect about 1,500 nits with typical diffractive waveguides in full-color mode, which would roughly double in the outdoor mode.
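Two quick sanity checks on those numbers (my arithmetic, not Avegant's specs; "nits per lumen" is a common waveguide figure of merit, back-calculated here from the quoted values):

```python
# Sanity-checking the quoted Avegant figures.
fov_deg, ppd = 30, 34
print(f"~{fov_deg * ppd} pixels across the {fov_deg}-degree FOV")  # ~1020

# Back-calculated waveguide figure of merit (nits to the eye per lumen in):
modes = {"full color": (2.0, 1500.0),    # (lumens, expected nits)
         "outdoor":    (6.0, 3000.0)}    # nits "roughly double"
for name, (lumens, nits) in modes.items():
    print(f"{name}: ~{nits / lumens:.0f} nits per lumen")
```

Note the implied nits-per-lumen drops in outdoor mode (~500 vs. ~750), which suggests the "up to 6 lumens" is a peak rather than a typical figure.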

The Magic Leap 2 (ML2) takes reducing the optics one step further and puts the illumination LEDs and LCOS on opposite sides of the display’s waveguide (below and described in Magic Leap 2 at SPIE AR/VR/MR 2022). The ML2 claims to have 2,000 nits with a much larger 70-degree FOV.

Transparency (vs. Birdbath) and “Eye Glow”

Transparency

As seen in the pictures above, all the waveguide-based glasses have transparency on the order of 80-90%. This is a far cry from the common birdbath optics, with typically only 25% transparency (see Nreal Teardown: Part 1, Clones and Birdbath Basics). The former Osterhout Design Group (ODG) made birdbath AR glasses popular, first with their R6 and then with the R8 and R9 models (see my 2017 article ODG R-8 and R-9 Optic with OLED Microdisplays), which served as the models for designs such as Nreal and Lenovo's A3.

ODG Legacy and Progress

Several former ODG designers have ended up at Lenovo, the design firm Pulsar, Digilens, and elsewhere in the AR community. I found pictures of Digilens VP Nima Shams wearing the ODG R9 in 2017 and the Digilens Argo at CES. When I showed the pictures to Nima, he pointed out the progress that had been made. The 2023 Argo is lighter, sticks out less far, has more eye relief, is much more transparent, has a brighter image to the eye, and is much more power efficient. At the same time, it adds features and processing not found on the ODG R8 and R9.

Front Projection (“Eye Glow”)

Another social aspect of AR glasses is Front Projection, known as “Eye Glow.” Most famously, the Hololens 1 and 2 and the Magic Leap 1 and 2 project much of the light forward. The birdbath optics-based glasses also have front projection issues but are often hidden behind additional dark sunglasses.

When looking at the "eye glow" pictures below, I want to caution you that these are random pictures and not controlled tests. The glasses' displays were at radically different brightness settings, and the ambient light varies widely between pictures. Also, front projection is typically highly directional, so the camera angle has a major effect (and there was no attempt to search for the worst-case angle).

In our AWE 2022 video with SadlyItsBradley, I discussed how several companies, including Dispelix and DigiLens, are working to reduce front projection. Lumus's reflective approach has inherent advantages in terms of front projection. The DigiLens Argo (pictures 2 and 3 from the right) has greatly reduced its eye glow. The Vuzix Shield (with the same optics as the Ultralite) has some front projection (and some onto my cheek), as seen in the picture below (4th from the left). Oppo appears to have fairly pronounced front projection, as seen in two short videos (video 1 and video 2).

DigiLens Argo Deeper Look

DigiLens has primarily been a maker of diffractive waveguides, but it has made several near-product demonstrations through the years. A few years ago, they went through a major management change (see my 2021 article, DigiLens Visit), and with the new management came changes in direction.

Argo’s Business Model

I’m always curious when a “component company” develops an end product. I asked DigiLens to help clarify their business approaches and received the following information (with my edits):

  1. Optical Solutions Licensing – where we provide solutions to our licensees to build their own waveguides using our scalable printing/contactless copy process. Our licensees can design their own waveguides, which DigiLens' software tools enable. This business is aimed at higher-volume applications from larger companies, mostly focused on, but not limited to, the consumer side of the head-worn market.
  2. Enterprise/Industrial Products – ARGO is the first product from DigiLens that targets the enterprise and industrial market as a full solution. It will be built to scale and meet its target market's compliance and reliability needs. It uses DigiLens optical technology in the waveguides and projector and is built by a team with experience shipping thousands of enterprise and industrial glasses from Daqri, ODG, and RealWear.

Image Quality

As I was familiar with Digilen’s image quality, I didn’t really check it out that much with the ARGO, but rather I was interested in the overall product concept. Over the last several years, I have seen improved image quality, including uniformity and addressing the “eye glow” issue (discussed earlier).

For the type of applications the "enterprise market" ARGO is trying to serve, absolute image quality may not be nearly as important as other factors. As I have often said, "Hololens 2 proves that image quality is not a top priority for the customers that use it" (see this set of articles discussing the Hololens 2's poor image quality). For many AR markets, the displayed information consists of simple indicators such as arrows, a few numbers, and lines. In terms of color, it may be good enough if only a few key colors are easily distinguishable.

Overall, DigiLens has similar issues with color uniformity across the field of view as all other diffractive waveguides I have seen. In the last few years, they have gone from having poor color uniformity to being among the better diffractive waveguides I have seen. I don't think any diffractive waveguide would be widely considered good enough for movies and photographs, but they are good enough to show lines, arrows, and text. But let me add a key caveat: what all companies demonstrate are invariably cherry-picked samples.

Field of View (FOV)

While the Argo's 30-degree FOV is considered too small for immersive games, it should be more than sufficient for many "enterprise applications." I discussed why very large FOVs are often unnecessary in AR in this blog's 2019 article FOV Obsession. Many have conflated VR immersion with AR applications that need to present key information while remaining highly transparent, lightweight, and hands-free. As Professor and decades-long AR advocate Thad Starner pointed out, requiring the eye to move too much causes discomfort. I make this point because a very large FOV comes at the expense of weight, power, and cost.
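For a sense of scale of what a 30-degree FOV covers (simple geometry; the virtual-image distances are my illustrative choices):

```python
import math

# Width spanned by a 30-degree FOV at typical virtual-image distances.
fov_deg = 30
for distance_m in (0.5, 2.0):           # arm's length, across the room
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
    print(f"At {distance_m} m: ~{width_m:.2f} m wide virtual display")
```

That is roughly a 27 cm wide panel at arm's length, ample for arrows, numbers, and short text.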

Key Feature Set

The diagram below, from DigiLens, outlines the ARGO's key features. I won't review all the features, but I want to discuss some of their design choices. I can't comment on the quality of the various features (SLAM, WiFi, GPS, etc.) as (a) I haven't tried them extensively, and (b) I don't have the equipment or expertise. But at least on the surface, the Argo's feature set compares favorably to the Hololens 1 and 2, albeit with a smaller FOV than the Hololens 2 but much better image quality.

Audio Input for True Hands-Free Operation

As stated above, DigiLens' management team includes experience from RealWear. RealWear acquired a lot of technology from Kopin's Golden-i. Like ARGO, Golden-i was a system-product outgrowth from a display-component maker (Kopin), with a legacy going back before 2011, when I first saw it. Even though Kopin was a display device company, Golden-i emphasized voice recognition with high accuracy, even in noisy environments. Note the inclusion of five microphones on the ARGO.

Most realistic enterprise-use models for AR headsets include significant, if not exclusive, hands-free operation. The basic idea of mounting a display on the user's head is so they can keep their hands free. You can't be working with your hands while holding a controller.

While hand-tracking cameras remove the need for a physical controller, they do not free up the hands, as the hands are busy making gestures rather than performing the task. In the implementations I have tried thus far, gestures are even worse than physical controllers in terms of distraction, as they force the user to focus on the gestures to make them (sometimes barely) work. One of the most awful experiences I have had in AR was trying to type a long WiFi password (hidden by asterisks as I typed) using gestures on a Hololens 1 (my hands hurt just thinking about it; it was a beyond-terrible user experience).

Similarly, as I discussed with SadlyItsBradley about Meta’s BCI wristband, using nerve and/or muscle-detecting wristbands still does not free up the hands. The user still has their hands and mental focus slaved to making the wristband work.

Voice control seems to have big advantages for hands-free operation if it can work accurately in a noisy environment. There is a delicate balance between not recognizing words and phrases, false recognition or activation, and becoming too burdensome with the need for verification.
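That balance amounts to a thresholding policy on recognizer confidence; below is a minimal sketch of the kind of logic involved (the thresholds and function names are entirely hypothetical):

```python
# Hypothetical sketch: trading off missed commands, false activations,
# and confirmation burden in a hands-free voice UI.
CONFIRM_THRESHOLD = 0.60   # below this, ignore (avoid false activation)
EXECUTE_THRESHOLD = 0.90   # above this, act without confirmation

def handle_utterance(command: str, confidence: float, destructive: bool) -> str:
    if confidence < CONFIRM_THRESHOLD:
        return "ignored"                          # likely noise
    if confidence >= EXECUTE_THRESHOLD and not destructive:
        return f"executing: {command}"            # fast path, no burden
    # Middle ground, or destructive actions: verify, at a cost to the user.
    return f"confirm: did you say '{command}'?"

print(handle_utterance("next step", 0.95, destructive=False))
print(handle_utterance("delete record", 0.95, destructive=True))
print(handle_utterance("next step", 0.70, destructive=False))
```

Raising EXECUTE_THRESHOLD cuts false activations but pushes more commands into the burdensome confirmation path; lowering CONFIRM_THRESHOLD catches more quiet speech but risks acting on noise.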

Skull-Gripping “Glasses” vs. Headband or Open Helmet

In what I see as a futile attempt to sort of look like glasses (big ugly ones at that), many companies have resorted to skull-gripping features. Looking at the skull profile (right), there really isn’t much that will stop the forward rotation of front-heavy AR glasses unless they wrap around the lower part of the occipital bone at the back of the head.

Both the ARGO (below left) and Panasonic's (Shiftall division) VR headsets (right two images below) take the concept of skull-grabbing glasses to almost comic proportions. Panasonic includes a loop for a headband, and some models also include a forehead pad. The Panasonic Shiftall uses pads pressed against the front of the head to support the front, while the ARGO uses an oversized nose bridge, as found on many other AR "glasses."

ARGO supports a headband option, but it requires removing the skull-grabbing ends of the temples and replacing them with a headband.

As anyone familiar with the human factors of glasses knows, the ears and nose cannot support much weight, and they will get sore if they carry much weight for long.

Large soft nose pads are not an answer. There is still too much weight on the nose, and the variety of nose shapes means they don't work well for everyone. In the case of the Argo, the large nose pads also interfere with wearing glasses; they are located almost precisely where the nose pads for glasses would go.

Bustle/Bun on the Back Weight Distribution – Liberating the Design

As was pointed out by Microsoft with the Hololens 2 (HL2), weight distribution is also very important. I don't know if they were the first with what I call "the bustle on the back" approach, but it was a massive improvement, as I discussed in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! Several others have used a similar approach, most notably the Meta Quest Pro (which has very poor passthrough AR, as I discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough). Another feature of the HL2's ergonomics is that the forehead pad eliminates weight on the nose and frees up that area to support ordinary prescription glasses.

The problem with the sort-of-glasses form factor so common in most AR headsets today is that it locks the design into other poor decisions, not the least of which is putting too much weight too far forward. Once it is realized that these are not really glasses, it frees up other design features for improvement. Weight can be taken out of the front and moved to the back for better weight distribution.
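A toy moment balance shows why moving mass rearward liberates the design (all masses, distances, and the ear-pivot model are my illustrative assumptions):

```python
# Toy static model: pivot at the ear; positive distances (cm) are in front
# of the ear. The nose (assumed ~7 cm in front) must carry the net forward
# moment; a rear "bustle" cancels it.
NOSE_ARM_CM = 7.0

def nose_load_g(masses):                 # masses: list of (grams, cm)
    net_moment = sum(g * d for g, d in masses)      # gram-cm about the ear
    return max(net_moment / NOSE_ARM_CM, 0.0)

front_heavy = [(60, 8), (20, 0)]             # optics/display up front
with_bustle = front_heavy + [(60, -8)]       # battery moved behind the ear
print(f"front-heavy: ~{nose_load_g(front_heavy):.0f} g on the nose")
print(f"with bustle: ~{nose_load_g(with_bustle):.0f} g on the nose")
```

With these made-up numbers, a 60-gram battery moved behind the ear takes the nose load from roughly 69 grams to zero, which is the essence of the bustle approach.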

ARGO’s Eye-Relief Missed Opportunity for Supporting Normal Glasses

Perhaps the best ergonomic/user feature of the Hololens 1 and 2 over most other AR headsets is that they have enough eye relief (distance from the waveguide to the eye) and space to support most normal eyeglasses. The ARGO's waveguide and optical design have enough eye relief to clear most normal glasses, yet the ARGO still requires specialized prescription inserts.

You might notice some “eye glow” in the CNET picture (above right). I think this is not from the waveguide itself but is a reflection off of the prescription inserts (likely, they don’t have good anti-reflective coatings).

A big part of the problem with supporting eyeglasses goes back to trying to maintain the fiction of a "glasses form factor." The nose bridge support will get in the way of eyeglasses, yet it is required to hold up the headset. Additionally, hardware in the "brow" over the eyes, which may interfere with eyeglasses, could have been moved elsewhere.

Another technical issue is the location and shape of the optical engine. As discussed earlier, the DigiLens engine's shape juts into the front of the glasses, resulting in a large brow over the eyes. This brow, in turn, may interfere with various eyeglasses.

It looks like Argo started with the premise of looking like glasses, putting form ahead of function. As it turns out, they have what is, for me, an unhappy compromise that neither looks like glasses nor has the Hololens 2's advantage of working with most normal glasses. Starting with comfort and functionality as primary goals would also have led to a different form factor for the optical engine.

Conclusions

While MicroLEDs may hold many long-term advantages, they are not ready to go head-to-head with LCOS engines regarding image quality and color. Multiple companies are showing LCOS engines that are more than competitive in size and shape with the small MicroLED engines, while also supporting much higher resolutions and larger FOVs.

Lumus, with their Z-Lens 2-D reflective waveguides, seems to have a big advantage in image quality and efficiency over the many diffractive waveguides. Allowing the Z-Lens to be encased without an air gap adds another significant advantage.

Yet today, most waveguide-based AR glasses use diffractive waveguides. The reasons include that there are many sources of diffractive waveguides and that companies can make their own custom designs, whereas Lumus controls its reflective waveguide I.P. Additionally, Lumus has only recently developed 2-D reflective waveguides, which dramatically reduce the size of the projection engine driving their waveguides. But the biggest reason for using diffractive waveguides is that Lumus waveguides are thought to cost more; Lumus and their new manufacturing partner, Schott Glass, claim that they will be able to make waveguides at competitive or better costs.

A combination of cost, color, and image-quality issues will likely limit MicroLEDs to ultra-small, light glasses with low amounts of visual content, known as "data snacking" (think arrows and simple text, not web browsing and movies). This market could be attractive in enterprise applications. I'm doubtful that consumers will accept monochrome displays. I'm reminded of a quote from an IBM executive in the 1980s who, when asked whether resolution or color was more important, said: "Color is the least necessary and most desired feature in a display."

Not to pick on Argo, but it demonstrates many of the issues with making a full-featured device in a glasses form factor: as SLAM (with multiple spatially separated cameras), processing, communication, batteries, etc., are added, the overall design strays away from looking like glasses. As I wrote in my 2019 article, Starts with Ray-Ban®, Ends Up Like Hololens.

The post DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8) first appeared on KGOnTech.
