
ICRA@40 Conference Celebrates 40 Years of IEEE Robotics



Four decades after the first IEEE International Conference on Robotics and Automation (ICRA) in Atlanta, robotics is bigger than ever. Next week in Rotterdam is the IEEE ICRA@40 conference, “a celebration of 40 years of pioneering research and technological advancements in robotics and automation.” There’s an ICRA every year, of course. Arguably the largest robotics research conference in the world, the 2024 edition was held in Yokohama, Japan back in May.

ICRA@40 is not just a second ICRA conference in 2024. Next week’s conference is a single track that promises “a journey through the evolution of robotics and automation,” through four days of short keynotes from prominent roboticists from across the entire field. You can see for yourself: the speaker list is nuts. There are also debates and panels tackling big ideas, like: “What progress has been made in different areas of robotics and automation over the past decades, and what key challenges remain?” Personally, I’d say “lots” and “most of them,” but that’s probably why I’m not going to be up on stage.

There will also be interactive research presentations, live demos, an expo, and more—the conference schedule is online now, and the abstracts are online as well. I’ll be there to cover it all, but if you can make it in person, it’ll be worth it.


Forty years ago is a long time, but it’s not that long, so just for fun I had a look at the proceedings of ICRA 1984, which are available on IEEE Xplore if you’re curious. Here’s an excerpt of the foreword from the organizers, which included folks from International Business Machines and Bell Labs:

The proceedings of the first IEEE Computer Society International Conference on Robotics contains papers covering practically all aspects of robotics. The response to our call for papers has been overwhelming, and the number of papers submitted by authors outside the United States indicates the strong international interest in robotics.
The Conference program includes papers on: computer vision; touch and other local sensing; manipulator kinematics, dynamics, control and simulation; robot programming languages, operating systems, representation, planning, man-machine interfaces; multiple and mobile robot systems.
The technical level of the Conference is high with papers being presented by leading researchers in robotics. We believe that this conference, the first of a series to be sponsored by the IEEE, will provide a forum for the dissemination of fundamental research results in this fast developing field.

Technically, this was “ICR,” not “ICRA,” and it was put on by the IEEE Computer Society’s Technical Committee on Robotics, since there was no IEEE Robotics and Automation Society at that time; RAS didn’t get off the ground until 1987.

The 1984 ICR(A) had two tracks and featured about 75 papers presented over three days. Looking through the proceedings, you’ll find lots of familiar names: Harry Asada, Ruzena Bajcsy, Ken Salisbury, Paolo Dario, Matt Mason, Toshio Fukuda, Ron Fearing, and Marc Raibert. Many of these folks will be at ICRA@40, so if you see them, make sure to thank them for helping to start it all, because 40 years of robotics is definitely something to celebrate.

AWE 2024 VR – Hypervision, Sony XR, Big Screen, Apple, Meta, & LightPolymers

Introduction

Based on information gathered at SID Display Week and AWE, I have many articles to write from the thousands of pictures I took and the things I learned. I have been organizing and editing the pictures.

As its name implies, Display Week is primarily about display devices. My major takeaway from that conference is that many companies are working on full-color MicroLEDs with different approaches, including quantum-dot color conversion, stacked layers, and single emitters with color shifting based on current or voltage.

AWE moved venues from the Santa Clara Convention Center in Silicon Valley to the larger Long Beach Convention Center south of LA. More than just a venue shift, I sensed a shift in direction. Historically, at AWE, I have seen many optical see-through AR/MR headsets, but there seemed to be fewer optical headsets this year. Instead, I saw many companies with software running on VR/Passthrough AR headsets, primarily on the Meta Quest 3 (MQ3) and Apple Vision Pro (AVP).

This article was partly inspired by Hypervision’s white paper discussing whether micro-OLEDs or small LCDs were the best path to 60 pixels per degree (PPD) with a wide FOV combined with the pictures I captured through Hypervision’s HO140 (140° diagonal FOV per eye) optics at AWE 2024. I have taken thousands of pictures through various headsets, and the Hypervision picture stood out in terms of FOV and sharpness. I have followed Hypervision since 2021 (see Appendix: More on Hypervision).

I took my first pictures at AWE through the Sony XR (SXR) headset optics. At least subjectively, in a short demo, the SXR’s image quality (sharpness and contrast) seemed higher than that of the AVP, but the FOV was smaller. I had on hand thousands of pictures I had taken through the Big Screen Beyond (BSB), AVP, Meta Quest Pro (MQP), and Meta Quest 3 (MQ3) optics with the same camera and lens, plus a few of the Hypervision HO140 prototype. So, I decided to make some comparisons between the various headsets.

I also want to mention LightPolymers’ new Quarter Waveplate (QWP) and polarization technologies, which I first learned about from a poster in the Hypervision AWE booth. In April 2024, the two companies announced a joint development grant. They offer an alternative to the plastic-film QWPs and polarizers, a market 3M dominates today.

Hypervision’s HO140 Display

Based on my history of seeing Hypervision’s 240° prototypes over the last three years, I had, until AWE 2024, largely overlooked their single-display 140° models. I had my Canon R5 (45MP, with a 405MP “3×3 sensor pixel shift” mode) and tripod with me at AWE this year, so I took a few high-resolution pictures through the optics of the HO140. Below are pictures of the 240° (left) and 140° (right) prototypes in the Hypervision booth. Hypervision is an optics company, not a headset maker, and the demos are meant to show off their optics.

When I got home and looked at the pictures through the HO140, I was impressed by its overall image quality, having taken thousands of pictures through the Apple Vision Pro (with micro-OLED displays), Meta’s Quest Pro and Quest 3 (both with mini-LCD displays), and the Big Screen Beyond. It usually takes me considerable time and effort, as well as multiple reshoots, to find the “sweet spot” for the other devices, but I got good pictures through the HO140 with minimal effort and only a few shots, which suggests a very large sweet spot in Hypervision’s optical design. The HO140 is a prototype of unknown cost that I am comparing to production products, and I only have this one image to go by, not a test pattern.

The picture below is from my Canon R5 with a 16mm lens, netting a FOV of 97.6° horizontal by 73.7° vertical. It was shot at 405MP and then reduced to 45MP to avoid moiré effects due to the “beat frequencies” between the camera sensor and the display devices with their color subpixels. All VR optics have pincushion distortion, which causes the pixel sizes to vary across the display and increases the chance of getting moiré in some regions.
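For reference, the quoted camera FOV follows from simple rectilinear-lens geometry. A minimal sketch, assuming an idealized full-frame 36×24 mm sensor (the R5’s exact sensor dimensions and lens distortion account for the small difference from the quoted 97.6° horizontal figure):

```python
import math

def lens_fov_deg(focal_mm: float, sensor_dim_mm: float) -> float:
    """Angular field of view of a rectilinear lens along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# A 16mm lens on a nominal 36x24 mm full-frame sensor:
print(round(lens_fov_deg(16, 36), 1))  # horizontal, roughly 97 degrees
print(round(lens_fov_deg(16, 24), 1))  # vertical, roughly 74 degrees
```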

The level of sharpness throughout the HO140’s image relative to other VR headsets suggests that it could support a higher-resolution LCD panel with a smaller pixel size if it existed. Some significant chroma aberrations are visible in the outer parts of the image, but these could be largely corrected in software.

Compared to other VR-type headsets I have photographed, I was impressed by how far out into the periphery of the FOV the image maintains sharpness while supporting a significantly larger FOV than any other device I have photographed. What I can’t tell without being able to run other content, such as test patterns, is the contrast of the display and optics combination.

I suggest also reading Hypervision’s other white papers on their Technology & Research page. And if you want an excellent explanation of pancake optics, I recommend the one-hour-and-25-minute YouTube presentation by Arthur Rabner, Hypervision’s CTO.

Sony XR (SXR)

Mechanical Ergonomics

AWE was my first time trying the new Sony XR (SXR) headset. In my CES 2024 coverage, I wrote about the ergonomic features I liked in Sony XR (and others compared to Apple Vision Pro). In particular, I liked the headband approach with the flip-up display, and my brief try with the Sony headset at AWE seemed to confirm the benefits of this design choice (which is very similar to the Lynx R1 headset), at least from the ergonomics perspective relative to the Apple Vision Pro.

Still, the SXR is pretty big and bulky, much more so than the AVP or Lynx. Having only had a short demo, I can’t say how comfortable it will be in extended use. As was the case for the HO140, I couldn’t control the content.

“Enterprise” Product

Sony has been saying that this headset primarily aims at “enterprise” (= expensive high-end) applications, and they partner with Siemens. It is much more practical than the Apple Vision Pro (AVP). The support on the head is better; it supports users wearing their glasses, and the display/visor flips up so you can see the real world directly. There is air circulation to the face and eyes. The headset also supports adjustment of the distance from the headset to the eyes. The headset allows peripheral vision but does have a light shield for full VR operation. The headset is also supposed to support video passthrough, but that capability was not demonstrated. As noted in my CES article, the SXR headset put the pass-through cameras in a much better position than the AVP.

Display Devices and Image Quality

Both the AVP and SXR use ~4K micro-OLED display devices. While Sony does the OLED assembly (applying the OLED and packaging) for both its own headset’s and the AVP’s display devices, the AVP reportedly uses a custom silicon backplane designed by Apple. The SXR’s display has ~20% smaller pixels (6.3 microns versus the AVP’s 7.5 microns), and the device size is also smaller. These size factors favor higher angular resolution at a smaller FOV, as is seen with the SXR.

The picture below was taken (handheld) with my 45MP Canon R5 and the same 16mm lens used for the HO140, but because I couldn’t use a tripod, I couldn’t get a 405MP picture with the camera’s sensor shifting. I was impressed that I got relatively good images handheld, which suggests the optics have a much larger sweet spot than, for example, the AVP’s. Getting good images with the AVP requires precisely aligning my camera lens into the relatively small sweet spot of the AVP’s optics (using a 6-degree-of-freedom camera rig on a tripod). I believe the AVP’s small sweet spot, and its need for eye-tracking-based lens correction (not just for foveated rendering), are part of why the AVP has to be uncomfortably clamped against the user’s face.

Given that I was hand-holding both the headset and camera, I was rather surprised that the pictures came out so well (click on the image to see it in higher, 45mp resolution).

At least in my brief demo, the SXR’s optics image quality seemed better than the AVP’s. The images seemed sharper, with less chroma (color) aberration. The AVP seems heavily dependent on eye tracking to correct problems with the optics, but it does not always succeed.

Much more eye relief (enabling eyeglasses) but lower FOV

I was surprised by how much eye relief the SXR optics afford compared to the AVP and BSB, which also use micro-OLED microdisplays. Typically, the high magnification required for micro-OLED pixels, compared to LCD pixels, inherently makes eye relief more difficult. The SXR magnifies less, resulting in a smaller FOV, but that also makes it optically easier to support more eye relief. Note, however, that taking advantage of the greater eye relief will further reduce the FOV. The SXR headset has a smaller FOV than any other VR-type headset I have tried recently.

Novel Sony controllers were not a hit

While I will credit Sony for trying something new with the controllers, I can’t say the finger trackpad and ring controllers are great solutions. I talked with several people who tried them, and no one seemed to like either controller. It is hard to judge control devices in a short demo; you must work with them for a while. Still, they didn’t make a good first impression.

VR Headset “Shootout” between AVP, MQP, Big Screen Beyond, Hypervision, and Sony XR

I have been shooting VR headsets with the Canon R5 with a 16mm lens for some time and have built up a large library of pictures. For the AVP, Big Screen Beyond (BSB), and Meta Quest Pro (MQP), I had both the headset and the camera locked down on tripods so I could center the lens in the sweet spot of the optics. For the Hypervision, while the camera and headset were on tripods, my camera was only on a travel tripod without my 6-degree-of-freedom rig and the time to precisely locate the headset’s optical sweet spot. The SXR picture was taken with me holding both the headset and the camera by hand.

Below are through-the-optics pictures of the AVP, BSB, MQP, Hypervision HO140, and SXR headsets, all taken with the same camera and lens combination and scaled identically. This is not a perfect comparison as the camera lens does not work identically to the eye (which also rotates), but it is reasonably close. The physically shorter and simpler 16mm prime (non-zoom) lens lets it get inside the eye box of the various headsets for the FOV it can capture.

FOV Comparison (AVP, SXR, BSB, HO140, MQ3/MQP)

While companies will talk about the number of horizontal and vertical pixels of the display device, the pixels at the periphery of the display are cut off by the optics, which tend to be circular. All the VR headset optics have pincushion distortion, which results in higher resolution in the sweet spot (optical center), which is always toward the nose side and usually above center for VR headsets.

In the figure below, I have overlaid the FOV of the left eye for each headset on top of the HO140 image. I had to extrapolate somewhat on the image circles at the top and bottom, as the headset FOVs exceeded the extent of the camera’s FOV. The HO140 supports up to a 2.9″ diagonal LCD (which does not exist yet), but it currently uses a 2.56″ 2160×2160 octagonal BOE LCD; its FOV extends so far beyond my camera lens’s FOV that I relied on Hypervision’s specifications.

As can be seen, the LCD-based headsets from Hypervision and Meta typically have a larger FOV than the micro-OLED-based headsets from Apple, Big Screen, and Sony. However, as will be discussed, the micro-OLED-based headsets have smaller pixels (both angularly and on the physical display device).

Center Pixels (Angular Size in PPD)

Because I was handholding the SXR, and its pixels are smaller than the AVP’s, I couldn’t get a super-high-resolution (405MP) image from the center of the FOV, and I didn’t have the time to use a longer focal-length lens to show the pixel boundaries. The SXR has roughly the same number of pixels as the AVP but a smaller FOV, so its pixels are angularly smaller than the AVP’s. I would expect the SXR to be near 60 pixels per degree (PPD) in the center of the FOV. The BSB has about the same FOV as the AVP but a ~2.5K micro-OLED compared to the AVP’s ~4K; thus, the BSB’s center pixels are about 1.5x bigger (linearly). The Hypervision’s display has a slightly smaller center pixel pitch than the MQP (and MQ3) but a massively bigger FOV.
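As a rough sanity check on these angular-resolution comparisons, dividing a display’s pixels by the FOV gives an average PPD; the center PPD runs higher because pincushion distortion concentrates pixels there. A back-of-envelope sketch (the pixel counts and FOVs below are illustrative assumptions, not measured values from this article):

```python
def avg_ppd(pixels_across: int, fov_deg: float) -> float:
    """Naive average pixels-per-degree along one axis of the FOV.
    The optical center is higher than this average because pincushion
    distortion packs more pixels per degree near the center."""
    return pixels_across / fov_deg

# Illustrative assumptions: a ~4K-class micro-OLED versus a ~2.5K micro-OLED,
# both over a nominal ~100-degree FOV.
print(avg_ppd(3660, 100))  # AVP-like panel
print(avg_ppd(2560, 100))  # BSB-like panel
# The linear pixel-count ratio, 3660/2560 ~= 1.43x, is in line with the
# article's "about 1.5x bigger" once optical differences are folded in.
```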

The MQP (and the very similar MQ3) rotate the display device. To make it easier to compare pixel pitches, I included a rotated inset of the MQP pixels to match the alignment of the other devices. Note that the pictures below are all “through the optics” and thus include the headset’s optical magnification. I have indicated the angular resolution (in pixels per degree, PPD) for each headset’s center pixels. For the center-pixel pictures below, I used a 28mm lens to get more magnification to see sub-pixel detail for the AVP, BSB, and MQP. I only took 16mm-lens pictures of the HO140 and, therefore, rescaled that image based on the different focal lengths.

The micro-OLED-based headsets require significantly more optical magnification than the LCD models. For example, the AVP has 3.2x (linearly) smaller display-device pixels than the MQP, but after the optics, its pixels appear only ~1.82x smaller; in other words, the AVP magnifies the display by ~1.76x more than the MQP.
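The implied relative magnification is just the ratio of those two ratios; a tiny check using the numbers above:

```python
# Relative optical magnification implied by the numbers above: the AVP's
# display pixels are ~3.2x smaller (linearly) than the MQP's, yet after the
# optics they appear only ~1.82x smaller, so the AVP must magnify more.
physical_ratio = 3.2   # MQP pixel pitch / AVP pixel pitch (linear)
angular_ratio = 1.82   # MQP apparent pixel size / AVP apparent pixel size

relative_magnification = physical_ratio / angular_ratio
print(round(relative_magnification, 2))  # ~1.76
```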

Outer Pixels

I captured pixels from a (very approximately) similar distance from the optical center of the lens. The AVP’s “foveated rendering” makes it look worse than it is, but you can still see the pixel grid with the others. Of the micro-OLED headsets, the BSB and SXR seem to do the best regarding sharpness in the periphery. The Hypervision HO140’s pixels seem much less distorted and blurry than those of any of the other headsets, including the MQP and MQ3, which have much smaller FOVs.

Micro-OLED vs. Mini-LCD Challenges

Micro-OLEDs are made by applying OLEDs on top of a CMOS substrate. CMOS transistors provide a high current per unit area, and all the transistors and circuitry sit underneath the OLED pixels, so they don’t block light. These factors enable relatively small pixels of 6.3 to 10 microns. However, CMOS substrates are much more expensive per unit area, and modern semiconductor fabs limit CMOS devices to about a 1.4-inch diagonal (ignoring expensive and low-yielding “reticle-stitched” devices).

A basic issue with OLEDs is that the display device must provide the power/current to drive each OLED. In the case of LCDs, only a small amount of capacitance has to be driven to change a pixel, after which there is virtually no current. The table on the right (which I discussed in 2017) shows the transistor mobility and the process requirements for the transistors in various display backplanes. The current needed for an emissive display device like OLEDs or LEDs requires crystalline silicon (e.g., CMOS) or much larger thin-film transistors on glass. There are also issues with the size and resistivity of the wires that provide the current, as well as heat.

The OLED’s requirement for significant current/power limits how small the pixels can get on a given substrate/technology. Thin-film transistors have to be physically big to supply the current. For example, the Apple Watch Ultra Thin Film transistor OLED display has 326 PPI (~78 microns), which is more than 10x larger linearly (100x the area) than the Apple Vision Pro’s pixel, even though both are “OLEDs.”

Another issue caused by trying to support large FOVs with small devices is that the higher magnification reduces eye relief. Most of the “magnification” comes from moving the device closer to the eye. Thus, LCD headsets tend to have more eye relief. Sony’s XR headset is an exception because it has enough eye relief for glasses but does so with a smaller FOV than the other headsets.

Small LCDs used in VR displays have different challenges. They are made on glass substrates, and the transistors and circuitry must be larger. Because they are transmissive, this circuitry in the periphery of each pixel blocks light and causes more of a screen door effect. The cost per unit area is much lower than that of CMOS, and LCD devices can be much larger. Thus, less aggressive optical magnification is required for the same FOV with LCDs.

LCDs face a major challenge in making the pixels smaller to support higher resolution. As the pixels get smaller, the circuitry becomes bigger relative to the pixel, blocking more light and causing a worse screen-door effect. To make the pixels smaller, manufacturers must develop higher-performance thin-film transistors and lower-resistance interconnects to keep from blocking too much light. This subject is discussed in an Innolux research paper published by SPIE in October 2023 (free to download). Innolux discusses how to go from today’s typical “small” LCD pixel of 1200 PPI (~21 microns) to their research device with 2117 PPI (~12 microns) to achieve a 3840×3840 (4K by 4K) display in a 2.56″ diagonal device. Hypervision’s HO140 white paper discusses Innolux’s 2022 research prototype, which has the same pixel size but 3240×3240 pixels on a 2.27-inch panel, as well as the current prototype. The current HO140 uses a BOE 2.56″ 2160×2160 panel with 21-micron pixels, as the Innolux panel is not commercially available.

Some micro-OLED and small LCD displays for VR

YouTuber Brad Lynch of SadlyItsBradley, in an X post, listed the PPI of some common VR headset display devices. I have added more entries and the pixel pitch in microns. Many VR panels are not rectangular and may have cut corners on the bottom (and top); the panel sizes given in inches are for the longest diagonal. As you can see, Innolux’s prototypes have significantly smaller pixels, by almost 2x linearly, than the VR LCDs in volume production today:

  • Vive: 3.6″, 1080p, ~360 PPI (70 microns)
  • Rift S*: 5.5″, 1280P, ~530 PPI (48 microns)
  • Valve Index: 3.5″, 1440p, ~600 PPI (42 microns)
  • Quest 2*: 5.5″, 1900p, ~750 PPI (34 microns)
  • Quest 3: ~2.55″ 2064 × 2208, 1050 PPI (24 microns) – Pancake Optics
  • Quest Pro: 2.5″, 1832×1920, ~1050 PPI (24 microns) – Might be BOE 2.48″ miniLED LCD
  • Varjo Aero: 3.2″, 2880p, ~1200 PPI (21 microns)
  • Pico 4: 2.5″, 2160p, 1192 PPI (21 microns)
  • BOE 2.56″ LCD, 2160×2160, 1192 PPI (21 microns) – Used in Hypervision HO140 at AWE 2024
  • Innolux 2023 Prototype 2.56″, 3840×3840, 2117 ppi (12 microns) -Research prototype
  • Apple Vision Pro 1.4″ Micro-OLED, 3,660×3,200, 3386 PPI (7.5 microns)
  • SeeYa 1.03″ Micro-OLED, 2560×2560, 3528 PPI (7.2 microns) – Used in Big Screen Beyond
  • Sony ~1.3″ Micro-OLED, 3552 x 3840, 4032 PPI (6.3 microns) – Sony XR
  • BOE 1.35″ Micro-OLED 3552×3840, 4032 PPI (6.3 microns) – Demoed at Display Week 2024
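The PPI and micron figures in this list are tied together by the 25,400 microns in an inch; a quick sketch to cross-check a few entries:

```python
def ppi_to_pitch_um(ppi: float) -> float:
    """Pixel pitch in microns from pixels per inch (25,400 microns per inch)."""
    return 25400 / ppi

# Cross-checking a few entries from the list above:
for name, ppi in [("Quest 3", 1050), ("Innolux 2023 prototype", 2117),
                  ("Apple Vision Pro", 3386), ("Sony XR", 4032)]:
    print(f"{name}: {ppi_to_pitch_um(ppi):.1f} microns")
```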

In 2017, I wrote Near Eye Displays (NEDs): Gaps In Pixel Sizes (table from that article on the right), which talks about what I call the pixel-size gap between microdisplays (on silicon) and small LCDs (on glass). While the pixel sizes have gotten smaller for both micro-OLEDs and LCDs for VR in the last ~7 years, there remains a sizable gap.

Contrast – Factoring the Display and Pancake Optics

Micro-OLEDs at the display level certainly have a better inherent black level and can turn pixels completely off. LCDs work by blocking light using cross-polarization, which results in imperfect blacks. Thus, with micro-OLEDs, a large area of black will look black, whereas with LCDs, it will be dark gray.

However, we are not looking at the displays directly but through optics, specifically pancake optics, which dominate new VR designs today. Pancake optics, which use polarized light and QWP to recirculate the image twice through parts of the optics, are prone to internal reflections that cause “ghosts” (somewhat out-of-focus reflections) and contrast loss.

Using smaller micro-OLEDs requires more “aggressive” optical designs with higher magnification to support a wide FOV. These more aggressive designs tend to be more expensive, less sharp, and more prone to loss of polarization. Any loss of polarization in pancake optics causes a loss of contrast and ghosting. There seems to be a tendency with pancake optics for stray light to bounce around and end up in the periphery of the image, causing a glow when the periphery is supposed to be black.

For example, the AVP is known to have an outer “glow” when watching movie content on a black background. Most VR headsets default to a “movie or home theater” environment rather than a black background. While that may be for aesthetics, the engineer in me thinks it might also help hide the glow. People online suggest turning on some background with the AVP for people bothered by the glow on a black background.

Complaints of outer glow when watching movies seem more prevalent with micro-OLED headsets, but this is hardly scientific. It could be just that the micro-OLEDs have a better black level, making the glow more noticeable, but it might also be caused by their more aggressive optical magnification (something that may have been, or could be, studied). My key point is that it is not as simple as considering the display’s inherent contrast; you have to consider the whole optical system.

LightPolymers’ Alternative to Plastic Films for QWP & Polarizers

LightPolymers has a lyotropic (water-based) liquid crystal (LC) material that can form optical layers such as QWPs and polarizers. ImagineOptix, whose acquisition by Meta this blog broke in December 2021 (Exclusive: Imagine Optix Bought By Meta), was also developing LC-based polarized-light control films.

Like ImagineOptix, LightPolymers has been coating plastic films with LCs, but LightPolymers is also developing the ability to apply its coatings directly to flat and curved lenses, which is a potential game changer. In April 2024, LightPolymers and Hypervision announced the joint development of this lens-coating technology and had a poster in Hypervision’s booth showing it (right).

3M Dominates Polarized Light Plastic Films for Pancake Optics

3M is today the dominant player in polarized light-control plastic films and is even more dominant in these films for pancake optics. At 3M’s SID Display Week booth in June 2024, they showed the ByteDance PICO4, MQP, and MQ3 pancake optics using 3M polarization films. Their films are also used in the Fresnel lens-based Quest 2. It is an open secret (but 3M would not confirm or deny) that the Apple Vision Pro also uses 3M polarization films.

According to 3M:

3M did not invent the optical architecture of pancake lenses. However, 3M was the first company to successfully demonstrate the viability of pancake lenses in VR headsets by combining it with its patented reflective polarizer technology.

That same article supports the claim by Kopin (whose display business has now been spun out as Lightning Silicon) to have been the first to develop pancake optics. Kopin has been demonstrating pancake optics combined with its micro-OLEDs for years, and they are used in Panasonic-ShiftAll headsets.

3M’s 2017 SPIE paper Folded Optics with Birefringent Reflective Polarizers discusses the use of their films (and also mentions Kopin’s developments) in cemented (e.g., AVP) and air-gap (e.g., MQP and MQ3) pancake optics. The paper also discusses how their polarization films can be made (with heat softening) to conform to curved optics such as the AVP’s.

LightPolymers’ Potential Advantage over Plastic Films

The most obvious drawbacks of plastic films are that they are relatively thick (on the order of 70+ microns per film, and there are typically multiple films per lens) and are usually attached using adhesive coatings. The thickness, particularly when trying to conform to a curved surface, can cause issues with polarized light. The adhesives introduce some scatter, resulting in some loss of polarization.

By applying their LCs directly to the lens, LightPolymers claims they could reduce the thickness of the polarization-control layers (QWPs and polarizers) by as much as 10x and eliminate the use of adhesives.

In the photos below (taken with a 5x macro lens), I used a knife to slightly separate the edges of the films from the Meta Quest 3’s eye-side and display-side lenses to show them. On the eye-side lens, there are three films, which are thought to be a QWP, absorptive polarizer, and reflective polarizer. On the display-side lens, there are two films, one of which is a QWP, and the other may be just a protective film. In the eye-side lens photo, you can see where the adhesive has bubbled up after separation. The diagram on the right shows the films and paths for light with the MQ3/MQP pancake optics.

Because LightPolymers’ LC coating is applied to each lens, it could also be applied or patterned to improve or compensate for other issues in the optics.

Current State of LightPolymer’s Technology

LightPolymers is already applying its LC to plastic films and flat glass. Their joint agreement with Hypervision involves developing manufacturable methods for directly applying the LC coatings to curved lens surfaces, a technology that will take time to develop. LightPolymers’ business is making the LC materials; it then works with partners such as Hypervision, who apply the LC to their lenses. They say the equipment necessary to apply the LCs is readily available and low-cost (for manufacturing equipment).

Conclusion

Hypervision has demonstrated the ability to design very wide FOV pancake optics with a large optical sweet spot that maintain a larger area of sharpness than any other design I have seen.

Based on my experience in both semiconductors and optics, I think Hypervision makes a good case in their white paper, 60PPD: by fast LCD but not by micro OLED, that getting to a wide FOV while approaching “retinal” 60 PPD is more likely to happen using LCD technology than micro-OLEDs.

Fundamentally, micro-OLEDs are unlikely to get much bigger than 1.4″ diagonally, at least commercially, for many years, if not more than a decade. And while the pixels could be made smaller, today’s pancake optics struggle to resolve ~7.5-micron pixels, much less smaller ones.

On the other hand, several companies, including Innolux and BOE, have shown research prototypes of 12-micron LCD pixels, half the (linear) size of the pixels in today’s high-volume VR LCDs. If BOE or Innolux went into production with these displays, it would enable Hypervision’s HO140 to reach about 48 PPD in the center with a roughly 140-degree FOV, and only small incremental changes would get them to 60 PPD with the same FOV.

Appendix: More on Hypervision

I first encountered Hypervision at AWE 2021 with their blended-Fresnel-lens 240-degree design, but as this blog primarily covered optical AR at the time, it slipped under my radar. Since then, I have been covering both optical and pass-through mixed reality, particularly pass-through MR using pancake optics. By AR/VR/MR 2023, Hypervision was demonstrating a single-lens (per eye) 140-degree design and a blended dual-lens-and-display 240-degree FOV (diagonal) pancake optics design.

These were vastly better than their older Fresnel designs and demonstrated Hypervision’s optical design capability. In May 2023, passthrough MR startup Lynx and Hypervision announced they were collaborating. For some more background on my encounters with Hypervision, see Hypervision Background.

Hypervision has been using its knowledge of pancake optics to analyze the Apple Vision Pro’s optical design, which I have reported on in Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry,” Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall, Apple Vision Pro – Influencing the Influencers & “Information Density,” and Apple Vision Pro (Part 4)—Hypervision Pancake Optics Analysis.

AWE 2024 Panel: The Current State and Future Direction of AR Glasses

Introduction

At AWE 2024, I was on a panel discussion titled “The Current State and Future Direction of AR Glasses.” Jeri Ellsworth, CEO of Tilt Five, Ed Tang, CEO of Avegant, Adi Robertson, Senior Reporter at The Verge, and I were on the panel, with Jason McDowall of The AR Show moderating. Jason did an excellent job of moderating and keeping the discussion moving. Still, with only 55 minutes, including questions from the audience, we could only cover a fraction of the topics we had considered discussing. I hope to reconvene this panel sometime. I also want to thank Dean Johnson, Associate Professor at Western Michigan University, who originated the idea and helped me organize this panel. AWE’s video of our panel is available on YouTube.

First, I will outline what was discussed in the panel. Then, I want to follow up on small FOV optical AR glasses and some back-and-forth discussions with AWE Legend Thad Starner.

Outline of the Panel Discussion

The panel covered many topics, and below, I have provided a link to each part of our discussion and added additional information and details for some of the topics.

  • 0:00 Introductions
  • 2:19 Apple Vision Pro (AVP) and why it has stalled. It has been widely reported that AVP sales have stalled. Just before the conference, The Information reported that Apple had suspended Vision Pro 2 development and is now focused on a lower-cost version. I want to point out that a 1984 128K Mac, adjusted for inflation, would cost over $7,000 today, and the original 1977 Apple II with 4K of RAM (without a monitor or floppy drive) would cost about $6,700 in today’s dollars. I contend that utility, not price, is the key problem with AVP sales volume and that Apple is thus drawing the wrong conclusion.
  • 7:20 Optical versus Passthrough AR. The panel discusses why their requirements are so different.
  • 11:30 Mentioned Thad Starner and the desire for smaller-FOV optical AR headsets. It turns out that Thad Starner attended our panel, but as I later found out, he arrived late and missed my mentioning him. Thad later questioned the panel. In 2019, I wrote the article FOV Obsession, which discussed Thad’s SPIE AR/VR/MR presentation about smaller FOVs. Thad is a Georgia Institute of Technology professor and a part-time Staff Researcher at Google (including on Google Glass). He has continuously worn AR devices since his research work at MIT’s Media Lab in the 1990s.
  • 13:50 Does “tethering make sense” with cables or wirelessly?
  • 20:40 Does an AR device have to work outside (in daylight)?
  • 26:49 The need to add displays to today’s Audio-AI glasses (ex. Meta Ray-Ban Wayfarer).
  • 31:45 Making AR glasses less creepy?
  • 35:10 Does it have to be a glasses form factor?
  • 35:55 Monocular versus Biocular
  • 37:25 What did Apple Vision Pro get right (and wrong) regarding user interaction?
  • 40:00 I make the point that eye tracking and gesture recognition on the “Apple Vision Pro is magical until it is not,” paraphrasing Adi Robertson, and I then added, “and then it is damn frustrating.” I also discuss that “it’s not truly hands-free if you have to make gestures with your hands.”
  • 41:48 Waiting for the Superman [savior] company. And do big companies help or crush innovation?
  • 44:20 Vertical integration (Apple’s big advantage)
  • 46:13 Audience Question: When will AR glasses replace a smartphone (enterprise and consumer)
  • 49:05 What is the first use case to break 1 million users in Consumer AR?
  • 49:45 Thad Starner – “Bold Prediction” that the first large application will be with small FOV (~20 degrees), monocular, and not centered in the user’s vision (off to the ear side by ~8 to 20 degrees), and monochrome would be OK. A smartphone is only about 9 by 15 degrees FOV [or ~20 degrees diagonally when a phone is held at a typical distance].
  • 52:10 Audience Question: Why aren’t more companies going after OSHA (safety) certification?
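Thad’s smartphone comparison is easy to sanity-check with a bit of trigonometry. The screen dimensions and viewing distance below are my own assumptions for illustration, not figures from the panel; the resulting degrees shift noticeably with how far the phone is held from the eye.

```python
import math

def angular_fov_deg(size_mm: float, distance_mm: float) -> float:
    """Angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan((size_mm / 2) / distance_mm))

# Assumed values: a ~65 x 140 mm phone screen held ~400 mm from the eye.
width_deg = angular_fov_deg(65, 400)       # ~9.3 degrees wide
diag_deg = angular_fov_deg(math.hypot(65, 140), 400)  # ~21.8 degrees diagonal
print(f"width {width_deg:.1f} deg, diagonal {diag_deg:.1f} deg")
```

Holding the phone closer or farther changes the answer by several degrees, which is why quoted smartphone FOV figures vary from source to source.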

Small FOV Optical AR Discussion with Thad Starner

As stated in the outline above, Thad Starner arrived late and missed my discussion of smaller FOVs that mentioned Thad, as I learned after the panel. Thad, who has been continuously wearing AR glasses and researching them since the mid-1990s, brings an interesting perspective. Since I first saw and met him in 2019, he has strongly advocated for AR headsets having a smaller FOV.

Thad also states that the AR headset should have a monocular (single-eye) display positioned 8 to 20 degrees to the ear side of the user’s straight-ahead vision. He also suggests that monochrome is fine for most purposes. Thad stated that his team will soon publish papers backing up these contentions.

In the sections below, I started from the YouTube transcript and lightly edited it to make what was said more readable.

My discussion from earlier in the panel:

11:30 Karl Guttag – I think a lot of AR, or optical see-through, gets confabulated with what was going on in VR, because in VR it was cheap and easy to make a wide field of view by sticking a cell phone with some cheap optics in front of your face. You get a wide field of view, and people went crazy about that. I made the point years ago on my blog [2019 article FOV Obsession] that this was the problem. Thad Starner makes this point: he’s one of our Legends at AWE, and I took that to heart many years ago at SPIE AR/VR/MR 2019.

The problem is that as soon as you go beyond about a 30-degree field of view, even projecting forward [with technology advancements], you’re in a helmet, something looking like Magic Leap. And Magic Leap ended up in Nowheresville. [Magic Leap] ended up with 25 to 30% see-through, so it’s not really good see-through, and yet it doesn’t have the image quality you would get from a display shot right into your eyes. You could get a better image on an Xreal or something like that.

People are confabulating too many different specs, so they want a wide field of view. The problem is, as soon as you say 50 degrees, and then you say, yeah, I need spatial recognition, I want to do SLAM, I want to do this, and I want to do that, you’ve now spiraled into the helmet. Meta was saying the other day on one of the other panels that they’re looking at about 50 grams [for the Meta Ray-Bans], and my glasses are 23 grams. You’re out of that as soon as you say a 50-degree field of view; you’re over 100 grams and heading to the Moon as you add more and more cameras and all this other stuff. I think that’s one of our bigger problems with AR, really optical AR.

We’re going to see that experiment played out, because many companies are working on adding displays to so-called audio-AI glasses. We’re going to see if that works, because companies are getting ready to make glasses with a 20- to 30-degree field of view tied into AI and audio features.

Thad Starner’s comments and the follow-up discussion during the Q&A at the end of the panel:

AWE Legend Thad Starner Wearing Vuzix’s Ultralight Glasses – After the Panel

49:46 Hi, my name is Thad Starner. I’m a professor at Georgia Tech. I’m going to make a bold prediction here that the future, at least the first system to sell over a million units, will be a small field of view, monocular, non-line-of-sight display, and monochrome is okay. The reason I say that is, number one, I’ve done different user studies in my lab that we’ll be publishing soon on this subject. The other thing is that our phones, which are the most popular interface out there, are only 9 degrees by 16 degrees field of view. Putting something outside of the line of sight means that it doesn’t interrupt you while you’re crossing the street or driving or flying a plane, right? We know these numbers: between 8 and 20 degrees towards the ear and plus or minus 8 degrees [vertically]. I’m looking at Karl [Guttag] here so he can digest all these things.

Karl – I wrote a whole article about it [FOV Obsession]

Thad – And not having a pixel in line of sight, so now feel free to pick me apart and disagree with me.

Jeri – I want to know a price point.

Thad – I think the first market will be captioning for the hard of hearing, not for the deaf. Also, possibly transcription, not translation. At that price point, you’re talking about making the equivalent of reading glasses for people instead of hearing aids. There’s a lot of pushback against hearing aids, but reading glasses people tend to accept, so I’d say you’re probably in the $200 to $300 range.

Ed – I think your prediction is spot on, minus the color green. That’s the only thing I think is not going to fly.

Thad – I said monochrome is okay.

Ed – I think the monocular display is going to be an entry-level product. I think you will see products that fit that category, with roughly that field of view and roughly that offset angle [not in the center of view], in the beginning. I agree with that, but I think that’s just the first step. I think you will see a lot of products after that that do a lot more than monocular, monochrome, offset displays, going to larger fields of view and binocular. I think that will happen pretty quickly.

Adi – It does feel like somebody tries to do that every 18 months, though. Intel tried to make a pair of glasses that did that, and it’s a little bit what North did. I guess it’s just a matter of throwing the idea at the wall until it takes, because I think it’s a good one.

I was a little taken aback to have Thad call me out as if I had disagreed with him when I had made the point about the advantages of a smaller FOV earlier. Only after the presentation did I find out that he had arrived late. I’m not sure what comment I made that made Thad think I was advocating for a larger FOV in AR glasses.

I want to add that there can be big differences between what consumers and experts will accept in a product. I’m reminded of a story I read in the early 1980s, when there was a big debate between very high-resolution monochrome and lower-resolution color (back then, you could only have one or the other with CRTs). The head of IBM’s monitor division said, “Color is the least necessary and most desired feature in a monitor.” All the research suggested that resolution was more important for the tasks people did on a computer at the time, but people still insisted on color monitors. Another example is the 1985 New Coke fiasco, in which Coke’s taste studies showed that people liked New Coke better, but it still failed as a product.

In my experience, a big factor is whether the person is being trained to use the device for enterprise or military use versus whether the user is buying it for their own enjoyment. The military has used monochrome displays on devices, including night vision and heads-up displays, for decades. I like to point out that the requirements change depending on whether the user “is paid to use versus is paying to use” the device. Enterprises and the military care about whether the product gets the job done, and they pay someone to use the device. The consumer has different criteria. I will also agree that there are cases where the user is motivated to be trained, such as Thad’s hard-of-hearing example.

Conclusion on Small FOV Optical AR

First, I agree with Thad’s comments about the smaller FOV and have stated such before. There are also cases outside of enterprise and industrial use where the user is motivated to be trained, such as Thad’s hard-of-hearing example. But while I can’t disagree with Thad or his studies that show having a monocular monochrome image located outside the line of sight is technically better, I think consumers will have a tougher time accepting a monocular monochrome display. What you can train someone to use differs from what they would buy for themselves.

Thad makes a good point that having a biocular display directly in the line of sight can be problematic and even dangerous. At the same time, untrained people don’t like monocular displays outside the line of sight. It becomes (as Ed Tang said in the panel) a point of high friction to adoption.

Based on the many designs I have seen for AR glasses, we will see this all played out. Multiple companies are developing optical see-through AR glasses with monocular green MicroLEDs, color X-cube-based MicroLEDs, and LCOS-based displays with glass form-factor waveguide optics (both diffractive and reflective).

Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry”

Introduction

The optics R&D company Hypervision provided a detailed analysis of the Apple Vision Pro’s optical design in June 2023 (see Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis). Hypervision has just released an interesting analysis exploring whether the Micro-OLEDs used by the Apple Vision Pro or the LCDs used by Meta and most others can support a high angular resolution of 60 pixels per degree together with a wide FOV. Hypervision’s report is titled 60PPD: by fast LCD but not by micro OLED. I’m going to touch on some highlights from Hypervision’s analysis; please see their report for more details.

I Will Be at AWE Next Week

AWE is next week. I will be on the PANEL: Current State and Future Direction of AR Glasses at AWE on Wednesday, June 19th, from 11:30 AM to 12:25 PM. I still have a few time slots. If you want to meet, please email meet@kgontech.com.

AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, which is available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

The AWE speaker discount code SPKR24D provides a 20% discount. You can register for AWE here.

Apple Vision Pro Sharpness Study at AWE 2024 – Need Help

As Hypervision’s analysis finds, plus reports I have received from users, the Apple Vision Pro’s sharpness varies from unit to unit. AWE 2024 is an opportunity to sample many Apple Vision Pro headsets to see how the focus varies from unit to unit. I will be there with my high-resolution camera.

While not absolutely necessary, it would be helpful if you could download my test pattern, located here, and install it on your Apple Vision Pro. If you want to help, contact me via meet@kgontech.com or flag me down at the show. I will be spending most of my time on the Expo floor. If you participate, you can remain anonymous or receive a mention of you or your company at the end of a related article thanking you for your participation. I can’t promise anything, but I thought it would be worth trying.

AVP Blurry Image Controversy

My article Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 was the first to report that the AVP was a little blurry. I compared high-resolution pictures showing the same FOV with the AVP and the Meta Quest 3 (MQ3) in that article.

This article caused controversy and was discussed in many forums and by influencers, including Linus Tech Tips and Marques Brownlee (see Apple Vision Pro—Influencing the Influencers & “Information Density” and “Controversy” of the AVP Being a Little Blurry Discussed on Marques Brownlee’s Podcast and Hugo Barra’s Blog).

I have recently been taking pictures through Bigscreen Beyond’s (BSB) headset and decided to compare it with the same test (above right). In terms of optical sharpness, it is between the AVP and the MQ3. Interestingly, the BSB headset has a slightly lower angular resolution (~32 pixels per degree) than the AVP (~40 ppd) in the optically best part of the lens where these crops were taken. Yet, the text and line patterns look better on the BSB than AVP.

Hypervision’s Correction – The AVP is Not Out of Focus; the Optics are Blurry

I speculated that the AVP seemed out of focus in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3. Hypervision corrected me: the softness could not be due to being out of focus. Hypervision has found that sharpness varies from one AVP to the next. The AVP’s best focus nominally occurs at an apparent distance of about 1 meter. Hypervision pointed out that if the headset’s focus were slightly off, it would simply shift the apparent focus distance, as the eye/camera would adjust to a small change in focus (unless it was so far off that eye/camera focusing was impossible). Thus, the blur is not a focus problem but rather a resolution problem with the optics.

Hypervision’s Analysis – Tolerances Required Beyond that of Today’s Plastic Optics

The AVP has very aggressive and complex pancake optics for a compact form factor while supporting a wide FOV with a relatively small Micro-OLED. Most other pancake optics have two elements, which mate with a flat surface for the polarizers and quarter waveplates that manipulate the polarized light to cause the light to pass through the optics twice (see Meta example below left). Apple has a more complex three-lens optic with curved polarizers and quarter waveplates (below right).

Based on my studies of how the AVP dynamically corrects optical imperfections such as chromatic aberration based on eye tracking, the AVP’s optics appear to be “unstable” in the sense that, without dynamic correction, the imperfections would appear much worse.

Hypervision RMS Analysis

Hypervision did an RMS analysis comparing a larger LCD panel with a small Micro-OLED. It should probably come as no surprise that requiring about 1.8x (2.56/1.4) greater magnification makes everything more critical. The problem, as Hypervision points out, is that Micro-OLED on silicon can’t get bigger for many years due to semiconductor manufacturing limitations (reticle limit). Thus, the only way for Micro-OLED designs to support higher resolution and wider FOV is to make the pixels smaller and the optics much more difficult.
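The magnification penalty can be sketched with a back-of-envelope pixel-pitch calculation. This is my own first-order sketch (linear degrees-to-pixels mapping, pixels counted along the panel diagonal, distortion ignored), not Hypervision’s actual RMS analysis; the panel sizes are the 1.4″ and 2.56″ diagonals discussed in the text.

```python
MM_PER_INCH = 25.4

def required_pitch_um(diag_inch: float, ppd: float, fov_deg: float) -> float:
    """Pixel pitch (microns) needed to fit ppd * fov_deg pixels on the diagonal."""
    pixels = ppd * fov_deg  # e.g., 60 PPD * 140 deg = 8,400 pixels
    return diag_inch * MM_PER_INCH * 1000 / pixels

oled = required_pitch_um(1.40, 60, 140)  # ~4.2 um on a 1.4" Micro-OLED
lcd = required_pitch_um(2.56, 60, 140)   # ~7.7 um on a 2.56" LCD
print(f"Micro-OLED {oled:.2f} um, LCD {lcd:.2f} um, ratio {lcd / oled:.2f}x")
```

The ratio works out to the ~1.8x (2.56/1.4) greater magnification cited above: for the same angular resolution and FOV, the Micro-OLED’s pixels must be nearly half the size, and the optics must resolve them.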

Hypervision Monte-Carlo Analysis

Hypervision then did a Monte-Carlo analysis factoring in optical tolerances. Remember, we are talking about fairly large plastic-molded lenses that must be reasonably priced, not something you would pay hundreds of dollars for in a large camera or microscope.

Hypervision’s 140 Degree FOV with 60PPD Approach

Hypervision believes that the only practical path to ~60PPD with a ~140-degree FOV is a 2.56″ LCD. LCDs’ natural progression toward smaller pixels should enable higher resolutions while keeping the optical magnification within what the optics can support.

Conclusion

Overall, Hypervision makes a good case that current designs with Micro-OLED with pancake optics are already pushing the limits of reasonably priced optics. Using technology with somewhat bigger pixels makes resolving them easier, and having a bigger display makes supporting a wider FOV less challenging.

It might be that the AVP is slightly blurry because it is already beyond the limits of a manufacturable design. So the natural question is: if the AVP already has problems, how could Apple support higher resolution and a wider FOV?

The size of Micro-OLEDs built on silicon backplanes is limited by the reticle limit on chip size of about 1.4″ diagonally, at least without resorting to multi-reticle “stitching” (which is possible but not practical for a cost-effective device). Thus, for Micro-OLEDs to increase resolution, the pixels must get smaller, requiring even more magnification from the optics. Increasing the FOV then requires even more optical magnification of ever-tinier pixels.

LCDs have issues, particularly with black levels and contrast. Smaller illumination LEDs with local dimming may help, but they have not proven to work as well as micro-OLEDs.

CES (Pt. 3), Xreal, BMW, Ocutrx, Nimo Planet, Sightful, and LetinAR

Update 1/28/2024 – Based on some feedback from Nimo Planet, I have corrected the description of their computer pod.

Introduction

The “theme” for this article is companies I met with at CES with optical see-through Augmented and Mixed Reality using OLED microdisplays.

I’m off to SPIE AR/VR/MR 2024 in San Francisco as I release this article. So, this write-up will be a bit rushed and likely have more than the usual typos. Then, right after I get back from the AR/VR/MR show, I should be picking up my Apple Vision Pro for testing.

Xreal

Xreal (formerly Nreal) says they shipped 350K units in 2023, more than all other AR/MR companies combined. They had a large booth on the CES floor, which was very busy. They had multiple public and private demo stations.

From 2021 KGOnTech Teardown

This blog has followed Xreal/Nreal since its first appearance at CES in 2019. Xreal uses an OLED microdisplay in a “birdbath” optical architecture first made popular by (the now defunct) Osterhout Design Group (ODG) with their R8 and R9, which were shown at CES in 2017. For more on this design, I would suggest reading my 2021 teardown articles on the Nreal first product (Nreal Teardown: Part 1, Clones and Birdbath Basics, Nreal Teardown: Part 2, Detailed Look Inside, and Nreal Teardown: Part 3, Pictures Through the Lens).

Inherent in the birdbath optical architecture Xreal still uses is that it blocks about 70% of the real-world light, acting like moderately dark sunglasses. About 10% of the display’s light makes it to the eye, which is much more efficient than waveguides (which, in turn, are much thinner and more transparent). Xreal claims their newer designs support up to 500 nits to the eye, meaning the Sony Micro-OLEDs must output about 5,000 nits.
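The brightness budget above is simple division, using the figures in the text (~10% of display light reaching the eye, ~70% of real-world light blocked). The function names are mine, purely for illustration:

```python
def required_display_nits(eye_nits: float, efficiency: float = 0.10) -> float:
    """Display brightness needed for a target to-the-eye brightness."""
    return eye_nits / efficiency

def world_transmission(blocked_fraction: float = 0.70) -> float:
    """Fraction of real-world light that gets through the optics."""
    return 1.0 - blocked_fraction

print(required_display_nits(500))  # ~5,000 nits, matching the Micro-OLED figure
print(world_transmission())        # ~0.30 of real-world light reaches the eye
```

The same arithmetic explains why waveguides, often only a few percent efficient, need displays in the tens of thousands of nits to match.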

With investment, volume, and experience, Xreal has improved its optics and image quality. But it can’t do much about the inherent limitations of a birdbath, particularly in terms of transparency. Xreal recently added an LCD dimming shutter to selectively block more, or all, of the real world with their new Xreal Air 2 Pro and their latest Air 2 Ultra, of which I was given a demo at CES.

The earlier Xreal/Nreal headsets were little more than 1920×1080 monitors you wore with a USB-C connection for power and video. Each generation has added more “smarts” to the glasses. The Air 2 Ultra includes dual 3-D IR camera sensors for spatial recognition. Xreal and (to be discussed later) Nimo, among others, have already picked up on Apple’s “Spatial Computing,” referring to their products as affordable ways to get into spatial computing.

Most of the newer headsets are driven either by a cell phone or by Xreal’s “Beam” compute module, which can mirror or cast one or more virtual displays from a computer, cell phone, or tablet. While there may be multiple virtual monitors, they are still rendered on a 1920×1080 display device. I believe (I forgot to ask) that Xreal uses internal sensors to detect head movement to virtualize the monitors as the head moves.

Xreal’s Air 2 Ultra demo showcased the new spatial sensors’ ability to recognize hand and finger gestures. Additionally, the sensors could read “bar-coded” dials and slides made from cardboard.

BMW AR Ride Concept (Using Xreal Glasses)

In addition to seeing Xreal devices on their own, I was invited by BMW to take a ride trying out their Augmented Reality HUD on the streets around the convention center. A video produced by BMW gives a slightly different and abbreviated trip. I should emphasize that this is just an R&D demonstration, not a product that BMW plans to introduce. Also, BMW made clear that they would be working with other makes of headsets but that Xreal was the most readily available.

To augment the view through the Xreal glasses, BMW mounted a head-tracking camera under the rearview mirror. This allows the system to lock the generated image to the physical car. Specifically, it allowed them to (selectively) block/occlude parts of the virtual image hidden behind the car’s front A-pillar. Not shown in the pictures from BMW below (click on a picture to see it bigger) is that an image would start in the front window, be hidden by the A-pillar, and then continue in the side window.

BMW’s R&D is looking at both driver and passenger AR glasses use. They discussed that they would have different content for the driver, which would have to be simplified and more limited than what they could show the passenger. There are many technical and governmental/legal issues (all 50 U.S. states have different laws regarding HUD displays) with supporting headsets on drivers. From a purely technical perspective, a head-worn AR HUD has many advantages and some disadvantages versus a fixed HUD on the windshield or a dash combiner (too much to get into in this quick article).

Ocutrx (for Low-Vision and other applications)

Ocutrx’s OcuLenz also uses “birdbath” optics. The OcuLenz was originally designed to support people with “low vision,” especially people with macular degeneration and other eye problems that block parts of a person’s vision. People with macular degeneration lose the high-resolution, high-contrast, color-sensitive part of their vision. They must rely on other parts of the retina, commonly called peripheral vision (although it may include more than just what is technically considered peripheral vision).

A low-vision headset must have a wide FOV to reach the outer parts of the retina. It must magnify, increase color saturation, and improve contrast beyond what a person with normal vision would want. Note that while these people may be legally blind, they can still see, particularly with their peripheral vision. This is why a headset that still allows them to use their peripheral vision is important.

About 20 million people in the US alone have what is considered “low vision,” and about 1 million more develop low vision each year as the population ages. It is the biggest identifiable market I know of today for augmented reality headsets. But there is a catch that must be fixed for this market to be served. By the very nature of the people involved, having low vision and often being elderly, they need a lot of professional help while often being on a fixed or limited income. Unfortunately, private or government (Medicare/Medicaid) insurance will rarely cover either the headset cost or the professional support required. There have been bills before Congress to change this, but so far, nothing has happened that I am aware of. Without a way to pay for the headsets, the volumes are low, which makes the headsets more expensive than they need to be.

In the past, I have reported on Evergaze’s seeBoost, which exited this market while developing their second-generation product for the economic reasons (lack of insurance coverage) above. I also discussed NuEyes with Bradley Lynch in a video after AWE 2022. The economic realities of the low-vision market cause companies like NuEyes and Ocutrx to look for other business opportunities for their headsets. It is a frustrating situation, knowing that the technology could help so many people. I hope to cover this topic in more detail in the future.

Nimo Planet (Nimo)

Nimo Planet (Nimo) makes a small computer that also acts as a spatial mouse pointer for AR headsets connected via a USB-C port for power and video. It replaces the need for a cell phone and can mirror/cast video from other devices to the headset. Still, the Nimo Core is a fully standalone computer running Nimo OS, which simultaneously supports Android, Web, and Unity apps. No other host computer is needed.

According to Nimo, every other multi-screen solution on the market is built on web platforms or as a Unity app, which limits them to running only Web Views. Nimo OS created a new stereo-rendering and multi-window architecture in AOSP to run multiple Android, Unity, and Web apps simultaneously.

Nimo developed their glasses based on LetinAR optics and also supports other AR glasses. Most notably, they just announced a joint development agreement with Rokid.

I got a brief demonstration of Nimo’s multi-window support on an AR headset. They use the inertial sensors in the headset to detect head movement and move the view of the multiple windows accordingly. It is as if you are looking at multiple monitors through a 1920×1080 window: no matter the size or number of virtual monitors, they are clipped to that 1920×1080 view, and you move your head to select what you see. I discussed some of the issues with simulating virtual monitors with head-mounted displays in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous, and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous.
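The head-tracked “window onto virtual monitors” behavior described above can be sketched as mapping head yaw to a pixel offset into a wide virtual canvas and clipping to the physical 1920×1080 display. All names and the degrees-per-pixel figure below are my own illustrative assumptions, not Nimo’s actual implementation:

```python
def visible_region(head_yaw_deg: float, canvas_width_px: int,
                   display_width_px: int = 1920, px_per_deg: float = 40.0):
    """Return the (left, right) pixel span of the virtual canvas shown on the
    physical display for a given head yaw (0 = looking at the canvas center)."""
    center = canvas_width_px / 2 + head_yaw_deg * px_per_deg
    left = center - display_width_px / 2
    # Clamp so we never sample outside the virtual monitors.
    left = max(0, min(left, canvas_width_px - display_width_px))
    return int(left), int(left) + display_width_px

# Three side-by-side 1920 px virtual monitors -> a 5760 px canvas.
print(visible_region(0, 5760))    # centered: sees the middle monitor
print(visible_region(-48, 5760))  # looking left: sees the left monitor
```

However many virtual monitors exist, only a 1920×1080 slice is ever rendered, which is the limitation noted in the monitor-replacement articles above.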

Sightful

Sightful’s device is similar in some ways to Nimo Planet’s. With Sightful, the computer is built into the keyboard and touchpad, making it a full-fledged computer. Alternatively, Sightful can be viewed as a laptop whose display is a pair of AR glasses rather than a flat panel.

Like Nimo, Xreal’s Beam, and many other new mixed-reality devices, Sightful supports multiple windows. I don’t know if it has sensors for 3-D sensing, so I suspect it uses internal sensors to detect head movement.

Sightful’s basic display specs resemble other birdbath AR glasses designs from companies like Xreal and Rokid. I have not had a chance, however, to compare them seriously.

LetinAR

I have been writing about LetinAR since 2018. LetinAR started with a “Pin Mirror” type of pupil replication. They have now moved on to a series of what I will call “horizontal slat pupil replicators.” These use total internal reflection (TIR) and a curved mirror to move the focus of the image from an OLED microdisplay before it goes to the various pupil-expanding slats.

While LetinAR’s slat design improves image quality over its earlier pin mirrors, it is still imperfect. When looking through the lenses (without a virtual image), the view is a bit “disturbed” and seems to have diffraction-line effects. Similarly, you can perceive gaps or double images depending on your eye location and movement. LetinAR continues to work on improving this technology. While their image quality is not as good as the birdbath designs, they offer much better transparency.

LetinAR seems to be making progress with multiple customers, including JorJin, which was demonstrating in the LetinAR booth; Sharp, which had a big demonstration in its own booth (while they didn’t say whose optics were in the demo, it was obviously LetinAR’s – see pictures below); and the headset discussed above by Nimo.

Conclusions

Sorry, there is no time for major conclusions today. I’m off to the AR/VR/MR Conference and Exhibition.

I will note that regardless of the success of the AVP, Apple has already succeeded in changing the language of Augmented and Mixed reality. In addition to almost everyone in AR and Mixed reality talking “AI,” many companies now use “Spatial Computing” to refer to their products in their marketing.

CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, Xander, EverySight, Mojie, TCL color µLED

Introduction

As I wrote last time, I met with nearly 40 companies at CES, of which 31 I can talk about. This time, I will go into more detail and share some photos. I picked the companies for this article because they seemed to link together. The Sony XR headset and how it fit on the user’s head was similar to the newer DigiLens Argo headband. DigiLens and the other companies had diffractive waveguides and emphasized lightweight and glass-like form factors.

I would like to remind readers of my saying that “all demos at conferences are magic shows,” something I warn about near the beginning of this blog in Cynic’s Guide to CES – Glossary of Terms. I generally no longer try to take “through the optics” pictures at CES. It is difficult to get good, representative photos in the short time available, with all the running around and without all the proper equipment. I made an exception for the TCL color MicroLED glasses, as the pictures readily came out better than expected. But at the same time, I was only using test images provided by TCL and not test patterns I selected. Generally, the toughest test patterns (such as those on my Test Pattern Page) are simple. For example, if you put up a solid white image and see color in the white, you know something is wrong. When you put up colorful pictures with a lot of busy detail (like the colorful parrot in the TCL demo), it is hard to tell what, if anything, is wrong.

The SPIE AR/VR/MR 2024 in San Francisco is fast approaching. If you want to meet, contact me at meet@kgontech.com. I hope to get one or two more articles on CES out before leaving for the AR/VR/MR conference.

Sony XR and DigiLens Headband Mixed Reality (with contrasts to Apple Vision Pro)

Sony XR (and others compared to Apple Vision Pro)

This blog expressed concerns about the Apple Vision Pro’s (AVP) poor mechanical ergonomics, its complete blocking of peripheral vision, and the terrible placement of its passthrough cameras. My first reaction was that the AVP looked like it was designed by a beginner with too much money and an emphasis on style over functionality. What I consider Apple’s obvious mistakes seem to be addressed in the new Sony XR headset (SonyXR).

The SonyXR shows much better weight distribution, with (likely) the battery and processing moved to the back “bustle” of the headset and a rigid frame to transfer the weight for balance. It has been well established with designs such as the Hololens 2 and Meta Quest Pro that this type of design leads to better comfort. This approach can also move a significant amount of power-consuming electronics to the back, giving better heat management thanks to a second surface radiating heat.

The bustle on the back design also avoids the terrible design decision by Apple to have a snag hazard and disconnection nuisance with an external battery and cable.

The SonyXR is shown to have enough eye relief to wear typical prescription glasses. This will be a major advantage in many potential XR/MR headset uses, making the headset more interchangeable between users. This is particularly important for use cases that are occasional or one-time rather than all-day (e.g., museum tours and other special events). Supporting enough eye relief for glasses is more optically difficult and requires larger optics for the same field of view (FOV).

Another major benefit of the larger eye relief is that it allows for peripheral vision. Peripheral vision is considered to start at about 100 degrees, or about where a typical VR headset’s FOV stops. While peripheral vision is low in resolution, it is sensitive to motion; it alerts a person to movement so they will turn their head. The saying goes that peripheral vision evolved to keep humans from being eaten by tigers. Translated to the modern world, it keeps you from being hit by moving machinery or running into things that might hurt you.

Another good feature shown in the Sony XR is the flip-up screen. There are so many times when you want to get the screen out of your way quickly. The first MR headset I used that supported this was the Hololens 2.

Another feature of the Hololens 2 is the front-to-back head strap (optional but included). Longtime VR gamer and YouTube personality Brad Lynch of the SadlyItsBradley YouTube channel has tried many VR-type headsets and optional headbands/straps. Brad says that front-to-back straps/pads generally provide the most comfort with extended use. Side-to-side straps, such as on the AVP, generally don’t provide the support where it is needed most. Brad has also said that while a forehead pad, such as on the Meta Quest Pro, helps, headset straps (which are not directly supported on the MQP) are still needed. It is not clear whether the Sony XR headset will have over-the-head straps. Even companies that support/include overhead straps generally don’t show them in the marketing photos and demos as they mess up people’s hair.

The SonyXR cameras are located closer to the user’s eyes. While there are no perfect placements for the two cameras, the further they are from the actual location of the eyes, the more distortion will be caused for making perspective/depth-correct passthrough (for more on this subject, see: Apple Vision Pro Part 6 – Passthrough Mixed Reality (PtMR) Problems).

Lynx R1

Lynx also used the headband with a forehead pad, with the back bustle and flip-up screen. Lynx also supports enough eye relief for glasses and good peripheral vision and locates their passthrough cameras near where the eye will be when in use. Unfortunately, I found a lot of problems with the optics Lynx chose for the R1 by the optics design firm Limbak (see also my Lynx R1 discussion with Brad Lynch). Apple has since bought Limbak, and it is likely Lynx will be moving on with other optical designs.

Digilens Argo New Head Band Version at CES 2024

I wrote a lot about Digilens Argo in last year’s coverage of CES and the AR/VR/MR conference in DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8). In the section Skull-Gripping “Glasses” vs. Headband or Open Helmet, I discussed how Digilens was missing an opportunity for both comfort and supporting the wearing of glasses. Digilens said they took my comments to heart and developed a variation with the rigid headband and flip-up display shown in their suite at CES 2024. Digilens said that this version let them expand their market (and no, I didn’t get a penny for my input).

The Argos are light enough that they probably don’t need an over-the-head band for extra support. If the headband were a ground-up design rather than a modular variation, I would have liked to see the battery and processing moved to a back bustle.

While on the subject of Digilens, they also had a couple of nice static displays. Pictured below right are variations in waveguide thickness they support. Generally, image quality and field of view can be improved by supporting more waveguide layers (with three layers supporting individual red, green, and blue waveguides). Digilens also had a static display using polarized light to show different configurations they can support for the entrance, expansion, and exit gratings (below right).

Vuzix

Vuzix has been making wearable heads-up displays for about 26 years and has a wide variety of headsets for different applications. Vuzix has been discussed on this blog many times. Vuzix primarily focuses on lightweight and small form factor glasses and attachments with displays.

Vuzix Ultralite Sport (S) and Forward Projection (Eye Glow) Elimination

New this year at CES was Vuzix’s Ultralite Sport (S) model. In addition to being more “sporty” looking, its waveguides are designed to eliminate forward projection (commonly referred to as “eye glow”). Eye glow was famously an issue with most diffractive waveguides, including the Hololens 1 & 2 (see right), Magic Leap 1 & 2, and previous Vuzix waveguide-based glasses.

Vuzix appears to be using the same method that both Digilens and Dispelix discussed in their AR/VR/MR 2022 papers that I discussed with Brad Lynch in a YouTube video after AR/VR/MR 2022 and in my blog article, DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8) in the sections on Eye Glow.

If the waveguides are canted (tilted), the exit gratings, which are designed to project light into the eye, will project the forward glow downward at twice the angle at which the waveguides are canted. Thus, with only a small change in the tilt of the waveguides, the forward projection will be far below the eye line of others (unless they are on the ground).
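The “twice the angle” behavior is just the law of reflection: tilting a reflective surface by θ deflects the reflected beam by 2θ. A minimal sketch (the 8-degree cant is an illustrative number I chose, not a vendor spec):

```python
def eye_glow_deflection(cant_deg: float) -> float:
    """Law of reflection: tilting the exit grating by theta moves the
    reflected forward projection (eye glow) by 2 * theta."""
    return 2.0 * cant_deg

# A modest 8-degree cant redirects the forward glow 16 degrees downward,
# well below the eye line of someone facing the wearer.
print(eye_glow_deflection(8.0))  # 16.0
```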

Ultra Light Displays with Audio (Vuzix/Xander) & Solos

Last year, Vuzix introduced their lightweight (38 grams) Z100 Ultralite, which uses a 640×480 green (only) MicroLED microdisplay. Xander, using the lightweight Vuzix Z100, has developed speech-to-text captioning glasses for people with hearing difficulties (Xander was in the AARP booth at CES).

While a green-only display with low resolution by today’s standards is not something you will want to watch movies on, there are many uses for a limited amount of text and graphics in a lightweight, small form factor. For example, I got to try out Solos Audio glasses, which, among other things, use ChatGPT to do on-the-fly language translation. It’s not hard to imagine that a small display showing text could help clarify what is being said for Solos and similar products, including the Amazon Echo Frames and the Ray-Ban Meta Wayfarer.

Mojie (Green) MicroLED with Plastic Waveguide

Like Vuzix Z100, the Mojie (a trademark of Meta-Bounds) uses green-only Jade Bird Display 640×480 microLEDs with waveguide optics. The big difference is that Mojie, along with Oppo Air 2 and Meizu MYVU, all use Meta-Bounds resin plastic waveguides. Unfortunately, I didn’t get to the Mojie booth until near closing time at CES, but they were nice enough to give me a short demo. Overall, regarding weight and size, the Mojie AR glasses are similar to the Vuzix Z100, but I didn’t have the time and demo content to judge the image quality. Generally, resin plastic diffractive waveguides to date have had lower image quality than ones on a glass substrate.

I have no idea what resin plastic Meta-Bounds uses or if they have their own formula. Mitsui Chemicals and Mitsubishi Chemicals, both of Japan, are known to be suppliers of resin plastic substrate material.

EverySight

ELBIT F35 Helmet and Skylens

Everysight (the company, not the front eye display feature on the Apple Vision Pro) has been making lightweight glasses, primarily for sports, since about 2018. Everysight spun out of ELBIT, a major defense (including the F35 helmet HUD) and commercial products company. Recently, ELBIT had their AR glasses HUD approved by the FAA for use in the Boeing 737NG series. Everysight uses an optics technology I call “precompensated off-axis.” Everysight (and ELBIT) have an optics engine that projects onto a curved front lens with a partial mirror coating. The precompensation optics of the projector correct for the distortion from hitting a curved mirror off-axis.

The Everysight/ELBIT technology is much more optically efficient than waveguide technologies and more transparent than “birdbath” technologies (the best-known birdbath technology today being Xreal). The amount of light from the display versus transparency is a function of the semi-transparent mirror coating. The downside of the Everysight optical system in a small glasses form factor is that the FOV and eyebox tend to be smaller. The new Everysight Maverick glasses have a 22-degree FOV and produce over 1,000 nits using a 5,000-nit 640×400-pixel full-color Sony Micro-OLED.

The front lens/mirror elements are inexpensive and interchangeable. But the most technically interesting thing is that Everysight has figured out how to support prescriptions built into the front lens. They use a “push-pull” optics arrangement similar to some waveguide headsets (most notably the Hololens 1&2 and Magic Leap). The optical surface on the eye side of the lens corrects the user’s vision for the virtual display, and the outside surface of the lens is curved to complete the vision correction for the real world.

TCL RayNeo X2 and RayNeo X2 Lite

As mentioned in the introduction, I generally no longer try to take “through the optics” pictures at CES, given how difficult it is to get good, representative photos in the short time available. Still, I got some good photos through TCL’s RayNeo X2 and the RayNeo X2 Lite. While the two products sound very close, the image quality of the “Lite” version, which switched to using Applied Materials (AMAT) diffractive waveguides, was dramatically better.

The older RayNeo X2s were available to see on the floor and had problems, particularly with the diffraction gratings capturing stray light and the general color quality. I was given a private showing of the newly announced “Lite” version using the AMAT waveguides, and not only were they lighter, but the image quality was much better. The picture on the right below shows the RayNeo X2 (with an unknown waveguide) on the left that captures the stray overhead light (see streaks at the arrows). The picture via the Lite model (with the AMAT waveguide) does not exhibit these streaks, even though the lighting is similar. Although hard to see in the pictures, the color uniformity with the AMAT waveguide also seems better (although not perfect, as discussed later).

Both RayNeo models use 3-separate Jade Bird Display red, green, and blue MicroLEDs (inorganic) with an X-cube color combiner. X-cubes have long been used in larger LCD and LCOS 3-panel projectors and are formed with four prisms with different dichroic coatings that are glued together. Jade Bird Display has been demoing this type of color combiner since at least AR/VR/MR 2022 (above). Having worked with 3-Panel LCOS projectors in my early days at Syndiant, I know the difficulties in aligning three panels to an X-cube combiner. This alignment is particularly difficult with the size of these MicroLED displays and their small pixels.

I must say that the image quality of the TCL RayNeo X2 Lite exceeded my expectations. Everything seems well aligned in the close-up crop from the same parrot picture (below). Also, there seems to be relatively good color, without the wide pixel-to-pixel brightness variation I have seen in past MicroLED displays. While this is quite an achievement for a MicroLED system, the RayNeo X2 Lite has only a modest 640×480 resolution display with a 30-degree diagonal FOV. These specs result in about 26 pixels per degree, or about half the angular resolution of many other headsets. The picture below was taken with a Canon R5 with a 16mm lens, which, as it turns out, has a resolving power close to good human vision.
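As a sanity check on those numbers, angular resolution for a display quoted with a diagonal FOV can be estimated from the diagonal pixel count. A rough sketch (real optics vary across the field, so treat this as a first-order estimate):

```python
import math

def pixels_per_degree(h_px: int, v_px: int, diag_fov_deg: float) -> float:
    """First-order angular resolution: diagonal pixels / diagonal FOV."""
    diag_px = math.hypot(h_px, v_px)
    return diag_px / diag_fov_deg

# TCL RayNeo X2 Lite: 640x480 pixels over a 30-degree diagonal FOV
ppd = pixels_per_degree(640, 480, 30.0)
print(round(ppd, 1))  # ~26.7 pixels per degree
```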

Per my warning in the introduction, all demos are magic shows. I don’t know how representative this prototype will be of units in production, and perhaps most importantly, I did not try my test patterns but used the images provided by TCL.

Below is another picture of the parrot taken against a darker background. Looking at the wooden limb under the parrot, you will see it is somewhat reddish on the left and greenish on the right. This might indicate color shifting due to the waveguide, as is common with diffractive waveguides. Once again, taking quick pictures at shows (all these were handheld) and without controlling the source content, it is hard to know. This is why I would like to acquire units for extended evaluations.

The next two pictures, taken against a dark background and a dimly lit room, show what I think should be a white text block on the top. But the text seems to change from a reddish tint on the left to a blueish tint on the right. Once again, this suggests some color shifting across the diffractive waveguide.

Below is the same projected image taken with identical camera settings but with different background lighting.

Below is the same projected flower image with the same camera settings and different lighting.

Another thing I noticed with the Lite/AMAT waveguides is significant front projection/eye glow. I suspect this will be addressed in the future, as has been demonstrated by Digilens, Dispelix, and Vuzix, as discussed earlier.

Conclusions

The Sony XR headset seems to address many of the beginner mistakes Apple made with the AVP. In the case of the Digilens Argo last year, they seemed to be caught between being a full-featured headset and the glasses form factor. The new Argo headband seems like a good industrial form factor that allows people to wear normal glasses and flip the display out of the way when desired.

Vuzix, with its newer Ultralite Z100 and Sport models, seems to be emphasizing lightweight functionality. Vuzix and the other makers of waveguide AR glasses have not given a clear path as to how they will support people who need prescription glasses. The most obvious approach is some form of “push-pull,” with a lens before and after the waveguides. Luxexcel had a way to 3-D print prescription push-pull lenses, but Meta bought them. Add Optics (formed by former Luxexcel employees) has another approach using 3-D printed molds. Everysight addresses prescription lenses with a somewhat different push-pull approach that their optical design necessitates.

While not perfect, the TCL color MicroLED, at least in the newer “Lite” version, was much better than I expected. At the same time, one has to recognize the resolution, FOV, and color uniformity are still not up to some other technologies. In other words, to appreciate it, one has to recognize the technical difficulty. I also want to note that Vuzix has said that they are also planning on color MicroLED glasses with three microdisplays, but it is not clear whether they will use an X-cube or a waveguide combiner approach.

The moderate success of smart audio glasses may be pointing the way for these ultra-light glasses form factor designs for a consumer AR product. One can readily see where adding some basic text and graphics would be of further benefit. We will know if this category has become successful if Apple enters this market 😁.

Display Daily Senior Analyst, SID Display Information Article, & Speaking at AWE

Introduction

I want my readers to know about my first article on Display Daily as a “Senior Analyst” and the article I wrote for the March/April issue of SID Information Display. I will also be attending and speaking at the upcoming AWE 2023 conference, which runs from May 31st through June 2nd. I also recently recorded another AR Show podcast with Jason McDowell, which should be published in a few weeks.

I also wanted you to know I have a lot of travel planned for May, so there may not be much, if anything, published on this blog in May. But I have several articles in the works for this month and should have more to discuss in June.

Display Daily Article On Apple – New “Senior Analyst”

Display Daily, a division of Jon Peddie Research, has just put out an article by me discussing the long-rumored Apple mixed reality headset. In some ways, this follows up an article I wrote for Display Daily in 2015 titled VR and AR Head Mounted Displays – Sorry, but there is no Santa Claus.

Display Daily and I are looking at joint written and video projects. I will be teaming up with Display Daily as a “Senior Analyst” on these new projects while I continue to publish this blog.

SID Information Display Magazine’s March/April 2023 Article, “The Crucial Role of Optics in AR/MR”

I was asked to contribute an article to SID’s Information Display Magazine’s printed and online March/April 2023 issue.

The article (available for free download) discusses the most common types of optics and displays used in mixed reality today and what I see as the technologies of the future.

Attending and Presenting at AWE 2023

AWE has been the best conference for seeing a wide variety of AR, VR, and MR headsets for many years. While I mostly spend my time on the show floor and in private meetings to see the “good stuff,” I have been invited to give a presentation this year. The topic of the presentation will be the pros and cons of optical versus video passthrough mixed reality. The conference runs from May 31st to June 2nd. I will be presenting at 9:00 AM on June 2nd.

My Long History with Display Daily (20+ Years) and even Longer with Jon Peddie (40+ Years)

I’ve been interacting with Display Daily and its former parent company, Insight Media, headed by Chris Chinnock (who is still a Display Daily contributor), since I left Texas Instruments to work on LCOS display devices in 1998. Meko, headed by Bob Raikes, took over Display Daily in 2014; then, late in 2022, Jon Peddie Research acquired Display Daily.

It turns out that I have known market analyst Jon Peddie since the mid-1980s, when I was the chief architect of the TMS34010, the world’s first fully programmable graphics processor, and led the definition of other graphics devices, including the first Video DRAM. Jon suggested we work together on some projects, and I have become a Senior Analyst at Display Daily.

AR Longan Vision AR for First Responders (CES – AR/VR/MR 2023 Pt. 5)

Introduction

This next entry in my series on companies I met with at CES or Photonics West’s (PW) AR/VR/MR show in 2023 will discuss a company working on a headset for a specific application, namely firefighting and related first responders. In discussing Longan Vision, I will mention ThermalGlass (by 360world using Vuzix Blaze optics), Campfire 3D, iGlass, and Mira, which have some similar design features. In addition to some issues common with all AR devices, Longan Vision has unique issues related to firefighting and other first responder applications.

This was my first meeting with Longan Vision, and it was not for very long. I want to be clear that I have no experience working with firefighters or their needs and opinions on AR equipment. In this short article, I want to point out how they tried to address the user’s needs in an AR headset.

Longan Vision

Below is a picture of Longan Vision’s booth, my notations, and some inset pictures from Longan’s website.

Hands-free operation is a big point and central to the use case for many AR designs. Longan uses AR to enhance vision by letting firefighters see through the smoke and darkness and providing additional life-saving information such as temperature and direction.

The AR optics are one of the simplest and least expensive possible; they use dual merged large curved free-space combiners, often called “bug-eye” combiners based on their appearance. They use a single cell phone-size display device to generate the image (some bug-eyes use two smaller displays). The combiner has a partial mirror coating to reflect the display’s image to the eye. The curvature of the semi-reflective combiner magnifies and moves the focus of the display, while light from the real world will be dimmed by roughly the amount of the display’s light reflected.
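The magnifying and focus-shifting behavior of a curved combiner follows the ordinary concave mirror equation. The sketch below uses illustrative numbers I chose (a 200 mm radius of curvature and a 70 mm display distance), not Longan’s actual design:

```python
def virtual_image_distance(radius_mm: float, object_mm: float) -> float:
    """Concave mirror equation: 1/f = 1/d_o + 1/d_i, with f = R/2.
    A negative result means a virtual image behind the mirror."""
    f = radius_mm / 2.0
    return 1.0 / (1.0 / f - 1.0 / object_mm)

# Display placed inside the focal length -> magnified virtual image
d_i = virtual_image_distance(200.0, 70.0)   # about -233 mm (virtual)
m = -d_i / 70.0                             # about 3.3x upright magnification
print(round(d_i, 1), round(m, 2))
```

With the display inside the mirror’s focal length, the combiner forms an enlarged, upright virtual image well behind the combiner, which is why the display appears magnified and pushed out to a comfortable focus distance.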

The bug-eye combiner has well-known good, bad, and other points (also discussed in a previous article).

Birdbath Optics
  • The combiner is inexpensive to produce with reasonably good image quality. This means it can also be replaced inexpensively if it becomes damaged.
  • It gives very large eye relief, so there are no issues with wearing glasses. Thus it can be worn interchangeably by almost everyone (one size fits all).
  • It is optically efficient compared to Birdbath, Waveguides, and most other AR optics.
  • While large, the combiner can be made out of very rugged plastics and is not likely to break and will not shatter. It can even serve as eye and face protection.
  • Where the eyes will verge is molded into the optics and will differ from person to person based on their IPD.
  • As the name “bug-eye” suggests, they are big and unattractive.
  • Because the combiner magnifies a very large (by near-eye standards) display with very large pixels, the angular resolution (pixels per degree) is very low, while the FOV is large.
  • Because the combiner is “off-axis” relative to the display, the magnification and focus are variable. This effect can be reduced but not eliminated by making the combiner aspherical. Birdbath optics (described here and shown above-right) have a beamsplitter, which greatly reduces efficiency but makes optics “on-axis” to eliminate these issues.
  • Brightness is limited to the display’s brightness multiplied by the fraction of light reflected by the combiner. Typically, flat panels will have between 500 and 1,000 nits, and the reflected fraction typically ranges between 50% and 20%, depending on the tradeoff of display brightness versus transparency of the real world. These factors and others typically limit their use to indoor applications.
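The brightness/transparency tradeoff in that last point is simple arithmetic. A sketch with illustrative numbers (not measured values for any product):

```python
def combiner_tradeoff(panel_nits: float, mirror_reflectivity: float):
    """A partial-mirror combiner reflects a fraction of the display's light
    to the eye; roughly the complement of that fraction is what transmits
    from the real world (ignoring absorption and other losses)."""
    to_eye_nits = panel_nits * mirror_reflectivity
    real_world_transmission = 1.0 - mirror_reflectivity
    return to_eye_nits, real_world_transmission

# A 1,000-nit panel behind a 30%-reflective combiner: roughly 300 nits
# reach the eye, while about 70% of real-world light still gets through.
nits, transparency = combiner_tradeoff(1000.0, 0.3)
print(nits, transparency)
```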

Longan also had some unique requirements incorporated into their design:

  • The combiner had to be made out of high-temperature plastics.
  • They had to use high-temperature batteries, which added some weight and bulk. Due to their flammability, they could not use the common, more energy-dense lithium batteries.
  • The combiner supports flipping up to get out of the user’s vision. This is a feature supported by some other bug-eye designs.
  • The combiner also acts as an eye and partial face shield. Their website demonstration video shows firefighters having an additional flip-up outer protective shield. It is not clear if these will interfere with each other when flipping up and down.
  • The combiner must accommodate the firefighting breathing apparatus.
  • An IR camera feeds the display, letting the firefighter see what would otherwise be invisible.

Companies with related technologies

I want to mention a few companies that have related technologies.

At CES 2023, I met with ThermalGlass (by 360world), which combined infrared heat images with Vuzix blade technology to produce thermal vision AR glasses. I discussed ThermalGlass in my CES recap with SadlyItsBradley.

Mira has often been discussed on this blog as an example of a low-cost AR headset. Mira’s simple technology is most famously used in the Universal Studios Japan and Hollywood Mario Kart rides. Mira’s website shows a more industrially oriented product with a hard hat and an open frame/band version. Both, like Longan, support a flip-up combiner. The open headband version does not appear to have enough support, with just a headband and forehead pad; usually, an over-the-head band is also desirable for comfort and a secure fit with this type of support.

In my video with SadlyItsBradley after AWE 2022, I discussed other large combiner companies, including Campfire, Mira, and iGlass.

The images below show some pictures I took at AWE 2018 of the iView prototype with a large off-axis combiner with a front view (upper left), a view directly of the displays (lower left), and a view through the combiner without any digital correction (below right). The football field in the picture below right illustrates how the image is distorted and how the focus varies from the top to the bottom of the display (the camera was focused at about the middle of the image). Typically the distortion can be corrected in software with some loss in resolution due to the resampling. The focusing issue, however, cannot be corrected digitally and relies on the eye to adjust focus depending on where the eye is centered.

Conclusions

Longan has thought through many features from the firefighter’s user perspective. In terms of optics, it is not the highest-tech solution, but it may not need to be for the intended application. The alternative approach might be to use a waveguide much closer to the eye but with enough eye relief to support glasses. But then the waveguide would have to be extremely ruggedized with its own set of issues in a firefighter’s extreme environment.

Unlike many AR headsets that leave me scratching my head, with Longan Vision I can see the type of customer that might want this product.

The post AR Longan Vision AR for First Responders (CES – AR/VR/MR 2023 Pt. 5) first appeared on KGOnTech.


CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy

Introduction

Bradley Lynch of the SadlyItsBradley YouTube channel hosted my presentation about CES 2023. The video was recorded about a week after CES, but it took a few weeks to edit and upload everything. There are over 2 hours of Brad and me talking about things we saw at CES 2023.

Brad was doing his usual YouTube content: fully editing the video, improving the visual content, and breaking the video down into small chunks. But it took Brad weeks to get 3 “sub videos” (part 1, part 2, and part 3) posted while continuing to release his own content. Realizing that it would take a very long time at this rate, Brad released part 4 with the rest of the recording session with only light editing as a single 1-hour and 44-minute video with chapters.

For those that follow news about AR and VR, Brad got involved in a controversy with his leaks of information about the Meta Quest Pro and Meta Quest 3. The controversy occurred between the recording and the release of the videos, so I felt I should comment on the issue.

Videos let me cover many more companies

This blog has become highly recognized in the AR/MR community, and I have many more companies wanting me to write about their technology than I have the time. I also want to do in-depth articles, including major through-the-optics studies on “interesting” AR/MR devices.

I have been experimenting with ways to get more content out quicker. I can spend from 3 days to as much as 2 months (such as on the rest of the yet-to-be-published Meta Quest Pro series) working on a single article about a technology or product. With CES and the AR/VR/MR conference only 3 weeks apart, and with about 20 companies to meet at each conference, there is no way to cover everything in depth.

In the past, I only had time to write about a few companies that I thought had the most interesting technology. For the CES 2023 video, it took about 3 days to organize the photos and then about 2.5 hours to discuss about 20 companies and their products, or about 5 to 7 minutes per topic (not including all the time Brad spent editing the video).

I liked working with Brad; we hope to do videos together in the future; he is fun to talk to and adds a different perspective with his deep background in VR. But in retrospect, less than half of what we discussed fits with his primary VR audience.

Working on summary articles for CES and the SPIE AR/VR/MR conference

Over 2 hours of Brad and me discussing more than 20 companies, plus various other subjects and opinions about the AR, VR, and MR technology and industry, is a lot for people to go through. Additionally, the CES video was shot in one sitting, non-stop. Unfortunately, my dogs decided they wanted to get through my closed office door and barked much more than I realized, as I was focused on the presentation (I should have stopped the recording and quieted them down).

I’m working on a “quick take” summary guide with pictures from the video and some comments and corrections/updates. I expect to break the guide into several parts based on broad topics. It might take a few days before this guide gets published as there is so much material.

Assuming the CES quick take guide goes well, I plan to follow up with my quick takes on the AR/VR/MR conference. I’m also looking at recording a discussion at the AR/VR/MR conference that will likely be published on the KGOnTech YouTube channel.

Links to the Various Sections of the Video

Below is a list of topics with links for the four videos.

Video 1

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

Video 2

  • 0:00 Porotech MicroLEDs

Video 3

  • 0:00 NewSight Reality’s Transparent uLEDs
  • 4:07 LetinAR Glasses (Bonus Example/Explanation)

Video 4

SadlyItsBradley’s Meta Leaks Controversy

Between the time of recording the CES 2023 video with Brad and the videos being released, there was some controversy involving Brad and Meta that I felt should be addressed because of my work with Brad.

Brad Lynch made national news when The Verge reported that Meta had caught Brad’s source for the Meta Quest Pro and Meta Quest 3 information and diagrams. Perhaps ironically, the source for the Verge article was a leaked memo by Meta’s CTO, Andrew Bosworth (who goes by Boz). According to The Verge, “In his post to Meta employees, Bosworth confirmed that the unnamed leaker was paid a small sum for sharing the materials with Lynch.”

From what was written in The Verge article and Brad’s subsequent Twitter statement, it seems clear that Brad didn’t know that paying a source, known in journalism as “checkbook journalism,” is considered unethical. It is one of those gray areas where, as I understand it (and this is not legal advice), it is not illegal unless the reporter solicits the leak. At the same time, if I had known Brad was going to pay a source, I would have advised him not to do it.

It is nice to know that news media that will out and out lie, distort, hide key information, and report as true information from highly biased named and unnamed sources still has one slim ethical pillar: leaks are our life’s blood but don’t get caught paying for one. It is no wonder public trust in the news media is so low.

The above said, and to be clear, I never have and would never pay a source or encourage anyone to leak confidential content. I also don’t think it was fair or right for a person under NDA to leak sensitive information except in cases of illegal or dangerous activity by the company.

KGOnTech (My) Stance on Confidentiality

Unless under contract for a significant sum of money, I won’t sign an NDA, as it means taking on a legal and, thus, financial risk. At the same time, when I meet privately with companies, I treat information and material as confidential, even if it is not marked as such, unless they want me to release it. I’m constantly asking companies, “What of this can I write about?”

My principle is that I never want to be responsible for hurting someone who shared information with me. And as stated above, I would never encourage, much less pay, someone to break a confidence. If someone shares information with me to publish, I always try to find out whether they want their name made public, as I don’t want to get them in trouble or take credit for their effort.

Closing

That’s it for this article. I’ve got to finish my quick take summaries on CES and the AR/VR/MR conference.

The post CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy appeared first on KGOnTech.

CES 2023 SadlyItsBradley and KGOnTech Joint Review Video Series (Part 1)

New Video Series on CES 2023

Brad Lynch of the SadlyItsBradley YouTube Channel and I sat down for over 2 hours a week after CES and recorded our discussion of more than 20 companies that one or both of us met with at CES 2023. Today, Jan. 26, 2023, Brad released the 23-minute part 1 of the series. Brad did all the editing, while I did much of the talking.

Brad primarily covers VR, while this blog mostly covers optical AR/MR. Our two subjects meet when we discuss “Mixed Reality,” where the virtual and the real world merge.

Brad’s title for part 1 is “XR at CES: Deep Dives #1 (Magic Leap 2, OpenBCI, Meta Materials).” While Brad describes the series as a “Deep Dive,” I, as an engineer, consider it to be more of an “overview.” It will take many more days to complete my blog series on CES 2023. This video series briefly discusses many of the same companies I plan to write about in more detail on this blog, so consider it a look ahead at some future articles.

Brad’s description of Part 1 of the series:

There have been many AR/VR CES videos from my channel and others, and while they gave a good overview of the things that could be seen on the show floor and in private demos, many don’t have the technical background to go into how each thing may or may not work.

Therefore I decided to team up with retired Electrical Engineer and AR skeptic, Karl Guttag, to go over all things XR at CES. This first part will talk about things such as the Magic Leap 2, Open BCI’s Project Galea, Meta Materials and a few bits more!

Brad also has broken the video into chapters by subject:

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs. VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

That’s it for today. Brad expects to publish about 2 to 3 videos in the next week. I will try to post a brief note as Brad publishes each video.

The post CES 2023 SadlyItsBradley and KGOnTech Joint Review Video Series (Part 1) appeared first on KGOnTech.
