Snap Spectacles 5 and Meta Orion Roundtable Video Part 1

Introduction

On October 17th, 2024, Jason McDowall (The AR Show), Jeri Ellsworth (Tilt Five), David Bonelli (Pulsar), Bradley Lynch (SadlyItsBradley), and I recorded a 2-hour roundtable discussion about the recent announcements of the Snap Spectacles 5 and Meta Orion optical AR/MR glasses. Along the way, we discussed various related subjects, including some about the Apple Vision Pro.

I’m breaking the video into several parts to keep some discussions from being buried in a single long video. In this first part, we primarily discuss the Snap Spectacles 5 (SS5). The SS5 will be discussed some more in the other parts, which will be released later. We also made some comments on the Apple Vision Pro, which Bradley Lynch and I own.

The 2-hour roundtable is being released in several parts, with "AR Roundtable Part 1: Snap Spectacles 5 and some Apple Vision Pro" being the first.

0:00 Introduction of the panelists

Jason McDowall, as moderator, gets things going by having each panelist introduce themselves.

2:11 See-Through versus Passthrough Mixed Reality

I gave a very brief explanation of the difference between see-through/optical AR/MR and passthrough MR. The big point is that with see-through/optical AR/MR, the view of the real world is most important, while with passthrough MR, the virtual world is most important, with the camera's view of the real world augmenting the virtual content.

5:51 Snap Spectacles 5 (SS5) experience and discussion

Jason McDowall had the opportunity to get a demo of the Snap Spectacles 5, followed by a discussion among the panelists. Jason has a more detailed explanation of his experience and an interview with Sophia Dominguez, the Director of AR Platform Partnerships and Ecosystem at Snap, on his podcast.

11:59 Dimming (light blocking) with optical AR glasses

Jason noted the dimming feature of the SS5, and this led to the discussion of the need for light blocking with see-through AR.

19:15 See-through AR is not well suited for watching movies and TV

I make the point that see-through AR is not going to be a good device for watching movies and TV.

19:54 What is the application?

We get into a discussion of the applications for see-through AR.

20:23 Snap’s motivation? And more on applications

There is some discussion about what is driving Snap to make Spectacles, followed by more discussion of applications.

22:35 What are Snap’s and Meta’s motivations?

The panelists give their opinions on what is motivating Snap and Meta to enter the see-through AR space.

23:31 What makes something “portable?”

David makes the point that if AR glasses are not all-day wearable, then they are not very portable. When you take them off, you have fragile things to protect in a case that is a lot bigger and bulkier than a smartphone you can shove in your pocket.

24:13 Wearable AI (Humane AI and Rabbit)

Many companies are working on "AI wearable" devices, and many of them are looking to combine a small-FOV display (typically 25-35 degrees) with audio "AI" glasses.

24:40 Reviewers/Media Chasing the Shiny Object (Apple Vision Pro and Meta Orion)

25:45 Need for a “$99 Google Glass”

Jeri liked Google Glass and thinks there is a place for a “$99 Google Glass”-like product in the market. David adds some information about the economics of ramping up production of the semi-custom display that Google Glass uses. I (Karl) then discuss some of the ecosystem issues of making a volume product.

27:28 Apple Vision Pro discussion

Brad Lynch uses his Apple Vision Pro daily and has even replaced his monitor with the AVP. He regularly used the "Personas" (avatars) when talking with co-workers and others in the VR community, but he now refrains from using them when talking with others "out of respect." I have only used mine occasionally since doing my initial evaluation for this blog.

29:10 Mixed Reality while driving (is a bad idea)

Jeri brings up the "influencers" who bought (and likely returned within the two-week return window) an Apple Vision Pro to make a viral YouTube video while driving around in a Cybertruck. We then discuss how driving this way is dangerous.

Next Video – Meta Orion

In the next video in this series, we discuss Meta Orion.

If Smart Glasses Are Coming, What Will That Mean for Classrooms?

When Meta held its annual conference at the end of September, the tech giant announced it is betting that the next wave of computing will come in the form of smart eyeglasses.

Mark Zuckerberg, Meta’s founder and CEO, held up what he described as the first working prototype of Orion, which lets wearers see both the physical world and a computer display hovering in the field of vision.

“They’re not a headset,” he said on stage as he announced the device, which looked like a set of unusually chunky eyeglasses. “This is the physical world with holograms overlaid on it.”

For educators, this might not come as welcome news.

After all, one of the hottest topics in edtech these days is the growing practice of banning smartphones in schools, after teachers have reported that the devices distract students from classroom activities and socializing in person with others. And a growing body of research, popularized by the Jonathan Haidt book “The Anxious Generation,” argues that smartphone and social media use harms the mental health of teenagers.

When it’s proving hard enough to regulate the appropriate use of smartphones, what will it be like to manage a rush of kids wearing computers on their faces?

Some edtech experts see upsides, though, when the technology is ready to be used for educational activities.

The idea of using VR headsets to enter an educational metaverse (the last big idea Meta was touting when it changed its corporate name from Facebook three years ago) hasn’t caught on widely, in part because getting a classroom full of students fitted with headsets and holding controllers can be difficult for teachers, not to mention the expense of obtaining all that gear. But if smart glasses become cheap enough for a cart to be wheeled in with enough pairs for each student, so they can all do some activity together that blends the virtual world with in-person interactions, they could be a better fit.

“Augmented reality allows for more sharing and collaborative work than VR,” says Maya Georgieva, who runs an innovation center for VR and AR at The New School in New York City. “Lots of these augmented reality applications build on the notion of active learning and experiential learning naturally.”

And there is some initial research that has found that augmented reality experiences in education can lead to improvements in learning outcomes since, as one recent research paper put it, “they transform the learning process into a full-body experience.”

Cheating Glasses?

The Orion glasses that Zuckerberg previewed last week are not ready for prime time — in fact the Meta CEO said they won’t be released to the general public until 2027.

(EdSurge receives philanthropic support from the Chan-Zuckerberg Initiative, which is co-owned by Meta’s CEO. Learn more about EdSurge ethics and policies here and supporters here.)

But the company already sells smart eyeglasses through a partnership with sunglass-maker Ray-Ban, which are now retailing for around $300. And other companies make similar products as well.

These gadgets, which have been on the market for a couple of years in some form, don’t have a display. But they do have a small built-in computer, a camera, a microphone and speakers. And recent advances in AI mean that newer models can serve as a talking version of a chatbot that users can access when they’re away from their computer or smartphone.

While so far the number of students who own smart glasses appears low, there have already been some reports of students using smart glasses to try to cheat.

This year in Tokyo, for instance, an 18-year-old allegedly used smart glasses to try to cheat on a university entrance exam. He apparently took pictures of his exam questions, posted them online during the test, and users on X, formerly Twitter, gave him the answers (which he could presumably hear read to him on his smart glasses). He was detected and his test scores were invalidated.

Meanwhile, students are sharing videos on TikTok where they explain how to use smart glasses to cheat, even low-end models that have few “smart” features.

“Using these blue light smart glasses on a test would be absolutely diabolical,” says one TikTok user’s video, describing a pair of glasses that can simply pair with a smartphone by bluetooth and cost only about $30. “They look like regular glasses, but they have speakers and microphones in them so you can cheat on a test. So just prerecord your test or your answers or watch a video while you're at the test and just listen to it and no one can tell that you’re looking or listening to anything.”

On Reddit discussions, professors have been wondering whether this technology will make it even harder to know whether the work students are doing is their own, compounding the problems caused by ChatGPT and other new AI tools that have given students new ways to cheat on homework that are difficult to detect.

One commenter even suggested just giving up on doing tests and assignments and trying to find new ways of assessing student knowledge. “I think we have too many assessments that have limited benefit and no one here wants to run a police state to check if students actually did what they say they did,” the user wrote. “I would appreciate if anyone has a functional viable alternative to the current standard. The old way will benefit the well off and dishonest, while the underprivileged and moral will suffer (not that this is new either).”

Some of the school and state policies that ban smartphones might also apply to these new smart glasses. A state law in Florida, for instance, restricts the use of “wireless communication devices,” which could include glasses, watches, or any new gadget that gets invented that connects electronically.

“I would compare it very much to when smartphones really came on the scene and became a regular part of our everyday lives,” says Kyle Bowen, a longtime edtech expert who is now deputy chief information officer at Arizona State University, noting that these glasses might impact a range of activities if they catch on, including education.

There could be upsides in college classrooms, he predicts.

The benefit he sees for smart glasses is the pairing of AI and the devices, so that students might be able to get real-time feedback about, say a lab exercise, by asking the chatbot to weigh in on what it sees through the camera of the glasses as students go about the task.

© Screenshot from Meta video

Here’s what I made of Snap’s new augmented-reality Spectacles

Before I get to Snap’s new Spectacles, a confession: I have a long history of putting goofy new things on my face and liking it. Back in 2011, I tried on Sony’s head-mounted 3D glasses and, apparently, enjoyed them. Sort of. At the beginning of 2013, I was enamored with a Kickstarter project I saw at CES called Oculus Rift. I then spent the better part of the year with Google’s ridiculous Glass on my face and thought it was the future. Microsoft HoloLens? Loved it. Google Cardboard? Totally normal. Apple Vision Pro? A breakthrough, baby. 

Anyway. Snap announced a new version of its Spectacles today. These are AR glasses that could finally deliver on the promises devices like Magic Leap, or HoloLens, or even Google Glass, made many years ago. I got to try them out a couple of weeks ago. They are pretty great! (But also: See above)

These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS. Unlike typical VR headsets or spatial computing devices, these augmented-reality (AR) lenses don’t obscure your vision and re-create it with cameras. There is no screen covering your field of view. Instead, images appear to float and exist in three dimensions in the world around you, hovering in the air or resting on tables and floors.

Snap CTO Bobby Murphy described the intended result to MIT Technology Review as “computing overlaid on the world that enhances our experience of the people in the places that are around us, rather than isolating us or taking us out of that experience.” 

In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony. 

But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room. 

To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. It creates a tall, deep field of view—Snap claims it is similar to a 100-inch display at 10 feet—in a relatively small, lightweight device (226 grams). What’s more, they automatically darken when you step outside, so they work well not just in your home but out in the world.
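Snap's "similar to a 100-inch display at 10 feet" claim can be translated into a rough diagonal field of view with basic trigonometry. This is my own back-of-the-envelope check, not a figure from Snap:

```python
import math

# A 100-inch diagonal screen viewed from 10 feet subtends:
diag_in = 100.0        # claimed virtual screen diagonal, inches
dist_in = 10.0 * 12.0  # viewing distance, inches
diag_fov_deg = 2 * math.degrees(math.atan((diag_in / 2) / dist_in))
print(round(diag_fov_deg, 1))  # ≈ 45.2 degrees diagonal
```

That works out to roughly a 45-degree diagonal FOV, consistent with the mid-40s diagonal FOV reported for this class of see-through AR device.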

You control all this with a combination of voice and hand gestures, most of which came pretty naturally to me. You can pinch to select objects and drag them around, for example. The AI chatbot could respond to questions posed in natural language (“What’s that ship I see in the distance?”). Some of the interactions require a phone, but for the most part Spectacles are a standalone device. 

It doesn’t come cheap. Snap isn’t selling the glasses directly to consumers but requires you to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. I was assured that the company has a very open definition of who can develop for the platform. Snap also announced a new partnership with OpenAI that takes advantage of its multimodal capabilities, which it says will help developers create experiences with real-world context about the things people see or hear (or say).

Photo: the author standing outside wearing the oversize Snap Spectacles.
It me.

Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there—Lego bricks collapsing into each other, for example—but for the most part this was a solid little device. 

It is not, however, a low-profile one. No one will mistake these for a normal pair of glasses or sunglasses. A colleague described them as beefed-up 3D glasses, which seems about right. They are not the silliest computer I have put on my face, but they didn’t exactly make me feel like a cool guy, either. Here’s a photo of me trying them out. Draw your own conclusions.

UbiSim, a Labster company

UbiSim is the first immersive virtual reality (VR) training platform built specifically for nurses. It is a complete simulation lab that provides nursing trainees with virtual access to a variety of clinical situations and diverse patients in a broad continuum of realistic care settings, helping institutions to overcome limited access to hospitals and other clinical sites for nursing students.

This cool tool allows institutions to create repeatable, real-life scenarios that provide engaging, standardized multi-learner experiences using VR headsets. It combines intuitive interactions, flexible modules, and immediate feedback. These contribute to developing clinical judgment, critical thinking, team interaction, clear communication, and patient engagement skills that enhance safe clinical practice and are essential to improving Next Generation NCLEX test scores.

UbiSim reduces the burden of purchasing and maintaining expensive simulation lab equipment, allowing nursing programs to scale and standardize their simulation activities. Faculty choose from 50-plus existing training scenarios created in collaboration with nursing educators and simulation experts. Educators may also customize content or create original scenarios to fit learning objectives.

Founded in 2016, UbiSim has been a Labster company since 2021. The UbiSim customer roster has grown by 117% since Fall 2022, extending its footprint at universities, community colleges, technical colleges, and medical centers within 9 countries, including 21 American states. UbiSim now partners with 100-plus nursing institutions in North America and Europe to advance the shared mission of addressing the nursing shortage by reducing the cost, time, and logistical challenges of traditional simulation methods and scaling high-quality nursing education. For these reasons and more, UbiSim, a Labster company, is a Cool Tool Award Winner for “Best Virtual Reality / Augmented Reality (AR/VR) Solution” as part of The EdTech Awards 2024 from EdTech Digest. Learn more

AWE 2024 VR – Hypervision, Sony XR, Big Screen, Apple, Meta, & LightPolymers

Introduction

Based on information gathered at SID Display Week and AWE, I have many articles to write, drawing on the thousands of pictures I took and the things I learned. I have been organizing and editing the pictures.

As its name implies, Display Week is primarily about display devices. My major takeaway from that conference is that many companies are working on full-color MicroLEDs using different approaches, including quantum-dot color conversion, stacked layers, and single emitters with color shifting based on current or voltage.

AWE moved venues from the Santa Clara Convention Center in Silicon Valley to the larger Long Beach Convention Center south of LA. More than just a venue shift, I sensed a shift in direction. Historically, at AWE, I have seen many optical see-through AR/MR headsets, but there seemed to be fewer optical headsets this year. Instead, I saw many companies with software running on VR/passthrough AR headsets, primarily on the Meta Quest 3 (MQ3) and Apple Vision Pro (AVP).

This article was partly inspired by Hypervision’s white paper discussing whether micro-OLEDs or small LCDs were the best path to 60 pixels per degree (PPD) with a wide FOV combined with the pictures I captured through Hypervision’s HO140 (140° diagonal FOV per eye) optics at AWE 2024. I have taken thousands of pictures through various headsets, and the Hypervision picture stood out in terms of FOV and sharpness. I have followed Hypervision since 2021 (see Appendix: More on Hypervision).

I took my first pictures at AWE through the Sony XR (SXR) headset optics. At least subjectively, in a short demo, the SXR’s image quality (sharpness and contrast) seemed higher than that of the AVP, but the FOV was smaller. I had on hand thousands of pictures I had taken through the Big Screen Beyond (BSB), AVP, Meta Quest Pro (MQP), and Meta Quest 3 (MQ3) optics with the same camera and lens, plus a few of the Hypervision HO140 prototype. So, I decided to make some comparisons between the various headsets.

I also want to mention LightPolymers’ new Quarter Waveplate (QWP) and Polarization technologies, which I first learned about from a poster in the Hypervision AWE booth. In April 2024, the two companies announced a joint development grant. They offer an alternative to the plastic film QWP and Polarizers, where 3M dominates today.

Hypervision’s HO140 Display

Based on my history of seeing Hypervision’s 240° prototypes over the last three years, I had, until AWE 2024, largely overlooked their single-display 140° models. I had my Canon R5 (45MP, with a 405MP "3×3 sensor pixel shift mode") and tripod with me at AWE this year, so I took a few high-resolution pictures through the optics of the HO140. Below are pictures of the 240° (left) and 140° (right) prototypes in the Hypervision booth. Hypervision is an optics company, not a headset maker, and the demos are meant to show off their optics.

When I got home and looked at the pictures through the HO140, I was impressed by its overall image quality, having taken thousands of pictures through the Apple Vision Pro (with micro-OLED displays), Meta’s Quest Pro and Quest 3 (both with mini-LCD displays), and the Big Screen Beyond. It usually takes me considerable time and effort, as well as multiple reshoots, to find the “sweet spot” for the other devices, but I got good pictures through the HO140 with minimal effort and only a few shots, which suggests a very large sweet spot in Hypervision’s optical design. Note that the HO140 is a prototype of unknown cost that I am comparing to production products, and I only have this one image to go by rather than a test pattern.

The picture below is from my Canon R5 with a 16mm lens, netting a FOV of 97.6° horizontal by 73.7° vertical. It was shot at 405MP and then reduced to 45MP to avoid moiré effects due to the “beat frequencies” between the camera sensor and the display devices with their color subpixels. All VR optics have pincushion distortion, which causes the pixel size to vary across the display and increases the chance of getting moiré in some regions.
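For reference, the quoted camera FOV follows from the standard rectilinear-lens formula applied to the R5's full-frame 36 x 24 mm sensor. This is a thin-lens approximation of my own; the published horizontal figure differs slightly because real lens projections do:

```python
import math

def rectilinear_fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angle of view for an ideal rectilinear lens."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

h_fov = rectilinear_fov_deg(36, 16)  # ≈ 96.7° horizontal (article quotes 97.6°)
v_fov = rectilinear_fov_deg(24, 16)  # ≈ 73.7° vertical, matching the article
```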

The level of sharpness throughout the HO140’s image relative to other VR headsets suggests that it could support a higher-resolution LCD panel with a smaller pixel size if it existed. Some significant chroma aberrations are visible in the outer parts of the image, but these could be largely corrected in software.

Compared to other VR-type headsets I have photographed, I was impressed by how far out into the periphery of the FOV the image maintains sharpness while supporting a significantly larger FOV than any other device I have photographed. What I can’t tell without being able to run other content, such as test patterns, is the contrast of the display and optics combination.

I suggest also reading Hypervision’s other white papers on their Technology & Research page. And if you want an excellent explanation of pancake optics, I recommend the one-hour-and-25-minute YouTube presentation by Arthur Rabner, CTO of Hypervision.

Sony XR (SXR)

Mechanical Ergonomics

AWE was my first time trying the new Sony XR (SXR) headset. In my CES 2024 coverage, I wrote about the ergonomic features I liked in Sony XR (and others compared to Apple Vision Pro). In particular, I liked the headband approach with the flip-up display, and my brief try with the Sony headset at AWE seemed to confirm the benefits of this design choice (which is very similar to the Lynx R1 headset), at least from the ergonomics perspective relative to the Apple Vision Pro.

Still, the SXR is pretty big and bulky, much more so than the AVP or Lynx. Having only had a short demo, I can’t say how comfortable it will be in extended use. As was the case with the HO140, I couldn’t control the content.

“Enterprise” Product

Sony has been saying that this headset is primarily aimed at “enterprise” (i.e., expensive, high-end) applications, and it is partnering with Siemens. It is much more practical than the Apple Vision Pro (AVP). The support on the head is better; it supports users wearing their glasses, and the display/visor flips up so you can see the real world directly. There is air circulation to the face and eyes. The headset also supports adjusting the distance from the headset to the eyes. The headset allows peripheral vision but does have a light shield for full VR operation. The headset is also supposed to support video passthrough, but that capability was not demonstrated. As noted in my CES article, the SXR headset puts the passthrough cameras in a much better position than the AVP.

Display Devices and Image Quality

Both the AVP and SXR use ~4K micro-OLED display devices. While Sony does the OLED assembly (applying the OLED and packaging) for both its own headset’s and the AVP’s display devices, the AVP reportedly uses a custom silicon backplane designed by Apple. The SXR’s display has ~20% smaller pixels than the AVP’s (6.3 vs. 7.5 microns), and the device size is also smaller. These size factors favor higher angular resolution with a smaller FOV, as is seen with the SXR.

The picture below was taken (handheld) with my 45MP Canon R5 camera with a 16mm lens, as with the HO140, but because I couldn’t use a tripod, I couldn’t get a 405MP picture with the camera’s sensor shifting. I was impressed that I got relatively good images handheld, which suggests the optics have a much larger sweet spot than the AVP’s, for example. Getting good images with the AVP requires my camera lens to be precisely aligned into the relatively small sweet spot of the AVP’s optics (using a 6-degree-of-freedom camera rig on a tripod). I believe the Apple Vision Pro’s small sweet spot and its need for eye-tracking-based lens correction (not just foveated rendering) are part of why the AVP has to be uncomfortably clamped against the user’s face.

Given that I was hand-holding both the headset and camera, I was rather surprised that the pictures came out so well (click on the image to see it in higher, 45mp resolution).

At least in my brief demo, the SXR optics’ image quality seemed better than the AVP’s. The images seemed sharper, with less chroma (color) aberration. The AVP seems heavily dependent on eye tracking to correct problems with its optics, and it does not always succeed.

Much more eye relief (enabling eyeglasses) but lower FOV

I was surprised by how much eye relief the SXR optics afforded compared to the AVP and BSB, which also use Micro-OLED microdisplays. Typically, the requirement for high magnification of the micro-OLED pixels compared to LCD pixels inherently makes eye relief more difficult. The SXR magnifies less, resulting in a smaller FOV, but also makes it easier optically for them to support more eye relief. But note, taking advantage of the greater eye relief will further reduce the FOV. The SXR headset has a smaller FOV than any other VR-type headset I have tried recently.

Novel Sony controllers were not a hit

While I will credit Sony for trying something new with the controllers, I didn’t find the finger trackpad or the ring controller to be great solutions. I talked with several people who tried them, and no one seemed to like either one. It is hard to judge control devices in a short demo; you must work with them for a while. Still, they didn’t make a good first impression.

VR Headset “Shootout” between AVP, MQP, Big Screen Beyond, Hypervision, and Sony XR

I have been shooting VR headsets with the Canon R5 with a 16mm lens for some time and have built up a large library of pictures. For the AVP, Big Screen Beyond (BSB), and Meta Quest Pro (MQP), I had both the headset and the camera locked down on tripods so I could center the lens in the sweet spot of the optics. For the Hypervision, the camera and headset were on tripods, but I only had a travel tripod, without my 6-degree-of-freedom rig or the time to precisely locate the headset’s optical sweet spot. The SXR picture was taken with me holding both the headset and the camera by hand.

Below are through-the-optics pictures of the AVP, BSB, MQP, Hypervision HO140, and SXR headsets, all taken with the same camera and lens combination and scaled identically. This is not a perfect comparison as the camera lens does not work identically to the eye (which also rotates), but it is reasonably close. The physically shorter and simpler 16mm prime (non-zoom) lens lets it get inside the eye box of the various headsets for the FOV it can capture.

FOV Comparison (AVP, SXR, BSB, HO140, MQ3/MQP)

While companies will talk about the number of horizontal and vertical pixels of the display device, the pixels at the periphery of the display are cut off by the optics, which tend to be circular. All VR headset optics have pincushion distortion, which results in higher resolution near the sweet spot (optical center), which is always toward the nose side and usually above center for VR headsets.

In the figure below, I have overlaid the FOV of the left eye of each headset on top of the HO140 picture. I had to extrapolate somewhat on the image circles at the top and bottom, as the headset FOVs exceeded the extent of the camera’s FOV. The HO140 supports up to a 2.9″ diagonal LCD (which does not exist yet), but it currently uses a 2.56″ 2160×2160 octagonal BOE LCD; its FOV extends so far beyond that of my camera lens that I used Hypervision’s figures.

As can be seen, the LCD-based headsets from Hypervision and Meta typically have larger FOVs than the micro-OLED-based headsets from Apple, Big Screen, and Sony. However, as will be discussed, the micro-OLED-based headsets have smaller pixels (both angularly and physically on the display device).

Center Pixels (Angular Size in PPD)

Because I was handholding the SXR, and its pixels are smaller than the AVP’s, I couldn’t get a super-high-resolution (405MP) image of the center of the FOV, and I didn’t have time to use a longer-focal-length lens to show the pixel boundaries. The SXR has roughly the same number of pixels as the AVP but a smaller FOV, so its pixels are angularly smaller than the AVP’s. I would expect the SXR to be near 60 pixels per degree (PPD) in the center of the FOV. The BSB has about the same FOV as the AVP but a ~2.5K micro-OLED compared to the AVP’s ~4K; thus, the BSB’s center pixels are about 1.5x bigger (linearly). The Hypervision display has a slightly smaller center pixel pitch than the MQP (and MQ3) but a massively bigger FOV.
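The pixel-size comparisons above follow from simple ratios. As a sketch (the "~4K" and "~2.5K" widths of 3840 and 2560 are my assumed round numbers, not published specs):

```python
# Linear center-pixel size ratio of BSB vs. AVP at roughly the same FOV:
avp_px_across = 3840   # assumed "~4K" horizontal pixel count
bsb_px_across = 2560   # assumed "~2.5K" horizontal pixel count
linear_ratio = avp_px_across / bsb_px_across  # BSB pixels ≈ 1.5x bigger
print(linear_ratio)  # 1.5

def average_ppd(pixels_across: float, fov_deg: float) -> float:
    """Crude average pixels-per-degree across the FOV; the optical center
    is higher because pincushion distortion concentrates pixels there."""
    return pixels_across / fov_deg
```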

The MQP (and the very similar MQ3) rotate the display device. To make it easier to compare pixel pitches, I included a rotated inset of the MQP pixels to match the alignment of the other devices. Note that the pictures below are all “through the optics” and thus include the headset’s optical magnification. I have indicated the angular resolution (in pixels per degree, PPD) for each headset’s center pixels. For the center-pixel pictures below, I used a 28mm lens to get more magnification and show sub-pixel detail for the AVP, BSB, and MQP. I only took 16mm-lens pictures of the HO140 and therefore rescaled that image based on the ratio of the lenses’ focal lengths.

The micro-OLED-based headsets require significantly more optical magnification than the LCD models. For example, the AVP has 3.2x (linearly) smaller display-device pixels than the MQP, but after the optics, its pixels are only ~1.82x smaller; in other words, the AVP magnifies its display by ~1.76x more than the MQP.
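The magnification arithmetic can be checked directly: if the AVP's device pixels are 3.2x smaller but appear only ~1.82x smaller after the optics, the AVP must magnify more by the ratio of the two:

```python
device_pixel_ratio = 3.2     # AVP device pixels are 3.2x smaller (linearly) than MQP's
apparent_pixel_ratio = 1.82  # after the optics, AVP pixels appear ~1.82x smaller
relative_magnification = device_pixel_ratio / apparent_pixel_ratio
print(round(relative_magnification, 2))  # ≈ 1.76
```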

Outer Pixels

I captured pixels from approximately the same distance from the optical center of the lens for each headset. The AVP’s “foveated rendering” makes it look worse than it is, but you can still see the pixel grid with the others. Of the micro-OLED headsets, the BSB and SXR seem to do the best in terms of sharpness in the periphery. The Hypervision HO140’s pixels seem much less distorted and blurry than any of the other headsets’, including the MQP and MQ3, which have much smaller FOVs.

Micro-OLED vs. Mini-LCD Challenges

Micro-OLEDs are made by applying OLEDs on top of a CMOS substrate. CMOS transistors provide a high current per unit area, and all the transistors and circuitry sit underneath the OLED pixels, so they don’t block light. These factors enable relatively small pixels of 6.3 to 10 microns. However, CMOS substrates are much more expensive per unit area, and modern semiconductor fabs limit CMOS devices to about a 1.4-inch diagonal (ignoring expensive, low-yielding “reticle-stitched” devices).

A basic issue with OLEDs is that the display device must provide the power/current to drive each OLED. In the case of LCDs, only a small amount of capacitance has to be driven to change a pixel, after which there is virtually no current. The table on the right (which I discussed in 2017) shows the transistor mobility and the process requirements for the transistors of various display backplanes. The current needed by an emissive display device like an OLED or LED requires crystalline silicon (e.g., CMOS) or much larger thin-film transistors on glass. There are also issues with the size and resistivity of the wires that provide the current, as well as heat.

The OLED’s requirement for significant current/power limits how small the pixels can get on a given substrate/technology. Thin-film transistors have to be physically big to supply the current. For example, the Apple Watch Ultra’s thin-film-transistor OLED display has 326 PPI (~78-micron pixels), more than 10x larger linearly (100x the area) than the Apple Vision Pro’s pixels, even though both are “OLEDs.”

Another issue caused by trying to support large FOVs with small devices is that the higher magnification reduces eye relief. Most of the “magnification” comes from moving the device closer to the eye. Thus, LCD headsets tend to have more eye relief. Sony’s XR headset is an exception because it has enough eye relief for glasses but does so with a smaller FOV than the other headsets.

Small LCDs used in VR displays have different challenges. They are made on glass substrates, and the transistors and circuitry must be larger. Because they are transmissive, this circuitry in the periphery of each pixel blocks light and causes more of a screen door effect. The cost per unit area is much lower than that of CMOS, and LCD devices can be much larger. Thus, less aggressive optical magnification is required for the same FOV with LCDs.

LCDs face a major challenge in making pixels smaller to support higher resolution. As the pixels get smaller, the circuitry becomes bigger relative to the pixel, blocking more light and causing a worse screen-door effect. To make the pixels smaller, manufacturers must develop higher-performance thin-film transistors and lower-resistance interconnects to keep from blocking too much light. This subject is discussed in an Innolux research paper published by SPIE in October 2023 (free to download). Innolux discusses how to go from today’s typical “small” LCD pixel of 1200 PPI (~21 microns) to their research device with 2117 PPI (~12 microns) to achieve a 3840 x 3840 (4K by 4K) display in a 2.56″ diagonal device. Hypervision’s HO140 white paper discusses Innolux’s 2022 research prototype with the same pixel size but with 3240×3240 pixels and a 2.27-inch panel, as well as the current prototype. The current HO140 uses a BOE 2.56″ 2160×2160 panel with 21-micron pixels, as the Innolux panel is not commercially available.

Some micro-OLED and small LCD displays for VR

YouTuber Brad Lynch of SadlyItsBradley, in an X post, listed the PPI of some common VR headset display devices. I have added more entries and the pixel pitch in microns. Many VR panels are not rectangular and may have cut corners on the bottom (and top). The size of the panels given in inches is for the longest diagonal. As you can see, Innolux’s prototype pixels are almost 2x smaller (linearly) than those of the VR LCDs in volume production today:

  • Vive: 3.6″, 1080p, ~360 PPI (70 microns)
  • Rift S*: 5.5″, 1280P, ~530 PPI (48 microns)
  • Valve Index: 3.5″, 1440p, ~600 PPI (42 microns)
  • Quest 2*: 5.5″, 1900p, ~750 PPI (34 microns)
  • Quest 3: ~2.55″ 2064 × 2208, 1050 PPI (24 microns) – Pancake Optics
  • Quest Pro: 2.5″, 1832×1920, ~1050 PPI (24 microns) – Might be BOE 2.48″ miniLED LCD
  • Varjo Aero: 3.2″, 2880p, ~1200 PPI (21 microns)
  • Pico 4: 2.5″, 2160p, 1192 PPI (21 microns)
  • BOE 2.56″ LCD, 2160×2160, 1192 PPI (21 microns) – Used in Hypervision HO140 at AWE 2024
  • Innolux 2023 Prototype 2.56″, 3840×3840, 2117 ppi (12 microns) -Research prototype
  • Apple Vision Pro 1.4″ Micro-OLED, 3,660×3,200, 3386 PPI (7.5 microns)
  • SeeYa 1.03″ Micro-OLED, 2560×2560, 3528 PPI (7.2 microns) – Used in Big Screen Beyond
  • Sony ~1.3″ Micro-OLED, 3552 x 3840, 4032 PPI (6.3 microns) – Sony XR
  • BOE 1.35″ Micro-OLED 3552×3840, 4032 PPI (6.3 microns) – Demoed at Display Week 2024
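The pixel pitches in the list are simple unit conversions from PPI (there are 25,400 microns in an inch). A small sketch that reproduces a few of the entries above:

```python
def ppi_to_pitch_um(ppi: float) -> float:
    """Convert pixels-per-inch to pixel pitch in microns."""
    return 25_400.0 / ppi

# Spot-checking entries from the list above:
for name, ppi in [("Valve Index", 600), ("Quest 3", 1050),
                  ("Innolux 2023 prototype", 2117),
                  ("Apple Vision Pro", 3386)]:
    print(f"{name}: {ppi_to_pitch_um(ppi):.1f} microns")
```

Running this gives ~42.3, ~24.2, ~12.0, and ~7.5 microns, matching the rounded figures in the list.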

In 2017, I wrote Near Eye Displays (NEDs): Gaps In Pixel Sizes (table from that article on the right), which talks about what I call the pixel-size gap between microdisplays (on silicon) and small LCDs (on glass). While pixel sizes have gotten smaller for both micro-OLEDs and VR LCDs in the last ~7 years, a sizable gap remains.

Contrast – Factoring the Display and Pancake Optics

Micro-OLEDs at the display level certainly have a better inherent black level and can turn pixels completely off. LCDs work by blocking light using cross-polarization, which results in imperfect blacks. Thus, with micro-OLEDs, a large area of black will look black, whereas with LCDs, it will be dark gray.

However, we are not looking at the displays directly but through optics, specifically pancake optics, which dominate new VR designs today. Pancake optics, which use polarized light and quarter waveplates (QWPs) to recirculate the image twice through parts of the optics, are prone to internal reflections that cause “ghosts” (somewhat out-of-focus reflections) and contrast loss.

Using smaller micro-OLEDs requires more “aggressive” optical designs with higher magnification to support a wide FOV. These more aggressive optical designs tend to be more expensive, less sharp, and more prone to polarization loss. Any loss of polarization in pancake optics will cause a loss of contrast and ghosting. There seems to be a tendency with pancake optics for stray light to bounce around and end up in the periphery of the image, causing a glow when the periphery is supposed to be black.

For example, the AVP is known to have an outer “glow” when watching movie content on a black background. Most VR headsets default to a “movie or home theater” environment rather than a black background. While it may be for aesthetics, the engineer in me thinks it might help hide the glow. People online suggest enabling a background environment on the AVP for those bothered by the glow on a black background.

The complaints of outer glow when watching movies seem more prevalent with micro-OLED headsets, but this observation is hardly scientific. It could be just that the micro-OLEDs have a better black level, making the glow more noticeable, but it might also be caused by their more aggressive optical magnification (something that might be or has been (?) studied). My key point is that it is not as simple as considering the display’s inherent contrast; you have to consider the whole optical system.
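To illustrate that key point numerically, here is a toy model (my own illustration, not from Hypervision or any measurement) of how stray light in the optics can dominate effective contrast regardless of the panel's spec:

```python
def system_contrast(display_contrast: float, stray_fraction: float) -> float:
    """Toy full-on/full-off contrast of a display plus its optics.

    The black level reaching the eye is the display's own leakage
    (1/display_contrast of white) plus stray light scattered by the
    optics (stray_fraction of white). Illustrative model only.
    """
    black_level = 1.0 / display_contrast + stray_fraction
    return 1.0 / black_level

# Assumed numbers: with even 0.5% stray light in pancake optics, a
# "1,000,000:1" micro-OLED and a 1,000:1 LCD end up far closer than
# the panel specs alone would suggest:
print(round(system_contrast(1_000_000, 0.005)))  # -> 200
print(round(system_contrast(1_000, 0.005)))      # -> 167
```

Under this (assumed) stray-light level, the million-to-one panel spec buys surprisingly little at the system level, which is consistent with the glow complaints discussed above.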

LightPolymers’ Alternative to Plastic Films for QWP & Polarizers

LightPolymers has a lyotropic (water-based) liquid crystal (LC) material that can form optical surfaces such as QWPs and polarizers. Imagine Optix, whose acquisition by Meta this blog broke the news of in December 2021 (Exclusive: Imagine Optix Bought By Meta), was also developing LC-based polarized-light-control films.

Like Imagine Optix, LightPolymers has been coating plastic films with LCs, but LightPolymers is developing the ability to apply its LCs directly to flat and curved lenses, which is a potential game changer. In April 2024, LightPolymers and Hypervision announced the joint development of this lens-coating technology and had a poster in Hypervision’s booth showing it (right).

3M Dominates Polarized Light Plastic Films for Pancake Optics

3M is today the dominant player in polarized light-control plastic films and is even more dominant in these films for pancake optics. At 3M’s SID Display Week booth in June 2024, they showed the ByteDance PICO4, MQP, and MQ3 pancake optics using 3M polarization films. Their films are also used in the Fresnel lens-based Quest 2. It is an open secret (but 3M would not confirm or deny) that the Apple Vision Pro also uses 3M polarization films.

According to 3M:

3M did not invent the optical architecture of pancake lenses. However, 3M was the first company to successfully demonstrate the viability of pancake lenses in VR headsets by combining it with its patented reflective polarizer technology.

That same article supports Kopin’s (now spun out as Lightning Silicon) claim to have been the first to develop pancake optics. Kopin has been demonstrating pancake optics combined with its micro-OLEDs for years, and they are used in Panasonic-ShiftAll headsets.

3M’s 2017 SPIE paper Folded Optics with Birefringent Reflective Polarizers discusses the use of their films (and also mentions Kopin developments) in cemented (e.g., AVP) and air-gap (e.g., MQP and MQ3) pancake optics. The paper also discusses how their polarization films can be made (with heat softening) to conform to curved optics such as the AVP’s.

LightPolymers’ Potential Advantage over Plastic Films

The most obvious drawbacks of plastic films are that they are relatively thick (on the order of 70+ microns per film, and there are typically multiple films per lens) and are usually attached using adhesive coatings. The thickness, particularly when trying to conform to a curved surface, can cause issues with polarized light. The adhesives introduce some scatter, resulting in some loss of polarization.
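To put the claimed savings in perspective, here is a back-of-the-envelope comparison of a film stack versus a direct LC coating, using the ~70-micron film thickness and ~10x reduction figures from the text and assuming three films per lens (as on the Meta Quest 3's eye-side lens):

```python
films_per_lens = 3        # assumed: QWP + absorptive + reflective polarizer
film_thickness_um = 70.0  # typical plastic film, per the text (70+ microns)
claimed_reduction = 10.0  # LightPolymers' claimed up-to-10x thinner coating

film_stack_um = films_per_lens * film_thickness_um
coated_stack_um = film_stack_um / claimed_reduction
print(film_stack_um, coated_stack_um)  # -> 210.0 21.0
```

So a ~210-micron film stack could, per the claim, shrink to on the order of ~21 microns of coating, with no adhesive layers.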

By applying its LCs directly to the lens, LightPolymers claims it could reduce the thickness of the polarization-control layers (QWPs and polarizers) by as much as 10x and eliminate the adhesives.

In the photos below (taken with a 5x macro lens), I used a knife to slightly separate the edges of the films from the Meta Quest 3’s eye-side and display-side lenses to show them. On the eye-side lens, there are three films, which are thought to be a QWP, absorptive polarizer, and reflective polarizer. On the display-side lens, there are two films, one of which is a QWP, and the other may be just a protective film. In the eye-side lens photo, you can see where the adhesive has bubbled up after separation. The diagram on the right shows the films and paths for light with the MQ3/MQP pancake optics.

Because LightPolymers’ LC coating is applied to each lens, it could also be applied/patterned to improve or compensate for other issues in the optics.

Current State of LightPolymer’s Technology

LightPolymers is already applying its LC to plastic films and flat glass. Their joint agreement with Hypervision involves developing manufacturable methods for directly applying the LC coatings to curved lens surfaces. This technology will take time to develop. LightPolymers’ business is making the LC materials; it then works with partners such as Hypervision to apply the LC to their lenses. They say the equipment necessary to apply the LCs is readily available and low-cost (for manufacturing equipment).

Conclusion

Hypervision has demonstrated the ability to design very wide FOV pancake optics with a large optical sweet spot that maintain a larger area of sharpness than any other design I have seen.

Based on my experience in both semiconductors and optics, I think Hypervision makes a good case in their white paper 60PPD: by fast LCD but not by micro OLED that getting to a wide FOV while approaching “retinal” 60PPD is more likely to happen using LCD technology than micro-OLEDs.

Fundamentally, micro-OLEDs are unlikely to get much bigger than 1.4″ diagonally, at least commercially, for many years, if not more than a decade. While they could make the pixels smaller, today’s pancake optics struggle to resolve ~7.5-micron pixels, much less smaller ones.

On the other hand, several companies, including Innolux and BOE, have shown research prototypes of 12-micron LCD pixels, half the (linear) size of today’s high-volume VR LCDs. If BOE or Innolux went into production with these displays, it would enable Hypervision’s HO140 to reach about 48 PPD in the center with a roughly 140-degree FOV, and only small incremental changes would get them to 60 PPD with the same FOV.
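A rough sketch of the scaling implied here: if the optics (and thus the FOV) stay the same, center PPD scales with the panel's linear pixel count, so moving from today's 2160-pixel-wide panel to a 3840-pixel-wide panel of the same size multiplies PPD by ~1.78. The numbers below are from the text or implied by it, not measured:

```python
current_pixels = 2160     # BOE 2.56" panel in today's HO140 (per the text)
future_pixels = 3840      # Innolux 3840x3840 research panel, same 2.56" size
claimed_future_ppd = 48   # center PPD claimed above for the 12-micron panel

scale = future_pixels / current_pixels            # linear pixel-count gain
implied_current_ppd = claimed_future_ppd / scale  # what today's panel implies
print(round(scale, 2), round(implied_current_ppd, 1))  # -> 1.78 27.0
```

In other words, the ~48 PPD claim is consistent with the current 21-micron panel delivering on the order of 27 PPD in the center through the same optics.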

Appendix: More on Hypervision

I first encountered Hypervision at AWE 2021 with their blended Fresnel lens 240-degree design, but as this blog primarily covered optical AR, it slipped under my radar. Since then, I have been covering optical and passthrough mixed reality, particularly passthrough MR using pancake optics. By AR/VR/MR 2023, Hypervision had demonstrated a single-lens (per eye) 140-degree design and a blended dual-lens-and-display 240-degree FOV (diagonal) pancake optics design.

These were vastly better than their older Fresnel designs and demonstrated Hypervision’s optical design capability. In May 2023, passthrough MR startup Lynx and Hypervision announced they were collaborating. For some more background on my encounters with Hypervision, see Hypervision Background.

Hypervision has been using its knowledge of pancake optics to analyze the Apple Vision Pro’s optical design, which I have reported on in Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry,” Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall, Apple Vision Pro – Influencing the Influencers & “Information Density,” and Apple Vision Pro (Part 4)—Hypervision Pancake Optics Analysis.

AWE 2024 Panel: The Current State and Future Direction of AR Glasses

Introduction

At AWE 2024, I was on a panel discussion titled “The Current State and Future Direction of AR Glasses.” Jeri Ellsworth, CEO of Tilt Five, Ed Tang, CEO of Avegant, Adi Robertson, Senior Reporter at The Verge, and I were on the panel, with Jason McDowall of The AR Show moderating. Jason did an excellent job of moderating and keeping the discussion moving. Still, with only 55 minutes, including questions from the audience, we could only cover a fraction of the topics we had considered discussing. I’m hoping to reconvene this panel sometime. I also want to thank Dean Johnson, Associate Professor at Western Michigan University, who originated the idea and helped me organize this panel. AWE’s video of our panel is available on YouTube.

First, I will outline what was discussed in the panel. Then, I want to follow up on small FOV optical AR glasses and some back-and-forth discussions with AWE Legend Thad Starner.

Outline of the Panel Discussion

The panel covered many topics, and below, I have provided a link to each part of our discussion and added additional information and details for some of the topics.

  • 0:00 Introductions
  • 2:19 Apple Vision Pro (AVP) and why it has stalled. It has been widely reported that AVP sales have stalled. Just before the conference, The Information reported that Apple had suspended Vision Pro 2 development and is now focused on a lower-cost version. I want to point out that a 1984 128K Mac would cost over $7,000 adjusted for inflation, and the original 1977 Apple II 4K computer (without a monitor or floppy drive) would cost about $6,700 in today’s dollars. I contend that utility, not price, is the key problem with AVP sales volume and that Apple is thus drawing the wrong conclusion.
  • 7:20 Optical versus Passthrough AR. The panel discusses why their requirements are so different.
  • 11:30 Mentioned Thad Starner and the desire for smaller-FOV optical AR headsets. It turns out that Thad Starner attended our panel, but as I later found out, he arrived late and missed my mentioning him. Thad later questioned the panel. In 2019, I wrote the article FOV Obsession, which discussed Thad’s SPIE AR/VR/MR presentation about smaller FOVs. Thad is a Georgia Institute of Technology professor and a part-time Staff Researcher at Google (including on Google Glass). He has continuously worn AR devices since his research work at MIT’s Media Lab in the 1990s.
  • 13:50 Does “tethering make sense” with cables or wirelessly?
  • 20:40 Does an AR device have to work outside (in daylight)?
  • 26:49 The need to add displays to today’s Audio-AI glasses (ex. Meta Ray-Ban Wayfarer).
  • 31:45 Making AR glasses less creepy?
  • 35:10 Does it have to be a glasses form factor?
  • 35:55 Monocular versus Biocular
  • 37:25 What did Apple Vision Pro get right (and wrong) regarding user interaction?
  • 40:00 I make the point that eye tracking and gesture recognition on the “Apple Vision Pro is magical until it is not,” paraphrasing Adi Robertson, and I then added, “and then it is damn frustrating.” I also discuss that “it’s not truly hands-free if you have to make gestures with your hands.”
  • 41:48 Waiting for the Superman [savior] company. And do big companies help or crush innovation?
  • 44:20 Vertical integration (Apple’s big advantage)
  • 46:13 Audience Question: When will AR glasses replace a smartphone (enterprise and consumer)
  • 49:05 What is the first use case to break 1 million users in Consumer AR?
  • 49:45 Thad Starner – “Bold Prediction” that the first large application will be with small FOV (~20 degrees), monocular, and not centered in the user’s vision (off to the ear side by ~8 to 20 degrees), and monochrome would be OK. A smartphone is only about 9 by 15 degrees FOV [or ~20 degrees diagonally when a phone is held at a typical distance].
  • 52:10 Audience Question: Why aren’t more companies going after OSHA (safety) certification?

Small FOV Optical AR Discussion with Thad Starner

As stated in the outline above, Thad Starner arrived late and missed my discussion of smaller FOVs that mentioned Thad, as I learned after the panel. Thad, who has been continuously wearing AR glasses and researching them since the mid-1990s, brings an interesting perspective. Since I first saw and met him in 2019, he has strongly advocated for AR headsets having a smaller FOV.

Thad also states that the AR headset should have a monocular (single-eye) display positioned 8 to 20 degrees to the ear side of the user’s straight-ahead vision. He also suggests that monochrome is fine for most purposes. Thad said his team will soon publish papers backing up these contentions.

In the sections below, I went from the YouTube transcript and did some light editing to make what was said more readable.

My discussion from earlier in the panel:

11:30 Karl Guttag – I think a lot of the AR or optical see-through gets confabulated with what was going on in VR because VR was cheap, and it was easy to make a wide field of view by sticking a cell phone with some cheap optics in front of your face. You get a wide field of view, and people went crazy about that. I made this point years ago on my blog [2019 article FOV Obsession]. Thad Starner makes this point: he’s one of our Legends at AWE, and I took it to heart many years ago at SPIE AR/VR/MR 2019.

The problem is that as soon as you go beyond about a 30-degree field of view, even projecting forward [with technology advancements], you’re in a helmet, something looking like Magic Leap. And Magic Leap ended up in Nowheresville. [Magic Leap] ended up with 25 to 30% see-through, so it’s not really that good at see-through, and yet it hasn’t got the image quality that you would get from a display shot right into your eyes. You could get a better image on an Xreal or something like that.

People are confabulating too many different specs, so they want a wide field of view. The problem is, as soon as you say 50 degrees, and then you say, yeah, I need spatial recognition, I want to do SLAM, I want to do this and that, you’ve now spiraled into the helmet. Meta was talking the other day on the other panels and said they’re looking at about 50 grams [for the Meta Ray-Bans], and my glasses are 23 grams. As soon as you say 50-degree field of view, you’re over 100 grams and heading to the Moon as you add more and more cameras and all this other stuff. I think that’s one of our bigger problems with AR, really optical AR.

The experiment is going to be played out because many companies are working on adding displays to so-called AI audio glasses. We’re going to see if that works because companies are getting ready to make glasses with a 20- to 30-degree field of view tied into AI and audio features.

Thad Starner’s comments and the follow-up discussion during the Q&A at the end of the panel:

AWE Legend Thad Starner Wearing Vuzix’s Ultralight Glasses – After the Panel

49:46 Hi, my name is Thad Starner. I’m a professor at Georgia Tech. I’m going to make a bold prediction here that the future, at least the first system to sell over a million units, will be a small field of view, monocular, non-line-of-sight display, and monochrome is okay. The reason I say that is, number one, I’ve done different user studies in my lab that we’ll be publishing soon on this subject. The other thing is that our phones, which are the most popular interface out there, are only 9 degrees by 16 degrees field of view. Putting something outside of the line of sight means that it doesn’t interrupt you while you’re crossing the street or driving or flying a plane, right? We know these numbers, so between 8 and 20 degrees towards the ear and plus or minus 8 degrees. I’m looking at Karl [Guttag] here so he can digest all these things.

Karl – I wrote a whole article about it [FOV Obsession]

Thad – And not having a pixel in line of sight, so now feel free to pick me apart and disagree with me.

Jeri – I want to know a price point.

Thad – I think the first market will be captioning for the hard of hearing, not for the deaf. Also, possibly transcription, not translation. At that price point, you’re talking about making reading glasses for people instead of hearing aids. There’s a lot of pushback against hearing aids, but people tend to accept reading glasses, so I’d say you’re probably in the $200 to $300 range.

Ed – I think your prediction is spot on, minus the color green. The only thing is, I think the green is not going to fly.

Thad – I said monochrome is okay.

Ed – I think the monocular field of view is going to be an entry-level product. I think you will see products that fit that category, with roughly that field of view and roughly that offset angle [not in the center of view], in the beginning. I agree with that, but I think that’s just the first step. After that, I think you will see a lot of products that do a lot more than monocular, monochrome, offset displays, going to a larger field of view and binocular. I think that will happen pretty quickly.

Adi – It does feel like somebody tries to do that every 18 months, though, like Intel tried to make a pair of glasses that did that. It’s a little bit what North did. I guess it’s just a matter of throwing the idea at the wall because I think it’s a good one until it takes.

I was a little taken aback to have Thad call me out as if I had disagreed with him when I had made the point about the advantages of a smaller FOV earlier. Only after the presentation did I find out that he had arrived late. I’m not sure what comment I made that made Thad think I was advocating for a larger FOV in AR glasses.

I want to add that there can be big differences between what consumers and experts will accept in a product. I’m reminded of a story I read in the early 1980s when there was a big debate between very high-resolution monochrome versus lower-resolution color (back then, you could only have one or the other with CRTs) that the head of IBM’s monitor division said, “Color is the least necessary and most desired feature in a monitor.” All the research suggested that resolution was more important for the tasks people did on a computer at the time, but people still insisted on color monitors. Another example is the 1985 New Coke fiasco, in which Coke’s taste studies proved that people liked New Coke better, but it still failed as a product.

In my experience, a big factor is whether the person is trained to use the device for enterprise or military use versus buying it for their own enjoyment. The military has used monochrome displays in devices, including night vision and heads-up displays, for decades. I like to point out that the requirements change depending on whether the user “is paid to use versus is paying to use” the device. Enterprises and the military care about whether the product gets the job done and pay someone to use the device. The consumer has different criteria. I will also agree that there are cases where the user is motivated to be trained, such as Thad’s hard-of-hearing example.

Conclusion on Small FOV Optical AR

First, I agree with Thad’s comments about the smaller FOV and have stated such before. There are also cases outside of enterprise and industrial use where the user is motivated to be trained, such as Thad’s hard-of-hearing example. But while I can’t disagree with Thad or his studies that show having a monocular monochrome image located outside the line of sight is technically better, I think consumers will have a tougher time accepting a monocular monochrome display. What you can train someone to use differs from what they would buy for themselves.

Thad makes a good point that having a biocular display directly in the line of sight can be problematic and even dangerous. At the same time, untrained people don’t like monocular displays outside the line of sight. It becomes (as Ed Tang said in the panel) a point of high friction to adoption.

Based on the many designs I have seen for AR glasses, we will see this all played out. Multiple companies are developing optical see-through AR glasses with monocular green MicroLEDs, color X-cube-based MicroLEDs, and LCOS-based displays with glass form-factor waveguide optics (both diffractive and reflective).

Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry”

Introduction

The optics R&D company Hypervision provided a detailed design analysis of the Apple Vision Pro’s optical design in June 2023 (see Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis). Hypervision just released an interesting analysis exploring whether the micro-OLEDs used by the Apple Vision Pro or the LCDs used by Meta and most others can support a high angular resolution of 60 pixels per degree (PPD) and a wide FOV. Hypervision’s report is titled 60PPD: by fast LCD but not by micro OLED. I’m going to touch on some highlights from Hypervision’s analysis. Please see their report for more details.

I Will Be at AWE Next Week

AWE is next week. I will be on the PANEL: Current State and Future Direction of AR Glasses at AWE on Wednesday, June 19th, from 11:30 AM to 12:25 PM. I still have a few time slots. If you want to meet, please email meet@kgontech.com.

AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, which is available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

The AWE speaker discount code SPKR24D provides a 20% discount. You can register for AWE here.

Apple Vision Pro Sharpness Study at AWE 2024 – Need Help

As Hypervision’s analysis finds, plus reports I have received from users, the Apple Vision Pro’s sharpness varies from unit to unit. AWE 2024 is an opportunity to sample many Apple Vision Pro headsets to see how the focus varies from unit to unit. I will be there with my high-resolution camera.

While not absolutely necessary, it would be helpful if you could download my test pattern, located here, and install it on your Apple Vision Pro. If you want to help, contact me via meet@kgontech.com or flag me down at the show. I will be spending most of my time on the Expo floor. If you participate, you can remain anonymous or receive a mention of you or your company at the end of a related article thanking you for your participation. I can’t promise anything, but I thought it would be worth trying.

AVP Blurry Image Controversy

My article Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 was the first to report that the AVP was a little blurry. I compared high-resolution pictures showing the same FOV with the AVP and the Meta Quest 3 (MQ3) in that article.

This article caused controversy and was discussed in many forums and by influencers, including Linus Tech Tips and Marques Brownlee (see Apple Vision Pro—Influencing the Influencers & “Information Density” and “Controversy” of the AVP Being a Little Blurry Discussed on Marques Brownlee’s Podcast and Hugo Barra’s Blog).

I have recently been taking pictures through Bigscreen Beyond’s (BSB) headset and decided to compare it with the same test (above right). In terms of optical sharpness, it is between the AVP and the MQ3. Interestingly, the BSB headset has a slightly lower angular resolution (~32 pixels per degree) than the AVP (~40 ppd) in the optically best part of the lens where these crops were taken. Yet, the text and line patterns look better on the BSB than AVP.

Hypervision’s Correction – The AVP is Not Out of Focus; the Optics Are Blurry

I speculated that the AVP seemed out of focus in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3. Hypervision corrected me that the softness could not be due to being out of focus. Hypervision has found that sharpness varies from one AVP to the next. The AVP’s best focus nominally occurs with an apparent focus of about 1 meter. Hypervision pointed out that if the headset’s device focus were slightly wrong, it would simply shift the apparent focus distance as the eye/camera would adjust to a small change in focus (unless it was so far off that eye/camera focusing was impossible). Thus, the blur is not a focus problem but rather a resolution problem with the optics.

Hypervision’s Analysis – Tolerances Required Beyond that of Today’s Plastic Optics

The AVP has very aggressive and complex pancake optics for a compact form factor while supporting a wide FOV with a relatively small Micro-OLED. Most other pancake optics have two elements, which mate with a flat surface for the polarizers and quarter waveplates that manipulate the polarized light to cause the light to pass through the optics twice (see Meta example below left). Apple has a more complex three-lens optic with curved polarizers and quarter waveplates (below right).

Based on my studies of how the AVP dynamically corrects optical imperfections such as chroma aberrations based on eye tracking, I would call the AVP’s optics “unstable”: without dynamic correction, the imperfections would appear much worse.

Hypervision RMS Analysis

Hypervision did an RMS analysis comparing a larger LCD panel with a small Micro-OLED. It should probably come as no surprise that requiring about 1.8x (2.56/1.4) greater magnification makes everything more critical. The problem, as Hypervision points out, is that Micro-OLED on silicon can’t get bigger for many years due to semiconductor manufacturing limitations (reticle limit). Thus, the only way for Micro-OLED designs to support higher resolution and wider FOV is to make the pixels smaller and the optics much more difficult.

Hypervision Monte-Carlo Analysis

Hypervision then did a Monte-Carlo analysis factoring in optical tolerances. Remember, we are talking about fairly large plastic-molded lenses that must be reasonably priced, not something you would pay hundreds of dollars for in a large camera or microscope.

Hypervision’s 140 Degree FOV with 60PPD Approach

Hypervision believes that the only practical path to ~60PPD and ~140-degree FOV is with a 2.56″ LCD display. The natural progression of LCDs toward smaller pixels will enable higher resolution without demanding more magnification than the optics can support.

Conclusion

Overall, Hypervision makes a good case that current designs with Micro-OLED with pancake optics are already pushing the limits of reasonably priced optics. Using technology with somewhat bigger pixels makes resolving them easier, and having a bigger display makes supporting a wider FOV less challenging.

It might be that the AVP is slightly blurry because it is already beyond the limits of a manufacturable design. So the natural question is: if the AVP already has problems, how could anyone support higher resolution and a wider FOV?

The size of Micro-OLEDs built on silicon backplanes is limited by the semiconductor reticle to a chip size of about 1.4″ diagonally, at least without resorting to multi-reticle “stitching” (which is possible but not practical for a cost-effective device). Thus, for Micro-OLEDs to increase resolution, the pixels must be smaller, requiring even more magnification from the optics. Increasing the FOV then requires even more optical magnification of ever-tinier pixels.
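The reticle-limit squeeze can be illustrated with a quick calculation (my hypothetical `pixel_pitch_um` helper; the resolutions are only examples, with 3.5K×4K taken from the Sony spec discussed later):

```python
# Hypothetical illustration of the reticle-limit squeeze: with the panel
# diagonal capped (~1.4 inch for Micro-OLED on silicon), every resolution
# increase must come entirely from smaller pixels.

MM_PER_INCH = 25.4

def pixel_pitch_um(h_pixels: int, v_pixels: int, diag_in: float = 1.4) -> float:
    """Pixel pitch (microns) for a panel of fixed diagonal size."""
    diag_px = (h_pixels**2 + v_pixels**2) ** 0.5
    return diag_in * MM_PER_INCH * 1000 / diag_px

print(f"{pixel_pitch_um(3500, 4000):.1f} um")  # ~6.7 um at 3.5K x 4K
print(f"{pixel_pitch_um(7000, 8000):.1f} um")  # double the resolution -> half the pitch
```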

LCDs have issues, particularly with black levels and contrast. Smaller illumination LEDs with local dimming may help, but they have not proven to work as well as micro-OLEDs.

Cogni Trax & Why Hard Edge Occlusion Is Still Impossible (Behind the Magic Trick)

Introduction

As I wrote in 2012’s Cynics Guide to CES—Glossary of Terms, when you see a demo at a conference, “sometimes you are seeing a ‘magic show’ that has little relationship to real-world use.” I saw the Cogni Trax hard edge occlusion demo last week at SID Display Week 2024, and it epitomized the concept of a “magic show.” I have been aware of Cogni Trax for at least three years (and commented about the concept on Reddit), and I discovered they quoted me (I think a bit out of context) on their website (more on this later in the Appendix).

Cogni Trax has reportedly raised $7.1 million in 3 funding rounds over the last ~7 years, which I plan to show is unwarranted. I contacted Cogni Trax’s CEO (and former Apple optical designer on the Apple Vision Pro), Sajjad Khan, who was very generous in answering questions despite his knowing my skepticism about the concept.

Soft- Versus Hard-Edge Occlusion

Soft Edge Occlusion

In many ways, this article follows up on my 2021 Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users, which detailed why putting an LCD in front of glass results in very “soft” occlusion.

Nobody will notice a pixel-sized (angularly) dot on a person’s glasses. If such dots were noticeable, every dust particle on a person’s glasses would be distracting. That is because a dot only a few millimeters from the eye is highly out of focus, and light rays from the real world go around the dot before being focused by the eye’s lens. That pixel-sized dot will insignificantly dim a region spanning several thousand pixels of the virtual image. As discussed in the Magic Leap soft occlusion article, each of the Magic Leap 2’s dimming pixels covers ~2,100 pixels (angularly) in the virtual image and has some dimming effect on hundreds of thousands of pixels.
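The geometry above can be sketched with a back-of-envelope calculation. The numbers below (4mm pupil, occluder 20mm from the pupil, 40 pixels per degree) are my own illustrative assumptions, not figures from the article:

```python
import math

# Back-of-envelope sketch of why a dot on the glasses can't cast a sharp
# shadow: the eye's pupil acts as the aperture, so an occluder at distance d
# in front of the pupil is blurred over roughly pupil_diameter / d radians.

def blur_angle_deg(pupil_mm: float, occluder_dist_mm: float) -> float:
    """Approximate angular blur (degrees) of an occluder near the eye."""
    return math.degrees(pupil_mm / occluder_dist_mm)

blur = blur_angle_deg(pupil_mm=4.0, occluder_dist_mm=20.0)  # ~11.5 degrees
ppd = 40  # assumed pixels per degree of the virtual image
pixels_across = blur * ppd
print(f"blur ~{blur:.1f} deg -> dims a region ~{pixels_across:.0f} pixels across")
```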

Hard Edge Occlusion (Optical and Camera Passthrough)

“Hard Edge Occlusion” means precise, pixel-by-pixel light blocking. With camera passthrough AR (such as Apple Vision Pro), hard edge occlusion is trivial; one or more camera pixels are replaced by one or more pixels in the virtual image. Even though masking pixels is trivial with camera passthrough, there is still a non-trivial problem of getting the hard edge masking perfectly aligned to the real world. With passthrough mixed reality, the passthrough camera, with its autofocus, has already brought the real world into focus so it can be precisely masked.

With optical mixed reality hard edge occlusion, the real world must also be brought into focus before it can be precisely masked. Rather than going to a camera, the real world’s light goes to a reflective masking spatial light modulator (SLM), typically LCOS, before being combined optically with the virtual image.

In Hard Edge (Pixel) Occlusion—Everyone Forgets About Focus, I discuss the University of Arizona’s optical solution for hard edge occlusion. Their solution has a set of optics that focuses the real world onto an SLM for masking. Then, a polarizing beam-splitting cube combines the masked result with a micro-display image (the required polarization change comes from two passes through a quarter waveplate, not shown). While the University of Arizona patent mentions using a polarizing beam splitter to combine the images, it fails to show or mention the quarter waveplate needed between the SLM and beam splitter for this to work. One of the inventors, Hong Hua, is a University of Arizona professor and was a consultant to Magic Leap, and the patent was licensed to Magic Leap.

Other than being big and bulky, the optical problems with the University of Arizona hard edge occlusion design include:

  • It only works to hard edge occlude at a single distance set by the focusing optics.
  • The real world is “flattened” to be at the same focus as the virtual world.
  • Polarization dims the real world by at least 50%. Additionally, viewing a polarized display device (like a typical LCD monitor or phone display) will be at least partially blocked by an amount that will vary with orientation relative to the optics.
  • The real world is dimmed by at least 2x via the polarizing beam splitter.
  • As the eye moves, the real world will move differently than it would with the eye looking directly. You are looking at the real world through two sets of optics with a much longer light path.

While Cogni Trax uses the same principle for masking the real world, it is configured differently and is much smaller and lighter. Both devices block a lot of light. Cogni Trax’s design blocks about 77% of the light, and they claim their next generation will block 50%. However, note that this is likely on top of any other light losses in the optical system.

Cogni Trax SID Display Week 2024 Demo

On the surface, the Cogni Trax demo makes it look like the concept works. The demo had a smartphone camera looking through the Cogni Trax optical device. If you look carefully, you will see that they block light from four areas of the real world (see arrow in the inset picture below): a Nike swoosh on top of the shoe, a QR code, the Coke in the bottle (with moving bubbles), and part of the wall to the right, which is partially darkened to create a shadow of the bottle.

They don’t have a microdisplay with a virtual image; thus, they can only block or darken the real world and not replace anything. Since you are looking at the image on a cell phone and not with your own eyes, you have no sense of the loss of depth and parallax issues.

When I took the picture above, I was not planning on writing an article and missed capturing the whole setup. Fortunately, Robert Scoble put out an X-video that showed most of the rig used to align the masking to the real world. The rig supports aligning the camera and Cogni Trax device with six degrees of freedom. This demo will only work if all the objects in the scene are in a precise location relative to the camera/device. This is the epitome of a canned demo.

One could hand wave that developing SLAM, eye tracking, and 3-D scaling technology to eliminate the need for the rig is a “small matter of hardware and software” (to put it lightly). However, requiring a rig is not the biggest hidden trick in these demos; it is the basic optical concept and its limitations. The “device” shown (lower right inset) is only the LCOS device and part of the optics.

Cogni Trax Gen 1 Optics – How it works

Below is a figure from Cogni Trax’s patent that will be used to diagram the light path. I have added some colorization to help you follow the diagram. The dashed-line parts in the patent for combining the virtual image are not implemented in Cogni Trax’s current design.

The view of the real world follows a fairly torturous path. First, it goes through a polarizer, where at least 50% of the light is lost (in theory, this polarizer is redundant due to the polarizing beam splitter to follow, but it is likely used to reduce ghosting). It then bounces off the polarizing beam splitter and through a focusing element to bring the real world into focus on an LCOS SLM. The LCOS device changes the polarization of anything NOT masked so that on the return trip through the focusing element, it passes through the polarizing beam splitter. The light then passes through the “relay optics,” then a quarter waveplate (QWP), off a mirror, and back through the quarter waveplate and relay optics. The two passes through the “relay optics” have to undo everything done to the light by the two passes through the focusing element. The two passes through the QWP rotate the polarization of the light so that it bounces off the beam splitter and is directed at the eye via a cleanup polarizer. Optionally, as shown, the light can be combined with a virtual image from a microdisplay.
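As a minimal sketch of how the losses along this path compound, the stage efficiencies below are my own assumptions for illustration (only the 50% polarizer loss and Cogni Trax’s stated ~77% total blocking are from the article; the intermediate numbers are hypothetical):

```python
from functools import reduce

# Cumulative throughput along the Gen-1 light path. Stage values are
# illustrative assumptions chosen to land near Cogni Trax's stated ~23%
# total transmission (~77% of the light blocked).

stages = {
    "entry polarizer":        0.50,  # unpolarized world light -> one polarization
    "PBS reflect + focusing": 0.90,  # assumed coating/lens losses
    "LCOS reflection":        0.80,  # assumed fill-factor/reflectivity loss
    "relay + QWP + mirror":   0.80,  # assumed double-pass relay losses
    "cleanup polarizer":      0.80,  # assumed residual-polarization loss
}

throughput = reduce(lambda a, b: a * b, stages.values())
print(f"estimated throughput ~{throughput:.0%}")  # ~23% with these assumptions
```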

I find it hard to believe that real-world light can go through all of that and emerge behaving as if nothing but the polarization losses had happened to it.

Cogni Trax provided a set of diagrams showing the light path of what they call “Alpha Pix.” I edited several of their diagrams together and added some annotations in red. As stated earlier, the current prototype does not have a microdisplay for providing a virtual image. If the virtual display device were implemented, its optics and combiner would be on top of everything else shown.

I don’t see this as a practical solution to hard-edge occlusion. While much less bulky than the University of Arizona design, it still requires polarizing the incoming light and sending it through a torturous path that will further damage/distort the real-world light. And this is before they deal with adding a virtual image. There is still the issue that hard edge occlusion only works if everything being occluded is at approximately the same focus distance. If the virtual display were implemented, the virtual image would seemingly need to be at approximately the same focus distance to be occluded correctly. Then, hardware and software are required to keep everything between the virtual and real world aligned with the eye. Even if the software and eye tracking were excellent, there will still be a lag with any rapid head movement.

Cogni Trax Waveguide Design / Gen 2

Cogni Trax’s website and video discuss a “waveguide” solution for Gen 2. I found a patent (with excerpts right and below) from Cogni Trax for a waveguide approach to hard-edge occlusion that appears to agree with the diagrams in the video and on the website for their “waveguide.” I have outlined the path for the real world (in green) and the virtual image (in red).

Rather than using polarization, this method uses time-sequential modulation via a single Texas Instruments DLP/DMD. The DLP is used during part of the time to block/pass light from the real world and during the rest of the time as the virtual image display. I have included Figure 1(a), which gives the overall light path; Figures 1(c) and 1(d), which show the time multiplexing; Figure 6(a), with a front view of the design; and Figures 10(a) and (b), which show side views of the waveguide with the real-world and virtual light paths, respectively.

Other than not requiring polarization, the light follows an even more torturous path that includes a “fixed DMD” to correct for the micro-tilts introduced by the time-multiplexed displaying-and-masking DMD. In addition to all the problems I had with the Gen 1 design, I find putting the relatively small mirror (120 in Figure 1a) in the middle of the view very problematic, as the view over or below the mirror will look very different from the view in the mirror with all the additional optics. While this approach can theoretically give more light throughput and does not require polarizing the real world, it can only do so by keeping the virtual display times short, which will mean more potential field-sequential color breakup and lower color bit depth from the DLP.

Overall, I see Cogni Trax’s “waveguide” design as trading one set of problems for another set of probably worse image problems.

Conclusion

Perhaps my calling hard-edge occlusion a “Holy Grail” did not fully convey its impossibility. The more I have learned about, examined, and observed this problem and its proposed solutions, the more impossible it seems. Yes, someone can craft a demo that works in a tightly controlled setup where everything being occluded is at about the same distance, but it is a magic show.

The Cogni Trax demo is not a particularly good magic show, as it uses a massive 6-axis control rig to position a camera rather than letting the user put on a headset. Furthermore, the demo does not support a virtual display.

Cogni Trax’s promise of a future “waveguide” design appears to me to be at least as fundamentally flawed. According to the publicly available records, Cogni Trax has been trying to solve this problem for 7 years, and a highly contrived setup is the best they have demonstrated, at least publicly. This is more of a university lab project than something that should be developed commercially.

Based on his history with Apple and Texas Instruments, the CEO, Sajjad Khan, is capable, but I can’t understand why he is pursuing this fool’s errand. I don’t understand why over $7M has been invested, other than people blindly investing in former Apple designers without proper technical due diligence. I understand that high-risk, high-reward concepts can be worth some investment, but in my opinion, this does not fall into that category.

Appendix – Quoting Out of Context

Cogni Trax has quoted me in the video on their website as saying, “The Holy Grail of AR Displays.” The video does not make clear that A) I was referring to hard edge occlusion generally (and not to Cogni Trax), and B) I went on to say, “But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics.” The audio from me in the Cogni Trax video, which is rather garbled, comes from the March 30, 2021, AR Show episode “Karl Guttag (KGOnTech) on Mapping AR Displays to Suitable Optics (Part 2)” at ~48:55 into the video (the occlusion issue is only briefly discussed).

Below, I have cited (with new highlighting in yellow) the section from my blog discussing hard edge occlusion from November 20, 2019, where Cogni Trax got my “Holy Grail” quote. This section of the article discusses the ASU design. This article discussed using a transmissive LCD for soft edge occlusion about 3 years before Magic Leap announced the Magic Leap 2 with such a method in July 2022.

Hard Edge (Pixel) Occlusion – Everyone Forgets About Focus

“Hard Edge Occlusion” is the concept of being able to block the real world with sharply defined edges, preferably to the pixel level. It is one of the “Holy Grails” of optical AR. Not having hard edge occlusion is why optical AR images are translucent. Hard Edge Occlusion is likely impossible to solve optically for all practical purposes. The critical thing most “solutions” miss (including US 20190324274) is that the mask itself must be in focus for it to sharply block light. Also, to properly block the real world, the focusing effect required depends on the distance of everything in the real world (i.e., it is infinitely complex).

The most common hard edge occlusion idea suggested is to put a transmissive LCD screen in the glasses to form “opacity pixels,” but this does not work. The fundamental problem is that the screen is so close to the eye that the light-blocking elements are out of focus. An individual opacity pixel will have a little darkening effect, with most of the light from a real-world point in space going around it and into the eye. A large group of opacity pixels will darken as a blurry blob.

Hard edge occlusion is trivial to do with pass-through AR by essentially substituting pixels. But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics. The difficulty of supporting even the flat-world special case is demonstrated by some researchers at the University of Arizona, now assigned to Magic Leap (the PDF at this link can be downloaded for free) shown below. Note all the optics required to bring the real world into focus onto “SLM2” (in the patent 9,547,174 figure) so it can mask the real world and solve the case for everything being masked being at roughly the same distance. None of this is even hinted at in the Apple application.

I also referred to hard edge occlusion as one of the “Holy Grails” of AR in a comment to a Magic Leap article in 2018 citing the ASU design and discussing some of the issues. Below is the comment, with added highlighting in yellow.

One of the “Holy Grails” of AR, is what is known as “hard edge occlusion” where you block light in-focus with the image. This is trivial to do with pass-through AR and next to impossible to do realistically with see-through optics. You can do special cases if all the real world is nearly flat. This is shown by some researchers at the University of Arizona with technology that is Licensed to Magic Leap (the PDF at this link can be downloaded for free: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-24-30539#Abstract). What you see is a lot of bulky optics just to support a real world with the depth of a bookshelf (essentially everything in the real world is nearly flat).

FM: Magic Leap One – Instant Analysis in the Comment Section by Karl Guttag (KarlG) JANUARY 3, 2018 / 8:59 AM

Brilliant Labs Frame AR with AI Glasses & a Little More on the Apple Vision Pro

Introduction

A notice in my LinkedIn feed mentioned that Brilliant Labs has started shipping its new Frame AR glasses. I briefly met with Brilliant CEO Bobak Tavangar at AWE 2023 (right) and got a short demonstration of its “Monocle” prototype. So, I investigated what Brilliant Labs was doing with its new “Frame.”

This started as a very short article, but as I put it together, I realized it was an interesting example of design decisions and trade-offs, so it became longer. Looking at the Frame more closely, I found issues that concerned me. I don’t mean to pick on Brilliant Labs here. Any hardware device like the Frame is a massive effort, and they seem genuinely concerned about their customers; I am only pointing out the complexities of supporting AI with AR for a wide audience.

While looking at how the Frame glasses work, I came across some information related to the Apple Vision Pro’s brightness (in nits), discussed last time in Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall. In the same way that the Apple Vision Pro’s brightness has been misstated as “5,000 nits,” the Brilliant Labs Frame’s brightness has been misreported as 3,000 nits. In both cases, the nits are the “potential” out of the display and not the “to the eye” value after the optics.

I’m also repeating the announcement that I will be at SID’s DisplayWeek next week and AWE next month. If you want to meet, please email meet@kgontech.com.

DisplayWeek (next week) and AWE (next month)

I will be at SID DisplayWeek in May and AWE in June. If you want to meet with me at either event, please email meet@kgontech.com. I usually spend most of my time on the exhibition floor where I can see the technology.


AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara, and it is about one month later than last year. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

At AWE, I will be on the PANEL: Current State and Future Direction of AR Glasses on Wednesday, June 19th, from 11:30 AM to 12:25 PM.

There is an AWE speaker discount code – SPKR24D – which provides a 20% discount, and it can be combined with Early Bird pricing (which ends May 9th, 2024 – today, as I post this). You can register for AWE here.

Brilliant Labs Monocle & Frame “Simplistic” Optical Designs

Brilliant Labs’ Monocle and Frame use the same basic optical architecture, but it is better hidden in the Frame design. I will start with the Monocle, as it is easier to see the elements and the light path. I was a little surprised that both designs use a very simplistic, non-polarized 50/50 beam splitter with its drawbacks.

Below (left) is a picture of the Monocle with the light path (in green). The Monocle (and Frame) both use a non-polarizing 50/50 beamsplitter. The splitter sends 50% of the display’s light forward and 50% downward to the (mostly) spherical mirror, which magnifies the image and moves the apparent focus. After reflecting from the mirror, the light is split in half again, and ~25% of the light goes to the eye. The forward-projected image will be a mirrored, unmagnified view of the display and will be fairly bright. Front projection or “eye glow” is generally considered undesirable in social situations and is something most companies try to reduce or eliminate in their optical designs.

The middle picture above shows a picture I took of the Monocle from the outside, and you can see the light from the beam splitter projecting forward. Figures 5A and 6 (above right) from Brilliant Labs’ patent application illustrate the construction of the optics. The Monocle is made with two solid optical parts, with the bottom part forming part of the beam splitter and the bottom surface being shaped to form the curved mirror and then mirror coated. An issue with the 2-piece Monocle construction is that the beam splitter and mirror are below eye level, which requires the user to look down to see the image or position the whole device higher, which results in the user looking through the mirror.

The Frame optics work identically in function, but the size and spacing differ. The optics are formed with three parts, which enables Brilliant to position the beam splitter and mirror nearer the center of the user’s line of sight. But as Brilliant Lab’s documentation shows (right), the new Frame glasses still have the virtual (apparent) image below the line of sight.

Having the image below the line of sight reduces the distortion/artifacts of the real world by looking through the beam splitter when looking forward, but it does not eliminate all issues. The top seam of the beam splitter will likely be visible as an out-of-focus line.

The image below shows part of the construction process from a Brilliant Labs YouTube video. Note that the two parts that form the beamsplitter with its 50/50 semi-mirror coating have already been assembled to form the “Top.”

The picture above left, taken by Forbes author Ben Sin, shows a Frame prototype from his article Frame Is The Most ‘Normal’ Looking AI Glasses I’ve Worn Yet. In this picture, the 50/50 beam splitter is evident.

Two Types of Birdbath

As discussed in Nreal Teardown: Part 1, Clones and Birdbath Basics and its Appendix: Second Type of Birdbath, there are two types of “birdbaths” used in AR. The birdbath comprises a curved mirror (or semi-mirror) and a beamsplitter; it is called a “birdbath” because the concave mirror resembles one. The beamsplitter can be polarized or unpolarized (more on this later). Birdbath elements are often buried in the design, such as in the Lumus optical design (below left) with its curved mirror and beam splitter.

From 2023 AR/VR/MR Lumus Paper – A “birdbath” is one element of the optics

Many AR glasses today use the birdbath to change the focus and act as the combiner. The most common of these designs is one where the user looks through a 50/50 birdbath mirror to see the real world (see Nreal/Xreal example below right). In this design, a polarized beam splitter is usually used with a quarter waveplate to “switch” the polarization after the reflection from the curved semi-mirror so that the light goes through the beam splitter on its second pass (see Nreal Teardown: Part 1, Clones and Birdbath Basics for a more detailed explanation). This design is what I refer to as a “look-through-the-mirror” type of birdbath.

Brilliant Labs uses a “Look through the Beamsplitter” type of birdbath. Google Glass is perhaps the most famous product with this birdbath type (below left). This birdbath type has appeared in Samsung patents that were much discussed in the electronic trade press in 2019 (see my 2019 Samsung AR Design Patent—What’s Inside).

LCOS maker Raontech started showing a look-through-the-beamsplitter reference design in 2018 (below right). The various segments of their optics are labeled below. This design uses a polarizing beam splitter and a quarter waveplate.

Brilliant Labs’ Thin Beam Splitter Causes View Issues

If you look at the RaonTech or Google Glass splitter, you should see that the beam splitter is the full height of the optics. However, in the case of the Frames and Monocle designs (right), the top and bottom beam splitter seams, the 50/50 mirror coating, and the curved mirror are in the middle of the optics and will be visible as out-of-focus blurs to the user.

Pros and Cons of Look-Through-Mirror versus Look-Through-Beamsplitter

The look-through-mirror birdbaths typically use a thin flat/plate beam splitter, and the curved semi-mirror is also thin and “encased in air.” This results in them being relatively light and inexpensive. They also don’t have to deal with the “birefringence” (polarization changing) issues associated with thick optical materials (particularly plastic). The big disadvantage of the look-through-mirror approach is that to see the real world, the user must look through both the beamsplitter and the 50/50 mirror; thus, the real world is dimmed by at least 75%.

The look-through-beamsplitter designs encase the entire optical path in either glass or plastic, with multiple glued-together surfaces that are coated or have films applied. The need to encase the design in a solid means the designs tend to be thicker and more expensive. Worse yet, typical injection-molded plastics are birefringent and can’t be used with polarized optics (beamsplitters and quarter waveplates). Either heavy glass or higher-cost resin-molded plastics must be used with polarized elements. Supporting a wider FOV becomes increasingly difficult, as a linear increase in FOV results in a cubic increase in the volume of material (either plastic or glass) and, thus, the weight. Bigger optics are also more expensive to make. There are also optical problems when looking through very thick solid optics. You can see in the Raontech design above how thick the optics get to support a ~50-degree FOV. This approach “only” requires the user to look through the beam splitter, so the view of the real world is dimmed by 50% (twice as much light gets through as with the look-through-mirror method).
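The cubic scaling argument above can be sketched in a few lines (my own illustrative helper; the 30-degree baseline and the assumption that every linear dimension scales with FOV are simplifications):

```python
# Illustrative sketch of the solid-optics scaling argument: if every linear
# dimension of an encased optic grows proportionally with FOV, the volume
# of glass or plastic (and thus the weight) grows with the cube.

def relative_weight(new_fov_deg: float, base_fov_deg: float = 30.0) -> float:
    """Weight of a geometrically scaled solid optic relative to a base FOV."""
    return (new_fov_deg / base_fov_deg) ** 3

print(f"30 -> 50 degrees: ~{relative_weight(50.0):.1f}x the weight")  # ~4.6x
```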

Pros and Cons of Polarized Beam Splitter Birdbaths

Most companies with look-through-mirror and look-through-beamsplitter designs, but not Brilliant Labs, have gone with polarizing beam splitters and then use quarter waveplates to “switch” the polarization when the light reflects off the mirror. Either method requires the display’s light to make a reflective and transmissive pass via the beam splitter. With a non-polarized 50/50 beam splitter, this means multiplicative 50% losses or only 25% of the light getting through. With a polarized beam splitter, once the light is polarized with a 50% loss, with proper use of quarter waveplates, there are no more significant losses with the polarized beamsplitter.
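The beam-splitter arithmetic in the paragraph above works out as follows (a simple sketch of the ideal case, ignoring coating and other real-world losses):

```python
# Display-light throughput for the two beam-splitter choices. An unpolarized
# 50/50 splitter loses half the light on each of the two passes (reflective,
# then transmissive). A polarized splitter pays the 50% polarization cost
# once; with quarter waveplates, both splitter passes are nearly lossless.

def unpolarized_throughput() -> float:
    return 0.5 * 0.5          # two passes through the 50/50 splitter

def polarized_throughput() -> float:
    return 0.5 * 1.0 * 1.0    # polarize once; QWP "switch" handles the rest

print(f"unpolarized: {unpolarized_throughput():.0%}")  # 25%
print(f"polarized:   {polarized_throughput():.0%}")    # 50%
```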

Another advantage of the polarized-optics approach is that front projection can be mostly eliminated (there will be only a little due to scatter). With the look-through-mirror method, this can be accomplished (as discussed in Nreal Teardown: Part 1, Clones and Birdbath Basics) with a second quarter waveplate and a front polarizer. With the look-through-beamsplitter method, a polarizer before the beamsplitter will block the light that would otherwise project forward off the polarized beamsplitter.

As mentioned earlier, using polarized optics becomes much more difficult with the thicker solid optics associated with the look-through-beamsplitter method.

Brilliant Labs Frame Design Decision Options

It seems that at every turn in the decision process for the Frame and Monocle optics, Brilliant Labs chose the simplest and most economical design possible. By not using polarized optics, they gave up brightness and will have significant front projection. Still, they can use much less expensive injection-molded plastic optics that do not require polarizers and quarter waveplates. They avoided using more expensive waveguides, which would be thinner but require LCOS or MicroLED (inorganic LED) projection engines, which may be heavier and larger. However, the latest LCOS and MicroLED engines are getting to be pretty small and light, particularly for a >30-degree FOV (see DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)).

Frame Brightness to the Eye – Likely Less Than 25% of 3,000 Nits – Same Problem as Apple Vision Pro Reporting

As discussed in the last article on the Apple Vision Pro (AVP) in the Appendix: Rumor Mill’s 5,000 Nits Apple Vision Pro, reporters/authors constantly make erroneous comparisons of “display-out nits” with one device and to the nits-to-the-eye of other devices. Also, as stated last time, the companies appear to want this confusion by avoiding specifying the nits to the eye as they benefit from reporters and others using display device values.

I could not find an official Brilliant Labs value anywhere, but the company seems to have told reporters that “the display is 3,000 nits,” which may not be a lie but is misleading. Most articles dutifully give the “display number” but fail to say that these are display-device nits and not what the user will see, leaving readers to make the mistake; other reporters make the error themselves.

Digitrends:

The display on Frame is monocular, meaning the text and graphics are displayed over the right eye only. It’s fairly bright (3,000 nits), though, so readability should be good even outdoors in sunlit areas.

Wearable:

As with the Brilliant Labs Monocle – the clip-on, open-source device that came before Frame – information is displayed in just one eye, with overlays being pumped out at around 3,000 nits brightness.

Android Central, in These AI glasses are being backed by the Pokemon Go CEO, at least made it clear that these were display-device numbers, but I still think most readers wouldn’t know what to do with the figure. They added the tidbit that the panels are made by Sony, and they discussed pulse width modulation (PWM), also known as duty cycle. Interestingly, they talk about a short on-time duty cycle causing problems for people sensitive to flicker. In contrast, VR game fans favor a very short on-time duty cycle (what Brad Lynch of SadlyItsBradley refers to as low persistence) to reduce blurring.

androidcentral’s These AI glasses are being backed by the Pokemon Go CEO

A 0.23-inch Sony MicroOLED display can be found inside one of the lenses, emitting 3,000 nits of brightness. Brilliant Labs tells me it doesn’t use PWM dimming on the display, either, meaning PWM-sensitive folks should have no trouble using it.

Below is a summary of Sony OLED Microdisplays aimed at the AR and VR market. On it, the 0.23-inch type device is listed with a maximum luminance of 3,000 nits. However, from the earlier analysis, we know that at most 25% of the light can get through Brilliant Labs’ Frame birdbath optics, or at most 750 nits (likely less due to other optical losses). This number also assumes that the device is driven at full brightness and that Brilliant Labs is not buying derated devices at a lower price.
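The to-the-eye arithmetic from the paragraph above, as a sketch (the 25% is the birdbath upper bound derived earlier; real optics would push the number lower still):

```python
# Best-case nits to the eye for the Frame: display luminance times the
# optics throughput. 25% is the upper bound for a non-polarized 50/50
# birdbath (two passes through the splitter); other losses are ignored.

def nits_to_eye(display_nits: float, optics_throughput: float) -> float:
    return display_nits * optics_throughput

frame_best_case = nits_to_eye(3000, 0.25)
print(f"Frame best case: ~{frame_best_case:.0f} nits to the eye")  # ~750
```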

I can’t blame Brilliant Labs because almost every company does the same in terms of hiding the ball on to-the-eye brightness. Only companies with comparatively high nits-to-the-eye values (such as Lumus) publish this spec.

Sony Specifications related to the Apple Vision Pro

The Sony specifications list a 3.5K by 4K device. The common industry understanding is that Apple designed a custom backplane for the AVP but then used Sony’s OLED process. Notice the spec of 1,000 cd/m2 (candelas per meter squared = nits) at a 20% duty ratio. While favorable for VR gamers wanting less motion blur, the low on-time duty cycle is also a lifetime issue: the display device probably can’t handle the heat from being driven for a high percentage of the time.

It would be reasonable to assume that Apple is similarly restricted to about a 20% on-duty cycle. As I reported last time in the Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall, I have measured the on-duty cycle of the AVP to be about 18.4% or close to Sony’s 20% for their own device.

View Into the Frame Glasses

I don’t want to say that Brilliant Labs is doing anything wrong, and other companies often do the same. Companies often take pictures and videos of new products using non-functional prototypes because the working versions aren’t ready when shooting or because they look better on camera. Still, I want to point out something I noticed with the pictures of the CEO, Bobak Tavangar (right), published with many of the articles on the Frame glasses: I didn’t see the curved mirror and the 50/50 beam splitter.

In a high-resolution version of the picture, I could see the split in the optics (below left) but not the darkened rectangle of the 50/50 mirror. So far, I have found only one picture of someone wearing the Frame glasses from Bobak Tavangar’s post on X. It is of a person wearing what appears to be a functional Frame in a clear prototype body (below right). In the dotted line box, you can see the dark rectangle from the 50/50 mirror and a glint from the bottom curved mirror.

I don’t think Brilliant Labs is trying to hide anything, as I can find several pictures that appear to show functional Frames, such as the picture from another Tavangar post on X showing trays full of Frame devices being produced (right) or the Forbes picture (earlier in the Optical section).

What was I hoping to show?

I’m trying to show what the Frame looks like when worn to get an idea of the social impact of wearing the glasses. I was looking for a video of someone wearing them with the Frame turned on, but unfortunately, none have surfaced. From the design analysis above, I know they will project a small but bright mirror image of the display off the 50/50 mirror, but I have not found an image showing the working device from the outside looking in.

Exploded View of the Frame Glasses

The figure below is taken from Brilliant Labs’ online manual for the Frame glasses (I edited it to reduce space and inverted the image to make it easier to view). By AR glasses standards, the Frame design is about as simple as possible. The choice of two nose bridge inserts is not shown in the figure below.

There is only one size of glasses, which Brilliant Labs described in their AMA as being between a “medium and large” type frame. They say that the temples are flexible to accommodate many head widths. Because the Frames are monocular, IPD is not the problem it would be with a binocular headset.

AddOptics is making custom prescription lenses for the Frames glasses

Brilliant Labs is partnering with AddOptics to make prescription lenses that can be ‘Precision Bonded’ to Frames using a unique optical lens casting process. For more on AddOptics, see CES 2023 (Part 3) – AddOptics Custom Optics and my short follow-up in Mixed Reality at CES & AR/VR/MR 2024 (Part 2 Mostly Optics).

Bonding to the Frames will make for a cleaner and more compact solution than the more common insert solution, but it will likely be permanent and thus a problem for people whose prescriptions change. In their YouTube AMA, Brilliant Labs said they are working with AddOptics to increase the range of prescription values and support for astigmatism.

They didn’t say anything about bifocal or progressive lens support, which is even more complicated (and may require post-mold grinding). As the virtual image is below the centerline of vision, it would typically be where bifocal and progressive lenses would be designed for reading distance (near vision). In contrast, most AR and VR glasses aim to put the virtual image at 2 meters, considered “far vision.”

The Frame’s basic specs

Below, I have collected the basic specs on the Frame glasses and added my estimate for the nits to the eye. Also shown below is their somewhat comical charging adapter (“Mister Charger”). None of these specs are out of the ordinary and are generally at the low end for the display and camera.

  • Monocular 640×400 resolution OLED Microdisplay
  • ~750 nits to the eye (based on reports of a 3,000-nit Sony Micro-OLED display device)
    • (assumes a ~90% on-time duty cycle)
  • 20-Degree FOV
  • Weight ~40 grams
  • 1280×720 camera
  • Microphone
  • 6 axis IMU
  • Battery 222mAh  (plus 149mAh top-up from charging adapter)
    • With 80mA typical power consumption when operating (0.580mA on standby)
  • CPU nRF52840 Cortex M4F (Nordic ARM)
  • Bluetooth 5.3

Everything in AR Today is “AI”

Brilliant Labs is marketing the Frame as “AI Glasses.” The “AI” comes from Brilliant Labs’ Noa ChatGPT client application running on a smartphone. Brilliant Labs says the hardware is “open source” and can be used by other companies’ applications.

I’m assuming the “AI” primarily runs in the Noa cell phone application, which then connects to the cloud for the heavy-lifting AI. According to Brilliant Labs’ video, on the Monocle, the CPU only controls the display and peripherals, but they plan to move some processing onto the Frame’s more capable CPU. Like other “AI” wearables, I expect simple questions will get immediate responses while complex questions will wait on the cloud.

Conclusions

To be fair, designing glasses and wearable AR products for the mass market is difficult. I didn’t intend to pick on Brilliant Labs’ Frame; instead, I am using it as an example.

With a monocular, 20-degree FOV below the center of the person’s view, the Frame is a “data snacking” type AR device. It is going to be competing with products like the Humane AI Pin projector (which is a joke — see: Humane AI – Pico Laser Projection – $230M AI Twist on an Old Scam), the Rabbit R1, Meta’s (display-less) Ray-Ban Wayfarer, other “AI” audio glasses, and many AR-AI glasses similar to the Frame that are in development.

This blog normally concentrates on display and optics, and on this score, the Frame’s optics are a “minimal effort” to support low cost and weight. As such, they have a lot of problems, including:

  • Small 20-degree FOV that is set below the eyes and not centered (unless you are lucky with the right IPD)
  • Due to the way the 50/50 beam splitter cuts through the optics, it will have a visible seam. I don’t think this will be pleasant to look through when the display is off (but I have not tried them yet). You could argue that you only put them on “when you need them,” but that negates most use cases.
  • The support for vision correction appears to lock the glasses to a single (current) prescription.
  • Regardless of flexibility, the single-size frame will make the glasses unwearable for many people.
  • The brightness to the eye of probably less than 750 nits is not bright enough for general outdoor use in daylight. It might be marginal if combined with clip-on sunglasses or if used in the shade.

As a consumer, I hate the charger adapter concept. Why they couldn’t just put a USB-C connector on the glasses is beyond me, and it is a friction point for every user. Users typically have dozens of USB-C power cables today, but if you forget or lose the adapter, your device is dead. Since these are supposed to be prescription glasses, the idea of needing to take them off to charge them is also problematic.

While I can see the future use model for AI prescription glasses, I think a display, even one with a small FOV, will add significant value. I think Brilliant Labs’ Frame is for early adopters who will accept many faults and difficulties. At least they are reasonably priced at $349 by today’s standards and don’t require a subscription for basic services, though complex AI queries will require the cloud.

Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall

Introduction

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded over four hours of video discussing the 50 companies I met at CES and AR/VR/MR. The last thing we discussed for about 50 minutes was the Apple Vision Pro (AVP).

The AVP video amounts to a recap of the many articles I have written on the AVP. Where appropriate, I will give links to my more detailed coverage in prior articles and updates rather than rehash that information in this article.

It should be noted that Jason and I recorded the video on March 25th, 2024. Since then, there have been many articles from tech magazines saying the AVP sales are lagging, often citing Bloomberg’s Mark Gurman’s “Demand for demos is down” and analyst Ming-Chi Kuo’s report, “Apple has cut its 2024 Vision Pro shipments to 400–450k units (vs. market consensus of 700–800k units or more).” While many reviewers cite the price of the AVP, I have contended that price is not the problem, as it is in line with a new high-tech device (adjusted for inflation, it is about the same price as the first Apple II). My criticism focuses on utility and human factors. In high-tech, cost is usually a fixable problem with time and effort, and people will pay more if something is of great utility.

I said the Apple Vision Pro would have utility problems before it was announced (see my 2023 AWE Presentation, “Optical Versus Passthrough Mixed Reality,” and my articles on the AVP). I’m not about bashing a product or concept; when I find faults, I point them out and show my homework, so to speak, on this blog and in my presentations.

Before the main article, I want to repeat the announcement that I plan to go to DisplayWeek in May and AWE in June. I have also included a short section on YouTube personality/influencer Marques Brownlee’s Waveform Podcast and Hugo Barra’s (former Head of Oculus at Meta) blog article discussing my controversial (but correct) assessment that the Apple Vision Pro’s optics are slightly out of focus/blurry.

DisplayWeek and AWE

I will be at SID DisplayWeek in May and AWE in June. If you want to meet with me at either event, please email meet@kgontech.com. I usually spend most of my time on the exhibition floor where I can see the technology.

AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara, and it is about one month later than last year. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

At AWE, I will be on the PANEL: Current State and Future Direction of AR Glasses on Wednesday, June 19th, from 11:30 AM to 12:25 PM with the following panelists:

  • Jason McDowall – The AR Show (Moderator)
  • Jeri Ellsworth – Tilt Five
  • Adi Robertson – The Verge
  • Edward Tang – Avegant
  • Karl M Guttag – KGOnTech

There is an AWE speaker discount code – SPKR24D – which provides a 20% discount, and it can be combined with Early Bird pricing (which ends May 9th, 2024). You can register for AWE here.

“Controversy” of the AVP Being a Little Blurry Discussed on Marques Brownlee’s Podcast and Hugo Barra’s Blog

As discussed in Apple Vision Pro – Influencing the Influencers & “Information Density,” which included citing this blog on Linus Tips, this blog is read by other influencers, media, analysts, and key people at AR/VR/MR tech companies.

Marques Brownlee (MKBHD), another major YouTube personality, discussed on his Waveform Podcast/WVFRM YouTube channel (link to the YouTube discussion) my March 1st article, Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3. Marques also discussed Hugo Barra’s (former Head of Oculus at Meta) March 11, 2024, “Hot Take” blog article (about 1/3rd of the way down) on my article.

According to MKBHD and Hugo Barra, my comments about the Vision Pro are controversial, but they agree that my conclusion makes sense based on my evidence and their experience. My discussion with Jason was recorded before the Waveform Podcast came out. I’m happy to defend and debate this issue.

Outline of the Video and Additional Information

The times in blue on the left of each subsection link to the YouTube video section discussing that subject.

00:16 Ergonomics and Human Factors

I wrote about the issues with the AVP’s human factors design in Apple Vision Pro (Part 2) – Hardware Issues Mechanical Ergonomics. In a later article in CES Part 2, I compared the AVP to the new Sony XR headset in the Sony XR (and others compared to Apple Vision Pro) section.

08:23 Lynx and Hypervision

My article comparing the new Sony XR headset to the AVP also mentioned the Lynx R1, first shown in 2021. But I didn’t realize how much the two designs were alike until I saw a post somewhere (I couldn’t find it again) by Lynx’s CEO, Stan Larroque, saying how much they were alike. It could be a matter of form following function, but how similar they look from just about any angle is rather striking.

While on the subject of Lynx and Apple: Lynx used optics by Limbak for the Lynx R1. As I broke in December 2022 in Limbak Bought by “Large US Company” (which was soon revealed as Apple) and discussed in more detail in a 2022 video with Brad Lynch, I don’t like the R1’s Limbak “catadioptric” (combined mirror and refractive) optics. While the R1 optics are relatively thin, like pancake optics, they cause a significant loss of resolution due to their severe distortion, and worse, they have an optical discontinuity in the center of the image unless the eye is perfectly aligned.

In May 2023, Lynx and Hypervision announced that they were working together. In Apple Vision Pro (Part 4)—Hypervision Pancake Optics Analysis, Hypervision detailed the optics of the Apple Vision Pro. That article also discusses the Hypervision pancake optics it was showing at AR/VR/MR 2023. Hypervision demonstrated single pancake optics with a 140-degree FOV (the AVP is about 90 degrees) and blended dual pancake optics with a 240-degree FOV (see below right).

10:59 Big Screen Beyond Compared to AVP Comfort Issues

When I was at the LA SID One Day conference, I stopped by Big Screen Beyond to try out their headset. I wore Big Screen’s headset for over 2 hours and didn’t have any of the discomfort issues I had with the AVP. With the AVP, my eyes start bothering me after about half an hour and are pretty sore by one hour. There are likely two major factors: one is that the AVP applies pressure to the forehead, and the other is that something is not working right optically with the AVP.

Big Screen Beyond has a silicone gel-like custom interface that is 3-D printed based on a smartphone face scan. Like the AVP, they have magnetic prescription inserts. While the Big Screen Beyond was much more comfortable, the face interface has a large contact area with the face. While not that uncomfortable, I would like something that breathed more. When you remove the headset, you can feel the perspiration evaporating from where the interface was contacting your face. I can’t imagine anyone wearing makeup being happy (the same with the AVP or any headset that presses against the face).

On a side note, I was impressed by Big Screen Beyond’s statement that it is cash flow positive. It is a sign that they are not wildly spending money on frills and that they understand the market they are serving. They are focused on serving dedicated VR gamers who want to connect the headset to a powerful computer.

Related to the Big Screen Beyond interface, a tip I picked up on Reddit is that you can use a silicone face pad made for the Meta Quest 2 or 3 on the AVP’s face interface (see above right). The silicone face pad gives some grip to the face interface and reduces the pressure required to hold the AVP steady. The pad adds about 1mm, but it so happens that I had recently swapped my original AVP face interface for one that is 5mm shorter. Now, I barely need to tighten the headband. A downside to the silicone pad, like the Big Screen Beyond interface, is that it more or less forms a seal with your face, and you can feel the perspiration evaporating when you remove it.

13:16 Some Basic AVP Information

In the video, I provide some random information about the AVP. I wanted to go into detail here about the often misquoted brightness of the AVP.

I started by saying that I have read or watched many people state that the AVP is much brighter than the Meta Quest 3 (MQ3) or Meta Quest Pro (MQP). They are giving ridiculously high brightness/nits values for the AVP. As I reported in my March 7th, 2024, comments in the article Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, the AVP outputs to the eye about 100 nits and is only about 5-10% brighter than the MQ3 and ~20% less than the MQP.

Misinformation on AVP brightness via a Google Search

I will explain how this came about in the Appendix at the end. And to this day, if you do a Google search (captured below), it will prominently state that the AVP has a “50-fold improvement over the Meta’s Quest 2, which hits just 100 nits,” citing MIT Technology Review.

Nits are tricky to measure in a headset without the right equipment, and even then, they vary considerably from the center (usually the highest) to the periphery.

The 5,000 nits cited by MIT Tech Review are for the raw displays before the optics, whereas the nits for the MQ2 were those going to the eye. The AVP’s (and all other) pancake optics transmit about 11% (or less) of the light from an OLED in the center. With pancake optics, there is the polarization of the OLED (>50% loss), a transmissive pass, and a reflective pass through a 50/50 mirror, which starts with at most 12.5% (50% cubed) before considering all the other losses from the optics. Then, there is the on-time duty cycle of the AVP, which I have measured to be about 18.4%. VR devices want the on-time duty cycle to be low to reduce motion blur with rapid head motion in 3-D games. The MQ3 only has a 10.3% on-time duty cycle (shorter duty cycles are easier with LED-illuminated LCDs). So, while the AVP display devices likely can emit about 5,000 nits, the nits reaching the eye are approximately 5,000 nits x 11% x 18.4% ≈ 100 nits.
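The loss chain can be written out as simple arithmetic. The 11% optics transmission and the 18.4% on-time duty cycle are my measurements/estimates from the article, not Apple specifications:

```python
# Rough chain of losses from the AVP's display to the eye. The 11% optics
# transmission and 18.4% duty cycle are the article's measurements/estimates.

display_nits = 5000          # raw Micro-OLED brightness (MIT Tech Review / DSCC)
polarizer = 0.5              # polarizing the OLED's unpolarized output
transmissive_pass = 0.5      # one pass through the 50/50 mirror
reflective_pass = 0.5        # one bounce off the 50/50 mirror

# Best case before any other optical losses: 50% cubed = 12.5%
upper_bound = polarizer * transmissive_pass * reflective_pass

optics_transmission = 0.11   # ~11% once the other optical losses are included
duty_cycle = 0.184           # measured AVP on-time duty cycle

eye_nits = display_nits * optics_transmission * duty_cycle
print(upper_bound, round(eye_nits))  # 0.125, and roughly 100 nits to the eye
```

The same structure applies to any headset: multiply the display-device nits by every optical efficiency factor and the on-time duty cycle to get what the eye actually sees.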

18:59 Computer Monitor Replacement is Ridiculous

I wrote a three-part series on why I think monitor replacement by the Apple Vision Pro is ridiculous. Please see Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Part 5B, and Part 5C. There are multiple fundamental problems that neither Apple nor anyone else is close to solving. The slide on the right summarizes some of the big issues.

Nyquist Sampling – Resampling Causes Blurring & Artifacts

I tried to explain the problem in two ways, one based on the frequency domain and the other on the spatial (pixel) domain.

19:29 Frequency Domain Discussion

Anyone familiar with signal processing may remember that a square wave has infinite odd harmonics. Images can be treated like 2-dimensional signals. A series of equally spaced, equal-width horizontal lines looks like a square wave in the vertical dimension. Thus, to represent them perfectly with a 3-D transform requires infinite resolution. Since the resolution of the AVP (or any VR headset) is limited, there will be artifacts such as blurring, wiggling, and scintillation.

As I pointed out in (Part 5A), computers tend to “cheat” and distort text and graphics to fit on the pixel grid and thus sidestep the Nyquist sampling problem that any VR headset must face when trying to make a 2-D image appear still in 3-D space. Those who know signal processing know that the Nyquist rate is 2x the highest frequency component. However, as noted above, horizontal lines have infinite frequency. Hence, some degradation is inevitable, but then we only have to beat the resolution limit of the eye, which, in effect, acts as a low-pass filter. Unfortunately, the AVP’s display is about 2-3x too low linearly (4-9x in two dimensions) in resolution for the artifacts not to be seen by a person with good vision.
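A toy one-dimensional sketch shows why a fixed line pattern cannot be point-sampled cleanly. Note that the 2.2-pixel line period and the simple point sampling below are illustrative assumptions, not the AVP’s actual rendering pipeline:

```python
def line_pattern(y: float, period: float = 2.2) -> int:
    """Ideal black/white line pattern: 1 inside a line, 0 in the gap.
    The period is deliberately not an integer multiple of the pixel pitch."""
    return 1 if (y % period) < period / 2 else 0

def point_sample(phase: float, n: int = 8) -> list:
    """Sample the pattern on a pixel grid shifted by a sub-pixel `phase`."""
    return [line_pattern(i + phase) for i in range(n)]

# A head motion of only half a pixel changes which samples land inside a
# line, so the rendered pattern changes as you move -- the wiggling and
# scintillation described above.
print(point_sample(0.0))
print(point_sample(0.5))
```

Softening (low-pass filtering) the image before sampling trades away sharpness to keep these sample flips from being visible, which is exactly the blur-versus-artifact trade-off discussed above.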

22:15 Spatial Domain Discussion

To avoid relying on signal processing theory, in (Part 5A), I gave the example of how a single display pixel can be translated into 3-D space (right). The problem is that a pixel the size of a physical pixel in the headset will always cover parts of four physical pixels. Worse yet, with the slightest movement of a person’s head, how much of each pixel and even which pixels will be constantly changing, causing temporal artifacts such as wiggling and scintillation. The only way to reduce the temporal artifacts is to soften (low pass filter) the image in the resampling process.
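The “covers parts of four physical pixels” point can be made concrete with standard bilinear weights (a minimal sketch; the AVP’s actual resampling filter is not public and certainly more sophisticated):

```python
def bilinear_footprint(dx: float, dy: float):
    """Fraction of one source pixel landing on each of the 2x2 physical
    pixels it overlaps, given a sub-pixel offset (dx, dy) in [0, 1)."""
    return [[(1 - dx) * (1 - dy), dx * (1 - dy)],
            [(1 - dx) * dy,       dx * dy]]

# Perfect alignment: all the light stays in one physical pixel.
print(bilinear_footprint(0.0, 0.0))  # [[1.0, 0.0], [0.0, 0.0]]

# Half-pixel head motion: the light smears evenly over four pixels, and
# every in-between offset changes the split -- the source of the temporal
# artifacts (wiggling/scintillation) described above.
print(bilinear_footprint(0.5, 0.5))  # [[0.25, 0.25], [0.25, 0.25]]
```

Because the weights change continuously with head pose, the only way to keep the image visually stable is to soften it, which is the low-pass filtering trade-off mentioned above.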

23:19 Optics Distortion

In addition to the issues with representing a 2-D image in 3-D space, the AVP’s optics are highly distorting, as discussed in Apple Vision Pro’s (AVP) Image Quality Issues—First Impressions. The optical distortions can be “digitally corrected” but face the same resample issues discussed above.

25:51 Close-Up Center Crop and Foveated Boundary

The figures shown in this part of the video come from Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, and I will refer you to that article rather than repeat it here.

28:52 AVP’s Pancake Optics and Comparison to MQ3 and Birdbath

Much of this part of the video is covered in more detail in Apple Vision Pro’s (AVP) Image Quality Issues—First Impressions.

Using Eye Tracking for Optics Has Wider Implications

A key point made in the video is that the AVP’s optics are much more “aggressive” than Meta’s, and as a result, they appear to require dynamic eye tracking to work well. I referred to the AVP optics as being “unstable.” The AVP is constantly pre-correcting for distortion and color based on eye tracking. While the use of eye tracking for Foveated Rendering and control input is much discussed by Apple and others, using eye tracking to correct the optics has much more significant implications, which may be why the AVP has to be “locked” onto a person’s face.

Eye tracking for foveated rendering does not have to be nearly as precise as eye tracking used for optical correction. This leads me to speculate that the AVP requires the facial interface to lock the headset to the face (which is horrible regarding human factors) in order to support pre-correcting the optics. This follows my rule, “when smart people do something that appears dumb, it is because the alternative was worse.”

Comparison to (Nreal/Xreal) Birdbath

One part not discussed in the video or that article, but shown in the associated figure (below), is how similar pancake optics are to birdbath optics. Nreal (now Xreal) birdbath optics are discussed in my Nreal teardown series in Nreal Birdbath Overview.

Both pancake and birdbath optics start by polarizing the image from an OLED microdisplay. They use quarter waveplates to “switch” the polarization, causing it to bounce off a polarizer and then pass through it. They both use a 50/50 coated semi-mirror. They both use a combination of refractive (lens) and reflective (mirror) optics. In the case of the birdbath, the polarizer acts as a beam splitter to the OLED display so it does not block the view out, whereas with pancake optics, everything is inline.

31:34 AVP Color Uniformity Problem

The color uniformity and the fact that the color shift moves around with eye movement were discussed in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3.

32:11 Comparing Resolution vs a Monitor

In Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, I compared the resolution of the AVP (below left) to various computer monitors (below right) and the Meta Quest 3.

Below is a close-up crop of the center of the same image shown on the AVP, a 28″ monitor, and the Meta Quest 3. See the article for an in-depth explanation.

33:03 Vision OS 1.1 Change in MacBook mirror processing

I received and saw some comments on my Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 saying that Vision OS 1.1 MacBook mirroring was sharper. I had just run a side-by-side comparison of displaying an image from a file on the AVP versus displaying the same image via mirroring a MacBook in Apple Vision Pro Displays the Same Image Differently Depending on the Application. So, I downloaded Vision OS 1.1 to the AVP and reran the same test, and I found a clear difference in the rendering of the MacBook mirroring (but not the display from the AVP file). However, it was not that the MacBook mirror image was sharper per se, but that it was less bold. In the thumbnails below (click on them to see the full-size images), note how the text looks less bold on the right side of the left image (OS 1.1) versus the right side of the right image (OS 1.0).

Below are crops from the two images above, with the OS 1.1 image on the top and the OS 1.0 image on the bottom. The MacBook mirroring comes from the right sides of both images. Note how much less bold the text and lines are in the OS 1.1 crop.

35:57 AVP Passthrough Cameras in the Wrong Location

38:43 AVP’s Optics are Soft/Blurry

As stated in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, the AVP optics are a little soft. According to Marques Brownlee (see above) and others, my statement has caused controversy. I have heard others question my methods, but I have yet to see any evidence to the contrary.

I have provided my photographic evidence (right) and have seen it with my eyes by swapping headsets back and forth with high-resolution content. For comparison, the same image was displayed on the Meta Quest 3, and the MQ3 was clearly sharper. The “blur” on the AVP is similar to what one would see with a Gaussian blur with a radius of about 0.5 to 1 pixel.

Please don’t confuse “pixel resolution” with optical sharpness. The AVP has more pixels per degree, but the optics are a bit out of focus and, thus, a little blurry/soft. One theory is that it is being done to reduce the screen door effect (seeing the individual pixels) and make the images on the AVP look smoother.

The slight blurring of the AVP may reduce the screen door effect as the gap between pixels is thinner on the OLED displays than on the MQ3’s LCDs. But jaggies and scintillation are still very visible on the AVP.

41:41 Closing Discussion: “Did Apple Move the Needle?”

The video wraps up with Jason asking the open-ended question, “Did Apple Move the Needle?” I discuss whether it will replace a cell phone, home monitor(s), laptop on the road, or home TV. I think you can guess that I am more than skeptical that the AVP now or in the future will change things for more than a very small fraction of the people who use cell phones, laptops, and TVs. As I say about some conference demos, “Not everything that would make a great theme park experience is something you will ever want in your home to use regularly.”

Appendix: Rumor Mill’s 5,000 Nits Apple Vision Pro

When I searched the Internet to see if anyone had independently reported on the brightness of the AVP, I got the Google search answer in big, bold letters: “5,000 Nits” (right). Then, I went to the source of this answer, and it was none other than the MIT Technology Review. I then thought they must be quoting the display’s brightness, not the headset’s, but it reports that it is a “50-fold improvement over Meta Quest 2,” which is ridiculous.

I see this all the time when companies quote a spec for the display device, and it gets reported as the headset’s brightness/nits to the eye. The companies are a big part of the problem because most headset makers won’t give a number for the brightness to the eye in their specs. I should note that with almost all headset optics, the peak nits in the center will be much higher than those in the periphery. Through the years, one thing I have found is that all companies exaggerate brightness in their marketing, whether lumens for projectors or nits for headsets.

An LCOS or DLP display engine can output over a million nits into a waveguide, but that number is so big (almost never given) that it is not confused with the nits to the eye. Nits are a function of light output (measured in Lumens) and the ability to collimate the light (a function of the size of the light source and illumination optics).
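For the simplest case of an ideal Lambertian (fully diffuse) emitter, the lumens-to-nits relationship is L = flux / (area × π); an engine that collimates light from a tiny panel into a narrow cone reaches far higher nits from the same lumens. A sketch with purely illustrative numbers (the 1 cm² panel and 10 lumens below are hypothetical, not a specific product):

```python
import math

def lambertian_nits(lumens: float, emitting_area_m2: float) -> float:
    """Luminance of an ideal Lambertian emitter: L = flux / (area * pi)."""
    return lumens / (emitting_area_m2 * math.pi)

# Hypothetical illustration: a 1 cm^2 panel emitting just 10 lumens.
# The tiny emitting area is why microdisplay engines reach enormous nits
# even before any collimation concentrates the light further.
panel_area_m2 = 1e-4  # 1 cm^2 expressed in m^2
print(round(lambertian_nits(10, panel_area_m2)))  # ~31,831 nits
```

This is why a display-engine nits figure, quoted without the emitting area and cone angle, says almost nothing about what reaches the eye.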

The “5,000 nits” source was a tweet by Ross Young of DSCC. Part of the Tweet/X thread is copied on the right. A few responders understood that this could not be the nits to the eye. Responder BattleZxeVR even got the part about the duty cycle being a factor, but that didn’t stop many later responders from getting it wrong.

Citing some other publications that didn’t seem to understand the difference between nits-in versus nits-out:

Quoting from The Daejeon Chronicles (June 2023): Apple Vision Pro Screens: 5,000 Nits of Wholesome HDR Goodness (with my bold emphasis):

Dagogo Altraide of ColdFusion has this to say about the device’s brightness capability:

“The screens have 5,000 nits of peak brightness, and that’s a lot. The Meta Quest 2, for example, maxes out at about 100 nits of brightness and Sony’s PS VR, about 265 nits. So, 5,000 nits is crazy. According to display analyst Ross Young, this 5,000 nits of peak brightness isn’t going to blind users, but rather provide superior contrast, brighter colors and better highlights than any of the other displays out there today.”

Quoting from Mac Rumors (May 2023): Apple’s AR/VR Headset Display Specs: 5000+ Nits Brightness for HDR, 1.41-Inch Diagonal Display and More:

With ~5000 nits brightness or more, the AR/VR headset from Apple would support HDR or high dynamic range content, which is not typical for current VR headsets on the market. The Meta Quest 2, for example, maxes out around 100 nits of brightness and it does not offer HDR, and the HoloLens 2 offers 500 nits brightness. Sony’s PSVR 2 headset has around 265 nits of brightness, and it does have an advertised HDR feature when connected to an HDR display.

FlatpanelsHD (June 2023): Apple Vision Pro: Micro-OLEDs with 3800×3000 pixels & 90/96Hz – a paradigm shift did understand that the 5,000 nits referred to the display device and not the light to the eye:

DSCC has previously said that the micro-OLED displays deliver over 5000 nits of brightness but a good portion of that is typically lost due to the lenses and the display driving method.

As I wrote in Apple Vision Pro (Part 1) – What Apple Got Right Compared to The Meta Quest Pro, Snazzy Labs had an excellent explanation of the issues with the applications shown by Apple at the AVP announcement (it is a fun and informative video). In another otherwise excellent video, What Reviewers Aren’t Telling You About Apple Vision Pro, I have to give him credit for recognizing that the MIT Tech Review had confabulated the display’s brightness with the headset’s brightness. But he then hazarded a guess that, “after the optics, I bet it’s around 1,000 nits.” His guess was “just a bit outside” by about 10x. I don’t want to pick on Snazzy Labs, as I love the videos I have seen from them, but I want to point out how much even technically knowledgeable people without a background in optics underestimate the light losses in headset optics.

Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)

Update 2/21/24: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.

Introduction

In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections with a companion blog article with added information.

Outline of the Video and Additional Information

The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.

00:10 Lighting Silicon (Formerly Kopin Micro-OLED)

Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “Lift-off LCOS” process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a high-resolution Ferroelectric (reflective) LCOS maker. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for their Shift All headset (see also: Pancake Optics Kopin/Panasonic).

At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember, this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the display, singulating the displays, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm FABs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, enabling Lighting Silicon to offer ~4K devices.

Related info on Kopin’s history in microdisplays and micro-OLEDs:

02:55 RaonTech

RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs/prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1&2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR. I first reported on RaonTech at CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch field sequential color pixel where “embedded spatial interpolation is done by the pixel circuit itself,” so (as I understand it) the 4K image is based on 2K data being sent and interpolated by the display.

In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.

04:01 May Display (LCOS)

May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7,680 x 4,320) panel with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.

May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).

May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.

04:16 Kopin’s Forth Dimension Displays (LCOS)

As discussed earlier with Lighting Silicon, Kopin acquired Ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as Micropix in 1988 as part of CRL-Opto, then renamed CRLO in 2004, and finally Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.

I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display to be their first device, which they had made for many years.

FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.

05:34 Texas Instruments (TI) DLP®

Around 2015, DLP and LCOS displays seemed to have been used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, MicroLEDs with new designs. Among the reasons the companies gave for switching from DLP to LCOS were pixel size and, thus, a smaller device for a given resolution, lower power consumption of the display+asic, more choice in device resolutions and form factors, and cost.

While DLP does not require polarized light, a significant efficiency advantage in room/theater projectors that output hundreds or thousands of lumens, near-eye displays require less than 1 to at most a few lumens since the light is aimed directly into the eye rather than illuminating a whole room. At those light levels, the power of the display device and its control logic/ASICs matters much more. Additionally, many near-eye optical designs employ one or more reflective optics that require polarized light anyway.

Another issue with DLP is control of the drive algorithm. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm. This was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because of the need to directly control its unique light field driving method. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.

Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.

Color Breakup On Hololens 1 using a low color sequential field rate

Added 2/21/24: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.

I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field sequence rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s new much higher 240Hz & 480Hz frame rates, the DLP would have 720 or 1440 color fields per second. Some older LCOS had as low as 60-frames/180-fields (I think this was used on Hololens 1 – right), and many, if not most, LCOS today use 120-frames/360-fields per second. A few LCOS devices I have seen can go as high as 180-frames/540-fields per second. So, the newer DLP devices would have an advantage in that area.
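The frame-to-field arithmetic above is simple enough to sketch directly (nominally three color fields, R, G, and B, per frame):

```python
def color_fields_per_second(frame_rate_hz, fields_per_frame=3):
    # Field-sequential color shows R, G, and B fields in sequence,
    # so the field rate is nominally 3x the frame rate.
    return frame_rate_hz * fields_per_frame

# Rates mentioned in the text: older LCOS (60), common LCOS (120),
# faster LCOS (180), and the new DLP modes (240 and 480).
for hz in (60, 120, 180, 240, 480):
    print(f"{hz} frames/s -> {color_fields_per_second(hz)} color fields/s")
```

The higher the field rate, the less visible color breakup tends to be with eye or image motion, which is the DLP advantage being described.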

The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).

My Background at Texas Instruments:

I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.

I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (2nd generation), the TMS320C80 (the first image processor with 4 DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.

After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector because of its advantages in that application. I like to think of myself as product-focused: I want to use whichever technology works best for the given application, and I see pros and cons in all the display technologies.

07:25 VueReal MicroLED

VueReal is a Canadian-based startup developing MicroLEDs. Their initial focus was on making single color per device microdisplays (below left).

However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either a passive (wire) or active (e.g., thin-film transistors on glass or plastic) backplane. They claim extremely high yields with this process. With it, they can make full-color rectangular displays (above right), transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.

I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, though he had to keep his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.

08:26 MojoVision MicroLED

MojoVision is pivoting from a “Contact Lens Display Company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s Display Group. MojoVision started saying (in private) that it was putting more emphasis on being a MicroLED component company around 2021. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.

To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun watch).

So, in my book, MojoVision the company starts with a major credibility problem. Still, they are now under new leadership and focusing on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” MojoVision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum dot conversion to get green and red.

I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. Playnitride demonstrated quantum dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).

Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.

Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:

  • Blue LEDs are the most efficient color
  • You only have to make a single type of LED crystal structure in a single layer.
  • It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay size MicroLEDs
  • Quantum-dot conversion of blue to green and red is much more efficient than native green and red LEDs
  • Native red LEDs are inefficient in GaN crystalline structures that are moderately compatible with native green and blue LEDs.
  • Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking light from lower layers causes efficiency issues.
  • Single emitters with multiple-color LEDs (e.g., See my article on Porotech) have efficiency issues, particularly in RED, which are further exacerbated by the need to time sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.

Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):

  • MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, the light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
    • MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers
  • There must be some structure/isolation to prevent the blue light from adjacent cells from activating the quantum dots of a given cell, which would cause desaturation of colors. Eliminating color crosstalk/desaturation is another advantage of thinner quantum dot layers.
  • The lifetime and potential for color shifting with quantum dots, particularly if they are driven hard. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs
  • While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch (assuming MojoVision keeps using two reds to get enough red brightness). While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron pitch OLED, a smaller pixel, such as with a single-emitter-with-multiple-colors approach (e.g., Porotech), would be better (more efficient due to étendue; see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)) for semi-collimating the light with microlenses, as waveguides need.
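The pixel-pitch arithmetic in the last bullet can be sketched as follows. The 2x2 subpixel layout (with two reds) is my reading of the text, stated here as an assumption:

```python
def full_color_pixel_pitch(led_pitch_um, subpixels_per_side=2):
    # Assuming a 2x2 subpixel layout (e.g., R, R, G, B, with two reds
    # for extra red brightness), the full-color pixel pitch is twice
    # the individual LED pitch in each direction.
    return led_pitch_um * subpixels_per_side

avp_oled_pitch_um = 7.5  # approximate AVP micro-OLED pitch, per the article
mojo_pitch_um = full_color_pixel_pitch(1.87)
print(mojo_pitch_um, "microns vs ~", avp_oled_pitch_um, "for the AVP's micro-OLED")
```

So even with very small LEDs, the spatial-color layout roughly doubles the effective pixel pitch, which is the motivation for single-emitter approaches.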

10:20 Porotech MicroLED

I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.

While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel since only one LED per pixel is required. This also results in only two electrical connections (power and ground) to the LED per pixel.

However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.

Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.
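As a purely illustrative sketch of the drive scheme described above (the current values are made up for the example, not Porotech’s real numbers):

```python
def single_emitter_drive(color, brightness, color_current_ma):
    """Sketch of driving a current-tunable single-emitter MicroLED.

    The drive current sets the emitted wavelength, so it must be held
    at a precise per-color level; brightness can then only be varied
    via the duty cycle (the fraction of the field time the LED is on).
    """
    current_ma = color_current_ma[color]          # precise, fixed per color
    duty_cycle = max(0.0, min(1.0, brightness))   # clamp to 0.0 .. 1.0
    return current_ma, duty_cycle

# Hypothetical per-color current levels (illustrative only):
currents_ma = {"red": 0.05, "green": 1.0, "blue": 5.0}
print(single_emitter_drive("green", 0.25, currents_ma))
```

Providing a precise, per-pixel current plus per-pixel duty-cycle control is what makes the required backplane so much more complex than those for LCOS or spatial-color MicroLEDs.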

12:55 Brilliance Color Laser Combiner

Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.

While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.

14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)

Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter if they shoot from the temple onto a hologram directly into the eye like North Focals or use a waveguide like TriLite; the fatal weak link is using an LBS display device.

It has reached the point that when I see a device with an LBS display, I’m pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by confabulating “scan lines” with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled into a distorted and non-uniform scanning process.

Like Brilliance above, TriLite’s core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they exit at slightly different angles. This causes the various colors to diverge by different amounts in the scanning process. TriLite uses its “Trajectory Control Module” (TCM) to compute how to resample the image to align the red, green, and blue.

TriLite then compounds its LBS problems by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with Lissajous scanning, also used by Oqmented (TriLite uses Infineon’s scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.

The information and images below have been collected from TriLite’s website.

As far as I have seen, it is a myth that LBS has any advantage in size, cost, and power over LCOS for the same image resolution and FOV. As discussed in part 1, Avegant generated the comparison below, comparing North Focals LBS glasses with a ~12-degree FOV and roughly 320×240 resolution to Avegant’s 720 x 720 30-degree LCOS-based glasses.

Below is a selection (from dozens) of related articles I have written on various LBS display devices:

Next Time

I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason and my discussion of the Apple Vision Pro.

Apple Vision Pro – Influencing the Influencers & “Information Density”

Introduction

Many media outlets, large and small, both text and video, use this blog as a resource for technical information on mixed reality headsets. Sometimes, they even give credit. In the past two weeks, this blog was prominently cited in YouTube videos by Linus Tech Tips (LTT) and Artur Tech Tales. Less fortunately, Adam Savage’s Tested, hosted by Norman Chan, used a spreadsheet test pattern from this blog in his Apple Vision Pro review to demonstrate foveated rendering issues.

I will follow up with a discussion of the Linus Tech Tips video, which deals primarily with human factors. In particular, I want to discuss the “information density” issue of virtual versus physical monitors, which the LTT video touched on.

Influencing the Influencers On Apple Vision Pro

Linus Tech Tips (LTT)

In their “Apple Vision Pro—A PC Guy’s Perspective,” Linus Tech Tips showed several pages from this blog and was nice enough to prominently feature the pages they were using and their web addresses (below). Additionally, I enjoyed their somewhat humorous physical “simulation” of the AVP (more on that in a bit). LTT used images (below-left and below-center) from the blog to explain how the optics distort the display and how the processing in the AVP is used in combination with eye tracking to reduce that distortion. LTT also uses images from the blog (below-right) to show how the field of view (FOV) changes based on the distance from the eye to the optics.

Linus Tech Tips Citing this Blog

Adam Savages’ Tested

Adam Savage’s Tested, with host Norman Chan’s review of the Apple Vision Pro, used this blog’s AVP-XLS-on-BLACK-Large-Array from Spreadsheet “Breaks” The Apple Vision Pro’s (AVP) Eye-Tracking/Foveation & the First Through-the-optics Pictures to discuss how the foveation boundaries of the Apple Vision Pro are visible. While the spreadsheet is taken from this blog, I didn’t see any references given.

The Adam Savage’s Tested video either missed or was incorrect on several points:

  • It missed the point of the blog article that foveated rendering has problems with spreadsheets when they are rendered directly from Excel on the AVP rather than mirrored from a MacBook.
  • It stated that taking pictures through the optics is impossible, which this blog has been doing for over a month (including in this article).
  • It said that the AVP’s passthrough 3-D perspective was good with short-range but bad with long-range objects, but Linus Tech Tips (discussed later) found the opposite. The AVP’s accuracy is poor with short-range objects due to the camera placement.
  • It said there was no “warping” of the real world with video passthrough, which is untrue. The AVP does less warping than the Meta Quest 3 and Quest Pro, but it still warps objects closer than about 0.6 meters (2 feet), particularly toward the center and upper part of the user’s view. With the AVP’s camera placement, it is impossible to be both perspective-correct and warp-free for near objects; the AVP seems to trade off being perspective-correct to have less warping than the Meta headsets.

Artur’s Tech Tales – Interview on AVP’s Optical Design

Artur’s Tech Tales’ Apple Vision Pro OPTICS—Deep Technical Analysis includes an interview and presentation by Hypervision’s CEO, Arthur Rabner. In his presentation, Rabner mentions this blog several times. The video details the AVP optics and follows up on Hypervision’s white paper discussed in Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis.

Linus Tech Tips on Apple Vision Pro’s Human Factors

Much of the Linus Tech Tips (LTT) video deals with human factors and user interface issues. For the rest of this article, I will discuss and expand upon comments made in the LTT video. Linus also commented on the passthrough camera’s “shutter angle,” but I moved my discussion of that subject to the “Appendix” at the end, as it was a bit off-topic and needed some explanation.

It makes a mess of your face

At 5:18 in the video, Linus takes the headset off and shows the red marks left by the Apple Vision Pro (left), which I think may have been intentional after Linus complained about issues with the headband earlier. For reference, I have included the marks left by the Apple Vision Pro on my face (below-right). I sometimes joke that if I wear it long enough, it will wear a groove in my skull to help hold up the headset.

An Apple person who is expert at AVP fitting could probably tell from the marks on our faces whether we have the “wrong” face interface. Linus’s headset makes stronger marks on his cheeks, whereas mine makes the darkest marks on my forehead. As I use inserts, I have the fairly thick (but typical for wearing inserts) 25W face interface rather than the thinner “W” interface, and the AVP’s eye detection often complains that I need to get my eyes closer to the lenses. So, I end up cranking the solo band almost to the point where I feel my pulse on my forehead, like a blood pressure cuff (perhaps a health “feature” in the future?).

Need for game controllers

For virtual reality, Linus is happy with the resolution and placement of virtual objects in the real world. But he stated, “Unfortunately, the whole thing falls apart when you interact with the game.” Linus then goes into the many problems of not having controllers and relying on hand tracking alone.

I’m not a VR gamer, but I agree with The Verge that AVP’s hand and eye tracking is “magic until it’s not.” I am endlessly frustrated with eye-tracking-based finger selection. Even with the headset cranked hard against my face, the eye tracking is unstable, even after recalibrating the IPD and eye tracking many times. I consider eye and hand tracking a good “secondary” selection tool that needs an accurate primary selection tool. I have an Apple Magic Pad that “works” with the AVP but does not work in “3-D space.”

Windows PC Gaming Video Mirroring via WiFi has Lag, Low Resolution, and Compression Artifacts

Linus discussed using the Steam App on the AVP to play games. He liked that he could get a large image and lay back, but there is some lag, which could be problematic for some games, particularly competitive ones; the resolution is limited to 1080p, and compression artifacts are noticeable.

Linus also discussed using the Sunshine (streaming server on the PC) and Moonlight (remote access on the AVP) apps to mirror Windows PCs. While this combination supports up to 4K at 120Hz, Linus says you will need an incredibly good wireless access point for the higher resolutions and frame rates. In terms of effective resolution, and what I like to call “Information Density,” these apps will still suffer the loss of significant resolution due to trying to simulate a virtual monitor in 3-D space, as I have discussed in Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous and Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous and shown with through-the-lens pictures in Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions and Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3.

From a “pro” design perspective, it is rather poor on Apple’s part that the AVP does not support a direct Thunderbolt link for both data and power while, at the same time, requiring a wired battery. I should note that the $300 developer’s strap supports only lowish 100 Mbps Ethernet (compared to USB-C/Thunderbolt’s 0.48 to 40 Gbps) through a USB-C connector while still requiring the battery pack for power. There are many unused pins on the developer’s strap, and there are indications in the AVP’s software that the strap might support higher-speed connections (and maybe access to peripherals) in the future.
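A quick back-of-the-envelope calculation shows why wireless mirroring must compress heavily and why the strap’s 100 Mbps link is so limiting (the 24 bits-per-pixel figure is an illustrative assumption for uncompressed RGB):

```python
def uncompressed_video_gbps(width, height, fps, bits_per_pixel=24):
    # Raw (uncompressed) video bandwidth in gigabits per second.
    return width * height * fps * bits_per_pixel / 1e9

# 4K at 120 frames/s, 24 bits per pixel (illustrative):
raw = uncompressed_video_gbps(3840, 2160, 120)
print(f"{raw:.1f} Gbps raw, vs 0.1 Gbps dev strap and up to 40 Gbps Thunderbolt")
```

Roughly 24 Gbps of raw 4K120 video has to be squeezed into a Wi-Fi link, which is where the lag and compression artifacts Linus noticed come from.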

Warping effect of passthrough

In terms of video passthrough, at 13:43 in the video, Linus comments about the warping effect of close objects and depth perception being “a bit off.” He also discussed that you are looking at the world through phone-type cameras. When you move your head, the passthrough looks duller, with a significant blur (“Jello”).

The same Linus Tech Tip video also included humorous simulations of the AVP environment with people carrying large-screen monitors. At one point (shown below), they show a person wearing a respirator mask (to “simulate” the headset) surrounded by three very large monitors/TVs. They show how the user has to move their head around to see everything. LTT doesn’t mention that those monitors’ angular resolution is fairly low, which is why those monitors need to be so big.

Sharing documents is a pain.

Linus discussed the AVP’s difficulty sharing documents with others in the same room. Part of this is because the MacBook’s display goes blank when mirroring onto the AVP. Linus discussed how he had to use a “bizarre workaround” of setting up a video conference to share a document with people in the same room.

Information Density – The AVP Delivers Effectively Multiple Large but Very Low-Resolution Monitors

The most important demonstration in the LTT video involves what I like to call the “Information Density” problem. The AVP, or any VR headset, has low information density when trying to emulate a 2-D physical monitor in 3-D space. It is a fundamental problem; the effective resolution of the AVP is well less than half (linearly; less than a quarter two-dimensionally) the resolution of the monitors being simulated (as discussed in Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous and Apple Vision Pro (Part 5A) and shown with through-the-lens pictures in Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions and Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3). The key contributors to this issue are:

  • The peak display resolution in the center of the optics is only 44.4 pixels per degree (human vision is typically better than 60 ppd).
  • The 2-D/Monitor image must be resampled into 3-D space with an effective resolution loss greater than 2x.
  • If the monitor is to be viewable, it must be inscribed inside the oval sweet spot of the optics. In the case of the AVP, this cuts off about half the pixels.
  • While the AVP’s approximate horizontal FOV is about 100 degrees, the optical resolution drops considerably in the outer third of the optics. Only about the center 40-50 degrees of the FOV is usable for high-resolution content.
  • Simply put, the AVP needs more than double the PPD and better optics to match the information/resolution density of typical modern computer monitors. Even then, it would be somewhat lacking in some aspects.
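To make the arithmetic above concrete, here is a small back-of-the-envelope sketch; the loss factors are the rough estimates from the bullet list, not measured values:

```python
# Rough estimate of the AVP's effective "monitor" resolution, using the
# approximate figures from the list above (illustrative, not measured).

peak_ppd = 44.4          # peak pixels per degree at the center of the optics
resample_loss = 2.0      # >2x linear loss mapping a 2-D monitor into 3-D space
usable_fov_deg = 45      # ~40-50 degree usable high-resolution sweet spot

effective_ppd = peak_ppd / resample_loss          # ~22 ppd of monitor content
effective_width = effective_ppd * usable_fov_deg  # usable "monitor" pixels across

print(round(effective_ppd, 1), round(effective_width))  # 22.2 999
```

Roughly 1,000 usable pixels across the sweet spot is why a simulated monitor has closer to 1080p-class legibility than the 4K-class legibility the panel’s raw pixel count suggests.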

Below are close-ups of the center (best case) through the AVP’s optics (left) and the same image at about the same FOV on a computer monitor (right). Things must be blown up about 2x (linearly) to be as legible on the AVP as on a good computer monitor.

Comparisons of AVP to a Computer Monitor and Quest 3 from Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3

Some current issues with monitor simulation are “temporary software issues” that can be improved, but that is not true with the information density problem.

Linus states in the video (at 17:48) that setting up the AVP is a “bit of a chore,” but it should be understood that most of the “chore” is due to current software limitations that could be fixed with better software. The most obvious problems, as identified by Linus, are that the AVP does not currently support multiple screens from a MacBook, and it does not save the virtual screen location of the MacBook. I think most people expect Apple to fix these problems at some point in the near future.

At 18:20, Linus showed the real multiple-monitor workspace of someone doing video editing (see below). While a bit extreme for some people, with two vertically stacked 4K monitors in landscape orientation and a third 4K monitor in portrait mode, it is not that far off from what I have been using for over a decade with two large side-by-side monitors (today, I have a 34″ 22:9 1440p “center monitor” and a 28″ 4K side monitor, both in landscape mode).

I want to note a comment made by Linus (with my bold emphasis):

“Vision Pro Sounds like having your own personal Colin holding a TV for you and then allowing it to be repositioned and float effortlessly wherever you want. But in practice, I just don’t really often need to do that, and neither do a lot of people. For example, Nicole, here’s a real person doing real work [and] for a fraction of the cost of a Vision Pro, she has multiple 4K displays all within her field of view at once, and this is how much she has to move her head in order to look between them. Wow.  

Again, I appreciate this thing for the technological Marvel that it is—a 4K display in a single Square inch. But for optimal text clarity, you need to use most of those pixels, meaning that the virtual monitor needs to be absolutely massive for the Vision Pro to really shine.

The bold highlights above make the point about information density. A person can see all the information at once and then, with minimal eye and head movement, see the specific information they want at that moment. Making text bigger only “works” for small amounts of content, as it slows reading with larger head and eye movements and tends to tire the eyes with movement over wider angles.

To drive the point home, the LTT video “simulates” an AVP desktop, assuming multiple monitor support, by physically placing three very large monitors side by side with two smaller displays on top. They had the simulated user wear a paint respirator mask to “simulate” the headset (and likely for comic effect). I would like to add that each of those large monitors, even at that size, will with the AVP have the resolution capability of more like a 1920×1080 monitor, or about half linearly (one-fourth in area) the content of a 4K monitor.

Quoting Linus about this part of the video (with my bold emphasis):

It’s more like having a much larger TV that is quite a bit farther away, and that is a good thing in the sense that you’ll be focusing more than a few feet in front of you. But I still found that in spite of this, that it was a big problem for me if I spent more than an hour or so in spatial-computing-land.

Making this productivity problem worse is the fact that, at this time, the Vision Pro doesn’t allow you to save your layouts. So every time you want to get back into it, you’ve got to put it on, authenticate, connect to your MacBook, resize that display, open a safari window, put that over there where you want it, maybe your emails go over here, it’s a lot of friction that our editors, for example, don’t go through every time they want to sit down and get a couple hours of work done before their eyes and face hurt too much to continue.

I would classify many of the issues Linus gave in the above quote as solvable in software for the AVP. What is not likely solvable in software are the headaches, eye strain, and low angular resolution of the AVP relative to a modern computer monitor in a typical setup.

While in the Los Angeles area speaking at the SID LA One Day conference, I stopped in at Bigscreen to try out their headset. I could wear the Bigscreen Beyond for almost three hours, whereas I typically get a splitting headache with the AVP after about 40 minutes. I don’t know why, but it is likely a combination of much less pressure on my forehead and something to do with the optics. Whatever it is, there is clearly a big difference to me. It was also much easier to drink from a can (right) with the Bigscreen’s much smaller headset.

Conclusion

It is gratifying to see the blog’s work reach a wide audience worldwide (about 50% of this blog’s audience is outside the USA). As a result of other media outlets picking up this blog’s work, the readership roughly doubled last month to about 50,000 (Google Analytics “Users”).

I particularly appreciated the Linus Tech Tips example of a real workspace in contrast to their “simulation” of the AVP workspace. It helps illustrate some of the human factors issues with having a headset simulate a computer monitor, including information density. I keep pounding on the information density issue because it seems underappreciated in much of the media reporting on the AVP.

Appendix Linus Comments on AVP’s “Weird Camera Shutter Angle”

I moved this discussion to this Appendix because it involves some technical discussion that, while it may be important, may not be of interest to everyone and takes some time to explain. At the same time, I didn’t want to ignore it as it brings up a potential issue with the AVP.

At about 16:30 in the LTT Video, Linus also states that the Apple Vision Pro cameras use “weird shutter angles to compensate for the flickering of lights around you, causing them [the AVP] to crank up the ISO [sensitivity], adding a bunch of noise to the image.”

From Wikipedia – Example of a 180-degree shutter angle

For those that don’t know, “shutter angle” (see also https://www.digitalcameraworld.com/features/cheat-sheet-shutter-angles-vs-shutter-speeds) is a hold-over term from the days of mechanical movie shutters where the shutter was open for a percentage of a 360-degree rotating shutter (right). Still, it is now applied to camera shutters, including “electronic shutters” (many large mirrorless cameras have mechanical and electronic shutter options with different effects). A 180-degree shutter angle means the shutter/camera scanning is open one-half the frame time, say 1/48th of a 1/24th of a second frame time or 1/180th of a 1/90th of a second frame rate. Typically, people talk about how different shutter angles affect the choppiness of motion and motion blur, not brightness or ISO, even though it does affect ISO/Brightness due to the change in exposure time.

I’m not sure why Linus is saying that certain lights are reducing the shutter angle, thus increasing ISO, unless he is saying that the shutter time is being reduced with certain types of light (or simply bright lights) or with certain types of flickering lights the cameras are missing much of the light. If so, it is a roundabout way of discussing the camera issue; as discussed above, the term shutter angle is typically used in the context of motion effects, with brightness/ISO being more of a side issue.

A related temporal issue is the duty cycle of the displays (as opposed to the passthrough cameras), which has a similar “shutter angle” issue. VR users have found that displays with long on-time duty cycles cause perceived blurriness with rapid head movement. Thus, they tend to prefer display technologies with low-duty cycles. However, low display duty cycles typically result in less display brightness. LED backlit LCDs can drive the LEDs harder for shorter periods to help make up for the brightness loss. However, OLED microdisplays commonly have relatively long (sometimes 100%) on-time duty cycles. I have not yet had a chance to check the duty cycle of the AVP, but it is one of the things on my to-do list. In light of Linus’s comments, I will want to set up some experiments to check out the temporal behavior of the AVP’s passthrough camera.

Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3

Introduction – Sorry, But It’s True

I have taken thousands of pictures through dozens of different headsets, and I noticed that the Apple Vision Pro (AVP) image is a little blurry, so I decided to investigate. Following up on my Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions article, this article will compare the AVP to the Meta Quest 3 by taking the same image at the same size in both headsets, and I got what many will find to be surprising results.

I know all “instant experts” are singing the praises of “the Vision Pro as having such high resolution that there is no screen door effect,” but they don’t seem to understand that the screen door effect is hiding in plain sight, or should I say “blurry sight.” As mentioned last time, the AVP covers its lower-than-human vision angular resolution by making everything bigger and bolder (defaults, even for the small window mode setting, are pretty large).

While I’m causing controversies by showing evidence, I might as well point out that the AVP’s contrast and color uniformity are also slightly lower than the Meta Quest 3 on anything but a nearly black image. This is because the issues with AVP’s pancake optics dominate over AVP’s OLED microdisplay. This should not be a surprise. Many people have reported “glow” coming from the AVP, particularly when watching movies. That “glow” is caused by unwanted reflections in the pancake optics.

If you click on any image in this article, you can access it in full resolution as cropped from a 45-megapixel original image. The source image is on this blog’s Test Pattern Page. As is the usual practice of this blog, I will show my work below. If you disagree, please show your evidence.

Hiding the Screen Door Effect in Plain Sight with Blur

The numbers don’t lie. As I reported last time in Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, the AVP’s peak center resolution is about 44.4 pixels per degree (PPD), below 80 PPD, what Apple calls “retinal resolution,” and the pixel jaggies and screen door should be visible — if the optics were sharp. So why are so many reporting that the AVP’s resolution must be high since they don’t see the screen door effect? Well, because they are ignoring the issue of the sharpness of the optics.

Two factors affect the effective resolution: the PPD of the optics and the sharpness and contrast of the optics, commonly measured by the Modulation Transfer Function (MTF; see the Appendix on MTF).

People do not see the screen door effect with the AVP because the display is slightly out of focus/blurry. Low-pass filtering/blurring is the classic way to reduce aliasing and screen door effects. When playing with the AVP’s optics, I noticed that the optics have to be almost touching the display to be in focus. The AVP’s panel appears to be recessed by about 1 millimeter (roughly judging by eye) beyond the best focus distance. This is just enough that the thinner gaps between pixels are out of focus while the pixels are only slightly blurred. There are other potential explanations for the blur, including the microlenses over the OLED panel or possibly a softening film on top of the panel. Still, focus seems to be the most likely cause of the blurring.

Full Image Pictures from the center 46 Degrees of the FOV

I’m going to start with high-resolution pictures through the optics. You won’t be able to see any detail without clicking on them to see them at full resolution, but you may discern that the MQ3 feels sharper by looking at the progressively smaller fonts. This is true even in the center of the optics (square “34” below), even before the AVP’s foveate rendering results in a very large blur at the outside of the image (11, 21, 31, 41, 51, and 61). Later, I will show a series of crops to show the central regions next to each other in more detail.

The pictures below were taken by a Canon R5 (45-megapixel) camera with a 16mm lens at f8. With a combination of window sizing and moving the headset, I created the same size image on the Apple Vision Pro and Meta Quest 3 to give a fair comparison (yes, it took a lot of time). A MacBook Pro M3 Pro was casting the AVP image, and the Meta Quest 3 was running the Immersed application (to get a flat image) mirroring a PC laptop. For reference, I added a picture of a 28″ LCD monitor taken from about 30″ to give approximately the same FOV as the image from a conventional 4K monitor (this monitor could resolve single pixels of four of these 1080p images, although you would need very good vision to see them distinctly).

Medium Close-Up Comparison

Below are crops from near the center of the AVP image (left), the 28″ monitor (center), and the MQ3 image (right). The red circle on the AVP image over the number 34 is from the eye-tracking pointer being on (also used to help align and focus the camera). The blur of the AVP is more evident in the larger view.

Extreme Close-Up of AVP and MQ3

Cropping even closer to see the details (all the images above are at the same resolution) with the AVP on the top and the MQ3 on the bottom. Some things to note:

  1. Neither the AVP nor MQ3 can resolve the 1-pixel lines, even though a cheap 1080p monitor would show them distinctly.
  2. While the MQ3 has more jaggies and the screen door effect, it is noticeably sharper.
  3. Looking at the space between the circle and the 3-pixel-wide lines pointed at by the red arrow, note that the AVP has less contrast (is less black) than the MQ3.
  4. While neither the AVP nor MQ3 can resolve the 1-pixel-wide lines correctly, the 2- and 3-pixel-wide lines, along with all the text, are significantly sharper and have higher contrast on the MQ3 than on the AVP. Yes, the effective resolution of the MQ3 is objectively better than the AVP’s.
  5. Some color moiré can be seen in the MQ3 image, a color artifact due to the camera’s Bayer filter (not seen by the eye) and the relative sharpness of the MQ3 optics. The camera can “see” the MQ3’s LCD color filters through the optics.

Experiment with Slightly Blurring the Meta Quest 3

A natural question is whether Meta should have made the MQ3’s optics slightly out of focus to hide the screen door effect. As a quick experiment, I applied a slight (Gaussian) blur to the MQ3’s image (middle image below). There is room to blur it while still having a higher effective resolution than the AVP. The AVP still has more pixels, and the person/elf’s image looks softer on the slightly blurred MQ3. The lines test for high-contrast resolution (and optical reflections), while the photograph shows what happens to a lower-contrast, more natural image with more pixel detail.

AVP’s Issues with High-Resolution Content

While Apple markets each display as having the same number of pixels as a 4K monitor (but differently shaped and not as wide), the resolution is reduced by multiple factors, including those listed below:

  1. The oval-shaped optics cut about 25-30% of the pixels.
  2. The outer part of the optics has poor resolution (about 1/3rd the pixels per degree of the center) and has poor color.
  3. A rectangular image must be inscribed inside the “good” part of the oval-shaped optics with a margin to support head movement. While the combined display might have a ~100-degree FOV, there is only about a 45- to 50-degree sweet spot.
  4. Any pixels in the source image must be scaled and mapped into the destination pixels. For any high-resolution content, this can cause more than a 2x (linear) loss in resolution and much worse if it aliases. For more on the scaling issues, see my articles on Apple Vision Pro (Part 5A, 5B, & 5C).
  5. As part of #4 above, or in a separate process, the image must be corrected for optical distortion and color as a function of eye tracking, causing further image degradation.
  6. Scintillation and wiggling of high-resolution content with any head movement.
  7. Blurring by the optics.

The net of the above, as demonstrated by the photographs through the optics shown earlier, is that the AVP can’t accurately display a detailed 1920×1080 (1080p) image.
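The scaling loss in item #4 can be demonstrated with a toy example: a 1-pixel on/off grille resampled at a non-integer scale. This is a pure-Python sketch of mine, not the AVP’s actual resampling pipeline:

```python
def resample_linear(src, dst_len):
    """Linearly interpolate a 1-D signal to a new length (toy resampler)."""
    out = []
    for i in range(dst_len):
        x = i * (len(src) - 1) / (dst_len - 1)  # fractional source position
        lo = int(x)
        hi = min(lo + 1, len(src) - 1)
        frac = x - lo
        out.append(src[lo] * (1 - frac) + src[hi] * frac)
    return out

grille = [0.0, 1.0] * 8                  # 1-pixel black/white lines
resampled = resample_linear(grille, 11)  # non-integer scale factor
print([round(v, 2) for v in resampled])  # about half the samples land on 0.5
```

Full black-to-white transitions that were one pixel wide in the source collapse toward mid-gray after resampling, which is one reason 1-pixel detail can’t survive the 2-D-to-3-D mapping.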

The AVP Lacks “Information Density”

Making everything bigger, including short messages and videos, can work for low-information-density applications. If anything, the AVP demonstrates that very high resolution is less important for movies than people think (watching movies is a notoriously bad way to judge resolution).

As discussed last time, the AVP makes up for its less-than-human angular resolution by making everything big to hide the issue. But making the individual elements bigger means the “information density” goes down: less content can be seen simultaneously, and the eyes and head must move more to see the same amount of content. Consider a spreadsheet; fewer rows and columns will be in the sweet spot of a person’s vision, and less of the spreadsheet will be visible without turning your head.

This blog’s article, FOV Obsession, discusses the issue of eye movement and fatigue using information from Thad Starner’s 2019 Photonics West AR/VR/MR presentation. The key point is that the eye does not normally want to move more than 10 degrees for an extended period. The graph below left is for a monocular display where the text does not move with the head-turning. Starner points out that a typical newspaper column is only about 6.6 degrees. It is also well known that when reading content more than ~30 degrees wide, even for a short period, people will turn their heads rather than move their eyes. Making text content bigger to make it legible will necessitate more eye and head movement to see/read the same amount of content, likely leading to fatigue (I would like to see a study of this issue).
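For readers who want to check angular sizes themselves, the angle an object subtends follows from simple trigonometry (the example numbers below are my own illustration, not from Starner’s talk):

```python
import math

def angular_width_deg(width, distance):
    """Angle subtended by an object of a given width at a given distance
    (width and distance in the same units)."""
    return math.degrees(2 * math.atan(width / (2 * distance)))

# A 24-inch-wide monitor viewed from 30 inches subtends about 44 degrees,
# already beyond the ~30 degrees at which people start turning their heads.
print(round(angular_width_deg(24, 30), 1))  # 43.6
```

Doubling the size of displayed content roughly doubles the angle it subtends (for small angles), which is why enlarging text to compensate for low PPD directly drives more eye and head movement.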

ANSI-Like Contrast

A standard way to measure contrast is using a black-and-white checkerboard pattern, often called ANSI Contrast. It turns out that with a large checkerboard pattern, the AVP and MQ3 have very similar contrast ratios. For the picture below, I made the checkerboard big enough to fill about 70 degrees horizontally of each device’s FOV. The optical reflections inside the AVP’s optics cancel out the inherently high contrast of the OLED displays inside the AVP.

The AVP Has Worse Color Uniformity than the MQ3

You may be able to tell that the AVP has a slightly pink color in the center white squares. As I move my head around, I see the pink region move with it. Part of the AVP’s processing is used to correct color based on eye tracking. Most of the time, the AVP does an OK job, but it can’t perfectly correct for color issues with the optics, which becomes apparent in large white areas. The issues are most apparent with head and eye movement. Sometimes, by Apple’s admission, the correction can go terribly wrong if it has problems with eye tracking.

Using the same images above and increasing the color saturation in both images by the same amount makes the color issues more apparent. The MQ3 color uniformity only slightly changes in the color of the whites, but the AVP turns pink in the center and cyan on the outside.

The AVP’s “aggressive” optical design has about 1.6x the magnification of the MQ3’s and, as discussed last time, uses a curved quarter waveplate (QWP). Waveplates modify polarized light and are wavelength- (color-) and angle-of-light-dependent. Having repeatedly switched between the AVP and MQ3, I find the MQ3 has better color uniformity, which is particularly noticeable when taking one off and quickly putting the other on.

Conclusion and Comments

As a complete product (more on this in future articles), the AVP is superior to the Meta Quest Pro, Quest 3, or any other passthrough mixed reality headset. Still, the AVP’s effective resolution is less than the pixel differences would suggest due to the softer/blurrier optics.

While the AVP’s pixel resolution is better than the Quest Pro’s and Quest 3’s, its effective resolution after the optics is worse on high-contrast images. Due to its somewhat higher PPD, the AVP looks better than the MQP and MQ3 on “natural,” lower-contrast content. The AVP image is much worse than a cheap monitor when displaying high-resolution, high-contrast content. Effectively, what the AVP supports is multiple low-angular-resolution monitors.

And before anyone makes me out to be a Meta fanboy, please read my series of articles on the Meta Quest Pro. I’m not saying the MQ3 is better than the AVP. I am saying that the MQ3 is objectively sharper and has better color uniformity. Apple and Meta don’t get different physics; they make different trade-offs, which I am pointing out.

The AVP and any VR/MR headset will fare much better with “movie” and video content with few high-contrast edges; most “natural” content is also low in detail and pixel-to-pixel contrast (and why compression works so well with pictures and movies). I must also caution that we are still in the “wild enthusiasm stage,” where the everyday problems with technology get overlooked.

In the best case, the AVP in the center of the display gives the user a ~20/30 vision view of its direct (non-passthrough) content and worse when using passthrough (20/35 to 20/50). Certainly, some people will find the AVP useful. But it is still a technogeek toy. It will impress people the way 3-D movies did over a decade ago. As a reminder, 3-D TV peaked at 41.45 million units in 2012 before disappearing a few years later.

Making a headset display is like n-dimensional chess; more than 20 major factors must be improved, and improving one typically worsens other factors. These factors include higher resolution, wider FOV, peripheral vision and safety issues, lower power, smaller, less weight, better optics, better cameras, more cameras and sensors, and so on. And people want all these improvements while drastically reducing the cost. I think too much is being made about the cost, as the AVP is about right regarding the cost for a new technology when adjusted for inflation; I’m worried about the other 20 problems that must be fixed to have a mass-market product.

Appendix – Modulation Transfer Function (MTF)

MTF is measured by putting in a series of lines of equal width and spacing and measuring the difference between the white and black as the size and spacing of the lines change. By convention, people typically use the 50% contrast point to specify the MTF. But note that contrast is defined as (Imax-Imin)/(Imax+Imin), so to achieve 50% contrast, the black level must be 1/3rd of the white level. The figure (below) shows how the response changes with the line spacing.
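The 1/3rd black-level claim follows directly from the contrast definition; a quick check:

```python
def contrast(i_max, i_min):
    """Contrast as defined above: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

# With the black level at 1/3rd of the white level, contrast is 50%:
print(round(contrast(1.0, 1.0 / 3.0), 3))  # 0.5
```

Note that by this definition, even a black level as high as 1/3rd of white still counts as 50% contrast, which is why MTF50 is a fairly forgiving threshold.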

The MTF of the optics is reduced by both the sharpness of the optics and any internal reflections that, in turn, reduce contrast.

CES (Pt. 3), Xreal, BMW, Ocutrx, Nimo Planet, Sightful, and LetinAR

Update 1/28/2024 – Based on some feedback from Nimo Planet, I have corrected the description of their computer pod.

Introduction

The “theme” for this article is companies I met with at CES with optical see-through Augmented and Mixed Reality using OLED microdisplays.

I’m off to SPIE AR/VR/MR 2024 in San Francisco as I release this article. So, this write-up will be a bit rushed and likely have more than the usual typos. Then, right after I get back from the AR/VR/MR show, I should be picking up my Apple Vision Pro for testing.

Xreal

Xreal (formerly Nreal) says they shipped 350K units in 2023, more than all other AR/MR companies combined. They had a large booth on the CES floor, which was very busy. They had multiple public and private demo stations.

From 2021 KGOnTech Teardown

This blog has followed Xreal/Nreal since its first appearance at CES in 2019. Xreal uses an OLED microdisplay in a “birdbath” optical architecture first made popular by (the now defunct) Osterhout Design Group (ODG) with their R8 and R9, which were shown at CES in 2017. For more on this design, I would suggest reading my 2021 teardown articles on the Nreal first product (Nreal Teardown: Part 1, Clones and Birdbath Basics, Nreal Teardown: Part 2, Detailed Look Inside, and Nreal Teardown: Part 3, Pictures Through the Lens).

Inherent in the birdbath optical architecture Xreal still uses, the glasses block about 70% of the real-world light, acting like moderately dark sunglasses. About 10% of the display’s light makes it to the eye, which is much more efficient than waveguides, which in turn are much thinner and more transparent. Xreal claims their newer designs support up to 500 nits, meaning the Sony micro-OLEDs must output about 5,000 nits.

With investment, volume, and experience, Xreal has improved its optics and image quality, but it can’t improve much beyond the inherent limitations of a birdbath, particularly in terms of transparency. Xreal recently added an LCD dimming shutter to selectively block more or all of the real world with their new Xreal Air 2 Pro and their latest Air 2 Ultra, for which I was given a demo at CES.

The earlier Xreal/Nreal headsets were little more than 1920×1080 monitors you wore with a USB-C connection for power and video. Each generation has added more “smarts” to the glasses. The Air 2 Ultra includes dual 3-D IR camera sensors for spatial recognition. Xreal and (to be discussed later) Nimo, among others, have already picked up on Apple’s “Spatial Computing,” referring to their products as affordable ways to get into spatial computing.

Most of the newer headsets are driven either by a cell phone or Xreal’s “Beam” compute module, which can mirror or cast one or more virtual displays from a computer, cell phone, or tablet. While there may be more monitors virtually, they are still rendered on a 1920×1080 display device. I believe (I forgot to ask) that Xreal is using internal sensors to detect head movement to virtualize the monitors with head movement.

Xreal’s Air 2 Ultra demo showcased the new spatial sensors’ ability to recognize hand and finger gestures. Additionally, the sensors could read “bar-coded” dials and slides made from cardboard.

BMW AR Ride Concept (Using Xreal Glasses)

In addition to seeing Xreal devices on their own, I was invited by BMW to take a ride trying out their Augmented Reality HUD on the streets around the convention center. A video produced by BMW gives a slightly different and abbreviated trip. I should emphasize that this is just an R&D demonstration, not a product that BMW plans to introduce. Also, BMW made clear that they would be working with other makes of headsets but that Xreal was the most readily available.

To augment using the Xreal glasses, BMW mounted a head-tracking camera under the rearview mirror. This allows BMW to lock the generated image to the physical car. Specifically, it allowed them to (selectively) block/occlude parts of the virtual image hidden behind the front A-pillar of the car. Not shown in the pictures from BMW below (click on the pictures to see them bigger) is that the images would start in the front window, be hidden by the A-pillar, and then continue in the side window.

BWM’s R&D is looking at driver and passenger AR glasses use. They discussed that they would have different content for the driver, which would have to be simplified and more limited than what they could show the passenger. There are many technical and government/legal issues (all 50 states in the U.S. have different laws regarding HUD displays) with supporting headsets on drivers. From a purely technical perspective, a hear-worn AR HUD has many advantages and some disadvantages versus a fixed HUD on the windshield or dash combiner (too much to get into in this quick article).

Ocutrx (for Low-Vision and other applications)

Ocutrx’s OcuLenz also uses “birdbath” optics. The OcuLenz was originally designed to support people with “low vision,” especially people with macular degeneration and other eye problems that block parts of a person’s vision. People with macular degeneration lose the high-resolution, high-contrast, and color-sensitive parts of their vision. They must rely on other parts of the retina, commonly called peripheral vision (although it may include more than just what is technically considered peripheral vision).

A low-vision headset must have a wide FOV to reach the outer parts of the retina. They must magnify, increase color saturation, and improve contrast over what a person with normal vision would want to see. Note that while these people may be legally blind, they still can see, particularly with their peripheral vision. This is why a headset that still allows them to use their peripheral vision is important.

About 20 million people in the US alone have what is considered “low vision,” and about 1 million more people develop low vision each year as the population ages. It is the biggest identifiable market I know of today for augmented reality headsets. But there is a catch that needs to be fixed for this market to be served. By the very nature of the people involved, having low vision and often being elderly, they need a lot of professional help while often being on a fixed or limited income. Unfortunately, private or government (Medicare/Medicaid) insurance will rarely cover either the headset cost or the professional support required. There have been bills before Congress to change this, but so far, nothing has happened of which I am aware. Without a way to pay for the headsets, the volumes are low, which makes the headsets more expensive than they need to be.

In the past, I have reported on Evergaze’s seeBoost, which exited this market while developing their second-generation product for the economic reasons (lack of insurance coverage) above. I have also discussed NuEyes with Bradley Lynch in a video after AWE 2022. The economic realities of the low-vision market cause companies like NuEyes and Ocutrx to look for other business opportunities for their headsets. It is a frustrating situation, knowing that the technology could help so many people. I hope to cover this topic in more detail in the future.

Nimo Planet (Nimo)

Nimo Planet (Nimo) makes a small computer, with a USB-C port for power and video, that acts as a spatial mouse pointer for AR headsets. It replaces the need for a cell phone and can mirror/cast video from other devices to the headset. Still, the Nimo Core is a fully standalone computer running Nimo OS, which simultaneously supports Android, Web, and Unity apps; no other host computer is needed.

According to Nimo, every other multi-screen solution on the market is built on web platforms or as a Unity app, which limits them to running only web views. Nimo OS adds a new stereo-rendering and multi-window architecture to AOSP so it can run multiple Android, Unity, and Web apps simultaneously.

Nimo developed its glasses based on LetinAR optics and also supports other AR glasses. Most notably, it just announced a joint development agreement with Rokid.

I got a brief demonstration of Nimo’s multi-window support on an AR headset. Nimo uses the inertial sensors in the headset to detect head movement and moves the view of the multiple windows accordingly. It is like looking at multiple monitors through a 1920×1080 window: no matter the size or number of virtual monitors, they are clipped to that 1920×1080 view, and you move your head to select what you see. I discussed some of the issues with simulating virtual monitors with head-mounted displays in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous, and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous.
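To make the “window onto a bigger virtual desktop” idea concrete, here is a minimal sketch of how head yaw could pan a fixed-resolution display across a wider multi-monitor desktop. This is purely my illustration, not Nimo’s actual code; the desktop width, the pixels-per-degree mapping, and the function names are all my assumptions.

```python
# Hypothetical sketch (not Nimo's implementation): a fixed 1920x1080 display
# acts as a window onto a wider virtual desktop; head yaw pans the window.
VIEW_W, VIEW_H = 1920, 1080          # physical display resolution
DESKTOP_W, DESKTOP_H = 5760, 1080    # e.g., three 1920-wide virtual monitors
PIXELS_PER_DEGREE = 40               # assumed mapping of head yaw to panning

def visible_region(yaw_degrees: float) -> tuple:
    """Return (x0, y0, x1, y1) of the desktop region shown on the display.

    yaw_degrees = 0 centers the view; turning the head pans the window,
    clamped so the window never leaves the virtual desktop.
    """
    center_x = DESKTOP_W / 2 + yaw_degrees * PIXELS_PER_DEGREE
    x0 = int(min(max(center_x - VIEW_W / 2, 0), DESKTOP_W - VIEW_W))
    return (x0, 0, x0 + VIEW_W, VIEW_H)

# Looking straight ahead shows the middle monitor; a large head turn
# clamps at the desktop edge.
print(visible_region(0))     # (1920, 0, 3840, 1080)
print(visible_region(-90))   # clamped at the left edge: (0, 0, 1920, 1080)
```

Whatever the real mapping, the key point stands: the user only ever sees a 1920×1080 crop, however many virtual monitors are defined.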

Sightful

Sightful’s device is similar to the Nimo Planet type of device in some ways, but with Sightful, the computer is built inside the keyboard and touchpad, making it a full-fledged computer. Alternatively, Sightful can be viewed as a laptop whose display is a pair of AR glasses rather than a flat panel.

Like Nimo, Xreal’s Beam, and many other new mixed reality devices, Sightful supports multiple windows. I don’t know whether it has cameras for 3-D sensing, so I suspect it uses inertial sensors to detect head movement.

Sightful’s basic display specs resemble other birdbath AR glasses designs from companies like Xreal and Rokid. I have not had a chance, however, to compare them seriously.

LetinAR

I have been writing about LetinAR since 2018. LetinAR started with a “Pin Mirror” type of pupil replication. They have now moved on to a series of what I will call “horizontal slat pupil replicators.” They also use total internal reflection (TIR) and a curved mirror to move the focus of the image from an OLED microdisplay before it goes to the various pupil-expanding slats.

While LetinAR’s slat design improves image quality over its earlier pin mirrors, it is still imperfect. When looking through the lenses (without a virtual image), the view is a bit “disturbed” and shows what appear to be diffraction-line effects. Similarly, you can perceive gaps or double images depending on your eye location and movement. LetinAR continues to work on improving this technology. While their image quality is not as good as the birdbath designs, they offer much better transparency.

LetinAR seems to be making progress with multiple customers, including Jorjin, which was demonstrating in the LetinAR booth; Sharp, which had a big demonstration in its own booth (while Sharp didn’t say whose optics were in the demo, they were obviously LetinAR’s – see pictures below); and the Nimo headset discussed above.

Conclusions

Sorry, there is no time for major conclusions today. I’m off to the AR/VR/MR Conference and Exhibition.

I will note that regardless of the success of the AVP, Apple has already succeeded in changing the language of augmented and mixed reality. In addition to almost everyone in AR and mixed reality talking about “AI,” many companies now use “Spatial Computing” to refer to their products in their marketing.

CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, Xander, EverySight, Mojie, TCL color µLED

Introduction

As I wrote last time, I met with nearly 40 companies at CES, of which 31 I can talk about. This time, I will go into more detail and share some photos. I picked the companies for this article because they seemed to link together. The Sony XR headset and the way it fits on the user’s head is similar to the newer DigiLens Argo headband, and DigiLens and the other companies covered here use diffractive waveguides and emphasize lightweight, glasses-like form factors.

I would like to remind readers of my saying that “all demos at conferences are magic shows,” something I warned about near the beginning of this blog (in Cynics Guide to CES – Glossary of Terms). I generally no longer try to take “through the optics” pictures at CES; it is difficult to get good, representative photos in the short time available with all the running around and without the proper equipment. I made an exception for the TCL color MicroLED glasses because the pictures readily came out better than expected. But at the same time, I was only using test images provided by TCL and not test patterns I selected. Generally, the toughest test patterns (such as those on my Test Pattern Page) are simple. For example, if you put up a solid white image and see color in the white, you know something is wrong. When you put up colorful pictures with a lot of busy detail (like the colorful parrot in the TCL demo), it is hard to tell what, if anything, is wrong.

The SPIE AR/VR/MR 2024 in San Francisco is fast approaching. If you want to meet, contact me at meet@kgontech.com. I hope to get one or two more articles on CES out before leaving for the AR/VR/MR conference.

Sony XR and DigiLens Headband Mixed Reality (with contrasts to Apple Vision Pro)

Sony XR (and others compared to Apple Vision Pro)

This blog has expressed concerns about the Apple Vision Pro’s (AVP) poor mechanical ergonomics, its complete blocking of peripheral vision, and the terrible placement of its passthrough cameras. My first reaction was that the AVP looked like it was designed by a beginner with too much money and an emphasis on style over functionality. What I consider Apple’s obvious mistakes seem to be addressed in the new Sony XR headset (SonyXR).

The SonyXR shows much better weight distribution, with (likely) the battery and processing moved to a back “bustle” and a rigid frame to transfer the weight for balance. It has been well established with designs such as the Hololens 2 and Meta Quest Pro that this approach leads to better comfort. It can also move a significant amount of the power dissipation to the back for better heat management, thanks to a second surface radiating heat.

The bustle on the back design also avoids the terrible design decision by Apple to have a snag hazard and disconnection nuisance with an external battery and cable.

The SonyXR is shown with enough eye relief to wear typical prescription glasses. This will be a major advantage in many potential XR/MR headset uses, making the headset easier to share between users. This is particularly important for use cases that are not all-day, such as one-time events (e.g., museum tours and other special events). Supporting enough eye relief for glasses is more optically difficult and requires larger optics for the same field of view (FOV).

Another major benefit of the larger eye relief is that it allows peripheral vision. Peripheral vision is considered to start at about 100 degrees, roughly where a typical VR headset’s FOV stops. While peripheral vision is low in resolution, it is sensitive to motion; it alerts a person to movement so they will turn their head. The saying goes that peripheral vision evolved to keep humans from being eaten by tigers. Translated to the modern world, it keeps you from being hit by moving machinery or running into things that might hurt you.

Another good feature shown in the Sony XR is the flip-up screen. There are so many times when you want to get the screen out of your way quickly. The first MR headset I used that supported this was the Hololens 2.

Another feature of the Hololens 2 is the front-to-back head strap (optional but included). Longtime VR gamer and YouTube personality Brad Lynch of the SadlyItsBradley YouTube channel has tried many VR-type headsets and optional headbands/straps. Brad says that front-to-back straps/pads generally provide the most comfort with extended use. Side-to-side straps, such as on the AVP, generally don’t provide the support where it is needed most. Brad has also said that while a forehead pad, such as on the Meta Quest Pro, helps, headset straps (which are not directly supported on the MQP) are still needed. It is not clear whether the Sony XR headset will have over-the-head straps. Even companies that support/include overhead straps generally don’t show them in the marketing photos and demos as they mess up people’s hair.

The SonyXR cameras are located closer to the user’s eyes. While there are no perfect placements for the two cameras, the farther they are from the actual locations of the eyes, the more distortion will result when making perspective/depth-correct passthrough (for more on this subject, see: Apple Vision Pro Part 6 – Passthrough Mixed Reality (PtMR) Problems).

Lynx R1

Lynx also used the headband with a forehead pad, the back bustle, and the flip-up screen. Lynx also supports enough eye relief for glasses and good peripheral vision, and it locates its passthrough cameras near where the eyes will be when in use. Unfortunately, I found a lot of problems with the optics Lynx chose for the R1, designed by the optics firm Limbak (see also my Lynx R1 discussion with Brad Lynch). Apple has since bought Limbak, and it is likely Lynx will move on to other optical designs.

Digilens Argo New Head Band Version at CES 2024

I wrote a lot about Digilens Argo in last year’s coverage of CES and the AR/VR/MR conference in DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8). In the section Skull-Gripping “Glasses” vs. Headband or Open Helmet, I discussed how Digilens had missed an opportunity for both comfort and supporting the wearing of glasses. Digilens said they took my comments to heart and developed a variation with the rigid headband and flip-up display shown in their suite at CES 2024. Digilens said that this version let them expand their market (and no, I didn’t get a penny for my input).

The Argos are light enough that they probably don’t need an over-the-head band for extra support. If the headband were a ground-up design rather than a modular variation, I would have liked to see the battery and processing moved to a back bustle.

While on the subject of Digilens, they also had a couple of nice static displays. Pictured below are variations in waveguide thickness they support. Generally, image quality and field of view can be improved by supporting more waveguide layers (with three layers supporting individual red, green, and blue waveguides). Digilens also had a static display using polarized light to show different configurations they can support for the entrance, expansion, and exit gratings (below right).

Vuzix

Vuzix has been making wearable heads-up displays for about 26 years and has a wide variety of headsets for different applications. Vuzix has been discussed on this blog many times. Vuzix primarily focuses on lightweight and small form factor glasses and attachments with displays.

Vuzix Ultralite Sport (S) and Forward Projection (Eye Glow) Elimination

New this year at CES was Vuzix’s Ultralite Sport (S) model. In addition to being more “sporty” looking, its waveguides are designed to eliminate forward projection (commonly referred to as “eye glow”). Eye glow was famously an issue with most diffractive waveguides, including the Hololens 1 & 2 (see right), Magic Leap 1 & 2, and previous Vuzix waveguide-based glasses.

Vuzix appears to be using the same method that both Digilens and Dispelix discussed in their AR/VR/MR 2022 papers that I discussed with Brad Lynch in a YouTube video after AR/VR/MR 2022 and in my blog article, DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8) in the sections on Eye Glow.

If the waveguides are canted (tilted) while the exit gratings are still designed to project toward the eye, the forward projection will be directed downward at twice the angle at which the waveguides are canted. Thus, with only a small tilt of the waveguides, the forward projection will fall far below the eye level of onlookers (unless they are on the ground).
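The “twice the tilt angle” behavior follows from the mirror law (a reflecting surface tilted by θ deviates the reflected ray by 2θ). A small sketch with my own illustrative numbers (not any vendor’s design data) shows why even a modest cant is enough:

```python
# Geometry check of the "tilt the waveguide" eye-glow mitigation (my sketch of
# the idea, not a vendor design): a surface canted by theta redirects its
# forward "eye glow" by 2*theta, pushing the stray projection below eye level.
import math

def eye_glow_drop_cm(cant_degrees: float, viewer_distance_m: float) -> float:
    """How far below horizontal the forward projection lands at a given
    viewer distance, for a waveguide canted by cant_degrees."""
    glow_angle = 2 * cant_degrees  # mirror law: ray deviates by twice the tilt
    return viewer_distance_m * 100 * math.tan(math.radians(glow_angle))

# A 10-degree cant sends the glow 20 degrees downward; at 2 m away that is
# roughly 73 cm below the wearer's eye line.
print(round(eye_glow_drop_cm(10, 2.0)))  # 73
```

The practical takeaway: doubling works in the designer’s favor, so only a small cosmetic tilt of the lenses is needed to keep the glow out of other people’s line of sight.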

Ultra Light Displays with Audio (Vuzix/Xander) & Solos

Last year, Vuzix introduced its lightweight (38-gram) Z100 Ultralite, which uses a 640×480 green (only) MicroLED microdisplay. Xander has used the lightweight Vuzix Z100 to develop speech-to-text glasses for people with hearing difficulties (Xander was in the AARP booth at CES).

While a green-only display with low resolution by today’s standards is not something you will want to watch movies on, there are many uses for a limited amount of text and graphics in a lightweight, small form factor. For example, I got to try out Solos audio glasses, which, among other things, use ChatGPT to do on-the-fly language translation. It’s not hard to imagine that a small display could help clarify what is being said in Solos and similar products, including the Amazon Echo Frames and the Ray-Ban Meta Wayfarer.

Mojie (Green) MicroLED with Plastic Waveguide

Like the Vuzix Z100, the Mojie (a trademark of Meta-Bounds) uses green-only Jade Bird Display 640×480 MicroLEDs with waveguide optics. The big difference is that Mojie, along with the Oppo Air 2 and Meizu MYVU, uses Meta-Bounds’ resin plastic waveguides. Unfortunately, I didn’t get to the Mojie booth until near closing time at CES, but they were nice enough to give me a short demo. Overall, in weight and size, the Mojie AR glasses are similar to the Vuzix Z100, but I didn’t have the time or demo content to judge image quality. Generally, resin plastic diffractive waveguides to date have had lower image quality than ones on a glass substrate.

I have no idea what resin plastic Meta-Bounds uses or if they have their own formula. Mitsui Chemicals and Mitsubishi Chemicals, both of Japan, are known to be suppliers of resin plastic substrate material.

EverySight

ELBIT F35 Helmet and Skylens

Everysight (the company, not the EyeSight front display feature on the Apple Vision Pro) has been making lightweight glasses, primarily for sports, since about 2018. Everysight spun out of ELBIT, a major defense (including the F35 helmet HUD) and commercial products company. Recently, ELBIT had its AR glasses HUD approved by the FAA for use in the Boeing 737NG series. Everysight uses an optics technology I call “precompensated off-axis.” Everysight (and ELBIT) have an optics engine that projects onto a curved front lens with a partial mirror coating, and the precompensation optics of the projector correct for the distortion from hitting the curved mirror off-axis.

The Everysight/ELBIT technology is much more optically efficient than waveguide technologies and more transparent than “birdbath” technologies (the best-known birdbath technology today being Xreal’s). The amount of light from the display versus transparency is a function of the semi-transparent mirror coating. The downside of the Everysight optical system in small form-factor glasses is that the FOV and eyebox tend to be smaller. The new Everysight Maverick glasses have a 22-degree FOV and produce over 1,000 nits using a 5,000-nit 640×400-pixel full-color Sony Micro-OLED.

The front lens/mirror elements are inexpensive and interchangeable. But the most technically interesting thing is that Everysight has figured out how to build prescriptions into the front lens. They use a “push-pull” optics arrangement similar to some waveguide headsets (most notably the Hololens 1 & 2 and Magic Leap). The optical surface on the eye side of the lens corrects the virtual display for the eye, and the outside surface of the lens is curved so that, in combination, it provides the vision correction for the real world.

TCL RayNeo X2 and Ray Neo X2 Lite

As I cautioned in the introduction, I generally no longer try to take “through the optics” pictures at CES, but I got some good photos through TCL’s RayNeo X2 and RayNeo X2 Lite. While the two products sound very similar, the image quality of the “Lite” version, which switched to Applied Materials (AMAT) diffractive waveguides, was dramatically better.

The older RayNeo X2 was available to see on the show floor and had problems, particularly with the diffraction gratings capturing stray light and with general color quality. I was given a private showing of the newly announced “Lite” version using the AMAT waveguides, and not only was it lighter, but the image quality was much better. In the pictures below, the RayNeo X2 (with an unknown waveguide, on the left) captures stray overhead light (see streaks at the arrows), while the Lite model (with the AMAT waveguide) does not exhibit these streaks, even though the lighting is similar. Although hard to see in the pictures, the color uniformity with the AMAT waveguide also seems better (though not perfect, as discussed later).

Both RayNeo models use three separate Jade Bird Display red, green, and blue MicroLEDs (inorganic) with an X-cube color combiner. X-cubes have long been used in larger three-panel LCD and LCOS projectors and are formed from four prisms with different dichroic coatings glued together. Jade Bird Display has been demoing this type of color combiner since at least AR/VR/MR 2022 (above). Having worked with three-panel LCOS projectors in my early days at Syndiant, I know the difficulty of aligning three panels to an X-cube combiner. The alignment is particularly difficult at the size of these MicroLED displays with their small pixels.

I must say that the image quality of the TCL RayNeo X2 Lite exceeded my expectations. Everything seems well aligned in the close-up crop from the same parrot picture (below). Also, the color is relatively good, without the wide pixel-to-pixel brightness variation I have seen in past MicroLED displays. While this is quite an achievement for a MicroLED system, the RayNeo X2 Lite has only a modest 640×480 display with a 30-degree diagonal FOV. These specs work out to about 26 pixels per degree, or about half the angular resolution of many other headsets. The picture below was taken with a Canon R5 with a 16mm lens, which, as it turns out, has a resolving power close to good human vision.
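The ~26 pixels-per-degree figure is simple arithmetic worth showing: 640×480 gives an 800-pixel diagonal, spread over a 30-degree diagonal FOV. The sketch below assumes pixels are spread uniformly across the FOV, which is only approximately true for real optics:

```python
# Back-of-envelope check of the ~26 pixels-per-degree (ppd) figure for a
# 640x480 display with a 30-degree diagonal FOV (uniform-spread assumption).
import math

def pixels_per_degree(h_px: int, v_px: int, diag_fov_deg: float) -> float:
    diag_px = math.hypot(h_px, v_px)   # 640x480 -> 800 pixels on the diagonal
    return diag_px / diag_fov_deg

print(round(pixels_per_degree(640, 480, 30), 1))  # 26.7
```

For comparison, many current headsets land in the 40-to-50-ppd range, which is why about 26 ppd is roughly half the angular resolution of those devices.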

Per my warning in the introduction, all demos are magic shows. I don’t know how representative this prototype will be of units in production, and perhaps most importantly, I did not try my test patterns but used the images provided by TCL.

Below is another picture of the parrot taken against a darker background. Looking at the wooden limb under the parrot, you will see it is somewhat reddish on the left and greenish on the right. This might indicate color shifting due to the waveguide, as is common with diffractive waveguides. Once again, taking quick pictures at shows (all these were handheld) and without controlling the source content, it is hard to know. This is why I would like to acquire units for extended evaluations.

The next two pictures, taken against a dark background and a dimly lit room, show what I think should be a white text block on the top. But the text seems to change from a reddish tint on the left to a blueish tint on the right. Once again, this suggests some color shifting across the diffractive waveguide.

Below is the same projected image taken with identical camera settings but with different background lighting.

Below is the same projected flower image with the same camera settings and different lighting.

Another thing I noticed with the Lite/AMAT waveguides is significant front projection/eye glow. I suspect this will be addressed in the future, as has been demonstrated by Digilens, Dispelix, and Vuzix, as discussed earlier.

Conclusions

The Sony XR headset seems to fix many of the beginner mistakes Apple made with the AVP. The Digilens Argo last year seemed caught between being a full-featured headset and a glasses form factor; the new Argo headband looks like a good industrial form factor that lets people wear normal glasses and flip the display out of the way when desired.

Vuzix, with its newer Ultralite Z100 and Sport models, is emphasizing lightweight functionality. Vuzix and the other waveguide AR glasses makers have not given a clear path for supporting people who need prescription glasses. The most obvious approach is some form of “push-pull” with lenses before and after the waveguide. Luxexcel had a way to 3-D print prescription push-pull lenses, but Meta bought them. Add Optics (formed by former Luxexcel employees) has another approach using 3-D printed molds. Everysight addresses prescription lenses with the somewhat different push-pull approach that its optical design necessitates.

While not perfect, the TCL color MicroLED, at least in the newer “Lite” version, was much better than I expected. At the same time, one has to recognize that the resolution, FOV, and color uniformity are still not up to those of some other technologies; to appreciate it, one has to appreciate the technical difficulty. I also want to note that Vuzix has said it is also planning color MicroLED glasses with three microdisplays, but it is not clear whether they will use an X-cube or a waveguide-based combiner approach.

The moderate success of smart audio glasses may be pointing the way for these ultra-light glasses form factors toward a consumer AR product. One can readily see how adding some basic text and graphics would be a further benefit. We will know this category has become successful if Apple enters the market 😁.

Apple Vision Pro Part 6 – Passthrough Mixed Reality (PtMR) Problems

Introduction

I had planned to wrap up my first-pass coverage of the Apple Vision Pro (AVP) with a summary and conclusions based on the prior articles. But the more I thought about it, the more Apple’s approach to Passthrough Mixed Reality (PtMR) seemed so egregiously bad that it should be broken out and discussed separately.

Apple Prioritized EyeSight “Gimmick” Over Ergonomics and Functionality

There are some features, particularly surrounding camera passthrough, where there should have been an internal battle between those who wanted the EyeSight™ gimmick and what I would consider more important functionality. The backers of EyeSight must have won: they forced the horrible location of the passthrough cameras, accepted optical distortion from the curved glass in front of all the forward-facing cameras and sensors, put a fragile, hard-to-replace piece of glass on the front where it can easily be scratched or broken, and added weight to the front, where it is least desired. Also, as discussed later, misaligning the passthrough cameras with the eyes has negative effects on the human visual system.

The negative effects of EyeSight are so bad for so many fundamental features that someone in power with little appreciation for the technical difficulties must have forced the decision (at least, that is the only way I can conceive of it happening).  People inside the design team must have known it would cause serious problems. Supporting passthrough mixed reality (PtMR) is hard enough without deliberately creating problems.

Meta Quest 3 Camera Location

As noted in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, Meta is locating the soon-to-be-released Quest 3’s main passthrough cameras closer to the center of view of the eyes. Fixed cameras in front of the eyes won’t be perfect and will still require digital correction for better functional use, but it does appear that Meta is taking PtMR more seriously than it did with the Meta Quest Pro and Quest 2.

I’m looking forward to getting a Meta Quest 3 to test when it is released soon.

Definitions of AR/VR/MR and PtMR

The terms used to describe mixed reality have been very fluid over the last few years. Before the introduction of Hololens, “augmented reality” meant any headset that displayed virtual content on a see-through display. For example, just before Hololens went on sale, Wired in 2015 titled their article (with my bold emphasis): Microsoft Shows HoloLens’ Augmented Reality Is No Gimmick. With the introduction of Hololens, the term “Mixed Reality” was used to distinguish AR headsets with SLAM that lock the virtual to the real world. “AR” headsets without SLAM are sometimes called AR Heads-Up Displays (HUDs), but these get confused with automotive HUDs. Many today refer to a see-through headset without SLAM as “AR” and one with SLAM as “MR,” whereas previously, the term “AR” covered both with and without SLAM.

Now we have the added confusion of optical see-through (e.g., Hololens) and camera passthrough “Mixed Reality.” While they may be trying to accomplish similar goals, they are radically different in their capabilities. Rather than constantly typing “passthrough” before MR, I abbreviate it as PtMR.

In Optical AR, the Virtual Content Augments the Real World – With PtMR, the Real World Augments the Virtual Content

Optical MR prioritizes seeing the real world at the expense of the virtual content. The real world is in perfect perspective, at the correct focus distance, with no limitation by a camera or display on brightness, with zero lag, etc. If done well, there is minimal light blocking and distortion of the real world and little blocking of the real-world FOV.

PtMR, on the other hand, prioritizes virtual image quality at the expense of the real world, both in how things behave in 3-D space (focus and perspective) and in image quality.

We are likely many decades away, if ever, from passing what Douglas Lanman of Meta calls their Visual Turing Test (see also the video linked here).

Meta’s demonstrations at Siggraph 2023 of their Flamera, with perspective-correct passthrough, and Butterscotch, which addresses vergence-accommodation conflict, served to show how far PtMR is from optical see-through. Each large prototype addresses only one problem individually, and even then, there are severe restrictions. The Flamera has very low-resolution passthrough, and Butterscotch only supports a 50-degree FOV.

It is also interesting that Butterscotch moves back from Half Dome 3’s electronic LCD variable focus to electro-mechanical focusing to address VAC. As reported in Mixed Reality News, “However, the technology presented problems with light transmission and image quality [of the electronic LCD approach], so Meta discarded it for Butterscotch Varifocal at the expense of weight and size.”

All of this work is trying to solve some of the many problems created by PtMR that don’t exist with optical MR. PtMR does not “solve” the issues with optical MR; it just creates a long list of massively hard new problems. Optical AR has issues with virtual image quality, very large FOVs, and hard-edge occlusion (see my article Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users). I often say, “What is hard in optical MR is easy in PtMR, and vice versa.”

Demo or Die

Meta and others seem to use Siggraph to show off research work that is far from practical. As Lanman said of the Flamera and Butterscotch VAC demos at Siggraph 2023, Meta’s Reality Labs has a “Demo or Die” philosophy. Presumably, they would not be tipping off their competition on concepts they plan to use within a few years. To be clear, I’m happy to see companies showing off their technical prowess, but at the same time, I want to put it in perspective.

Cosmetic vs. Functional Passthrough PtMR

JayzTwoCents’ video on the HTC Vive XR Elite has a presentation by Phil on what he calls “3D Depth Projection” (others refer to it as “perspective correct”). In the video (sequence of clips below), Phil demonstrates that because the passthrough video was not corrected in scale, position, and perspective in 3-D space, it deprives him of the hand-eye coordination to catch a bottle tossed to him.

I discussed this issue in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, in the section “The method in the Madness: MQP prioritizes 3-D spatial over image quality.”

Phil demonstrated in the video (and in a sequence of clips below) that with the Meta Quest Pro, even though the image quality is much worse and distorted due to the 3D projection, he can at least catch the bottle.

I would classify the HTC Vive XR Elite as having a “cosmetic passthrough.” While its image quality is better (but still not very good), it is non-functional. While the Meta Quest Pro’s image quality is lousy, it is at least somewhat functional.

Something else to notice in the MQP frame sequence above is that there are both lag and accuracy errors in hand tracking.

Effects on Vision with Long-Term Use

A less obvious issue is that the human visual system will start adapting to any camera placement and then have to re-adapt after the headset is removed. This was briefly discussed in AVP Part 2 in the section titled Centering correctly for the human visual system, which references Steve Mann’s March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear.” In Steve Mann’s early days, there was no processing power to digitally shift the camera images toward the eyes’ perspective. Even with today’s processing, I’m not sure how well the correction will work or how a distorted view will affect people’s visual perception during and after long exposure. As with most visual effects, it will vary from one individual to another.

Meta Flamera Light Field Camera at Siggraph 2023

As discussed in AVP Part 2 and Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, having the passthrough cameras as close as possible to being coaxial to the eyes (among other things) is highly desirable.

To reduce the undesired effects on human vision caused by cameras not aligning with the eyes, some devices, such as Meta’s Quest 2 and Quest Pro, use processing to create what I will call “virtual cameras” with a synthesized view for each eye. The farther the physical cameras are from the eyes’ locations, the larger the required correction and the larger the distortion in the final result.
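A quick bit of trigonometry shows why the camera-to-eye offset matters so much more for near objects than far ones. The numbers below are my own illustration (not Apple’s or Meta’s actual camera offsets):

```python
# Rough illustration of why passthrough camera placement matters: a camera
# offset from the eye sees an object at a different angle than the eye does,
# and that angular disagreement grows quickly as the object gets closer.
import math

def parallax_error_deg(offset_cm: float, distance_cm: float) -> float:
    """Angular disagreement between eye and camera for an object straight
    ahead of the eye, with the camera displaced sideways by offset_cm."""
    return math.degrees(math.atan2(offset_cm, distance_cm))

# An assumed 4 cm camera-to-eye offset is small for a wall 3 m away but
# large for hands at 40 cm.
print(round(parallax_error_deg(4, 300), 2))  # 0.76 degrees
print(round(parallax_error_deg(4, 40), 2))   # 5.71 degrees
```

This is why close objects, such as the user’s hands, show the worst distortion after the software tries to synthesize the eye’s viewpoint.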

Meta at Siggraph 2023 presented the paper “Perspective-Correct VR Passthrough Without Reprojection” (and IEEE article) and showed their Flamera prototype with a light field camera (right). The figure below shows how the camera receives light rays from the same angle as the eye with the Light Field Passthrough Camera.

Below are a couple of still frames (with my annotations) from the related video showing how, with the Meta Quest 2, the eye and camera views differ (below left), resulting in a distorted image (below right). The distortion/error increases as the distance from the eye decreases.

It should be noted that while Flamera’s light field camera approach addresses the angular problems of the camera location, it does so with a massive loss in resolution (by at least a factor of n, where n is the number of light field subviews). So, while interesting as research and for highlighting the problem, it is still a highly impractical approach.
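The resolution cost is easy to quantify with toy numbers (mine, not Flamera’s actual sensor specs): the sensor’s pixels are shared among the subviews, so each view gets only a fraction of them.

```python
# Toy arithmetic for the resolution cost of a light field passthrough camera
# (illustrative numbers only, not Flamera's specs): one sensor shared by n
# sub-aperture views leaves each view with 1/n of the pixels.
def per_view_pixels(sensor_pixels: int, n_subviews: int) -> int:
    """Pixels available to each sub-view when one sensor feeds n views."""
    return sensor_pixels // n_subviews

# A 12-megapixel sensor behind a lenslet array forming 25 sub-views leaves
# under half a megapixel per view.
print(per_view_pixels(12_000_000, 25))  # 480000
```

And this counts only the pixel budget; real lenslet systems lose additional resolution to optical crosstalk and reconstruction, so the practical loss is even worse than the simple division suggests.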

The Importance of “Perspective Correct” PtMR

In preparing this article, I returned to a thread on Hacker News about my Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough article. In that article’s section “The method in the Madness: MQP prioritizes 3-D spatial over image quality,” I was trying to explain why Meta was distorting the image.

Poster Zee2 took exception to my article and seemed to feel I was understating the problem of 3-D perspective. I think Zee2 missed what I meant by “pyrrhic victory”: I was saying Meta was correct to address the 3-D depth issue, but that doing so with a massive loss in image quality was not the solution. I was not dismissing the importance of perspective-correct passthrough.

Below, I am copying his comment from that thread (with my bold highlighting), including a quote from my article. Interestingly, Zee2 comments on Varjo having good image quality with its passthrough, even though it is not perspective-correct.

I also really don’t know why he [referring to my article] decided to deemphasize the perspective and depth correctness so much. He mentions it here:

>[Quoting Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough] In this case, they were willing to sacrifice image quality to try to make the position of things in the real world agree with where virtual objects appear. To some degree, they have accomplished this goal. But the image quality and level of distortion, particularly of “close things,” which includes the user’s hands, is so bad that it seems like a pyrrhic victory.

I don’t think this is even close to capturing how important depth and perspective correct passthrough is.

Reprojecting the passthrough image onto a 3D representation of the world mesh to reconstruct a perspective-correct view is the difference between a novelty that quickly gives people headaches and something that people can actually wear and look through for an extended period of time.

Varjo, as a counterexample, uses incredibly high-resolution cameras for their passthrough. The image quality is excellent, text is readable, contrast is good, etc. However, they make no effort to reproject their passthrough in terms of depth reconstruction. The result is a passthrough image that is very sharp, but is instantly, painfully, nauseatingly uncomfortable when walking around or looking at closeup objects alongside a distant background.

The importance of depth-correct passthrough reprojection (essentially, spacewarp using the depth info of the scene reconstruction mesh) absolutely cannot be understated and is a make or break for general adoption of any MR device. Karl is doing the industry a disservice with this article.

From: Hacker News Meta Quest Pro – Bad AR Passthrough comment by Zee2 

Does the AVP have Cosmetic or Functional PtMR or Something Else?

With the AVP’s passthrough cameras being so poorly located (thanks to EyeSight™), severe distortion would seem inevitable to support functional PtMR. I don’t believe there is some magic (perhaps a pun on Magic Leap) that Apple could employ, and that Meta couldn’t, that would simultaneously support good image quality without serious distortion given the terrible camera placement caused by the EyeSight™ feature.

So, based on the placement of the cameras, I have low expectations for the functionality of the AVP’s PtMR. The “instant experts” who got to try out the AVP would be more impressed by a cosmetically better-looking passthrough. Since there are no reports of distortion like the MQP, I’m left to conclude that, at least for the demo, they were only doing a cosmetic passthrough.

As I often say, “Nobody will volunteer information, but everyone will correct you.” Thus, it is better to take a position based on the current evidence and then wait for a correction or confirmation from the many developers with AVPs who read this blog.

Conclusion

I’m not discounting the technical and financial power of Apple. But for the last ten years, I have been writing about the exaggerated claims for Mixed Reality products by giant companies such as Google, Meta, and Microsoft, not to mention the many smaller companies, including Magic Leap with its over $3B spent. The combined sunk cost of these companies is about $50B, not including Apple. As I’m fond of saying, “If all it took were money and smart people, it would already be solved.”

Apple doesn’t fully appreciate the difficulties with Passthrough Mixed Reality, or they wouldn’t prioritize the EyeSight gimmick over core capabilities. I’m not saying the AVP would work well for passthrough AR without EyeSight, but it is hard enough without digging big technical holes to support a novelty feature.

Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous

Introduction

In this series about the Apple Vision Pro, this sub-series on Monitor Replacement and Business/Text applications started with Part 5A, which discussed scaling, text grid fitting, and binocular overlap issues. Part 5B starts by documenting some of Apple’s claims that the AVP would be good for business and text applications. It then discusses the pincushion distortion common in VR optics and likely in the AVP and the radial effect of distortion on resolution in terms of pixels per degree (ppd).

The prior parts, 5A, and 5B, provide setup and background information for what started as a simple “Shootout” between a VR virtual monitor and physical monitors. As discussed in 5A, my office setup has a 34″ 22:9 3440×1440 main monitor with a 27″ 4K (3840×2160) monitor on the right side, which is a “modern” multiple monitor setup that costs ~$1,000. I will use these two monitors plus a 15.5″ 4K OLED Laptop display to compare to the Meta Quest Pro (MQP) since I don’t have an Apple AVP and then extrapolate the results to the AVP.

My Office Setup: 34″ 22:9 3440×1440 (left) and 27″ 4K (right)

I will be saving my overall assessment, comments, and conclusions about VR for Office Applications for Part 5D rather than somewhat burying them at the end of this article.

Office Text Applications and “Information Density” – Font Size is Important

A point to be made by using spreadsheets to generate the patterns is that if you have to make text bigger to be readable, you are lowering the information density and are less productive. Lowering the information density with bigger fonts is also true when reading documents, particularly when scanning web pages or documents for information.

Improving font readability is not solely about increasing their size. VR headsets will have imperfect optics that cause distortions, focus problems, chromatic aberrations, and loss of contrast. These issues make it harder to read fonts below a certain size. In Part 5A, I discussed how scaling/resampling and the inability to grid fit when simulating virtual monitors could cause fonts to appear blurry and scintillate/wiggle when locked in 3-D space, leading to reduced readability and distraction.

Meta Quest Pro Horizon Desktop Approach

As discussed in Part 5A, with Meta’s Horizon Desktop, each virtual monitor is reported to Windows as 1920 by 1200 pixels. When sitting at the nominal position of working at the desktop, the center virtual monitor fills about 880 vertical physical pixels of the MQP’s display. So roughly 1200 virtual pixels are resampled into 880 pixels in the center of view, a scale factor of about 73%. As discussed in Part 5B, the scaling factor is variable due to the severe pincushion distortion of the optics and the (impossible to turn off) curved-screen effect in Meta Horizons.

The picture below shows the whole FOV captured by the camera, shot through the left eye, before cropping. The camera was aligned for the best image quality in the center of the virtual monitor.

Analogous to Nyquist sampling, when you scale a pixel-rendered image, the display should have about 2X (linearly) the number of pixels of the source image to render it reasonably faithfully. Below left is a 1920 by 1200 pixel test pattern (a 1920×1080 pattern padded on the top and bottom), “native” to what the MQP reports to Windows. On the right is the picture cropped to that same center monitor.
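As a toy illustration of this Nyquist-like rule (my own sketch, not Meta’s actual resampling pipeline), the code below box-filter resamples a row of 1-pixel on/off lines. With 2X display pixels the full contrast survives, but at roughly the MQP center monitor’s 1200-to-880 ratio, the Michelson contrast collapses to under half:

```python
import numpy as np

def box_resample(src, n_out):
    """Area-average (box filter) resample of a 1-D image row."""
    n_in = len(src)
    # Cumulative integral of src (piecewise constant -> piecewise linear)
    cum = np.concatenate(([0.0], np.cumsum(src)))
    edges = np.linspace(0.0, n_in, n_out + 1)  # output pixel edges in source units
    ints = np.interp(edges, np.arange(n_in + 1), cum)
    return np.diff(ints) / np.diff(edges)      # average value per output pixel

def michelson(x):
    """Michelson contrast: (max - min) / (max + min)."""
    return (x.max() - x.min()) / (x.max() + x.min())

src = np.tile([1.0, 0.0], 600)       # 1200 source pixels of 1-px on/off lines
fine = box_resample(src, 2400)       # display has 2X the source pixels
coarse = box_resample(src, 880)      # ~0.73X, like 1200 -> 880 on the MQP

print(michelson(fine))    # full contrast preserved (1.0)
print(michelson(coarse))  # contrast collapses to under 0.5
```

Real pipelines use fancier filters than a box, but the qualitative result is the same: single-pixel strokes below the Nyquist limit turn gray and mushy.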

1920×1200 Test Pattern
Through the optics picture

The picture was taken at 405mp, then scaled down by 3X linearly and cropped. When taking high-resolution display pictures, some amount of moiré in color and intensity is inevitable. The moiré is also affected by scaling and JPEG compression.

Below is a center crop from the original test pattern that has been 2x pixel-replicated to show the detail in the pattern.

Below is a crop from the full-resolution image with reduced exposure to show sub-pixel (color element) detail. Notice how the 1-pixel wide lines are completely blurred, and the text is just becoming fully formed at about Arial 11 point (close to, but not the same scale as used in the MS Excel Calibri 11pt tests to follow). Click on the image to see the full resolution that the camera captured (3275 x 3971 pixels).

The scaling process might lose a little detail for things like pictures and videos of the real world (such as the picture of the elf in the test pattern), but it will be almost impossible for a human to notice most of the time. Pictures of the real world don’t have the level of pixel-to-pixel contrast and fine detail caused by small text and other computer-generated objects.

Meta Quest Pro Virtual Versus Physical Monitor “Shootout”

For the desktop “shootout,” I picked the 34” 22:9 and 27” 4K monitors I regularly use (side by side as shown in Part 5A), plus a Dell 15.5” 4K laptop display. An Excel spreadsheet is used on the various displays to demonstrate the amount of content that can be seen at one time on a screen. The spreadsheet allows for flexibly changing how the screen is scaled for various resolutions and text sizes, and the number of visible cells measures the information density. For repeatability, a screen capture of each spreadsheet was taken and then played back in full-screen mode (Appendix 1 includes the source test patterns).

The Shootout

The pictures below show the relative FOVs of the MQP and various physical monitors taken with the same camera and lens. The camera was approximately 0.5 meters from the center of the physical monitors, and the headset was at the initial position at the MQP’s Horizon Desktop. All the pictures were cropped to the size of a single physical or virtual monitor.

The following is the basic data:

  • Meta Quest Pro – Central Monitor (only) ~43.5° horizontal FOV. Used an 11pt font with Windows Display Text Scaling at 150% (100% and 175% also taken and included later)
  • 34″ 22:9 3440×1440 LCD – 75° FOV and 45ppd from 0.5m. 11pt font with 100% scaling
  • 27″ 4K (3840 x 2160) LCD – 56° FOV and 62ppd from 0.5m. 11pt font with 150% scaling (results in text the same size as on the 34″ 3440×1440 at 100% – 2160/1440 = 150%)
  • 15.5″ 4K OLED – 32° FOV from 0.5m. Shown below is 11pt with 200% scaling, which is what I use on the laptop (a later image shows 250% scaling, which is what Windows “recommends” and would result in approximately the same size fonts as on the 34″ 22:9 at 100%).
Composite image showing the relative FOV – Click to see in higher resolution (9016×5641 pixels)

The pictures below show the MQP with MS Windows display text scaling set to 100% (below left) and 175% (below middle). The 175% scaling would result in fonts with about the same number of pixels per font as the Apple Vision Pro (but with a larger angular resolution). Also included below (right) is the 15.5″ 4K display with 250% scaling (as recommended by Windows).

MQP -11pt scaled=100%
MQP – 11pt scaled=175%
15.5″ – 11pt scale=250%

The camera was aimed and focused at the center of the MQP, the best case for it, as the optical quality falls off radially (discussed in Part 5B). The text sharpness is the same for the physical monitors from center to outside, but they have some brightness variation due to their edge illumination.

Closeup Look at the Displays

Each picture above was initially taken at 24,576 x 16,384 pixels (405mp) by “pixel shifting” the 45MP R5 camera sensor, to support capturing the whole FOV while capturing better than pixel-level detail from the various displays. In all the pictures above, including the composite image with multiple monitors, each image was reduced linearly by 3X.

The crops below show the full resolution (3x linearly the images above) of the center of the various monitors. As the camera, lens, and scaling are identical, the relative sizes are what you would see looking through the headset for the MQP sitting at the desktop and the physical monitors at about 0.5 meters. I have also included a 2X magnification of the MQP’s images.

With Windows 100% text scaling, the 11pt font on the MQP is about the same size as it is on the 34” 22:9 monitor at 100%, the 27” 4K monitor at 150% scaling, and the 15.5” 4K monitor at 250% scaling. But while the fonts are readable on the physical monitors, they are a blurry mess on the MQP at 100%. The MQP at 150% and 175% is “readable” but certainly does not look as sharp as the physical monitors.

Extrapolating to Apple Vision Pro

Apple’s AVP has about 175% of the linear pixel density of the MQP. Thus, the 175% case gives a reasonable idea of how text should look on the AVP. For comparison below, the MQP’s 175% case has been scaled to match the size of the 34” 22:9 and 27” 4K monitors at 100%. While the text is “readable” and about the same size, it is much softer/blurrier than on the physical monitor. Some of this softness is due to optics, but a large part is due to scaling. While the AVP may have better optics and a better text rendering pipeline, it still doesn’t have the resolution to compete on content density and readability with a relatively inexpensive physical monitor.
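The 175% figure follows from the reported pixel counts: tripling the total pixels over a similar FOV multiplies the linear density by the square root of three. A quick sanity check:

```python
import math

# If the AVP has ~3x the total pixels of the MQP over a similar FOV,
# linear pixel density scales by sqrt(3) ~= 1.73, i.e. about 175%.
mqp_to_avp_linear = math.sqrt(3)
print(round(mqp_to_avp_linear * 100))  # ~173 (rounded to "about 175%")
```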

Reportedly, the Apple Vision Pro Directly Renders Fonts

Thomas Kumlehn had an interesting comment on Part 5B (with my bold highlighting) that I would like to address:

After the VisionPro keynote in a Developer talk at WWDC, Apple mentioned that they rewrote the entire render stack, including the way text is rendered. Please do not extrapolate from the text rendering of the MQP, as Meta has the tech to do foveated rendering but decided to not ship it because it reduced FPS.

From Part 5A, “Rendering a Pixel Size Dot.”

Based on my understanding, the AVP will “render from scratch” instead of rendering an intermediate image that is then rescaled as is done with the MQP discussed in Part 5A. While rendering from scratch has a theoretical advantage regarding text image quality, it may not make a big difference in practice. With an ~40 pixels per degree (ppd) display, the strokes and dots of what should be readable small text will be on the order of 1 pixel wide. The AVP will still have to deal with approximately pixel-width objects straddling four or more pixels, as discussed in Part 5A: Simplified Scaling Example – Rendering a Pixel Size Dot.
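A minimal sketch of the problem (a hypothetical bilinear “splat,” not Apple’s actual renderer): a unit-brightness, pixel-sized dot rendered at a fractional position spreads over up to four pixels, and in the worst case its peak brightness drops to a quarter:

```python
import numpy as np

def splat_dot(x, y, size=4):
    """Render a unit-intensity, pixel-sized dot at (x, y) with bilinear weights."""
    img = np.zeros((size, size))
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy           # fractional position within the pixel grid
    img[iy, ix]         += (1 - fx) * (1 - fy)
    img[iy, ix + 1]     += fx * (1 - fy)
    img[iy + 1, ix]     += (1 - fx) * fy
    img[iy + 1, ix + 1] += fx * fy
    return img

aligned = splat_dot(1.0, 1.0)  # dot centered exactly on a pixel
worst   = splat_dot(1.5, 1.5)  # dot centered between four pixels
print(aligned.max(), worst.max())  # 1.0 vs 0.25
```

However text is rendered, strokes and dots near one display pixel wide cannot land on pixel centers when the virtual monitor is locked in 3-D space, so some blurring is unavoidable at ~40ppd.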

Some More Evaluation of MQP’s Pancake Optics Using immersed Virtual Monitor

I wanted to evaluate the MQP pancake optics more than I did in Part 5B, and Meta’s Horizon Desktop interface was very limiting. So I decided to try out the immersed Virtual Desktop software. immersed has much more flexibility in resolution, size, and placement, plus the ability to select flat or curved monitors. Importantly for my testing, I could create a large, flat virtual 4K monitor that could fill the entire FOV with a single test pattern (the pattern is included in Appendix 1).

Unfortunately, while the immersed software had the basic features I wanted, I found it difficult to precisely control the size and positioning of the virtual monitor (more on this later). Due to these difficulties, I just tried to fill the display with the test pattern, with the monitor only roughly perpendicular to the headset/camera. It was a painfully time-consuming process, and I never could get the monitor to where it seemed perfectly perpendicular.

Below is a picture of the whole (camera) FOV taken at 405mp and then scaled down to 45mp. The image is a bit underexposed to show the sub-pixel (color) detail when viewed at full resolution. In taking the picture, I determined that the focus of the MQP’s pancake optics appears to be “dished,” with the focus in the center slightly different than on the outsides. The picture was taken with the focus set between the center and outside focus, using f/11 to increase the photograph’s depth of focus. For a person using the headset, this dishing of the focus is likely not a problem, as their eye will refocus based on their center of vision.

As discussed in Part 5B, the MQP’s pancake optics have severe pincushion distortion, requiring significant digital pre-correction to make the net result flat/rectilinear. Most notably, the outside areas of the display have about 1/3rd the linear pixel per degree of the center.

Next are shown nine crops from the full-resolution (click to see) picture: at the center, the four corners, and the top, bottom, left, and right of the camera’s FOV.

The main things I learned from this exercise were the apparent dish in the focus of the optics and the falloff in brightness. I had already determined the change in resolution in the studies shown in Part 5B.

Some feedback on immersed (and all other VR/AR/MR) virtual monitor placement control.

While immersed had the features I wanted, it was difficult to control the setup of the monitors. The software feels very “beta,” and the interface I got differed from most of the help documentation and videos, suggesting it is a work in progress. In particular, I couldn’t figure out how to pin the screen, as the control for pinning shown in the help guides/videos didn’t seem to exist in my version. So I had to start from scratch in each session and often within a session.

Trying to orient and resize the screen with controllers or hand gestures was needlessly difficult. I would highly suggest immersed look at how 3-D CAD software controls the view of 3-D models. For example, it would be great to have a single (virtual) button that would position the center monitor directly in front of and perpendicular to the user. It would also be a good idea to allow separate controls for tilt, virtual distance, and zoom/resize while keeping the monitor centered.

The software seemed to be “aware” of things in the room, which only served to fight what I wanted to do. I was left contorting my wrist to try to get the monitor roughly perpendicular and then playing with the corners to try to both resize and center the monitor. The interface also appears to conflate “resizing” with moving the monitor closer. While moving the virtual monitor closer and resizing both affect the size of everything, the effects differ when the head moves. I would have a home (perpendicular and centered) “button,” and then separate left-right, up-down, tilt, distance, and size controls.

To be fair, I only wanted to set up the screen for a few pictures, and I may have overlooked something. Still, I found the user interface could be vastly better for setting up the monitors, and the controller or gesture control of monitor size and positioning was a big fail in my use.

BTW, I don’t want to just pick on immersed for this “all-in-one” control problem. On every VR and AR/MR headset I have tried that supports virtual monitors, I have found it a pain to get good, simple, intuitive controls for placing the monitors in 3-D space. Meta Horizon Desktop goes to the extreme of giving no control and overly curved screens.

Other Considerations and Conclusions in Part 5D

This series-within-a-series on the VR and the AVP use as an “office monitor replacement” has become rather long with many pictures and examples. I plan to wrap up this series within the series on the AVP with a separate article on issues to consider and my conclusions.

Appendix 1: Test Patterns

Below is a gallery of PNG file test patterns used in this article. Click on each thumbnail to see the full-resolution test pattern.

22:9 3440×1440 100% 11pt
MQP 1920×1200 100% 11pt
MQP 1920×1200 150% 11pt
MQP 1920×1200 175% 11pt
4K 150% 11pt
4K 200% 11pt
4K 250% 11pt
MQP 1920×1200 “Tuff Test” on Black
MQP 3840×2160 “immersed” lens test

Appendix 2: Some More Background Information

More Comments on Font Sizes with Windows

As discussed in Appendix 3: Confabulating typeface “points” (pt) with Pixels – A Brief History, a font “point” is defined as 1/72nd of an inch (some use 1/72.272 or thereabouts – it is a complicated history). Microsoft treats 96 dots per inch (dpi) as 100% scaling. But it is not that simple.

I wanted to share measurements regarding the Calibri 11pt font size. After measuring it on my monitor with a resolution of 110 pixels per inch (PPI), I found that it translates to approximately 8.44pt (8.44/72 inches). However, when factoring in the monitor PPI of 110 and Windows DPI of 96, the font size increases to ~9.67pt. Alternatively, when using a monitor PPI of 72, the font size increases to ~12.89pt. Interestingly, if printed assuming a resolution of 96ppi, the font reaches the standard 11pt size. It seems Windows applies some additional scaling on the screen. Nevertheless, I regularly use the 11pt 100% font size on my 110ppi monitor, which is the Windows default in Excel and Word, and it is also the basis for the test patterns.
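The nominal conversion chain (ignoring whatever extra scaling Windows applies, which the measurements above hint at) can be sketched as:

```python
def pt_to_px(points, dpi=96):
    """Convert typographic points (1/72 inch) to pixels at a given logical DPI."""
    return points / 72 * dpi

def apparent_pt(pixels, monitor_ppi):
    """Physical size, in points, of a pixel count on a monitor of a given PPI."""
    return pixels / monitor_ppi * 72

px = pt_to_px(11)            # 11pt at Windows' default 96 dpi -> ~14.67 pixels
print(px)
print(apparent_pt(px, 110))  # ~9.6pt physical on a 110 PPI monitor
```

The nominal ~9.6pt is close to, but not exactly, the ~9.67pt figure above, consistent with Windows applying some additional scaling on screen.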

How pictures were shot and moiré

As discussed in 5A’s Appendix 2: Notes on Pictures, some moiré issues will be unavoidable when taking high-resolution pictures of a display device. As noted in that Appendix, all pictures in Lens Shootout were taken with the same camera and lens, and the original images were captured at 405 megapixels (Canon R5 “IBIS sensor shift” mode) and then scaled down by 3X. All test patterns used in this article are included in the Appendix below.

Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous.

Introduction – Now Three Parts 5A-C

I want to address feedback in the comments and on LinkedIn from Part 5A about whether Apple claimed the Apple Vision Pro (AVP) was supposed to be a monitor replacement for office/text applications. Another theory/comment from more than one person is that Apple is hiding the good “spatial computing” concepts so they will have a jump on their competitors. I don’t know whether Apple might be hiding “the good stuff,” but it would seem better for Apple to establish the credibility of the concept. Apple is, after all, a dominant high-tech company and could stomp any competitor.

Studying the MQP’s images in more detail, I found it was too simplistic to use the average pixels per degree (ppd), given by dividing the resolution by the FOV, for the MQP (and likely the AVP).

As per last time, since I don’t have an AVP, I’m using the Meta Quest Pro (MQP) and extrapolating the results to the AVP’s resolution. I will show a “shootout” comparing the text quality of the MQP to existing computer monitors. I will then wrap up with miscellaneous comments and my conclusions.

I have also included some discussion of Gaze-Contingent Ocular Parallax (GCOP), based on work by the Stanford Computational Imaging Lab (SCIL), that a reader of this blog asked about. The videos and papers suggest that some amount of depth perception is conveyed to a person by the movement of each eye, in addition to vergence (binocular disparity) and accommodation (focus distance).

I’m pushing out a set of VR versus Physical Monitor “Shootout” pictures and some overall conclusions to Part 5C to discuss the above.

Yes, Apple Claimed the AVP is a Monitor Replacement and Good for High-Resolution Text

Apple Vision Pro Concept

In Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, I tried to lay a lot of groundwork for why The Apple Vision Pro (AVP), and VR headsets in general, will not be a good replacement for a monitor. I thought it was obvious, but apparently not, based on some feedback I got.

So, to be specific, below are direct quotes from Apple’s WWDC 2023 presentation (YouTube transcript) with timestamps, my bold emphasis added, and my in-line comments about resolution:

1:22:33 Vision Pro is a new kind of computer that augments reality by seamlessly blending the real world with the digital world.

1:31:42 Use the virtual keyboard or Dictation to type. With Vision Pro, you have the room to do it all. Vision Pro also works seamlessly with familiar Bluetooth accessories, like Magic Trackpad and Magic Keyboard, which are great when you’re writing a long email or working on a spreadsheet in Numbers.

Seamless makes many lists of the most overused high-tech marketing words. Marketers seem to love it because it is imprecise, suggests things work well, and is unfalsifiable (how do you measure “seamless?”). Seamlessly was used eight times in the WWDC23 presentation to describe the AVP, and Meta used it twice to describe the Meta Quest Pro (MQP) at Meta Connect 2022. From Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, Meta also used “seamless” to describe the MQP’s MR passthrough:

Apple claims the AVP is good for text-intensive “writing a long email or working on a spreadsheet in numbers.”

1:32:10 Place your Mac screen wherever you want and expand it–giving you an enormous, private, and portable 4K display. Vision Pro is engineered to let you use your Mac seamlessly within your ideal workspace. So you can dial in the White Sands Environment, and use other apps in Vision Pro side by side with your Mac. This powerful Environment and capabilities makes Apple Vision Pro perfect for the office, or for when you’re working remote.

Besides the fact that it is not 4K wide, the AVP stretches those pixels over about 80 degrees, so there are only about 40 pixels per degree (ppd), much lower than is typical for a TV or movie theater. There are also the issues discussed in Part 5A: if the display is to be stationary in 3-D, the virtual monitor must be inscribed in the viewable area of the physical display with some margin for head movement, and the content must be resampled, causing a loss of resolution. Movies are typically in a wide format, whereas the AVP’s FOV is closer to square. And as discussed in Apple Vision Pro (Part 3) – Why It May Be Lousy for Watching Movies On a Plane, movies are designed for about a 45-degree viewing angle, while the AVP’s horizontal FOV is about 80°.

Here, Apple claims that the Apple Vision Pro is “perfect for the office, or for when you’re working remote.”

1:48:06 And of course, technological breakthroughs in displays. Your eyes see the world with incredible resolution and color fidelity. To give your eyes what they need, we had to invent a display system with a huge number of pixels, but in a small form factor. A display where the pixels would disappear, creating a smooth, continuous image.

The AVP’s expected average of 40ppd is well below the angular resolution at which “the pixels would disappear.” It is below Apple’s “retinal resolution.” If the AVP has a radial distortion profile similar to the MQP’s (discussed in the next section), then the center of the image will have about 60ppd, or almost “retinal” resolution. But most of the image will have jaggies that a typical eye can see, particularly when they move/ripple, causing scintillation (discussed in Part 5A).

1:48:56 We designed a custom three-element lens with incredible sharpness and clarity. The result is a display that’s everywhere you look, delivering jaw-dropping experiences that are simply not possible with any other device. It enables video to be rendered at true 4K resolution, with wide color and high dynamic range, all at massive scale. And fine text looks super sharp from any angle. This is critical for browsing the web, reading messages, and writing emails.

WWDC 2023 video at 1:56:08 with Excel shown

As stated above, the video will not be at a “true 4K resolution.” Here is the claim that “fine text looks super sharp from any angle,” which is impossible with text resampled onto a 40ppd display.

1:56:08 Microsoft apps like Excel, Word, and Teams make full use of the expansive canvas and sharp text rendering of Vision Pro.

Here again, is the claim that there will be “sharp text” in text-intensive applications like Excel and Word.

I’m not sure how much clearer it can be that Apple was claiming that the AVP would be a reasonable monitor replacement, used even when a laptop display is present. Also, they were very clear that the AVP would be good for heavily text-based applications.

Meta Quest Pro (and likely AVP) Pincushion Distortion and its Effect on Pixels Per Degree (ppd)

While I was aware, as discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, that the MQP, like almost all VR optics, had significant pincushion distortion, I didn’t quantify the amount of distortion and its effect on the angular resolution, aka ppd. Below is the video capture from the MQP developer’s app on the left, and the resultant image as seen through the optics (middle).

Particularly note above how small the white wall to the left of the left bookcase is relative to its size after the optics; it looks more than 3X wider.

For a good (but old) video explaining how VR headsets map source pixels into the optics (among other concepts), I recommend watching How Barrel Distortion Works on the Oculus Rift. The image on the right shows how equal size rings in the display are mapped into ever-increasing width rings after the optics with a severe pincushion distortion.

Mapping Pixels Per Degree (ppd)

I started with a 405mp camera picture through the MQP optics (right – scaled down 3x linearly), where I could see most of the FOV and zoom in to see individual pixels. I then picked a series of regions in the image to evaluate. Since the pixels in the display device are of uniform size, any size change in their size/spacing must be due to the optics.

The RF16f2.8 camera lens has a known optical barrel distortion that was digitally corrected by the camera, so the camera pixels are roughly linear. The camera and lens combination has a horizontal FOV of 98 degrees and 24,576 pixels or ~250.8ppd.
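The ppd figures used here and below are simple division of pixels by degrees; as a quick sanity check (the 90° MQP figure is the article’s rough per-eye number):

```python
def ppd(pixels, fov_deg):
    """Average pixels per degree across a field of view."""
    return pixels / fov_deg

print(round(ppd(24576, 98), 1))  # camera: ~250.8 ppd
print(round(ppd(1920, 90), 1))   # MQP display, roughly: ~21.3 ppd average
```

Note that with strong pincushion distortion, the average understates the center ppd and overstates the edges, which is the point of the region-by-region measurement below.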

The MQP display processing pre-compensates for the optics plus adds a cylindrical curvature effect to the virtual monitors. These corrections change the shape of objects in the image but not the physical pixels.

The cropped sections below demonstrate the process. For each region, 8 by 8 pixels were marked with a grid. The horizontal and vertical width of the 8 pixels was counted in terms of the camera pixels. The MQP display is rotated by about 20 degrees to clear the nose of the user, so the rectangular grids are rotated. In addition to the optical distortion in size, chromatic aberrations (color separation) and focus worsen with increasing radii.

The image below shows the ppd at a few selected radii. Unlike the Oculus Rift video that showed equal rings, the stepping between these rings below is unequal. The radii are given in terms of angular distance from the optical center.

The plots below show the ppd versus radius for the MQP (left); interestingly, the relationship turns out to be close to linear. The right-hand plot assumes the AVP has a similar distortion profile and FOV, but three times the pixels, as reported. It should be noted that ppd is not the only factor affecting resolution; other factors include focus, chromatic aberrations, and contrast, which worsen with increasing radii.

The display in the MQP is 1920×1800 pixels, and the FOV is about 90° per eye diagonally across a roughly circular image, which works out to about 22 to 22.5 ppd on average. The optical center has about 1/3rd higher ppd due to the pincushion distortion of the optics. For the MQP Horizon Desktop application shown, the center monitor is mostly within the 25° circle, where the ppd is at or above average.

Gaze-Contingent Ocular Parallax

While a bit orthogonal to the discussion of ppd and resolution, Gaze-Contingent Ocular Parallax (GCOP) is another issue that may cause problems. A reader and VR user who claims to have noticed GCOP brought to my attention the work of the Stanford Computational Imaging Lab (SCIL) on GCOP. SCIL has put out multiple videos and articles, including Eye Tracking Revisited by Gordon Wetzstein and Gaze-Contingent Ocular Parallax Rendering for Virtual Reality (associated paper link). I’m a big fan of Wetzstein’s general presentations; per his usual standard, his video explains the concept and related issues well.

The basic concept is that because the center of projection (where the image lands on the retina) and the center of rotation of the eye are different, the human visual system can detect some amount of 3-D depth with each eye. A parallax and occlusion difference occurs when the eye moves (stills from some video sequences below). Since the eyes constantly move and fixate (saccades), depth can be detected.
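A rough back-of-the-envelope sketch of the effect (the ~6 mm offset between the eye’s nodal point and its center of rotation is an assumed typical value, and the numbers are illustrative, not taken from the SCIL paper):

```python
import math

OFFSET_M = 0.006  # assumed nodal-point-to-rotation-center distance (~6 mm)

def gcop_parallax_deg(rotation_deg, near_m, far_m, offset_m=OFFSET_M):
    """Approximate relative angular shift (degrees) between a near and a far
    object when the eye rotates, caused by the nodal point translating."""
    t = offset_m * math.sin(math.radians(rotation_deg))  # nodal-point translation
    # Small-angle parallax between objects at depths near_m and far_m
    return math.degrees(t * (1 / near_m - 1 / far_m))

# Eye rotates 20 degrees; objects at 0.5 m and 5 m: a fraction of a degree
print(gcop_parallax_deg(20, 0.5, 5.0))
```

Even a fraction of a degree is well above a ~40ppd display’s pixel pitch, which is why rendering that ignores GCOP can register as subtly “off.”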

GCOP may not be as big a factor as vergence and accommodation. I put it in the category of one of the many things that can cause people to perceive that they are not looking at the real world and may cause problems.

Conclusion

The marketing spin (I think I have heard this before) on VR optics is that they have “fixed foveated optics,” in that there is higher resolution in the center of the display. There is some truth that severe pincushion optical distortion improves the pixel density in the center, but it makes a mess of the rest of the display.

While the MQP’s optics have a bigger sweet spot, and the optical quality falls off less rapidly than with the Quest 2’s Fresnel optics, they are still very poor by camera standards (the optical diagram for the 9-element RF16f2.8 lens, a very simple camera lens used to take the main picture, is on the right). VR optics must compromise due to space, cost, and, perhaps most importantly, supporting a very wide FOV.

With a monitor, there is only air between the eye and the display device, with no loss of image quality, and there is no need to resample the monitor’s image when the user’s head moves, as there is with a VR virtual monitor.

As the MQP’s pancake optics, and most, if not all, other VR optics, have major pincushion distortion, I fully expect the AVP will also. Regardless of the ppd, the MQP virtual monitor’s far left and right sides become difficult to read due to other optical problems. The image quality can be no better than its weakest link. If the AVP has 3X the pixels and roughly 1.75X the linear ppd, its optics must be much better than the MQP’s to deliver the same small readable text that a physical monitor can deliver.
