
Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall

30 April 2024 at 14:35

Introduction

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded over four hours of video discussing the 50 companies I met at CES and AR/VR/MR. The last thing we discussed for about 50 minutes was the Apple Vision Pro (AVP).

The AVP video amounts to a recap of the many articles I have written on the AVP. Where appropriate, I will give links to my more detailed coverage in prior articles and updates rather than rehash that information in this article.

It should be noted that Jason and I recorded the video on March 25th, 2024. Since then, there have been many articles from tech magazines saying the AVP sales are lagging, often citing Bloomberg’s Mark Gurman’s “Demand for demos is down” and analyst Ming-Chi Kuo reporting, “Apple has cut its 2024 Vision Pro shipments to 400–450k units (vs. market consensus of 700–800k units or more).” While many reviewers cite the price of the AVP, I have contended that price was not the problem, as it was in line with that of a new high-tech device (adjusted for inflation, it is about the same price as the first Apple II). My criticism focuses on its utility and human factors. In high-tech, cost is usually a fixable problem with time and effort, and people will pay more if something is of great utility.

I said the Apple Vision Pro would have utility problems before it was announced. See my 2023 AWE Presentation “Optical Versus Passthrough Mixed Reality” and my articles on the AVP. I’m not out to bash a product or concept; when I find faults, I point them out and show my homework, so to speak, on this blog and in my presentations.

Before the main article, I want to repeat the announcement that I plan to go to DisplayWeek in May and AWE in June. I have also included a short section on YouTube personality/influencer Marques Brownlee’s Waveform Podcast and Hugo Barra’s (former Head of Oculus at Meta) blog article discussing my controversial (but correct) assessment that the Apple Vision Pro’s optics are slightly out of focus/blurry.

DisplayWeek and AWE

I will be at SID DisplayWeek in May and AWE in June. If you want to meet with me at either event, please email meet@kgontech.com. I usually spend most of my time on the exhibition floor where I can see the technology.

AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara, and it is about one month later than last year. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

At AWE, I will be on the PANEL: Current State and Future Direction of AR Glasses on Wednesday, June 19th, from 11:30 AM to 12:25 PM with the following panelists:

  • Jason McDowall – The AR Show (Moderator)
  • Jeri Ellsworth – Tilt Five
  • Adi Robertson – The Verge
  • Edward Tang – Avegant
  • Karl M Guttag – KGOnTech

There is an AWE speaker discount code – SPKR24D – which provides a 20% discount and can be combined with Early Bird pricing (which ends May 9th, 2024). You can register for AWE here.

“Controversy” of the AVP Being a Little Blurry Discussed on Marques Brownlee’s Podcast and Hugo Barra’s Blog

As discussed in Apple Vision Pro – Influencing the Influencers & “Information Density,” which included this blog being cited on Linus Tech Tips, this blog is read by other influencers, media, analysts, and key people at AR/VR/MR tech companies.

Marques Brownlee (MKBHD), another major YouTube personality, discussed my March 1st article, Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, on his Waveform Podcast/WVFRM YouTube channel (link to the YouTube discussion). Marques also discussed Hugo Barra’s (former Head of Oculus at Meta) March 11, 2024, “Hot Take” blog article (about 1/3rd of the way down), which covered my blog article.

According to MKBHD and Hugo Barra, my comments about the Vision Pro are controversial, but they agreed that my findings make sense based on my evidence and their own experience. My discussion with Jason was recorded before the Waveform Podcast came out. I’m happy to defend and debate this issue.

Outline of the Video and Additional Information

The Video: The times in blue on the left of each subsection link to the YouTube video section discussing that subject.

00:16 Ergonomics and Human Factors

I wrote about the issues with the AVP’s human factors design in Apple Vision Pro (Part 2) – Hardware Issues Mechanical Ergonomics. In a later article in CES Part 2, I compared the AVP to the new Sony XR headset in the Sony XR (and others compared to Apple Vision Pro) section.

08:23 Lynx and Hypervision

The article comparing the new Sony XR headset to the AVP also mentioned the Lynx R1, first shown in 2021. But I didn’t realize how much the two headsets were alike until I saw a post somewhere (I couldn’t find it again) by Lynx’s CEO, Stan Larroque, pointing out the resemblance. It could be a matter of form following function, but how much they look alike from just about any angle is rather striking.

While on the subject of Lynx and Apple: Lynx used optics by Limbak for the Lynx R1. As I broke in December 2022 in Limbak Bought by “Large US Company” (soon revealed to be Apple) and discussed in more detail in a 2022 video with Brad Lynch, I don’t like the R1’s Limbak “catadioptric” (combined mirror and refractive) optics. While the R1 optics are relatively thin, like pancake optics, they cause a significant loss of resolution due to their severe distortion, and worse, they have an optical discontinuity in the center of the image unless the eye is perfectly aligned.

In May 2023, Lynx and Hypervision announced that they were working together. In Apple Vision Pro (Part 4)—Hypervision Pancake Optics Analysis, Hypervision detailed the optics of the Apple Vision Pro. That article also discusses the Hypervision pancake optics it was showing at AR/VR/MR 2023. Hypervision demonstrated single pancake optics with a 140-degree FOV (the AVP is about 90 degrees) and blended dual pancake optics with a 240-degree FOV (see below right).

10:59 Big Screen Beyond Compared to AVP Comfort Issues

When I was at the LA SID One Day conference, I stopped by Big Screen Beyond to try out their headset. I wore Big Screen’s headset for over 2 hours and didn’t have any of the discomfort issues I had with the AVP. With the AVP, my eyes start bothering me after about half an hour and are pretty sore by one hour. There are likely two major factors: one is that the AVP applies pressure to the forehead, and the other is that something is not working right optically with the AVP.

Big Screen Beyond has a silicone gel-like custom interface that is 3-D printed based on a smartphone face scan. Like the AVP, they have magnetic prescription inserts. While the Big Screen Beyond was much more comfortable, the face interface has a large contact area with the face. While not that uncomfortable, I would like something that breathed more. When you remove the headset, you can feel the perspiration evaporating from where the interface was contacting your face. I can’t imagine anyone wearing makeup being happy (the same with the AVP or any headset that presses against the face).

On a side note, I was impressed by Big Screen Beyond’s statement that it is cash flow positive. It is a sign that they are not wildly spending money on frills and that they understand the market they are serving. They are focused on serving dedicated VR gamers who want to connect the headset to a powerful computer.

Related to the Big Screen Beyond interface, a tip I picked up on Reddit is that you can use a silicone face pad made for the Meta Quest 2 or 3 on the AVP’s face interface (see above right). The silicone face pad gives some grip to the face interface and reduces the pressure required to hold the AVP steady. The pad adds about 1mm, but it so happens that I had recently swapped my original AVP face interface for one that is 5mm shorter. Now, I barely need to tighten the headband. A downside to the silicone pad, like the Big Screen Beyond, is that it more or less forms a seal with your face, and you can feel the perspiration evaporating when you remove it.

13:16 Some Basic AVP Information

In the video, I provide some random information about the AVP. I wanted to go into detail here about the often misquoted brightness of the AVP.

I started by saying that I have read or watched many people state that the AVP is much brighter than the Meta Quest 3 (MQ3) or Meta Quest Pro (MQP). They are giving ridiculously high brightness/nits values for the AVP. As I reported in my March 7th, 2024, comments in the article Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, the AVP outputs to the eye about 100 nits and is only about 5-10% brighter than the MQ3 and ~20% less than the MQP.

Misinformation on AVP brightness via a Google Search

I will explain how this came about in the Appendix at the end. And to this day, if you do a Google search (captured below), it will prominently state that the AVP has a “50-fold improvement over the Meta’s Quest 2, which hits just 100 nits,” citing MIT Technology Review.

Nits are tricky to measure in a headset without the right equipment, and even then, they vary considerably from the center (usually the highest) to the periphery.

The 5,000 nits cited by MIT Tech Review are for the raw displays before the optics, whereas the nits for the MQ2 were those going to the eye. The AVP’s (and all other) pancake optics transmit about 11% (or less) of the light from an OLED in the center. With pancake optics, there is the polarization of the OLED (>50% loss), plus a transmissive pass and a reflective pass through a 50/50 mirror, which starts you at no more than 12.5% (50% cubed) before considering all the other losses from the optics. Then, there is the on-time duty cycle of the AVP, which I have measured to be about 18.4%. VR devices want the on-time duty cycle to be low to reduce motion blur with rapid head motion and 3-D games. The MQ3 only has a 10.3% on-time duty cycle (shorter duty cycles are easier with LED-illuminated LCDs). So, while the AVP display devices likely can emit about 5,000 nits, the nits reaching the eye are approximately 5,000 nits x 11% x 18.4% = 100 nits.
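To make that arithmetic explicit, here is a minimal sketch of the estimate (the transmission and duty-cycle figures are the approximate values discussed above, not official specifications):

```python
# Rough estimate of brightness to the eye from a headset's raw panel nits.
# The numbers are the approximate values cited above, not official specs.

def nits_to_eye(panel_nits, optics_transmission, duty_cycle):
    """Panel luminance attenuated by the optics and the display on-time."""
    return panel_nits * optics_transmission * duty_cycle

# Apple Vision Pro: ~5,000-nit micro-OLED, ~11% pancake-optics transmission,
# and an ~18.4% measured on-time duty cycle.
avp_to_eye = nits_to_eye(5000, 0.11, 0.184)
print(f"AVP estimate: ~{avp_to_eye:.0f} nits to the eye")  # ~101 nits
```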

18:59 Computer Monitor Replacement is Ridiculous

I wrote a three-part series on why I think monitor replacement by the Apple Vision Pro is ridiculous. Please see Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Part 5B, and Part 5C. There are multiple fundamental problems that neither Apple nor anyone else is close to solving. The slide on the right summarizes some of the big issues.

Nyquist Sampling – Resampling Causes Blurring & Artifacts

I tried to explain the problem in two ways, one based on the frequency domain and the other on the spatial (pixel) domain.

19:29 Frequency Domain Discussion

Anyone familiar with signal processing may remember that a square wave has infinite odd harmonics. Images can be treated like 2-dimensional signals. A series of equally spaced, equal-width horizontal lines looks like a square wave in the vertical dimension. Thus, to represent them perfectly with a 3-D transform requires infinite resolution. Since the resolution of the AVP (or any VR headset) is limited, there will be artifacts such as blurring, wiggling, and scintillation.

As I pointed out in (Part 5A), computers tend to “cheat” and distort text and graphics to fit on the pixel grid and thus sidestep the Nyquist sampling problem that any VR headset must face when trying to make a 2-D image appear still in 3-D space. Those who know signal processing know that the Nyquist rate is 2x the highest frequency component. However, as noted above, horizontal lines have infinite frequency. Hence, some degradation is inevitable, but then we only have to beat the resolution limit of the eye, which, in effect, acts as a low-pass filter. Unfortunately, the AVP’s display is about 2-3x too low linearly (4-9x in two dimensions) in resolution for the artifacts not to be seen by a person with good vision.
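For readers who want to see the harmonics point concretely, here is a small numpy sketch (the line pitch and sample count are arbitrary illustrations, not AVP parameters). It shows that an equal-width line pattern carries energy at every odd harmonic, falling off only slowly, so any finite pixel grid must cut some of it off:

```python
import numpy as np

# A pattern of equally spaced, equal-width lines is a square wave in one dimension.
N = 4096                     # samples along one axis (arbitrary)
period = 64                  # line pitch in samples (arbitrary)
x = np.arange(N)
square = ((x % period) < (period // 2)).astype(float)

spectrum = np.abs(np.fft.rfft(square))
fundamental = N // period    # FFT bin of the line pitch

# Energy sits at the fundamental and its odd harmonics (3x, 5x, 7x, ...),
# falling off only as ~1/k, which is why a finite display must clip them.
for k in (1, 3, 5, 7, 9):
    rel = spectrum[k * fundamental] / spectrum[fundamental]
    print(f"harmonic {k}: relative amplitude {rel:.3f}")
```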

22:15 Spatial Domain Discussion

To avoid relying on signal processing theory, in (Part 5A), I gave the example of how a single display pixel can be translated into 3-D space (right). The problem is that a pixel the size of a physical pixel in the headset will always cover parts of four physical pixels. Worse yet, with the slightest movement of a person’s head, how much of each pixel and even which pixels will be constantly changing, causing temporal artifacts such as wiggling and scintillation. The only way to reduce the temporal artifacts is to soften (low pass filter) the image in the resampling process.
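Here is a toy sketch of that effect (illustrative only, not the AVP’s actual resampling): the bilinear weights show how one virtual pixel’s light spreads over a 2x2 block of physical pixels and keeps shifting with sub-pixel head motion:

```python
def bilinear_weights(dx, dy):
    """Fraction of one source pixel landing on each of the 2x2 physical
    display pixels it straddles, given a sub-pixel offset (dx, dy)."""
    return [[(1 - dx) * (1 - dy), dx * (1 - dy)],
            [(1 - dx) * dy,       dx * dy]]

# A slight head movement changes the sub-pixel offset, so the same virtual
# pixel keeps redistributing its light among different physical pixels,
# which shows up as wiggling and scintillation unless the image is softened.
for dx, dy in [(0.0, 0.0), (0.3, 0.1), (0.5, 0.5), (0.7, 0.9)]:
    print((dx, dy), bilinear_weights(dx, dy))
```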

23:19 Optics Distortion

In addition to the issues with representing a 2-D image in 3-D space, the AVP’s optics are highly distorting, as discussed in Apple Vision Pro’s (AVP) Image Quality Issues—First Impressions. The optical distortions can be “digitally corrected” but face the same resample issues discussed above.

25:51 Close-Up Center Crop and Foveated Boundary

The figures shown in this part of the video come from Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, and I will refer you to that article rather than repeat it here.


28:52 AVP’s Pancake Optics and Comparison to MQ3 and Birdbath

Much of this part of the video is covered in more detail in Apple Vision Pro’s (AVP) Image Quality Issues—First Impressions.

Using Eye Tracking for Optics Has Wider Implications

A key point made in the video is that the AVP’s optics are much more “aggressive” than Meta’s, and as a result, they appear to require dynamic eye tracking to work well. I referred to the AVP optics as being “unstable.” The AVP is constantly pre-correcting for distortion and color based on eye tracking. While the use of eye tracking for Foveated Rendering and control input is much discussed by Apple and others, using eye tracking to correct the optics has much more significant implications, which may be why the AVP has to be “locked” onto a person’s face.

Eye tracking for foveated rendering does not have to be very precise, but using it for optical correction does. This leads me to speculate that the AVP requires the facial interface to lock the headset to the face, which is horrible regarding human factors, to support pre-correcting the optics. This follows my rule: “When smart people do something that appears dumb, it is because the alternative was worse.”

Comparison to (Nreal/Xreal) Birdbath

One part not discussed in the video or that article, but shown in the associated figure (below), is how similar pancake optics are to birdbath optics. Nreal (now Xreal) birdbath optics are discussed in my Nreal teardown series in Nreal Birdbath Overview.

Both pancake and birdbath optics start by polarizing the image from an OLED microdisplay. They use quarter waveplates to “switch” the polarization, causing it to bounce off a polarizer and then pass through it. They both use a 50/50 coated semi-mirror. They both use a combination of refractive (lens) and reflective (mirror) optics. In the case of the birdbath, the polarizer acts as a beam splitter to the OLED display so it does not block the view out, whereas with pancake optics, everything is inline.

31:34 AVP Color Uniformity Problem

The color uniformity and the fact that the color shift moves around with eye movement were discussed in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3.

32:11 Comparing Resolution vs a Monitor

In Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, I compared the resolution of the AVP (below left) to various computer monitors (below right) and the Meta Quest 3.

Below is a close-up crop of the center of the same image shown on the AVP, a 28″ monitor, and the Meta Quest 3. See the article for an in-depth explanation.

33:03 Vision OS 1.1 Change in MacBook mirror processing

I received and saw some comments on my Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 saying that Vision OS 1.1 MacBook mirroring was sharper. I had just run a side-by-side comparison of displaying an image from a file on the AVP versus displaying the same image via mirroring a MacBook in Apple Vision Pro Displays the Same Image Differently Depending on the Application. So, I downloaded Vision OS 1.1 to the AVP and reran the same test, and I found a clear difference in the rendering of the MacBook mirroring (but not in the display from the AVP file). However, it was not that the MacBook mirror image was sharper per se, but rather that it was less bold. The difference is visible even in the thumbnails below (click on them to see the full-size images): note how the text looks less bold on the right side of the left image (OS 1.1) versus the right side of the right image.

Below are crops from the two images above, with the OS 1.1 image on the top and OS 1.0 on the bottom. The MacBook mirroring comes from the right sides of both images. Note how much less bold the text and lines are in the OS 1.1 crop.

35:57 AVP Passthrough Cameras in the Wrong Location

38:43 AVP’s Optics are Soft/Blurry

As stated in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, the AVP optics are a little soft. According to Marques Brownlee (see above) and others, my statement has caused controversy. I have heard others question my methods, but I have yet to see any evidence to the contrary.

I have provided my photographic evidence (right) and have seen it with my eyes by swapping headsets back and forth with high-resolution content. For comparison, the same image was displayed on the Meta Quest 3, and the MQ3 was clearly sharper. The “blur” on the AVP is similar to what one would see with a Gaussian blur with a radius of about 0.5 to 1 pixel.
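To give a rough sense of what that amount of blur means, here is a sketch (an illustration only; I am using the Gaussian sigma as a stand-in for the “radius,” which is an assumption about the comparison) of how a half- to one-pixel Gaussian blur smears a sharp edge across neighboring pixels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

edge = np.zeros(11)
edge[5:] = 1.0   # a sharp black-to-white edge, one value per pixel

# With sigma ~0.5 to 1 pixel, the edge transition spreads over 2-4 pixels,
# which is enough to visibly soften fine text and single-pixel lines.
for sigma in (0.5, 1.0):
    print(f"sigma={sigma}:", np.round(gaussian_filter1d(edge, sigma), 2))
```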

Please don’t confuse “pixel resolution” with optical sharpness. The AVP has more pixels per degree, but the optics are a bit out of focus and, thus, a little blurry/soft. One theory is that it is being done to reduce the screen door effect (seeing the individual pixels) and make the images on the AVP look smoother.

The slight blurring of the AVP may reduce the screen door effect as the gap between pixels is thinner on the OLED displays than on the MQ3’s LCDs. But jaggies and scintillation are still very visible on the AVP.

41:41 Closing Discussion: “Did Apple Move the Needle?”

The video wraps up with Jason asking the open-ended question, “Did Apple Move the Needle?” I discuss whether it will replace a cell phone, home monitor(s), laptop on the road, or home TV. I think you can guess that I am more than skeptical that the AVP now or in the future will change things for more than a very small fraction of the people who use cell phones, laptops, and TVs. As I say about some conference demos, “Not everything that would make a great theme park experience is something you will ever want in your home to use regularly.”

Appendix: Rumor Mill’s 5,000 Nits Apple Vision Pro

When I searched the Internet to see if anyone had independently reported on the brightness of the AVP, I got the Google search answer in big, bold letters: “5,000 Nits” (right). Then, I went to the source of this answer, and it was none other than the MIT Technology Review. I then thought they must be quoting the display’s brightness, not the headset’s, but it reports that it is a “50-fold improvement over Meta Quest 2,” which is ridiculous.

I see this all the time when companies quote a spec for the display device, and it gets reported as the headset’s brightness/nits to the eye. The companies are a big part of the problem because most headset makers won’t give a number for brightness to the eye in their specs. I should note that with almost all headset optics, the peak nits in the center will be much higher than those in the periphery. Through the years, the one thing I have found that all companies exaggerate in their marketing is brightness, whether in lumens for projectors or nits for headsets.

An LCOS or DLP display engine can output over a million nits into a waveguide, but that number is so big (almost never given) that it is not confused with the nits to the eye. Nits are a function of light output (measured in Lumens) and the ability to collimate the light (a function of the size of the light source and illumination optics).
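As a rough illustration of that relationship (the waveguide entrance size and cone angle below are assumed values for the example, not measurements), the standard photometric estimate of luminance as lumens divided by the product of emitting area and solid angle shows how a small, well-collimated source reaches enormous nits:

```python
import math

def luminance_nits(lumens, area_m2, solid_angle_sr):
    """Average luminance (cd/m^2, i.e., nits) of a source emitting `lumens`
    from area `area_m2` into a cone of `solid_angle_sr` steradians."""
    return lumens / (area_m2 * solid_angle_sr)

# Assumed, illustrative numbers: 1 lumen coupled into a 4 mm x 4 mm
# waveguide entrance within a 20-degree half-angle cone.
area = 0.004 * 0.004                                          # m^2
solid_angle = 2 * math.pi * (1 - math.cos(math.radians(20)))  # steradians
per_lumen = luminance_nits(1.0, area, solid_angle)
print(f"~{per_lumen:,.0f} nits per lumen")  # ~165,000; a few lumens tops a million
```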

The “5,000 nits” source was a tweet by Ross Young of DSCC. Part of the Tweet/X thread is copied on the right. A few respondents understood that this could not be the nits to the eye. Responder BattleZxeVR even got the part about the duty cycle being a factor, but that didn’t stop many later responders from getting it wrong.

Citing some other publications that didn’t seem to understand the difference between nits-in versus nits-out:

Quoting from The Daejeon Chronicles (June 2023): Apple Vision Pro Screens: 5,000 Nits of Wholesome HDR Goodness (with my bold emphasis):

Dagogo Altraide of ColdFusion has this to say about the device’s brightness capability:

“The screens have 5,000 nits of peak brightness, and that’s a lot. The Meta Quest 2, for example, maxes out at about 100 nits of brightness and Sony’s PS VR, about 265 nits. So, 5,000 nits is crazy. According to display analyst Ross Young, this 5,000 nits of peak brightness isn’t going to blind users, but rather provide superior contrast, brighter colors and better highlights than any of the other displays out there today.”

Quoting from Mac Rumors (May 2023): Apple’s AR/VR Headset Display Specs: 5000+ Nits Brightness for HDR, 1.41-Inch Diagonal Display and More:

With ~5000 nits brightness or more, the AR/VR headset from Apple would support HDR or high dynamic range content, which is not typical for current VR headsets on the market. The Meta Quest 2, for example, maxes out around 100 nits of brightness and it does not offer HDR, and the HoloLens 2 offers 500 nits brightness. Sony’s PSVR 2 headset has around 265 nits of brightness, and it does have an advertised HDR feature when connected to an HDR display.

Flatpanelshd (June 2023), in Apple Vision Pro: Micro-OLEDs with 3800×3000 pixels & 90/96Hz – a paradigm shift, did understand that the 5,000 nits referred to the display device and not the light to the eye:

DSCC has previously said that the micro-OLED displays deliver over 5000 nits of brightness but a good portion of that is typically lost due to the lenses and the display driving method.

As I wrote in Apple Vision Pro (Part 1) – What Apple Got Right Compared to The Meta Quest Pro, Snazzy Labs had an excellent explanation of the issues with the applications shown by Apple at the AVP announcement (it is a fun and informative video). In another otherwise excellent video, What Reviewers Aren’t Telling You About Apple Vision Pro, I have to give him credit for recognizing that the MIT Tech Review had confabulated the display’s brightness with the headset’s brightness, but he then hazarded a guess that, “after the optics, I bet it’s around 1,000 nits.” His guess was “just a bit outside” by about 10x. I do not want to pick on Snazzy Labs, as I love the videos I have seen from them, but I want to point out how much even technically knowledgeable people without a background in optics underestimate the light losses in headset optics.

Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)

20 April 2024 at 14:59

Update 2/21/24: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.

Introduction

In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections with a companion blog article with added information.

Outline of the Video and Additional Information

The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.

00:10 Lighting Silicon (Formerly Kopin Micro-OLED)

Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “Lift-off LCOS” process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a high-resolution Ferroelectric (reflective) LCOS maker. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for their Shiftall headset (see also: Pancake Optics Kopin/Panasonic).

At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the display, singulating the displays, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm FABs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, enabling Lighting Silicon to offer ~4K devices.

Related info on Kopin’s history in microdisplays and micro-OLEDs:

02:55 RaonTech

RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs/prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1&2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR. I first reported on RaonTech at CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch field sequential color pixel where “embedded spatial interpolation is done by pixel circuit itself,” so (as I understand it) the 4K image is based on 2K data being sent and interpolated by the display.

In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.

04:01 May Display (LCOS)

May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7,680 x 4,320) device with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.

May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).

May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.

04:16 Kopin’s Forth Dimension Displays (LCOS)

As discussed earlier with Lighting Silicon, Kopin acquired Ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as Micropix in 1988 as part of CRL-Opto, then renamed CRLO in 2004, and finally Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.

I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display to be their first device, which they had made for many years.

FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.

05:34 Texas Instruments (TI) DLP®

Around 2015, DLP and LCOS displays seemed to have been used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, MicroLEDs with new designs. Among the reasons the companies gave for switching from DLP to LCOS were pixel size and, thus, a smaller device for a given resolution, lower power consumption of the display+asic, more choice in device resolutions and form factors, and cost.

While DLP does not require polarized light, which is a significant efficiency advantage in room/theater projector applications that project hundreds or thousands of lumens, near-eye displays require less than one to, at most, a few lumens since the light is aimed directly into the eye rather than illuminating the whole room; there, the power of the display device and control logic/ASICs is much more of a factor. Additionally, many near-eye optical designs employ one or more reflective optics requiring polarized light.

Another issue with DLP is drive algorithm control. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm, which was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because of the need to control its unique light field driving method directly. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.

Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.

Color Breakup On Hololens 1 using a low color sequential field rate

Added 2/21/24: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.

I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field sequence rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s new much higher 240Hz & 480Hz frame rates, the DLP would have 720 or 1440 color fields per second. Some older LCOS had as low as 60-frames/180-fields (I think this was used on Hololens 1 – right), and many, if not most, LCOS today use 120-frames/360-fields per second. A few LCOS devices I have seen can go as high as 180-frames/540-fields per second. So, the newer DLP devices would have an advantage in that area.
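The field-rate arithmetic behind that comparison is simple; here is a quick sketch assuming three sequential color fields (R, G, and B) per frame, as described above:

```python
# Color fields per second = frame rate x 3 sequential color fields (R, G, B).
frame_rates_hz = {
    "older LCOS (e.g., Hololens 1 era)": 60,
    "typical LCOS today": 120,
    "fastest LCOS I have seen": 180,
    "new DLP 240 Hz mode": 240,
    "new DLP 480 Hz mode": 480,
}

for name, fps in frame_rates_hz.items():
    print(f"{name}: {fps} frames/s -> {fps * 3} color fields/s")
```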

The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).

My Background at Texas Instruments:

I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.

I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (2nd generation), the TMS320C80 (the first image processor with 4 DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.

After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector because of its advantages in that application. I like to think of myself as product-focused and want to use whichever technology works best for the given application. I see pros and cons in all the display technologies.

07:25 VueReal MicroLED

VueReal is a Canadian-based startup developing MicroLEDs. Their initial focus was on making single color per device microdisplays (below left).

However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either a passive (wire) or active (e.g., thin-film transistors on glass or plastic) backplane. They claim to have extremely high yields with this process. With it, they can make full-color rectangular displays (above right), transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.

I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, although he had to keep his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.

08:26 MojoVision MicroLED

MojoVision is pivoting from a “Contact Lens Display Company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s Display Group. MojoVision started saying (in private) that it was putting more emphasis on being a MicroLED component company around 2021. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.

To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Some third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch on the MojoVision contact lens display (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun thing to watch).

So, in my book, MojoVision, as a company, starts with a major credibility problem. Still, they are now under new leadership and focusing on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” MojoVision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum dot conversion to get green and red.

I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. Playnitride demonstrated quantum dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).

Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.

Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:

  • Blue LEDs are the most efficient color
  • You only have to make a single type of LED crystal structure in a single layer.
  • It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay size MicroLEDs
  • Quantum-dot conversion of blue to green and red is much more efficient than native green and red LEDs
  • Native red LEDs are inefficient in GaN crystalline structures that are moderately compatible with native green and blue LEDs.
  • Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking light from lower layers causes efficiency issues.
  • Single emitters with multiple-color LEDs (e.g., See my article on Porotech) have efficiency issues, particularly in RED, which are further exacerbated by the need to time sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.

Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):

  • MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, the light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
    • MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers
  • There must be some structure/isolation to prevent the blue light from adjacent cells from activating the quantum dots of a given cell, which would cause the desaturation of colors. Eliminating color crosstalk/desaturating is another advantage of having thinner quantum dot layers.
  • The lifetime and potential for color shifting with quantum dots, particularly if they are driven hard. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs.
  • While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch (assuming MojoVision keeps using two reds to get enough red brightness). While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron pitch OLED, a smaller pixel size, such as with a single-emitter-with-multiple-colors approach (e.g., Porotech), would be better (more efficient due to étendue; see: MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)) for semi-collimating the light with microlenses as needed by waveguides.

10:20 Porotech MicroLED

I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.

While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel since only one LED per pixel is required. This also results in only two electrical connections (power and ground) to the LED per pixel.

However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.

Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.

12:55 Brilliance Color Laser Combiner

Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.

While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.

14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)

Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter if they shoot from the temple onto a hologram directly into the eye like North Focals or use a waveguide like TriLite; the fatal weak link is using an LBS display device.

It has reached the point that when I see a device with an LBS display, I’m pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by confabulating “scan lines” with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled into a distorted and non-uniform scanning process.

Like Brilliance above, TriLite’s core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they are at slightly different angles. This will cause the various colors to diverge by different amounts in the scanning process. TriLite uses its “Trajectory Control Module” (TCM) to compute how to re-sample the image to align the red, green, and blue.

TriLite then compounds its problems with LBS by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with the Lissajous scanning process, which is also used by Oqmented (TriLite uses Infineon’s scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.

The information and images below have been collected from TriLite’s website.

As far as I have seen, it is a myth that LBS has any advantage in size, cost, and power over LCOS for the same image resolution and FOV. As discussed in part 1, Avegant generated the comparison below, comparing North Focals LBS glasses with a ~12-degree FOV and roughly 320×240 resolution to Avegant’s 720 x 720 30-degree LCOS-based glasses.

Below is a selection (from dozens) of related articles I have written on various LBS display devices:

Next Time

I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason and my discussion of the Apple Vision Pro.

CES (Pt. 3), Xreal, BMW, Ocutrx, Nimo Planet, Sightful, and LetinAR

28 January 2024 at 06:11

Update 1/28/2024 – Based on some feedback from Nimo Planet, I have corrected the description of their computer pod.

Introduction

The “theme” for this article is companies I met with at CES with optical see-through Augmented and Mixed Reality using OLED microdisplays.

I’m off to SPIE AR/VR/MR 2024 in San Francisco as I release this article. So, this write-up will be a bit rushed and likely have more than the usual typos. Then, right after I get back from the AR/VR/MR show, I should be picking up my Apple Vision Pro for testing.

Xreal

Xreal (formerly Nreal) says they shipped 350K units in 2023, more than all other AR/MR companies combined. They had a large booth on the CES floor, which was very busy. They had multiple public and private demo stations.

From 2021 KGOnTech Teardown

This blog has followed Xreal/Nreal since its first appearance at CES in 2019. Xreal uses an OLED microdisplay in a “birdbath” optical architecture first made popular by (the now defunct) Osterhout Design Group (ODG) with their R8 and R9, which were shown at CES in 2017. For more on this design, I would suggest reading my 2021 teardown articles on the Nreal first product (Nreal Teardown: Part 1, Clones and Birdbath Basics, Nreal Teardown: Part 2, Detailed Look Inside, and Nreal Teardown: Part 3, Pictures Through the Lens).

Inherent in the birdbath optical architecture Xreal still uses is that it blocks about 70% of the real-world light, acting like moderately dark sunglasses. About 10% of the display’s light makes it to the eye, which is much more efficient than waveguides, which are much thinner and more transparent. Xreal claims their newer designs support up to 500 nits to the eye, meaning the Sony micro-OLEDs must output about 5,000 nits.
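Working backward from those numbers, here is a rough sketch using the ~10% birdbath efficiency noted above:

```python
# If only ~10% of the display's light reaches the eye through a birdbath,
# the micro-OLED must emit roughly 10x the nits claimed to the eye.
claimed_to_eye_nits = 500        # Xreal's claimed brightness to the eye
birdbath_efficiency = 0.10       # approximate fraction of display light delivered
required_display_nits = claimed_to_eye_nits / birdbath_efficiency
print(f"Display must emit ~{required_display_nits:,.0f} nits")  # ~5,000 nits
```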

With investment, volume, and experience, Xreal has improved its optics and image quality, but it can’t get past the inherent limitations of a birdbath, particularly in terms of transparency. Xreal recently added an LCD dimming shutter to selectively block more or all of the real world with their new Xreal Air 2 Pro and their latest Air 2 Ultra, for which I was given a demo at CES.

The earlier Xreal/Nreal headsets were little more than 1920×1080 monitors you wore with a USB-C connection for power and video. Each generation has added more “smarts” to the glasses. The Air 2 Ultra includes dual 3-D IR camera sensors for spatial recognition. Xreal and (to be discussed later) Nimo, among others, have already picked up on Apple’s “Spatial Computing,” referring to their products as affordable ways to get into spatial computing.

Most of the newer headsets can be driven either by a cell phone or by Xreal’s “Beam” compute module, which can mirror or cast one or more virtual displays from a computer, cell phone, or tablet. While there may be multiple virtual monitors, they are still represented on a 1920×1080 display device. I believe (I forgot to ask) that Xreal is using internal sensors to detect head movement to virtualize the monitors.

Xreal’s Air 2 Ultra demo showcased the new spatial sensors’ ability to recognize hand and finger gestures. Additionally, the sensors could read “bar-coded” dials and slides made from cardboard.

BMW AR Ride Concept (Using Xreal Glasses)

In addition to seeing Xreal devices on their own, I was invited by BMW to take a ride trying out their Augmented Reality HUD on the streets around the convention center. A video produced by BMW shows a slightly different and abbreviated trip. I should emphasize that this is just an R&D demonstration, not a product that BMW plans to introduce. Also, BMW made clear that they would be working with other makers of headsets but that Xreal was the most readily available.

To augment using the Xreal glasses, BMW mounted a head-tracking camera under the rearview mirror. This allows BMW to lock the generated image to the physical car. Specifically, it allowed them to (selectively) block/occlude parts of the virtual image hidden behind the front A-pillar of the car. Not shown in the pictures from BMW below (click on a picture to see it bigger) is that the images would start in the front window, be hidden by the A-pillar, and then continue in the side window.

BMW’s R&D is looking at driver and passenger AR glasses use. They discussed that they would have different content for the driver, which would have to be simplified and more limited than what they could show the passenger. There are many technical and government/legal issues (all 50 states in the U.S. have different laws regarding HUD displays) with supporting headsets on drivers. From a purely technical perspective, a head-worn AR HUD has many advantages and some disadvantages versus a fixed HUD on the windshield or dash combiner (too much to get into in this quick article).

Ocutrx (for Low-Vision and other applications)

Ocutrx’s OcuLenz also uses “birdbath” optics. The OcuLenz was originally designed to support people with “low vision,” especially people with macular degeneration and other eye problems that block parts of a person’s vision. People with macular degeneration lose the high-resolution, high-contrast, and color-sensitive parts of their vision. They must rely on other parts of the retina, commonly called peripheral vision (although it may include more than just what is technically considered peripheral vision).

A low-vision headset must have a wide FOV to reach the outer parts of the retina. They must magnify, increase color saturation, and improve contrast over what a person with normal vision would want to see. Note that while these people may be legally blind, they still can see, particularly with their peripheral vision. This is why a headset that still allows them to use their peripheral vision is important.

About 20 million people in the US alone have what is considered “low vision,” and about 1 million more people develop low vision each year as the population ages. It is the biggest identifiable market I know of today for augmented reality headsets. But a catch needs to be fixed for this market to be served. By the very nature of the people involved, having low vision and often being elderly, they need a lot of professional help while often being on a fixed or limited income. Unfortunately, private or government (Medicare/Medicaid) insurance will rarely cover either the headset cost or the professional support required. There have been bills before Congress to change this, but so far, nothing has happened of which I am aware. Without a way to pay for the headsets, the volumes are low, which makes the headsets more expensive than they need to be.

In the past, I have reported on Evergaze’s seeBoost, which exited this market while developing their second-generation product for the economic reasons (lack of insurance coverage) above. I have also discussed NuEyes with Bradley Lynch in a video after AWE 2022. The economic realities of the low-vision market cause companies like NuEyes and Ocutrx to look for other business opportunities for their headsets. It is a frustrating situation, knowing that the technology could help so many people. I hope to cover this topic in more detail in the future.

Nimo Planet (Nimo)

Nimo Planet (Nimo) makes a small computer that acts as a spatial mouse pointer for AR headsets with a USB-C port for power and video input. It replaces the need for a cell phone and can send mirror/casting video information from other devices to the headset. Still, Nimo Core is a fully standalone computer with Nimo OS, which simultaneously supports Android, Web, and Unity Apps. No other host computer is needed.

According to Nimo, every other multi-screen solution in the market is developed in web platforms or UnityApp, which limits them to running only Web Views. Nimo OS created a new Stereo Rendering and Multi-Window architecture in AOSP to run multiple Android, Unity, and Web Apps simultaneously.

Nimo developed their glasses based on LetinAR optics and also supports other AR glasses. Most notably, they just announced a joint development agreement with Rokid.

I got a brief demonstration of Nimo’s multi-windows on an AR headset. They use the inertial sensors in the headset to detect head movement and move the view of the multiple windows accordingly. It is like you are looking at multiple monitors through a 1920×1080 window. No matter how big the size or number of virtual monitors, they will be clipped to that 1920×1080 view. This device lets you move your head to select what you see. I discussed some of the issues with simulating virtual monitors with head-mounted displays in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous, and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous.

Sightful

The Sightful is similar to the Nimo Planet type of device in some ways. With the Sightful, the computer is built inside the keyboard and touchpad, making it a full-fledged computer. Alternatively, Sightful can be viewed as a laptop computer where the display uses AR glasses rather than a flat panel.

Like Nimo and Xreal’s Beam and many other new Mixed Reality devices, Sightful supports multiple windows. I don’t know if they have sensors for 3-D sensing, so I suspect they use internal sensors to detect head movement.

Sightful’s basic display specs resemble other birdbath AR glasses designs from companies like Xreal and Rokid. I have not had a chance, however, to compare them seriously.

LetinAR

I have been writing about LetinAR since 2018. LetinAR started with a “Pin Mirror” type of pupil replication. They have now moved on to a series of what I will call “horizontal slat pupil replicators.” They also use total internal reflection (TIR) and a curved mirror to move the focus of the image from an OLED microdisplay before it goes to the various pupil-expanding slats.

While LetinAR’s slat design improves image quality over its earlier pin mirrors, it is still imperfect. When looking through the lenses (without a virtual image), the view is a bit “disturbed” and seems to have diffraction line effects. Similarly, you can perceive gaps or double images depending on your eye location and movement. LetinAR is working on continuing to improve this technology. While their image quality is not as good as the birdbath designs, they offer much better transparency.

LetinAR seems to be making progress with multiple customers, including Jorjin, which was demonstrating in the LetinAR booth; Sharp, which had a big demonstration in their booth (while they didn’t say whose optics were in the demo, it was obviously LetinAR’s – see pictures below); and the headset discussed above by Nimo.

Conclusions

Sorry, there is no time for major conclusions today. I’m off to the AR/VR/MR Conference and Exhibition.

I will note that regardless of the success of the AVP, Apple has already succeeded in changing the language of Augmented and Mixed reality. In addition to almost everyone in AR and Mixed reality talking “AI,” many companies now use “Spatial Computing” to refer to their products in their marketing.

CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, Xander, EverySight, Mojie, TCL color µLED

24 January 2024 at 15:24

Introduction

As I wrote last time, I met with nearly 40 companies at CES, of which 31 I can talk about. This time, I will go into more detail and share some photos. I picked the companies for this article because they seemed to link together. The Sony XR headset and how it fit on the user’s head was similar to the newer DigiLens Argo headband. DigiLens and the other companies had diffractive waveguides and emphasized lightweight and glass-like form factors.

I would like to caution readers with my saying that “all demos at conferences are magic shows,” something I warn about near the beginning of this blog’s Cynics Guide to CES – Glossary of Terms. I generally no longer try to take “through the optics” pictures at CES. It is difficult to get good representative photos in the short time available with all the running around and without all the proper equipment. I made an exception for the TCL color MicroLED glasses as they readily came out better than expected. But at the same time, I was only using test images provided by TCL and not test patterns that I selected. Generally, the toughest test patterns (such as those on my Test Pattern Page) are simple. For example, if you put up a solid white image and see color in the white, you know something is wrong. When you put up colorful pictures with a lot of busy detail (like a colorful parrot in the TCL demo), it is hard to tell what, if anything, is wrong.

The SPIE AR/VR/MR 2024 in San Francisco is fast approaching. If you want to meet, contact me at meet@kgontech.com. I hope to get one or two more articles on CES out before leaving for the AR/VR/MR conference.

Sony XR and DigiLens Headband Mixed Reality (with contrasts to Apple Vision Pro)

Sony XR (and others compared to Apple Vision Pro)

This blog expressed concerns about the Apple Vision Pro's (AVP) poor mechanical ergonomics, complete blocking of peripheral vision, and terrible placement of the passthrough cameras. My first reaction was that the AVP looked like it was designed by a beginner with too much money and an emphasis on style over functionality. What I consider Apple's obvious mistakes seem to be addressed in the new Sony XR headset (SonyXR).

The SonyXR shows much better weight distribution, with (likely) the battery and processing moved to the back "bustle" of the headset and a rigid frame to transfer the weight for balance. It has been well established with designs such as the Hololens 2 and Meta Quest Pro that this type of design leads to better comfort. This approach can also move a significant amount of the power dissipation to the back for better heat management, as there is a second surface radiating heat.

The bustle on the back design also avoids the terrible design decision by Apple to have a snag hazard and disconnection nuisance with an external battery and cable.

The SonyXR is shown to have enough eye relief to wear typical prescription glasses. This will be a major advantage in many potential XR/MR headset uses, making the headset more interchangeable between users. This is particularly important for use cases where the headset is shared or used occasionally rather than all day (e.g., museum tours and other special events). Supporting enough eye relief for glasses is more optically difficult and requires larger optics for the same field of view (FOV).

Another major benefit of the larger eye relief is that it allows for peripheral vision. Peripheral vision is considered to start at about 100 degrees, or about where a typical VR headset's FOV stops. While peripheral vision is low in resolution, it is sensitive to motion; it alerts the person to motion so they will turn their head. The saying goes that peripheral vision evolved to keep humans from being eaten by tigers. Translated to the modern world, it helps keep people from being hit by moving machinery or running into things that might hurt them.

Another good feature shown in the Sony XR is the flip-up screen. There are so many times when you want to get the screen out of your way quickly. The first MR headset I used that supported this was the Hololens 2.

Another feature of the Hololens 2 is the front-to-back head strap (optional but included). Longtime VR gamer and YouTube personality Brad Lynch of the SadlyItsBradley YouTube channel has tried many VR-type headsets and optional headbands/straps. Brad says that front-to-back straps/pads generally provide the most comfort with extended use. Side-to-side straps, such as on the AVP, generally don’t provide the support where it is needed most. Brad has also said that while a forehead pad, such as on the Meta Quest Pro, helps, headset straps (which are not directly supported on the MQP) are still needed. It is not clear whether the Sony XR headset will have over-the-head straps. Even companies that support/include overhead straps generally don’t show them in the marketing photos and demos as they mess up people’s hair.

The SonyXR cameras are located closer to the user’s eyes. While there are no perfect placements for the two cameras, the further they are from the actual location of the eyes, the more distortion will be caused for making perspective/depth-correct passthrough (for more on this subject, see: Apple Vision Pro Part 6 – Passthrough Mixed Reality (PtMR) Problems).

Lynx R1

The Lynx R1 also uses a headband with a forehead pad, a back bustle, and a flip-up screen. Lynx also supports enough eye relief for glasses and good peripheral vision, and locates its passthrough cameras near where the eyes will be when in use. Unfortunately, I found a lot of problems with the optics Lynx chose for the R1 from the optics design firm Limbak (see also my Lynx R1 discussion with Brad Lynch). Apple has since bought Limbak, and Lynx will likely be moving on to other optical designs.

Digilens Argo New Head Band Version at CES 2024

I wrote a lot about the Digilens Argo in last year's coverage of CES and the AR/VR/MR conference in DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8). In the section Skull-Gripping "Glasses" vs. Headband or Open Helmet, I discussed how Digilens had missed an opportunity for both comfort and for supporting the wearing of glasses. Digilens said they took my comments to heart and developed a variation with the rigid headband and flip-up display shown in their suite at CES 2024. Digilens said that this version lets them expand their market (and no, I didn't get a penny for my input).

The Argo is light enough that it probably doesn't need an over-the-head band for extra support. If the headband were a ground-up design rather than a modular variation, I would have liked to see the battery and processing moved to a back bustle.

While on the subject of Digilens, they also had a couple of nice static displays. Pictured below right are variations in waveguide thickness they support. Generally, image quality and field of view can be improved by supporting more waveguide layers (with three layers supporting individual red, green, and blue waveguides). Digilens also had a static display using polarized light to show different configurations they can support for the entrance, expansion, and exit gratings (below right).

Vuzix

Vuzix has been making wearable heads-up displays for about 26 years and has a wide variety of headsets for different applications. Vuzix has been discussed on this blog many times. Vuzix primarily focuses on lightweight and small form factor glasses and attachments with displays.

Vuzix Ultralite Sport (S) and Forward Projection (Eye Glow) Elimination

New this year at CES was Vuzix’s Ultralite Sports (S) model. In addition to being more “sporty” looking, their waveguides are designed to eliminate forward projection (commonly referred to as “Eye Glow”). Eye glow was famously an issue with most diffractive waveguides, including the Hololens 1 & 2 (see right), Magic Leap 1 & 2, and previous Vuzix waveguide-based glasses.

Vuzix appears to be using the same method that both Digilens and Dispelix discussed in their AR/VR/MR 2022 papers that I discussed with Brad Lynch in a YouTube video after AR/VR/MR 2022 and in my blog article, DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8) in the sections on Eye Glow.

If the waveguides are canted (tilted) while the exit gratings are still designed to project toward the eye, the forward projection will be directed downward at twice the angle at which the waveguides are canted. Thus, with only a small tilt of the waveguides, the forward projection is aimed far below the eyesight of others (unless they are on the ground).
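To make the "twice the cant angle" point concrete, here is a minimal geometric sketch. The cant angle and viewing distance are illustrative assumptions on my part, not values from Vuzix, DigiLens, or Dispelix.

```python
import math

def eye_glow_angle_deg(cant_deg: float) -> float:
    """Mirror-like geometry: tilting the waveguide by cant_deg redirects the
    forward ("eye glow") projection by roughly twice that angle."""
    return 2.0 * cant_deg

def drop_below_eye_m(cant_deg: float, distance_m: float) -> float:
    """How far below the wearer's eye height the glow lands for an observer
    at the given distance (flat-ground sketch)."""
    return distance_m * math.tan(math.radians(eye_glow_angle_deg(cant_deg)))

# Example: an assumed 8-degree cant sends the glow ~16 degrees downward,
# landing ~0.57 m below eye height for an observer 2 m away.
print(eye_glow_angle_deg(8.0), round(drop_below_eye_m(8.0, 2.0), 2))
```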

Ultra Light Displays with Audio (Vuzix/Xander) & Solos

Last year, Vuzix introduced their lightweight (38 grams) Z100 Ultralite, which uses 640×480 green (only) MicroLED microdisplays. Xander, using Vuzix's lightweight Z100, has developed speech-to-text glasses for people with hearing difficulties (Xander was in the AARP booth at CES).

While a green-only display with low resolution by today's standards is not something you will want to watch movies on, there are many uses for a limited amount of text and graphics in a lightweight, small form factor. For example, I got to try out Solos Audio glasses, which, among other things, use ChatGPT to do on-the-fly language translation. It's not hard to imagine that a small display could help clarify what is being said for Solos and similar products, including the Amazon Echo Frames and the Ray-Ban Meta Wayfarer.

Mojie (Green) MicroLED with Plastic Waveguide

Like the Vuzix Z100, the Mojie (a trademark of Meta-Bounds) uses green-only Jade Bird Display 640×480 MicroLEDs with waveguide optics. The big difference is that Mojie, along with the Oppo Air Glass 2 and Meizu MYVU, uses Meta-Bounds resin plastic waveguides. Unfortunately, I didn't get to the Mojie booth until near closing time at CES, but they were nice enough to give me a short demo. Overall, regarding weight and size, the Mojie AR glasses are similar to the Vuzix Z100, but I didn't have the time or demo content to judge the image quality. Generally, resin plastic diffractive waveguides to date have had lower image quality than ones on a glass substrate.

I have no idea what resin plastic Meta-Bounds uses or if they have their own formula. Mitsui Chemicals and Mitsubishi Chemicals, both of Japan, are known to be suppliers of resin plastic substrate material.

EverySight

ELBIT F35 Helmet and Skylens

Everysight (the company, not to be confused with "EyeSight," the front eye display feature on the Apple Vision Pro) has been making lightweight glasses, primarily for sports, since about 2018. Everysight spun out of ELBIT, a major defense (including the F35 helmet HUD) and commercial products company. Recently, ELBIT had their Skylens AR glasses HUD approved by the FAA for use in the Boeing 737NG series. Everysight uses an optics technology I call "precompensated off-axis." Everysight (and ELBIT) have an optics engine that projects onto a curved front lens with a partial mirror coating. The precompensation optics of the projector correct for the distortion from hitting the curved mirror off-axis.

The Everysight/ELBIT technology is much more optically efficient than waveguide technologies and more transparent than "birdbath" technologies (the best-known birdbath technology today being Xreal's). The amount of light from the display versus transparency is a function of the semi-transparent mirror coating. The downside of the Everysight optical system with small-form glasses is that the FOV and eyebox tend to be smaller. The new Everysight Maverick glasses have a 22-degree FOV and produce over 1,000 nits using a 5,000-nit, 640×400-pixel full-color Sony Micro-OLED.

The front lens/mirror elements are inexpensive and interchangeable. But the most technically interesting thing is that Everysight has figured out how to support prescriptions built into the front lens. They use a "push-pull" optics arrangement similar to some waveguide headsets (most notably Hololens 1 & 2 and Magic Leap). The optical surface on the eye side of the lens corrects the virtual display for the user's eye, and the outside surface of the lens is curved as needed so that the user's vision correction for the real world also works out.
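To illustrate the push-pull idea, here is a minimal thin-lens sketch with illustrative numbers only; it is not Everysight's (or any vendor's) actual design. It assumes the eye-side surface handles the virtual image, real-world light passes through both surfaces, powers simply add, and lens spacing is ignored.

```python
def push_pull_powers(rx_diopters: float, virtual_image_m: float):
    virtual_offset = -1.0 / virtual_image_m   # power to place the virtual image at that distance
    inner = rx_diopters + virtual_offset       # eye-side surface: prescription + image-distance offset
    outer = rx_diopters - inner                # outer surface "pushes back" so the real world
    return inner, outer                        # sees only the user's prescription

# Example: a -3.00 D myope with the virtual image placed at 2 m
# -> inner surface -3.50 D, outer surface +0.50 D.
print(push_pull_powers(-3.00, 2.0))
```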

TCL RayNeo X2 and Ray Neo X2 Lite

As noted in the introduction, I generally no longer try to take "through the optics" pictures at CES, given how difficult it is to get good representative photos in the short time available. However, I got some good photos through TCL's RayNeo X2 and the RayNeo X2 Lite. While the two products sound very close, the image quality of the "Lite" version, which switched to Applied Materials (AMAT) diffractive waveguides, was dramatically better.

The older RayNeo X2s were available to see on the floor and had problems, particularly with the diffraction gratings capturing stray light and with general color quality. I was given a private showing of the newly announced "Lite" version using the AMAT waveguides; not only is it lighter, but the image quality was much better. The picture below shows the RayNeo X2 (with an unknown waveguide) on the left, which captures stray overhead light (see streaks at the arrows). The picture via the Lite model (with the AMAT waveguide) does not exhibit these streaks, even though the lighting is similar. Although hard to see in the pictures, the color uniformity with the AMAT waveguide also seems better (although not perfect, as discussed later).

Both RayNeo models use three separate Jade Bird Display red, green, and blue MicroLEDs (inorganic) with an X-cube color combiner. X-cubes have long been used in larger 3-panel LCD and LCOS projectors and are formed from four prisms with different dichroic coatings glued together. Jade Bird Display has been demoing this type of color combiner since at least AR/VR/MR 2022 (above). Having worked with 3-panel LCOS projectors in my early days at Syndiant, I know the difficulties in aligning three panels to an X-cube combiner. This alignment is particularly difficult with the size of these MicroLED displays and their small pixels.

I must say that the image quality of the TCL RayNeo X2 Lite exceeded my expectations. Everything seems well aligned in the close-up crop from the same parrot picture (below). Also, the color is relatively good, without the wide pixel-to-pixel brightness variation I have seen in past MicroLED displays. While this is quite an achievement for a MicroLED system, the RayNeo X2 Lite has only a modest 640×480 resolution display with a 30-degree diagonal FOV. These specs result in about 26 pixels per degree, or about half the angular resolution of many other headsets. The picture below was taken with a Canon R5 with a 16mm lens, which, as it turns out, has a resolving power close to good human vision.
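For readers who want to check the angular-resolution figure, here is the simple arithmetic behind it, using the common approximation of diagonal pixels divided by diagonal FOV.

```python
import math

def pixels_per_degree(h_px: int, v_px: int, diag_fov_deg: float) -> float:
    """Approximate PPD from resolution and diagonal field of view."""
    return math.hypot(h_px, v_px) / diag_fov_deg

# RayNeo X2 Lite: 640x480 over a ~30-degree diagonal -> ~26.7 PPD,
# roughly half of a ~50-60 PPD headset and well below the ~60 PPD
# often associated with 20/20 vision.
print(round(pixels_per_degree(640, 480, 30.0), 1))
```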

Per my warning in the introduction, all demos are magic shows. I don’t know how representative this prototype will be of units in production, and perhaps most importantly, I did not try my test patterns but used the images provided by TCL.

Below is another picture of the parrot taken against a darker background. Looking at the wooden limb under the parrot, you will see it is somewhat reddish on the left and greenish on the right. This might indicate color shifting due to the waveguide, as is common with diffractive waveguides. Once again, taking quick pictures at shows (all these were handheld) and without controlling the source content, it is hard to know. This is why I would like to acquire units for extended evaluations.

The next two pictures, taken against a dark background and a dimly lit room, show what I think should be a white text block on the top. But the text seems to change from a reddish tint on the left to a blueish tint on the right. Once again, this suggests some color shifting across the diffractive waveguide.

Below is the same projected image taken with identical camera settings but with different background lighting.

Below is the same projected flower image with the same camera settings and different lighting.

Another thing I noticed with the Lite/AMAT waveguides is significant front projection/eye glow. I suspect this will be addressed in the future, as has been demonstrated by Digilens, Dispelix, and Vuzix, as discussed earlier.

Conclusions

The Sony XR headset seems to address many of the beginner mistakes Apple made with the AVP. In the case of the Digilens Argo last year, they seemed to be caught between being a full-featured headset and the glasses form factor. The new Argo headband seems like a good industrial form factor that allows people to wear normal glasses and flip the display out of the way when desired.

Vuzix, with its newer Ultralite Z100 and Sport models, seems to be emphasizing lightweight functionality. Vuzix and the other waveguide AR glasses makers have not given a clear path as to how they will support people who need prescription glasses. The most obvious approach is some form of "push-pull" with a lens before and after the waveguide. Luxexcel had a way to 3-D print prescription push-pull lenses, but Meta bought them. Add Optics (formed by former Luxexcel employees) has another approach using 3-D printed molds. Everysight addresses prescription lenses with the somewhat different push-pull approach that its optical design necessitates.

While not perfect, the TCL color MicroLED, at least in the newer "Lite" version, was much better than I expected. At the same time, one has to recognize that the resolution, FOV, and color uniformity are still not up to some other technologies; to appreciate it, one has to recognize the technical difficulty involved. I also want to note that Vuzix has said that they are also planning color MicroLED glasses with three microdisplays, but it is not clear whether they will use an X-cube or a waveguide combiner approach.

The moderate success of smart audio glasses may be pointing the way for these ultra-light glasses form factor designs for a consumer AR product. One can readily see where adding some basic text and graphics would be of further benefit. We will know if this category has become successful if Apple enters this market 😁.

DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)

27 March 2023 at 19:46

Introduction – Contrast in Approaches and Technologies

This article will compare and contrast the Vuzix Ultralite, Lumus Z-Lens, and DigiLens Argo waveguide-based AR prototypes I saw at CES 2023. I discussed these three prototypes with SadlyItsBradley in our CES 2023 video. It will also briefly discuss Avegant's related AR/VR/MR 2022 and 2023 presentations about their new smaller LCOS projection engine and Magic Leap 2's LCOS design to show some other projection engine options.

It will go a bit deeper into some of the human factors of the DigiLens Argo. This is not to pick on the Argo, but because it has more features and demonstrates some common traits and issues of trying to support a rich feature set in a glasses-like form factor.

When I quote various specs below, they are all manufacturer’s claims unless otherwise stated. Some of these claims will be based on where the companies expect the product to be in production. No one has checked the claims’ veracity, and most companies typically round up, sometimes very generously, on brightness (nits) and field of view (FOV) specs.

This is a somewhat long article, and the key topics discussed include:

  • MicroLED versus LCOS Optical engine sizes
  • The image quality of MicroLED vs. LCOS and Reflective (Lumus) vs. Diffractive waveguides
  • The efficiency of Reflective vs. Diffractive waveguides with MicroLEDs
  • The efficiency of MicroLED vs. LCOS
  • Glasses form factor (using Digilens Argo as an example)

Overview of the prototypes

Vuzix Ultralite and Oppo Air Glass 2

The Vuzix Ultralite and Oppo Air Glass 2 (top two on the right) each have a 640 by 480 pixel Jade Bird Display (JBD) green-only MicroLED per eye. They were discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7).

They each weigh about 38 grams, including frames, processing, wireless communication, and batteries. Both are self-contained, with integrated batteries, processing, and wireless connectivity.

Vuzix developed their own glass diffractive waveguides and optical engines for the Ultralite. They claim a 30-degree FOV with 3,000 nits.

Oppo uses resin plastic waveguides and a MicroLED optical engine developed jointly with Meta Bounds. I have seen prototype resin plastic waveguides from other companies for several years, but this is the first time I have seen them in a product getting ready for production. The glasses (described in a 1.5-minute YouTube/CNET video) include microphones and speakers for applications, including voice-to-text and phone calls. They also plan on supporting vision correction with lenses built into the frames. Oppo claims the Air Glass 2 has a 27-degree FOV and outputs 1,400 nits.

Lumus Z-Lens

Lumus’s Z-Lens (third from the top right) supports up to a 2K by 2K full/true color LCOS display with a 50-degree FOV. Its FoV is 3 to 4 times the area of the other three headsets, so it must output more than 3 to 4 times the total light. It supports about 4.5x the number of pixels of the DigiLens Argo and over 13x the pixels of the Vuzix Ultralite and Oppo Air Glass 2.
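The FOV-area and pixel-count comparisons above are easy to sanity-check with a little arithmetic, assuming the nominal diagonal FOVs and resolutions quoted (real optics and aspect ratios will differ somewhat).

```python
def fov_area_ratio(fov_a_deg: float, fov_b_deg: float) -> float:
    """Ratio of solid-angle-ish FOV areas, treating FOV as a simple linear scale."""
    return (fov_a_deg / fov_b_deg) ** 2

def pixel_ratio(res_a, res_b) -> float:
    return (res_a[0] * res_a[1]) / (res_b[0] * res_b[1])

print(round(fov_area_ratio(50, 30), 1))                   # ~2.8x a 30-degree device's FOV area
print(round(fov_area_ratio(50, 27), 1))                   # ~3.4x a 27-degree device's FOV area
print(round(pixel_ratio((2048, 2048), (1280, 720)), 1))   # ~4.6x the Argo's pixels
print(round(pixel_ratio((2048, 2048), (640, 480)), 1))    # ~13.7x the Ultralite / Air Glass 2 pixels
```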

The Z-Lens prototype is a demonstration of display capability and, unlike the other three, is not self-contained and has no battery or processing. A cable provides the display signal and power for each eye. Lumus is an optics waveguide and projector engine company and leaves it to its customers to make full-up products.

Digilens Argo

The DigiLens Argo (bottom, above right) uses a 1280 by 720 full/true color LCOS display. The Argo has many more features than the other devices, with integrated SLAM cameras, GNSS (GPS, etc.), Wi-Fi, Bluetooth, a 48mp color camera (with 4×4 pixel "binning" like the iPhone 14), voice recognition, batteries, and a more advanced CPU (Qualcomm Snapdragon 2). Digilens intends to sell the Argo for enterprise applications, perhaps with partners, while continuing to sell waveguides and optical engines as components for higher-volume applications. As the Argo has a much more complete feature set, I will discuss some of the pros and cons of the human factors of the Argo design later in this article.

Through the Lens Images

Below is a composite image from four photographs taken with the same camera (OM-D E-M5 Mark III) and lens (fixed 17mm). The pictures were taken at conferences, handheld, and not perfectly aligned for optimum image quality. The projected display and the room/outdoor lighting have a wide range of brightness between the pictures. None of the pictures have been resized, so the relative FoVs have been maintained, and you get an idea of the image content.

The Lumus Z-lens reflective waveguide has a much bigger FOV, significantly more resolution, and exhibits much better color uniformity with the same or higher brightness (nits). It also appears that reflective waveguides have a significant efficiency advantage with both MicroLEDs (and LCOS), as discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7). It should also be noted that the Lumus Z-lens prototype has only the display with optics and has no integrated processing, communication or battery. In contrast, the others are closer to full products.

A more complex issue is that of power consumption versus brightness. LCOS engines today are much more efficient (by 10x or more) than MicroLEDs with similar waveguides for full-screen bright images. MicroLEDs' big power advantage occurs when the content is sparse, as their power consumption is roughly proportional to the average pixel value, whereas, with LCOS, the whole display is illuminated regardless of the content.
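The trade-off is easy to see with a toy power model. The constants below are illustrative assumptions that merely reflect the ~10x full-screen disparity described above, not measured figures for any engine.

```python
LCOS_FULL_WHITE_W = 0.10      # assumed LCOS engine power, roughly independent of content
MICROLED_FULL_WHITE_W = 1.00  # assumed MicroLED power at 100% average pixel value (APV)

def microled_power_w(apv: float) -> float:
    """MicroLED display power for content with the given average pixel value (0..1)."""
    return MICROLED_FULL_WHITE_W * apv

# Sparse "data snacking" content (a few percent APV) strongly favors MicroLED;
# movie/photo content (high APV) strongly favors LCOS under these assumptions.
for apv in (0.02, 0.10, 0.50, 1.00):
    print(f"APV {apv:>4.0%}:  MicroLED {microled_power_w(apv):.2f} W  vs  LCOS {LCOS_FULL_WHITE_W:.2f} W")
```

Under these assumed numbers, the crossover is at about 10% average pixel value; below that, the MicroLED engine wins on power, and above it, the LCOS engine wins.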

If and when MicroLEDs support full color, the efficiency of nits-per-Watt will be significantly lower than monochrome green. Whatever method produces full color will detract from the overall electrical and optical efficiency. Additionally, color balancing for white requires adding blue and red light with lower nits-per-Watt.

Some caveats:

  • The Lumus Z-Lens is a prototype and does not have all the anti-reflective and other coatings of a production waveguide. Lumus uses an LCOS device with about ~3-micron pixels, which fits 1440 by 1440 within the ~50-degree FOV supported by the optics. Lumus is working with at least one LCOS maker to get an ~2-micron pixel size to support 2K by 2K resolution with the same size display. The image is cut off on the right-hand side of the image by the camera, which was rotated into portrait mode to fit inside the glasses.
  • The Digilens through the lens image is from Photonics West in 2022 (about one year old). Digilens has continued to improve its waveguide since this picture was taken.
  • The Vuzix picture was taken via Vuzix Shield, which uses the same waveguide and optics as the Vuzix Ultralight.
  • The Oppo image was taken at the AR/VR/MR 2023 conference.

Optical Engine Sizes

Vuzix has an impressively small optical engine driving Vuzix's diffractive waveguides. Below left is a comparison of Vuzix's older full-color DLP engine with an in-development color X-Cube engine and the green MicroLED engine used in the Vuzix Ultralite™ and Shield. In the center below is an exploded view of the Oppo and Meta Bounds glasses (a joint design, as they describe it) with their MicroLED engine, shown in their short CNET YouTube video. As seen in the still from the Oppo video, they plan to support vision correction built into the glasses.

Below right is the Digilens LCOS engine, which uses a fairly conventional LCOS design (using Omnivision's LCOS device, with the driver ASIC showing). The dotted line indicates where the engine blocks off the upper part of the waveguide. This blocked-off area carries over to the Argo design.

The Digilens Argo, with its more "conventional" LCOS engine, requires a large "brow" above the eye to hide it (more on this issue later). All the other companies have designed their engines to avoid this level of intrusion into the front area of the glasses.

Lumus developed their 1-D pupil-expanding reflective waveguides over nearly two decades, and those required a relatively wide optical engine. With the 2-D Maximus waveguide in 2021 (see: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures), Lumus demonstrated their ability to shrink the optical engine. This year, Lumus further reduced the size of the optical engine and its intrusion into the front lens area with their new Z-Lens design (compare the two right pictures below of the Maximus to the Z-Lens).

Shown below are frontal views of the four lenses and their optical engines. The Oppo Air Glass 2 "disguises" the engine within the industrial design of a wider frame (and wider waveguide). The Lumus Z-Lens, with full color and about 3.5 times the FOV area of the others, has about the same frontal intrusion as the green-only MicroLED engines. The Argo (below right) stands out with the large brow above the eye (the rough location of the optical engine is shown with the red dotted line).

Lumus Removes the Need for Air Gaps with the Z-Lens

Another significant improvement with Lumus’s Z-Lens is that unlike Lumus’s prior waveguides and all diffractive waveguides, it does not require an air gap between the waveguide’s surface and any encapsulating plastics. This could prove to be a big advantage in supporting integrated prescription vision correction or simple protection. Supporting air gaps with waveguides has numerous design, cost, and optical problems.

A full-color diffractive waveguide typically has two or three waveguide layers sandwiched together, with air gaps between them plus an air gap on each side of the sandwich. Everywhere there is an air gap, antireflective coatings are also desired to remove reflections and improve efficiency.

Avegant and Magic Leap Small LCOS Projector Engines

Older LCOS projection engines have historically had size problems. We are seeing new LCOS designs, such as the Lumus Z-lens (above), and designs from Avegant and Magic Leap that are much smaller and no more intrusive into the lens area than the MicroLED engines. My AR/VR/MR 2022 coverage included the article Magic Leap 2 at SPIE AR/VR/MR 2022, which discusses the small LCOS engines from both Magic Leap and Avegant. In our AWE 2022 video with SadlyItsBradley, I discuss the smaller LCOS engines by Avegant, Lumus (Maximus), and Magic Leap.

Below is what Avegant demonstrated at AR/VR/MR 2022 with their small “L” shaped optical engines. These engines have very little intrusion into the front lenses, but they run down the temple of the glasses, which inhibits folding the temple for storage like normal glasses.

At AR/VR/MR 2023, Avegant showed a newer optical design that reduced the footprint of their optics by 65%, including shortening them to the point that the temples can be folded, similar to conventional glasses (below left). It should be noted that what is called a "waveguide" in the Avegant diagram is very different from the waveguides used to show the image in AR glasses; Avegant's waveguide is used to illuminate the LCOS device. Avegant, in their presentation, also discussed various drive modes of the LEDs to give higher brightness and efficiency with green-only and black-and-white modes. The 13-minute video of Avegant's presentation is available at the SPIE site (behind SPIE's paywall). According to Avegant's presentation, the optics are 15.6mm long by 12.4mm wide, support a 30-degree FOV with 34 pixels/degree, and output 2 lumens in full color and up to 6 lumens in a limited-color outdoor mode. According to the presentation, they expect about 1,500 nits with typical diffractive waveguides in the full-color mode, which would roughly double in the outdoor mode.

The Magic Leap 2 (ML2) takes reducing the optics one step further and puts the illumination LEDs and LCOS on opposite sides of the display’s waveguide (below and described in Magic Leap 2 at SPIE AR/VR/MR 2022). The ML2 claims to have 2,000 nits with a much larger 70-degree FOV.

Transparency (vs. Birdbath) and “Eye Glow”

Transparency

As seen in the pictures above, all the waveguide-based glasses have transparency on the order of 80-90%. This is a far cry from the common birdbath optics, with typically only about 25% transparency (see Nreal Teardown: Part 1, Clones and Birdbath Basics). The former Osterhout Design Group (ODG) made birdbath AR glasses popular, first with their R6 and then with the R8 and R9 models (see my 2017 article ODG R-8 and R-9 Optic with OLED Microdisplays), which served as the models for designs such as Nreal and Lenovo's A3.

ODG Legacy and Progress

Several former ODG designers have ended up at Lenovo, the design firm Pulsar, Digilens, and elsewhere in the AR community. I found pictures of Digilens VP Nima Shams wearing the ODG R9 in 2017 and the Digilens Argo at CES. When I showed the pictures to Nima, he pointed out the progress that had been made. The 2023 Argo is lighter, sticks out less far, has more eye relief, is much more transparent, has a brighter image to the eye, and is much more power efficient. At the same time, it adds features and processing not found on the ODG R8 and R9.

Front Projection (“Eye Glow”)

Another social aspect of AR glasses is Front Projection, known as “Eye Glow.” Most famously, the Hololens 1 and 2 and the Magic Leap 1 and 2 project much of the light forward. The birdbath optics-based glasses also have front projection issues but are often hidden behind additional dark sunglasses.

When looking at the “eye glow” pictures below, I want to caution you that these are random pictures and not controlled tests. The glasses display radically different brightness settings, and the ambient light is very different. Also, front projection is typically highly directional, so the camera angle has a major effect (and there was no attempt to search for the worst-case angle).

In our AWE 2022 Video with SadlyItsBradley, I discussed how several companies, including Dispelix, are working to reduce front projection. Digilens is one of the companies that has been working to reduce it. The DigiLens Argo (pictures 2 and 3 from the right) has greatly reduced its eye glow. The Vuzix Shield (with the same optics as the Ultralite) has some front projection (and some onto my cheek), as seen in the picture below (4th from the left). Oppo appears to have a fairly pronounced front projection, as seen in two short videos (video 1 and video 2).

DigiLens Argo Deeper Look

DigiLens has been primarily a maker of diffractive waveguides, but it has made several near-product demonstrations through the years. A few years ago, they went through a major management change (see my 2021 article, DigiLens Visit), and with the new management came changes in direction.

Argo’s Business Model

I’m always curious when a “component company” develops an end product. I asked DigiLens to help clarify their business approaches and received the following information (with my edits):

  1. Optical Solutions Licensing – where we provide solutions to our licensees to build their own waveguides using our scalable printing/contactless copy process. Our licensees can design their own waveguides, which DigiLens' software tools enable. This business is aimed at higher-volume applications from larger companies, mostly focused on, but not limited to, the consumer side of the head-worn market.
  2. Enterprise/Industrial Products – ARGO is the first product from DigiLens that targets the enterprise and industrial market as a full solution. It will be built to scale and meet its target market's compliance and reliability needs. It uses DigiLens optical technology in the waveguides and projector and is built by a team with experience shipping thousands of enterprise and industrial glasses from Daqri, ODG, and RealWear.

Image Quality

As I was already familiar with DigiLens' image quality, I didn't check it out that much with the ARGO; rather, I was interested in the overall product concept. Over the last several years, I have seen improved image quality, including better uniformity and progress on the "eye glow" issue (discussed earlier).

For the type of applications the "enterprise market" ARGO is trying to serve, absolute image quality may not be nearly as important as other factors. As I have often said, "Hololens 2 proves that image quality is not that important to the customers that use it" (see this set of articles discussing the Hololens 2's poor image quality). For many AR markets, the displayed information consists of simple indicators such as arrows, a few numbers, and lines. In terms of color, it may be good enough if only a few key colors are easily distinguishable.

Overall, Digilens has similar issues with color uniformity across the field of view as all the other diffractive waveguides I have seen. In the last few years, they have gone from having poor color uniformity to being among the better diffractive waveguides I have seen. I don't think any diffractive waveguide would be widely considered good enough for movies and good photographs, but they are good enough to show lines, arrows, and text. But let me add a key caveat: what companies demonstrate are invariably cherry-picked samples.

Field of View (FOV)

While the Argo's 30-degree FOV is considered too small for immersive games, it should be more than sufficient for many "enterprise applications." I discussed why very large FOVs are often unnecessary in AR in this blog's 2019 article FOV Obsession. Many have conflated VR immersion with AR applications that need to support key information with high transparency, light weight, and hands-free operation. As Professor and decades-long AR advocate Thad Starner pointed out, requiring the eye to move too much causes discomfort. I make this point because a very large FOV comes at the expense of weight, power, and cost.

Key Feature Set

The diagram below is from DigiLens on the ARGO and outlines the key features. I won't review all the features, but I want to discuss some of their design choices. Also, I can't comment on the quality of their various features (SLAM, WiFi, GPS, etc.) as A) I haven't extensively tried them, and B) I don't have the equipment or expertise. But at least on the surface, in terms of feature set, the Argo compares favorably to the Hololens 1 and 2, albeit with a smaller FOV than the Hololens 2 but with much better image quality.

Audio Input for True Hands-Free Operation

As stated above, Digilens' management team includes experience from RealWear. RealWear acquired a lot of technology from Kopin's Golden-i. Like ARGO, Golden-i was a system-product outgrowth from a display component maker (Kopin), with a legacy going back before 2011, when I first saw Golden-i. Even though Kopin was a display device company, Golden-i emphasized voice recognition with high accuracy, even in noisy environments. Note the inclusion of five microphones on the ARGO.

Most realistic enterprise-use models for AR headsets include significant, if not exclusive, hands-free operation. The basic idea of mounting a display on the user's head is so they can keep their hands free. You can't be working with your hands while holding a controller.

While hand-tracking cameras remove the need for a physical controller, they do not free up the hands, as the hands are busy making gestures rather than performing the task. In the implementations I have tried thus far, gestures are even worse than physical controllers in terms of distraction, as they force the user to focus on making the gestures (sometimes barely) work. One of the most awful experiences I have had in AR was trying to type in a long WiFi password (hidden as I typed by asterisk marks) using gestures on a Hololens 1 (my hands hurt just thinking about it – it was a beyond-terrible user experience).

Similarly, as I discussed with SadlyItsBradley about Meta’s BCI wristband, using nerve and/or muscle-detecting wristbands still does not free up the hands. The user still has their hands and mental focus slaved to making the wristband work.

Voice control seems to have big advantages for hands-free operation if it can work accurately in a noisy environment. There is a delicate balance between not recognizing words and phrases, false recognition or activation, and becoming too burdensome with the need for verification.

Skull-Gripping “Glasses” vs. Headband or Open Helmet

In what I see as a futile attempt to sort of look like glasses (big ugly ones at that), many companies have resorted to skull-gripping features. Looking at the skull profile (right), there really isn’t much that will stop the forward rotation of front-heavy AR glasses unless they wrap around the lower part of the occipital bone at the back of the head.

Both the ARGO (below left) and Panasonic's (Shiftall division) VR headsets (right two images below) take the concept of skull-grabbing glasses to almost comic proportions. Panasonic includes a loop for a headband, and some models also include a forehead pad. The Panasonic Shiftall uses pads pressed against the front of the head to support the front, while the ARGO uses an oversized nose bridge, as found on many other AR "glasses."

ARGO supports a headband option, but it requires the ends of the temples, with their skull-grabbers, to be removed and replaced by the headband.

As anyone who knows anything about human factors with glasses knows, the ears and the nose cannot support much weight, and the ears and nose will get sore if much weight is supported for a long time.

Large soft nose pads are not an answer. There is still too much weight on the nose, and the variety of nose shapes makes them not work well for everyone. In the case of the Argo, the large nose pads also interfere with wearing glasses; the nose pads are located almost precisely where the nose pads for glasses would go.

Bustle/Bun on the Back Weight Distribution – Liberating the Design

As was pointed out by Microsoft with the Hololens 2 (HL2), weight distribution is also very important. I don't know if they were the first with what I call "the bustle on the back" approach, but it was a massive improvement, as I discussed in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! Several others have used a similar approach, most notably the Meta Quest Pro VR (which has very poor passthrough AR, as I discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough). Another feature of the HL2 ergonomics is that the forehead pad removes weight from the nose and frees up that area to support ordinary prescription glasses.

The problem with the sort-of-glasses form factor so common in most AR headsets today is that it locks the design into other poor decisions, not the least of which is putting too much weight too far forward. Once it is realized that these are not really glasses, it frees up other design features for improvement. Weight can be taken out of the front and moved to the back for better weight distribution.

ARGO’s Eye-Relief Missed Opportunity for Supporting Normal Glasses

Perhaps the best ergonomic/user feature of the Hololens 1 & 2 over most other AR headsets is that they have enough eye relief (distance from the waveguide to the eye) and space to support most normal eyeglasses. The ARGO’s waveguide and optical design have enough eye relief to support wearing most normal glasses, but still, they require specialized inserts.

You might notice some “eye glow” in the CNET picture (above right). I think this is not from the waveguide itself but is a reflection off of the prescription inserts (likely, they don’t have good anti-reflective coatings).

A big part of the problem with supporting eyeglasses goes back to trying to maintain the fiction of a "glasses form factor." The nose bridge support will get in the way of the glasses, but it is required to support the headset. Additionally, hardware in the "brow" over the eyes, which may also interfere with glasses, could have been moved elsewhere.

Another technical issue is the location and shape of their optical engine. As discussed earlier, the Digilens engine shape causes issues with jutting into the front of glasses, resulting in a large brow over the eyes. This brow, in turn, may interfere with various eyeglasses.

It looks like the Argo started with the premise of looking like glasses, putting form ahead of function. As it turns out, they have what for me is an unhappy compromise that neither looks like glasses nor has the Hololens 2's advantage of working with most normal glasses. Starting with comfort and functionality as primary would also have led to a different form factor for the optical engine.

Conclusions

While MicroLEDs may hold many long-term advantages, they are not ready to go head-to-head with LCOS engines regarding image quality and color. Multiple companies are showing LCOS engines that are more than competitive in size and shape with the small MicroLED engines. The LCOS engines also support much higher resolutions and larger FOVs.

Lumus, with their Z-Lens 2-D reflective waveguides, seems to have a big advantage in image quality and efficiency over the many diffractive waveguides. Allowing the Z-lens to be encased without an air gap adds another significant advantage.

Yet today, most waveguide-based AR glasses use diffractive waveguides. The reasons include that there are many sources of diffractive waveguides and that companies can make their own custom designs, whereas Lumus controls its reflective waveguide I.P. Additionally, Lumus has only recently developed 2-D reflective waveguides, which dramatically reduce the size of the projection engine driving their waveguides. But perhaps the biggest reason for using diffractive waveguides is that Lumus waveguides are thought to be more expensive; Lumus and their new manufacturing partner Schott claim that they will be able to make waveguides at competitive or better costs.

A combination of cost, color, and image quality will likely limit MicroLEDs to use in ultra-small and light glasses with low amounts of visual content, known as "data snacking" (think arrows and simple text, not web browsing and movies). This market could be attractive in enterprise applications. I'm doubtful that consumers will be very accepting of monochrome displays. I'm reminded of a quote from an IBM executive in the 1980s who, when asked whether resolution or color was more important, said: "Color is the least necessary and most desired feature in a display."

Not to pick on Argo, but it demonstrates many of the issues with making a full-featured device in a glasses form factor, as SLAM (with multiple spatially separated cameras), processing, communication, batteries, etc., the overall design strays away from looking like glasses. As I wrote in my 2019 article, Starts with Ray-Ban®, Ends Up Like Hololens.


MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)

13 March 2023 at 01:54

Introduction

My coverage of CES and SPIE AR/VR/MR 2023 continues, this time on MicroLEDs. MicroLEDs companies were abundant in the booths, talks, and private conversations at AR/VR/MR 2023.

The list on the right shows some of the MicroLED companies I have looked at in recent years. Marked with a blue asterisk “*” are companies I talked to at AR/VR/MR 2023, with Jade Bird Display (JBD), PlayNitride, Porotech, and MICLEDI having booths in the exhibition. The green bracket on the left indicates companies where I had seen a MicroLED display generating an image (not just one or a few LEDs). Inside the gold rectangle in the list above are MicroLED companies that system companies have bought. MicroLEDs are the display technology where tech giants Meta, Apple, and Google place their bets for the future.

A much more extensive list of companies involved in MicroLED development can be found at microled-info.com, a site dedicated to tracking the MicroLED industry. Microled-info’s parent company, Metalgrass, also organized the MicroLED Association, and I spoke at their Feb. 7th Webinar (but you have to join the association to see it).

The efficiency of getting the (roughly) Lambertian light that most LEDs emit through a waveguide and to the eye is a major issue I have studied for years, and it is covered first. Then, after covering recent MicroLED prototypes and discussions, I have included an appendix with background information in the subsections "What is a MicroLED company," "Microdisplay vs. Direct View Pixel Sizes," and "Multicolor, Full Color, or True Color."

MicroLEDs and Waveguides; Millions of Nits-In to Thousands of Nits-Out with Waveguides

When first hearing of MicroLEDs outputting millions of nits, you might think it must be overkill to deliver thousands of nits to the eye for outdoor use with a waveguide. But due to pupil expansion and light losses, only a tiny fraction of the light-in makes it to the eye. The figure (right) diagrams the efficiency issues with waveguides using a diffractive waveguide.

Most LEDs output diffuse, roughly Lambertian light, whereas waveguides require collimated light. Typically, micro-optics such as microlens arrays (MLAs) on top of the MicroLEDs semi-collimate the light. These optics increase the nits; typically, the nits quoted for the MicroLED display are measured after the micro-optics. A waveguide's small entrance area severely limits the light due to a physics property known as "etendue," causing what is called "etendue loss." Then there are the losses due to the pupil expansion/replication structures (diffraction gratings in the case of diffractive waveguides, semi-reflective "facets" in the case of reflective waveguides). Finally, the light from the small entrance area ends up spread out over the much larger exit area to support seeing the image over the whole FOV as the eye moves.
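A back-of-the-envelope sketch shows how quickly those stages multiply out. Every loss factor below is an illustrative assumption for a generic diffractive waveguide, not a measurement of any product.

```python
display_nits = 1_000_000  # MicroLED brightness as quoted (after the microlens array)

loss_factors = {
    "projection/collimation optics": 0.5,
    "etendue (small entrance grating)": 0.05,
    "in-coupling, expansion, and out-coupling gratings": 0.25,
    "light spread over the much larger exit/eyebox area": 0.25,
}

nits_to_eye = display_nits
for stage, factor in loss_factors.items():
    nits_to_eye *= factor

print(f"~{nits_to_eye:,.0f} nits to the eye")                   # ~1,563 nits with these assumptions
print(f"overall efficiency ~{nits_to_eye / display_nits:.2%}")  # ~0.16%
```

With these assumed numbers, a million nits at the display becomes only on the order of a thousand nits at the eye, which is why the headline MicroLED brightness figures are not overkill for outdoor use.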

Multiple Headsets Using Diffractive Waveguides with JBD MicroLED

I found it an interesting dichotomy that while all the other prototypes I have seen using Jade Bird Display (JBD) MicroLEDs — including those from Vuzix, Oppo, TCL, Dispelix, and Waveoptics (before being acquired by Snap) — use diffractive waveguides, JBD themselves showed a prototype 3-chip color cube projector with a Lochn "clone" (with lesser image quality) of a Lumus 2D-expanding reflective waveguide in their booth (I was asked not to photograph it). Then, in the PlayNitride booth, Lumus reflective waveguides were featured. I should note that while efficiency is a major factor, other design factors, including cost, will drive different decisions.

Reflective (Lumus) Waveguides are More Efficient than Diffractive Waveguides with MicroLEDs

According to Lumus, their 2-D reflective (Lumus) waveguides result in a 3 to 9 times larger entrance area, and their semi-reflective facets lose less light than diffraction gratings. The net result is that reflective waveguides can be 5 to >10 times more optically efficient than diffractive waveguides with the same microLEDs, a major advantage in brightness and power (= less heat and longer battery life). This efficiency advantage appears to have been playing out at AR/VR/MR 2023.
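The quoted ranges can be combined into a rough sketch. The entrance-area gain is Lumus's claim; the extra factor for facet versus grating losses is my own assumption for illustration.

```python
entrance_area_gain = (3.0, 9.0)     # reflective vs. diffractive entrance area (Lumus's claim)
facet_vs_grating_gain = (1.5, 2.0)  # assumed additional gain from lower facet losses

low = entrance_area_gain[0] * facet_vs_grating_gain[0]
high = entrance_area_gain[1] * facet_vs_grating_gain[1]
print(f"reflective advantage roughly {low:.1f}x to {high:.0f}x")  # ~4.5x to 18x, in line with "5 to >10x"
```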

Playnitride prominently showed their MicroLEDs using Lumus 2D and older 1D reflective waveguides in their booth (below left and middle). Their full-color QD-MicroLEDs only output about 150K nits (compared to the millions of others’ single-color native LEDs), so they needed a more efficient waveguide. Playnitride uses Quantum Dot conversion of blue LEDs to give red and green.

Lumus CTO Dr. Yochay Danziger brought a 2D expanding waveguide with input optics that he held up to Porotech’s MicroLEDs. I captured a quick handheld (and thus not very good) shot (with ND filters to reduce the very bright image) of Porotech’s green MicroLED via Lumus’s handheld waveguide (above right).

Lumus was the only company featured in the Schott Glasses booth at AR/VR/MR 2023. The often-asked question about Lumus is whether they can make them in volume production. The Schott Glass representative assured me they could make Lumus’s 2-D waveguides in volume production.

I plan on covering Lumus's new Z-Lens 2D waveguide, which is smaller than their two-year-old Maximus 2D waveguide, in an upcoming article. In the meantime, I discussed the Z-Lens in the CES 2023 Video with SadlyItsBradley.

Other Optics (ex., Bird Bath, Freeform, and VR-Pancake) and Micro-OLEDs

I want to note here that while MicroLEDs are hundreds to over a thousand times brighter than Micro-OLEDs, they are likely well more than five years away from having anywhere near the same color control and uniformity. Thus, designs that favor image quality over brightness and use optical designs that are much more efficient than waveguides, such as birdbath, freeform, and VR-pancake optics, will continue to use Micro-OLEDs or LCDs for the foreseeable future. Micro-OLEDs are expected to continue getting brighter, with some claiming roadmaps to about 30K nits.

Jade Bird Display (JBD) Based AR Glasses

Jade Bird Display (JBD) is the only company I know to be shipping MicroLEDs in production. All working headsets I have seen use JBD’s 640×480 green (only) MicroLEDs, including ones from Vuzix (Ultralite and Shield), Oppo, and Waveoptics (shown in 2022 before being acquired by Snap). JBD is developing devices supporting higher pixel depth and higher resolution.

Also, as background to MicroLEDs in general, as well as JBD and the glasses using their MicroLEDs, there is my 2022 blog article AWE 2022 (Part 6) – MicroLED Microdisplays for Augmented Reality and the associated video with SadlyItsBradley. Additionally, there is my 2021 article on JBD and WaveOptics in News: WaveOptics & Jade Bird Display MicroLED Partnership.

The current green MicroLEDs support only 4 bits per pixel, or 16 (2^4) brightness levels, and will show contour lines in smoothly shaded areas. I hear that JBD's future designs will support more levels. While I have seen continuous improvement in the pixel-to-pixel brightness differences through the years, and while these are the most uniform MicroLED devices I have seen, there is still visible "grain" in what should be a solid area.

Vuzix

At CES 2023, Vuzix showed off the small size possible with their Ultralite glasses (left side below), which weigh only 38 grams (not much more than most conventional glasses). A tray full of display engines on public display was there to emphasize that they were in production. The comparison of light engines (below left) shows how compact the MicroLED green and color cube projector engines are compared with Vuzix's older (but true color) DLP design with similar resolution. I discussed Vuzix's Ultralite and Shield in the CES 2023 video with SadlyItsBradley.

The Vuzix Shield and Ultralite share the same small green MicroLED engine. The combination of the engine and Vuzix waveguide is capable of up to 4,100 nits, which is bright enough to enable outdoor use. The power consumption of MicroLEDs is roughly proportional to the average pixel value (APV). Paul Travers, CEO of Vuzix, says that the Ultralites consume very little power and can work for two days of typical use on a charge. Vuzix has also improved their in-house developed waveguides, significantly reducing the forward projection ("eye glow").

Vuzix has been very involved with several MicroLED companies, as discussed with SadlyItsBradley in our AWE 2022 Video.

Oppo

At AR/VR/MR 2023, Oppo showed me their JBD green MicroLED-based glasses with a form factor similar to the Vuzix Ultralite. The overall image quality and resolution seem similar on casual inspection. The Vuzix waveguides' diffraction gratings seem less noticeable from the outside, but I have not compared them side by side in the same conditions.

TCL and JBD X-Cube Color

At CES 2023, TCL demonstrated a multicolor 3-chip (R, G, and B) prototype combined with an X-Cube (using a Lochn reflective waveguide). Vuzix, in a 2020 concept video, and Meta (Facebook), in a 2019 patent application, have shown using three waveguides to combine the three primary colors (below right). I discussed the TCL glasses with the JBD color X-Cube design and some of the issues with X-Cubes in the CES video with SadlyItsBradley.

The TCL glasses appear to be using a diffraction grating waveguide that is very different from others I have seen due to the way the exit grating has very big steps in the transmission of light (right). This waveguide differs from the reflective waveguide JBD was showing in their booth and from other diffractive waveguides. I have seen diffractive waveguides that were non-uniform, but never with such large steps in the output gratings. While I didn't get a chance to see an image through the TCL glasses, the reports I got from others were that the image quality was not very good.

Goertek/Goeroptics Design and Manufacturing JBD Projection Engines

In the CES 2023 TCL video, I discussed some of the issues associated with X-Cube color combining and the problems with aligning the three panels. At the AR/VR/MR conference, the Goeroptics division of Goertek showed that they were making both green-only and color X-Cube designs for JBD's MicroLEDs (slide from their presentation below). While Goertek may not be a household name, they are a very large optics and non-optics design and OEM company for many famous brands, including giants such as Apple, Microsoft, Sony, Samsung, and Lenovo.

Porotech, Ostendo, and Innovation Semiconductor color tunable LEDs

I met Porotech in their private suite at CES and at their booth at AR/VR/MR 2023. They have already received much attention on this blog in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, AWE 2022 (Part 6) – MicroLED Microdisplays for Augmented Reality, and my CES 2023 video with SadlyItsBradley on Porotech. They have been making a lot of news in the last year with their development of single-color InGaN red, green, and blue MicroLEDs and particularly their single-emitter color-tunable LED (what Porotech calls DynamicPixelTuning® or DPT®).

Below is a very short video I captured in the Porotech booth with a macro lens of their DynamicPixelTuning demo. I apologize for the camera changing focus when I switched from still to video mode and for the blooming due to the wide range of brightness as the color changes. The demo shows the whole display changing color, as Porotech does not have a backplane that can change colors pixel by pixel.

Porotech showed a combination of motion and color changing with their DynamicPixelTuning

At CES 2023, I was reminded by Ostendo, best known for their color-stacked MicroLED technology, that they had developed tunable-color LEDs several years ago. Sure enough, Ostendo presented the paper III-nitride monolithic LED covering full RGB color gamut in the Journal of the SPIE in February 2016. I have not seen evidence that Ostendo has pursued it much beyond the single-LED prototype stage, as Porotech has done with their DynamicPixelTuning.

The recent startup Innovation Semiconductor (below) is developing technology to integrate the control transistor circuitry into the GaN substrate, avoiding the more common hybrid GaN-plus-CMOS approaches almost all others are using. They are also developing a "V-groove" technology for making color-tunable LEDs. Innovation Semi cites work by the University of California at Santa Barbara (see paper 1 and paper 2), plus their own work, suggesting that V-grooves may be a more efficient way to produce color-tunable LEDs than the approach taken by Porotech and Ostendo.

A major concern I have with Innovation Semi’s approach to integrating the control transistors in GaN is whether they will be able to integrate enough control circuitry without making the devices too expensive and/or making the pixel size bigger.

PlayNitride (Blue with QD Conversion Spatial Color)

PlayNitride demonstrated its full-color MicroLED technology, which uses blue LEDs with Quantum Dot (QD) conversion to produce red and green. At 150K nits, they are extremely bright compared to Micro-OLEDs but are much less bright than native red, green, and blue MicroLEDs from companies including JBD and Porotech.

As discussed earlier, PlayNitride showed their MicroLEDs working with Lumus waveguides. But even though Lumus waveguides are more efficient than diffractive waveguides, 150K nits from the display are not bright enough for practical uses. They are about 1/10th the brightness of the native MicroLEDs of JBD and Porotech, and their pixels are bigger.

PlayNitride was the only company showing fairly high-resolution (1K by 1K and 1080P) full-color single-chip MicroLED microdisplays. However, these are still only prototypes. The green and red were substantially weaker than the blue, as seen in the direct (no waveguide) macro photograph of PlayNitride's MicroLED below. Also, the red was more magenta (mixed red and blue).

Looking at the 2X zoom, one sees the “grain” associated with the pixel-to-pixel brightness differences in all colors common to all MicroLEDs demonstrated to date. Additionally, in the larger reddish wedge pointed at by the red arrow, there are color differences/grain at the pixel level.

Known issue with QD spatial color conversion and microdisplays

While quantum dot (QD) color conversion of blue and UV LEDs has been proposed as a method to make full-color MicroLEDs for many years, there are particular issues with using QD with very small microdisplay pixels. Normally, the QD layer required for conversion stays roughly the same thickness as the pixels become smaller, resulting in a very tall stack of QD material compared to the pixel size. Some form of microscopic baffling is then required to prevent the light from adjacent LEDs from illuminating the wrong color.

Some have tried using thinner layers of QD and then relied on color filters to “clean up” the colors, but this comes with significant losses in efficiency and issues with heat. There are also issues with how hard the QD material can be driven before it degrades, which will limit brightness. Using spatial color itself has the issue of pixel sizes becoming too big for use in AR.

Many of these issues will be very different for making larger direct-view and VR pixels. The thickness of the QD layers becomes a non-issue as the pixels get bigger, and spatial color has long been used with larger pixels. We have already seen different OLED technologies used based on pixel size and application; for example, color-filtered OLEDs won out in large-screen TVs, whereas native color OLED subpixels are used in smartphones, smartwatches, and microdisplay OLEDs.
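To put rough numbers on the geometry problem, below is a minimal Python sketch. The QD layer thickness and subpixel widths are assumptions for illustration only, not measured values from any vendor; the point is the aspect ratio of the QD "well," not the exact figures.

```python
# Rough aspect-ratio check for QD color conversion. All numbers are assumptions
# for illustration, not measured values from any vendor.
qd_layer_um = 5.0  # assumed QD layer thickness needed for adequate blue-to-red/green conversion

for name, subpixel_um in [("AR microdisplay subpixel", 1.5),
                          ("direct-view smartphone subpixel", 20.0)]:
    aspect = qd_layer_um / subpixel_um
    print(f"{name}: {subpixel_um} um wide under a {qd_layer_um} um QD stack "
          f"-> aspect ratio {aspect:.1f}:1")
# A tall, narrow QD "well" (aspect >> 1) needs microscopic baffling to keep light from
# adjacent LEDs out of the wrong color; for large subpixels, the stack is comparatively thin.
```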

MICLEDI Reconstituted InGaN Wafers

MICLEDI is a 2019 spinout of the IMEC research institute in Belgium and had a booth at AR/VR/MR 2023. They are fabless with a mix of MicroLED technologies they have developed (right). They claim to have single color per die, spatial color (colors side by side), and stacked color technology. They have also developed GaN and Aluminum Indium Gallium Phosphide (AlInGaP) red. After some brief discussions in their booth and going through their handout material, their MicroLEDs seem like a bit of a grab bag of technology for license without a clear direction.

The one technology that seems to set MICLEDI apart is taking 100mm, 150mm, or 200mm GaN or AlInGaP epi wafers and making a "reconstituted" wafer with pick-and-placed known-good dies. These reconstituted wafers can be "flip chipped" with today's 300mm CMOS wafers. Today, almost all LED manufacturing is on much smaller wafers than mainstream production CMOS. For development today, companies are flipping small GaN wafers with spaced-out sets of LED arrays onto a larger CMOS wafer and throwing away most of the CMOS wafer.

Stacked MicroLEDs

While I didn’t see MIT at CES or AR/VR/MR 2023, MIT made news during AR/VR/MR with stacked color MicroLEDs. I don’t know the details, but it sounds similar to what Ostendo discussed, at least as far back as 2016 (see lower left). MICLEDI (above) has also developed a stated color LED technology where the LEDs are side by side.

The obvious advantage of stacked color is that the full color is smaller. But the disadvantage is that the LEDs and other circuitry above block light from lower LEDs. The net result is that stacked LEDs will likely be much brighter than Micro-OLEDs but much less bright than other MicroLED technologies. Also concerning is that while red is the color with the least efficiency today, it seems to end up on the lowest layer.

With their mid-range brightness, stacked MicroLEDs would likely be targeted at non-waveguide optics designs. Ostendo has been developing its optical design, which tiles multiple small MicroLEDs to give a wider FOV.

Conclusions

Many giant and small companies are betting that MicroLEDs will be the future of microdisplay technology for VR and AR. At the same time, one should realize that none of the technologies is competitive today regarding image quality with Micro-OLED, LCOS, or DLP. There are many manufacturing and technical hurdles yet to be solved. Each of the methods for producing full-color MicroLEDs has advantages and disadvantages. The race in AR is to support full-color displays and higher resolution at high brightness, low power, and small size. I can't see how multiple monochrome displays combined using X-Cubes, waveguides, or other methods are a long-term AR solution.

I often warn people that if someone does a demo first, that does not mean they will be in production first. Some technical approaches will yield a hand-crafted one-off demo faster but are not manufacturable. The warning is doubly true when it comes to color MicroLEDs. It is easier to rule out certain approaches than to say which approach or approaches will succeed. For MicroDisplay MicroLEDs used in AR, I think native LEDs will win out over color-converted (ex., QD) blue LEDs. A different MicroLED technology will likely be better for direct-view displays.

It will be interesting to see the market adoption of the new small-form-factor but green-only AR glasses. While they meet the form factor requirement of looking like glasses with acceptable weight, they don't have great vision correction solutions, and being green-only will limit consumer interest.

A continuing issue will be which optics work best with MicroLEDs. Part of this issue will be affected by the degree of collimation of the light from the LEDs. The 2-D reflective waveguides developed by Lumus have a significant efficiency advantage, but still, many more companies are using diffractive waveguides today.

Appendix: MicroLED Background Information

What is a MicroLED Company?

Making successful MicroLEDs is about more than making the LEDs; it is about making a complete display and being able to control it accurately at an affordable cost.

What constitutes a "MicroLED company" varies widely, from a completely fabless design company to one that might design and fab the LEDs, design the (typically) CMOS control backplane, and then do the assembly and electrical connection of the (typically) Indium Gallium Nitride (InGaN) LEDs onto the CMOS backplane. Almost every company has a different "flow" or order in which they assemble/combine the various component technologies. For example, shown below is the flow given by JBD, where they appear to apply the epi-layer to grow the LEDs on top of the CMOS wafer; other companies would form the LEDs first on the InGaN wafer and then bond the finished LED arrays onto the finished CMOS control devices.

There is no common approach, and there are as many different methods as there are companies, with some flows radically different from JBD's. Greatly complicating matters is that most InGaN fabrication is done on 150mm to 200mm diameter wafers, while mainstream CMOS today is made on 300mm wafers, which leads to a variety of methods to address this mismatch, some of which are better suited to volume manufacturing than others.

Microdisplay vs. Direct View Pixel Sizes

What companies call MicroLED displays varies from wall-size monitors and TVs that can be more than a meter wide down to microdisplays typically less than 25mm in diagonal. As the table on the right shows, a small pixel on an AR microdisplay is about 300 to 600 times smaller than a direct-view smartphone or smartwatch. Pixel sizes get closer when comparing waveguide-based AR to VR pixels.
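As a rough sanity check on that ratio (which appears to be an area ratio), below is a back-of-envelope calculation; the pixel pitches are assumed, typical-order values rather than figures for any specific device.

```python
# Back-of-envelope pixel comparison; pitches are assumed, typical-order values.
ar_pitch_um = 3.0      # assumed AR microdisplay pixel pitch
phone_pitch_um = 56.0  # assumed ~450 ppi smartphone pixel pitch (25400 / 450)

linear_ratio = phone_pitch_um / ar_pitch_um
area_ratio = linear_ratio ** 2
print(f"linear ratio ~{linear_ratio:.0f}x, area ratio ~{area_ratio:.0f}x")  # ~19x linear, ~350x area
```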

VR headsets started with essentially direct-view cell phone-type displays with some cheap optics to enable the human eye to focus but have been driving the pixel size down to improve angular resolution. The latest trend is to use pancake optics which can use even smaller pixels to enable smaller headsets.

There is some “bridging” between AR and VR with display types. For example, large combiner “bug-eye” AR often uses direct-view type displays common in VR. Some pancake optics-based VR displays use the same Micro-OLED displays used with AR birdbath optics.

With the radically different pixel sizes, it should not be surprising that the best technology to support that pixel size could change. Small microdisplays used by waveguide-based AR require microdisplays with semiconductor (usually CMOS) transistors. TVs, smartphones, and smartwatches use various types of thin film transistors.

Particularly regarding supporting color with MicroLEDs, it should be expected that the technologies used for microdisplays could be very different from those used for direct-view displays. For example, while quantum dot color conversion of blue or UV light might be a good method for supporting larger displays, it does not seem to scale well to the small pixel sizes used in AR.

Multicolor, Full Color, or True Color

While not “industry standard definitions,” for the sake of discussion, I want to define three categories of color display:

  1. Multicolor – Provides multiple identifiable colors, including, at a minimum, the primary colors of red, green, and blue. This type of display is useful for providing basic information and color coding it. Photographic images will look cartoonish at best, and there are typically very visible “contour lines” in what should be smoothly shaded surfaces.
  2. Full Color – This case supports a wide range of colors, and smooth surfaces will not have significant contours, but the color control across the display is not good enough for showing pictures of people.
  3. True Color – The display is capable of reasonably accurate color control across the display. Importantly, faces and skin tones, to which human vision is particularly sensitive, look good. If a display is "true color," it should also be able to control the "white point" so that whites look white and grays look gray. There should be no visible contouring.

The images below are examples of “multicolor,” “full color,” and “true color” images.

JBD “Multicolor” Display
Playnitride “Full Color”
KGOnTech Test Pat. “True Color”

It might seem to some that my definition of “full” versus “true” color is redundant, but I have seen many demonstrations through the years where the display can display color but can’t control it well. In 2012, I wrote Cynics Guide to CES – Glossary of Terms. I called this issue “Pixar-ized” because there were so many demos of cartoon characters showing color saturation but none showing humans, which requires accurate color control.

Pixar-ized – The showing of only cartoons because the device can't control color well and/or has low resolution. People have very poor absolute color perception but tend to be very sensitive to skin tones and know what looks right when viewing humans; the human visual system is very poor at judging whether the color is right in a cartoon. Additionally, it is very hard to tell resolution when viewing a cartoon.

I will add to this category "artistic" false/shifted color images (see PlayNitride's above). Sometimes this is done because the work to calibrate the prototype has not been completed, even though the display can eventually support full color. Still, it is often done to hide problems.

I should note that what can be acceptable to the eye in a single-color image can look very bad when combined with other colors. What are weak or dead pixels on a monochrome display will turn into colorized or color-shifted pixels that stick out. Anyone with a single dead color within a pixel on a display has seen how the missing color sticks out. The images below are a simplified Photoshop simulation of what happens if random noise and dim areas occur in the various colors. The left image shows the effect on the full-color image, and the right image shows the same amount of random noise and dimming (in green) with monochrome green (note: the image on the right is the grayscale image converted to green, not just the green channel from the true-color image). In the green-only image, you can see some noise and a slight dimming that might not even be noticeable, whereas, in the color image, it turns into a magenta-colored area.
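For readers who want to reproduce the effect, below is a minimal numpy sketch, not the exact Photoshop processing used for the images above. The amount of gain "grain" and dimming are assumptions for illustration; the point is that the same per-pixel error reads as mild dimming in a green-only image but as a magenta shift in a full-color image.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 256, 256

grain = rng.normal(1.0, 0.08, (h, w))               # assumed +/-8% pixel-to-pixel gain "grain"
dim = np.ones((h, w))
dim[100:160, 80:200] = 0.85                          # a patch of weak pixels

flat = np.full((h, w), 0.7)                          # a smooth mid-gray test level
green_only = np.clip(flat * grain * dim, 0.0, 1.0)   # monochrome green: reads as mild grain/dimming

# Same errors applied only to the green channel of an RGB image: where green dips,
# red + blue dominate and the "gray" area shifts toward magenta.
rgb = np.stack([flat, green_only, flat], axis=-1)
print("green-only spread:", float(green_only.std()))
print("max green deficit vs. red/blue:", float((rgb[..., 0] - rgb[..., 1]).max()))
```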

In that same 2012 article, I wrote about "Stilliphobia," the fear of showing still images. We are seeing that today with demos showing content that is very busy and/or has lots of motion to hide dead or weak pixels or random pixel values in the display. When I see a needlessly busy image or lots of motion, I immediately think the company is trying to hide problems. Someone with a great-looking display should show pictures of people and smooth images for at least some content.

Most of today’s MicroLED displays are working on getting to multicolor displays and are far from true color. All MicroLED microdisplays I have seen to date have large pixel-to-pixel variations. No amount of calibration or mura correction will be enough to produce a good photographic image if the individual colors can’t be controlled accurately. The good news is that most of today’s AR applications only require a multicolor display.
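To illustrate what per-pixel calibration (mura correction) can and cannot fix, here is a minimal sketch with made-up numbers. Correction gains can flatten moderate pixel-to-pixel variation, but pixels that are dead or cannot reach the commanded level stay wrong no matter how the rest is calibrated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
true_gain = rng.normal(1.0, 0.15, (n, n))   # assumed pixel-to-pixel brightness variation
true_gain[10, 20] = 0.0                      # one dead pixel

# "Calibrate": measure a flat field and derive per-pixel correction gains.
measured = true_gain.copy()
correction = np.divide(1.0, measured, out=np.zeros_like(measured), where=measured > 0.05)

target = 0.8                                        # commanded gray level
drive = np.clip(target * correction, 0.0, 1.0)      # drive cannot exceed full scale
displayed = drive * true_gain

print("uncorrected std:", float((target * true_gain).std()))
print("corrected std:  ", float(displayed.std()))   # better, but dead/weak pixels remain wrong
```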

Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6)

4 March 2023 at 15:55

[March 4th, 2023 Corrections/Updates – poLight informed me of some corrections, better figures, and new information that I have added to the section on poLight. Cambridge Mechatronics informed me about their voltage and current requirements for pixel-shifting (aka wobulation).]

Introduction

For this next entry in my series on companies I met with at CES or Photonics West’s (PW) AR/VR/MR show in 2023, I will be covering two different approaches to what I call “optics micromovement.” Cambridge Mechatronics (CML) uses Shape Memory Alloys (SMA) wires to move optics and devices (including haptics). poLight uses piezoelectric actuators to bend thin glass over their flexible optical polymer. I met with both companies at CES 2023, and they both provided me with some of their presentation material for use in this article.

I would also like to point out that one alternative to moving lenses for focusing is electrically controlled LC lenses. In prior articles, I discussed implementations of LC lenses by Flexenable (CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs); Meta (Facebook) with some on DeepOptics (Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC? and Meta’s Cambria (Part 2): Is It Half Dome 3?); and Magic Leap with some on DeepOptics (Magic Leap 2 (Pt. 2): Possible Answers from Patent Applications); and DeepOptics (CES 2018 (Part 1 – AR Overview).

After discussing the technologies from CML and poLight, I will get into some of the new uses within AR and VR.

Beyond Camera Focusing and Optical Image Stabilization Uses of Optics Micromovement in AR and VR

Both poLight and CML have cell phone customers using their technology for camera autofocus and optical image stabilization (OIS). This type of technology will also be used in the various cameras found on AR and VR headsets. poLight's TLens is known to be used in the Magic Leap 2 (reported by Yole Développement) and Sharp's CES 2023 VR prototype (reported by SadlyItsBradley).

While the potential use of their technology in AR and VR camera optics is obvious, both companies are looking at other ways their technologies could support Augmented and Virtual Reality.

Cambridge Mechatronics (CML) – How it works

Cambridge Mechatronics is an engineering firm that makes custom designs for miniature machines using shape memory alloy (SMA). Their business is in engineering the machines for their customers. These machines can move optics or other objects. The SMA wires contract when heated by electricity moving through them (below left) and then act on spring structures to cause movement as the wires contract or relax. Using multiple wires in various structures can produce more complex movement. Another characteristic of the SMA wire is that as it heats and contracts, the wire becomes thicker and shorter, which reduces its resistance. CML uses this change in resistance as feedback for closed-loop control (below right).
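As a rough illustration of how resistance can serve as the feedback signal, below is a toy closed-loop sketch. This is not CML's actual controller; the plant model, resistance values, and gains are all made-up numbers chosen only to show the control idea.

```python
# Toy closed-loop sketch (not CML's controller): SMA wire resistance as position feedback.
def simulate_sma(target_ohms, steps=300):
    r_cold, r_hot, i_max = 10.0, 8.5, 60.0   # assumed relaxed/contracted resistance, max drive (mA)
    r, i_ma, kp = r_cold, 0.0, 30.0

    for _ in range(steps):
        # Crude plant: more current -> hotter wire -> more contraction -> lower resistance (with lag).
        r_steady = r_cold - (r_cold - r_hot) * (i_ma / i_max)
        r += 0.1 * (r_steady - r)

        # Control law: resistance above target means the wire is too cool/long -> raise current.
        error = r - target_ohms
        i_ma = min(max(i_ma + kp * error, 0.0), i_max)
    return round(r, 3), round(i_ma, 1)

print(simulate_sma(target_ohms=9.2))   # settles near the target resistance
```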

Shown (below right) is a 4-wire actuator that can move horizontally, vertically, or rotate (arrows pointing at the relaxed wires). The SMA wires enable a very thin structure. Below is a still from a CML video showing this type of actuator's motion.

Below is an 8-wire (2 crossed wires on four sides) mechanism for moving a lens in X, Y, and Z and Pitch and Yaw to control focusing and optical image stabilization (OIS). Below are five still frames from a CML video on how the 8-wire mechanism works.

CML is developing some new SMA technology called "Zero Hold Power." With this technology, they only need to apply power when moving the optics. They suggest this technology would be useful in AR headsets to adjust for temperature variations in optics and to help address vergence-accommodation conflict.

CML’s SMA wire method makes miniature motors and machines that may or may not include optics. With various configurations of wires, springs, levers, ratcheting mechanisms, etc., all kinds of different motions are possible. The issue becomes the mass of the “payload” and how fast the SMA wires can respond.

CML expects that continuous pixel shifting will take less than 3.2V at ~20mA.

poLight – How It Works

poLight’s TLens uses piezoelectric actuators to bend a thin glass membrane over poLight’s special optical clear, index-matched polymer (see below). This bending process changes the lens’s focal point, similar to how the human eye works. The TLens can also be combined with other optics (below right) to support OIS and autofocus.

The GIF animation (right) shows how the piezo actuators can bend the top glass membrane to change the lens in the center for autofocus, tilt the lens to shift the image for OIS, or perform autofocus and OIS simultaneously.

poLight also proposes supporting “supra” resolution (pixel shifting) for MicroLEDs by tilting flat glass with poLight’s polymer using piezo actuators to shift pixels optically.

One concern is that poLight's actuators require up to 50 Volts. Generating higher voltages typically comes with some power loss and more components. [Corrected – March 3, 2023] poLight's companion driver ASIC (PD50) has built-in EMI reduction that minimizes external components (it only requires an external capacitive load), and power/current consumption is kept very low (the TLens®, being an optical device, consumes virtually no power; the majority of the <6mW total power is consumed by the driver ASIC – see table below).

poLight says that the TLens is about 94% transparent. The front aperture diameter of the TLens, while large enough for small sensor (like a smartphone) cameras, seems small at just over 2mm. The tunable wedge concept could have a much wider aperture as it does not need to form a lens. While the poLight method may result in a more compact design, the range of optics would seem to be limited in both the size of the aperture and how much the optics change.

Uses for Optics Micromovement in AR and VR beyond cameras

Going beyond the established camera uses, including autofocus and OIS, outlined below are some of the uses for these devices in AR and VR:

  • Variable focus, including addressing vergence accommodation conflict (VAC)
  • Super-resolution – shifting the display device or the optic to improve the effective resolution
  • Aiming and moving cameras:
    • When doing VR with camera-passthrough, there are human factor advantages to having the cameras positioned and aimed the same as the person’s eyes.
    • For SLAM and tracking cameras, more area could be covered with higher precision if the cameras rotate.
  • I discussed several uses for MicroLED pixel shifting in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology:
    • Shifting several LEDs to the same location to average their brightness and correct for any dead or weak pixels should greatly improve yields.
    • Shifting spatial color subpixels (red, green, and blue) to the same location for a full-color pixel. This would be a way to reduce the effective size of a pixel and “cheat” the etendue issue caused by a larger spatial color pixel.
    • Improve resolution, as the MicroLED emission area is typically much smaller than the pitch between pixels. There may be no overlap when shifting, thus giving the full resolution advantage (see the sketch after this list). This technique could also allow for fewer physical pixels with fewer connections, but there will be a tradeoff in the maximum brightness that can be achieved.
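Below is a toy one-dimensional sketch of that last point. The 50% emission width is an assumption for illustration: if the emitter is half the pixel pitch wide, shifting the image by half a pitch between two subframes doubles the effective sample grid with no overlap between subframes.

```python
import numpy as np

pitch = 4          # pixel pitch in fine simulation units
emit = 2           # assumed emission width: 50% of the pitch
n_px = 8

def render(values, offset):
    frame = np.zeros(n_px * pitch)
    for p, v in enumerate(values):
        start = p * pitch + offset
        frame[start:start + emit] = v
    return frame

sub_a = render(np.linspace(0.2, 1.0, n_px), offset=0)     # subframe 1
sub_b = render(np.linspace(1.0, 0.2, n_px), offset=emit)  # subframe 2, shifted by half a pitch
print("samples lit in both subframes:", int(np.sum((sub_a > 0) & (sub_b > 0))))  # 0 -> no overlap
```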

Conclusions

It seems clear that future AR and VR systems will require changing optics at a minimum for autofocusing. There is also the obvious need to support focus-changing optics for VAC. Moving/changing optics will find many other uses in future AR and VR systems.

Between poLight and Cambridge Mechatronics (CML), it seems clear that CML's technology is much more adaptable to a wider range and variety of motion. For example, CML could handle the bigger lenses required for VAC in VR. poLight appears to have an advantage in size for small cameras.

The post Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6) first appeared on KGOnTech.


AR Longan Vision AR for First Responders (CES – AR/VR/MR 2023 Pt. 5)

1 March 2023 at 01:56

Introduction

This next entry in my series on companies I met with at CES or Photonics West's (PW) AR/VR/MR show in 2023 will discuss a company working on a headset for a specific application, namely firefighting and related first responders. In discussing Longan Vision, I will mention ThermalGlass (by 360world using Vuzix Blade optics), Campfire 3D, iGlass, and Mira, which have some similar design features. In addition to some issues common to all AR devices, Longan Vision has unique issues related to firefighting and other first responder applications.

This was my first meeting with Longan Vision, and it was not for very long. I want to be clear that I have no experience working with firefighters or their needs and opinions on AR equipment. In this short article, I want to point out how they tried to address the user’s needs in an AR headset.

Longan Vision

Below is a picture of Longan Vision’s booth, my notations, and some inset pictures from Longan’s website.

Hands-free operation is a big point and central to the use case for many AR designs. Longan uses AR to enhance vision by letting firefighters see through the smoke and darkness and providing additional life-saving information such as temperature and direction.

The AR optics are one of the simplest and least expensive possible; they use dual merged large curved free-space combiners, often called "bug-eye" combiners based on their appearance. They use a single cell-phone-size display device to generate the image (some bug-eyes use two smaller displays). The combiner has a partial mirror coating to reflect the display's image to the eye. The curvature of the semi-reflective combiner magnifies and moves the focus of the display, while light from the real world is dimmed by roughly the fraction of light the combiner reflects.

The bug-eye combiner has well-known good, bad, and other points (also discussed in a previous article).

Birdbath Optics
  • The combiner is inexpensive to produce with reasonably good image quality. This means it can also be replaced inexpensively if it becomes damaged.
  • It gives very large eye relief, so there are no issues with wearing glasses. Thus it can be worn interchangeably by almost everyone (one size fits all).
  • It is optically efficient compared to Birdbath, Waveguides, and most other AR optics.
  • While large, the combiner can be made out of very rugged plastics and is not likely to break and will not shatter. It can even serve as eye and face protection.
  • Where the eyes will verge is molded into the optics and will differ from person to person based on their IPD.
  • As the name “bug-eye” suggests, they are big and unattractive.
  • Because the combiner magnifies a very large (by near-eye standards) display with very large pixels, the angular resolution (pixels per degree) is very low, while the FOV is large.
  • Because the combiner is “off-axis” relative to the display, the magnification and focus are variable. This effect can be reduced but not eliminated by making the combiner aspherical. Birdbath optics (described here and shown above-right) have a beamsplitter, which greatly reduces efficiency but makes optics “on-axis” to eliminate these issues.
  • Brightness is limited by the display's brightness multiplied by the fraction of light reflected by the combiner. Typically, flat panels will have between 500 and 1,000 nits. That fraction typically ranges between 50% and 20%, depending on the tradeoff of display efficiency versus transparency to the real world. These factors and others typically limit their use to indoor applications (a rough brightness calculation follows this list).
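Below is the rough brightness arithmetic referenced in the last bullet, using the panel brightness and combiner reflectance ranges mentioned above; it ignores other coating and absorption losses.

```python
# Rough brightness budget for a bug-eye combiner using the ranges cited above.
for panel_nits, reflect in [(1000, 0.5), (1000, 0.2), (500, 0.5), (500, 0.2)]:
    virtual_nits = panel_nits * reflect
    see_through = 1.0 - reflect   # ignoring other coating and absorption losses
    print(f"{panel_nits} nit panel at {reflect:.0%} reflection -> "
          f"~{virtual_nits:.0f} nit virtual image, ~{see_through:.0%} real-world transmission")
```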

Longan also had some unique requirements incorporated into their design:

  • The combiner had to be made out of high-temperature plastics
  • They had to use high-temperature batteries, which added some weight and bulk. Due to their flammability, they could not use the common, more energy-dense lithium batteries.
  • The combiner supports flipping up to get out of the user’s vision. This is a feature supported by some other bug-eye designs.
  • The combiner also acts as an eye and partial face shield. Their website demonstration video shows firefighters having an additional flip-up outer protective shield. It is not clear if these will interfere with each other when flipping up and down.
  • The combiner must accommodate the firefighting breathing apparatus.
  • An IR camera feeds the display to see what would otherwise be invisible.

Companies with related technologies

I want to mention a few companies that have related technologies.

At CES 2023, I met with ThermalGlass (by 360world), which combines infrared heat images with Vuzix Blade technology to produce thermal-vision AR glasses. I discussed ThermalGlass in my CES recap with SadlyItsBradley.

Mira has often been discussed on this blog as an example of a low-cost AR headset. Mira's simple technology is most famously used in the Universal Studios Japan and Hollywood Mario Kart rides. Mira's website shows a more industrially oriented product with a hard hat and an open frame/band version. Both, like Longan, support a flip-up combiner. The open headband version does not appear to have enough support, with just a headband and forehead pad. Usually, an over-the-head band is also desirable for comfort and a secure fit with this type of support.

In my video with SadlyItsBradley after AWE 2022, I discussed other large combiner companies, including Campfire, Mira, and iGlass.

The images below show some pictures I took at AWE 2018 of the iView prototype with a large off-axis combiner with a front view (upper left), a view directly of the displays (lower left), and a view through the combiner without any digital correction (below right). The football field in the picture below right illustrates how the image is distorted and how the focus varies from the top to the bottom of the display (the camera was focused at about the middle of the image). Typically the distortion can be corrected in software with some loss in resolution due to the resampling. The focusing issue, however, cannot be corrected digitally and relies on the eye to adjust focus depending on where the eye is centered.

Conclusions

Longan has thought through many features from the firefighter’s user perspective. In terms of optics, it is not the highest-tech solution, but it may not need to be for the intended application. The alternative approach might be to use a waveguide much closer to the eye but with enough eye relief to support glasses. But then the waveguide would have to be extremely ruggedized with its own set of issues in a firefighter’s extreme environment.

Unlike many AR headsets that have me scratching my head, with Longan Vision I can see the type of customer that might want this product.

The post AR Longan Vision AR for First Responders (CES – AR/VR/MR 2023 Pt. 5) first appeared on KGOnTech.


CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs

23 February 2023 at 21:00

Introduction – Combining 2023’s CES and AR/VR/MR

As I wrote last time, I met with over 30 companies, about 10 of which I met with twice between CES and SPIE's AR/VR/MR conference. Also, since I started publishing articles and videos with SadlyItsBradley on CES, I have received information about other companies, corrections, and updates.

FlexEnable is developing technology that could affect AR, VR, and MR. FlexEnable offers an alternative to Meta Materials (not to be confused with Meta/Facebook) electronic dimming technology. Soon after publishing CES 2023 (Part 1) – Meta Materials’ Breakthrough Dimming Technology, I learned about FlexEnable. So to a degree, this article is an update on the Meta Materials article.

FlexEnable also has a liquid crystal electronic lens technology. This blog has discussed Meta/Facebook's interest in electronically switchable lens technology in Imagine Optix Bought By Meta – Half Dome 3's Varifocal Tech – Meta, Valve, and Apple on Collision Course? and Meta's Cambria (Part 2): Is It Half Dome 3?.

FlexEnable is also working on biaxially curved LCD technology. In addition to direct display uses, the ability to curve a display as needed will find uses in AR and VR. Curved LCDs could be particularly useful in very wide FOV systems. I briefly discussed Red 6's helmet having a curved LCD in my AWE 2022 video with SadlyItsBradley.

FlexEnable – Flexible/Moldable LC for Dimming, Electronic Lenses, Embedded Circuitry/Transistors, and Curved LCD

FlexEnable has many device structures for making interesting optical technologies that combine custom liquid crystal (LC), tri-acetyl cellulose (TAC) clear sheets, polymer transistors, and electronic circuitry. While FlexEnable has labs to produce prototypes, its business model is to design, develop, and adapt its technologies to its customers' requirements for transfer to a manufacturing facility.

TAC films are often used in polarizers because they have high light transmittance and low birefringence (variable retardation and, thus, change in polarization). Unlike most plastics, TAC can retain its low birefringence when flexed or heated to its glass point (becomes rubbery but not melted) and molded to a biaxial curve. By biaxially curving, they can match the curvature of lenses or other product features.

FlexEnable’s Biaxially Curvable Dimming

Below is the FlexEnable approach to dimming, which is similar to how a traditional glass LC device is made. The difference is that they use TAC films to enclose the LC instead of glass. FlexEnable has formulated a non-polarization-based LC that can either darken or lighten when an electric field is applied (the undriven state can be transparent or dark). For AR, a transparent state, when undriven, would normally be preferred.

To form a biaxially curved dimmer, the TAC material is heated to its glass point (around 150°C) for molding. Below is the cell structure and an example of a curved dimmer in its transparent and dark state.

FlexEnable biaxially shapeable electronic dimming

The Need for Dimming Technology

As discussed in CES 2023 (Part 1) – Meta Materials’ Breakthrough Dimming Technology, there is a massive need in optical AR to support electronically controlled dimming that A) does not require light to be polarized, and B) has a highly transparent state when not dimming. Electronic dimming supports AR headset use in various ambient light conditions, from outside in the day to darker environments. It will make the virtual content easier to see without blasting very bright light to the eyes. Not only will it reduce system power, but it will also be easier on the eyes.

Magic Leap has demonstrated the usefulness of electronic dimming with and without segmentation (also known as soft edge occlusion or pixelated dimming) with their Magic Leap 2 (as discussed with SadlyItsBradley). Segmented dimming allows the light blocking to be selective and more or less track the virtual content to make it look more solid. Because the segmented dimming is out of focus, it can only do "soft edge occlusion," where it dims general areas. "Hard-edge occlusion," which would selectively dim the real world for each pixel in the virtual world, appears impossible with optical AR (but is trivial in VR with camera passthrough).

The biggest problem with the ML2 approach is that it uses polarization-based dimming that blocks about 65% of the light in its most transparent state (and ~80% after the waveguide). I discussed this problem in Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users. The desire (I would say need, as discussed in the Meta Materials article here) for light blocking in AR is undeniable, but blocking 80% of the light in the most transparent state is unacceptable in most applications. Magic Leap has been demonstrating that soft edge occlusion improves the virtual image.

Some of the possible dimming ranges

Dimming Range and Speed

Two main factors affect the darkening range and switching speed: the LC formulation and the cell gap thickness. For a given LC formula, the thicker the gap, the more light it will block in both the transmissive and the dark state.

Like with most LC materials, the switching speed increases roughly in inverse proportion to the square of the cell gap thickness. For example, if the cell gap is half as thick, the LC will switch about 4 times faster. FlexEnable is not ready to specify the switching speeds.
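A minimal sketch of that gap-squared scaling is below; the gaps are arbitrary example values, not FlexEnable figures.

```python
# Rough LC response-time scaling with cell gap (switching time roughly proportional to gap^2).
def relative_switch_time(gap_um, ref_gap_um=10.0):
    return (gap_um / ref_gap_um) ** 2   # relative to an arbitrary reference cell

for gap in (10.0, 5.0, 2.5):            # example gaps only, not FlexEnable figures
    print(f"{gap:>4} um gap -> {relative_switch_time(gap):.2f}x the reference response time")
# Halving the gap gives roughly 4x faster switching, at the cost of less dimming per cell.
```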

The chart on the right shows some currently possible dimming ranges with different LC formulas and cell thicknesses.

Segmented/Pixelated Dimming

Fast switching speeds become particularly important for supporting segmented dimming (ex., Magic Leap 2) because the dimming switching speed needs to be about as fast as the display. Stacking two thin cells in series could give both faster switching and a larger dynamic range, as the light blocking would be roughly squared (see the example below).
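A quick back-of-envelope on stacking two thin cells follows; the transmission percentages are assumptions for illustration only.

```python
# Two thin cells in series: transmissions multiply, so the dark state roughly "squares"
# while each cell keeps its thin-cell switching speed. Percentages are assumptions.
clear_1, dark_1 = 0.85, 0.30                  # assumed single thin-cell transmission
clear_2, dark_2 = clear_1 ** 2, dark_1 ** 2   # two identical cells stacked
print(f"one cell:  {clear_1:.0%} clear, {dark_1:.0%} dark -> {clear_1 / dark_1:.1f}:1 range")
print(f"two cells: {clear_2:.0%} clear, {dark_2:.0%} dark -> {clear_2 / dark_2:.1f}:1 range")
```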

FlexEnable supports passive and active (transistor) circuitry to segment/pixelate and control the dimming.

Electronically Controlled Lenses

FlexEnable is also developing what are known as GRIN (Gradient Index) LC lenses. With this type of LC, the electric field changes the LC’s refraction index to create a switchable lens effect. The index-changing effect is polarization specific, so to control unpolarized light, a two-layer sandwich is required (see below left). As evidenced by patent applications, Meta (Facebook) has been studying GRIN and Pancharatnam–Berry Phase (PBP) electronically switchable lenses (for more on the difference between GRIN and PBP switchable lenses, see the Appendix). Meta application 2020/0348528 (Figs. 2 and 12 right) shows using a GRIN-type lens with a Fresnel electrode pattern (what they call a Segmented Lens Profile or SPP). The same application also discusses PBP lenses.

FlexEnable (left) and Meta Patent Application 2020/0348528 Figs. 2 and 12 (right)

The switchable optical power of the GRIN lens can be increased by making the cell gap thicker, but as stated earlier, the speed of LC switching will reduce by roughly the square of the cell gap thickness. So instead, a Fresnel-like approach can be used, as seen diagrammatically in the Meta patent application figure (above right). This results in a thinner and faster switching lens but with Fresnel-type optical issues.

When used in VR (ex., Meta’s Half Dome 3), the light can be polarized, so only one layer is required per switchable lens.

There is a lot of research in the field of electronically switchable optics. DeepOptics is a company that this blog has referenced a few times since I saw them at CES 2018.

Miniature Electromechanical Focusing – Cambridge Mechatronics and poLight

At CES, I met with Cambridge Mechatronics (CML) and poLight, which have miniature electromechanical focusing and optical image stabilization devices used in cell phones and AR cameras. CML uses Shape Memory Alloy wire to move conventional optics for focusing and stabilization. poLight uses piezoelectric actuators to bend a clear deformable membrane over a clear but soft optical material to form a variable lens. They can also tilt a rigid surface against the soft optical material to control optical image stabilization and pixel shifting (often called wobulation). I plan to cover both technologies in more detail in a future article, but I wanted to mention them here as alternatives to LC-controlled variable focus.

Polymer Transistors and Circuitry

FlexEnable has also developed polymer semiconductors that they claim perform better than amorphous silicon transistors (typically used in flat panel displays). Higher performance translates into smaller transistors. These transistors can be used in an active matrix to control higher-resolution devices.

Biaxially Curved LCD

Combining FlexEnable's technologies, including curved LC cells, circuitry, and polymer semiconductors, results in the ability to make biaxially curved LCD prototype displays (right).

Curved displays and Very Wide FOV

Curved displays become advantageous in making very wide FOV displays. At AWE 2022, Red 6 had private demonstrations (discussed briefly in my video with SadlyItsBradley) of a 100-degree FOV with no pupil swim (image distorting as the eye moves) military AR headset incorporating a curved LCD. Pulsar, an optical design consulting company, developed the concept of using a curved display and the optics for the new Red 6 prototype. To be clear, Red 6/Pulsar used a curved glass LCD display, not one from FlexEnable, but it shows that curved displays become advantageous.

Conclusions

In the near term, I find the non-polarized electronic dimming capabilities most interesting for AR. While FlexEnable doesn’t claim to have the light-to-dark range of Meta Materials, they appear to have enough range, particularly on the transparent end, for some AR applications. We must wait to see if the switching speeds are fast enough to support segmented dimming.

Having electronic dimming in a film that can be biaxially curved and added to a design will be seen by many as an advantage over Meta Materials' rigid, lens-like dimming technology. Currently, it seems that, at least on specs, Meta Materials has demonstrated a much wider dynamic range from the transparent to the dark state. I would expect FlexEnable's LC characteristics to continue to improve.

Electronically changeable lenses are seen as a way to address vergence accommodation conflict (VAC) in VR (such as with Meta's Half-Dome 3). They would be combined with eye tracking or other methods to move the focus based on where the user is looking. Supporting VAC with optical AR would be much more complex; to prevent the focus of the real world from changing, a pre-compensation switchable lens would have to cancel out the effect on the real world. This complexity will likely prevent these lenses from being used for VAC in optical AR anytime soon.

Biaxially curved LCDs would seem to offer optical advantages in very wide FOV applications.

Appendix: GRIN vs. Pancharatnam-Berry phase lenses

Simply put, with a GRIN lens, the LC itself acts as the lens. The voltage across the LC and the LC's thickness affect how the lens works. Pancharatnam-Berry phase (PBP) lenses use a uniform LC shutter to change the polarization of light, which controls the effect of a film with the lens function recorded in it. The lens function film will act or not act based on the polarization of the light. As stated earlier, Meta has been considering both GRIN and PBP lenses (for example, both are shown in Meta application 2020/0348528).

For more on how GRIN lenses work, see Electrically tunable gradient-index lenses via nematic liquid crystals with a method of spatially extended phase distribution.

For more on PBP lenses, see the Augmented reality near-eye display using Pancharatnam-Berry phase lenses and my article, which discusses Meta’s use in the Half-Dome 3.

GRIN lenses don’t require light to be first polarized, but they require a sandwich of two cells. PBP in an AR application would require the real-world light to be polarized, which would lose more than 50% of the light and cause issues with looking at polarized light displays such as LCDs.

The PBP method would likely support more complex lens functions to be recorded in the films. The Meta Half-Dome 3 used a series of PBP lenses with binary-weighted lens functions (see below).

Meta patent application showing the use of multiple PBP lenses (link to article)

The post CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs first appeared on KGOnTech.


CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy

16 February 2023 at 03:52

Introduction

Bradley Lynch of the SadleyItsBradley YouTube channel hosted my presentation about CES 2023. The video was recorded about a week after CES, but it took a few weeks to edit and upload everything. There are over 2 hours of Brad and me talking about things we saw at CES 2023.

Brad was doing his usual YouTube content: fully editing the video, improving the visual content, and breaking the video down into small chunks. But it took Brad weeks to get 3 “sub videos” (part 1, part 2, and part 3) posted while continuing to release his own content. Realizing that it would take a very long time at this rate, Brad released part 4 with the rest of the recording session with only light editing as a single 1-hour and 44-minute video with chapters.

For those that follow news about AR and VR, Brad got involved in a controversy with his leaks of information about the Meta Quest Pro and Meta Quest 3. The controversy occurred between the recording and the release of the videos, so I felt I should comment on the issue.

Videos let me cover many more companies

This blog has become highly recognized in the AR/MR community, and I have many more companies wanting me to write about their technology than I have the time. I also want to do in-depth articles, including major through-the-optics studies on “interesting” AR/MR devices.

I have been experimenting with ways to get more content out quicker. I can spend from 3 days up to 2 months (such as on the rest of the Meta Quest Pro series yet to be published) working on a single article about a technology or product. With CES and the AR/VR/MR conference only 3 weeks apart, I met with about 20 companies at each conference.

In the past, I only had time to write about a few companies that I thought had the most interesting technology. For the CES 2023 video, it took about 3 days to organize the photos and then about 2.5 hours to discuss about 20 companies and their products, or about 5 to 7 minutes per topic (not including all the time Brad spent editing the video).

I liked working with Brad, and we hope to do videos together in the future; he is fun to talk to and adds a different perspective with his deep background in VR. But in retrospect, less than half of what we discussed fits with his primarily VR audience.

Working on summary articles for CES and the SPIE AR/VR/MR conference

Over 2 hours of Brad and me discussing over 20 companies and various other subjects and opinions about the AR, VR, and MR technology and industry is a lot for people to go through. Additionally, the CES video was shot in one sitting, non-stop. Unfortunately, my dogs decided they wanted to get through my closed office door and barked much more than I realized, as I was focused on the presentation (I should have stopped the recording and quieted them down).

I’m working on a “quick take” summary guide with pictures from the video and some comments and corrections/updates. I expect to break the guide into several parts based on broad topics. It might take a few days before this guide gets published as there is so much material.

Assuming the CES quick take guide goes well, I plan to follow up with my quick takes on the AR/VR/MR conference. I’m also looking at recording a discussion at the AR/VR/MR conference that will likely be published on the KGOnTech YouTube channel.

Links to the Various Sections of the Video

Below is a list of topics with links for the four videos.

Video 1

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

Video 2

  • 0:00 Porotech MicroLEDs

Video 3

  • 0:00 NewSight Reality’s Transparent uLEDs
  • 4:07 LetinAR Glasses (Bonus Example/Explanation)

Video 4

SadlyItsBradley’s Meta Leaks Controversy

Between the time of recording the CES 2023 video with Brad and the videos being released, there was some controversy involving Brad and Meta that I felt should be addressed because of my work with Brad.

Brad Lynch made national news when The Verge reported that Meta had caught Brad's source for the Meta Quest Pro and Meta Quest 3 information and diagrams. Perhaps ironically, the source for The Verge article was a leaked memo by Meta's CTO, Andrew Bosworth (who goes by Boz). According to The Verge, "In his post to Meta employees, Bosworth confirmed that the unnamed leaker was paid a small sum for sharing the materials with Lynch."

From what was written in The Verge article and Brad's subsequent Twitter statement, it seems clear that Brad didn't know that paying a source is considered unethical "checkbook journalism." It is one of those gray areas where, as I understand it (and this is not legal advice), it is not illegal unless the reporter is soliciting the leak. At the same time, if I had known Brad was going to pay a source, I would have advised him not to do it.

It is nice to know that news media that will out and out lie, distort, hide key information, and report as true information from highly biased named and unnamed sources still has one slim ethical pillar: leaks are our life’s blood but don’t get caught paying for one. It is no wonder public trust in the news media is so low.

The above said, and to be clear, I never have and would never pay a source or encourage anyone to leak confidential content. I also don’t think it was fair or right for a person under NDA to leak sensitive information except in cases of illegal or dangerous activity by the company.

KGOnTech (My) Stance on Confidentiality

Unless under contract with a significant sum of money, I won't sign an NDA, as it means taking on a legal and, thus, financial risk. At the same time, when I meet privately with companies, I treat information and material as confidential, even if it is not marked as such, unless they want me to release it. I'm constantly asking companies, "What of this can I write about?"

My principle is that I never want to be responsible for hurting someone who shared information with me. And as stated above, I would never encourage, much less pay, someone to break a confidence. If someone shares information with me to publish, I always try to find out whether they want their name to be public, as I don't want to either get them in trouble or take credit for their effort.

Closing

That’s it for this article. I’ve got to finish my quick take summaries on CES and the AR/VR/MR conference.

The post CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy appeared first on KGOnTech.

CES 2023 SadlyIsBradley and KGOnTech Joint Review Video Series (Part 1)

26 January 2023 at 21:17

New Video Series on CES 2023

Brad Lynch of the SadlyItsBradley YouTube Channel and I sat down for over 2 hours a week after CES and recorded our discussion of more than 20 companies one or both of us met with at CES 2023. Today, Jan. 26, 2023, Brad released a 23-minute part 1 of the series. Brad is doing all the editing while I did much of the talking.

Brad primarily covers VR, while this blog mostly covers optical AR/MR. Our two subjects meet when we discuss “Mixed Reality,” where the virtual and the real world merge.

Brad's title for part 1 is "XR at CES: Deep Dives #1 (Magic Leap 2, OpenBCI, Meta Materials)." While Brad describes the series as a "Deep Dive," I, as an engineer, consider it to be more of an "overview." It will take many more days to complete my blog series on CES 2023. This video series briefly discusses many of the same companies I plan to write about in more detail on this blog, so consider it a look ahead at some future articles.

Brad’s description of Part 1 of the series:

There have been many AR/VR CES videos from my channel and others, and while they gave a good overview of the things that could be seen on the show floor and in private demoes, many don’t have a technical background to go into how each thing may work or not work

Therefore I decided to team up with retired Electrical Engineer and AR skeptic, Karl Guttag, to go over all things XR at CES. This first part will talk about things such as the Magic Leap 2, Open BCI’s Project Galea, Meta Materials and a few bits more!

Brad also has broken the video into chapters by subject:

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs. VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

That’s it for today. Brad expects to publish about 2 to 3 videos in the next week. I will try and post a brief note as Brad publishes each video.

The post CES 2023 SadlyIsBradley and KGOnTech Joint Review Video Series (Part 1) appeared first on KGOnTech.
