
How LG and Samsung Are Making TV Screens Disappear



A transparent television might seem like magic, but both LG and Samsung demonstrated such displays this past January in Las Vegas at CES 2024. And those large transparent TVs, which attracted countless spectators peeking through video images dancing on their screens, were showstoppers.

Although they are indeed impressive, transparent TVs are not likely to appear—or disappear—in your living room any time soon. Samsung and LG have taken two very different approaches to achieve a similar end—LG is betting on OLED displays, while Samsung is pursuing microLED screens—and neither technology is quite ready for prime time. Understanding the hurdles that still need to be overcome, though, requires a deeper dive into each of these display technologies.

How does LG’s see-through OLED work?

OLED stands for organic light-emitting diode, and that pretty much describes how it works. OLED materials are carbon-based compounds that emit light when energized with an electrical current. Different compounds produce different colors, which can be combined to create full-color images.

To construct a display from these materials, manufacturers deposit them as thin films on some sort of substrate. The most common approach arranges red-, green-, and blue-emitting (RGB) materials in patterns to create a dense array of full-color pixels. A display with what is known as 4K resolution contains a matrix of 3,840 by 2,160 pixels—8.3 million pixels in all, formed from nearly 25 million red, green, and blue subpixels.
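The pixel and subpixel counts above are simple arithmetic and easy to verify:

```python
# Pixel arithmetic for a 4K (3840 x 2160) panel, as described above.
width, height = 3840, 2160
pixels = width * height     # full-color pixels
subpixels = pixels * 3      # one red, one green, one blue subpixel each

print(f"{pixels:,}")        # 8,294,400 -> "8.3 million pixels"
print(f"{subpixels:,}")     # 24,883,200 -> "nearly 25 million subpixels"
```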


The timing and amount of electrical current sent to each subpixel determines how much light it emits. So by controlling these currents properly, you can create the desired image on the screen. To accomplish this, each subpixel must be electrically connected to two or more transistors, which act as switches. Traditional wires wouldn’t do for this, though: They’d block the light. You need to use transparent (or largely transparent) conductive traces.

LG’s demonstration of transparent OLED displays at CES 2024 seemed almost magical. Ethan Miller/Getty Images

A display has thousands of such traces arranged in a series of rows and columns to provide the necessary electrical connections to each subpixel. The transistor switches are also fabricated on the same substrate. That all adds up to a lot of materials that must be part of each display. And those materials must be carefully chosen for the OLED display to appear transparent.

The conductive traces are the easy part. The display industry has long used indium tin oxide as a thin-film conductor. A typical layer of this material is only 135 nanometers thick but allows about 80 percent of the light impinging on it to pass through.
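Because transmittances multiply across layers, a single 80-percent-transmissive layer is only the starting point for a display’s overall transparency. A minimal sketch of that compounding (the function name and the non-ITO values are invented for illustration):

```python
# Layer transmittances multiply, so every added layer cuts overall
# transparency. Only the ~0.80 ITO figure comes from the article;
# the other values are hypothetical.
def stack_transmission(layer_transmittances):
    total = 1.0
    for t in layer_transmittances:
        total *= t
    return total

# ITO traces (~0.80) plus two hypothetical ~0.90 layers:
print(round(stack_transmission([0.80, 0.90, 0.90]), 3))  # 0.648
```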

The transistors are more of a problem, because the materials used to fabricate them are inherently opaque. The solution is to make the transistors as small as you can, so that they block the least amount of light. The amorphous silicon layer used for transistors in most LCD displays is inexpensive, but its low electron mobility means that transistors composed of this material can only be made so small. This silicon layer can be annealed with lasers to create low-temperature polysilicon, a crystallized form of silicon, which improves electron mobility, reducing the size of each transistor. But this process works only for small sheets of glass substrate.

Faced with this challenge, designers of transparent OLED displays have turned to indium gallium zinc oxide (IGZO). This material has high enough electron mobility to allow for smaller transistors than is possible with amorphous silicon, meaning that IGZO transistors block less light.

These tactics help solve the transparency problem, but OLEDs have some other challenges. For one, exposure to oxygen or water vapor destroys the light-emissive materials. So these displays need an encapsulating layer, something to cover their surfaces and edges. Because this layer creates a visible gap when two panels are placed edge to edge, you can’t tile a set of smaller displays to create a larger one. If you want a big OLED display, you need to fabricate a single large panel.

The result of even the best engineering here is a “transparent” display that still blocks some light. You won’t mistake LG’s transparent TV for window glass: People and objects behind the screen appear noticeably darker than when viewed directly. According to one informed observer, the LG prototype appears to have 45 percent transparency.

How does Samsung’s magical MicroLED work?

For its transparent displays, Samsung is using inorganic LEDs. These devices, which are very efficient at converting electricity into light, are commonplace today: in household lightbulbs, in automobile headlights and taillights, and in electronic gear, where they often show that the unit is turned on.

In LED displays, each pixel contains three LEDs, one red, one green, and one blue. This works great for the giant digital displays used in highway billboards or in sports-stadium jumbotrons, whose images are meant to be viewed from a good distance. But up close, these LED pixel arrays are noticeable.

TV displays, on the other hand, are meant to be viewed from modest distances and thus require far smaller LEDs than the chips used in, say, power-indicator lights. Two years ago, these “microLED” displays used chips that were just 30 by 50 micrometers. (A typical sheet of paper is 100 micrometers thick.) Today, such displays use chips less than half that size: 12 by 27 micrometers.

While transparent displays are stunning, they might not be practical for home use as televisions. Expect to see them adopted first as signage in retail settings. AUO

These tiny LED chips block very little light, making the display more transparent. The Taiwanese display maker AUO recently demonstrated a microLED display with more than 60 percent transparency.

Oxygen and moisture don’t affect microLEDs, so they don’t need to be encapsulated. This makes it possible to tile smaller panels to create a seamless larger display. And the silicon layer on such small panels can be annealed to create polysilicon, which performs better than IGZO, so the transistors can be even smaller and block less light.

But the microLED approach has its own problems. Indeed, the technology is still in its infancy: it costs a great deal to manufacture and requires some contortions to achieve uniform brightness and color across the entire display.

For example, individual OLED materials emit a well-defined color, but that’s not the case for LEDs. Minute variations in the physical characteristics of an LED chip can alter the wavelength of light it emits by a measurable—and noticeable—amount. Manufacturers have typically addressed this challenge by using a binning process: They test thousands of chips and then group them into bins of similar wavelengths, discarding those that don’t fit the desired ranges. This explains in part why those large digital LED screens are so expensive: Many LEDs created for their construction must be discarded.
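The binning idea can be sketched as follows. The function name, wavelengths, and bin edges here are hypothetical, and real production binning also sorts by brightness and forward voltage:

```python
# A minimal sketch of LED wavelength binning (illustrative only).
def bin_leds(measured_nm, bin_edges):
    """Group chips by measured peak wavelength; reject out-of-range parts."""
    bins = {edge: [] for edge in bin_edges[:-1]}
    rejects = []
    for wl in measured_nm:
        for lo, hi in zip(bin_edges, bin_edges[1:]):
            if lo <= wl < hi:
                bins[lo].append(wl)
                break
        else:
            rejects.append(wl)  # chip discarded, adding to cost
    return bins, rejects

# Green chips targeted near 525 nm, sorted into 2.5 nm bins:
bins, rejects = bin_leds([523.1, 524.9, 526.0, 531.4], [522.5, 525.0, 527.5])
print(len(rejects))  # 1 chip (531.4 nm) falls outside every bin
```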

But binning doesn’t really work when dealing with microLEDs. The tiny chips are difficult to test and are so expensive that costs would be astronomical if too many had to be rejected.

Though you can see through today’s transparent displays, they do block a noticeable amount of light, making the background darker than when viewed directly. Tekla S. Perry

Instead, manufacturers test microLED displays for uniformity after they’re assembled, then calibrate them to adjust the current applied to each subpixel so that color and brightness are uniform across the display. This calibration process, which involves scanning an image on the panel and then reprogramming the control circuitry, can sometimes require thousands of iterations.
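One way to picture that iterative calibration: model each subpixel as having a slightly different efficiency, and nudge its drive current over repeated scans until brightness is uniform. This toy model is my own illustration, not the manufacturers’ actual process, which images the panel with a camera and reprograms the control circuitry:

```python
# Toy model of post-assembly uniformity calibration (illustrative only).
def calibrate(efficiencies, target=100.0, gain=0.5, tol=0.1):
    currents = [1.0] * len(efficiencies)
    iterations = 0
    while True:
        brightness = [c * e for c, e in zip(currents, efficiencies)]
        errors = [target - b for b in brightness]
        if max(abs(err) for err in errors) < tol:
            return currents, iterations
        # Nudge each subpixel's current toward the target brightness.
        currents = [c + gain * err / e
                    for c, err, e in zip(currents, errors, efficiencies)]
        iterations += 1

# Four subpixels with up to +/-10 percent efficiency spread:
currents, n = calibrate([90.0, 100.0, 110.0, 95.0])
print(n)  # converges after a handful of passes
```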

Then there’s the problem of assembling the panels. Remember those 25 million microLED chips that make up a 4K display? Each must be positioned precisely, and each must be connected to the correct electrical contacts.

The LED chips are initially fabricated on sapphire wafers, each of which contains chips of only one color. These chips must be transferred from the wafer to a carrier to hold them temporarily before applying them to the panel backplane. The Taiwanese microLED company PlayNitride has developed a process for creating large tiles with chips spaced less than 2 micrometers apart. Its process for positioning these tiny chips has better than 99.9 percent yields. But even at a 99.9 percent yield, you can expect about 25,000 defective subpixels in a 4K display. They might be positioned incorrectly so that no electrical contact is made, or the wrong color chip is placed in the pattern, or a subpixel chip might be defective. While correcting these defects is sometimes possible, doing so just adds to the already high cost.
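The back-of-the-envelope defect count follows directly from those two numbers:

```python
# Expected defective subpixels at a 99.9 percent transfer yield,
# using the 4K subpixel count from earlier in the article.
subpixels_4k = 3840 * 2160 * 3           # ~25 million microLED chips
transfer_yield = 0.999                   # "better than 99.9 percent"
expected_defects = subpixels_4k * (1 - transfer_yield)
print(round(expected_defects))           # ~24,883 -> "about 25,000"
```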

Samsung’s microLED technology allows the image to extend right up to the edge of the glass panel, making it possible to create larger displays by tiling smaller panels together. Brendan Smialowski/AFP/Getty Images

Could MicroLEDs still be the future of flat-panel displays? “Every display analyst I know believes that microLEDs should be the ‘next big thing’ because of their brightness, efficiency, color, viewing angles, response times, and lifetime,” says Bob Raikes, editor of the 8K Monitor newsletter. “However, the practical hurdles of bringing them to market remain huge. That Apple, which has the deepest pockets of all, has abandoned microLEDs, at least for now, and after billions of dollars in investment, suggests that mass production for consumer markets is still a long way off.”

At this juncture, even though microLED technology offers some clear advantages, OLED is more cost-effective and holds the early lead for practical applications of transparent displays.

But what is a transparent display good for?

Samsung and LG aren’t the only companies to have demonstrated transparent panels recently.

AUO’s 60-inch transparent display, made of tiled panels, won the People’s Choice Award for Best MicroLED-Based Technology at the Society for Information Display’s Display Week, held in May in San Jose, Calif. And the Chinese company BOE Technology Group demonstrated a 49-inch transparent OLED display at CES 2024.

These transparent displays all have one feature in common: They will be insanely expensive. Only LG’s transparent OLED display has been announced as a commercial product. It’s without a price or a ship date at this point, but it’s not hard to guess how costly it will be, given that nontransparent versions are expensive enough. For example, LG prices its top-end 77-inch OLED TV at US $4,500.

Displays using both microLED and OLED technology have some components in each pixel that block light coming from the background. These include the red, green, and blue emissive materials along with the transistors required to switch them on and off. Smaller components mean that you can have a larger transmissive space that will provide greater transparency. Illustration: Mark Montgomery; Source: Samsung

Thanks to seamless tiling, transparent microLED displays can be larger than their OLED counterparts. But their production costs are larger as well. Much larger. And that is reflected in prices. For example, Samsung’s nontransparent 114-inch microLED TV sells for $150,000. We can reasonably expect transparent models to cost even more.

Seeing these prices, you really have to ask: What are the practical applications of transparent displays?

Don’t expect these displays to show up in many living rooms as televisions. And high price is not the only reason. After all, who wants to see their bookshelves showing through in the background while they’re watching Dune? That’s why the transparent OLED TV LG demonstrated at CES 2024 included a “contrast layer”—basically, a black cloth—that unrolls and covers the back of the display on demand.

Transparent displays could have a place on the desktop—not so you can see through them, but so that a camera can sit behind the display, capturing your image while you’re looking directly at the screen. This would help you maintain eye contact during a Zoom call. One company—Veeo—demonstrated a prototype of such a product at CES 2024, and it plans to release a 30-inch model for about $3,000 and a 55-inch model for about $8,500 later this year. Veeo’s products use LG’s transparent OLED technology.

Transparent screens are already showing up as signage and other public-information displays. LG has installed transparent 55-inch OLED panels in the windows of Seoul’s new high-speed underground rail cars, which are part of a system known as the Great Train eXpress. Riders can browse maps and other information on these displays, which can be made clear when needed for passengers to see what’s outside.

LG transparent panels have also been featured in an E35e excavator prototype by Doosan Bobcat. This touchscreen display can act as the operator’s front or side window, showing important machine data or displaying real-time images from cameras mounted on the vehicle. Such transparent displays can serve a similar function as the head-up displays in some aircraft windshields.

And so, while the large transparent displays are striking, you’ll be more likely to see them initially as displays for machinery operators, public entertainment, retail signage, and even car windshields. The early adopters might cover the costs of developing mass-production processes, which in turn could drive prices down. But even if costs eventually reach reasonable levels, whether the average consumer really wants a transparent TV in their home is something that remains to be seen—unlike the device itself, whose whole point is not to be.

Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)

20 April 2024 at 14:59

Update 2/21/22: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.

Introduction

In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections with a companion blog article with added information.

Outline of the Video and Additional Information

The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.

00:10 Lighting Silicon (Formerly Kopin Micro-OLED)

Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “Lift-off LCOS” process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a maker of high-resolution ferroelectric (reflective) LCOS. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for their Shiftall headset (see also: Pancake Optics Kopin/Panasonic).

At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the display, singulating the displays, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm FABs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, which will enable Lighting Silicon to offer ~4K devices.

Related info on Kopin’s history in microdisplays and micro-OLEDs:

02:55 RaonTech

RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs/prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1&2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR. I first reported on RaonTech at CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch field sequential color pixel in which “embedded spatial interpolation is done by [the] pixel circuit itself,” so (as I understand it) the 4K image is based on 2K data being sent and interpolated by the display.

In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.

04:01 May Display (LCOS)

May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7,680 x 4,320) device with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.

May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).

May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.

04:16 Kopin’s Forth Dimension Displays (LCOS)

As discussed earlier with Lighting Silicon, Kopin acquired Ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as Micropix in 1988 as part of CRL-Opto, then renamed CRLO in 2004, and finally Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.

I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their own backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display to be their first device, which they made for many years.

FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.

05:34 Texas Instruments (TI) DLP®

Around 2015, DLP and LCOS displays seemed to be used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, MicroLEDs in new designs. Among the reasons the companies gave for switching from DLP to LCOS were smaller pixel size (and thus a smaller device for a given resolution), lower power consumption of the display plus ASIC, more choice in device resolutions and form factors, and cost.

DLP does not require polarized light, which is a significant efficiency advantage in room and theater projectors that put out hundreds or thousands of lumens. But in near-eye displays, which need less than 1 to at most a few lumens because the light is aimed directly into the eye rather than illuminating a whole room, the power of the display device and its control logic/ASICs matters much more. Additionally, many near-eye optical designs employ one or more reflective optics that require polarized light.

Another issue with DLP is drive-algorithm control. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm. That was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because it needed direct control of its unique light field driving method. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.

Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.

Color Breakup On Hololens 1 using a low color sequential field rate

Added 2/21/22: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.

I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field sequence rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s new, much higher 240Hz & 480Hz frame rates, the DLP would have 720 or 1,440 color fields per second. Some older LCOS ran as low as 60 frames/180 fields per second (I think this was used on Hololens 1), and many, if not most, LCOS devices today use 120 frames/360 fields per second. A few LCOS devices I have seen can go as high as 180 frames/540 fields per second. So, the newer DLP devices would have an advantage in that area.
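Since each frame carries three sequential color fields, the field rates quoted above are just the frame rate times three:

```python
# Color-field rates implied by the frame rates discussed above
# (nominally three sequential fields, R, G, and B, per frame).
FIELDS_PER_FRAME = 3
frame_rates_hz = {
    "older LCOS": 60,
    "typical LCOS": 120,
    "fast LCOS": 180,
    "new DLP (240 Hz)": 240,
    "new DLP (480 Hz)": 480,
}
field_rates = {name: hz * FIELDS_PER_FRAME
               for name, hz in frame_rates_hz.items()}
print(field_rates["older LCOS"])        # 180
print(field_rates["new DLP (480 Hz)"])  # 1440
```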

The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).

My Background at Texas Instruments:

I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.

I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (2nd generation), the TMS320C80 (the first image processor with 4 DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.

After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector because of its advantages in that application. I like to think of myself as product-focused: I want to use whichever technology works best for the given application. I see pros and cons in all the display technologies.

07:25 VueReal MicroLED

VueReal is a Canadian startup developing MicroLEDs. Their initial focus was on making single-color-per-device microdisplays.

However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either a passive (wire) or active (e.g., thin-film transistors on glass or plastic) backplane. They claim to have extremely high yields with this process. With it, they can make full-color rectangular displays, transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.

I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, though he had to keep his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.

08:26 MojoVision MicroLED

MojoVision is pivoting from a “Contact Lens Display Company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s Display Group. MojoVision started saying (in private) that it was putting more emphasis on being a MicroLED component company around 2021. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.

To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun thing to watch).

So, in my book, MojoVision starts with a major credibility problem. Still, they are now under new leadership and focusing on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” MojoVision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum dot conversion to get green and red.

I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. PlayNitride demonstrated quantum-dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).

Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.

Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:

  • Blue LEDs are the most efficient color.
  • Only a single type of LED crystal structure in a single layer has to be made.
  • It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay-size MicroLEDs.
  • Quantum-dot conversion of blue to green and red is much more efficient than native green and red LEDs.
  • Native red LEDs are inefficient in GaN crystalline structures, which are moderately compatible with native green and blue LEDs.
  • Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking light from lower layers causes efficiency issues.
  • Single emitters with multiple-color LEDs (e.g., see my article on Porotech) have efficiency issues, particularly in red, which are further exacerbated by the need to time-sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.

Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):

  • MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, conversion is incomplete: light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
    • MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers.
  • There must be some structure/isolation to prevent blue light from adjacent cells from activating the quantum dots of a given cell, which would desaturate the colors. Eliminating this color crosstalk/desaturation is another advantage of thinner quantum dot layers.
  • Quantum dots raise concerns about lifetime and potential color shifting, particularly if they are driven hard. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum-dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs.
  • While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch (assuming MojoVision keeps using two reds to get enough red brightness). While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron OLED, a smaller pixel, such as with a single-emitter-with-multiple-colors approach (e.g., Porotech), would be better for semi-collimating the light with microlenses as needed by waveguides (more efficient due to étendue; see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).
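The pitch arithmetic in that last point can be checked directly, assuming the two-red, one-green, one-blue 2x2 subpixel layout described:

```python
# 2x2 subpixel layout (R, R, G, B) on a 1.87-micron subpixel pitch.
subpixel_pitch_um = 1.87
pixel_pitch_um = 2 * subpixel_pitch_um   # two subpixels per pixel side
print(pixel_pitch_um)                    # 3.74
print(round(7.5 / pixel_pitch_um, 2))    # ~2x finer than a ~7.5-micron OLED pitch
```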

10:20 Porotech MicroLED

I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.

While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel, since only one LED per pixel is required. It also requires only two electrical connections (power and ground) to the LED per pixel.

However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.
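As a toy sketch of what such a drive scheme implies (all values are invented for illustration; a real backplane would need precise per-pixel analog current control far beyond anything shown here):

```python
# Hypothetical single-emitter drive: the current level selects the
# emitted color, and the duty cycle sets perceived brightness.
CURRENT_FOR_COLOR_MA = {"red": 0.01, "green": 0.1, "blue": 1.0}  # made-up values

def drive_level(color, brightness):
    """Return (current_mA, duty_cycle) for one color field of one pixel."""
    current = CURRENT_FOR_COLOR_MA[color]    # must be held very precisely
    duty = max(0.0, min(1.0, brightness))    # brightness via time, not current
    return current, duty

# One frame = three sequential color fields from the same emitter:
frame = [drive_level(c, b) for c, b in
         [("red", 0.8), ("green", 0.5), ("blue", 0.2)]]
print(frame)  # [(0.01, 0.8), (0.1, 0.5), (1.0, 0.2)]
```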

Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.

12:55 Brilliance Color Laser Combiner

Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.

While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.

14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)

Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter whether they shoot from the temple off a hologram directly into the eye, like North Focals, or use a waveguide, like TriLite; the fatal weak link is the LBS display device itself.

It has reached the point that when I see a device with an LBS display, I'm pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by conflating "scan lines" with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled into a distorted and non-uniform scanning process.

Like Brilliance above, TriLite's core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they exit at slightly different angles. This causes the various colors to diverge by different amounts in the scanning process. TriLite uses its "Trajectory Control Module" (TCM) to compute how to resample the image to align the red, green, and blue.

TriLite then compounds its problems with LBS by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with the Lissajous scanning process, also used by Oqmented (TriLite uses Infineon's scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.
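A small simulation illustrates the non-uniformity. Sampling a sinusoidal (Lissajous-style) slow-axis scan shows the beam dwelling far longer near the screen edges than at the center, which is why image pixels must be resampled onto the scan. The mirror frequency below is an arbitrary illustrative value:

```python
import math

# Sketch of why Lissajous scanning is awkward for imagery: with a sinusoidal
# vertical deflection y = sin(2*pi*fy*t), the beam sweeps the screen
# non-uniformly, spending most of its time near the turnaround points at the
# edges and the least time mid-screen.
fy = 20.0            # hypothetical slow-axis mirror frequency (whole cycles)
samples = 100_000
ys = [math.sin(2 * math.pi * fy * t / samples) for t in range(samples)]

# Compare dwell time (sample counts) in two equal-height bands:
center = sum(1 for y in ys if -0.1 <= y <= 0.1)   # band around mid-screen
edge = sum(1 for y in ys if 0.8 <= y <= 1.0)      # band near the top edge
print(center, edge)  # the edge band gets several times the dwell of the center
```
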

The information and images below have been collected from TriLite’s website.

As far as I have seen, it is a myth that LBS has any advantage in size, cost, or power over LCOS for the same image resolution and FOV. As discussed in part 1, Avegant generated the comparison below, comparing North Focals LBS glasses with a ~12-degree FOV and roughly 320×240 resolution to Avegant's 720×720, 30-degree LCOS-based glasses.

Below is a selection (from dozens) of related articles I have written on various LBS display devices:

Next Time

I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason and my discussion of the Apple Vision Pro.

DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)

27 March 2023 at 19:46

Introduction – Contrast in Approaches and Technologies

This article will compare and contrast the Vuzix Ultralight, Lumus Z-lens, and DigiLens Argo waveguide-based AR prototypes I saw at CES 2023. I discussed these three prototypes with SadlyItsBradly in our CES 2023 video. It will also briefly discuss the related Avegant AR/VR/MR 2022 and 2023 presentations about their new smaller LCOS projection engine, and the Magic Leap 2's LCOS design, to show some other projection engine options.

It will go a bit deeper into some of the human factors of the DigiLens Argo. This is not to pick on the Argo, but because it has more features and demonstrates some common traits and issues of trying to support a rich feature set in a glasses-like form factor.

When I quote various specs below, they are all manufacturer’s claims unless otherwise stated. Some of these claims will be based on where the companies expect the product to be in production. No one has checked the claims’ veracity, and most companies typically round up, sometimes very generously, on brightness (nits) and field of view (FOV) specs.

This is a somewhat long article, and the key topics discussed include:

  • MicroLED versus LCOS Optical engine sizes
  • The image quality of MicroLED vs. LCOS and Reflective (Lumus) vs. Diffractive waveguides
  • The efficiency of Reflective vs. Diffractive waveguides with MicroLEDs
  • The efficiency of MicroLED vs. LCOS
  • Glasses form factor (using Digilens Argo as an example)

Overview of the prototypes

Vuzix Ultralite and Oppo Air Glass 2

The Vuzix Ultralite and Oppo Air Glass 2 (top two on the right) each have a 640 by 480 pixel, green-only Jade Bird Display (JBD) per eye and were discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7).

They each weigh about 38 grams, including frames, processing, wireless communication, and batteries. Both are self-contained, with integrated wireless, battery, and processing.

Vuzix developed their own glass diffractive waveguides and optical engines for the Ultralite. They claim a 30-degree FOV with 3,000 nits.

Oppo uses resin plastic waveguides and a MicroLED optical engine developed jointly with Meta Bounds. I had previously seen prototype resin plastic waveguides from other companies for several years, but this is the first time I have seen them in a product getting ready for production. The glasses (described in a 1.5-minute YouTube/CNET video) include microphones and speakers for applications including voice-to-text and phone calls. Oppo also plans to support vision correction with lenses built into the frames. Oppo claims the Air Glass 2 has a 27-degree FOV and outputs 1,400 nits.

Lumus Z-Lens

Lumus’s Z-Lens (third from the top right) supports up to a 2K by 2K full/true-color LCOS display with a 50-degree FOV. Its FOV covers 3 to 4 times the area of the other three headsets’, so it must output more than 3 to 4 times the total light. It supports about 4.5x the number of pixels of the DigiLens Argo and over 13x the pixels of the Vuzix Ultralite and Oppo Air Glass 2.
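The pixel-count ratios quoted above check out with quick arithmetic (assuming "2K" means 2048):

```python
# Verifying the pixel-count claims from the resolutions quoted in the text.
z_lens = 2048 * 2048      # Lumus Z-Lens, "2K by 2K" (assuming 2048)
argo   = 1280 * 720       # DigiLens Argo
jbd    = 640 * 480        # Vuzix Ultralite / Oppo Air Glass 2 (JBD panel)

print(f"Z-Lens vs Argo:      {z_lens / argo:.1f}x")   # about 4.5x
print(f"Z-Lens vs Ultralite: {z_lens / jbd:.1f}x")    # over 13x
```
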

The Z-Lens prototype is a demonstration of display capability and, unlike the other three, is not self-contained and has no battery or processing. A cable provides the display signal and power for each eye. Lumus is an optics waveguide and projector engine company and leaves it to its customers to make full-up products.

Digilens Argo

The DigiLens Argo (bottom, above right) uses a 1280 by 720 full/true color LCOS display. The Argo has many more features than the other devices, with integrated SLAM cameras, GNSS (GPS, etc.), Wi-Fi, Bluetooth, a 48mp (with 4×4 pixel “binning” like the iPhone 14) color camera, voice recognition, batteries, and a more advanced CPU (Qualcomm Snapdragon 2). Digilens intends to sell the Argo for enterprise applications, perhaps with partners, while continuing to sell waveguides and optical engines as components for higher-volume applications. As the Argo has a much more complete feature set, I will discuss some of the pros and cons of the human factors of the Argo design later in this article.

Through the Lens Images

Below is a composite image from four photographs taken with the same camera (OM-D E-M5 Mark III) and lens (fixed 17mm). The pictures were taken at conferences, handheld, and not perfectly aligned for optimum image quality. The projected display and the room/outdoor lighting have a wide range of brightness between the pictures. None of the pictures have been resized, so the relative FoVs have been maintained, and you get an idea of the image content.

The Lumus Z-lens reflective waveguide has a much bigger FOV, significantly more resolution, and exhibits much better color uniformity with the same or higher brightness (nits). It also appears that reflective waveguides have a significant efficiency advantage with both MicroLEDs (and LCOS), as discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7). It should also be noted that the Lumus Z-lens prototype has only the display with optics and has no integrated processing, communication or battery. In contrast, the others are closer to full products.

A more complex issue is that of power consumption versus brightness. LCOS engines today are much more efficient (by 10x or more) for full-screen bright images than MicroLEDs with similar waveguides. MicroLEDs’ big power advantage occurs when the content is sparse, as their power consumption is roughly proportional to the average pixel value, whereas with LCOS, the whole display is illuminated regardless of the content.
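A toy model of this trade-off, with assumed (not measured) power numbers chosen only to reflect the roughly 10x full-screen gap, shows where the crossover lies:

```python
# Illustrative sketch of the sparse-content trade-off described above.
# The wattages are assumed placeholders, not measurements.
LCOS_FULL_W = 0.10        # assumed LCOS engine power, roughly content-independent
MICROLED_FULL_W = 1.00    # assumed MicroLED power at 100% average pixel level
                          # (reflecting the ~10x gap for full-screen bright images)

def microled_power(apl: float) -> float:
    """MicroLED power scales roughly with average pixel level (APL)."""
    return MICROLED_FULL_W * apl

# APL below which the MicroLED engine draws less than the LCOS engine:
crossover = LCOS_FULL_W / MICROLED_FULL_W
print(f"MicroLED beats LCOS below ~{crossover:.0%} average pixel level")
```
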

If and when MicroLEDs support full color, the efficiency of nits-per-Watt will be significantly lower than monochrome green. Whatever method produces full color will detract from the overall electrical and optical efficiency. Additionally, color balancing for white requires adding blue and red light with lower nits-per-Watt.

Some caveats:

  • The Lumus Z-Lens is a prototype and does not have all the anti-reflective and other coatings of a production waveguide. Lumus uses an LCOS device with ~3-micron pixels, which fits 1440 by 1440 within the ~50-degree FOV supported by the optics. Lumus is working with at least one LCOS maker to get a ~2-micron pixel size to support 2K by 2K resolution with the same size display. The image is cut off on the right-hand side by the camera, which was rotated into portrait mode to fit inside the glasses.
  • The Digilens through the lens image is from Photonics West in 2022 (about one year old). Digilens has continued to improve its waveguide since this picture was taken.
  • The Vuzix picture was taken via Vuzix Shield, which uses the same waveguide and optics as the Vuzix Ultralight.
  • The Oppo image was taken at the AR/VR/MR 2023 conference.

Optical Engine Sizes

Vuzix has an impressively small optical engine driving Vuzix’s diffractive waveguides. Below left is a comparison of Vuzix’s older full-color DLP engine with an in-development color X-Cube engine and the green MicroLED engine used in the Vuzix Ultralite™ and Shield. In the center below is an exploded view of the Oppo and Meta Bounds glasses (a joint design, as they describe it) with their MicroLED engine, shown in their short CNET YouTube video. As seen in the still from the Oppo video, they have plans to support vision correction built into the glasses.

Below right is the Digilens LCOS engine, which uses a fairly conventional LCOS design (using Omnivision’s LCOS device, with the driver ASIC showing). The dotted line indicates where the engine blocks off the upper part of the waveguide. This blocked-off area carries over to the Argo design.

The Digilens Argo, with its more “conventional” LCOS engine, requires a large “brow” above the eye to hide it (more on this issue later). All the other companies have designed their engines to avoid this level of intrusion into the front area of the glasses.

Lumus had developed their 1-D pupil-expanding reflective waveguide for nearly two decades, and it needed a relatively wide optical engine. With the 2-D Maximus waveguide in 2021 (see: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures), Lumus demonstrated their ability to shrink the optical engine. This year, Lumus further reduced the size of the optical engine and its intrusion into the front lens area with their new Z-lens design (compare the two right pictures below of Maximus to Z-Lens).

Shown below are frontal views of the four lenses and their optical engines. The Oppo Air Glass 2 “disguises” the engine within the industrial design of a wider frame (and wider waveguide). The Lumus Z-Lens, with full color and about 3.5 times the FOV area of the others, has about the same frontal intrusion as the green-only MicroLED engines. The Argo (below right) stands out with the large brow above the eye (the rough location of the optical engine is shown with the red dotted line).

Lumus Removes the Need for Air Gaps with the Z-Lens

Another significant improvement with Lumus’s Z-Lens is that unlike Lumus’s prior waveguides and all diffractive waveguides, it does not require an air gap between the waveguide’s surface and any encapsulating plastics. This could prove to be a big advantage in supporting integrated prescription vision correction or simple protection. Supporting air gaps with waveguides has numerous design, cost, and optical problems.

A full-color diffractive waveguide typically has two or three waveguides sandwiched together, with air gaps between them plus an air gap on each side of the sandwich. Everywhere there is an air gap, there is also a desire for antireflective coatings to remove reflections and improve efficiency.

Avegant and Magic Leap Small LCOS Projector Engines

Older LCOS projection engines have historically had size problems. We are seeing new LCOS designs, such as the Lumus Z-lens (above), and designs from Avegant and Magic Leap that are much smaller and no more intrusive into the lens area than the MicroLED engines. My AR/VR/MR 2022 coverage included the article Magic Leap 2 at SPIE AR/VR/MR 2022, which discusses the small LCOS engines from both Magic Leap and Avegant. In our AWE 2022 video with SadlyItsBradley, I discuss the smaller LCOS engines by Avegant, Lumus (Maximus), and Magic Leap.

Below is what Avegant demonstrated at AR/VR/MR 2022 with their small “L” shaped optical engines. These engines have very little intrusion into the front lenses, but they run down the temple of the glasses, which inhibits folding the temple for storage like normal glasses.

At AR/VR/MR 2023, Avegant showed a newer optical design that reduced the footprint of their optics by 65%, including shortening them to the point that the temples can be folded, similar to conventional glasses (below left). It should be noted that what is called a “waveguide” in the Avegant diagram is very different from the waveguides used to show the image in AR glasses; Avegant’s waveguide is used to illuminate the LCOS device. Avegant, in their presentation, also discussed various drive modes of the LEDs to give higher brightness and efficiency in green-only and black-and-white modes. The 13-minute video of Avegant’s presentation is available at the SPIE site (behind SPIE’s paywall). According to Avegant’s presentation, the optics are 15.6mm long by 12.4mm wide, support a 30-degree FOV with 34 pixels/degree, and output 2 lumens in full color and up to 6 lumens in limited-color outdoor mode. According to the presentation, they expect about 1,500 nits with typical diffractive waveguides in the full-color mode, which would roughly double in the outdoor mode.

The Magic Leap 2 (ML2) takes reducing the optics one step further and puts the illumination LEDs and LCOS on opposite sides of the display’s waveguide (below and described in Magic Leap 2 at SPIE AR/VR/MR 2022). The ML2 claims to have 2,000 nits with a much larger 70-degree FOV.

Transparency (vs. Birdbath) and “Eye Glow”

Transparency

As seen in the pictures above, all the waveguide-based glasses have transparency on the order of 80-90%. This is a far cry from the common birdbath optics, with typically only 25% transparency (see Nreal Teardown: Part 1, Clones and Birdbath Basics). The former Osterhout Design Group (ODG) made birdbath AR glasses popular, first with their R6 and then with the R8 and R9 models (see my 2017 article ODG R-8 and R-9 Optic with OLED Microdisplays), which served as the models for designs such as Nreal and Lenovo’s A3.

ODG Legacy and Progress

Several former ODG designers have ended up at Lenovo, the design firm Pulsar, Digilens, and elsewhere in the AR community. I found pictures of Digilens VP Nima Shams wearing the ODG R9 in 2017 and the Digilens Argo at CES. When I showed the pictures to Nima, he pointed out the progress that had been made. The 2023 Argo is lighter, sticks out less far, has more eye relief, is much more transparent, has a brighter image to the eye, and is much more power efficient. At the same time, it adds features and processing not found on the ODG R8 and R9.

Front Projection (“Eye Glow”)

Another social aspect of AR glasses is Front Projection, known as “Eye Glow.” Most famously, the Hololens 1 and 2 and the Magic Leap 1 and 2 project much of the light forward. The birdbath optics-based glasses also have front projection issues but are often hidden behind additional dark sunglasses.

When looking at the “eye glow” pictures below, I want to caution you that these are random pictures and not controlled tests. The glasses display radically different brightness settings, and the ambient light is very different. Also, front projection is typically highly directional, so the camera angle has a major effect (and there was no attempt to search for the worst-case angle).

In our AWE 2022 Video with SadlyItsBradley, I discussed how several companies, including Dispelix and Digilens, are working to reduce front projection. Lumus’s reflective approach has inherent advantages in terms of front projection. The DigiLens Argo (pictures 2 and 3 from the right) has greatly reduced its eye glow. The Vuzix Shield (with the same optics as the Ultralite) has some front projection (and some on my cheek), as seen in the picture below (4th from the left). Oppo appears to have a fairly pronounced front projection, as seen in two short videos (video 1 and video 2).

DigiLens Argo Deeper Look

DigiLens has primarily been a maker of diffractive waveguides, but through the years it has made several near-product demonstrations. A few years ago, they went through a major management change (see my 2021 article, DigiLens Visit), and with the new management came changes in direction.

Argo’s Business Model

I’m always curious when a “component company” develops an end product. I asked DigiLens to help clarify their business approaches and received the following information (with my edits):

  1. Optical Solutions Licensing – where we provide solutions to our licensees to build their own waveguides using our scalable printing/contactless copy process. Our licensees can design their own waveguides, enabled by DigiLens’ software tools. This business is aimed at higher-volume applications from larger companies, mostly focused on, but not limited to, the consumer side of the head-worn market.
  2. Enterprise/Industrial Products – ARGO is the first product from DigiLens that targets the enterprise and industrial market as a full solution. It will be built to scale and meet its target market’s compliance and reliability needs. It uses DigiLens optical technology in the waveguides and projector and is built by a team with experience shipping thousands of enterprise and industrial glasses from Daqri, ODG, and RealWear.

Image Quality

As I was already familiar with DigiLens’ image quality, I didn’t check it out that much with the ARGO; rather, I was interested in the overall product concept. Over the last several years, I have seen improved image quality, including better uniformity and progress on the “eye glow” issue (discussed earlier).

For the type of applications the “enterprise market” ARGO is trying to serve, absolute image quality may not be nearly as important as other factors. As I have often said, “Hololens 2 proves that image quality is not what matters to the customers that use it” (see this set of articles discussing the Hololens 2’s poor image quality). For many AR markets, the displayed information consists of simple indicators such as arrows, a few numbers, and lines. In terms of color, it may be good enough if only a few key colors are easily distinguishable.

Overall, Digilens has similar issues with color uniformity across the field of view as all other diffractive waveguides I have seen. In the last few years, they have gone from having poor color uniformity to being among the better diffractive waveguides I have seen. I don’t think any diffractive waveguide would be widely considered good enough for movies and good photographs, but they are good enough to show lines, arrows, and text. But let me add a key caveat: what all companies demonstrate are almost certainly cherry-picked samples.

Field of View (FOV)

While the Argo’s 30-degree FOV is considered too small for immersive games, it should be more than sufficient for many “enterprise applications.” I discussed why very large FOVs are often unnecessary in AR in this blog’s 2019 article FOV Obsession. Many have conflated VR immersion with AR applications, which need to support key information with high transparency, light weight, and hands-free operation. As Professor and decades-long AR advocate Thad Starner pointed out, requiring the eye to move too much causes discomfort. I make this point because a very large FOV comes at the expense of weight, power, and cost.

Key Feature Set

The diagram below is from DigiLens on the ARGO and outlines the key features. I won’t review all the features, but I want to discuss some of their design choices. Also, I can’t comment on the quality of their various features (SLAM, WiFi, GPS, etc.) as A) I haven’t extensively tried them, and B) I don’t have the equipment or expertise. But at least on the surface, in terms of feature set, the Argo compares favorably to the Hololens 1 and 2, albeit with a smaller FOV than the Hololens 2 but much better image quality.

Audio Input for True Hands-Free Operation

As stated above, DigiLens’ management team includes experience from RealWear. RealWear acquired a lot of technology from Kopin’s Golden-i. Like ARGO, Golden-i was a system-product outgrowth from display-component maker Kopin, with a legacy going back before 2011, when I first saw Golden-i. Even though Kopin was a display device company, Golden-i emphasized voice recognition with high accuracy, even in noisy environments. Note the inclusion of 5 microphones on the ARGO.

Most realistic enterprise-use models for AR headsets include significant, if not exclusively, hands-free operation. The basic idea of mounting a display on the user’s head is so they can keep their hands free. You can’t be working with your hands while holding a controller.

While hand-tracking cameras remove the need for a physical controller, they do not free up the hands, as the hands are busy making gestures rather than performing the task. In the implementations I have tried thus far, gestures are even worse than physical controllers in terms of distraction, as they force the user to focus on making the gestures (barely, sometimes) work. One of the most awful experiences I have had in AR was trying to type a long WiFi password (hidden behind asterisks as I typed) using gestures on a Hololens 1 (my hands hurt just thinking about it; it was a beyond-terrible user experience).

Similarly, as I discussed with SadlyItsBradley about Meta’s BCI wristband, using nerve and/or muscle-detecting wristbands still does not free up the hands. The user still has their hands and mental focus slaved to making the wristband work.

Voice control seems to have big advantages for hands-free operation if it can work accurately in a noisy environment. There is a delicate balance between not recognizing words and phrases, false recognition or activation, and becoming too burdensome with the need for verification.

Skull-Gripping “Glasses” vs. Headband or Open Helmet

In what I see as a futile attempt to sort of look like glasses (big ugly ones at that), many companies have resorted to skull-gripping features. Looking at the skull profile (right), there really isn’t much that will stop the forward rotation of front-heavy AR glasses unless they wrap around the lower part of the occipital bone at the back of the head.

Both the ARGO (below left) and Panasonic’s (Shiftall division) VR headsets (right two images below) take the concept of skull-grabbing glasses to almost comic proportions. Panasonic includes a loop for a headband, and some models also include a forehead pad. The Panasonic Shiftall uses pads pressed against the front of the head to support the front, while the ARGO uses an oversized nose bridge, as found on many other AR “glasses.”

ARGO supports a headband option, but it requires the ends of the temples with the skull-grabbers to be removed and replaced by the headband.

As anyone who knows anything about human factors with glasses knows, the ears and the nose cannot support much weight, and the ears and nose will get sore if much weight is supported for a long time.

Large soft nose pads are not an answer. There is still too much weight on the nose, and the variety of nose shapes makes them not work well for everyone. In the case of the Argo, the large nose pads also interfere with wearing glasses; the nose pads are located almost precisely where the nose pads for glasses would go.

Bustle/Bun on the Back Weight Distribution – Liberating the Design

As was pointed out by Microsoft with their Hololens 2 (HL2), weight distribution is also very important. I don’t know if they were the first with what I call “the bustle on the back” approach, but it was a massive improvement, as I discussed in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! Several others have used a similar approach, most notably the Meta Quest Pro VR (which has very poor passthrough AR, as I discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough). Another feature of the HL2 ergonomics is the forehead pad, which eliminates weight from the nose and frees up that area to support ordinary prescription glasses.

The problem with the sort-of-glasses form factor so common in most AR headsets today is that it locks the design into other poor decisions, not the least of which is putting too much weight too far forward. Once it is realized that these are not really glasses, it frees up other design features for improvement. Weight can be taken out of the front and moved to the back for better weight distribution.

ARGO’s Eye-Relief Missed Opportunity for Supporting Normal Glasses

Perhaps the best ergonomic/user feature of the Hololens 1 & 2 over most other AR headsets is that they have enough eye relief (distance from the waveguide to the eye) and space to support most normal eyeglasses. The ARGO’s waveguide and optical design have enough eye relief to support wearing most normal glasses, yet it still requires specialized prescription inserts.

You might notice some “eye glow” in the CNET picture (above right). I think this is not from the waveguide itself but is a reflection off of the prescription inserts (likely, they don’t have good anti-reflective coatings).

A big part of the problem with supporting eyeglasses goes back to trying to maintain the fiction of a “glasses form factor.” The nose bridge support is required to hold up the headset, but it gets in the way of the glasses. Additionally, hardware in the “brow” over the eyes, which could have been moved elsewhere, may also interfere.

Another technical issue is the location and shape of their optical engine. As discussed earlier, the Digilens engine shape causes issues with jutting into the front of glasses, resulting in a large brow over the eyes. This brow, in turn, may interfere with various eyeglasses.

It looks like Argo started with the premise of looking like glasses, putting form ahead of function. As it turns out, they have what is, for me, an unhappy compromise that neither looks like glasses nor has the Hololens 2’s advantage of working with most normal glasses. Starting with comfort and functionality as primary would have led to a different form factor for the optical engine.

Conclusions

While MicroLEDs may hold many long-term advantages, they are not ready to go head-to-head with LCOS engines regarding image quality and color. Multiple companies are showing LCOS engines that are more than competitive in size and shape with the small MicroLED engines, while also supporting much higher resolutions and larger FOVs.

Lumus, with their Z-Lens 2-D reflective waveguides, seems to have a big advantage in image quality and efficiency over the many diffractive waveguides. Allowing the Z-lens to be encased without an air gap adds another significant advantage.

Yet today, most waveguide-based AR glasses use diffractive waveguides. The reasons include that there are many sources of diffractive waveguides and that companies can make their own custom designs, whereas Lumus controls its reflective waveguide I.P. Additionally, Lumus has only recently developed 2-D reflective waveguides, which dramatically reduce the size of the projection engine driving their waveguides. But the biggest reason for using diffractive waveguides is that Lumus waveguides are thought to be more expensive, although Lumus and their new manufacturing partner Schott Glass claim that they will be able to make waveguides at competitive or better costs.

A combination of cost, color, and image quality will likely limit MicroLEDs to use in ultra-small and light glasses with low amounts of visual content, known as “data snacking” (think arrows and simple text, not web browsing and movies). This market could be attractive in enterprise applications. I’m doubtful that consumers will be very accepting of monochrome displays. I’m reminded of a quote from an IBM executive in the 1980s who, when asked whether resolution or color was more important, said: “Color is the least necessary and most desired feature in a display.”

Not to pick on Argo, but it demonstrates many of the issues with making a full-featured device in a glasses form factor; as SLAM (with multiple spatially separated cameras), processing, communication, batteries, etc., are added, the overall design strays away from looking like glasses. As I wrote in my 2019 article, Starts with Ray-Ban®, Ends Up Like Hololens.

The post DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8) first appeared on KGOnTech.

MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)

13 March 2023 at 01:54

Introduction

My coverage of CES and SPIE AR/VR/MR 2023 continues, this time on MicroLEDs. MicroLEDs companies were abundant in the booths, talks, and private conversations at AR/VR/MR 2023.

The list on the right shows some of the MicroLED companies I have looked at in recent years. Companies marked with a blue asterisk “*” are those I talked with at AR/VR/MR 2023, with Jade Bird Display (JBD), PlayNitride, Porotech, and MICLEDI having booths in the exhibition. The green bracket on the left indicates companies where I have seen a MicroLED display generating an image (not just one or a few LEDs). Inside the gold rectangle are MicroLED companies that have been bought by system companies. MicroLEDs are the display technology where tech giants Meta, Apple, and Google are placing their bets for the future.

A much more extensive list of companies involved in MicroLED development can be found at microled-info.com, a site dedicated to tracking the MicroLED industry. Microled-info’s parent company, Metalgrass, also organized the MicroLED Association, and I spoke at their Feb. 7th Webinar (but you have to join the association to see it).

The efficiency of getting the Lambertian light that most LEDs emit through a waveguide to the eye is a major issue I have studied for years, and it will be covered first. Then, after covering recent MicroLED prototypes and discussions, I have included an appendix with background information in the subsections “What is a MicroLED company,” “Microdisplay vs. Direct View Pixel Sizes,” and “Multicolor, Full Color, or True Color.”

MicroLEDs and Waveguides: Millions of Nits In to Thousands of Nits Out with Waveguides

When first hearing of MicroLEDs outputting millions of nits, you might think it must be overkill to deliver thousands of nits to the eye for outdoor use with a waveguide. But due to pupil expansion and light losses, only a tiny fraction of the light-in makes it to the eye. The figure (right) diagrams the efficiency issues with waveguides using a diffractive waveguide.

Most LEDs output diffuse (roughly) Lambertian light, whereas waveguides require collimated light. Typically, micro-optics such as microlens arrays (MLAs) are placed on top of the MicroLEDs to semi-collimate the light. These optics increase the nits; typically, the nits quoted for a MicroLED display are after the micro-optics. A waveguide’s small entrance area severely limits the light due to a physics property known as “etendue,” causing this to be called “etendue loss.” Then there are the losses due to the pupil expansion/replication structures (diffraction gratings in the case of diffractive waveguides, semi-reflective “facets” in the case of reflective waveguides). Finally, the light from the small entrance area ends up spread out over the much larger exit area to support seeing the image over the whole FOV as the eye moves.
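As a rough illustration of how millions of nits in become thousands of nits out, the losses above compound multiplicatively. The individual loss factors below are illustrative assumptions for the sketch, not measured values for any real waveguide:

```python
# Hypothetical sketch of how millions of nits into a diffractive waveguide
# become thousands of nits at the eye. The loss fractions are illustrative
# assumptions, not measurements of any particular waveguide.

nits_in = 1_000_000  # MicroLED brightness after micro-optics (nits)

losses = {
    "etendue (small entrance area)": 0.05,  # fraction of light coupled in
    "pupil expansion/replication":   0.25,  # grating/facet losses
    "spread over larger exit area":  0.20,  # light spread across the eyebox
}

nits_out = nits_in
for name, transmitted in losses.items():
    nits_out *= transmitted

print(f"nits at the eye: {nits_out:,.0f}")  # 2,500 nits from 1M nits in
```

Even with generous assumed numbers, only a fraction of a percent of the input light reaches the eye, which is why MicroLED brightness that sounds like overkill is not.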

Multiple Headsets Using Diffractive Waveguides with JBD MicroLED

I found it an interesting dichotomy that while all the other prototypes I have seen using Jade Bird Display (JBD) MicroLEDs, including those from Vuzix, Oppo, TCL, Dispelix, and Waveoptics (before being acquired by Snap), use diffractive waveguides, JBD themselves showed a prototype 3-chip color-cube projector with a Lochn “clone” (with lesser image quality) of a Lumus 2D expanding reflective waveguide in their booth (I was asked not to photograph it). Then in the Playnitride booth, they featured Lumus reflective waveguides. I should note that while efficiency is a major factor, other design factors, including cost, will drive different decisions.

Reflective (Lumus) Waveguides are More Efficient than Diffractive Waveguides with MicroLEDs

According to Lumus, their 2-D reflective (Lumus) waveguides result in a 3 to 9 times larger entrance area, and their semi-reflective facets lose less light than diffraction gratings. The net result is that reflective waveguides can be 5 to >10 times more optically efficient than diffractive waveguides with the same microLEDs, a major advantage in brightness and power (= less heat and longer battery life). This efficiency advantage appears to have been playing out at AR/VR/MR 2023.

Playnitride prominently showed their MicroLEDs using Lumus 2D and older 1D reflective waveguides in their booth (below left and middle). Their full-color QD-MicroLEDs output only about 150K nits (compared to the millions of nits from others’ single-color native LEDs), so they needed a more efficient waveguide. Playnitride uses Quantum Dot conversion of blue LEDs to produce red and green.

Lumus CTO Dr. Yochay Danziger brought a 2D expanding waveguide with input optics that he held up to Porotech’s MicroLEDs. I captured a quick handheld (and thus not very good) shot (with ND filters to reduce the very bright image) of Porotech’s green MicroLED via Lumus’s handheld waveguide (above right).

Lumus was the only company featured in the Schott Glass booth at AR/VR/MR 2023. The often-asked question about Lumus is whether their waveguides can be made in volume production. The Schott Glass representative assured me they could make Lumus’s 2-D waveguides in volume production.

I plan on covering Lumus’s new Z-Lens 2D waveguide, smaller than their two-year-old Maximus 2D waveguide, in an upcoming article. In the meantime, I discussed the Z-Lens in the CES 2023 video with SadlyItsBradley.

Other Optics (ex., Bird Bath, Freeform, and VR-Pancake) and Micro-OLEDs

I want to note here that while MicroLEDs are hundreds to over a thousand times brighter than Micro-OLEDs, they are likely well more than five years away from having anywhere near the same color control and uniformity. Thus, designs that favor image quality over brightness and use optics that are much more efficient than waveguides, such as birdbath, freeform, and VR-pancake optics, will continue to use Micro-OLEDs or LCDs for the foreseeable future. Micro-OLEDs are expected to continue getting brighter, with some claiming roadmaps to about 30K nits.

Jade Bird Display (JBD) Based AR Glasses

Jade Bird Display (JBD) is the only company I know to be shipping MicroLEDs in production. All working headsets I have seen use JBD’s 640×480 green (only) MicroLEDs, including ones from Vuzix (Ultralite and Shield), Oppo, and Waveoptics (shown in 2022 before being acquired by Snap). JBD is developing devices supporting higher pixel depth and higher resolution.

Also, as background to MicroLEDs in general, as well as JBD and the glasses using their MicroLEDs, there is my 2022 blog article AWE 2022 (Part 6) – MicroLED Microdisplays for Augmented Reality and the associated video with SadlyItsBradley. Additionally, there is my 2021 article on JBD and WaveOptics in News: WaveOptics & Jade Bird Display MicroLED Partnership.

The current green MicroLEDs support only 4 bits per pixel, or 16 (2^4) brightness levels, and will show contour lines in smoothly shaded areas. I hear that JBD’s future designs will support more levels. While I have seen continuous improvement in pixel-to-pixel brightness differences through the years, and while these are the most uniform MicroLED devices I have seen, there is still visible “grain” in what should be a solid area.
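A quick sketch (illustrative only) of why 4-bit drive produces contour lines: a smooth 8-bit gradient collapses to just 16 distinct output levels, so each visible band covers a 16-value-wide run of the input:

```python
# Why 4 bits per pixel shows contouring: quantizing a smooth 8-bit
# gradient (0..255) leaves only 16 distinct brightness levels, which
# the eye sees as banding ("contour lines") in shaded areas.

bit_depth = 4
levels = 2 ** bit_depth      # 16 brightness levels
step = 256 // levels         # each level spans 16 input values

gradient = list(range(256))                       # smooth 8-bit ramp
quantized = [(v // step) * step for v in gradient]

print(len(set(quantized)))   # 16 distinct output values
```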

Vuzix

At CES 2023, Vuzix showed off the small size possible with their Ultralite glasses (left side below), which weigh only 38 grams (not much more than most conventional glasses). A tray full of display engines on public display was there to emphasize that they are in production. The comparison of light engines (below left) shows how compact the MicroLED green and color-cube projector engines are compared with Vuzix’s older (but true color) DLP design with similar resolution. I discussed Vuzix’s Ultralite and Shield in the CES 2023 video with SadlyItsBradley.

The Vuzix Shield and Ultralite share the same small green MicroLED engine. The combination of the engine and Vuzix waveguide is capable of up to 4,100 nits, which is bright enough to enable outdoor use. The power consumption of MicroLEDs is roughly proportional to the average pixel value (APV). Paul Travers, CEO of Vuzix, says that the Ultralites consume very little power and can work for two days in typical use on a charge. Vuzix has also improved their in-house developed waveguides, significantly reducing the forward projection (“eye-glow”).
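The APV relationship is worth a small sketch: because only lit pixels draw emission current, a sparse text or icon overlay uses a tiny fraction of full-white power. The numbers below are illustrative assumptions, not Vuzix specifications:

```python
# Sketch of MicroLED emission power scaling roughly with the average
# pixel value (APV): only lit pixels draw emission current. The APV
# figures below are illustrative assumptions, not Vuzix measurements.

def relative_power(average_pixel_value, max_value=255):
    """Approximate fraction of full-white emission power at a given APV."""
    return average_pixel_value / max_value

# A sparse green text/icon overlay might average ~5% of full scale:
print(relative_power(12.75))   # 0.05, i.e. ~5% of full-white power
```

This is why AR glasses that display mostly small text and icons can get multi-day battery life despite a very bright display engine.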

Vuzix has been very involved with several MicroLED companies, as discussed with SadlyItsBradley in our AWE 2022 Video.

Oppo

At AR/VR/MR 2023, Oppo showed me their JBD green MicroLED-based glasses with a form factor similar to the Vuzix Ultralite. The overall image quality and resolution seem similar on casual inspection. The Vuzix waveguides’ diffraction gratings seem less noticeable from the outside, but I have not compared them side by side in the same conditions.

TCL and JBD X-Cube Color

At CES 2023, TCL demonstrated a multicolor prototype that combines three chips (R, G, and B) with an X-Cube (using a Lochn reflective waveguide). Vuzix, in a 2020 concept video, and Meta (Facebook), in a 2019 patent application, have shown using three waveguides to combine the three primary colors (below right). I discussed the TCL glasses with the JBD color X-Cube design and some of the issues with X-Cubes in the CES video with SadlyItsBradley.

The TCL glasses appear to use a diffraction grating waveguide that is very different from others I have seen due to the very big steps in the transmission of light across the exit grating (right). This waveguide differs from the reflective waveguide JBD was showing in their booth and from other diffractive waveguides. I have seen diffractive waveguides that were non-uniform, but never with such large steps in the output gratings. While I didn’t get a chance to see an image through the TCL glasses, the reports I got from others were that the image quality was not very good.

Goertek/Goeroptics Design and Manufacturing JBD Projection Engines

In the CES 2023 TCL video, I discussed some of the issues associated with X-Cube color combining and the problems with aligning the three panels. At the AR/VR/MR conference, the Goeroptics division of Goertek showed that they were making both green-only and color X-Cube designs for JBD’s MicroLEDs (slide from their presentation below). While Goertek may not be a household name, they are a very large optics and non-optics designer and OEM for many famous brands, including giants such as Apple, Microsoft, Sony, Samsung, and Lenovo.

Porotech, Ostendo, and Innovation Semiconductor color tunable LEDs

I met Porotech in their private suite at CES and their booth at AR/VR/MR 2023. They have already received much attention on this blog in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, AWE 2022 (Part 6) – MicroLED Microdisplays for Augmented Reality, and my CES 2023 video with SadlyItsBradley on Porotech. They have been making a lot of news in the last year with their development of single-color InGaN red, green, and blue MicroLEDs and particularly their single-emitter color-tunable LED (what Porotech calls DynamicPixelTuning® or DPT®).

Below is a very short video I captured in the Porotech booth with a macro lens of their DynamicPixelTuning demo. I apologize for the camera changing focus when I switched from still to video mode and for the blooming due to the wide range of brightness as the color changes. The demo shows the whole display changing color, as Porotech does not have a backplane that can change colors pixel by pixel.

Porotech showed a combination of motion and color changing with their DynamicPixelTuning

At CES 2023, I was reminded by Ostendo, best known for their color-stacked MicroLED technology, that they had developed tunable-color LEDs several years ago. Sure enough, Ostendo presented the paper III-nitride monolithic LED covering full RGB color gamut in an SPIE journal in February 2016. I have not seen evidence that Ostendo has come close to pursuing it beyond the single-LED prototype stage, as Porotech has done with their DynamicPixelTuning.

The recent startup Innovation Semiconductor (below) is developing technology to integrate the control transistor circuitry into the InGaN substrate, avoiding the more common hybrid InGaN-plus-CMOS approaches almost all others are using. They are also developing a “V-groove” technology for making color-tunable LEDs. Innovation Semi cites work by the University of California at Santa Barbara (see paper 1 and paper 2) plus their own work suggesting that V-grooves may be a more efficient way to produce color-tunable LEDs than the approach taken by Porotech and Ostendo.

A major concern I have with Innovation Semi’s approach to integrating the control transistors in GaN is whether they will be able to integrate enough control circuitry without making the devices too expensive and/or making the pixel size bigger.

PlayNitride (Blue with QD Conversion Spatial Color)

PlayNitride demonstrated its full-color MicroLED technology, which uses blue LEDs with Quantum Dot (QD) conversion to produce red and green. At 150K nits, they are extremely bright compared to Micro-OLEDs but are much less bright than native red, green, and blue MicroLEDs from companies including JBD and Porotech.

As discussed earlier, PlayNitride showed their MicroLEDs working with Lumus waveguides. But even though Lumus waveguides are more efficient than diffractive waveguides, 150K nits from the display are not bright enough for practical uses. They are about 1/10th the brightness of the native MicroLEDs of JBD and Porotech, and their pixels are bigger.

PlayNitride was the only company showing fairly high-resolution (1K by 1K and 1080p) full-color single-chip MicroLED microdisplays, though these are only prototypes. Still, the green and red were substantially weaker than the blue, as seen in the direct (no waveguide) macro photograph of PlayNitride’s MicroLED below. Also, the red was more magenta (mixed red and blue).

Looking at the 2X zoom, one sees the “grain” associated with the pixel-to-pixel brightness differences in all colors common to all MicroLEDs demonstrated to date. Additionally, in the larger reddish wedge pointed at by the red arrow, there are color differences/grain at the pixel level.

Known issue with QD spatial color conversion and microdisplays

While quantum dot (QD) color conversion of blue and UV LEDs has been proposed as a method to make full-color MicroLEDs for many years, there are particular issues with using QD with very small microdisplay pixels. Normally the QD layer required for conversion stays roughly the same thickness as the pixels become smaller, resulting in a very tall stack of QD compared to the pixel size. It then requires some form of microscopic baffling to prevent the light from adjacent LEDs from illuminating the wrong color.
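The geometry problem is easy to see with numbers. The QD layer thickness below is an assumed, illustrative value, not a measured figure; the point is only that a roughly fixed conversion thickness becomes very tall relative to a shrinking pixel:

```python
# Illustrative (assumed) numbers showing why a roughly fixed QD layer
# thickness becomes a problem as pixels shrink: the QD "well" gets
# taller relative to the pixel width, so light from one LED can leak
# into neighboring subpixels' QD material without microscopic baffles.

qd_thickness_um = 6  # assumed thickness needed for full blue->red conversion

for pixel_um in (50, 10, 3):  # direct-view down to AR microdisplay pitch
    aspect = qd_thickness_um / pixel_um
    print(f"{pixel_um}um pixel: QD stack {aspect:.1f}x as tall as the pixel is wide")
```

At direct-view pixel sizes the QD layer is a thin coating; at AR microdisplay sizes it is a tall column, which is why the baffling problem is specific to microdisplays.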

Some have tried using thinner layers of QD and then relied on color filters to “clean up” the colors, but this comes with significant losses in efficiency and issues with heat. There are also issues with how hard the QD material can be driven before it degrades, which will limit brightness. Using spatial color itself has the issue of pixel sizes becoming too big for use in AR.

Many of these issues will be very different for larger direct-view and VR pixels. The thickness of the QD layers becomes a non-issue as the pixels get bigger, and spatial color has long been used with larger pixels. We have already seen different OLED technologies used based on pixel size and application; for example, color-filtered OLEDs won out in large-screen TVs, whereas native color OLED subpixels are used in smartphones, smartwatches, and microdisplay OLEDs.

MICLEDI Reconstituted InGaN Wafers

MICLEDI is a 2019 spinout of the IMEC research institute in Belgium and had a booth at AR/VR/MR 2023. They are fabless with a mix of MicroLED technologies they have developed (right). They claim to have single-color-per-die, spatial color (colors side by side), and stacked color technology. They have also developed GaN and aluminum indium gallium phosphide (AlInGaP) red. After some brief discussions in their booth and going through their handout material, their MicroLEDs seem like a bit of a grab bag of technology for license without a clear direction.

The one technology that seems to set MICLEDI apart is taking 100mm, 150mm, or 200mm GaN or AlInGaP epi wafers and making a “reconstituted” wafer with pick-and-placed known-good dies. These reconstituted wafers can then be “flip-chipped” onto today’s 300mm CMOS wafers. Today, almost all LED manufacturing is on much smaller wafers than mainstream production CMOS. For development today, companies are flipping small GaN wafers with spaced-out sets of LED arrays onto a larger CMOS wafer and throwing away most of the CMOS wafer.

Stacked MicroLEDs

While I didn’t see MIT at CES or AR/VR/MR 2023, MIT made news during AR/VR/MR with stacked color MicroLEDs. I don’t know the details, but it sounds similar to what Ostendo discussed, at least as far back as 2016 (see lower left). MICLEDI (above) has also developed a stated color LED technology where the LEDs are side by side.

The obvious advantage of stacked color is that the full-color pixel is smaller. But the disadvantage is that the LEDs and other circuitry above block light from the lower LEDs. The net result is that stacked LEDs will likely be much brighter than Micro-OLEDs but much less bright than other MicroLED technologies. Also concerning is that while red is the color with the least efficiency today, it seems to end up on the lowest layer.

With their mid-range brightness, stacked MicroLEDs would likely be targeted at non-waveguide optics designs. Ostendo has been developing its optical design, which tiles multiple small MicroLEDs to give a wider FOV.

Conclusions

Many giant and small companies are betting that MicroLEDs will be the future of microdisplay technology for VR and AR. At the same time, one should realize that none of the technologies is competitive today regarding image quality with Micro-OLED, LCOS, or DLP. There are many manufacturing and technical hurdles yet to be solved. Each of the methods for producing full-color MicroLEDs has advantages and disadvantages. The race in AR is to support full-color displays and higher resolution at high brightness, low power, and small size. I can’t see how multiple monochrome displays combined using X-Cubes, waveguides, or other methods are a long-term AR solution.

I often warn people that if someone does a demo first, that does not mean they will be in production first. Some technical approaches will yield a hand-crafted one-off demo faster but are not manufacturable. The warning is doubly true when it comes to color MicroLEDs. It is easier to rule out certain approaches than to say which approach or approaches will succeed. For MicroDisplay MicroLEDs used in AR, I think native LEDs will win out over color-converted (ex., QD) blue LEDs. A different MicroLED technology will likely be better for direct-view displays.

It will be interesting to see the market adoption of the new small-form-factor but green-only AR glasses. While they meet the form factor requirement of looking like glasses with acceptable weight, they don’t have great vision-correction solutions, and being green-only will limit consumer interest.

A continuing issue will be which optics work best with MicroLEDs. Part of this issue will be affected by the degree of collimation of the light from the LEDs. The 2-D reflective waveguides developed by Lumus have a significant efficiency advantage, but still, many more companies are using diffractive waveguides today.

Appendix: MicroLED Background Information

What is a MicroLED Company?

A successful MicroLED product takes more than making the LEDs; it requires making a complete display and being able to control it accurately at an affordable cost.

What constitutes a “MicroLED company” varies widely, from a completely fabless design company to one that might design and fab the LEDs, design the (typically) CMOS control backplane, and then do the assembly and electrical connection of the (typically) indium gallium nitride (InGaN) LEDs onto the CMOS backplane. Almost every company has a different “flow,” or order in which they assemble/combine the various component technologies. For example, shown below is the flow given by JBD, where they appear to apply the epi-layer to grow the LEDs on top of the CMOS wafer; other companies form the LEDs first on the InGaN wafer and then bond the finished LED arrays onto the finished CMOS control devices.

There is no common approach; there are as many different methods as there are companies, with some flows radically different from JBD’s. Greatly complicating matters is that most InGaN fabrication is done on 150mm to 200mm diameter wafers, whereas mainstream CMOS today is made on 300mm wafers. This mismatch leads to a variety of methods to address the issue, some of which are better suited to volume manufacturing than others.
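The cost of that wafer-size mismatch can be sketched with simple geometry. Treating wafers as ideal full circles (ignoring edge exclusion and flats), a small GaN wafer bonded to a 300mm CMOS wafer covers only a fraction of the CMOS area:

```python
# Back-of-the-envelope sketch (ideal circles, no edge exclusion) of how
# much of a 300mm CMOS wafer a smaller bonded GaN wafer actually covers.
import math

def wafer_area_mm2(diameter_mm):
    """Area of an idealized circular wafer in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

for gan_mm in (100, 150, 200):
    used = wafer_area_mm2(gan_mm) / wafer_area_mm2(300)
    print(f"{gan_mm}mm GaN on 300mm CMOS: {used:.0%} of CMOS area covered")
```

A 150mm GaN wafer covers only a quarter of the 300mm CMOS wafer's area, which illustrates why naive wafer-to-wafer bonding throws away most of the (expensive) CMOS wafer and why approaches like reconstituted wafers are attractive.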

Microdisplay vs. Direct View Pixel Sizes

What companies call MicroLED displays varies from wall-size monitors and TVs that can be more than a meter wide down to microdisplays typically less than 25mm in diagonal. As the table on the right shows, a small pixel on an AR microdisplay is about 300 to 600 times smaller than a pixel on a direct-view smartphone or smartwatch display. Pixel sizes get closer when comparing waveguide-based AR to VR pixels.
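If the 300-to-600-times figure is read as an area ratio, rough pixel pitches make it plausible. The pitches below are assumed, round illustrative numbers, not values from the article's table:

```python
# Assumed, illustrative pixel pitches (microns) showing the scale gap
# between AR microdisplays and direct-view displays. Linear pitch
# ratios compound quadratically in area. Not vendor specifications.

pitches_um = {
    "AR microdisplay": 3,    # small MicroLED microdisplay pixel
    "VR panel":        10,   # small-pixel VR display
    "smartphone":      50,   # direct-view phone display
    "smartwatch":      60,
}

ar = pitches_um["AR microdisplay"]
for name, pitch in pitches_um.items():
    ratio = pitch / ar
    print(f"{name}: {ratio:.0f}x linear pitch, {ratio ** 2:.0f}x pixel area")
```

A roughly 17-to-20x linear pitch difference versus a phone or watch becomes a roughly 280-to-400x area difference, in the same ballpark as the 300-to-600x range cited above.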

VR headsets started with essentially direct-view cell phone-type displays with some cheap optics to enable the human eye to focus but have been driving the pixel size down to improve angular resolution. The latest trend is to use pancake optics which can use even smaller pixels to enable smaller headsets.

There is some “bridging” between AR and VR with display types. For example, large combiner “bug-eye” AR often uses direct-view type displays common in VR. Some pancake optics-based VR displays use the same Micro-OLED displays used with AR birdbath optics.

With the radically different pixel sizes, it should not be surprising that the best technology to support that pixel size could change. Small microdisplays used by waveguide-based AR require microdisplays with semiconductor (usually CMOS) transistors. TVs, smartphones, and smartwatches use various types of thin film transistors.

Particularly regarding supporting color with MicroLEDs, it should be expected that the technologies used for microdisplays could be very different from those used for direct-view displays. For example, while quantum dot color conversion of blue or UV light might be a good method for supporting larger displays, it does not seem to scale well to the small pixel sizes used in AR.

Multicolor, Full Color, or True Color

While not “industry standard definitions,” for the sake of discussion, I want to define three categories of color display:

  1. Multicolor – Provides multiple identifiable colors, including, at a minimum, the primary colors of red, green, and blue. This type of display is useful for providing basic information and color coding it. Photographic images will look cartoonish at best, and there are typically very visible “contour lines” in what should be smoothly shaded surfaces.
  2. Full Color – This case supports a wide range of colors, and smooth surfaces will not have significant contours, but the color control across the display is not good enough for showing pictures of people.
  3. True Color – The display is capable of reasonably accurate color control across the display. Importantly, faces and skin tones, to which human vision is particularly sensitive, look good. If a display is “true color,” it should also be able to control the “white point”: whites will look white, and grays will be gray. There should be no visible contouring.

The images below are examples of “multicolor,” “full color,” and “true color” images.

JBD “Multicolor” Display
Playnitride “Full Color”
KGOnTech Test Pat. “True Color”

It might seem to some that my definition of “full” versus “true” color is redundant, but I have seen many demonstrations through the years where the display can display color but can’t control it well. In 2012, I wrote Cynics Guide to CES – Glossary of Terms. I called this issue “Pixar-ized” because there were so many demos of cartoon characters showing color saturation but none showing humans, which requires accurate color control.

Pixar-ized – The showing of only cartoons because the device can’t control color well and/or has low resolution. People have very poor absolute color perception but tend to be very sensitive to skin tones and know what looks right when viewing humans; the human visual system is very poor at judging whether the color is right in a cartoon. Additionally, it is very hard to tell resolution when viewing a cartoon.

I will add to the categories above “artistic” false/shifted-color images (see Playnitride’s above). Sometimes this is done because the work to calibrate the prototype has not been completed, even though the display can eventually support full color. Still, it is often done to hide problems.

I should note that what can be acceptable to the eye in a single-color image can look very bad when combined with other colors. Weak or dead pixels in a monochrome display turn into colorized or color-shifted pixels that stick out. Anyone with a single dead color within a pixel on a display has seen how the missing color sticks out. The images below are a simplified Photoshop simulation of what happens if random noise and dim areas occur in the various colors. The left image shows the effect on the full-color image, and the right image shows the same amount of random noise and dimming (in green) with the monochrome green (note: the image on the right is the grayscale image converted to green, not just the green channel from the true-color image). In the green-only image, you can see some noise and a slight dimming that might not even be noticeable, whereas in the color image it turns into a magenta-colored area.
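The mechanism behind that simulation can be sketched in a few lines: the same dimming that merely darkens a green-only pixel unbalances a white pixel toward magenta. This is a pure-Python illustration of the idea, not the actual Photoshop process used for the images:

```python
# Sketch of why per-pixel noise that is tolerable in monochrome becomes
# an obvious color shift in full color: dimming only the green channel
# of a white pixel leaves red and blue intact, shifting it to magenta.
# Illustrative only; not the Photoshop process used for the images above.

import random
random.seed(0)  # fixed seed so the sketch is repeatable

def dim_channel(value, max_dimming=30):
    """Randomly dim one 8-bit channel value, clamped to 0..255."""
    return max(0, min(255, value - random.randint(0, max_dimming)))

white = (255, 255, 255)
# The same noise applied only to green in a full-color white pixel:
noisy = (white[0], dim_channel(white[1]), white[2])

print(noisy)  # green dropped while red/blue stayed at 255 -> magenta cast
```

In a green-only display the identical dimming just makes that pixel slightly darker green, which the eye largely ignores; it is the intact red and blue around the weakened green that makes the error visible.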

In that same 2012 article, I wrote about “Stilliphobia,” the fear of showing still images. We are seeing this today with content that is very busy and/or has lots of motion to hide dead or weak pixels or random pixel values in the display. When I see a needlessly busy image or lots of motion, I immediately think they are trying to hide problems. Someone with a great-looking display should show pictures of people and smooth images for at least some content.

Most of today’s MicroLED displays are working on getting to multicolor displays and are far from true color. All MicroLED microdisplays I have seen to date have large pixel-to-pixel variations. No amount of calibration or mura correction will be enough to produce a good photographic image if the individual colors can’t be controlled accurately. The good news is that most of today’s AR applications only require a multicolor display.
