Canon R5 Mk ii Drops Pixel Shift High Res. – Is Canon Missing the AI Big Picture?

23 August 2024 at 03:31

Introduction

Sometimes, companies make what seem, on the surface, to be technically poor decisions. I consider this the case with Canon’s new R5 Mark ii (and R1) dropping support for sensor pixel-shift high resolution (what Canon calls IBIS High Res). Canon removed the IBIS High Res mode, which captures (as I will demonstrate) more real information, and seemingly added in-camera AI upscaling, which creates fake information. AI upscaling, if desired, can be done better and more conveniently on a computer, but pixel shift/IBIS High Res cannot.

The historical reason for pixel shift is to give higher resolution in certain situations. Still, because the cameras combine the images “in-camera” with the camera’s limited processing and memory resources plus simple firmware algorithms, they can’t deal with either camera or subject motion. Additionally, while the Canon R5 can shoot 20 frames per second (the R5 Mark ii can shoot 30), capturing the nine frames takes about half a second, and then the camera takes another ~8 seconds to process them. Rather than putting more restrictions on shooting, it would have been much easier and faster to save the raw frames (with the original sensor subpixels) to the flash drive for processing later by a much more capable computer using better algorithms that can constantly be improved.

Canon’s competitors, Sony and Nikon, already save raw files with their pixel-shift modes. I hoped Canon would see the light with the new R5 Mark ii (R5m2) and support saving the raw frames with IBIS High Res. Instead, Canon went in the wrong direction; they dropped IBIS High Res altogether and added an in-camera “AI upscaling.” The first-generation R5 didn’t have IBIS High Res at launch, but a later firmware release added the capability. I’m hoping the same will happen with the R5 Mark ii, only this time saving the RAW frames rather than creating an in-camera JPEG.

Features Versus Capabilities

I want to distinguish between a “feature” and a “capability.” Take, for example, high dynamic range. The classic photography problem is taking a picture in a room with a window with a view; you can expose for the inside of the room, in which case the view out the window will be blown out, or you can expose for the view out the window, in which case the room will look nearly black. The Canon R5 has an “HDR Mode” that takes multiple frames at different exposure settings and lets you save either just a single processed image or the processed image along with all the individual frames. The “feature” is making a single HDR image; the “capability” is rapidly taking multiple frames with different exposures and saving those frames.

The Canon R5 made IBIS High Res a feature only: it offered a single JPEG output without the capability of saving the individual frames taken with the sensor shifted by sub-pixel amounts. With saved raw frames, software on a computer could combine the frames better. Additionally, the software could deal with camera and subject motion, which produce unfixable artifacts in an in-camera IBIS High Res JPEG. As it is, when I use IBIS High Res, I typically take three pictures just in case, as one of them often has unfixable problems that can only be seen once viewed on a computer monitor. It would also be desirable to select how many frames to save; for example, saving more than one cycle of frames would help deal with subject or camera motion.

Cameras today support some aspects of “computational photography.” Saving multiple images can be used for panoramic stitching, high dynamic range, focus stacking (to support larger depths of focus than possible with a single picture), and astrophotography image stacking (using interval timers to take many shots that are added together). Many cameras, like the R5, have even added modes to support taking multiple pictures for focus stacking, high dynamic range, and interval timers. So for the R5 Mark ii to have dropped sensor pixel shifting seems like a step backward in the evolution of photography.

This Blog’s Use of Pixel Shifting for Higher Resolution

Both cameras have “In-Body Image Stabilization” (IBIS), which normally moves the camera sensor based on motion detection to reduce camera/lens motion blur. They both also support a high-resolution mode where, instead of using the IBIS for stabilization, they use it to shift the sensor by a fraction of a pixel between shots to take a higher-resolution image. Canon called this capability “IBIS High Res.” The R5 combines nine images in-camera, each shifted by 1/3rd of a pixel, to make a 405MP JPEG image. The D5 combines four images, each shifted by half a pixel.
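To illustrate the basic idea (and not Canon’s actual in-camera algorithm, which has not been published), below is a minimal sketch of a 3×3 sub-pixel-shift merge: nine frames, each offset by 1/3 of a pixel, interleaved into a grid with three times the linear resolution. A real merge would also have to demosaic and correct for residual motion; the frame layout here is an assumption for illustration.

```python
import numpy as np

def combine_pixel_shift_3x3(frames):
    """Interleave nine frames, each captured with the sensor shifted by
    1/3 pixel, into one image with 3x the linear resolution.

    frames[dy][dx] is assumed to be the frame shifted by (dy/3, dx/3)
    pixels -- a simplification of what a real pixel-shift merge does
    (it must also demosaic and correct for any residual motion)."""
    h, w = frames[0][0].shape
    out = np.zeros((h * 3, w * 3), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out[dy::3, dx::3] = frames[dy][dx]
    return out

# Tiny demo with synthetic 4x4 "sensor" frames.
rng = np.random.default_rng(0)
frames = [[rng.random((4, 4)) for _ in range(3)] for _ in range(3)]
print(combine_pixel_shift_3x3(frames).shape)  # (12, 12)
```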

In the past year, I started using my “personal camera,” the Canon R5 (45MP “full frame” 35mm), to take pictures through VR/passthrough-AR headsets and optical AR glasses (where possible). I also use my older Olympus D5 Mark iii (20MP Micro 4/3rd) because it is a smaller camera with smaller lenses, which lets it get into the optimum optical location in smaller form factor AR glasses.

The cameras and lenses I use most are shown on the right, except for the large RF15-35mm lens on the R5 camera, which is shown for comparison. To take pictures through the optics and get inside the eye box/pupil, the lens has to be physically close to the image sensor in the camera, which limits lens selection. Thus, while the RF15-35mm lens is “better” than the fixed-focal-length 28mm and 16mm lenses, it won’t work for taking a headset picture. The RF28mm and RF16mm lenses are the only full-frame Canon lenses I have found that work. Cell phones with small lenses “work,” but they don’t have the resolution of a dedicated camera or the aperture and shutter speed control necessary to get good pictures through headsets.

Moiré

Via Big Screen Beyond

In addition to photography being my hobby, I take tens of thousands of pictures a year through the optics of AR and VR headsets, which pose particular challenges for this blog. Because I’m shooting displays with a regular pattern of pixels using a camera with its own regular pattern of pixels, there is a constant chance of moiré due to the beat frequencies between the pixels and color subpixels of the camera and those of the display device, as magnified by the camera and headset optics (left). To stay within the eye box/pupil of the headset, I am limited to simpler lenses that are physically short, to keep the distance from the headset optics to the camera short, which limits the focal lengths and thus the magnification available to combat moiré. In-camera pixel shifting has proven to be a way to not only improve resolution but also greatly reduce moiré effects.
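For a concrete feel for where the beat comes from, below is a minimal one-dimensional sketch (with made-up pitch numbers, not real hardware values) showing how sampling a display’s pixel grid with a camera of a slightly different pitch produces a slow beat pattern:

```python
import numpy as np

# 1-D illustration of moire: a camera samples a display's pixel grid with a
# pitch close to, but not equal to, the display pitch. The pitches are
# arbitrary illustration values, not real hardware numbers.
display_pitch = 1.00      # display pixel period (arbitrary units)
camera_pitch = 1.07       # camera sampling period (slightly different)

n = np.arange(500)
x = n * camera_pitch                                   # camera sample positions
sampled = 0.5 + 0.5 * np.cos(2 * np.pi * x / display_pitch)

# The fine display pattern aliases to a slow "beat" at the difference
# frequency |1/display_pitch - 1/camera_pitch| -- the visible moire bands.
predicted_beat = 1.0 / abs(1 / display_pitch - 1 / camera_pitch)

# Confirm by finding the dominant frequency of the sampled signal.
spectrum = np.abs(np.fft.rfft(sampled - sampled.mean()))
peak_cycles_per_sample = np.argmax(spectrum) / len(n)
print(f"predicted beat period: {predicted_beat:.1f} units")
print(f"observed beat period:  {camera_pitch / peak_cycles_per_sample:.1f} units")
```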

Issues with moiré are not limited to taking pictures through AR and VR headsets; it is a problem with real-world pictures that include things like patterns in clothing, fences (famously, when shot from a distance where they form a fine pattern), and other objects with a regular pattern (see typical photographic moiré problems below).

Anti-Aliasing

Those who know signal theory know that a low-pass filter before sampling reduces/avoids aliasing (moiré is a form of aliasing). Cameras have likewise used “anti-aliasing” filters, which very slightly blur the image to reduce aliasing, but this comes at the expense of resolution. In the past, with lower-resolution sensors, it was more likely that real-world detail in a picture would cause aliasing, and the anti-aliasing filters were more necessary.
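The same idea can be sketched in software: low-pass filter before decimating, which is roughly the role an optical anti-aliasing filter plays ahead of the sensor’s sampling. The Gaussian blur and the sigma rule of thumb here are illustrative assumptions, not a model of any particular camera’s filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(image, factor, blur_sigma=None):
    """Downsample by an integer factor, optionally low-pass filtering first.

    The optional Gaussian blur plays a role similar to a camera's optical
    anti-aliasing filter: it removes detail finer than the new sampling
    pitch so it cannot alias. sigma ~ 0.5 * factor is a common rule of
    thumb, not a claim about any specific camera."""
    if blur_sigma is not None:
        image = gaussian_filter(image, sigma=blur_sigma)
    return image[::factor, ::factor]

# Demo: a fine diagonal grating that aliases badly if decimated raw.
y, x = np.mgrid[0:256, 0:256]
grating = 0.5 + 0.5 * np.sin(2 * np.pi * (x + y) / 3.0)   # ~3-pixel period

aliased = downsample(grating, factor=4)                    # no filtering
filtered = downsample(grating, factor=4, blur_sigma=2.0)   # low-pass first
print(f"raw decimation contrast:      {aliased.std():.3f}")   # large false pattern
print(f"filtered decimation contrast: {filtered.std():.3f}")  # near zero
```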

As the resolution of sensors has increased, it has become less likely that something in focus in a typical picture will be fine enough to alias, and better algorithms can detect and reduce the effect of moiré. Still, while moiré can sometimes be fixed in post-processing, in critical or difficult situations, it would be better if additional frames were stored to clue software into processing it as aliasing/moiré rather than “real” information.

Camera Pixels and Bayer Filter (and misunderstanding)

Most cameras today (including Canon’s) use a Bayer filter pattern (below right) with two green-filtered photosites for each red or blue one. When producing an image for a person to view, the camera or a computer’s RAW conversion software performs what is often called “debayering” or “demosaicing,” generating each full-color pixel by combining information from many (8 or more) surrounding single-color photosites, with the total number of full-color pixels equaling the number of photosites.
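As a rough illustration of what demosaicing does (real raw converters use far more sophisticated, edge-aware algorithms), here is a minimal bilinear demosaic sketch assuming an RGGB layout; the layout and kernel are illustrative choices, not any camera maker’s method:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Very simple bilinear demosaic of an RGGB Bayer mosaic.

    `mosaic` is a 2-D array where each photosite holds only one color's
    value, in the repeating pattern  R G / G B.  This just averages the
    available neighbors of each color, which is enough to show how one
    full-color pixel per photosite is reconstructed."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(values, mask):
        # Weighted average of the known samples in a 3x3 neighborhood.
        kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
        num = convolve(values * mask, kernel, mode="mirror")
        den = convolve(mask.astype(float), kernel, mode="mirror")
        return num / den

    return np.dstack([interp(mosaic, m) for m in (r_mask, g_mask, b_mask)])

# Demo on a flat gray scene: every reconstructed pixel should be ~0.5.
mosaic = np.full((8, 8), 0.5)
rgb = demosaic_bilinear(mosaic)
print(rgb.shape, float(rgb.min()), float(rgb.max()))   # (8, 8, 3) 0.5 0.5
```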

Camera makers count every photosite as a pixel even though the camera only captured “one color” at that photosite. Some people, somewhat mistakenly, think the resolution is one-quarter of that claimed since only one-quarter of the photosites are red and one-quarter are blue. After all, with a color monitor, we don’t count the red, green, and blue subpixels as three pixels but just one. However, Microsoft’s ClearType does gain some resolution from the color subpixels to render text better.

It turns out that except for extreme cases, including special test patterns, the effective camera resolution is close to the number of photosites (and not 1/4th or 1/2). There are several reasons why this is true. First, note the red, green, and blue filters’ frequency responses for a color camera sensor (above left – from a Sony sensor, as that data was available). Notice how their spectra are wide and overlapping. The wide spectral nature of these filters is necessary to capture the continuous spectrum of colors in the real world (everything we call “red” does not have the same wavelength). If the filters were very narrow and only captured a single wavelength, then any colors that are not that wavelength would render as black. Each photosite captures intensity information for all colors, but the filtering biases it toward a band of colors.

Almost everything (other than spectral lines from plasmas, lasers, and some test patterns) that can be seen in the real world is not a single wavelength but a mix of wavelengths. There is even the unusual case of magenta, which does not have a wavelength (and thus, many claim it is not a color) but is a mix of blue and red. With a typical photo, we have wide-spectrum filters capturing wide-spectrum colors.

It turns out that humans sense resolution mostly in intensity and not color. This fact was exploited to reduce the bandwidth of early color television and to reduce data in all video and image compression algorithms. Thanks to the overlap in the camera’s color filters, there is considerable intensity information in each color’s pixels.
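A minimal sketch of how compression exploits this, using the standard BT.601 luma weights and a simple 2×2 chroma average as a rough stand-in for 4:2:0 subsampling (not any specific codec’s implementation):

```python
import numpy as np

# Luma (Y) is kept at full resolution; the color-difference channels
# (Cb, Cr) are stored at half resolution in each axis, because viewers
# barely notice the lost color detail.

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luma weights
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def subsample(chan):            # half resolution: average 2x2 blocks
    h, w = chan.shape
    return chan.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(chan):             # back to full size by pixel replication
    return chan.repeat(2, axis=0).repeat(2, axis=1)

rgb = np.random.default_rng(1).random((64, 64, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
cb2, cr2 = upsample(subsample(cb)), upsample(subsample(cr))

# Luma is untouched; only the low-acuity chroma channels lost detail.
print("Cb error:", float(np.abs(cb - cb2).mean()))
print("Cr error:", float(np.abs(cr - cr2).mean()))
```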

Human Vision and Color

If you think the camera sensor’s Bayer pattern and color filter spectral overlap are bad, consider the human retina. On average, humans have 7 million cones in the retina, of which ~64% are long (L) wavelength (red), ~32% medium (M – green), and ~2% short (S – blue). However, these percentages vary widely from person to person, particularly the percentage of short/blue cones. The cones, which sense color and support high resolution, are concentrated in the center of vision.

Notice the spectral response of the so-called red, green, and blue cones (below left) and compare it to the camera sensor filters’ response above. Note how much the “red” and “green” responses overlap. On the right is a typical distribution of cones near the fovea (center of vision); note that there are zero “blue”/short cones in the very center of the fovea. It makes the Bayer pattern look great😁.

Acuity of the Eye

Next, we have the fact that the cones are concentrated in the center of vision and that visual acuity falls off rapidly. The charts below show the distribution of rods and cones in the eye (left) and the sharp fall-off in visual acuity from the center of vision.

Saccadic Eye Movement – The Eyes’ “Pixel Shifting”

Looking at the distribution of cones and the lack of visual acuity outside the fovea, you might wonder how humans see anything in detail. The eye constantly moves in a mix of large and small steps known as saccades. The eye tends to blank while it moves and then takes a metaphorical snapshot. The visual cortex takes the saccade’s “snapshots” and forms a composite image. In effect, the human visual system is doing “pixel shifting.”

My Use of Pixel Shifting (IBIS High-Res)

I am a regular user of IBIS High Res for this blog. Taking pictures of displays with their regular pixel patterns is particularly prone to moiré. Plus, with the limited lenses I can use, which are all wide-angle (and thus low magnification), it helps to get some more resolution. With IBIS High Res, a single 405MP (24,576 by 16,384 pixels) image can capture a ~100-degree-wide FOV and still resolve the individual pixels of a 4K display device.
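Rough arithmetic behind that claim, assuming an idealized uniform angular mapping (which real wide-angle lenses and headset optics only approximate) and a 4K display spread across roughly the same FOV:

```python
# Rough pixels-per-degree arithmetic for the numbers quoted above.
camera_px_wide = 24_576     # IBIS High Res image width
display_px_wide = 3_840     # a "4K" headset display
fov_deg = 100               # approximate horizontal FOV captured

camera_ppd = camera_px_wide / fov_deg
display_ppd = display_px_wide / fov_deg
print(f"camera:  ~{camera_ppd:.0f} pixels/degree")
print(f"display: ~{display_ppd:.0f} pixels/degree")
print(f"~{camera_ppd / display_ppd:.1f} camera pixels per displayed pixel (linear)")
```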

The JPEG-only output seems like a bit of an afterthought on the R5. Even with the camera on a tripod, it sometimes screws up, so I usually take three shots just in case; I will only know later, when I look at the results blown up on a monitor, whether one of them messed up. The close-in crops (right) are from two back-to-back shots with IBIS High Res. In the bad shot, you can see how the edges look feathered/jagged (particularly comparing vertical elements like the “l” in Arial). I would much rather have had IBIS High Res output the nine RAW images.

IBIS High-Res Comparison to Native Resolution

IBIS High Res provides higher resolution and can significantly reduce moiré. Even when I scale the IBIS High Res image down to the size of a “native” resolution picture, it has much less moiré and is a bit sharper, as shown below.

The crops below show the IBIS High Res image at full resolution and the native resolution scaled up to match, along with insets of the IBIS High Res picture scaled down to match the native resolution.

The image below was taken with IBIS High Res and then scaled down by 33.33% for publication on this blog (from the article AWE 2024 VR – Hypervision, Sony XR, Big Screen, Apple, Meta, & LightPolymers).

The crops below compare the IBIS High Res at full resolution to a native image upscaled by 300%. Notice how the IBIS High Res has better color detail. If you look at the white tower on a diagonal in the center of the picture (pointed to by the red arrow), you can see the red (on the left) and blue chromatic aberrations caused by the headset’s optics; these and other color details are lost in the native shot.

Conclusions

While my specific needs are a little unusual, I think Canon is missing out on a wealth of computational photography options by not supporting IBIS High Res with RAW output. The obvious benefits are helping with moiré and getting higher-resolution still lifes. By storing RAW, there is also the opportunity to deal with movement in the scene, and perhaps even hand-held shooting. It would be great to have the option to control the shift amount (shifts of 1/3 and 1/2 pixel would be good options) and the number of pictures. For example, it would be good to capture more than one “cycle” to help deal with motion.

Smartphones are cleaning up against dedicated cameras by using “computational photography” to make small sensors with mediocre optics look very good. Imagine what could be done with better lenses and cameras. Sony, a leader in cell phone sensors, knows this and supports pixel shift with RAW output. I don’t understand why Canon is ceding pixel shift to Sony and Nikon. Hopefully, it will come as a firmware update, as it did on the original R5. Only this time, please save the RAW/cRAW files.

In related news, I’m working on an article about Texas Instruments’ renewed thrust into AR with DLP. TI DLP has been working with poLight to support pixel shift (link to video with poLight) for resolution enhancement in AR glasses (see also Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6)).

Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)

20 April 2024 at 14:59

Update 2/21/24: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.

Introduction

In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections with a companion blog article with added information.

Outline of the Video and Additional Information

The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.

00:10 Lighting Silicon (Formerly Kopin Micro-OLED)

Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “lift-off” process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a maker of high-resolution ferroelectric (reflective) LCOS. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for its Shiftall headset (see also: Pancake Optics Kopin/Panasonic).

At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the display, singulating the displays, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm fabs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, enabling Lighting Silicon to offer ~4K devices.

Related info on Kopin’s history in microdisplays and micro-OLEDs:

02:55 RaonTech

RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs/prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1 & 2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR show. I first reported on RaonTech in CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch field-sequential-color device with “embedded spatial interpolation done by the pixel circuit itself,” so (as I understand it) a 4K image is produced from 2K data sent to the display and interpolated by the pixel circuits.
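RaonTech has not published how the in-pixel interpolation works, so purely to illustrate the general idea of producing a 4K-class image from 2K-class data, here is a generic 2× bilinear upscale sketch (my assumption for illustration, not RaonTech’s circuit):

```python
import numpy as np

def upscale_2x_bilinear(img):
    """Generic 2x bilinear upscale: produce a 4K-class image from 2K-class
    data by interpolating between neighboring samples. This only shows the
    general idea of spatial interpolation; it is NOT a description of
    RaonTech's in-pixel circuit, whose details have not been published."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    right = np.roll(img, -1, axis=1)          # neighbor to the right (wraps at edge)
    down = np.roll(img, -1, axis=0)           # neighbor below
    diag = np.roll(right, -1, axis=0)         # diagonal neighbor
    out[0::2, 0::2] = img                     # original samples
    out[0::2, 1::2] = (img + right) / 2       # interpolated horizontally
    out[1::2, 0::2] = (img + down) / 2        # interpolated vertically
    out[1::2, 1::2] = (img + right + down + diag) / 4
    return out

frame_2k = np.random.default_rng(2).random((1080, 1920))
frame_4k = upscale_2x_bilinear(frame_2k)
print(frame_2k.shape, "->", frame_4k.shape)   # (1080, 1920) -> (2160, 3840)
```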

In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.

04:01 May Display (LCOS)

May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7,680 x 4,320) device with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.

May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).

May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.

04:16 Kopin’s Forth Dimension Displays (LCOS)

As discussed earlier with Lighting Silicon, Kopin acquired Ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as Micropix in 1988 as part of CRL-Opto, then renamed CRLO in 2004, and finally Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.

I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their own backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display as their first device, which they made for many years.

FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.

05:34 Texas Instruments (TI) DLP®

Around 2015, DLP and LCOS displays seemed to be used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, to MicroLEDs for new designs. Among the reasons companies gave for switching from DLP to LCOS were pixel size (and thus a smaller device for a given resolution), lower power consumption of the display plus ASIC, more choice in device resolutions and form factors, and cost.

DLP does not require polarized light, which is a significant efficiency advantage in room/theater projector applications that project hundreds or thousands of lumens. That advantage matters much less in near-eye displays, which require less than 1 to at most a few lumens since the light is aimed directly into the eye rather than illuminating a whole room; there, the power of the display device and control logic/ASICs is a much bigger factor. Additionally, many near-eye optical designs employ one or more reflective optics that require polarized light anyway.

Another issue with DLP is drive algorithm control. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm. This was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because it needed direct control over its unique light field driving method. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.

Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.

Color Breakup On Hololens 1 using a low color sequential field rate

Added 2/21/24: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.

I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s new, much higher 240Hz and 480Hz frame rates, the DLP would have 720 or 1,440 color fields per second. Some older LCOS ran as low as 60 frames/180 fields per second (I think this was used on Hololens 1 – right), and many, if not most, LCOS devices today use 120 frames/360 fields per second. A few LCOS devices I have seen can go as high as 180 frames/540 fields per second. So, the newer DLP devices would have an advantage in that area.
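The arithmetic behind those field rates (three color fields per frame) is simple enough to tabulate:

```python
# Color-field rate = frame rate x 3 (one field each for R, G, and B).
for label, frames_per_sec in [("older LCOS", 60), ("typical LCOS", 120),
                              ("fast LCOS", 180), ("new DLP", 240),
                              ("new DLP (high)", 480)]:
    print(f"{label:>15}: {frames_per_sec} frames/s -> "
          f"{frames_per_sec * 3} color fields/s")
```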

The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).

My Background at Texas Instruments:

I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.

I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (2nd generation), the TMS320C80 (the first image processor with 4 DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.

After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector for its advantages in that application. I like to think of myself as product-focused and want to use whichever technology works best for the given application. I see pros and cons in all the display technologies.

07:25 VueReal MicroLED

VueReal is a Canada-based startup developing MicroLEDs. Their initial focus was on making single-color-per-device microdisplays (below left).

However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either passive (wire) or active (e.g., thin-film transistor) circuitry on glass or plastic. They claim to have extremely high yields with this process. With it, they can make full-color rectangular displays (above right), transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.

I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, though he had to keep his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.

08:26 MojoVision MicroLED

MojoVision is pivoting from a “contact lens display company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s display group. MojoVision started saying (in private) that it was putting more emphasis on being a MicroLED component company around 2021. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.

To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Some third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch on the Mojovision contact lens display (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun thing to watch).

So, in my book, Mojovision the company starts with a major credibility problem. Still, they are now under new leadership and focused on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” Mojovision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum dot conversion to get green and red.

I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. Playnitride demonstrated quantum dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).

Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.

Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:

  • Blue LEDs are the most efficient color.
  • You only have to make a single type of LED crystal structure in a single layer.
  • It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay-size MicroLEDs.
  • Quantum-dot conversion of blue to green and red is much more efficient than native green and red LEDs.
  • Native red LEDs are inefficient in the GaN crystalline structures that are moderately compatible with native green and blue LEDs.
  • Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking of light from lower layers causes efficiency issues.
  • Single emitters with multiple-color LEDs (e.g., see my article on Porotech) have efficiency issues, particularly in red, which are further exacerbated by the need to time-sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.

Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):

  • MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, the light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
    • MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers
  • There must be some structure/isolation to prevent the blue light from adjacent cells from activating the quantum dots of a given cell, which would cause desaturation of colors. Eliminating color crosstalk/desaturation is another advantage of having thinner quantum dot layers.
  • The lifetime and potential color shifting of quantum dots, particularly if they are driven hard, are concerns. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs.
  • While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch (assuming MojoVision keeps using two reds to get enough red brightness; see the arithmetic sketch below). While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron pitch OLED, a smaller pixel, such as with a single-emitter-with-multiple-colors approach (e.g., Porotech), would be better (more efficient due to étendue; see: MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)) for semi-collimating the light using microlenses as needed by waveguides.
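The pitch arithmetic referenced in the last bullet, assuming a 2×2 emitter layout per color pixel (R, G, B plus a second red for brightness); the layout is my assumption for illustration, not a published MojoVision spec:

```python
# Pixel-pitch arithmetic for the MojoVision numbers above, assuming a
# 2x2 emitter layout per color pixel (R, G, B plus a second red) -- the
# layout is an assumption for illustration, not a published spec.
led_pitch_um = 1.87
emitters_per_side = 2                      # 2x2 = 4 emitters per pixel
pixel_pitch_um = led_pitch_um * emitters_per_side
print(f"pixel pitch ~ {pixel_pitch_um:.2f} microns")          # ~3.74
print(f"vs Apple Vision Pro OLED ~7.5 microns: "
      f"{7.5 / pixel_pitch_um:.1f}x finer pitch")             # ~2.0x
```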

10:20 Porotech MicroLED

I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.

While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel since only one LED per pixel is required. This also results in only two electrical connections (power and ground) to the LED per pixel.

However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.
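To make the control problem concrete, here is a hypothetical per-pixel drive model; the current values and field timing are invented for illustration (Porotech’s actual numbers are not public), but it shows why each pixel needs both a precisely set current (for wavelength) and a modulated on-time (for brightness):

```python
# Hypothetical sketch of single-emitter ("color-tunable") pixel drive.
# The currents below are invented for illustration only.
FIELD_CURRENT_MA = {"red": 0.50, "green": 0.08, "blue": 0.02}

def drive_for_field(color: str, brightness: float, field_time_us: float = 100.0):
    """Return (current_mA, on_time_us) for one pixel in one color field.

    brightness is 0..1. The current is fixed per color (it selects the
    emitted wavelength); only the on-time within the field is modulated
    to set perceived brightness."""
    current = FIELD_CURRENT_MA[color]
    on_time = max(0.0, min(1.0, brightness)) * field_time_us
    return current, on_time

# Example: one pixel showing a dim cyan (no red, half green, half blue).
for color, level in [("red", 0.0), ("green", 0.5), ("blue", 0.5)]:
    mA, t = drive_for_field(color, level)
    print(f"{color:>5}: {mA:.2f} mA for {t:.0f} of 100 us")
```

Doing this per pixel, per color field, at video rates is what makes the required backplane so much more complex than those for LCOS or spatial-color MicroLEDs.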

Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.

12:55 Brilliance Color Laser Combiner

Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.

While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.

14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)

Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter whether they shoot from the temple onto a hologram that reflects directly into the eye, like North Focals, or use a waveguide, like TriLite; the fatal weak link is using an LBS display device.

It has reached the point that when I see a device with an LBS display, I’m pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by conflating “scan lines” with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled onto a distorted and non-uniform scanning process.

Like Brilliance above, TriLite’s core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they exit at slightly different angles. This causes the various colors to diverge by different amounts in the scanning process. TriLite uses its “Trajectory Control Module” (TCM) to compute how to resample the image to align the red, green, and blue.

TriLite then compounds its problems with LBS by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with the Lissajous scanning process, which is also used by Oqmented (TriLite uses Infineon’s scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.
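To see why this scanning pattern is so awkward for imagery, below is a minimal sketch that traces a Lissajous trajectory from two resonant mirror axes (the frequencies are made up for illustration, not TriLite’s or Infineon’s values) and compares how often the beam visits the center of the image versus the corners:

```python
import numpy as np

# Minimal Lissajous-scan sketch: two resonant mirror axes at fixed rates
# trace x(t), y(t). The dwell time per unit area is far higher near the
# edges (where the mirrors slow down and reverse) than in the center, so
# image pixels must be resampled onto this non-uniform, curving path.
fx, fy = 21_000.0, 1_350.0        # made-up horizontal/vertical mirror rates (Hz)
t = np.linspace(0.0, 1.0 / 60.0, 200_000)       # one 60 Hz "frame" of samples
x = np.sin(2 * np.pi * fx * t)                  # normalized mirror deflections
y = np.sin(2 * np.pi * fy * t)

# Compare equal-area regions at the screen center vs. the corners.
center = np.mean((np.abs(x) < 0.1) & (np.abs(y) < 0.1))
corners = np.mean((np.abs(x) > 0.9) & (np.abs(y) > 0.9))
print(f"fraction of dwell time near center:  {center:.3f}")
print(f"fraction of dwell time near corners: {corners:.3f}")
```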

The information and images below have been collected from TriLite’s website.

As far as I have seen, it is a myth that LBS has any advantage in size, cost, or power over LCOS for the same image resolution and FOV. As discussed in Part 1, Avegant generated the comparison below, comparing North Focals’ LBS glasses, with a ~12-degree FOV and roughly 320×240 resolution, to Avegant’s 720 x 720, 30-degree LCOS-based glasses.

Below is a selection (from dozens) of related articles I have written on various LBS display devices:

Next Time

I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason and my discussion of the Apple Vision Pro.
