
Mixed Reality at CES & AR/VR/MR 2024 (Part 2 Mostly Optics)

Introduction

In part 1, I wrote that I planned to cover the optics and display companies from CES and the SPIE AR/VR/MR 2024 conferences, from the video I made with Jason McDowall, in part 2 of this series. However, as I started filling in extra information on the various companies, the article was getting long, so I broke the optics and displays into two separate articles.

In addition to optics companies, I will also touch on eye tracking with Tobii, which is doing both optics and eye tracking, and with Zinn Labs.

Subscription Options Coming to KGOnTech

Many companies, including other news outlets and individuals, benefit from this blog indirectly through education or directly via the exposure it gives to large and small companies. Many, if not most, MR industry insiders read this blog worldwide based on my conference interactions. I want to keep the main blog free and not filled with advertising while still reporting on large and small companies. To make financial sense of all this and pay some people to help me, I’m in the process of setting up subscription services for companies and planning on (paid) webinars for individuals. If you or your company might be interested, please email subscriptions@kgontech.com.

Outline of the Video and Additional Information

Below is an outline of the second hour of the video, as well as additional comments and links to more information. The times in blue on the left of each subsection below link to the time in the YouTube video discussing a given company.

0:00 Waveguides and Slim Optics

0:03 Schott and Lumus

Schott AG is one of the world’s biggest makers of precision glass. In 2020, Schott entered into a strategic partnership with Lumus, and at AR/VR/MR 2024 and 2023, Lumus was prominently featured in the Schott booth. While Schott also makes the glass for diffractive waveguides, the diffraction gratings are usually left to another company. In the case of the Lumus Reflective waveguides, Schott makes the glass and has developed high-volume waveguide manufacturing processes.

Lumus waveguides consistently have significantly higher optical efficiency (for a given FOV), better color uniformity, better transparency, higher resolution, and less front projection (“eye glow”) than any diffractive waveguide. Originally, Lumus had 1-D pupil-expanding waveguides, whereas diffractive waveguides were 2-D pupil-expanding. The 1-D expanding waveguides required a large projection engine in the non-expanding direction, thus making the projection optics bigger and heavier. In 2021, Lumus first demonstrated their 2-D expanding Maximus prototype waveguides with excellent image quality, 2K by 2K resolution, and 50° FOV. With 2-D expansion, the projection optics could be much smaller. Lumus has continued to advance its reflective 2-D expanding waveguide technology with the “Z-Lens.” Lumus says that variants of this technology could support more than a 70-degree FOV.

Waveguides depend on “total internal reflection” (TIR). For this TIR to work, diffractive waveguides and earlier Lumus waveguides require an “air gap” between the waveguide surface and any other surfaces, including the “push-pull” lenses used to move the waveguide’s apparent focus distance and provide vision correction. These air gaps can be hard to maintain and are a source of unwanted reflections. Thanks to the shallower angles of its TIR reflections, the Lumus Z-Lens can be embedded in optics with no air gap (the first waveguide to make this claim).
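As a back-of-the-envelope illustration of why the air gap matters (a generic TIR calculation with assumed refractive indices, not any vendor's specification):

```latex
% Critical angle for TIR going from index n_1 into index n_2 (n_1 > n_2):
\theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right)
% Glass into air:  \theta_c = \arcsin(1.0/1.5) \approx 41.8^\circ
% Glass into a similar-index lens:  n_2/n_1 \to 1,\ \theta_c \to 90^\circ,
% and TIR is lost -- hence the air gap.
```

Rays hitting the surface beyond the critical angle stay trapped; laminating a similar-index lens directly onto the waveguide pushes the critical angle toward 90° and defeats the TIR. Shallower (higher-incidence) internal ray angles, as Lumus describes for the Z-Lens, leave enough margin to embed the waveguide without an air gap.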

While Lumus waveguides are better than any diffractive waveguide in almost every image quality and performance metric, their big questions have always revolved around volume manufacturing and cost. Schott thinks that the Lumus waveguides can be manufactured in high volume at a reasonable cost.

Over the last ten years, I have seen significant improvements in almost every aspect of diffractive waveguides from many companies (for example, my articles on DigiLens and Dispelix). Diffractive waveguides are easier to make, less expensive, and much easier to customize. Multiple companies have diffractive waveguide design tools, and there are multiple fabrication companies.

As I point out in the video, many MR applications don’t need the highest image quality or resolution; they need “good enough” for the application. Many MR applications only need simple graphics and small amounts of text. Many applications only require limited colors, such as red=bad, green=good, yellow=caution, and white or cyan for everything else, while others can get away with monochrome (say, green-only). For example, many military displays, including night vision, are often monochrome (green or white), and most aviation HUDs are green-only.

I often say there is a difference between being “paid to use” and “paying for” a headset. By this, I mean that someone is paid to use the headset to help them be more effective in their job, whereas a consumer would be paying for the headset.

For more on Lumus’s 2-D expanding waveguides:

For more on Schott and Lumus’s newer Z-Lens at AR/VR/MR 2023:

For more on green-only MicroLED headsets and full-color MicroLEDs through diffractive and Lumus reflective waveguides, see:

4:58 Fourier (Metasurface)

Fourier is developing metasurface technology to reflect and redirect light from a projector in the temple area of AR glasses to the eye. If a simple mirror-type coating were placed on the lens, projected light from the temple would bounce off at an angle that would miss the eye.

Multiple companies have previously created Holographic Optical Elements (HOEs) for a similar optical function. Luminit developed the HOE used in North Focals, and TruLife Optics has developed similar elements (both Luminit’s and TruLife’s HOEs are discussed in my AWE 2022 video with Brad Lynch).

Fourier’s metasurface (and HOEs) can act not only as a tilted flat mirror but also as a tilted curved mirror with “optical power” to change magnification and focus. At least in theory (I have not seen it, and Fourier is still in development), the single metasurface would be simpler, more compact, and more optically efficient than birdbath optics (e.g., Xreal and many others), and lower cost with much better optical efficiency than waveguides. But while the potential benefits are large, I have yet to see a HOE (or metasurface) with great image quality. Will there, for example, be color uniformity, stray light capture, and front projection (“eye glow”) issues as seen with diffractive waveguides?

Laser beam scanning with direct temple projection, such as North Focals (see below left), uses a hologram embedded in or on the surface of a lens to redirect the light. This has been a common configuration for the lower-resolution, small-FOV, and very small eyebox Laser Beam Scanning (LBS) glasses shown by many companies, including North, Intel, and Bosch. Alternatively, LCOS, DLP, MicroLED, and laser beam scanning projectors have used waveguides to redirect the light and increase the eyebox size (the eyebox is the range of movement of the eye relative to the glasses where the whole image can be seen).

Avegant (above right), Lumus, Vuzix, DigiLens, Oppo, and many others have demonstrated waveguides with DLP, LCOS, and MicroLEDs in very small form factors approaching those of HOEs and metasurfaces (see DigiLens, Lumus, Vuzix, Oppo, & Avegant Optical AR (CES & AR/VR/MR 2023 Pt. 8)). Still, waveguides are much lower in efficiency, so much so that using MicroOLED displays with waveguides is impractical. In contrast, using MicroOLED displays is possible with HOEs and Fourier’s metalenses. There are also potential differences in how prescription lenses could be supported.

As discussed above, holographic mirrors can also be used to form the equivalent of a curved mirror that is also tilted. The large CREAL3D prototype (below left) shows the two spherical semi-mirrors. CREAL3D plans to replace these physical mirrors with a flat HOE (below right).

Fourier’s metalens would perform the same optical function as the HOE. We will have to wait and see the image quality and whether there are significant drawbacks with either HOEs or metalenses. My expectation is that both metalenses and HOEs will have issues similar to those of diffraction gratings.

Some related articles and videos on small form factor optics:

6:23 Morphotonics

Morphotonics has developed methods for making waveguides and similar diffractive structures on large sheets of glass. They can make many small diffractive waveguides at a time or fewer large optical devices. In addition to waveguides, Morphotonics makes a light guide structure for the Leia Lightfield monitor and tablet.

Morphotonics presentation at AR/VR/MR 2023 can be found here: Video of Morphotonics AR/VR/MR 2023 presentation.

From Morphotonics’ 2023 AR/VR/MR Presentation

10:33 Cellid (Waveguides)

Cellid is a relatively new entrant in waveguide making. I have seen their devices for several years. As discussed in the video, Cellid has been continually improving its waveguides. However, at least at present, it still seems to be behind the leading diffractive waveguide companies in terms of color uniformity, FOV, and front projection (“eye glow”).

11:47 LetinAR

Several companies are using LetinAR’s PinTilt optics in their AR glasses. At CES, JorJin was showing their J8L prototypes in the LetinAR booth. Nimo (as discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies)) was showing their LetinAR-based glasses in their own booth. Sharp featured their LetinAR glasses in their booth but didn’t mention they were based on LetinAR optics.

LetinAR’s optics were also used in an AT&T football helmet display application for the deaf (upper left below).

LetinAR originally developed “pin mirror” optics, which I first covered in 2018 (see CES 2018 in the listings below). The pin-mirror technology has evolved into their current “PinTilt” technology.

While LetinAR has several variations of the PinTilt, the “B-Type” (right) is the one I see being used. They use an OLED microdisplay as the display device. The image light from the OLED makes a TIR (total internal reflection) bounce off the outside surface into a collimating/focusing mirror and then back up through a series of pupil-replicating slats. The pupil replication slats enable the eye to move around and support a larger FOV.

As I discussed in the video, the image quality is much better than with the Pin-Mirrors, but gaps can be seen if your eye is not perfectly placed relative to the slats. Additionally, with the display off, the view can be slightly distorted, which can likely be improved in the manufacturing process. LetinAR also let me know that they are working on other improvements.

LetinAR’s PinTilt is much more optically efficient than diffractive or even Lumus-type reflective waveguides, as evidenced by its use of micro-OLEDs rather than much brighter LCOS, DLP, or micro-LEDs. At the same time, they offer a form factor that is close to waveguides.

Some other articles and videos covering LetinAR:

13:57 Tooz

Tooz was originally spun out of Zeiss Group in 2018, but in March 2023, they returned to become part of Zeiss. Zeiss is an optical giant founded in 1846 but is probably most famous to Americans as the company making the inserts for the Apple Vision Pro.

Tooz’s “Curved Waveguide” works differently than diffractive and Lumus-type reflective waveguides, which require the image to be collimated, use many more TIR light bounces, and have pupil replication. Strictly speaking, none of these are “waveguides,” but the diffractive and Lumus-type devices are what most people in the industry call waveguides.

The Tooz device molds optics and a focusing mirror to move the focus of the display device, which currently can be either a Micro-OLED or, more recently, (green only) Micro-LED. The image light then makes a few TIR bounces before hitting a Fresnel semi-mirror, which directs the light toward the user’s eye (above right). The location of the Fresnel semi-mirror, and thus the image, is not centered in the user’s field of view but slightly off to one side. It is made for a monocular (single-eye) display. The FOV is relatively small with 11- and 15-degree designs.

Tooz’s Curved Waveguide is aimed at data snacking. It has a small FOV and a monocular display off to the side. The company emphasizes the integration of prescription optics and the small and lightweight design, which is optically much more efficient than other waveguides.

Tooz jointly announced just before the AR/VR/MR conference that they were working with North Ocean Photonics to develop push-pull optics to go with waveguides. Tooz, in their AR/VR/MR 2024 presentation, discussed how they were trying to be the prescription optics provider for both their curved waveguides and what they call planar waveguides. One of their slides demonstrated the thickness issue with putting a push/pull set of lenses around a flat waveguide. The lenses need to be thicker to “inscribe” the waveguide due to their curvature (below right).

19:08 Oorym

Oorym is a small startup founded by Yaakov Amitai, a founder and former CTO of Lumus. Oorym has a “waveguide” with many more TIR bounces than Tooz’s design but many fewer than diffractive and Lumus waveguides. They use a Fresnel light-redirecting element. It does not require collimated light and is much more efficient than other waveguides. They can support more than a 50-degree FOV. It is thicker than diffractive and Lumus waveguides, with a thickness on the same order as LetinAR’s. Oorym is also developing a non-head-mounted Heads-Up Display (HUD) device.

Oorym

21:57 Gixel

Gixel’s technology has to be among the most “different” I have seen in a long time. The concept is to have a MicroLED “bar” display with only a single or a few rows of pixels in one direction and the full horizontal resolution in the other. The “rows” may have full-color pixels or a series of three single-color row arrays. Then, a series of pupil-replicating slats rotate to scan the bar/row image vertically, synchronously with a time-sequential change of the row display. In this way, the scanned row display forms a whole image for the eye (and combines the colors if there are separate displays for each color).

They didn’t have a full working prototype, but they did have the rotating slats working.

My first impression is that it has a Steampunk feel to the design. I can see a lot of issues with the rotating slats, their speed and vibration, the time-sequential display, and a myriad of other potential issues. But still, it wins for the sheer audacity of the approach.

23:42 Meta Research (Time Sequential Fixed Foveated Display) & Varjo

From a 2017 Article on Varjo

Meta Research presented the concept of a time-sequential fixed-foveated display using a single set of pancake optics. The basic idea is that pancake optics work by making two passes through some of the refractive and mirror optics, which magnifies the display. In a normal pancake, quarter waveplates change the light’s polarization to effect the two passes. A (pixel-less) liquid crystal shutter can act as a switchable quarter waveplate. This way, the display light will make one or two passes through part of the optics, causing two different magnifications. By time-sequencing the display with the LC shutter’s switching, the eye sees, in alternation, a larger image at lower angular resolution and a smaller “foveated” image at higher angular resolution.
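To make the sequencing concrete, here is a minimal control-loop sketch (my illustration of the concept as presented, not Meta's implementation; set_shutter and show_frame are hypothetical hardware callbacks, and which shutter state maps to which magnification is arbitrary here):

```python
import itertools

# The LC shutter acts as a switchable quarter waveplate, selecting one of
# two path lengths (and thus magnifications) through the same pancake optics.
FIELD_SEQUENCE = itertools.cycle([
    ("one_pass", "full-FOV frame at lower angular resolution"),
    ("two_pass", "central 'foveated' frame at higher angular resolution"),
])

def run_field(set_shutter, show_frame):
    """Advance one field: switch the shutter state, then show the frame
    rendered for that magnification."""
    shutter_state, content = next(FIELD_SEQUENCE)
    set_shutter(shutter_state)  # hypothetical LC shutter driver call
    show_frame(content)         # hypothetical display update call
    # Run the fields fast enough and the eye fuses the two magnifications
    # into one image with a sharper center.
```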

This basically accomplishes with a single set of optics and a single display what Varjo was doing with their “fixed foveated display,” which used two display devices, separate optics, and a combining beam splitter.

I like to warn people that when a research group from a big company presents a concept like this to all their competitors at a conference like AR/VR/MR, it is definitely NOT what they are doing in a product.

Fixed (and Eye Tracking) Foveated Displays

In 2017, Varjo was focused on its foveated display technology. Their first prototype had a “fixed foveated display,” meaning the central high-resolution region didn’t move. Varjo claimed they would soon have the foveated display tracking the eye, but as far as I know, they never solved the problem.

It turns out that tracking the eye and moving the display is a seemingly impossible problem to solve with the eye’s saccadic movement, even with exceptional eye tracking. As I like to say, “While eye tracking may know where the eye is pointing, you don’t know what the eye has seen.” Originally, researchers thought that human vision fully blanks during saccadic movement, but later research suggests that vision only semi-blanks with movement. Combined with the fact that what a human “sees” is basically a composite of multiple eye positions, making a foveated display that tracks the eye is exceedingly difficult, if not impossible. The problem is that artifacts due to eye movement, such as field sequential color breakup, tend to appear as distracting flashes.

It has been seven years since Varjo told me they were close to solving the eye-tracking foveated display. Varjo figured out that about 90% of the benefit of a moving foveated display could be realized with a fixed foveated display near the center of the FOV. They may also have realized that solving the problems with a moving foveated display was more difficult than they thought. Regardless, Varjo has pivoted from being a “foveated display company” to a “high-resolution VR/MR company” aimed primarily at enterprise applications. Pixel sizes and resolutions of display devices have improved to the point where it is now better to use a single higher-resolution display than to optically combine two displays.

Eyeway Vision Foveated Display (and Meta)

In 2021, I visited Eyeway Vision, which was also working on foveated displays using dual laser scanning displays per eye. Eyeway Vision had a fixed foveated display and sophisticated eye tracking, but after an acquisition by Meta fell through, it went bankrupt before solving the moving foveated display.

Eyeway Vision’s founder, Boris Greenberg, has recently joined VoxelSensors, which is looking at using its technology for eye/gaze tracking and SLAM (see Zinn Labs later).

Foveated Display (e.g., Varjo) vs. Foveated Rendering (e.g., Apple Vision Pro)

I want to distinguish foveated rendering, where the display is fixed and only the level of detail in the rendering changes based on eye tracking, from a foveated display, where a high-resolution sub-display is inset within a lower-resolution display. Foveated rendering, as in the Apple Vision Pro or Meta Quest Pro, is workable, although today’s implementations have problems. However, it may be impossible to make a successful eye-tracking foveated display.

For more on this blog’s coverage of Foveated Displays, see:

32:05 Magic Leap (Mostly Human Factors)

At AR/VR/MR 2024, Magic Leap gave a presentation that mostly discussed human factors. They discussed some issues they encountered when developing the Magic Leap One, including fitting a headset to a range of human faces (below right). I thought the presentation should have been titled “Why the Apple Vision Pro is having so many problems with fitting.”

In 2016, This Blog Caught Magic Leap’s Misleading Video

In showing Magic Leap’s history, they showed a prototype headset that used birdbath optics (above left). Back in 2016, Magic Leap released a video that stated, “Shot directly through Magic Leap technology . . . without the use of special effects or compositing.” I noted at the time that this left a lot of legal wiggle room and that it might not be the same “technology” they would use in the final product, and this turned out to be the case. I surmised that the video used OLED technology. It’s also clear from the video that it was not shot through a waveguide. It appears likely that the video was shot using an OLED through birdbath optics, not with the waveguide optics and LCOS display that the Magic Leap One eventually used.

In 2019, Magic Leap sued (and lost to) Nreal (now Xreal), which developed an AR headset using birdbath optics and an OLED display. Below are links to the 2016 article analyzing the Magic Leap deceptive video and my 2020 follow-up article:

36:45 NewSight Reality (Not Really “Transparent” MicroLED)

Sorry for being so blunt, but NewSight Reality’s “transparent” MicroLED concept does not and will not ever work. The basic concept is to put optics over small arrays of LEDs so that, similar to pupil replication, the person will see an image. It is the same “physics” as MojoVision’s contact lens display (which I consider a scam). In fact, NewSight’s prototype has nine MojoVision displays on a substrate (below center).

The fundamental problem is that to get a display of any resolution, plus the optics, the “little dots” are so big that they, combined with diffraction, cause a blurry set of gray dots in a person’s vision. Additionally, the pupil replication effect ends up with a series of circles where you can see the image.
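For a rough sense of scale (my assumed numbers, not NewSight's specifications), diffraction alone sets a floor on how blurry a small structure in front of the eye appears:

```latex
% Airy-disk half-angle for a circular aperture of diameter d at wavelength \lambda:
\theta \approx 1.22\,\frac{\lambda}{d}
% Example: d = 100\,\mu\mathrm{m},\ \lambda = 550\,\mathrm{nm}
%   \Rightarrow \theta \approx 6.7\,\mathrm{mrad} \approx 0.38^\circ,
% versus the eye's roughly 1 arcminute (~0.3 mrad) resolution limit.
```

Making the dots and micro-optics smaller to hide them only widens the diffraction spread, which is the bind described above.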

38:55 Other Optics and Eye Tracking

The next section is on other optics and eye tracking. Thanks to Tobii being involved in both, they sort of tie this section together.

39:01 AddOptics

AddOptics developed a 3-D-printed optical mold process. It was founded by former Luxexcel employees (Luxexcel was subsequently acquired by Meta in 2022).

I covered AddOptics last year in CES 2023 (Part 3)—AddOptics Custom Optics. The big addition in 2024 was that they showed their ability to make push-pull optics for sandwiching a waveguide, whether or not the waveguide requires an air gap. As far as I am aware, most, if not all, diffractive waveguides require an air gap. The only waveguide I know of that claims not to need an air gap is the newer Lumus reflective-based waveguide (discussed in a previous article). Still, I have not heard whether AddOptics is working with Lumus or one of Lumus’s customers.

Luxexcel had developed a process to directly 3-D print optics without the need for any resurfacing. This means they need to print very fine layers very precisely, lens by lens. While it means each lens can be custom fit, it also seems to be an expensive process compared to the way prescription lenses are made today. By making “low-run” 3-D printed molds (something that Luxexcel could also do), AddOptics would have a lower cost per unit and a faster approach. It would require having a stock of molds, but it would not require a prohibitive number of molds to support most combinations of diopter and cylinder (astigmatism) correction.

42:12 Tobii

Tobii, founded in 2001, has long been known for its eye-tracking technology. Tobii was looking to embed LED illuminators in lenses and was working with Interglass. When Interglass (founded in 2004) went bankrupt in 2020, Tobii hired the key technical team members from Interglass. Meta Materials (not to be confused with Meta, formerly Facebook) acquired the assets of Interglass and is also making a similar technology.

The Interglass/Tobii/Meta-Materials process uses many glass molds to support variations of diopter and cylinder adjustments for prescriptions. The glass molds are injected with UV-cured plastic resin, which, after curing, forms lens blanks/rounds. When molding, the molds can be rotated to set the cylinder angle. The round lens blanks can then be cut by conventional lens fitting equipment.
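A quick sanity check on the mold count (illustrative prescription ranges of my choosing, not Tobii's actual coverage) shows why rotating the mold to set the cylinder axis matters:

```python
import numpy as np

# Assumed coverage: sphere -6.00 to +6.00 D and cylinder 0 to 4.00 D,
# both in the standard 0.25 D steps (my assumption for illustration).
spheres = np.arange(-6.0, 6.25, 0.25)   # 49 sphere powers
cylinders = np.arange(0.0, 4.25, 0.25)  # 17 cylinder powers
axis_steps = 180                        # cylinder axis in 1-degree steps

# If the axis had to be molded in, the mold count would explode:
molds_axis_molded = spheres.size * cylinders.size * axis_steps  # 149,940
# With the axis set by rotating the mold, it stays manageable:
molds_with_rotation = spheres.size * cylinders.size             # 833

print(molds_axis_molded, molds_with_rotation)
```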

At 2023’s AR/VR/MR, Tobii demonstrated (left two pictures below) how their lenses were non-birefringent, which is important when working with polarized light-based optics (e.g., Pancake Optics, which Tobii says they can make) and displays (LCDs and LCOS). Tobii has videos on its website that show the lens-making and electronic integrating process (below right).

43:44 Zinn (and VoxelSensors)

Zinn Labs uses a Prophesee event-based camera sensor (Zinn and Prophesee announcement). The Prophesee event camera sensor was jointly developed with Sony. Zinn uses Prophesee’s 320×320 6.3μm pixel BSI (BackSide Illuminated) event-based sensor in a 1/5” optical format.

Event camera pixels work like the human eye in detecting changes rather than the absolute value of each pixel. The pixels are much more complex than a conventional camera sensor, with photodiodes and comparators integrated into each pixel using Sony’s BSI process. Rather than scanning out the pixel value at a frame rate, each pixel reports when it changes significantly (more details can be found in the Prophesee white paper – free, but you have to give an email address). The advantage of the event camera in image recognition is that it tends to filter out/ignore everything that is not changing.
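A minimal sketch of the per-pixel behavior (my simplified model of a generic event sensor, not Prophesee's pipeline; the contrast threshold is an assumed value):

```python
import numpy as np

CONTRAST_THRESHOLD = 0.15  # assumed log-intensity step that triggers an event

def events_from_frame(log_frame, last_reported):
    """Emit (y, x, polarity) events for pixels whose log-intensity moved
    more than the threshold since they last fired; static pixels emit
    nothing, which is the filtering property described above."""
    diff = log_frame - last_reported
    ys, xs = np.nonzero(np.abs(diff) >= CONTRAST_THRESHOLD)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    last_reported[ys, xs] = log_frame[ys, xs]     # fired pixels update their memory
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist()))

# Example on a 320x320 array (the sensor resolution cited above):
memory = np.zeros((320, 320))
frame = memory.copy()
frame[10, 20] += 0.4                     # one pixel gets brighter
print(events_from_frame(frame, memory))  # -> [(10, 20, 1)]
```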

Zinn Labs has developed algorithms that then take the output from the event camera and turn it into where the eye is gazing (for more information, see here).

VoxelSensors (and Zinn Labs)

VoxelSensors has a very different type of event sensor called a “SPAES (Single Photon Active Event Sensor)” that could be used for eye/gaze tracking. Quoting from VoxelSensors:

VoxelSensors leverages its distinctive SPAES (Single Photon Active Event Sensor) technology, allowing the integration of multimodal perception sensors, such as innovative hand and gaze tracking and SLAM, with high precision, low power consumption, and low latency. Fusing these key modalities will enable the development of next-gen XR systems.

As discussed earlier, VoxelSensors also recently hired Eyeway Vision founder Boris Greenberg, who has extensive experience in eye/gaze tracking.

VoxelSensors’s SPAES uses a laser scanner to scan the area of interest with a narrow-band infrared laser (where the Prophesee event camera would use IR LED flood illumination) and then detects the laser’s return from the area of interest. With narrow-band filtering to remove all but the laser’s wavelength, the SPAES is designed to be extremely sensitive (they claim as little as a single photon) to the laser’s return. Like the Prophesee event camera, the VoxelSensors SPAES returns the pixel location when an event occurs.

While the VoxelSensors pixel is more complex than a traditional sensor’s, it seems simpler than Prophesee’s event camera pixel; then again, VoxelSensors requires scanning lasers rather than LEDs. Both are using event sensors to reduce the computational load. I have no idea at this point which will be better at eye tracking.

With one or more sets of laser scanners and sensors, VoxelSensors can detect in three dimensions, which is obviously useful for SLAM but might also have advantages for eye tracking.

For more on VoxelSensors, see my 2023 CES article: CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate.

44:13 Lumotive (LCOS-Based Laser Scanning for LiDAR)

Lumotive has a technology that uses LCOS devices to scan a laser beam. Today, LiDAR systems use a motor-driven rotating prism or a MEMS mirror to scan a laser beam, resulting in a fixed scanning process. The Lumotive method will let them dynamically adjust and change the scanning pattern.

46:03 GreenLight Optics

I’ve known Greenlight Optics since its founding in 2009 and have worked with them on several optical designs over the years. Greenlight can design and manufacture optics and is located in Cincinnati, Ohio. I ran into Greenlight at the Photonics West exhibit following the AR/VR/MR conference. I thought it would be helpful to mention them for other companies that might need optical design and manufacturing.

Quoting Greenlight’s website:

“Greenlight Optics is an optical systems engineering and manufacturing company specializing in projection displays, LED and laser illumination, imaging systems, plastic optics, and the integration of optics with electrical and mechanical systems.”

Next Time – Display Devices and Test and Measurement Companies

In the next part of this series on CES and AR/VR/MR 2024, I plan to cover display devices and a few test and measurement companies.

Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies)

Update 4/2/2024: Everysight corrected a comment I made about the size of their eyebox.

Introduction

This blog has covered mixed reality (MR) headsets, displays, and optics at CES since 2017 and SPIE’s AR/VR/MR conference since 2019. Both conferences occur in January each year. With this blog’s worldwide reputation (about half of the readers are from outside the U.S.), many companies want to meet. This year, I met with over 50 companies in just one month. Then Apple released the Apple Vision Pro on Feb. 2nd.

As this blog is a one-person operation, I can’t possibly write in detail about all the companies I have met with, yet I want to let people know about them. Last year, in addition to articles on some companies, Brad Lynch of the SadlyIsBradley YouTube channel and I made videos about many companies I met at CES 2023. Then, for AR/VR/MR 2023, I wrote an eight (8) part series of articles on AR/VR/MR. For CES 2024, I wrote a three (3) part series covering many companies.

However, with my Apple Vision Pro (AVP) coverage plus other commitments, I couldn’t see how to cover the over 50 companies I met with in January. While the AVP is such a major product in mixed reality and is important for a broad audience, I don’t want the other companies working on MR headsets, displays, and optics to be forgotten. So, I asked Jason McDowall of The AR Show to moderate a video presentation of the over 50 companies, with each company getting one slide.

Jason and I recorded for about 4 hours (before editing), split over two days, which works out to less than 5 minutes per company. This first hour of the video covers primarily headset companies. I made an exception for Avegant’s prototype using Dispelix waveguides, as it seemed to fit with the headsets.

In editing the video, I realized my presentation was a little “thin” regarding details on some companies. I’m adding some supplementary information and links to this article. I also moved a few companies around in the editing process and re-recorded a couple of sections, so the slide numbers don’t always go in order.

Subscription Options Coming to KGOnTech

Between travel expenses and buying an Apple Vision Pro (AVP) with a MacBook for testing the AVP, I spent about $12,000 out of pocket in January and early February alone. Nobody has ever paid to be included (or excluded) in this blog. This blog, which started as a part-time hobby, has become expensive in terms of money and a full-time job. What makes it onto the blog is the tip of the iceberg of time spent on interviews, research, photographing and editing pictures and videos, and travel.

As mentioned above, to make financial sense of all this and pay some people to help me, I’m in the process of setting up subscription services for companies and planning on (paid) webinars for individuals. If you or your company might be interested, please email subscriptions@kgontech.com.

Outline of the Video and Additional Information

Below is an outline of the first hour of the video, along with some additional comments and links to more information. The times in blue on the left of each subsection below are the times in the YouTube video discussing a given company.

0:00 Jason McDowall of the AR Show and Karl Guttag of KGOnTech introductions.

Jason and I briefly introduced ourselves.

2:59 Mixed Reality Major Design Challenges

My AR/MR design challenge list started with 11 items in a December 2015 guest article in Display Daily, Sorry, but there is no Santa Claus – Display Daily. Since then, the list has grown to 23.

The key point is that improving any of these items will negatively affect multiple other items. For example, having a wider field of view (FOV) will make the optics bigger, heavier, and more expensive. It will also require a higher resolution display to support the same or better angular resolution, which, in turn, means more pixels requiring more processing, which will need more power, which means bigger batteries and more thermal management. All these factors combine to hurt cost and weight.
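As a back-of-the-envelope illustration of that coupling (the 45 pixels-per-degree, 24-bit, 90 Hz numbers are my assumptions, not from the talk):

```python
def display_requirements(fov_deg, ppd=45, bits_per_pixel=24, fps=90):
    """Pixels and raw video bandwidth needed to hold a given angular
    resolution (pixels per degree) across a square FOV."""
    pixels_per_side = fov_deg * ppd
    total_pixels = pixels_per_side ** 2
    bandwidth_gbps = total_pixels * bits_per_pixel * fps / 1e9
    return pixels_per_side, total_pixels, bandwidth_gbps

for fov in (30, 50, 70):
    side, total, bw = display_requirements(fov)
    print(f"{fov} deg FOV: {side:.0f} px/side, {total/1e6:.1f} MP, {bw:.1f} Gbps raw")

# 30 deg -> 1.8 MP; 70 deg -> 9.9 MP: widening the FOV 2.3x multiplies the
# pixel count (and the processing, power, and thermal load) by ~5.4x at the
# same angular resolution.
```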

6:34 Xreal (Formerly Nreal)

I’ve followed Nreal (now Xreal) since its first big splash in the U.S. at CES 2019 (wow, five years ago). Xreal claims to have shipped 300,000 units last year, making it by far the largest unit volume shipper of optical AR headsets.

At CES 2024, Xreal demonstrated a future design that goes beyond their current headsets and adds cameras for image recognition and SLAM-type features.

BMW invited me to a demo of their proof-of-concept glasses-based heads-up display. The demo used Xreal glasses as the display device. BMW had added a head-tracking device under the rearview mirror to lock the user’s view to the car.

But even at CES 2019, Nreal was a case of déjà vu, as it looked so much like a cost-reduced version of the Osterhout Design Group (ODG) R-9 that I first saw at CES 2017 and had started covering and discussing in 2016. The ODG R-9 and the original Nreal had similar birdbath designs and used a Sony 1920×1080 Micro-OLED display. According to a friend of this blog, former ODG R-9 designer and now CEO of the design firm PulsAR, David Bonelli, there are still some optical advantages of the ODG R-9 that others have yet to copy.

Below is a link to my recent article on CES, which discusses Xreal and my ride wearing the BMW AR demo. I have also included some links to my 2021 teardown of the Nreal birdbath optics and 2016 and 2017 articles about the ODG-R9.

11:48 Vuzix

Vuzix was founded in 1997, before making see-through AR devices, much less waveguides, became practical. It now has a wide range of products aimed at different applications. Vuzix founder and CEO Paul Travers has emphasized the need for rugged, all-day wearable AR glasses.

Vuzix historically has primarily had small, lightweight designs, with most later products having a glasses-like form factor. Vuzix originally licensed waveguide technology from Nokia, the same technology Microsoft licensed and later acquired for its Hololens 1. Vuzix says its current waveguide designs are very different from what it licensed from Nokia.

Vuzix’s current waveguide-based products include the monocular BLADE and the biocular SHIELD, which use Texas Instruments DLP displays. Vuzix’s latest products are the Ultralite and Ultralite-S, which use Jade Bird Display MicroLEDs driving a waveguide. The current monocular designs use a green-only Jade Bird Display (JBD) with 640 by 480 resolution and weigh only 34 grams. Vuzix has also announced plans to partner with the French startup Atomistic to develop full-color, single-device MicroLEDs.

Multiple companies use Vuzix glasses as the headset platform to add other hardware and software layers to make application AR headsets. Xander was at CES with their AI voice-to-text glasses (discussed later). The company 3D2Cut has AI software that shows unskilled workers where to prune wine grape vines based on inputs from vine pruning experts. At last year’s CES, I met with 360world and their ThermalGlass prototype, which added thermal cameras to a Vuzix headset.

Below are links to my 2024 CES article that included Vuzix, plus a collection of other articles about Vuzix from prior years:

17:13 Digilens

I’ve met with Digilens many times through the years. This year was primarily an update on improvements since their major Argo headset announcement last year (see the 2023 article and video via the links below).

Digilens said that in response to my comments last year, they designed an Argo variant with a rigid headband that does not rest on the nose and can be flipped up out of view. This new design supports wearing ordinary glasses and is more comfortable for long-term wear. Digilens said many of their customers like this new design variation. A major problem I see with the Apple Vision Pro is the way it is uncomfortably clamped to the face and that it does not flip up like, say, the Lynx MR headset (see also video with Brad Lynch) or the Sony MR headset announced at CES 2024 (which looks very much like the Lynx headset).

Digilens also showed examples of their one-, two-, and three-layer waveguides, which trade off weight and cost against differences in image quality. They also showed examples of moving the exit grating to different locations in the waveguide.

As I have covered Digilens so much in the past (see links below for some more recent articles), this year’s video was just an update:

20:00 Avegant

Avegant has become a technology development company. They are currently focused on designing small LCOS engines for AR glasses. They presented an update at the AR/VR/MR 2024 conference. Right before the conference, Avegant announced its development of “Spotlight™” to improve contrast by selective illumination of the LCOS panel, similar to LED array LCD TVs with local dimming.

Avegant has shown a very small 30-degree FOV, LCOS-based, 1280×720-pixel light engine supporting a glasses-like form factor. Avegant’s glasses designs support higher resolution, larger FOV, and a smaller form factor than laser beam scanning or X-Cube-based MicroLEDs (see TCL below). They also got over 1 million nits out of their 30-degree FOV engines. While Avegant designed and built the projector engine and prototype glasses, they used Dispelix waveguides (to be discussed next).

Below are links to blog articles about Avegant’s small LCOS engines:

24:46 Dispelix (and Avegant)

Dispelix is a waveguide design company, not a headset maker. Avegant, among others, was using Dispelix waveguides (and why they were discussed at this point in the video).

Dispelix presented at the AR/VR/MR conference, where they discussed their roadmap to improve efficiency, reduce “eye glow,” and reduce “rainbow artifacts” caused by diffraction grating light capture.

Dispelix claims to have a roadmap to improve light throughput by a factor of ~4.5 over its current Selva design.

Dispelix, like several other diffractive waveguide companies, including Vuzix and Digilens, uses pantoscopic (front-to-back) tilt to reduce the eye glow effect, which is common with most other diffractive waveguides (most famously, Hololens). It turns out that for every one degree of tilt, the “glow” is tilted down by two degrees, such that with just a few degrees of tilt, the glow is projected well below most people’s view. Dispelix has said that a combination of grating designs and optical coatings can nearly eliminate the glow in future designs.
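The two-for-one relationship is just the mirror deflection rule (a generic geometric note, not Dispelix-specific data):

```latex
% Tilting a reflecting surface by \theta deflects the reflected ray by 2\theta:
\delta_{\mathrm{glow}} = 2\,\theta_{\mathrm{tilt}}
% Example: 4^\circ of pantoscopic tilt sends the forward glow
% 2 \times 4^\circ = 8^\circ downward, below most onlookers' sight lines.
```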

Another problem (not discussed in the video) that has plagued diffractive waveguides has been the “rainbow artifact” caused by external light, particularly overhead from in front or behind the waveguide, being directed to the eye from the diffraction gratings. Because the gratings effect is wavelength-dependent, the light is broken into multiple colors (like a rainbow). Dispelix says they are developing designs that will direct the unwanted external light away from the eye.

(2024) CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, EverySight, Mojie, TCL color µLED

30:50 Tilt-Five (and CEO Jeri Ellsworth)

I met with Jeri Ellsworth, the CEO of Tilt-Five, at CES. In addition to getting an update on Tilt-Five (with nothing I can’t talk about), Jeri and I discussed our various histories working on video game hardware, graphics co-processors, and augmented reality.

BTW, Jeri Ellsworth, Jason McDowall, Adi Robertson (editor at The Verge), Ed Tang (CEO of Avegant), and I are slated to be on a panel discussion at AWE 2024.

Below are some links to my prior reporting on Tilt-Five.

36:05 Sightful Spacetop

Sightful’s Spacetop is essentially a laptop-like keyboard and computer with Xreal-type birdbath optics using 1920×1080 OLED microdisplays with a 52-degree FOV. Under the keyboard are the processing system (Qualcomm Snapdragon XR2 with its Kryo™ 585 8-core 64-bit CPU and Adreno™ 650 GPU), memory (8GB), flash (128GB), and battery (5 hours of typical use). The system runs a “highly modified” Android operating system.

I saw Sightful at the ShowStoppers media event at CES, and they were nice enough to bring me custom prescription inserts at the AR/VR/MR conference. Sightful’s software environment supports multiple virtual monitors/windows of various sizes, which are clipped to the glasses’ 1920×1080, 52-degree view. I believe the system uses the inertial sensors in the headset to make the virtual monitors appear stationary, as opposed to the more advanced SLAM (simultaneous localization and mapping) used by many larger headsets.

As a side note, my first near-eye-display work in 1998 was on a monocular headset to be used with laptops as a private display when traveling. I designed the 1024×768 (high resolution for a 1998 microdisplay) LCOS display device and its controller. The monocular headset used color sequential LED illumination with birdbath mirror optics. Given the efficiency and brightness of LEDs of the day, it was all we could do to make a non-see-through monocular device. Unfortunately, the dot-com bust happened in 1999, which took out many high-tech startups.

I wrote about Sightful in my 2024 CES coverage:

36:05 Nimo

Nimo’s “Spatial Computing” approach is slightly different from Sightful’s. Instead of combining the computing hardware with the keyboard like Sightful, Nimo has a small computing and battery module that works as a 3-D spatial mouse with a trackpad (on top). Nimo has a USB-C connection for AR glasses, WiFi 6, and Bluetooth 5.1 for communication with an (optional) wireless keyboard.

The computing specs resemble Sightful’s, with a Qualcomm® XR2 8-core CPU, 8GB RAM, and 128GB Storage. Nimo supports working with Rokid, Xreal, and its own LetinAR-Optics-based 1920×1080 OLED AR glasses via its USB-C port, which provides display information and power.

Like Sightful, Nimo has a modified Android Operating system that supports multiple virtual monitors/windows. It uses the various glasses’ internal sensors to detect head movement to keep the monitors stationary in 3-D space as the user’s head moves.

I wrote about Nimo Planet in my 2024 CES coverage:

38:59 .Lumen (headset for the blind)

Lumen is a headset for blind people that incorporates lidar, cameras, and other sensors. Rather than outputting a display image, it provides haptic and audible feedback to the user. I don’t know how to judge this technology, but it seems like an interesting case where today’s technology could help people.

40:07 Ocutrx Oculenz

Ocutrx’s OcuLenz was initially aimed at helping people with macular degeneration and other forms of low vision. However, both at its booth at the CES ShowStoppers event and on its website, Ocutrx has emphasized that the headset could be used for more than low vision, including by gamers, surgeons, and military personnel. The optical design was done by an old friend, David Kessler, whom I ran into at the Ocutrx booth at CES and the AR/VR/MR conference.

The Oculenz uses larger-than-typical birdbath optics to support a 72-degree (diagonal) FOV. It uses 2560 x 1440 pixels per eye, so they will have a similar angular resolution but wider FOV than the more common 1920×1080 birdbath glasses (e.g., Xreal), which typically have 45- to ~50-degree FOVs. Unlike the typical birdbath glasses, which have separate processing, the Oculenz integrates a Qualcomm Snapdragon® XR2 processor, Wi-Fi, and cellular connectivity. This headset was originally aimed at people with low vision as a stand-alone device.

I wrote about Ocutrx and some of the issues of funding low-vision glasses in my earlier report on CES 2024, linked below:

44:22 Everysight

Everysight has AR glasses in a glasses-like form factor. They are designed to be self-contained, weigh only 47 grams, and have no external wiring. They use a 640×400 pixel full-color OLED display and can achieve >1000 nits to the eye.

Everysight uses a “Pre-Compensated Off-Axis” optical design, which tends to get more than double the light from the display to the eye while enabling more than three times the real-world light to pass through the display area compared to birdbath (e.g., Xreal) designs. With this design, the pre-compensation optics pre-correct for hitting the curved semi-mirror combiner off-axis. Typically, this mirror will be 50% or less reflective and only has to be applied over where the display is to be seen.

However, the Everysight glasses only support a rather small 22-degree FOV, and the eyebox is rather small. While Everysight has reduced the pantoscopic tilt of the lenses over prior models, the latest Maverick models still tilt toward the user’s cheeks more than most common glasses.

UPDATE 4/2/2024: Everysight responded to my original eyebox comment: “With respect to the eyebox, we take care of that with different sizes (Maverick today has two sizes – Medium and Large). The important part is that once you have the correct size, glass or eye movements won’t take you out of the eyebox. We believe that this is a much better tradeoff than a one-size-fits-all [with] low optical efficiency and enables you to use OLEDs in sunny days outdoors, even with clear visors.”

Thus far, Everysight seems to be marketing its glasses more to the sports market, which needs small, lightweight headsets with bright displays for outdoor use.

If vision correction is not required, the lenses can be easily swapped out for various types of tint. More recently, Everysight has been able to support prescription lenses. For prescriptions, the inner curved mirror corrects for the virtual image, and a corrective lens on the outside corrects for the real world, including correcting for the curvature of the inner surface with the semi-mirror.

Everysight spun out of the large military company Elbit, which perfected the pre-compensated off-axis design for larger headsets. This optical design is famously used in the F-35 helmet and, more recently, in the Skylens head-wearable HUD, which has received FAA approval for use in multiple civilian aircraft, recently including the 737NG family.

Everysight was discussed in my CES 2024 coverage linked to below:

48:42 TCL RayNeo X2 and X2 Lite

At CES 2024, TCL showed their RayNeo X2 and their newer X2 Lite. I have worked with 3-chip LCOS projectors with an X-Cube in the past, and I was curious to see the image quality, as I know from experience that aligning X-Cubes is non-trivial, particularly with the smaller sizes of the Jade Bird Display red, green, and blue MicroLED displays.

Overall, the newer X2 Lite using the Applied Materials (AMAT) waveguides looked much better than the earlier RayNeo X2 (non-Lite). Even the AMAT waveguides had significant front projection, but as discussed with respect to Dispelix above, this problem can be managed, at least for smaller FOVs (the RayNeo X2s have a ~30-degree diagonal FOV).

I covered the TCL color µLED in more detail in my CES 2024 coverage (link below). I have also included links to articles discussing the Jade Bird Display MicroLEDs and their use of an X-Cube as a color combiner:

55:54 Mojie/Meta Bounds

Mojie/Meta Bounds showed 640×480 green-only MicroLED-based glasses claiming 3,000 nits (to the eye), 90% transparency (without tinting), a 28-degree FOV, and a weight of only 38 grams. These were also wireless and, to a first approximation, very similar to the Vuzix Ultralite. One thing that makes them stand out is that they use a waveguide made of plastic resin (most use glass).

Many companies are experimenting with plastic waveguides to reduce weight and improve safety. So far, the color uniformity with full-color displays has been worse than with glass-based waveguides. However, the uniformity issues are less noticeable with a monochrome (green) display. Mitsui Chemicals and Mitsubishi Chemicals, both of Japan, are suppliers of resin plastic substrate material for waveguides.

Below is a link to my article on Mojie/Meta Bounds in my CES 2024 coverage:

57:59 Canon Mixed Reality

Canon had a fun demo based on their 100+ camera Free Viewpoint Video System. Basically, you could sit around a table and see a basketball game (I think it was the 2022 NBA All-Star Game) played on that table from any angle. Canon has been working on this technology for a decade or more, with demos for both basketball and soccer (football). While it’s an interesting technology demo, I don’t see how this would be a great way to watch a complete game. Even with over 100 cameras and the players being relatively small (far away virtually), one could see gaps that the cameras couldn’t cover.

Canon also showed a very small passthrough AR camera and lens setup. While it was small, the FOV and video quality were not impressive. Brad Lynch of SadlyItsBradley found it to be pointless.

I have personally purchased a lot of Canon camera equipment over the last 25 years (including my Canon R5, which I take pictures with for this blog), so I am not in any way against Canon. However, as I discussed with Brad Lynch about Canon’s booth at CES 2023 (YouTube Link), I can’t see where Canon is going or what message they are trying to send in terms of mixed reality despite their very large and expensive booth. On the surface, Canon seems to be dabbling in various MR technologies, but it is not moving in a clear direction.

59:54 Solos (and Audio Glasses)

Solos makes audio-only glasses similar to the Meta/Ray-Ban Wayfarer (but without cameras). These glasses emphasize modular construction, with all the expensive “smarts” in the temples so that the front part with the lenses can be easily swapped.

Like several others, Solos uses cellular communication to connect to ChatGPT to do on-the-fly translations. What makes Solos more interesting is that its chairman is John Fan, who is also the chairman of Lightning Silicon Technology (a spinoff of Kopin Displays), a maker of OLED microdisplays. At Lightning Silicon’s CES 2024 suite, John Fan discussed that incorporating the displays into the Solos glasses was an obvious future step.

CES (Pt. 2), Sony XR, DigiLens, Vuzix, Solos, EverySight, Mojie, TCL color µLED

1:01:16 Xander

While I saw Xander in the AARP-sponsored AgeTech Summit booth at CES 2024, I didn’t get to meet with them. Xander hits on a couple of issues I feel are important. First, they show how AR technology can be used to help people. Secondly, they show what is expected to be a growing trend of adding basic visual information to augment audio.

While I (Karl) missed Xander at CES 2024, it turns out that Jason McDowall’s The AR Show (with guest host Kaden Pierce) recently interviewed Xander CEO Alex Westner.

Next Time – Optics and Display Devices

The video’s next part will discuss optical and display device companies.

CES (Pt. 3), Xreal, BMW, Ocutrx, Nimo Planet, Sightful, and LetinAR

Update 1/28/2024 – Based on some feedback from Nimo Planet, I have corrected the description of their computer pod.

Introduction

The “theme” for this article is companies I met with at CES with optical see-through Augmented and Mixed Reality using OLED microdisplays.

I’m off to SPIE AR/VR/MR 2024 in San Francisco as I release this article. So, this write-up will be a bit rushed and likely have more than the usual typos. Then, right after I get back from the AR/VR/MR show, I should be picking up my Apple Vision Pro for testing.

Xreal

Xreal (formerly Nreal) says they shipped 350K units in 2023, more than all other AR/MR companies combined. They had a large booth on the CES floor, which was very busy. They had multiple public and private demo stations.

From 2021 KGOnTech Teardown

This blog has followed Xreal/Nreal since its first appearance at CES in 2019. Xreal uses an OLED microdisplay in a “birdbath” optical architecture first made popular by (the now defunct) Osterhout Design Group (ODG) with their R8 and R9, which were shown at CES in 2017. For more on this design, I would suggest reading my 2021 teardown articles on the Nreal first product (Nreal Teardown: Part 1, Clones and Birdbath Basics, Nreal Teardown: Part 2, Detailed Look Inside, and Nreal Teardown: Part 3, Pictures Through the Lens).

The birdbath optical architecture Xreal still uses inherently blocks about 70% of the real-world light, acting like moderately dark sunglasses. About 10% of the display’s light makes it to the eye, which is much more efficient than waveguides, which in turn are much thinner and more transparent. Xreal claims their newer designs support up to 500 nits, meaning the Sony Micro-OLEDs must output about 5,000 nits.
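Putting those numbers together (the ~10% optical efficiency is the approximate birdbath figure cited above):

```latex
L_{\mathrm{eye}} \approx \eta_{\mathrm{optics}} \times L_{\mathrm{display}}
\quad\Rightarrow\quad
0.10 \times 5{,}000\ \mathrm{nits} \approx 500\ \mathrm{nits}
```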

With investment, volume, and experience, Xreal has improved its optics and image quality, but it can’t improve much beyond the inherent limitations of a birdbath, particularly in terms of transparency. Xreal recently added an LCD dimming shutter to selectively block more or all of the real world with their new Xreal Air 2 Pro and their latest Air 2 Ultra, for which I was given a demo at CES.

The earlier Xreal/Nreal headsets were little more than 1920×1080 monitors you wore with a USB-C connection for power and video. Each generation has added more “smarts” to the glasses. The Air 2 Ultra includes dual 3-D IR camera sensors for spatial recognition. Xreal and (to be discussed later) Nimo, among others, have already picked up on Apple’s “Spatial Computing,” referring to their products as affordable ways to get into spatial computing.

Most of the newer headsets are supported either via a cell phone or Xreal’s “Beam” compute module, which can mirror or cast one or more virtual displays from a computer, cell phone, or tablet. While there may be more monitors virtually, they are still represented on a 1920×1080 display device. I believe (I forgot to ask) that Xreal is using internal sensors to detect head movement to virtualize the monitors as the head moves.

Xreal’s Air 2 Ultra demo showcased the new spatial sensors’ ability to recognize hand and finger gestures. Additionally, the sensors could read “bar-coded” dials and slides made from cardboard.

BMW AR Ride Concept (Using Xreal Glasses)

In addition to seeing Xreal devices on their own, I was invited by BMW to take a ride trying out their Augmented Reality HUD on the streets around the convention center. A video produced by BMW gives a slightly different and abbreviated trip. I should emphasize that this is just an R&D demonstration, not a product that BMW plans to introduce. Also, BMW made clear that they would be working with other makes of headsets but that Xreal was the most readily available.

To augment the view through the Xreal glasses, BMW mounted a head-tracking camera under the rearview mirror. This allows BMW to lock the generated image to the physical car. Specifically, it allowed them to (selectively) block/occlude parts of the virtual image hidden behind the front A-pillar of the car. Not shown in the pictures from BMW below (click on the pictures to see them bigger) is that the images would start in the front window, be hidden by the A-pillar, and then continue in the side window.

BMW’s R&D is looking at AR glasses use by both drivers and passengers. They discussed that they would have different content for the driver, which would have to be simplified and more limited than what they could show the passenger. There are many technical and government/legal issues (all 50 states in the U.S. have different laws regarding HUD displays) with supporting headsets on drivers. From a purely technical perspective, a head-worn AR HUD has many advantages and some disadvantages versus a fixed HUD on the windshield or a dash combiner (too much to get into in this quick article).

Ocutrx (for Low-Vision and other applications)

Ocutrx’s OcuLenz also uses “birdbath” optics. The OcuLenz was originally designed to support people with “low vision,” especially people with macular degeneration and other eye problems that block parts of a person’s vision. People with macular degeneration lose the high-resolution, high-contrast, and color-sensitive parts of their vision. They must rely on other parts of the retina, commonly called peripheral vision (although it may include more than just what is technically considered peripheral vision).

A low-vision headset must have a wide FOV to reach the outer parts of the retina. They must magnify, increase color saturation, and improve contrast over what a person with normal vision would want to see. Note that while these people may be legally blind, they still can see, particularly with their peripheral vision. This is why a headset that still allows them to use their peripheral vision is important.

About 20 million people in the US alone have what is considered “low vision,” and about 1 million more people develop low vision each year as the population ages. It is the biggest identifiable market I know of today for augmented reality headsets. But there is a catch that needs to be fixed for this market to be served. By the very nature of the people involved, having low vision and often being elderly, they need a lot of professional help while often being on a fixed or limited income. Unfortunately, private or government (Medicare/Medicaid) insurance will rarely cover either the headset cost or the professional support required. There have been bills before Congress to change this, but so far, nothing has happened of which I am aware. Without a way to pay for the headsets, the volumes are low, which makes the headsets more expensive than they need to be.

In the past, I have reported on Evergaze’s seeBoost, which exited this market while developing their second-generation product for the economic reasons (lack of insurance coverage) above. I have also discussed NuEyes with Bradley Lynch in a video after AWE 2022. The economic realities of the low-vision market cause companies like NuEyes and Ocutrx to look for other business opportunities for their headsets. It is a really frustrating situation, knowing that technology could help so many people. I hope to cover this topic in more detail in the future.

Nimo Planet (Nimo)

Nimo Planet (Nimo) makes a small computer that acts as a spatial mouse pointer for AR headsets with a USB-C port for power and video input. It replaces the need for a cell phone and can send mirror/casting video information from other devices to the headset. Still, Nimo Core is a fully standalone computer with Nimo OS, which simultaneously supports Android, Web, and Unity Apps. No other host computer is needed.

According to Nimo, every other multi-screen solution in the market is developed in web platforms or UnityApp, which limits them to running only Web Views. Nimo OS created a new Stereo Rendering and Multi-Window architecture in AOSP to run multiple Android, Unity, and Web Apps simultaneously.

Nimo developed their glasses based on LetinAR optics and also supports other AR glasses. Most notably, they just announced a joint development agreement with Rokid.

I got a brief demonstration of Nimo’s multi-windows on an AR headset. They use the inertial sensors in the headset to detect head movement and move the view of the multiple windows accordingly. It is like you are looking at multiple monitors through a 1920×1080 window. No matter how big the size or number of virtual monitors, they will be clipped to that 1920×1080 view. This device lets you move your head to select what you see. I discussed some of the issues with simulating virtual monitors with head-mounted displays in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous, and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous.
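A minimal sketch of the head-locked-window idea (my illustration, not Nimo's code; the FOV numbers and monitor layout are assumptions):

```python
GLASSES_FOV_H, GLASSES_FOV_V = 45.0, 25.0  # assumed glasses FOV in degrees

# Virtual monitors parked at fixed yaw/pitch angles in the room.
MONITORS = [
    {"name": "left", "yaw": -40.0, "pitch": 0.0},
    {"name": "center", "yaw": 0.0, "pitch": 0.0},
    {"name": "right", "yaw": 40.0, "pitch": 0.0},
]

def visible_monitors(head_yaw, head_pitch):
    """Subtract the IMU-reported head pose from each world-locked monitor
    and keep those whose centers land inside the fixed display window."""
    seen = []
    for m in MONITORS:
        rel_yaw = m["yaw"] - head_yaw
        rel_pitch = m["pitch"] - head_pitch
        if abs(rel_yaw) < GLASSES_FOV_H / 2 and abs(rel_pitch) < GLASSES_FOV_V / 2:
            seen.append(m["name"])
    return seen

print(visible_monitors(0.0, 0.0))   # -> ['center']
print(visible_monitors(38.0, 0.0))  # -> ['right'] after turning the head right
```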

Sightful

Sightful’s Spacetop is similar to the Nimo Planet type of device in some ways. With the Spacetop, the computer is built inside the keyboard and touchpad, making it a full-fledged computer. Alternatively, the Spacetop can be viewed as a laptop computer where the display uses AR glasses rather than a flat panel.

Like Nimo and Xreal’s Beam and many other new Mixed Reality devices, Sightful supports multiple windows. I don’t know if they have sensors for 3-D sensing, so I suspect they use internal sensors to detect head movement.

Sightful’s basic display specs resemble other birdbath AR glasses designs from companies like Xreal and Rokid. I have not had a chance, however, to compare them seriously.

LetinAR

I have been writing about LetinAR since 2018. LetinAR started with a “Pin Mirror” type of pupil replication. They have now moved on to a series of what I will call “horizontal slat pupil replicators.” They also use total internal reflection (TIR) and a curved mirror to move the focus of the image from an OLED microdisplay before it goes to the various pupil-expanding slats.

While LetinAR’s slat design improves image quality over its earlier pin mirrors, it is still imperfect. When looking through the lenses (without a virtual image), the view is a bit “disturbed” and seems to have diffraction line effects. Similarly, you can perceive gaps or double images depending on your eye location and movement. LetinAR is working on continuing to improve this technology. While their image quality is not as good as the birdbath designs, they offer much better transparency.

LetinAR seems to be making progress with multiple customers, including JorJin, which was demonstrating in the LetinAR booth; Sharp, which had a big demonstration in its booth (while they didn’t say whose optics were in the demo, it was obviously LetinAR’s – see pictures below); and the headset discussed above by Nimo.

Conclusions

Sorry, there is no time for major conclusions today. I’m off to the AR/VR/MR Conference and Exhibition.

I will note that regardless of the success of the AVP, Apple has already succeeded in changing the language of Augmented and Mixed reality. In addition to almost everyone in AR and Mixed reality talking “AI,” many companies now use “Spatial Computing” to refer to their products in their marketing.
