The First $100 You Should Spend on Meta Quest Games
Quest 3S launches on October 15th, letting you dive into some of the best free games and experiences out there on the cheap, as well as a ton of paid Quest content built up over the years. Whether you’re into active games, puzzles, or just want to slice or shoot the ever-living crap out of something, we’re here to help you settle into your new headset with a few games that should keep you playing for hundreds of hours yet to come.
Note: This list includes only Quest-native games. Don't forget that you can also play PC VR games with Quest Link, Air Link, or Valve's Steam Link, provided you have a VR-ready PC. Find out if your PC is ready for Link.
This list is a great starting point if you’re looking to burn pretty close to a single Benjamin, with each genre section featuring some tried-and-true games for cardio freaks, shooter fans, puzzle nerds, fantasy swordplay geeks, and much more. Click through each category, or pick and mix using the legend below:
- Multiplayer Shooters (below)
- Single Player Shooters
- Active Music Games
- Purely Puzzles
- Adventure + Puzzles
- Swords & Sorcery
- Fitness Games
- Relaxing Casual Games
Multiplayer Shooting Madness
Zero Caliber 2 – $28
Zero Caliber 2 packs in an eight-hour single-player campaign that you can play in four-player co-op, plus classic multiplayer game modes with up to 10 players. It looks (and plays) amazing on Quest 3.
Pavlov Shack – $20
After a long stint in free early access, Pavlov Shack brings a torrent of awesome features that make it truly worthy of its $20 price tag. With 65 realistic weapons, you can team up in a 5v5 match of classic Search and Destroy, uncover traitors in a casual murder mystery, hunt monsters in an asymmetrical 1 vs. 9 mode, or operate vehicles in a 4v4 WWII match to defend Stalingrad. There are also hundreds of community-made mods.
Breachers – $30
In Breachers, you plan your assault or orchestrate your defense as a team through intense close-quarters combat. Whether you play as an enforcer or a revolter, master your nifty gadgetry, customize your powerful weaponry and beat your opponents in stunning environments. Intuitive to grasp. Endlessly playable. Basically Counter-Strike.
Ghosts of Tabor – $20
Ghosts of Tabor is an extraction-based game with both PVP and PVE survival where you will have to use your wits, skills, and resources to survive. Inspired by games such as Escape from Tarkov and DayZ, the game features a variety of scenarios, from scavenging and looting to crafting. Make your safehouse your own by building a personal collection of weapons and gear to display in your armory.
Into Black – $25
This is another strong single-player game that just so happens to make everything more fun with the addition of four-player co-op, essentially replicating a lot of the action of Deep Rock Galactic in VR. Mine the caves, shoot the many-legged beasties, and try to repair your ship to get the hell out (and back in again, because it's so fun).
Meta Orion AR Glasses (Pt. 1 Waveguides)
Introduction
While Meta's announcement of the Orion prototype AR glasses at Meta Connect made big news, there were few technical details beyond it having a 70-degree field of view (FOV) and using Silicon Carbide waveguides. While they demoed to the more general technical press and "influencers," they didn't seem to invite the more AR- and VR-centric people who might be more analytical. Via some Meta patents, a Reddit post, and studying videos and articles, I was able to tease out some information.
This first article will concentrate on Orion’s Silicon Carbide diffractive waveguide. I have a lot of other thoughts on the mismatch of features and human factors that I will discuss in upcoming articles.
Wild Enthusiasm Stage and Lack of Technical Reviews
In the words of Yogi Berra, “It’s like deja vu all over again.” We went through this with the Apple Vision Pro, which went from being the second coming of the smartphone to almost disappearing earlier this year. This time, a more limited group of media people has been given access. There is virtually no critical analysis of the display’s image quality or the effect on the real world. I may be skeptical, but I have seen dozens of different diffractive waveguide designs, and there must be some issues, yet nothing has been reported. I expect there are problems with color uniformity and diffraction artifacts, but nothing was mentioned in any article or video. Heck, I have yet to see anyone mention the obvious eye glow problem (more on this in a bit).
The Vergecast podcast video discusses some of the utility issues, and The Verge's related video, Exclusive: We tried Meta's AR glasses with Mark Zuckerberg, gives some more information about the experience. Thankfully, unlike Meta's own (simulated) through-the-optics videos, The Verge clearly marked its footage as "Simulated" (screen capture on the right).
As far as I can tell, there are no true “through-the-optics” videos or pictures (likely at Meta’s request). All the images and videos I found that may look like they could have been taken through the optics have been “simulated.”
Another informative video was by Norm Chan of Adam Savage's Tested, particularly the last two-thirds of the video after his interview with Meta CTO Andrew Bosworth. Norm discussed that the demo was "on rails," with limited demos in a controlled room environment. I'm going to quote Bosworth a few times in this article because he added information; while he may have been giving some level of marketing spin, he seems to be generally truthful, unlike former Hololens 2 leader Alex Kipman, who was repeatedly dishonest in his Hololens 2 presentation. I documented this in several articles, including Hololens 2 and why the resolution math fails, Alex Kipman Fibbing about the field of view, Alex Kipman's problems at Microsoft (with references to other places where Kipman was "fibbing"), and Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1); you can also enter "Kipman" in this blog's search feature.
I'm not against companies making technology demos in general. However, making a big deal at Meta Connect, rather than at a technical conference like SIGGRAPH, about something that is a "prototype" and not a "product" indicates AR's importance to Meta. It invites comparisons to the Apple Vision Pro, which Meta probably intended.
It is also a little disappointing that they shared the demos only with selected "invited media" who, for the most part, lack deep expertise in display technology and are easily manipulated by a "good" demo (see Appendix: "Escape from a Lab" and "Demos Are a Magic Show"). Such outlets will naturally tend to pull punches to keep their access to new product announcements from Meta and other major companies. As a result, there is no information about the image quality of the virtual display or any reported issues looking through the waveguides (and there must be some).
Eye Glow
I've watched hours of videos and read multiple articles, and I have yet to hear anyone mention the obvious issue of "eye glow" (front projection). Reviewers talk about the social acceptance of Orion looking like glasses and letting you see the wearer's eyes, but then they don't mention the glaring problem of the wearer's eyes glowing. It stuck out to me precisely because no one mentioned it, even though it is evident in all the videos and many photos.
Eye glow is an issue that diffractive waveguide designers have been trying to reduce or eliminate for years; Lumus reflective waveguides, by contrast, have inherently little eye glow. Vuzix, Digilens, and Dispelix make big points about how they have reduced the problem with diffractive waveguides (see Front Projection ("Eye Glow") and Pantoscopic Tilt to Eliminate "Eye Glow"). However, these diffractive waveguide designs with greatly reduced eye glow have relatively small (25-35 degree) FOVs. The Orion design supports a very wide 70-degree FOV while trying to fit the size of a "typical" (if bulky) glasses frame; I suspect that the design methods used to meet the size and FOV requirements meant that the issue of eye glow could not be addressed.
Light Transmission (Dimming?)
The transmissivity seems to vary in the many images and videos of people wearing Orion. It's hard to tell for certain, but it appears to change. On the right, two frames switch back and forth, and the glasses darken as the person puts them on (from the video Orion AR Glasses: Apple's Last Days).
Because I'm judging from videos and pictures with uncontrolled lighting, it's impossible to know the exact transmissivity, but I can compare it to other AR glasses. Below, the highly transmissive Lumus Maximus glasses (greater than 80% transmissivity) and the Hololens 2 (~40%) are compared to the two dimming levels of the Orion glasses.
Below is a still frame from a Meta video showing some of the individual parts of the Orion glasses. They appear to show an unusually dark cover glass, a dimming shutter (possibly liquid crystal) with a drive circuit attached, and a stack of flat optics including the waveguide with electronics connected to it. In his video, Norm Chan stated, "My understanding is the frontmost layer can be like a polarized layer." This would explain why the cover "glass" (which could be plastic) looks so dark compared to the dimming shutter, since a liquid crystal cell on its own is nearly transparent, as it only changes the polarization of light.
If it does use a polarization-based dimming structure, this will cause problems when viewing polarization-based displays (such as LCD-based computer monitors and smartphones).
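For a feel of the numbers, Malus's law is enough to show the effect: light from an LCD is already polarized, so a polarizer in the glasses passes cos² of the angle between the two polarization axes. The sketch below is a generic illustration of that physics, not anything Meta has confirmed about Orion's dimmer.

```python
import math

def malus_transmission(delta_deg: float) -> float:
    """Fraction of already-polarized light passing a linear polarizer
    rotated delta_deg away from the light's polarization axis (Malus's law)."""
    return math.cos(math.radians(delta_deg)) ** 2

# Viewing an LCD monitor (polarized output) through a hypothetical polarizer in
# the glasses: apparent brightness depends strongly on how the head is tilted.
for tilt in (0, 30, 45, 60, 90):
    print(f"head tilt {tilt:2d} deg -> monitor at {malus_transmission(tilt) * 100:5.1f}% brightness")

# Unpolarized real-world light, by contrast, loses roughly half its power at an
# ideal polarizer regardless of orientation (a bit more for real polarizer films).
```

At a 90-degree head tilt, the monitor would appear nearly black, which is the practical problem with putting a polarizer in front of the eye.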
Orion’s Unusual Diffractive Waveguides
Axel Wong's analysis of Meta Orion's waveguide, which was translated and published on Reddit as Meta Orion AR Glasses: The first DEEP DIVE into the optical architecture, served as a starting point for my study of the Meta Orion optics, and I largely agree with his findings. Based on the figures he showed, his analysis was based on Meta Platforms' (Meta's patent-holding company) US patent application 2024/0179284. Three figures from that application are shown below.
[10-08-2024 – Corrected the order of the Red, Green, and Blue inputs in Fig 10 below]
Overlapping Diffraction Gratings
It appears that Orion uses waveguides with diffraction gratings on both sides of the substrate (see FIG. 12A above). In Figure 10, the first and second "output gratings" overlap, which suggests that these gratings are on different surfaces, and based on FIGs 12A and 7C above, the gratings are on opposite sides of the same substrate. I have not seen this before with other waveguides and suspect it is a complicated and expensive process.
As Axel Wong pointed out in his analysis, supporting such a wide FOV in a glasses form factor necessitated that the two large gratings overlap. Shown below (upper left) is the Hololens 1 waveguide, typical of most other diffractive waveguides. It consists of a small input grating, an (often) trapezoidal expansion grating, and a more rectangular second expansion and output/exit grating. In the Orion (upper right), the two larger gratings effectively overlap so that the waveguide fits in the eyeglasses form factor. I have roughly positioned the Hololens 1 and Orion waveguides at the same vertical location relative to the eye.
Also shown in the figure above (lower left) is Orion’s waveguide wafer, which I used to generate the outlines of the gratings, and a picture (lower right) showing the two diffraction gratings in the eye glow from Orion.
It should be noted that while the Hololens 1 has only about half the FOV of the Orion, the size of the exit gratings is similar. The size of the Hololens 1 exit grating is due to the Hololens 1 having enough eye relief to support most people wearing glasses; the farther away the eye is from the grating, the bigger the grating needs to be for a given FOV.
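As a rough geometric sketch of that relationship (my own back-of-the-envelope numbers, not measured values for either headset), the exit grating has to cover the eye box plus the cone of rays that fills the FOV at the design eye relief:

```python
import math

def exit_grating_extent_mm(fov_deg: float, eye_relief_mm: float, eye_box_mm: float = 10.0) -> float:
    """Approximate 1-D exit-grating size needed so an eye anywhere in the eye box
    still sees the full field of view (simple pinhole-eye geometry)."""
    return eye_box_mm + 2 * eye_relief_mm * math.tan(math.radians(fov_deg / 2))

# Hololens 1-like case: modest ~35 deg FOV but generous eye relief to fit over glasses
print(f"~35 deg FOV, 20 mm eye relief: {exit_grating_extent_mm(35, 20):.1f} mm")
# Orion-like case: ~70 deg FOV with shorter eye relief (no room for glasses underneath)
print(f"~70 deg FOV, 13 mm eye relief: {exit_grating_extent_mm(70, 13):.1f} mm")
```

With these assumed values, the two gratings come out within a few millimeters of each other, which matches the observation that the exit gratings look similar in size despite the FOV difference.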
Light Entering From the “wrong side” of the waveguide
The patent application figures 12A and 7C are curious because the projector is on the opposite side of the waveguide from the eye/output. This would suggest that the projectors are outside the glasses rather than hidden in the temples on the same side of the waveguide as the eye.
Meta's Bosworth, in The WILDEST Tech I've Ever Tried – Meta Orion at 9:55, stated, "And so, this stack right here [pointing to the corner of the glasses of the clear plastic prototype] gets much thinner, actually, about half as thick. 'Cause the projector comes in from the back at that point."
Based on Bosworth’s statement, some optics route the light from the projectors in the temples to the front of the waveguides, necessitating thicker frames. Bosworth said that the next generation’s waveguides will accept light from the rear side of the waveguide. I assume that making the waveguides work this way is more difficult, or they would have already done it rather than having thicker frames on Orion.
However, Bosworth said, "There's no bubbles. Like you throw this thing in a fish tank, you're not gonna see anything." This implies that everything is densely packed into the glasses, so other than saving the volume of the extra optics, there may not be a major size reduction possible. (Bosworth referenced the story of Steve Jobs dropping an iPod prototype in water, where escaping air bubbles proved it could be made smaller.)
Disparity Correction (Shown in Patent Application but not in Orion)
Meta’s application 2024/0179284, while showing many other details of the waveguide, is directed to “disparity correction.” Bosworth discusses in several interviews (including here) that Orion does not have disparity correction but that they intend to put it in future designs. As Bosworth describes it, the disparity correction is intended to correct for any flexing of the frames (or other alignment issues) that would cause the waveguides (and their images relative to the eyes) to move. He seems to suggest that this would allow Meta to use frames that would be thinner and that might have some flex to them.
Half Circular Entrance Gratings
Wong, in the Reddit article, also noticed that the small input/entrance gratings visible on the wafer looked to be cut-off circles, and commented:
However, if the coupling grating is indeed half-moon shaped, the light spot output by the light engine is also likely to be this shape. I personally guess that this design is mainly to reduce a common problem with SRG at the coupling point, that is, the secondary diffraction of the coupled light by the coupling grating.
Before the light spot of the light engine embarks on the great journey of total reflection and then entering the human eye after entering the coupling grating, a considerable part of the light will unfortunately be diffracted directly out by hitting the coupling grating again. This part of the light will cause a great energy loss, and it is also possible to hit the glass surface of the screen and then return to the grating to form ghost images.
Single Waveguide for all three colors?
The patent application seems to suggest that there is a single (double-sided) waveguide for all three colors (red, green, and blue). Most larger-FOV, full-color diffractive AR glasses stack either three waveguides (red, green, and blue; examples include Hololens 1 and Magic Leap 1 & 2) or two waveguides (red+green and green+blue, as in Hololens 2). Dispelix has single-layer, full-color diffractive waveguides that go up to 50 degrees FOV.
Diffraction gratings have a line spacing based on the wavelengths of light they are meant to diffract. Supporting full color with such a wide FOV in a single waveguide would typically cause issues with image quality, including light fall-off in some colors and contrast losses. Unfortunately, there are no “through the optics” pictures or even subjective evaluations by an independent expert as to the image quality of Orion.
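A quick grating-equation check illustrates why a single grating pitch strains to carry all three colors across a wide FOV. The index and pitch below are assumptions picked for illustration, not Orion's actual design values:

```python
import math

n = 2.6            # assumed substrate index, roughly silicon carbide
pitch_nm = 340.0   # assumed input-grating pitch (illustrative only)
theta_c = math.degrees(math.asin(1 / n))   # TIR critical angle inside the substrate
grazing_limit_deg = 75.0                   # practical limit before rays skim the surfaces

print(f"usable in-glass angles: {theta_c:.1f} to {grazing_limit_deg:.0f} deg")
for name, wl_nm in (("blue", 460.0), ("green", 530.0), ("red", 620.0)):
    for field_deg in (-35.0, 0.0, 35.0):   # field extremes of a ~70 deg FOV in air
        # First-order grating equation: n*sin(theta_glass) = sin(theta_air) + wl/pitch
        s = (math.sin(math.radians(field_deg)) + wl_nm / pitch_nm) / n
        if s >= 1.0:
            print(f"{name:5s} at {field_deg:+5.1f} deg field -> evanescent (not guided)")
            continue
        theta = math.degrees(math.asin(s))
        ok = theta_c <= theta <= grazing_limit_deg
        print(f"{name:5s} at {field_deg:+5.1f} deg field -> {theta:5.1f} deg in glass"
              + ("" if ok else "  <-- outside the TIR band"))
```

With these assumed numbers, the shorter wavelengths at one field extreme fall below the critical angle while red at the other extreme approaches grazing; that is the kind of color-dependent light fall-off referred to above, and real designs juggle pitch, index, and grating layout to manage exactly this trade-off.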
Silicon Carbide Waveguide Substrate
The idea of using silicon carbide for waveguides is not unique to Meta. Below is an image from GETTING THE BIG PICTURE IN AR/VR, which discusses the advantages of using high-index materials like lithium niobate and silicon carbide to make waveguides. It is well known that going to a higher-index substrate supports wider FOVs, as shown in the figure below. The problem, as Bosworth points out, is that growing silicon carbide wafers is very expensive. The wafers are also much smaller, yielding fewer waveguides per wafer. From the pictures of Meta's wafers, they get only four waveguides per wafer, whereas a dozen or more diffractive waveguides can be made on larger and much less expensive glass wafers.
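The usual way to see the FOV benefit is an angle-budget estimate: the guided image must stay between the TIR critical angle and a practical grazing angle inside the substrate, and that usable band, mapped back to air, grows with the refractive index. The sketch below is a simplified one-dimensional approximation of that well-known relationship, not Meta's figures:

```python
import math

def max_1d_fov_deg(n: float, grazing_deg: float = 75.0) -> float:
    """Rough 1-D upper bound on the air-side FOV a diffractive waveguide can carry,
    set by the band of in-glass angles between the TIR critical angle and grazing."""
    sin_band = n * math.sin(math.radians(grazing_deg)) - 1.0   # n*sin(grazing) - n*sin(critical)
    sin_band = min(sin_band, 2.0)   # air-side sines are bounded by [-1, 1]
    return 2 * math.degrees(math.asin(sin_band / 2))

for label, n in (("glass n=1.5", 1.5), ("glass n=1.9", 1.9), ("LiNbO3 n~2.3", 2.3), ("SiC n~2.6", 2.6)):
    print(f"{label:13s} -> ~{max_1d_fov_deg(n):4.0f} deg (1-D estimate)")
```

The trend, roughly 25 to 50 degrees for common glasses and well past 70 degrees for silicon carbide, shows why Meta reached for the expensive substrate to hit a 70-degree FOV.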
Bosworth Says "Nearly Artifact-Free" with Low "Rainbow" Capture
A common issue with diffractive waveguides is that the diffraction gratings will capture light in the real world and then spread it out by wavelength like a prism, which creates a rainbow-like effect.
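The wavelength dependence falls straight out of the grating equation; with an assumed (illustrative) pitch, a stray overhead light fans out across tens of degrees, which the eye sees as a rainbow streak:

```python
import math

pitch_nm = 400.0       # assumed grating pitch, for illustration only
incidence_deg = 60.0   # e.g. an overhead light striking the lens at a steep angle

# First-order diffraction toward the eye: sin(theta_out) = sin(theta_in) - wavelength/pitch
for name, wl_nm in (("blue", 460.0), ("green", 530.0), ("red", 620.0)):
    s = math.sin(math.radians(incidence_deg)) - wl_nm / pitch_nm
    print(f"{name:5s} {wl_nm:.0f} nm -> exits at {math.degrees(math.asin(s)):+6.1f} deg")
# ~27 deg of spread between blue and red: the colors separate into a visible rainbow
```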
In the Adam Savage's Tested interview (@~5:10), Bosworth said, "The waveguide itself is nano-etched into silicon carbide, which is a novel material with a super high index of refraction, which allows us to minimize the lost photons and minimize the number of photons we capture from the world, so it minimizes things like ghosting and haze and rainbow, all these artifacts, while giving you that field of view that you want. Well, it's not artifact-free, it's very close to artifact-free." I appreciate that while Bosworth tried to give the advantages of their waveguide technology, he immediately corrected himself when he had overstated his case (unlike Hololens' Kipman, as cited in the Introduction). I would feel even better if they let some independent experts study it and give their opinions.
What Bosworth says about rainbows and other diffractive artifacts may be true, but I would like to see it evaluated by independent experts. Norm said in the same video, “It was a very on-rails demo with many guard rails. They walked me through this very evenly diffused lit room, so no bright lights.” I appreciate that Norm recognized he was getting at least a bit of a “magic show” demo (see appendix).
Strange Mix of a Wide FOV and Low Resolution
There was also little to no discussion in the reviews of Orion's very low angular resolution of only 13 pixels per degree (PPD) spread over a 70-degree FOV (a topic for my next article on Orion). This works out to roughly a 720 by 540 pixel display resolution.
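The 720 by 540 figure follows from simple arithmetic if the 70 degrees is taken as a diagonal FOV with a roughly 4:3 aspect ratio (the aspect ratio is my assumption; Meta has not published one):

```python
import math

ppd = 13.0            # reported pixels per degree
diag_fov_deg = 70.0   # reported diagonal field of view
aspect_w, aspect_h = 4, 3   # assumed aspect ratio

diag = math.hypot(aspect_w, aspect_h)              # 5 for a 4:3 aspect
h_fov = diag_fov_deg * aspect_w / diag             # ~56 deg horizontal
v_fov = diag_fov_deg * aspect_h / diag             # ~42 deg vertical
print(f"~{h_fov * ppd:.0f} x {v_fov * ppd:.0f} pixels")   # ~728 x 546, i.e. roughly 720 x 540
```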
Several people reported seeing a 26 PPD demo, but it was unclear whether this was in the glasses form factor or a lab-bench demo. Even 26 PPD is a fairly low angular resolution.
Optical versus Passthrough AR – Orion vs. Vision Pro
Meta's Orion demonstration is a declaration that optical AR (e.g., Orion), and not camera-passthrough AR such as the Apple Vision Pro, is the long-term prize device. It makes the point that no passthrough camera-and-display combination can come close to competing with the real-world view in terms of dynamic range, resolution, binocular stereo, and infinite numbers of focus depths.
As I have repeatedly pointed out in writing and presentations, optical AR prioritizes the view of the real world, while camera-passthrough AR prioritizes the virtual image view. I think there is very little overlap in their applications. I can't imagine anyone allowing someone out on a factory floor or onto the streets of a city in a future Apple Vision Pro type device, but one could imagine it with something like the Meta Orion. And I think this is the point Meta wanted to make.
Conclusions
I understand that Meta was demonstrating, in a way, “If money was not an obstacle, what could we do?” I think they were too fixated on the very wide FOV issue. I am concerned that the diffractive Silicon Carbide waveguides are not the right solution in the near or long term. They certainly can’t have a volume/consumer product with a significant “eye glow” problem.
This is a subject I have discussed many times, including in Small FOV Optical AR Discussion with Thad Starner and FOV Obsession. In some ways, Orion has the worst of both worlds, pairing a very large FOV with a relatively low-resolution display, and it blocks more of the real world for a given amount of virtual content. For the same money, I think they could have made a more impressive demo, one that didn't seem so far off in the future, without the exotic waveguide materials. I intend to get more into the human factors and display utility in this series on Meta Orion.
Appendix: “Demos Are a Magic Show”
Seeing the way Meta introduced Orion and hearing of the crafted demos they gave reminded me of one of my earliest blog articles from 2012, called Cynics Guide to CES – Glossary of Terms, which gave warnings about seeing demos.
Escaped From the Lab
Orion seems to fit the definition of an “escape from the lab.” Quoting from the 2012 article:
“Escaped from the lab” – This is the demonstration of a product concept that is highly impractical for any of a number of reasons including cost, lifetime/reliability, size, unrealistic setting (for example requires a special room that few could afford), and dangerous without skilled supervision. Sometimes demos “escape from the lab” because a company’s management has sunk a lot of money into a project and a public demo is an attempt to prove to management that the concepts will at least one day appeal to consumers.
I have used this phrase a few times over the years, including for the Hololens 2 (in Hololens 2 Video with Microvision "Easter Egg" Plus Some Hololens and Magic Leap Rumors), which was officially discontinued this month, although it has long since been seen as a failed product. I also commented (in Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings; see my Sept. 27, 2019 comment) that the Magic Leap One was "even more of a lab project."
Why make such a big deal about Orion, a prototype with a strange mix of features and impractically expensive components? Someone(s) is trying to prove that the product concept was worth continued investment.
Magic Show
I also warned that demos are “a magic show.”
A Wizard of Oz (visual) – Carefully controlling the lighting, image size, viewing location and/or visual content in order to hide what would be obvious defects. Sometimes you are seeing a “magic show” that has little relationship to real world use.
I went into further detail on this subject in my early coverage of the Hololens 2, in the section "Demos are a Magic Show and why are there no other reports of problems?":
I constantly try to remind people that "demos are a magic show." Most people get wowed by the show or by being one of the special people to try on a new device. Many in the media may be great at writing, but they are not experts at evaluating displays. The imperfections and problems go unnoticed in a well-crafted demo by someone who is not trained to "look behind the curtain."
The demo content is often picked to best show off a device and avoid content that might show flaws. For example, content that is busy with lots of visual “noise” will hide problems like image uniformity and dead pixels. Usually, the toughest test patterns are the simplest, as one will immediately be able to tell if something is wrong. I typically like patterns with a mostly white screen to check for uniformity and a mostly black screen to check for contrast, with some details in the patterns to show resolution and some large spots to check for unwanted reflections. For example, see my test patterns, which are free to download. When trying on a headset that supports a web browser, I will navigate to my test pattern page and select one of the test patterns.
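As a purely illustrative example of the kind of simple pattern described above (my own sketch, not the actual downloadable patterns from this blog), a few lines of Python with Pillow can generate a mostly-white uniformity target with fine detail for resolution and dark spots for catching reflections:

```python
from PIL import Image, ImageDraw   # pip install Pillow

W, H = 1920, 1080
img = Image.new("RGB", (W, H), "white")      # mostly white: reveals uniformity and color shift
draw = ImageDraw.Draw(img)

# Fine 1-pixel line pairs in the center to judge resolution and sharpness
for x in range(W // 2 - 50, W // 2 + 50, 2):
    draw.line([(x, H // 2 - 40), (x, H // 2 + 40)], fill="black", width=1)

# Large dark spots near the corners to expose unwanted reflections and ghosting
r = 60
for cx, cy in [(200, 200), (W - 200, 200), (200, H - 200), (W - 200, H - 200)]:
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill="black")

img.save("uniformity_test_pattern.png")      # invert the colors for a contrast/black-level check
```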
Most of the companies that are getting early devices will have a special relationship with the manufacturer. They have a vested interest in seeing that the product succeeds either for their internal program or because they hope to develop software for the device. They certainly won’t want to be seen as causing Microsoft problems. They tend to direct their negative opinions to the manufacturer, not public forums.
Only with independent testing by people with display experience using their own test content will we understand the image quality of the Hololens 2.
Here’s what I made of Snap’s new augmented-reality Spectacles
Before I get to Snap’s new Spectacles, a confession: I have a long history of putting goofy new things on my face and liking it. Back in 2011, I tried on Sony’s head-mounted 3D glasses and, apparently, enjoyed them. Sort of. At the beginning of 2013, I was enamored with a Kickstarter project I saw at CES called Oculus Rift. I then spent the better part of the year with Google’s ridiculous Glass on my face and thought it was the future. Microsoft HoloLens? Loved it. Google Cardboard? Totally normal. Apple Vision Pro? A breakthrough, baby.
Anyway. Snap announced a new version of its Spectacles today. These are AR glasses that could finally deliver on the promises that devices like Magic Leap, HoloLens, and even Google Glass made many years ago. I got to try them out a couple of weeks ago. They are pretty great! (But also: see above.)
These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS. Unlike typical VR headsets or spatial computing devices, these augmented-reality (AR) lenses don’t obscure your vision and re-create it with cameras. There is no screen covering your field of view. Instead, images appear to float and exist in three dimensions in the world around you, hovering in the air or resting on tables and floors.
Snap CTO Bobby Murphy described the intended result to MIT Technology Review as “computing overlaid on the world that enhances our experience of the people in the places that are around us, rather than isolating us or taking us out of that experience.”
In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony.
But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room.
To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. It creates a tall, deep field of view—Snap claims it is similar to a 100-inch display at 10 feet—in a relatively small, lightweight device (226 grams). What’s more, they automatically darken when you step outside, so they work well not just in your home but out in the world.
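For a sense of scale, the "100-inch display at 10 feet" comparison converts to a diagonal field of view in the mid-40-degree range (simple geometry applied to the quoted claim, not an official spec):

```python
import math

screen_diag_in = 100.0       # claimed equivalent screen size
distance_in = 10.0 * 12.0    # 10 feet in inches

diag_fov_deg = 2 * math.degrees(math.atan((screen_diag_in / 2) / distance_in))
print(f"equivalent diagonal FOV: ~{diag_fov_deg:.0f} degrees")   # ~45 degrees
```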
You control all this with a combination of voice and hand gestures, most of which came pretty naturally to me. You can pinch to select objects and drag them around, for example. The AI chatbot could respond to questions posed in natural language (“What’s that ship I see in the distance?”). Some of the interactions require a phone, but for the most part Spectacles are a standalone device.
It doesn’t come cheap. Snap isn’t selling the glasses directly to consumers but requires you to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. I was assured that the company has a very open definition of who can develop for the platform. Snap also announced a new partnership with OpenAI that takes advantage of its multimodal capabilities, which it says will help developers create experiences with real-world context about the things people see or hear (or say).
Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there—Lego bricks collapsing into each other, for example—but for the most part this was a solid little device.
It is not, however, a low-profile one. No one will mistake these for a normal pair of glasses or sunglasses. A colleague described them as beefed-up 3D glasses, which seems about right. They are not the silliest computer I have put on my face, but they didn’t exactly make me feel like a cool guy, either. Here’s a photo of me trying them out. Draw your own conclusions.
New Valve VR Game Reportedly in Development Alongside Long-rumored Standalone Headset
There's no shortage of speculation when it comes to all things Valve. However, Tyler McVicker, a YouTuber and one of the leading voices dedicated to deciphering Valve's internal developments, now reports that not only is the company's long-awaited standalone VR headset still coming, but it may arrive alongside its own Half-Life game.
Valve's much-hyped standalone, known only as 'Deckard', is "still very much in production," McVicker maintains, saying that according to his sources, Valve "still intend[s] on shipping this piece of hardware."
Check out his latest video, linked below:
While rumors swirl around the next Half-Life game, which may not be a VR-supported title (aka ‘HLX’), McVicker speculates a Half-Life game built specifically to showcase Deckard is likely in the cards, much like how Half-Life: Alyx (2020) showed off the capabilities of Valve Index.
Echoing a previous rumor first reported in 2020, McVicker renews speculation that two Half-Life games could be in development, making for what could be an asymmetric co-op game across PC and Deckard.
The result would be “an asymmetric multiplayer game taking place in the Half-Life universe,” McVicker says, “where one player is in VR and the other on a computer. The computer player would always be Gordon Freeman, while the VR player would be Alyx Vance. The idea was that these two characters would interact, with the VR player experiencing Alyx’s story and the PC player experiencing Gordon’s story, both having cooperative elements between them.”
While that specific claim is still very much a rumor, McVicker does a lot of sleuthing when it comes to code published by Valve across its various first-party titles and services, which can hold some clues as to what’s coming down the pipeline. He admits he’s “nowhere near done” sifting through all code published by Valve in 2024 however, so we may learn more at some point later this year.
UbiSim, a Labster company
UbiSim is the first immersive virtual reality (VR) training platform built specifically for nurses. It is a complete simulation lab that provides nursing trainees with virtual access to a variety of clinical situations and diverse patients in a broad continuum of realistic care settings, helping institutions to overcome limited access to hospitals and other clinical sites for nursing students.
This cool tool allows institutions to create repeatable, real-life scenarios that provide engaging, standardized multi-learner experiences using VR headsets. It combines intuitive interactions, flexible modules, and immediate feedback. These contribute to developing clinical judgment, critical thinking, team interaction, clear communication, and patient engagement skills that enhance safe clinical practice and are essential to improving Next Generation NCLEX test scores.
UbiSim reduces the burden of purchasing and maintaining expensive simulation lab equipment, allowing nursing programs to scale and standardize their simulation activities. Faculty choose from 50-plus existing training scenarios created in collaboration with nursing educators and simulation experts. Educators may also customize content or create original scenarios to fit learning objectives.
Founded in 2016, UbiSim has been a Labster company since 2021. The UbiSim customer roster has grown by 117% since Fall 2022, extending its footprint at universities, community colleges, technical colleges, and medical centers across 9 countries and 21 American states. UbiSim now partners with 100-plus nursing institutions in North America and Europe to advance the shared mission of addressing the nursing shortage by reducing the cost, time, and logistical challenges of traditional simulation methods and scaling high-quality nursing education. For these reasons and more, UbiSim, a Labster company, is a Cool Tool Award Winner for "Best Virtual Reality / Augmented Reality (AR/VR) Solution" as part of The EdTech Awards 2024 from EdTech Digest. Learn more.
Spiders Are So Scary in VR That ‘Dungeons of Eternity’ Added an Option to Censor Them
Dungeons of Eternity (2023) is chock-full of skeletons, zombies, slimes and yes… spiders. But now developer Othergate has added an ‘Arachnophobia Mode’ that lets you take on the VR hack-n-slash adventure on Quest without fear of eight-legged creepy crawlies.
The recent update is a game of give and take, it seems. On one hand, the new Arachnophobia Mode replaces the spider monsters with a new enemy type—accessible by enabling the mode in Settings > Visuals > Enable ‘Arachnophobia Mode’.
On the other, the studio is also including a new monster to “balance things out”: Kamikaze exploding spiders.
The update also comes with a host of new features and bug fixes, according to the patch notes:
- Tier 3 Epic Chest Improvements. Hint: They can sometimes be dangerous but rewarding!
- Improved Crossbow head tilt/aiming assist mechanic
- Upgraded voice chat SDK and improved reliability of voice chat
- Kick feature allows you to kick the same person repeatedly if they rejoin
- Increased the AOE damage on bombs again (almost 2X)
The Arachnophobia Mode release follows the game's most recent 'Longsword and Traps' update in May, which brought two-handed longswords, a host of new traps, and 25+ new chambers to the game.
The game's roadmap also maintains we'll be getting a few more things later this year, including a new quest mode, single-player DLC, and of course more monsters, bosses, and weapons. You can find it on the Horizon Store for Quest, priced at $30.
‘The New Salsa Academy’ Teaches You All The Right Moves, Now Available on Quest
Taking a dance course can be intimidating, not to mention time consuming—but it doesn’t have to be. At least not when you can do it in VR (and MR).
Led by instructors Rodrigo Cortazar and Asya Sonina, The New Salsa Academy launched recently, guiding you through each step of an entire beginner salsa course.
Exclusively available on Quest, The New Salsa Academy comes with a few unique features to get you up and salsa-ing, making for a much more immersive experience than simply following dance tutorials on YouTube.
Boasting an AI-powered virtual dance partner that follows you as you dance, the app is said to analyze your dance performance, adapting the exercises to your skill level. You'll need to master timing, accuracy, and connection to your partner to get the best grade, whether you're learning to follow or lead.
While you can dance in the virtual studio, the app also includes a mixed reality mode, letting you practice your moves at home with your virtual partner. You can find The New Salsa Academy on Quest 2/3/Pro on the Horizon Store, priced at $20.
You may recognize The New Salsa Academy developers Dance Reality from their eponymous mobile AR app for Android and iOS, which teaches you to dance by following animated footprints and a virtual dance instructor.
VR Port Studio Joins Andreessen Horowitz Accelerator, Earning Investment and Validation
Flat2VR Studios has been accepted into the SPEEDRUN accelerator, not only giving the studio a financial boost on its mission to bring VR support to non-VR games, but a good measure of validation too.
Hosted by A16Z Games, the games-focused investment arm of Andreessen Horowitz, Speedrun is an early-stage accelerator for startups that includes $750,000 in funding as well as "a highly curated set of industry coaches, mentors, and a community of ambitious founders," the VC firm says.
In a post on X, Flat2VR says the accelerator “should help open some doors to porting more of those absolute dream titles officially into VR!”
We’re so honored to have been accepted into the @a16z speed run program which should help open some doors to porting more of those absolute dream titles officially into VR! https://t.co/s5NPJQfy88
Can’t wait to give you a little sneak peek at what we’ve been up to in the… pic.twitter.com/U8i3RwI2sK
— Flat2VR Studios Gamescom (@Flat2VRStudios) August 1, 2024
Flat2VR Studios co-founder Elliot Tate reveals that only 30 of around 4,000 applicants were chosen for the accelerator, putting the studio in the company of Speedrun veterans such as Oculus, Gym Class VR, and Sandbox VR.
Earlier this year, VR publishing and marketing firm Impact Reality founded Flat2VR Studios with the aim of engaging leading developers from the VR modding scene to create officially licensed VR adaptations of popular flatscreen games.
While traditionally the work of hobbyists and distributed modding groups, Flat2VR Studios works directly with developers to create official VR versions of their titles. Among their ranks the studio counts VR porting veterans ‘Cabalistic’ and ‘Raicuparta’, renowned for their VR adaptations of games like Half-Life 2 and Outer Wilds respectively.
The studio is currently working on an official VR port of a still-undisclosed game, which is slated to release in late 2024 or early 2025 on major VR platforms. We're looking forward to learning more at the VR Games Showcase on August 15th.
25 Free Games & Apps Quest 3S Owners Should Download First
Not ready to plonk down your first $100 on Quest games? Thankfully there’s an impressive number of free games, experiences, apps, and social VR platforms to keep you playing before you’re paying.
Looking to make your Quest 3S gaming experience even better? Don’t miss our top picks for the most essential Quest 3 accessories. The new hotness supports all of the same Quest 3 accessories, save the facial interfaces, which are unique to each headset.
Free Quest Games
Yeeps: Hide and Seek
As a Yeep, your belly is full of stuffing used to craft anything from pillows for building to bombs for destruction. Pull items from your vast imagination and toss them into the world. The game’s intuitive block-based building makes it easy to express your creativity at any skill level.
- Developer: Trass Games
- Store link
Gorilla Tag
Like your primitive ancestors, Gorilla Tag will have you lumbering around a tree-lined arena using its unique ‘grab-the-world’ locomotion style that lets you amble around like a great ape. Chase the other apes and infect them or climb for your life as the infected chase you. Pure and simple. Make sure you’re far from TVs, furniture, babies, and pets because you will punch something in the mad dash for sweet, low-poly freedom.
- Developer: Another Axiom
- Store link
Maestro: The Masterclass
Step onto the podium and become a true orchestra conductor in Maestro: The Masterclass. Play hands-free or grab a chopstick and master the real hand motions that command the orchestra through an off-the-rails conducting masterclass that culminates in an epic symphonic concert in a packed opera house. Good luck, Maestro!
- Developer: Double Jack
- Store link
Noclip VR
Riding off the cult-like status of 'The Backrooms' Internet lore, Noclip VR lets you and online players explore liminal spaces, solve puzzles, and escape that which lurks within. To move, you'll need to swing your arms, and always keep within earshot of your friends, otherwise they won't hear you scream. Gameplay is a bit barebones, making it feel more like something you'd find imported on a social platform like VRChat or Rec Room, although it's definitely invoking Gorilla Tag vibes.
- Developer: SuperFPS
- Store link
Population: One
Population: One is basically VR's most successful battle royale, letting you climb, fly, shoot, and team up with whoever dares. The free-to-play game does feature microtransactions, but only for cosmetics, which is nice. It's more than just a battle royale though: you can play in the sandbox with custom maps and rules, team deathmatch with customizable loadouts, a 12v12 war mode, and more.
- Developer: BigBox VR
- Store link
Gun Raiders
Gun Raiders serves up a healthy slice of multiplayer shooter action with multiple game modes that let you jetpack through the air, climb from wall to wall, and shoot down the competition. There are the same sort of microtransactions you see in bigger games, but they're all avatar skin stuff, so no pay-to-win here.
- Developer: Gun Raiders Entertainment Inc.
- Store link
Gym Class – Basketball
Gym Class – Basketball is the solution if you're looking to shoot some hoops and dunk like you probably can't on a physical court. Online multiplayer lets you go head-to-head for a pretty convincing game of b-ball thanks to the game's physics-based gameplay and full-body kinematics.
- Developer: IRL Studios
- Store link
Blaston
This room-scale shooter is now free-to-play, letting you take on friends, family and foes in head-to-head 1v1 dueling. Refine your loadout and jump into the action as you scramble for weapons and send a volley of hellfire at your enemies, all the while Matrix dodging through this innovative bullet hell meets futuristic dueling game.
- Developer: Resolution Games
- Store link
Hyper Dash
Hyper Dash is a multiplayer shooter that basically fills in where Echo Combat never could (never mind that Echo Combat was never on Quest, and is now entirely defunct on Oculus PC). Letting you quick dash, sprint, and rail grind around, Hyper Dash manages to serve up an impressive number of modes, including Payload, Domination, Control Point, (Team) Deathmatch, Capture The Flag, and Elimination. You can also take on both Quest and SteamVR users thanks to the inclusion of cross-play.
- Developer: Triangle Factory
- Store link
Ultimechs
Ultimechs should look pretty familiar: it’s basically Rocket League, but instead of driving around in cars, you’re given rocket-powered fists to punch balls into the goal. Online multiplayer includes both 1v1 and 2v2 matches, offering up tons of opportunities to earn cosmetic gear that will let you outfit your battle mech into something unique. There are also now two paid battle passes too, offering up a ton of cosmetics to set you apart from the competition.
- Developer: Resolution Games
- Store link
FRENZIES (early access)
Fans of arena shooters, get ready to battle in this lucky dip of game modes, including all of your favorite modes and a few new ones too, like Red Light, Green Light and Glitter Pig. Now in early access, the stylish, neon-soaked free-to-play team shooter packs in some serious style.
- Developer: Near Light (nDreams)
- Store link
Cards & Tankards
Cards & Tankards is a pretty addictive social collectible card game, letting you collect over 180 cards and battle friends. With cross-play against SteamVR headsets (it's also free on PC), you may well find yourself hosting your regular game night there, playing more than a few rounds in the game's characteristic medieval fantasy tavern.
- Developer: Divergent Realities
- Store link
Vegas Infinite
No real-cash gambling here, but PokerStars' Vegas Infinite not only lets you go all-in on games of Texas Hold'em, but now offers a full casino's worth of table games and machines that are sure to light up the dopamine-starved pleasure centers of your brain. It's all free play, so you won't be risking real cash unless you buy in-game chips, which cannot be turned back into real money: they're only there to keep your bankroll flush for free play.
- Developer: LuckyVR Inc
- Store link
Bait!
Since the Fishin’ Buddies update, this classic VR title has gotten a whole new lease on life as a multiplayer VR fishing game that lets you sit back and crack a cold one with the boys as you reel in the big’uns. The additional social areas also let you sit back between your fishing adventures to take part in casual mini-games.
- Developer: Resolution Games
- Store link
Gods of Gravity
Gods of Gravity is an arcade-style RTS game where you compete in an epic showdown between celestial gods (2-8 players). Scoop up ships and fling them to capture a nearby planet, or open wormholes to teleport them across the solar system. Hold planets and moons to boost your production. Mine asteroids for the powerful resources within. And if you dare, capture the sun for the ultimate buff. Then send a massive fleet to conquer your enemy's home planet. Last god standing wins.
- Developer: Trass Games
- Store link
Social VR Platforms
Rec Room
Without a doubt one of the most fun, and most expansive, VR titles out there… and it's free. Sure, you can pay real cash for in-game tokens to buy spiffy clothes for your avatar, but that's really up to you. Gobs of mini-games await you, from first-party creations such as the ever-so-popular co-op Quests, which could be games in their own right, to user-created stuff that will keep your pocketbook gathering dust. It's social VR, so meet people and have a ball for zero dollarydoos. Fair warning: there's a ton of kids.
- Developer: Rec Room
- Store link
VRChat
If you've been anywhere near the Internet in the last few years, it's likely you've already heard about VRChat, the user-generated social VR space filled with… well… everything you can imagine, including recreations of games like Among Us and Mario Kart, and even a version of Beat Saber. Fashion your own avatar or download one of the millions of user-generated avatars out there so you can embody SpongeBob, Kirito from Sword Art Online, or any one of the million anime girl avatars you're bound to see there.
- Developer: VRChat
- Store link
Horizon Worlds
Horizon Worlds has changed a lot since launch. It now includes more tools, user-generated content, and some more compelling first-party games, which have rounded things out to make it more competitive with Rec Room and VRChat. You may want to check in just to see the state of Meta's first-party social VR platform. Whatever the case, the price of 'free' is hard to argue with.
- Developer: Meta
- Store link
‘Alien: Rogue Incursion’ Shows off Stealth Action in First Gameplay Trailer, Coming Holiday 2024
Alien: Rogue Incursion just got its first gameplay trailer, showing off its first real look at the game’s Xenomorph enemies, weapons, setting and more.
When it was revealed back in April, veteran VR studio Survios said the upcoming action-horror game was set to include an "all-new storyline full of heart-pounding action, exploration, and terrifying Xenomorphs."
Now the studio has released a first look at gameplay centered on protagonist Zula Hendricks, a fearless ex-Colonial Marine turned ultimate Xenomorph hunter. The studio reveals that Hendricks is on her way to the uncharted planet Purdan, accompanied by her sentient AI companion, Davis-01.
“Zula must fight her way to the heart of the infested Gemini Exoplanet Solutions black-site facility. There she will need to survive deadly attacks from the most cunning Xenomorphs ever encountered and discover new horrors and threats that once unleashed could spell the end for humankind,” Survios says.
We also get a look at the motion-tracking radar and a number of weapons, including the series’ iconic pulse rifle, revolver, and pump shotgun.
Alien: Rogue Incursion is releasing sometime around Holiday 2024, coming to PSVR 2, Meta Quest 3, and PC VR. Notably, the studio says its Meta release will only include Quest 3, but not Quest 2 or Quest Pro—making it one of the first big titles to drop the older Quest headsets.
Hands-on: ‘Attack on Titan VR’ Could Be a Diamond in the Rough — Emphasis on Rough
Attack on Titan VR: Unbreakable is here, officially bringing the hit anime to VR for the first time, albeit in early access. We got an eye-full last month when developer UNIVRS released its first trailer, which admittedly looked pretty rough. While that’s still true for the game in its current state, it actually packs in some fun mechanics, leaving me holding out hope for the AoT VR game that it might become.
In its current state, Attack on Titan VR: Unbreakable feels very much like a tech demo, offering up a single mission (aka 'chapter'), a few unlockable blades, and only a few bits of story to chew on out of the gate. That amounts to about 30 minutes of content, which you can replay as much as you want if you're looking to move up the scoreboard and unlock more weapons.
Essentially, you'll get a quick PowerPoint-style recap of the anime's premise at the beginning, and then you're launched right into the tutorial, which is segmented into discrete mini-missions: i.e. do the thing, fade to black, rinse and repeat until you make it to the first and only mission in the game at present. I'd expect a less disjointed tutorial in the future, but hey, this is early access we're talking about.
There, you'll learn how to fight Titans: you can slice their limbs, although they regrow after a period, so you'll need to cut the Titans down for good by slicing the backs of their necks. To do this, you'll need to lock on and retract your omni-directional mobility gear, which works similarly to the grappling hooks from the Windlands series.
How the game differentiates itself from mission to mission is going to be a big factor in whether its most fun bit, swinging around the walled city filled with the series' iconic red-roofed buildings, really has staying power and doesn't just devolve into a bunch of samey swinging and slicing. It will also need to tighten up Titan interactions, as AI pathfinding feels very blocky and artificial, and you can usually clip through Titans upon death, which ruins a bit of the first few 'wow' moments when bringing them down for the first time. They could also benefit from a visual overhaul, although I can see what the studio is going for in terms of keeping things grounded in the anime's visual style.
That said, swinging around using the omni-directional mobility gear and using the blades are undeniable high points, as you lock onto the neck of a Titan, and make big and flashy cuts, red indicating you’re using your full strength.
Despite some pretty frenetic movement, it's also a really comfortable experience thanks to the constant visual effects that surround you as you fly through the air, the sort of speed lines you regularly see in manga.
There are a few other clear wins here too. The game incorporates diegetic UI as much as it can, giving you pen and paper to start chapters from your mission log, certainly more interesting than using a laser pointer on a 2D monitor. To start the mission, you even need to leave your John Hancock, which feels like an immersive touch.
The team has its work cut out for it. Visuals feel middling, if not downright ugly at points, as the trailer suggests, and the game seems to be suffering from stability issues. It also still needs to add its two-player co-op mode, which is slated to launch with the 1.0 release later this year.
Still, it's too early to tell whether Attack on Titan VR: Unbreakable is going to be the sort of VR game you and your AoT-loving friend definitely need to play. There are still a lot of questions about level and enemy variety, and how much of a part the story will play.
The big question, though, is whether it's worth the $5 entry price for early access. AoT superfans will probably want to jump in no matter the state of the game. I'm a casual enjoyer of the series, and I'd personally wait for successive chapters to be released to see where the game is actually going first. Still, that $5 entry fee feels like an honest price for what VR veteran developer UNIVRS is planning.
If you're curious to see for yourself, you can nab Attack on Titan VR: Unbreakable in early access right now for Quest 2/3/Pro.
AWE 2024 VR – Hypervision, Sony XR, Big Screen, Apple, Meta, & LightPolymers
Introduction
Based on information gathered at SID Display Week and AWE, I have many articles to write based on the thousands of pictures I took and things I learned. I have been organizing and editing the pictures.
As its name implies, Display Week is primarily about display devices. My major takeaway from that conference is that many companies are working on full-color MicroLEDs with different approaches, including quantum-dot color conversion, stacked layers, and single emitters with color shifting based on current or voltage.
AWE moved venues from the Santa Clara Convention Center in Silicon Valley to the larger Long Beach Convention Center south of LA. More than just a venue shift, I sensed a shift in direction. Historically at AWE, I have seen many optical see-through AR/MR headsets, but there seemed to be fewer optical headsets this year. Instead, I saw many companies with software running on VR/passthrough-AR headsets, primarily on the Meta Quest 3 (MQ3) and Apple Vision Pro (AVP).
This article was partly inspired by Hypervision's white paper discussing whether micro-OLEDs or small LCDs are the best path to 60 pixels per degree (PPD) with a wide FOV, combined with the pictures I captured through Hypervision's HO140 (140° diagonal FOV per eye) optics at AWE 2024. I have taken thousands of pictures through various headsets, and the Hypervision picture stood out in terms of FOV and sharpness. I have followed Hypervision since 2021 (see Appendix: More on Hypervision).
I took my first pictures at AWE through the Sony XR (SXR) headset optics. At least subjectively, in a short demo, the SXR's image quality (sharpness and contrast) seemed higher than that of the AVP, but the FOV was smaller. I had on hand thousands of pictures I had taken through the Big Screen Beyond (BSB), AVP, Meta Quest Pro (MQP), and Meta Quest 3 (MQ3) optics with the same camera and lens, plus a few of the Hypervision HO140 prototype. So, I decided to make some comparisons between the various headsets.
I also want to mention LightPolymers' new quarter waveplate (QWP) and polarization technologies, which I first learned about from a poster in the Hypervision AWE booth. In April 2024, the two companies announced a joint development grant. LightPolymers offers an alternative to the plastic-film QWPs and polarizers, a market 3M dominates today.
Hypervision’s HO140 Display
Based on my history of seeing Hypervision's 240° prototypes over the last three years, I had, until AWE 2024, largely overlooked their single-display 140° models. I had my Canon R5 (45MP, with a 405MP "3×3 sensor pixel shift" mode) and tripod with me at AWE this year, so I took a few high-resolution pictures through the optics of the HO140. Below are pictures of the 240° (left) and 140° (right) prototypes in the Hypervision booth. Hypervision is an optics company and not a headset maker, and the demos are meant to show off their optics.
When I got home and looked at the pictures through the HO140, I was impressed by its overall image quality, having taken thousands of pictures through the Apple Vision Pro (with micro-OLED displays), Meta's Quest Pro and Quest 3 (both with mini-LCD displays), and the Big Screen Beyond. It usually takes me considerable time and effort, as well as multiple reshoots, to find the "sweet spot" for the other devices, but I got good pictures through the HO140 with minimal effort and only a few shots, which suggests a very large sweet spot in Hypervision's optical design. The HO140 is a prototype of unknown cost that I am comparing to production products, and I only have this one image to go by and not a test pattern.
The picture below is from my Canon R5 with a 16mm lens, netting a FOV of 97.6° horizontal by 73.7° vertical. It was shot at 405MP and then reduced to 45MP to avoid moiré effects due to the "beat frequencies" between the camera sensor and the display devices with their color subpixels. All VR optics have pincushion distortion, which causes the pixel sizes to vary across the display and increases the chance of getting moiré in some regions.
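For those who want to check the lens numbers, here is a minimal sketch of the rectilinear-lens arithmetic, assuming an ideal 16mm lens on a full-frame 36×24mm sensor (the small difference from the 97.6° horizontal figure likely comes from the exact lens geometry):

```python
import math

def rectilinear_fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Full-angle field of view of an ideal rectilinear lens along one sensor dimension."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

# Assumed full-frame sensor (36mm x 24mm) on the Canon R5 with a 16mm prime lens.
print(rectilinear_fov_deg(36, 16))  # ~96.7 deg horizontal (the text cites 97.6)
print(rectilinear_fov_deg(24, 16))  # ~73.7 deg vertical, matching the figure above
```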
The level of sharpness throughout the HO140’s image relative to other VR headsets suggests that it could support a higher-resolution LCD panel with a smaller pixel size if it existed. Some significant chroma aberrations are visible in the outer parts of the image, but these could be largely corrected in software.
Compared to other VR-type headsets I have photographed, I was impressed by how far out into the periphery of the FOV the image maintains sharpness while supporting a significantly larger FOV than any other device I have photographed. What I can’t tell without being able to run other content, such as test patterns, is the contrast of the display and optics combination.
I suggest also reading Hypervision's other white papers on their Technology & Research page. Also, if you want an excellent explanation of pancake optics, I recommend the one-hour-and-25-minute YouTube presentation by Arthur Rabner, CTO of Hypervision.
Sony XR (SXR)
Mechanical Ergonomics
AWE was my first time trying the new Sony XR (SXR) headset. In my CES 2024 coverage, I wrote about the ergonomic features I liked in Sony XR (and others compared to Apple Vision Pro). In particular, I liked the headband approach with the flip-up display, and my brief try with the Sony headset at AWE seemed to confirm the benefits of this design choice (which is very similar to the Lynx R1 headset), at least from the ergonomics perspective relative to the Apple Vision Pro.
Still, the SXR is pretty big and bulky, much more so than the AVP or Lynx. Having only had a short demo, I can't say how comfortable it will be in extended use. As was the case for the HO140, I couldn't control the content.
“Enterprise” Product
Sony has been saying that this headset is primarily aimed at "enterprise" (= expensive high-end) applications, and they are partnering with Siemens. It is much more practical than the Apple Vision Pro (AVP). The support on the head is better; it supports users wearing their glasses, and the display/visor flips up so you can see the real world directly. There is air circulation to the face and eyes. The headset also supports adjusting the distance from the headset to the eyes. The headset allows peripheral vision but does have a light shield for full VR operation. The headset is also supposed to support video passthrough, but that capability was not demonstrated. As noted in my CES article, the SXR headset puts the passthrough cameras in a much better position than the AVP.
Display Devices and Image Quality
Both the AVP and SXR use ~4K micro-OLED display devices. While Sony does the OLED assembly (applying the OLED and packaging) for both its headset and the AVP's display devices, the AVP reportedly uses a custom silicon backplane designed by Apple. The SXR's display has smaller 6.3-micron pixels versus the AVP's 7.5-micron pixels (about 16% smaller linearly), and the device itself is also smaller. These size factors favor higher angular resolution and a smaller FOV, as is seen with the SXR.
The picture below was taken (handheld) with my 45MP Canon R5 with a 16mm lens, as with the HO140, but because I couldn't use a tripod, I couldn't get a 405MP picture with the camera's sensor shifting. I was impressed that I got relatively good images handheld, which suggests the optics have a much larger sweet spot than the AVP's, for example. To get good images with the AVP requires my camera lens to be precisely aligned into the relatively small sweet spot of the AVP's optics (using a 6-degree-of-freedom camera rig on a tripod). I believe the Apple Vision Pro's small sweet spot, and the need for eye-tracking-based lens correction (not just for foveated rendering), are part of why the AVP has to be uncomfortably clamped against the user's face.
Given that I was hand-holding both the headset and camera, I was rather surprised that the pictures came out so well (click on the image to see it in higher, 45mp resolution).
At least in my brief demo, the SXR's optics' image quality seems better than the AVP's. The images seem sharper, with fewer chroma (color) aberrations. The AVP seems heavily dependent on eye tracking to correct problems with the optics, and it does not always succeed.
Much more eye relief (enabling eyeglasses) but lower FOV
I was surprised by how much eye relief the SXR optics afforded compared to the AVP and BSB, which also use Micro-OLED microdisplays. Typically, the requirement for high magnification of the micro-OLED pixels compared to LCD pixels inherently makes eye relief more difficult. The SXR magnifies less, resulting in a smaller FOV, but also makes it easier optically for them to support more eye relief. But note, taking advantage of the greater eye relief will further reduce the FOV. The SXR headset has a smaller FOV than any other VR-type headset I have tried recently.
Novel Sony controllers were not a hit
While I will credit Sony for trying something new with the controllers, I didn't think the finger trackpad and ring controller were great solutions. I talked with several people who tried them, and no one seemed to like either controller. It is hard to judge control devices in a short demo; you must work with them for a while. Still, they didn't make a good first impression.
VR Headset “Shootout” between AVP, MQP, Big Screen Beyond, Hypervision, and Sony XR
I have been shooting VR headsets with the Canon R5 with a 16mm lens for some time and have built up a large library of pictures. For the AVP, Big Screen Beyond (BSB), and Meta Quest Pro (MQP), I had both the headset and the camera locked down on tripods so I could center the lens in the sweet spot of the optics. For the Hypervision, while the camera and headset were on tripods, the camera was only on a travel tripod, without my 6-degree-of-freedom rig or the time to precisely locate the headset's optical sweet spot. The SXR picture was taken with me hand-holding both the headset and the camera.
Below are through-the-optics pictures of the AVP, BSB, MQP, Hypervision HO140, and SXR headsets, all taken with the same camera and lens combination and scaled identically. This is not a perfect comparison as the camera lens does not work identically to the eye (which also rotates), but it is reasonably close. The physically shorter and simpler 16mm prime (non-zoom) lens lets it get inside the eye box of the various headsets for the FOV it can capture.
FOV Comparison (AVP, SXR, BSB, HO140, MQ3/MQP)
While companies will talk about the number of horizontal and vertical pixels of the display device, the peripheral pixels of the display are cut off by the optics, which tend to be circular. All VR headset optics have pincushion distortion, which results in higher resolution at the sweet spot (optical center); for VR headsets, that sweet spot is always toward the nose side and usually above center.
In the figure below, I have overlaid the FOV of the left eye for each headset on top of the HO140 picture. I had to extrapolate somewhat on the image circles at the top and bottom, as the headset FOVs exceeded the extent of the camera's FOV. The HO140 supports up to a 2.9″ diagonal LCD (which does not exist yet), but it currently uses a 2.56″ 2160×2160 octagonal BOE LCD; its FOV is so far beyond that of my camera lens that I used Hypervision's specifications for the overlay.
As can be seen, the LCD-based headsets of Hypervision and Meta typically have larger FOV than the micro-OLED-based headsets of AVP, Meta, and Sony. However, as will be discussed, the micro-OLED-based headsets have smaller pixels (angularly and on the physical display device).
Center Pixels (Angular Size in PPD)
Due to handholding the SXR, which has pixels smaller than the AVP's, I couldn't get a super-high-resolution (405MP) image from the center of the FOV and didn't have the time to use a longer focal-length lens to show the pixel boundaries. The SXR has roughly the same number of pixels as the AVP but a smaller FOV, so its pixels are angularly smaller than the AVP's; I would expect the SXR to be near 60 pixels per degree (PPD) in the center of the FOV. The BSB has about the same FOV as the AVP but a ~2.5K micro-OLED compared to the AVP's ~4K; thus, the BSB's center pixels are about 1.5x bigger (linearly). Hypervision's display has a slightly smaller center pixel pitch than the MQP (and MQ3) but with a massively bigger FOV.
The MQP (and the very similar MQ3) rotate the display device. To make it easier to compare pixel pitches, I included a rotated inset of the MQP pixels to match the alignment of the other devices. Note that the pictures below are all "through the optics" and thus include the headset's optical magnification. I have indicated the angular resolution (in pixels per degree, PPD) for each headset's center pixels. For the center-pixel pictures below, I used a 28mm lens to get more magnification and see sub-pixel detail for the AVP, BSB, and MQP. I only took 16mm-lens pictures of the HO140 and, therefore, rescaled that image based on the different focal lengths of the lenses.
The micro-OLED-based headsets require significantly more optical magnification than the LCD models. For example, the AVP has 3.2x (linearly) smaller display-device pixels than the MQP, but after the optics, the pixels are only ~1.82x smaller angularly; in other words, the AVP magnifies the display by ~1.76x more than the MQP.
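A minimal sketch of that arithmetic, using pixel pitches from the panel list later in this article (the 1.82x angular ratio is taken from the photos as stated above, not computed from first principles):

```python
# Checking the magnification arithmetic: angular pixel size ~ device pixel pitch x optical magnification.
mqp_pitch_um = 24        # Meta Quest Pro LCD (from the panel list below)
avp_pitch_um = 7.5       # Apple Vision Pro micro-OLED
device_ratio = mqp_pitch_um / avp_pitch_um   # ~3.2x smaller device pixels on the AVP

angular_ratio = 1.82     # AVP center pixels are ~1.82x smaller in angle (from the photos)
relative_magnification = device_ratio / angular_ratio
print(device_ratio, relative_magnification)  # ~3.2 and ~1.76 -> the AVP magnifies ~1.76x more

# Similarly, at roughly the same FOV, BSB (~2560 px across) vs AVP (~3660 px across):
print(3660 / 2560)       # ~1.43x larger BSB center pixels (the text rounds to ~1.5x)
```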
Outer Pixels
I captured pixels from approximately the same distance from the optical center of the lens for each headset. The AVP's "foveated rendering" makes it look worse than it is, but you can still see the pixel grid, as with the others. Of the micro-OLED headsets, the BSB and SXR seem to do the best regarding sharpness in the periphery. The Hypervision HO140 pixels seem much less distorted and blurry than any of the other headsets, including the MQP and MQ3, which have much smaller FOVs.
Micro-OLED vs. Mini-LCD Challenges
Micro-OLEDs are made by applying OLEDs on top of a CMOS substrate. CMOS transistors provide a high current per unit area, and all the transistors and circuitry sit underneath the OLED pixels, so they don't block light. These factors enable relatively small pixels of 6.3 to 10 microns. However, CMOS substrates are much more expensive per unit area, and modern semiconductor fabs limit CMOS devices to about a 1.4-inch diagonal (ignoring expensive and low-yielding "reticle-stitched" devices).
A basic issue with OLEDs is that the display device must provide the power/current to drive each OLED. In the case of LCDs, only a small amount of capacitance has to be driven to change the pixel, after which there is virtually no current. The table on the right (which I discussed in 2017) shows the transistor mobility and the process requirements for the transistors of various display backplanes. The current needed by an emissive display device like OLEDs or LEDs requires crystalline silicon (e.g., CMOS) or much larger thin-film transistors on glass. There are also issues with the size and resistivity of the wires used to provide the current, as well as heat.
The OLED’s requirement for significant current/power limits how small the pixels can get on a given substrate/technology. Thin-film transistors have to be physically big to supply the current. For example, the Apple Watch Ultra Thin Film transistor OLED display has 326 PPI (~78 microns), which is more than 10x larger linearly (100x the area) than the Apple Vision Pro’s pixel, even though both are “OLEDs.”
Another issue caused by trying to support large FOVs with small devices is that the higher magnification reduces eye relief. Most of the “magnification” comes from moving the device closer to the eye. Thus, LCD headsets tend to have more eye relief. Sony’s XR headset is an exception because it has enough eye relief for glasses but does so with a smaller FOV than the other headsets.
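A crude geometric illustration of why a smaller panel leaves less room for eye relief follows; the 100° FOV is an assumed, illustrative number, and real pancake optics fold the light path, so these distances are not actual eye-relief figures:

```python
import math

def panel_view_distance_mm(panel_diag_in: float, fov_deg: float) -> float:
    """Distance at which a flat panel of the given diagonal subtends the given
    (diagonal) FOV. A crude geometric stand-in that ignores the folded pancake path."""
    half_diag_mm = panel_diag_in * 25.4 / 2
    return half_diag_mm / math.tan(math.radians(fov_deg / 2))

# Same illustrative 100-degree diagonal FOV for both panel sizes (assumed number):
print(panel_view_distance_mm(1.4, 100))   # ~14.9 mm for a 1.4" micro-OLED
print(panel_view_distance_mm(2.56, 100))  # ~27.3 mm for a 2.56" LCD
```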
Small LCDs used in VR displays have different challenges. They are made on glass substrates, and the transistors and circuitry must be larger. Because they are transmissive, this circuitry in the periphery of each pixel blocks light and causes more of a screen door effect. The cost per unit area is much lower than that of CMOS, and LCD devices can be much larger. Thus, less aggressive optical magnification is required for the same FOV with LCDs.
LCDs face a major challenge in making the pixels smaller to support higher resolution. As the pixels get smaller, the size of the circuitry relative to the pixel size becomes bigger, blocking more light and causing a worse screen door effect. To make the pixels smaller, they must develop higher-performance thin-film transistors and lower-resistance interconnects to keep from blocking too much light. This subject is discussed in an Innolux Research Paper published by SPIE in October 2023 (free to download). Innolux discusses how to go from today's typical "small" LCD pixel of 1200 ppi (=~21 microns) to their research device with 2117 ppi (=~12 microns) to achieve a 3840 x 3840 (4K by 4K) display in a 2.56″ diagonal device. Hypervision's HO140 white paper discusses Innolux's 2022 research prototype with the same pixel size but with 3240×3240 pixels and a 2.27-inch panel, as well as the current prototype. The current HO140 uses a BOE 2.56″ 2160×2160 panel with 21-micron pixels, as the Innolux panel is not commercially available.
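As a quick sanity check on those panel figures (assuming square pixels and ignoring the cut corners of octagonal panels):

```python
import math

def panel_diagonal_in(h_px: int, v_px: int, pitch_um: float) -> float:
    """Panel diagonal in inches from the pixel counts and pixel pitch."""
    return math.hypot(h_px, v_px) * pitch_um / 1000 / 25.4

print(panel_diagonal_in(3840, 3840, 12.0))          # ~2.57" -> Innolux 4K-by-4K research panel
print(panel_diagonal_in(2160, 2160, 25400 / 1192))  # ~2.56" -> BOE panel used in the HO140
```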
Some micro-OLED and small LCD displays for VR
YouTuber Brad Lynch of SadlyItsBradley, in an X post, listed the PPI of some common VR headset display devices. I have added more entries and the pixel pitch in microns. Many VR panels are not rectangular and may have cut corners on the bottom (and top); the size of the panels given in inches is for the longest diagonal. As you can see, Innolux's prototypes have significantly smaller pixels (almost 2x smaller linearly) than the VR LCDs in volume production today:
- Vive: 3.6″, 1080p, ~360 PPI (70 microns)
- Rift S*: 5.5″, 1280P, ~530 PPI (48 microns)
- Valve Index: 3.5″, 1440p, ~600 PPI (42 microns)
- Quest 2*: 5.5″, 1900p, ~750 PPI (34 microns)
- Quest 3: ~2.55″ 2064 × 2208, 1050 PPI (24 microns) – Pancake Optics
- Quest Pro: 2.5″, 1832×1920, ~1050 PPI (24 microns) – Might be BOE 2.48″ miniLED LCD
- Varjo Aero: 3.2″, 2880p, ~1200 PPI (21 microns)
- Pico 4: 2.5″, 2160p, 1192 PPI (21 microns)
- BOE 2.56″ LCD, 2160×2160, 1192 PPI (21 microns) – Used in Hypervision HO140 at AWE 2024
- Innolux 2023 Prototype 2.56″, 3840×3840, 2117 ppi (12 microns) -Research prototype
- Apple Vision Pro 1.4″ Micro-OLED, 3,660×3,200, 3386 PPI (7.5 microns)
- SeeYa 1.03″ Micro-OLED, 2560×2560, 3528 PPI (7.2 microns) – Used in Big Screen Beyond
- Sony ~1.3″ Micro-OLED, 3552 x 3840, 4032 PPI (6.3 microns) – Sony XR
- BOE 1.35″ Micro-OLED 3552×3840, 4032 PPI (6.3 microns) – Demoed at Display Week 2024
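The microns column above is simply 25,400 (microns per inch) divided by the PPI; a minimal check:

```python
# Pixel pitch in microns is 25,400 (microns per inch) divided by the PPI.
def pitch_um(ppi: float) -> float:
    return 25_400 / ppi

for name, ppi in [("Valve Index", 600), ("Quest 3", 1050), ("Varjo Aero", 1200),
                  ("Innolux 2023 prototype", 2117), ("Apple Vision Pro", 3386),
                  ("Sony XR / BOE micro-OLED", 4032)]:
    print(f"{name}: {pitch_um(ppi):.1f} microns")  # reproduces the microns column above
```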
In 2017, I wrote Near Eye Displays (NEDs): Gaps In Pixel Sizes (table from that article on the right), which talks about what I call the pixel-size gap between microdisplays (on silicon) and small LCDs (on glass). While the pixel sizes have gotten smaller for both micro-OLEDs and LCDs for VR in the last ~7 years, a sizable gap remains.
Contrast – Factoring the Display and Pancake Optics
Micro-OLEDs at the display level certainly have a better inherent black level and can turn pixels completely off. LCDs work by blocking light using cross-polarization, which results in imperfect blacks. Thus, with micro-OLEDs, a large area of black will look black, whereas with LCDs, it will be dark gray.
However, we are not looking at the displays directly but through optics, specifically pancake optics, which dominate new VR designs today. Pancake optics, which use polarized light and QWP to recirculate the image twice through parts of the optics, are prone to internal reflections that cause “ghosts” (somewhat out-of-focus reflections) and contrast loss.
Using smaller micro-OLEDs requires more "aggressive" optical designs with higher magnification to support a wide FOV. These more aggressive optical designs tend to be more expensive, less sharp, and more prone to polarization losses. Any loss of polarization in pancake optics causes a loss of contrast and ghosting. There seems to be a tendency with pancake optics for stray light to bounce around and end up in the periphery of the image, causing a glow if the periphery of the image is supposed to be black.
For example, the AVP is known to have an outer "glow" when watching movie content on a black background. Most VR headsets default to a "movie or home theater" environment rather than a black background. While that may be for aesthetics, the engineer in me thinks it might also help hide the glow. People online suggest turning on some background with the AVP for people bothered by the glow on a black background.
Complaints of outer glow when watching movies seem more prevalent with headsets using micro-OLEDs, but this is hardly scientific. It could be just that the micro-OLEDs have a better black level and make the glow more noticeable, but it might also be caused by their more aggressive optical magnification (something that might be, or may already have been, studied). My key point is that it is not as simple as considering the display's inherent contrast; you have to consider the whole optical system.
LightPolymers’ Alternative to Plastic Films for QWP & Polarizers
LightPolymers has a Lyotropic (water-based) Liquid Crystal (LC) material that can form optical surfaces like QWPs and polarizers. Imagine Optix, which this blog broke the news of Meta buying in December 2021 (Exclusive: Imagine Optix Bought By Meta), was also developing LC-based polarized-light control films.
Like Imagine Optix, LightPolymers has been coating plastic films with LCs, but LightPolymers is developing the ability to apply its LCs directly to flat and curved lenses, which is a potential game changer. In April 2024, LightPolymers and Hypervision announced the joint development of this lens-coating technology and had a poster in Hypervision's booth showing it (right).
3M Dominates Polarized Light Plastic Films for Pancake Optics
3M is today the dominant player in polarized light-control plastic films and is even more dominant in these films for pancake optics. At 3M’s SID Display Week booth in June 2024, they showed the ByteDance PICO4, MQP, and MQ3 pancake optics using 3M polarization films. Their films are also used in the Fresnel lens-based Quest 2. It is an open secret (but 3M would not confirm or deny) that the Apple Vision Pro also uses 3M polarization films.
“3M did not invent the optical architecture of pancake lenses. However, 3M was the first company to successfully demonstrate the viability of pancake lenses in VR headsets by combining it with its patented reflective polarizer technology.“
That same article supports Kopin’s (now spun out to Lightning Silicon) claims to have been the first to develop pancake optics. Kopin has been demonstrating pancake optics combined with their Micro-OLEDs for years, which are used in Panasonic-ShiftAll headsets.
3M’s 2017 SPIE Paper Folded Optics with Birefringent Reflective Polarizers discusses the use of their films (and also mentions Kopin developments) in cemented (e.g., AVP) and air gap (e.g., MQP and MP3) pancake optics. The paper also discusses how their polarization films can be made (with heat softening) to conform to curved optics such as the AVP.
LightPolymers’ Potential Advantage over Plastic Films
The most obvious drawbacks of plastic films are that they are relatively thick (on the order of 70+ microns per film, and there are typically multiple films per lens) and are usually attached using adhesive coatings. The thickness, particularly when trying to conform to a curved surface, can cause issues with polarized light. The adhesives introduce some scatter, resulting in some loss of polarization.
By applying their LCs directly to the lens, LightPolymers claims it could reduce the thickness of the polarization control layers (QWPs and polarizers) by as much as 10x and eliminate the use of adhesives.
In the photos below (taken with a 5x macro lens), I used a knife to slightly separate the edges of the films from the Meta Quest 3’s eye-side and display-side lenses to show them. On the eye-side lens, there are three films, which are thought to be a QWP, absorptive polarizer, and reflective polarizer. On the display-side lens, there are two films, one of which is a QWP, and the other may be just a protective film. In the eye-side lens photo, you can see where the adhesive has bubbled up after separation. The diagram on the right shows the films and paths for light with the MQ3/MQP pancake optics.
Because LightPolymers' LC coating is applied to each lens, it could also be applied/patterned to improve or compensate for other issues in the optics.
Current State of LightPolymers' Technology
LightPolymers is already applying its LC to plastic films and flat glass. Their joint agreement with Hypervision involves developing manufacturable methods for directly applying the LC coatings to curved lens surfaces, and this technology will take time to develop. LightPolymers' business is making the LC materials; it then works with partners such as Hypervision to apply the LC to their lenses. They say the equipment necessary to apply the LCs is readily available and low-cost (for manufacturing equipment).
Conclusion
Hypervision has demonstrated the ability to design very wide FOV pancake optics with a large optical sweet spot that maintains a larger area of sharpness than any other design I have seen.
Based on my experience in both semiconductors and optics, I think Hypervision makes a good case in their white paper, 60PPD: by fast LCD but not by micro OLED, that getting to a wide FOV while approaching "retinal" 60PPD is more likely to happen using LCD technology than micro-OLEDs.
Fundamentally, micro-OLEDs are unlikely to get much bigger than 1.4″ diagonally, at least commercially, for many years, if not more than a decade. While they could make the pixels smaller, today's pancake optics struggle to resolve ~7.5-micron pixels, much less smaller ones.
On the other hand, several companies, including Innolux and BOE, have shown research prototypes of 12-micron LCD pixels, or half the (linear) size of today's LCDs used in high-volume VR headsets. If BOE or Innolux went into production with these displays, it would enable Hypervision's HO140 to reach about 48 PPD in the center with a roughly 140-degree FOV, and only small incremental changes would get them to 60 PPD with the same FOV.
Appendix: More on Hypervision
I first encountered Hypervision at AWE 2021 with their blended Fresnel-lens 240-degree design, but as this blog primarily covered optical AR, it slipped under my radar. Since then, I have been covering optical and pass-through mixed reality, particularly pass-through MR using pancake optics. By AR/VR/MR 2023, Hypervision had demonstrated a single-lens (per eye) 140-degree design and a blended dual-lens-and-display 240-degree (diagonal) FOV pancake optics design.
These were vastly better than their older Fresnel designs and demonstrated Hypervision’s optical design capability. In May 2023, passthrough MR startup Lynx and Hypervision announced they were collaborating. For some more background on my encounters with Hypervision, see Hypervision Background.
Hypervision has been using its knowledge of pancake optics to analyze the Apple Vision Pro’s optical design, which I have reported on in Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry,” Apple Vision Pro Discussion Video by Karl Guttag and Jason McDowall, Apple Vision Pro – Influencing the Influencers & “Information Density,” and Apple Vision Pro (Part 4)—Hypervision Pancake Optics Analysis.
VR MMO ‘Zenith’ Releases Final Content Update & Drops Price Amid Development Halt
Ramen VR, the studio behind Zenith: The Last City, announced last month it would cease development on the VR MMORPG, citing a struggle to retain players. Now the game has received its final content update along with a bittersweet farewell to new players: a lower price.
Update (August 29th, 2024): Ramen VR has pushed its final content drop to Zenith. Detailed in a blog post, the last content update, Season 4: Golden Isles, is the result of player requests sourced from community members in the Zenith Discord and the customer feedback portal Feedbear.
The last season brings a smattering of new content to the free-to-play section Infinite Realms, as well as bug fixes for the top issues across both Infinite Realms and the Zenith: The Last City paid DLC. This includes new layouts in Infinite Realms, new cosmetics, the return of the original Fast Fly, and a doubling of the limit for daily raids in The Last City.
As a farewell, Zenith: The Last City paid DLC has now dropped from its regular price of $30 to $10, available across all major VR headsets.
"We're grateful to all the Zenitheans who have been here since the beginning, as well as anyone who chooses to pick up the game in the future. Your passion is what brings Zenith to life, and we hope you continue to make new friends and cherished memories in Zenith for the foreseeable future," the team says.
The game will also host “nostalgic community events” following the release of Season 4, which will take place September 6th-8th. The original article detailing the development shutdown follows below:
Original Article (July 15th, 2024): The studio announced the news in a video, linked below, which describes some of the reasons behind the decision:
“Zenith has struggled with retaining players since very early on. Even though we’ve had hundreds of thousands of players, the vast majority of them stopped playing Zenith after about a month,” the company says in an FAQ.
Initially the result of a successful Kickstarter campaign in 2019, the Steam Early Access title went on to secure a $10 million Series A funding round, later landing a $35 million Series B in March 2022. Just two months before securing its Series B, the studio released Zenith on PSVR and Quest 2, putting it in the best possible position to capitalize on its ability to play cross-platform.
In early 2024, Ramen VR revealed Zenith was running at a loss on a month-to-month basis “for the better part of a year,” which prompted the studio to release Infinite Realms, a free-to-play model, in hopes of attracting paid users.
“Despite our best efforts over the 5.5 years of development (and well before Infinite Realms launched), we weren’t able to improve retaining players. Zenith started losing money and it isn’t feasible to continue running it at a loss,” the FAQ continues.
While the studio is shutting down development, it’s not killing off the game entirely. Shards for both its paid Zenith: The Last City game and free-to-play Zenith: Infinite Realms version will be running “for the foreseeable future,” Ramen VR says. “The community will be the first to know far in advance if that changes.”
The post VR MMO ‘Zenith’ Releases Final Content Update & Drops Price Amid Development Halt appeared first on Road to VR.
Hypervision: Micro-OLED vs. LCD – And Why the Apple Vision Pro is “Blurry”
Introduction
The optics R&D company Hypervision provided a detailed design analysis of the Apple Vision Pro's optical design in June 2023 (see Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis). Hypervision has just released an interesting analysis exploring whether the micro-OLEDs used by the Apple Vision Pro, or the LCDs used by Meta and most others, can support a high angular resolution of 60 pixels per degree and a wide FOV. Hypervision's report is titled 60PPD: by fast LCD but not by micro OLED. I'm going to touch on some highlights from Hypervision's analysis; please see their report for more details.
I Will Be at AWE Next Week
AWE is next week. I will be on the PANEL: Current State and Future Direction of AR Glasses at AWE on Wednesday, June 19th, from 11:30 AM to 12:25 PM. I still have a few time slots. If you want to meet, please email meet@kgontech.com.
AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, which is available on YouTube. This presentation was in anticipation of the Apple Vision Pro.
An AWE speaker discount code – SPKR24D – provides a 20% discount. You can register for AWE here.
Apple Vision Pro Sharpness Study at AWE 2024 – Need Help
As Hypervision’s analysis finds, plus reports I have received from users, the Apple Vision Pro’s sharpness varies from unit to unit. AWE 2024 is an opportunity to sample many Apple Vision Pro headsets to see how the focus varies from unit to unit. I will be there with my high-resolution camera.
While not absolutely necessary, it would be helpful if you could download my test pattern, located here, and install it on your Apple Vision Pro. If you want to help, contact me via meet@kgontech.com or flag me down at the show. I will be spending most of my time on the Expo floor. If you participate, you can remain anonymous or receive a mention of you or your company at the end of a related article thanking you for your participation. I can’t promise anything, but I thought it would be worth trying.
AVP Blurry Image Controversy
My article Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 was the first to report that the AVP was a little blurry. I compared high-resolution pictures showing the same FOV with the AVP and the Meta Quest 3 (MQ3) in that article.
This article caused controversy and was discussed in many forums and by influencers, including Linus Tech Tips and Marques Brownlee (see Apple Vision Pro—Influencing the Influencers & "Information Density" and "Controversy" of the AVP Being a Little Blurry Discussed on Marques Brownlee's Podcast and Hugo Barra's Blog).
I have recently been taking pictures through Bigscreen Beyond's (BSB) headset and decided to compare it with the same test (above right). In terms of optical sharpness, it is between the AVP and the MQ3. Interestingly, the BSB headset has a slightly lower angular resolution (~32 pixels per degree) than the AVP (~40 PPD) in the optically best part of the lens where these crops were taken. Yet, the text and line patterns look better on the BSB than on the AVP.
Hypervision's Correction – The AVP is Not Out of Focus, but the Optics are Blurry
I speculated that the AVP seemed out of focus in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3. Hypervision corrected me that the softness could not be due to being out of focus. Hypervision has found that sharpness varies from one AVP to the next. The AVP’s best focus nominally occurs with an apparent focus of about 1 meter. Hypervision pointed out that if the headset’s device focus were slightly wrong, it would simply shift the apparent focus distance as the eye/camera would adjust to a small change in focus (unless it was so far off that eye/camera focusing was impossible). Thus, the blur is not a focus problem but rather a resolution problem with the optics.
Hypervision’s Analysis – Tolerances Required Beyond that of Today’s Plastic Optics
The AVP has very aggressive and complex pancake optics for a compact form factor while supporting a wide FOV with a relatively small Micro-OLED. Most other pancake optics have two elements, which mate with a flat surface for the polarizers and quarter waveplates that manipulate the polarized light to cause the light to pass through the optics twice (see Meta example below left). Apple has a more complex three-lens optic with curved polarizers and quarter waveplates (below right).
Based on my studies of how the AVP dynamically adjusts optical imperfections like chroma aberrations based on eye tracking, the AVP’s optics are “unstable” because, without dynamic correction, the imperfections would be seen as much worse.
Hypervision RMS Analysis
Hypervision did an RMS analysis comparing a larger LCD panel with a small Micro-OLED. It should probably come as no surprise that requiring about 1.8x (2.56/1.4) greater magnification makes everything more critical. The problem, as Hypervision points out, is that Micro-OLED on silicon can’t get bigger for many years due to semiconductor manufacturing limitations (reticle limit). Thus, the only way for Micro-OLED designs to support higher resolution and wider FOV is to make the pixels smaller and the optics much more difficult.
Hypervision Monte-Carlo Analysis
Hypervision then did a Monte-Carlo analysis factoring in optical tolerances. Remember, we are talking about fairly large plastic-molded lenses that must be reasonably priced, not something you would pay hundreds of dollars for in a large camera or microscope.
Hypervision’s 140 Degree FOV with 60PPD Approach
Hypervision believes that the only practical path to ~60PPD and a ~140-degree FOV is with a 2.56″ LCD display. LCDs' natural progression toward smaller pixels will enable higher resolution that optics like Hypervision's can already support.
Conclusion
Overall, Hypervision makes a good case that current designs with Micro-OLED with pancake optics are already pushing the limits of reasonably priced optics. Using technology with somewhat bigger pixels makes resolving them easier, and having a bigger display makes supporting a wider FOV less challenging.
It might be that the AVP is slightly blurry because it is already beyond the limits of a manufacturable design. So the natural question is: if the AVP already has problems, how could they support higher resolution and a wider FOV?
The size of micro-OLEDs built on silicon backplanes is limited by the reticle limit to a chip size of about ~1.4″ diagonally, at least without resorting to multiple-reticle "stitching" (which is possible but not practical for a cost-effective device). Thus, for micro-OLEDs to increase resolution, the pixels must be smaller, requiring even more magnification from the optics. Then, increasing the FOV will require even more optical magnification of ever-tinier pixels.
LCDs have issues, particularly with black levels and contrast. Smaller illumination LEDs with local dimming may help, but they have not proven to work as well as micro-OLEDs.
Hide-and-Seek VR Shooter ‘Mannequin’ Launches on Quest & Steam, Trailer Here
Mannequin has graduated from early access on Steam and Quest, bringing the 2v3 hide-and-seek style shooter to the respective main stores.
Update (September 13th, 2024): Fast Travel Games has fully released its asymmetrical stealth VR shooter Mannequin, which is said to include visual improvements with enhancements for Quest 3. In addition to its four maps, the studio says there’s also a new map called ‘Excavation’ releasing later this month as a free update.
The original article highlighting its early access release on Steam follows below:
Original Article (June 12th, 2024): The hide-and-seek style VR shooter will launch on Steam Early Access on June 20th, which includes cross-play support with Quest.
The studio says it's also offering a free Steam weekend at release, which will arrive with a two-week launch discount, taking the game from its regular price of $20 down to $15.
Fast Travel Games says the PC VR version of Mannequin is slated to include improved visuals over the Quest version; however, a new Quest patch will also add some Quest-specific visual improvements in addition to cross-play support.
Notably, both the Steam and Quest versions will feature all four levels with “fully finalized” design and first responder layouts, and a smoother UI than that seen at launch of the Quest Early Access version in early May.
Mannequin is also set to launch new modes and features "soon," which the studio promises will be a "major update."
First revealed in September, Mannequin brings a 2v3 experience akin to a deadly game of cat and mouse, letting two elite Agents hunt three shape-shifting aliens, aka ‘Mannequins’. You can wishlist the Steam version here, where you can also keep an eye out for the free Steam weekend.
The post Hide-and-Seek VR Shooter ‘Mannequin’ Launches on Quest & Steam, Trailer Here appeared first on Road to VR.
Apple Vision Pro – Influencing the Influencers & “Information Density”
Introduction
Many media outlets, large and small, both text and video, use this blog as a resource for technical information on mixed reality headsets. Sometimes, they even give credit. In the past two weeks, this blog was prominently cited in YouTube videos by Linus Tech Tips (LTT) and Artur Tech Tales. Less fortunately, Adam Savage's Tested, hosted by Norman Chan, used a spreadsheet test pattern from this blog in its Apple Vision Pro review to demonstrate foveated rendering issues.
I will follow up with a discussion of the Linus Tech Tips video, which deals primarily with human factors. In particular, I want to discuss the "Information Density" issue of virtual versus physical monitors, which the LTT video touched on.
Influencing the Influencers On Apple Vision Pro
Linus Tech Tips (LTT)
In their "Apple Vision Pro—A PC Guy's Perspective," Linus Tech Tips showed several pages from this blog and was nice enough to prominently feature the pages they were using and the web addresses (below). Additionally, I enjoyed their somewhat humorous physical "simulation" of the AVP (more on that in a bit). LTT used images (below-left and below-center) from the blog to explain how the optics distort the display and how the processing in the AVP is used in combination with eye tracking to reduce that distortion. LTT also used images from the blog (below-right) to show how the field of view (FOV) changes based on the distance from the eye to the optics.
Adam Savage's Tested
Adam Savage's Tested, with host Norman Chan, reviewed the Apple Vision Pro and used this blog's AVP-XLS-on-BLACK-Large-Array from Spreadsheet "Breaks" The Apple Vision Pro's (AVP) Eye-Tracking/Foveation & the First Through-the-optics Pictures to discuss how the foveated boundaries of the Apple Vision Pro are visible. While the spreadsheet is taken from this blog, I didn't see any references given.
The Adam Savage's Tested video either missed or was incorrect on several points:
- It missed the point of the blog article that the foveated rendering has problems with spreadsheets when directly rendered from Excel on the AVP instead of mirrored by a MacBook.
- It stated that taking pictures through the optics is impossible, which this blog has been doing for over a month (including in this article).
- It said that the AVP's passthrough 3-D perspective was good with short-range but bad with long-range objects, but Linus Tech Tips (discussed later) found the opposite. The AVP's accuracy is poor with short-range objects due to the camera placement.
- It said there was no "warping" of the real world with video passthrough, which is untrue. The AVP does less warping than the Meta Quest 3 and Quest Pro, but it still warps objects less than 0.6 meters (2 feet) away, toward the center and upper part of the user's view. It is impossible to be both perspective-correct and warp-free with the AVP's camera placement for near objects; the AVP seems to trade off being perspective-correct to have less warping than the Meta headsets.
Artur’s Tech Tales – Interview on AVP’s Optical Design
Artur's Tech Tales Apple Vision Pro OPTICS—Deep Technical Analysis features an interview and presentation by Arthur Rabner, CTO of Hypervision. In his presentation, Rabner mentions this blog several times. The video details the AVP optics and follows up on Hypervision's white paper discussed in Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis.
Linus Tech Tips on Apple Vision Pro’s Human Factors
Much of the Linus Tech Tips (LTT) video deals with human factors and user interface issues. For the rest of this article, I will discuss and expand upon comments made in the LTT video. Linus also commented on the passthrough camera's "shutter angle," but I moved my discussion on that subject to the Appendix at the end, as it was a bit off-topic and needs some explanation.
It makes a mess of your face
At 5:18 in the video, Linus takes the headset off and shows the red marks left by the Apple Vision Pro (left), which I think may have been intentional after Linus complained about issues with the headband earlier. For reference, I have included the marks left by the Apple Vision Pro on my face (below-right). I sometimes joke that I wonder if I wear it long enough, it will make a groove in my skull to help hold up the headset.
An Apple person who is an expert at AVP fitting will probably be able to tell based on the marks on our faces if we have the “wrong” face interface. Linus’s headset makes stronger marks on his cheeks, whereas mine makes the darkest marks on my forehead. As I use inserts, I have a fairly thick (but typical for wearing inserts) 25W face hood with the thinner “W” interface, and AVP’s eye detection often complains that I need to get my eyes closer to the lenses. So, I end up cranking the solo band almost to the point where I feel my pulse on my forehead like a blood pressure measuring cuff (perhaps a health “feature” in the future?).
Need for game controllers
For virtual reality, Linus is happy with the resolution and placement of virtual objects in the real world. But he stated, “Unfortunately, the whole thing falls apart when you interact with the game.” Linus then goes into the many problems of not having controllers and relying on hand tracking alone.
I'm not a VR gamer, but I agree with The Verge that the AVP's hand and eye tracking is "magic until it's not." I am endlessly frustrated with eye-tracking-based finger selection. Even with the headset cranked hard against my face, the eye tracking is unstable, even after recalibrating the IPD and eye tracking many times. I consider eye and hand tracking a good "secondary" selection tool that needs an accurate primary selection tool. I have an Apple Magic Trackpad that "works" with the AVP but does not work in "3-D space."
Windows PC Gaming Video Mirroring via WiFi has Lag, Low Resolution, and Compression Artifacts
Linus discussed using the Steam App on the AVP to play games. He liked that he could get a large image and lay back, but there is some lag, which could be problematic for some games, particularly competitive ones; the resolution is limited to 1080p, and compression artifacts are noticeable.
Linus also discussed using the Sunshine (streaming server on the PC) and Moonlight (remote access on the AVP) apps to mirror Windows PCs. While this combination supports up to 4K at 120fps, Linus says you will need an incredibly good wireless access point for the higher resolutions and frame rates. In terms of effective resolution and what I like to call "Information Density," these apps will still suffer a significant resolution loss from trying to simulate a virtual monitor in 3-D space, as I have discussed in Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous and Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, and shown with through-the-lens pictures in Apple Vision Pro's (AVP) Image Quality Issues – First Impressions and Apple Vision Pro's Optics Blurrier & Lower Contrast than Meta Quest 3.
From a "pro" design perspective, it is rather poor on Apple's part that the AVP does not support a direct Thunderbolt link for both data and power, while at the same time it requires a wired battery. I should note that the $300 developer's strap supports a lowish 100Mbps Ethernet connection (compared to USB-C/Thunderbolt's 0.48 to 40 Gbps) through a USB-C connector while still requiring the battery pack for power. There are many unused pins on the developer's strap, and there are indications in the AVP's software that the strap might support higher-speed connections (and maybe access to peripherals) in the future.
Warping effect of passthrough
In terms of video passthrough, at 13:43 in the video, Linus comments about the warping effect of close objects and depth perception being “a bit off.” He also discussed that you are looking at the world through phone-type cameras. When you move your head, the passthrough looks duller, with a significant blur (“Jello”).
The same Linus Tech Tip video also included humorous simulations of the AVP environment with people carrying large-screen monitors. At one point (shown below), they show a person wearing a respirator mask (to “simulate” the headset) surrounded by three very large monitors/TVs. They show how the user has to move their head around to see everything. LTT doesn’t mention that those monitors’ angular resolution is fairly low, which is why those monitors need to be so big.
Sharing documents is a pain.
Linus discussed the AVP’s difficulty sharing documents with others in the same room. Part of this is because the MacBook’s display goes blank when mirroring onto the AVP. Linus discussed how he had to use a “bizarre workaround” of setting up a video conference to share a document with people in the same room.
Information Density – The AVP Delivers Effectively Multiple Large but Very Low-Resolution Monitors
The most important demonstration in the LTT video involves what I like to call the "Information Density" problem. The AVP, or any VR headset, has low information density when trying to emulate a 2-D physical monitor in 3-D space. It is a fundamental problem; the effective resolution of the AVP is well less than half (linearly; less than a quarter in area) of the resolution of the monitors being simulated (as discussed in Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous and Apple Vision Pro (Part 5A), and shown with through-the-lens pictures in Apple Vision Pro's (AVP) Image Quality Issues – First Impressions and Apple Vision Pro's Optics Blurrier & Lower Contrast than Meta Quest 3). The key contributors to this issue are listed below, followed by a rough numerical sketch:
- The peak display resolution in the center of the optics is only 44.4 pixels per degree (human vision is typically better than 60 PPD).
- The 2-D/Monitor image must be resampled into 3-D space with an effective resolution loss greater than 2x.
- If the monitor is to be viewable, it must be inscribed inside the oval sweet spot of the optics. In the case of the AVP, this cuts off about half the pixels.
- While the AVP’s approximate horizontal FOV is about 100 degrees, the optical resolution drops considerably in the outer third of the optics. Only about the center 40-50 degrees of the FOV is usable for high-resolution content.
- Simply put, the AVP needs more than double the PPD and better optics to provide typical modern computer monitors’ information/resolution density. Even then, it would be somewhat lacking in some aspects.
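As a back-of-the-envelope illustration of how those factors compound (rough, assumed numbers, not measurements):

```python
# Rough "information density" estimate using the bullets above.
# All numbers are approximate and for illustration only.
center_ppd = 44.4         # AVP peak pixels per degree (center of the optics)
usable_fov_deg = 45       # roughly the sharp central region cited above
resample_loss = 2.0       # >2x linear loss from re-projecting a 2-D monitor into 3-D space

native_px = center_ppd * usable_fov_deg        # ~2,000 headset pixels across the sharp zone
effective_px = native_px / resample_loss       # ~1,000 "monitor-equivalent" pixels
print(native_px, effective_px)
# A virtual monitor confined to the sharp zone resolves on the order of 1,000 pixels
# across, a small fraction of the 3,840 pixels of the 4K monitors it is meant to replace.
```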
Below, I show a close-up of the center (best case) through the AVP's optics (left) and the same image at about the same FOV on a computer monitor (right). Things must be blown up about 2x (linearly) to be as legible on the AVP as on a good computer monitor.
Some current issues with monitor simulation are “temporary software issues” that can be improved, but that is not true with the information density problem.
Linus states in the video (at 17:48) that setting up the AVP is a “bit of a chore,” but it should be understood most of the “chore” is due to current software limitations that could be fixed with better software. The most obvious problems, as identified by Linus, are that the AVP does not currently support multiple screens from a MacBook, and it does not save the virtual screen location of the MacBook. I think most people expect Apple to fix these problems at some point in the near future.
At 18:20, Linus showed the real multiple-monitor workspace of someone doing video editing (see below). While a bit extreme for some people, with two vertically stacked 4K monitors in landscape orientation and a third 4K monitor in portrait mode, it is not that far off what I have been using for over a decade with two large side-by-side monitors (today I have a 34″ 21:9 1440p "center monitor" and a 28″ 4K side monitor, both in landscape mode).
I want to note a comment made by Linus (with my bold emphasis):
“Vision Pro Sounds like having your own personal Colin holding a TV for you and then allowing it to be repositioned and float effortlessly wherever you want. But in practice, I just don’t really often need to do that, and neither do a lot of people. For example, Nicole, here’s a real person doing real work [and] for a fraction of the cost of a Vision Pro, she has multiple 4K displays all within her field of view at once, and this is how much she has to move her head in order to look between them. Wow.
Again, I appreciate this thing for the technological Marvel that it is—a 4K display in a single Square inch. But for optimal text clarity, you need to use most of those pixels, meaning that the virtual monitor needs to be absolutely massive for the Vision Pro to really shine.“
The bold highlights above make the point about information density. A person can see all the information all at once and then, with minimal eye and head movement, see the specific information they want to see at that moment. Making text bigger only “works” for small amounts of content as it makes reading slower with larger head and eye movement and will tend to make the eyes more tired with movement over wider angles.
To drive the point home, the LTT video "simulates" an AVP desktop, assuming multiple-monitor support, by physically placing three very large monitors side by side with two smaller displays on top. They had the simulated user wear a paint respirator mask to "simulate" the headset (and likely for comic effect). I would like to add that, even at that size, each of those large monitors on the AVP will have the resolution capability of something more like a 1920×1080 monitor, or about half the content of a 4K monitor linearly and one-fourth in area.
Quoting Linus about this part of the video (with my bold emphasis):
It’s more like having a much larger TV that is quite a bit farther away, and that is a good thing in the sense that you’ll be focusing more than a few feet in front of you. But I still found that in spite of this, that it was a big problem for me if I spent more than an hour or so in spatial-computing-land.
Making this productivity problem worse is the fact that, at this time, the Vision Pro doesn’t allow you to save your layouts. So every time you want to get back into it, you’ve got to put it on, authenticate, connect to your MacBook, resize that display, open a safari window, put that over there where you want it, maybe your emails go over here, it’s a lot of friction that our editors, for example, don’t go through every time they want to sit down and get a couple hours of work done before their eyes and face hurt too much to continue.
I would classify many of the issues Linus gave in the above quote as solvable in software for the AVP. What is not likely solvable in software are headaches, eye strain, and the low angular resolution of the AVP relative to a modern computer monitor in a typical setup.
While speaking in the Los Angeles area at the SID LA One Day conference, I stopped in at Bigscreen Beyond to try out their headset for about three hours. I could wear the Bigscreen Beyond for almost three hours, whereas I typically get a splitting headache with the AVP after about 40 minutes. I don't know why, but it is likely a combination of much less pressure on my forehead and something to do with the optics. Whatever it is, there is clearly a big difference to me. It was also much easier to drink from a can (right) with the Bigscreen's much smaller headset.
Conclusion
It is gratifying to see the blog’s work reach a wide audience worldwide (about 50% of this blog’s audience is outside the USA). As a result of other media outlets picking up this blog’s work, the readership roughly doubled last month to about 50,000 (Google Analytics “Users”).
I particularly appreciated the Linus Tech Tip example of a real workspace in contrast to their “simulation” of the AVP workspace. It helps illustrate some human factor issues with having a headset simulate a computer monitor, including information density. I keep pounding on the Information Density issue because it seems underappreciated by many of the media reports on the AVP.
Appendix Linus Comments on AVP’s “Weird Camera Shutter Angle”
I moved this discussion to this Appendix because it involves some technical discussion that, while it may be important, may not be of interest to everyone and takes some time to explain. At the same time, I didn’t want to ignore it as it brings up a potential issue with the AVP.
At about 16:30 in the LTT Video, Linus also states that the Apple Vision Pro cameras use “weird shutter angles to compensate for the flickering of lights around you, causing them [the AVP] to crank up the ISO [sensitivity], adding a bunch of noise to the image.”
For those that don't know, "shutter angle" (see also https://www.digitalcameraworld.com/features/cheat-sheet-shutter-angles-vs-shutter-speeds) is a holdover term from the days of mechanical movie shutters, where the shutter was open for a percentage of a 360-degree rotation (right). Still, it is now applied to camera shutters, including "electronic shutters" (many large mirrorless cameras have mechanical and electronic shutter options with different effects). A 180-degree shutter angle means the shutter/camera scanning is open for one-half the frame time: 1/48th of a second at a 1/24th-of-a-second frame time, or 1/180th of a second at a 1/90th-of-a-second frame time. Typically, people talk about how different shutter angles affect the choppiness of motion and motion blur, not brightness or ISO, even though shutter angle does affect ISO/brightness due to the change in exposure time.
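For reference, the exposure time implied by a shutter angle is just the fraction of the frame time the shutter is "open"; a minimal sketch:

```python
# Exposure time implied by a shutter angle: exposure = (shutter_angle / 360) * frame_time.
def exposure_seconds(shutter_angle_deg: float, frame_rate_hz: float) -> float:
    return (shutter_angle_deg / 360.0) / frame_rate_hz

print(exposure_seconds(180, 24))  # 1/48 s at 24 fps
print(exposure_seconds(180, 90))  # 1/180 s at 90 fps
# A smaller shutter angle (shorter exposure) forces a higher ISO for the same image
# brightness, which is the noise trade-off Linus describes.
```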
I’m not sure why Linus is saying that certain lights are reducing the shutter angle, thus increasing ISO, unless he is saying that the shutter time is being reduced with certain types of light (or simply bright lights) or with certain types of flickering lights the cameras are missing much of the light. If so, it is a roundabout way of discussing the camera issue; as discussed above, the term shutter angle is typically used in the context of motion effects, with brightness/ISO being more of a side issue.
A related temporal issue is the duty cycle of the displays (as opposed to the passthrough cameras), which has a similar “shutter angle” issue. VR users have found that displays with long on-time duty cycles cause perceived blurriness with rapid head movement. Thus, they tend to prefer display technologies with low-duty cycles. However, low display duty cycles typically result in less display brightness. LED backlit LCDs can drive the LEDs harder for shorter periods to help make up for the brightness loss. However, OLED microdisplays commonly have relatively long (sometimes 100%) on-time duty cycles. I have not yet had a chance to check the duty cycle of the AVP, but it is one of the things on my to-do list. In light of Linus’s comments, I will want to set up some experiments to check out the temporal behavior of the AVP’s passthrough camera.
Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3
Introduction – Sorry, But It’s True
I have taken thousands of pictures through dozens of different headsets, and I noticed that the Apple Vision Pro (AVP) image is a little blurry, so I decided to investigate. Following up on my Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions article, this article compares the AVP to the Meta Quest 3 by taking the same image at the same size in both headsets, with results that many will find surprising.
I know all the “instant experts” are singing the praises of the Vision Pro as having “such high resolution that there is no screen door effect,” but they don’t seem to understand that the screen door effect is hiding in plain sight, or should I say “blurry sight.” As mentioned last time, the AVP covers up its lower-than-human angular resolution by making everything bigger and bolder (the defaults, even for the small window mode setting, are pretty large).
While I’m causing controversy by showing evidence, I might as well point out that the AVP’s contrast and color uniformity are also slightly lower than the Meta Quest 3’s on anything but a nearly black image. This is because the problems with the AVP’s pancake optics dominate over the inherent contrast of its OLED microdisplays. This should not be a surprise. Many people have reported “glow” coming from the AVP, particularly when watching movies. That “glow” is caused by unwanted reflections in the pancake optics.
If you click on any image in this article, you can access it in full resolution as cropped from a 45-megapixel original image. The source image is on this blog’s Test Pattern Page. As is the usual practice of this blog, I will show my work below. If you disagree, please show your evidence.
Hiding the Screen Door Effect in Plain Sight with Blur
The numbers don’t lie. As I reported last time in Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, the AVP’s peak center resolution is about 44.4 pixels per degree (PPD), below the 80 PPD that Apple calls “retinal resolution,” so the pixel jaggies and screen door effect should be visible if the optics were sharp. So why do so many report that the AVP’s resolution must be high because they don’t see the screen door effect? Because they are ignoring the sharpness of the optics.
Two factors determine the effective resolution: the PPD of the display through the optics, and the sharpness and contrast of the optics, commonly measured by the Modulation Transfer Function (MTF; see the Appendix on MTF).
People do not see the screen door effect with the AVP because the display is slightly out of focus/blurry. Low pass filtering/blurring is the classic way to reduce aliasing and screen door effects. I noticed that when playing with the AVP’s optics, the optics have to be almost touching the display to be in focus. The AVP’s panel appears to be recessed by about 1 millimeter (roughly judging by my eye) beyond the best focus distance. This is just enough so that the thinner gaps between pixels are out of focus while only making the pixels slightly blurry. There are potentially other explanations for the blur, including the microlenses over the OLED panel or possibly a softening film on top of the panel. Still, the focus seems to be the most likely cause of the blurring.
Full Image Pictures from the Center 46 Degrees of the FOV
I’m going to start with high-resolution pictures through the optics. You won’t be able to see any detail without clicking on them to view them at full resolution, but you may discern that the MQ3 looks sharper by following the progressively smaller fonts. This is true even in the center of the optics (square “34” below), before the AVP’s foveated rendering adds a very large blur at the outside of the image (squares 11, 21, 31, 41, 51, and 61). Later, I will show a series of crops that put the central regions next to each other in more detail.
The pictures below were taken with a Canon R5 (45-megapixel) camera with a 16mm lens at f/8. With a combination of window sizing and moving the headset, I created the same size image on the Apple Vision Pro and Meta Quest 3 to give a fair comparison (yes, it took a lot of time). A MacBook Pro M3 Pro was casting the AVP image, and the Meta Quest 3 was running the Immersed application (to get a flat image) mirroring a PC laptop. For reference, I added a picture of a 28″ LCD monitor taken from about 30″ to give approximately the same FOV as the image from a conventional 4K monitor (this monitor can resolve the single pixels of four of these 1080p images, although you would need very good vision to see them distinctly).
Medium Close-Up Comparison
Below are crops from near the center of the AVP image (left), the 28″ monitor (center), and the MQ3 image (right). The red circle on the AVP image over the number 34 is from the eye-tracking pointer being on (also used to help align and focus the camera). The blur of the AVP is more evident in the larger view.
Extreme Close-Up of AVP and MQ3
Below, I crop even closer to see the details (all the images above are at the same resolution), with the AVP on the top and the MQ3 on the bottom. Some things to note:
- Neither the AVP nor MQ3 can resolve the 1-pixel lines, even though a cheap 1080p monitor would show them distinctly.
- While the MQ3 has more jaggies and the screen door effect, it is noticeably sharper.
- Looking at the space between the circle and the 3-pixel-wide lines pointed at by the red arrow, note that the AVP has less contrast (is less black) than the MQ3.
- Neither the AVP nor the MQ3 can resolve the 1-pixel-wide lines correctly, but on the MQ3 the 2- and 3-pixel-wide lines, along with all the text, are significantly sharper and have higher contrast than on the AVP. Yes, the effective resolution of the MQ3 is objectively better than that of the AVP.
- Some color moiré can be seen in the MQ3 image, a color artifact due to the camera’s Bayer filter (not seen by the eye) and the relative sharpness of the MQ3 optics. The camera can “see” the MQ3’s LCD color filters through the optics.
Experiment with Slightly Blurring the Meta Quest 3
A natural question is whether Meta should have made the MQ3’s optics slightly out of focus to hide the screen door effect. As a quick experiment, I applied a slight Gaussian blur to the MQ3’s image (middle image below). There is room to blur it while still having a higher effective resolution than the AVP. The AVP still has more pixels, and the person/elf’s image looks softer on the slightly blurred MQ3. The lines test for high-contrast resolution (and optical reflections), while the photograph shows what happens to a lower-contrast, more natural image with more pixel detail.
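For anyone who wants to try the experiment themselves, here is a minimal sketch of the blur step using Pillow; the filename and blur radius are hypothetical (I am not publishing the exact radius I used), so treat it as an assumption rather than my exact process:

```python
# Minimal sketch of the blur experiment, assuming a through-the-optics crop of the
# MQ3 test pattern saved as "mq3_crop.png" (hypothetical filename and blur radius).
from PIL import Image, ImageFilter

img = Image.open("mq3_crop.png")
# A small Gaussian blur hides the gaps between pixels (the screen door) while
# leaving the 2- and 3-pixel-wide features and text mostly intact.
blurred = img.filter(ImageFilter.GaussianBlur(radius=1.5))
blurred.save("mq3_crop_blurred.png")
```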
AVP’s Issues with High-Resolution Content
While Apple markets each display as having the same number of pixels as a 4K monitor (but differently shaped and not as wide), the resolution is reduced by multiple factors, including those listed below:
1. The oval-shaped optics cut about 25-30% of the pixels.
2. The outer part of the optics has poor resolution (about 1/3rd the pixels per degree of the center) and poor color.
3. A rectangular image must be inscribed inside the “good” part of the oval-shaped optics with a margin to support head movement. While the combined display might have a ~100-degree FOV, there is only about a 45- to 50-degree sweet spot.
4. Any pixels in the source image must be scaled and mapped into the destination pixels. For any high-resolution content, this can cause more than a 2x (linear) loss in resolution, and much worse if it aliases. For more on the scaling issues, see my articles on Apple Vision Pro (Part 5A, 5B, & 5C) and the short sketch below.
5. As part of #4 above or in a separate process, the image must be corrected for optical distortion and color as a function of eye tracking, causing further image degradation.
6. Scintillation and wiggling of high-resolution content with any head movement.
7. Blurring by the optics.
The net result of the above, as demonstrated by the photographs through the optics shown earlier, is that the AVP can’t accurately display a detailed 1920×1080 (1080p) image.
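As a rough illustration of the scaling/mapping loss in item #4 above, the sketch below rescales a 1920-pixel-wide pattern of 1-pixel black and white lines into an assumed ~900 destination pixels (an illustrative number, not a measured AVP value) and shows the full-contrast lines collapsing toward gray:

```python
# Illustrative sketch: remapping 1-pixel-wide lines into fewer destination pixels.
# The ~900-pixel destination width is an assumption for illustration only.
import numpy as np
from PIL import Image

src = np.tile([0, 255], 960).astype(np.uint8)                  # 1920 px of 1-px black/white lines
src_img = Image.fromarray(np.repeat(src[None, :], 8, axis=0))  # make the pattern 8 rows tall
dst = np.asarray(src_img.resize((900, 8), Image.BILINEAR))     # scale into destination pixels

print(int(src.min()), int(src.max()))  # 0 255 -> full contrast in the source
print(int(dst.min()), int(dst.max()))  # values pulled toward gray -> the lines are largely lost
```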
The AVP Lacks “Information Density”
Making everything bigger, including short messages and videos, can work for low-information-density applications. If anything, the AVP demonstrates that very high resolution is less important for movies than people think (watching movies is a notoriously bad way to judge resolution).
As discussed last time, the AVP makes up for its less-than-human angular resolution by making everything big to hide the issue. But making individual elements bigger means the “information density” goes down: the eyes and head have to move more to see the same amount of content, and less overall content can be seen at once. Consider a spreadsheet: fewer rows and columns will fall within the sweet spot of a person’s vision, and less of the spreadsheet will be visible without turning your head.
This blog’s article, FOV Obsession, discusses the issue of eye movement and fatigue using information from Thad Starner’s 2019 Photonics West AR/VR/MR presentation. The key point is that the eye does not normally want to move more than 10 degrees off-center for an extended period. The graph below left is for a monocular display where the text does not move with the head turning. Starner points out that a typical newspaper column is only about 6.6 degrees wide. It is also well known that when reading content more than ~30 degrees wide, even for a short period, people will turn their heads rather than move their eyes. Making text content bigger to make it legible will necessitate more eye and head movement to see/read the same amount of content, likely leading to fatigue (I would like to see a study of this issue).
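To put rough numbers on the information-density point, the sketch below estimates how many characters of readable text fit in a ~20-degree comfortable eye-movement span at different angular resolutions. The pixels-per-character and character-aspect values are illustrative assumptions, not measurements or a published standard:

```python
# Back-of-envelope sketch of information density vs. angular resolution.
# Assumptions (illustrative only): readable text needs roughly 12 pixels of
# character height, and a character is about 0.6x as wide as it is tall.

def chars_per_comfortable_span(ppd: float, span_deg: float = 20.0,
                               px_per_char_height: float = 12.0,
                               aspect: float = 0.6) -> float:
    char_height_deg = px_per_char_height / ppd
    char_width_deg = char_height_deg * aspect
    return span_deg / char_width_deg

print(round(chars_per_comfortable_span(ppd=44.4)))  # ~123 characters at the AVP's center PPD
print(round(chars_per_comfortable_span(ppd=80.0)))  # ~222 characters at "retinal" resolution
```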
ANSI-Like Contrast
A standard way to measure contrast is with a black-and-white checkerboard pattern, often called ANSI contrast. It turns out that with a large checkerboard pattern, the AVP and MQ3 have very similar contrast ratios. For the pictures below, I made the checkerboard big enough to fill about 70 degrees horizontally of each device’s FOV. The optical reflections inside the AVP’s pancake optics wash out the inherently high contrast of its OLED displays.
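For reference, an ANSI-style checkerboard contrast is simply the average luminance of the white squares divided by the average luminance of the black squares. The sketch below shows the calculation with made-up luminance values, not my measurements:

```python
# Sketch of an ANSI-style checkerboard contrast calculation: average white-square
# luminance divided by average black-square luminance. Values are placeholders.

def ansi_contrast(white_nits, black_nits):
    return (sum(white_nits) / len(white_nits)) / (sum(black_nits) / len(black_nits))

print(ansi_contrast([98, 101, 99, 102], [1.1, 0.9, 1.0, 1.2]))  # ~95, i.e., about 95:1
```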
The AVP Has Worse Color Uniformity than the MQ3
You may be able to tell that the AVP has a slightly pink color in the center white squares. As I move my head around, I see the pink region move with it. Part of the AVP’s processing is used to correct color based on eye tracking. Most of the time, the AVP does an OK job, but it can’t perfectly correct for color issues with the optics, which becomes apparent in large white areas. The issues are most apparent with head and eye movement. Sometimes, by Apple’s admission, the correction can go terribly wrong if it has problems with eye tracking.
Using the same images as above and increasing the color saturation of both by the same amount makes the color issues more apparent. The MQ3’s whites shift only slightly in color, while the AVP turns pink in the center and cyan toward the outside.
The AVP’s “aggressive” optical design has about 1.6x the magnification of the MQ3’s and, as discussed last time, uses a curved quarter waveplate (QWP). Waveplates modify polarized light, and their effect depends on both wavelength (color) and the angle of the light. Having repeatedly switched between the AVP and MQ3, I find the MQ3 has better color uniformity, particularly when taking one headset off and quickly putting the other on.
Conclusion and Comments
As a complete product (more on this in future articles), the AVP is superior to the Meta Quest Pro, Quest 3, or any other passthrough mixed reality headset. Still, the AVP’s effective resolution is less than the pixel differences would suggest due to the softer/blurrier optics.
While the AVP’s pixel resolution is better than that of the Quest Pro and Quest 3, its effective resolution after the optics is worse on high-contrast images. Thanks to its somewhat higher PPD, the AVP looks better than the MQP and MQ3 on “natural,” lower-contrast content. But the AVP’s image is much worse than a cheap monitor’s when displaying high-resolution, high-contrast content. Effectively, what the AVP supports is multiple low-angular-resolution monitors.
And before anyone makes me out to be a Meta fanboy, please read my series of articles on the Meta Quest Pro. I’m not saying the MQ3 is better than the AVP; I am saying that the MQ3 is objectively sharper and has better color uniformity. Apple and Meta don’t get different physics; they make different trade-offs, which I am pointing out.
The AVP and any VR/MR headset will fare much better with “movie” and video content with few high-contrast edges; most “natural” content is also low in detail and pixel-to-pixel contrast (which is why compression works so well with pictures and movies). I must also caution that we are still in the “wild enthusiasm stage,” where the everyday problems with the technology get overlooked.
In the best case, the center of the AVP’s display gives the user a ~20/30-vision view of its direct (non-passthrough) content, and worse when using passthrough (20/35 to 20/50). Certainly, some people will find the AVP useful, but it is still a technogeek toy. It will impress people the way 3-D movies did over a decade ago. As a reminder, 3-D TV peaked at 41.45 million units in 2012 before disappearing a few years later.
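For those wondering where an estimate like “~20/30” comes from, the sketch below shows the usual PPD-to-Snellen conversion, assuming 20/20 vision corresponds to roughly 1 arcminute per pixel, or about 60 PPD (a common approximation, not an Apple or Meta figure):

```python
# Sketch of the usual PPD-to-Snellen conversion, assuming 20/20 vision resolves
# about 1 arcminute, i.e., roughly 60 pixels per degree.

def snellen_denominator(ppd: float, reference_ppd: float = 60.0) -> float:
    return 20.0 * reference_ppd / ppd

print(round(snellen_denominator(44.4)))  # ~27 -> roughly "20/27," in the 20/30 ballpark
```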
Making a headset display is like n-dimensional chess; more than 20 major factors must be improved, and improving one typically worsens other factors. These factors include higher resolution, wider FOV, peripheral vision and safety issues, lower power, smaller, less weight, better optics, better cameras, more cameras and sensors, and so on. And people want all these improvements while drastically reducing the cost. I think too much is being made about the cost, as the AVP is about right regarding the cost for a new technology when adjusted for inflation; I’m worried about the other 20 problems that must be fixed to have a mass-market product.
Appendix – Modulation Transfer Function (MTF)
MTF is measured by displaying a series of lines of equal width and spacing and measuring the difference between the white and black levels as the size and spacing of the lines change. By convention, the spatial frequency at which the contrast drops to 50% is typically used to specify the MTF. Note that contrast is defined as (Imax-Imin)/(Imax+Imin), so to achieve 50% contrast, the black level must be 1/3rd of the white level. The figure (below) shows how the response changes with the line spacing.
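A quick numerical check of that 1/3rd claim:

```python
# With contrast defined as (Imax - Imin) / (Imax + Imin), a black level of 1/3rd
# of the white level gives exactly 50% contrast.
Imax, Imin = 1.0, 1.0 / 3.0
print((Imax - Imin) / (Imax + Imin))  # 0.5
```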
The MTF of the optics is reduced by both the sharpness of the optics and any internal reflections that, in turn, reduce contrast.
Display Daily Senior Analyst, SID Display Information Article, & Speaking at AWE
Introduction
I want my readers to know about my first article on Display Daily as a “Senior Analyst” and the article I wrote for the March/April issue of SID’s Information Display. I will be attending and speaking at the upcoming AWE 2023 conference, which runs from May 31st through June 2nd. I also recently recorded another AR Show podcast with Jason McDowell, which should be published in a few weeks.
I also wanted you to know I have a lot of travel planned for May, so there may not be much, if anything, published on this blog in May. But I have several articles in the works for this month and should have more to discuss in June.
Display Daily Article On Apple – New “Senior Analyst”
Display Daily, a division of Jon Peddie Research, has just put out an article by me discussing the long-rumored Apple mixed reality headset. In some ways, this follows up on an article I wrote for Display Daily in 2015 titled VR and AR Head Mounted Displays – Sorry, but there is no Santa Claus.
Display Daily and I are looking at joint written and video projects. I will be teaming up with Display Daily as a “Senior Analyst” on these new projects while continuing to publish this blog.
SID Information Display Magazine’s March/April 2023 Article, “The Crucial Role of Optics in AR/MR”
I was asked to contribute an article to SID’s Information Display Magazine’s printed and online March/April 2023 issue.
The article (available for free download) discusses the most common types of optics and displays used in mixed reality today and what I see as the technologies of the future.
Attending and Presenting at AWE 2023
AWE has been the best conference for seeing a wide variety of AR, VR, and MR headsets for many years. While I mostly spend my time on the show floor and in private meetings to see the “good stuff,” I have been invited to give a presentation this year. The topic of the presentation will be the pros and cons of optical versus video passthrough mixed reality. The conference runs from May 31st to June 2nd, and I will be presenting at 9:00 AM on June 2nd.
My Long History with Display Daily (20+ Years) and even Longer with Jon Peddie (40+ Years)
I’ve been interacting with Display Daily and its former parent company, Insight Media, headed by Chris Chinnock (who is still a Display Daily contributor), since I left Texas Instruments in 1998 to work on LCOS display devices. Meko, headed by Bob Raikes, took over Display Daily in 2014, and Jon Peddie Research acquired Display Daily late in 2022.
It turns out that I have known market analyst Jon Peddie since the mid-1980s, when I was the chief architect of the TMS34010, the world’s first fully programmable graphics processor, and led the definition of other graphics devices, including the first Video DRAM. Jon suggested we work together on some projects, and I have become a Senior Analyst at Display Daily.