In this series about the Apple Vision Pro, this sub-series on Monitor Replacement and Business/Text applications started with Part 5A, which discussed scaling, text grid fitting, and binocular overlap issues. Part 5B starts by documenting some of Apple’s claims that the AVP would be good for business and text applications. It then discusses the pincushion distortion common in VR optics and likely in the AVP and the radial effect of distortion on resolution in terms of pixels per degree (ppd).
The prior parts, 5A, and 5B, provide setup and background information for what started as a simple “Shootout” between a VR virtual monitor and physical monitors. As discussed in 5A, my office setup has a 34″ 22:9 3440×1440 main monitor with a 27″ 4K (3840×2160) monitor on the right side, which is a “modern” multiple monitor setup that costs ~$1,000. I will use these two monitors plus a 15.5″ 4K OLED Laptop display to compare to the Meta Quest Pro (MQP) since I don’t have an Apple AVP and then extrapolate the results to the AVP.
I will be saving my overall assessment, comments, and conclusions about VR for Office Applications for Part 5D rather than somewhat burying them at the end of this article.
A point to be made by using spreadsheets to generate the patterns is that if you have to make text bigger to be readable, you are lowering the information density and are less productive. Lowering the information density with bigger fonts is also true when reading documents, particularly when scanning web pages or documents for information.
Improving font readability is not solely about increasing their size. VR headsets will have imperfect optics that cause distortions, focus problems, chromatic aberrations, and loss of contrast. These issues make it harder to read fonts below a certain size. In Part 5A, I discussed how scaling/resampling and the inability to grid fit when simulating virtual monitors could cause fonts to appear blurry and scintillate/wiggle when locked in 3-D space, leading to reduced readability and distraction.
As discussed in Part 5A, with Meta’s Horizon Desktop, each virtual monitor is reported to Windows as 1920 by 1200 pixels. When sitting at the nominal position of working at the desktop, the center virtual monitor fills about 880 physical pixels of the MQP’s display. So roughly 1200 virtual pixels are resampled into 880 vertical pixels in the center of view, a reduction to about 73% (880/1200). As discussed in Part 5B, the scaling factor is variable due to the severe pincushion distortion of the optics and the (impossible to turn off) curved screen effect in Meta Horizons.
The picture below shows the whole FOV captured by the camera (before cropping), shot through the left eye. The camera was aligned for the best image quality in the center of the virtual monitor.
Analogous to Nyquist sampling, when you scale a pixel-rendered image, you want the display to have about 2X (linearly) the number of pixels of the source image to render it reasonably faithfully. Below left is a 1920 by 1200 pixel test pattern (a 1920×1080 pattern padded on the top and bottom), “native” to what the MQP reports to Windows. On the right is the picture cropped to that same center monitor.
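As a toy illustration of why roughly 2X oversampling matters (a naive linear resampler of my own, not Meta’s actual pipeline), the sketch below downscales a row of 1-pixel-wide white lines from 1200 source pixels to 880 display pixels. Some lines land on a destination pixel center and stay bright, while others straddle two pixels and dim, which is what makes once-uniform fine detail look uneven.

```python
def resample_row(src, dst_len):
    """Naive linear interpolation from len(src) samples down to dst_len."""
    scale = (len(src) - 1) / (dst_len - 1)
    out = []
    for i in range(dst_len):
        x = i * scale            # position of destination sample i in the source
        lo = int(x)
        hi = min(lo + 1, len(src) - 1)
        frac = x - lo
        out.append(src[lo] * (1 - frac) + src[hi] * frac)
    return out

# A white 1-pixel-wide line every 8th source pixel, black elsewhere.
src = [255 if i % 8 == 0 else 0 for i in range(1200)]
dst = resample_row(src, 880)

# After resampling, the once-identical lines vary in brightness.
line_values = [v for v in dst if v > 0]
print(max(line_values), min(line_values))
```

With a 2X (or higher) pixel budget, each source line would cover enough destination pixels that this brightness variation would be far less visible.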
The picture was taken at 405mp, then scaled down by 3X linearly and cropped. When taking high-resolution display pictures, some amount of moiré in color and intensity is inevitable. The moiré is also affected by scaling and JPEG compression.
Below is a center crop from the original test pattern that has been 2x pixel-replicated to show the detail in the pattern.
Below is a crop from the full-resolution image with reduced exposure to show sub-pixel (color element) detail. Notice how the 1-pixel wide lines are completely blurred, and the text is just becoming fully formed at about Arial 11 point (close to, but not the same scale as used in the MS Excel Calibri 11pt tests to follow). Click on the image to see the full resolution that the camera captured (3275 x 3971 pixels).
The scaling process might lose a little detail for things like pictures and videos of the real world (such as the picture of the elf in the test pattern), but it will be almost impossible for a human to notice most of the time. Pictures of the real world don’t have the level of pixel-to-pixel contrast and fine detail caused by small text and other computer-generated objects.
For the desktop “shootout,” I picked the 34” 22:9 and 27” 4K monitors I regularly use (side by side as shown in Part 5A), plus a Dell 15.5” 4K laptop display. An Excel spreadsheet is used with the various displays to demonstrate the amount of content that can be seen at one time on a screen. The spreadsheet allows for flexibly changing how the screen is scaled for various resolutions and text sizes, and the number of visible cells measures the information density. For repeatability, a screen capture of each spreadsheet was taken and then played back in full-screen mode (Appendix 1 includes the source test patterns).
The pictures below show the relative FOVs of the MQP and various physical monitors taken with the same camera and lens. The camera was approximately 0.5 meters from the center of the physical monitors, and the headset was at the initial position at the MQP’s Horizon Desktop. All the pictures were cropped to the size of a single physical or virtual monitor.
The following is the basic data:
The pictures below show the MQP with MS Windows display text scaling set to 100% (below left) and 175% (below middle). The 175% scaling would result in fonts with about the same number of pixels per font as the Apple Vision Pro (but with a larger angular resolution). Also included below (right) is the 15.5″ 4K display with 250% scaling (as recommended by Windows).
The camera was aimed and focused at the center of the MQP, the best case for it, as the optical quality falls off radially (discussed in Part 5B). The text sharpness is the same for the physical monitors from center to outside, but they have some brightness variation due to their edge illumination.
Each picture above was initially taken at 24,576 x 16,384 (405mp) by “pixel shifting” the 45MP R5 camera sensor to support capturing the whole FOV while capturing better than pixel-level detail from the various displays. In all the pictures above, including the composite image with multiple monitors, each image was reduced linearly by 3X.
The crops below show the full resolution (3x linearly the images above) of the center of the various monitors. As the camera, lens, and scaling are identical, the relative sizes are what you would see looking through the headset for the MQP sitting at the desktop and the physical monitors at about 0.5 meters. I have also included a 2X magnification of the MQP’s images.
With Windows 100% text scaling, the 11pt font on the MQP is about the same size as it is on the 34” 22:9 monitor at 100%, the 27” 4K monitor at 150% scaling, and the 15.5” 4K monitor at 250% scaling. But while the fonts are readable on the physical monitor, they are a blurry mess on the MQP at 100%. The MQP at 150% and 175% is “readable” but certainly does not look as sharp as the physical monitors.
Apple’s AVP has about 175% of the linear pixel density of the MQP. Thus, the 175% case gives a reasonable idea of how text should look on the AVP. For comparison below, the MQP’s 175% case has been scaled to match the size of the 34” 22:9 and 27” 4K monitors at 100%. While the text is “readable” and about the same size, it is much softer/blurrier than on the physical monitors. Some of this softness is due to optics, but a large part is due to scaling. While the AVP may have better optics and a better text rendering pipeline, it still doesn’t have the resolution to compete on content density and readability with a relatively inexpensive physical monitor.
Thomas Kumlehn had an interesting comment on Part 5B (with my bold highlighting) that I would like to address:
After the VisionPro keynote in a Developer talk at WWDC, Apple mentioned that they rewrote the entire render stack, including the way text is rendered. Please do not extrapolate from the text rendering of the MQP, as Meta has the tech to do foveated rendering but decided to not ship it because it reduced FPS.
Based on my understanding, the AVP will “render from scratch” instead of rendering an intermediate image that is then rescaled as is done with the MQP discussed in Part 5A. While rendering from scratch has a theoretical advantage regarding text image quality, it may not make a big difference in practice. With an ~40 pixels per degree (ppd) display, the strokes and dots of what should be readable small text will be on the order of 1 pixel wide. The AVP will still have to deal with approximately pixel-width objects straddling four or more pixels, as discussed in Part 5A: Simplified Scaling Example – Rendering a Pixel Size Dot.
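To make the pixel-size-dot point concrete, here is a tiny sketch of my own (not Apple’s or Meta’s actual pipeline) showing how a 1-pixel dot rendered at a fractional position gets bilinear-weighted across up to four physical pixels. Even a “render from scratch” pipeline cannot keep a pixel-scale stroke crisp once it lands off the pixel grid:

```python
import math

def splat_dot(x, y):
    """Bilinear weights of a unit-intensity dot centered at (x, y),
    spread over the 2x2 block of pixels it touches."""
    fx, fy = x - math.floor(x), y - math.floor(y)
    return [[(1 - fx) * (1 - fy), fx * (1 - fy)],
            [(1 - fx) * fy,       fx * fy]]

# On-grid: all the energy stays in one pixel (sharp).
print(splat_dot(10.0, 20.0))  # [[1.0, 0.0], [0.0, 0.0]]

# Off-grid by half a pixel each way: spread evenly over 4 pixels (blurry).
print(splat_dot(10.5, 20.5))  # [[0.25, 0.25], [0.25, 0.25]]
```

When the virtual monitor is locked in 3-D space, the fractional position changes with every head movement, so the same stroke flickers between the sharp and blurry cases, producing the scintillation discussed in Part 5A.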
I wanted to evaluate the MQP pancake optics more than I did in Part 5B, but Meta’s Horizon Desktop interface was very limiting. So I decided to try out the immersed virtual desktop software, which has much more flexibility in the resolution, size, and placement of the monitors, including the ability to select flat or curved monitors. Importantly for my testing, I could create a large, flat virtual 4K monitor that could fill the entire FOV with a single test pattern (the pattern is included in Appendix 1).
Unfortunately, while the immersed software had the basic features I wanted, I found it difficult to precisely control the size and positioning of the virtual monitor (more on this later). Due to these difficulties, I just tried to fill the display with the test pattern with the monitor only roughly perpendicular to the headset/camera. It was a painfully time-consuming process, and I never could get the monitor to where it seemed perfectly perpendicular.
Below is a picture of the whole (camera) FOV taken at 405mp and then scaled down to 45mp. The image is a bit underexposed to show the sub-pixel (color) detail when viewed at full resolution. In taking the picture, I determined that the focus of the MQP’s pancake optics appears to be “dished,” with the focus in the center slightly different than on the outsides. The picture was taken focusing between the center and outside focus and using f11 to increase the photograph’s depth of focus. For a person using the headset, this dishing of the focus is likely not a problem, as their eye will refocus based on their center of vision.
As discussed in Part 5B, the MQP’s pancake optics have severe pincushion distortion, requiring significant digital pre-correction to make the net result flat/rectilinear. Most notably, the outside areas of the display have about 1/3rd the linear pixel per degree of the center.
Next are shown 9 crops from the full-resolution (click to see) picture at the center, the four corners, top, bottom, left, and right of the camera’s FOV.
The main things I learned from this exercise are the apparent dish in the focus of the optics and the falloff in brightness. I had already determined the change in resolution in the studies shown in Part 5B.
While immersed had the features I wanted, it was difficult to control the setup of the monitors. The software feels very “beta,” and the interface I got differed from most of the help documentation and videos, suggesting it is a work in progress. In particular, I couldn’t figure out how to pin the screen, as the control for pinning shown in the help guides/videos didn’t seem to exist in my version. So I had to start from scratch in each session and often within a session.
Trying to orient and resize the screen with controllers or hand gestures was needlessly difficult. I would highly suggest immersed look at how 3-D CAD software controls 3-D models. For example, it would be great to have a single (virtual) button that would position the center monitor directly in front of and perpendicular to the user. It would also be a good idea to allow separate controls for tilt, virtual distance, and zoom/resize while keeping the monitor centered.
The software seemed to be “aware” of things in the room, which only served to fight what I wanted to do. I was left contorting my wrist to try to get the monitor roughly perpendicular and then playing with the corners to try to both resize and center the monitor. The interface also appears to conflate “resizing” with moving the monitor closer. While moving the virtual monitor closer and resizing both affect the size of everything, the effect will be different when the head moves. I would have a home (perpendicular and centered) “button,” and then left-right, up-down, tilt, distance, and size controls.
To be fair, I only wanted to set up the screen for a few pictures, and I may have overlooked something. Still, I found the user interface could be vastly better for setting up the monitors, and the controller or gesture control of monitor size and positioning was a big fail in my use.
BTW, I don’t want to just pick on immersed for this “all-in-one” control problem. I have found it a pain on every VR and AR/MR headset I have tried that supports virtual monitors to give the user good, simple, intuitive controls for placing the monitors in 3D space. Meta Horizons Desktop goes to the extreme of giving no control and overly curved screens.
This series-within-a-series on the VR and the AVP use as an “office monitor replacement” has become rather long with many pictures and examples. I plan to wrap up this series within the series on the AVP with a separate article on issues to consider and my conclusions.
Below is a gallery of PNG file test patterns used in this article. Click on each thumbnail to see the full-resolution test pattern.
As discussed in Appendix 3: Confabulating typeface “points” (pt) with Pixels – A Brief History, a font “point” is defined as 1/72nd of an inch (some use 1/72.272 or thereabouts – it is a complicated history). Microsoft treats 96 dots per inch (dpi) as 100% scaling. But it is not that simple.
I wanted to share measurements regarding the Calibri 11pt font size. After measuring it on my monitor with a resolution of 110 pixels per inch (PPI), I found that it translates to approximately 8.44pt (8.44/72 inches). However, when factoring in the monitor PPI of 110 and the Windows DPI of 96, the font size increases to ~9.67pt. Alternatively, when using a monitor PPI of 72, the font size increases to ~12.89pt. Interestingly, if printed assuming a resolution of 96ppi, the font reaches the standard 11pt size. It seems Windows applies some additional scaling on the screen. Nevertheless, I regularly use the 11pt 100% font size on my 110ppi monitor, which is the Windows default in Excel and Word, and it is also the basis for the test patterns.
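For what it’s worth, the unit conversions in that comment check out. Here is a sketch of the arithmetic (the 8.44 pt figure is the commenter’s measurement, not mine):

```python
def rescale_points(measured_pt, monitor_ppi, assumed_dpi):
    """Point size the OS 'thinks' it drew, given a physical measurement
    in points on a monitor of known PPI and an assumed OS resolution."""
    return measured_pt * monitor_ppi / assumed_dpi

measured = 8.44  # physical points measured on a 110 ppi monitor
print(round(rescale_points(measured, 110, 96), 2))  # 9.67 (Windows' 96 dpi)
print(round(rescale_points(measured, 110, 72), 2))  # 12.89 (classic 72 dpi)
```

The same ratio trick explains why an “11pt” font never measures 11 physical points unless the monitor’s PPI happens to match the DPI the OS assumes.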
As discussed in 5A’s Appendix 2: Notes on Pictures, some moiré issues will be unavoidable when taking high-resolution pictures of a display device. As noted in that Appendix, all pictures in Lens Shootout were taken with the same camera and lens, and the original images were captured at 405 megapixels (Canon R5 “IBIS sensor shift” mode) and then scaled down by 3X. All test patterns used in this article are included in the Appendix below.
I want to address feedback in the comments and on LinkedIn from Part 5A about whether Apple claimed the Apple Vision Pro (AVP) was supposed to be a monitor replacement for office/text applications. Another theory/comment from more than one person is that Apple is hiding the good “spatial computing” concepts so they will have a jump on their competitors. I don’t know whether Apple might be hiding “the good stuff,” but it would seem better for Apple to establish the credibility of the concept. Apple is, after all, a dominant high-tech company and could stomp any competitor.
Studying the MQP’s images in more detail, it was too simplistic to use the average pixels per degree (ppd), given by dividing the resolution by the FOV of the MQP (and likely the AVP).
As per last time, since I don’t have an AVP, I’m using the Meta Quest Pro (MQP) and extrapolating the results to the AVP’s resolution. I will show a “shootout” comparing the text quality of the MQP to existing computer monitors. I will then wrap up with miscellaneous comments and my conclusions.
I have also included some discussion of Gaze-Contingent Ocular Parallax (GCOP) from some work by the Stanford Computational Imaging Lab (SCIL) that a reader of this blog asked about. Their videos and papers suggest that some amount of depth perception is conveyed to a person by the movement of each eye, in addition to vergence (binocular disparity) and accommodation (focus distance).
I’m pushing out a set of VR versus Physical Monitor “Shootout” pictures and some overall conclusions to Part 5C to discuss the above.
In Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, I tried to lay a lot of groundwork for why The Apple Vision Pro (AVP), and VR headsets in general, will not be a good replacement for a monitor. I thought it was obvious, but apparently not, based on some feedback I got.
So to be specific, quoted below directly from Apple’s WWDC 2023 presentation (YouTube transcript) are excerpts with timestamps, with my bold emphasis added and in-line comments about resolution:
1:22:33 Vision Pro is a new kind of computer that augments reality by seamlessly blending the real world with the digital world.
1:31:42 Use the virtual keyboard or Dictation to type. With Vision Pro, you have the room to do it all. Vision Pro also works seamlessly with familiar Bluetooth accessories, like Magic Trackpad and Magic Keyboard, which are great when you’re writing a long email or working on a spreadsheet in Numbers.
“Seamless” makes many lists of the most overused high-tech marketing words. Marketeers seem to love it because it is imprecise, suggests everything works well, and is unfalsifiable (how do you measure “seamless?”). “Seamlessly” was used eight times in the WWDC 2023 presentation to describe the AVP, and twice by Meta to describe the Meta Quest Pro (MQP) at Meta Connect 2022. As covered in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, Meta also used “seamless” to describe the MQP’s MR passthrough:
Apple claims the AVP is good for text-intensive work: “writing a long email or working on a spreadsheet in Numbers.”
1:32:10 Place your Mac screen wherever you want and expand it–giving you an enormous, private, and portable 4K display. Vision Pro is engineered to let you use your Mac seamlessly within your ideal workspace. So you can dial in the White Sands Environment, and use other apps in Vision Pro side by side with your Mac. This powerful Environment and capabilities makes Apple Vision Pro perfect for the office, or for when you’re working remote.
Besides the fact that it is not 4K wide, the AVP is stretching those pixels over about 80 degrees, so there are only about 40 pixels per degree (ppd), much lower than is typical with a TV or movie theater. There are also the issues discussed in Part 5A: if you are going to make the display stationary in 3-D, the virtual monitor must be inscribed in the viewable area of the physical display with some margin for head movement, and the content must be resampled, causing a loss of resolution. Movies are typically in a wide format, whereas the AVP’s FOV is closer to square. As discussed in Apple Vision Pro (Part 3) – Why It May Be Lousy for Watching Movies On a Plane, the AVP’s horizontal FOV is ~80°, whereas movies are designed for about 45 degrees.
Here, Apple claims that the Apple Vision Pro is “perfect for the office, or for when you’re working remote.”
1:48:06 And of course, technological breakthroughs in displays. Your eyes see the world with incredible resolution and color fidelity. To give your eyes what they need, we had to invent a display system with a huge number of pixels, but in a small form factor. A display where the pixels would disappear, creating a smooth, continuous image.
The AVP’s expected average of 40ppd is well below the angular resolution “where the pixels would disappear”; it is below Apple’s “retinal resolution.” If the AVP has a radial distortion profile similar to the MQP (discussed in the next section), then the center of the image will have about 60ppd, or almost “retinal.” But most of the image will have jaggies that a typical eye can see, particularly when they move/ripple, causing scintillation (discussed in Part 5A).
1:48:56 We designed a custom three-element lens with incredible sharpness and clarity. The result is a display that’s everywhere you look, delivering jaw-dropping experiences that are simply not possible with any other device. It enables video to be rendered at true 4K resolution, with wide color and high dynamic range, all at massive scale. And fine text looks super sharp from any angle. This is critical for browsing the web, reading messages, and writing emails.
As stated above, the video will not be a “true 4K resolution.” Here is the claim, “fine text looks super sharp from any angle,” which is impossible with resampled text onto 40ppd displays.
1:56:08 Microsoft apps like Excel, Word, and Teams make full use of the expansive canvas and sharp text rendering of Vision Pro.
Here again, is the claim that there will be “sharp text” in text-intensive applications like Excel and Word.
I’m not sure how much clearer it can be that Apple was claiming that the AVP would be a reasonable monitor replacement, used even when a laptop display is present. Also, they were very clear that the AVP would be good for heavily text-based applications.
While I was aware, as discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, that the MQP, like almost all VR optics, had significant pincushion distortion, I didn’t quantify the amount of distortion and its effect on the angular resolution, aka ppd. Below is the video capture from the MQP developer’s app on the left, and the resultant image seen through the optics (middle).
Particularly note above how small the white wall to the left of the left bookcase is relative to its size after the optics; it looks more than 3X wider.
For a good (but old) video explaining how VR headsets map source pixels into the optics (among other concepts), I recommend watching How Barrel Distortion Works on the Oculus Rift. The image on the right shows how equal size rings in the display are mapped into ever-increasing width rings after the optics with a severe pincushion distortion.
I started with a 405mp camera picture through the MQP optics (right – scaled down 3x linearly), where I could see most of the FOV and zoom in to see individual pixels. I then picked a series of regions in the image to evaluate. Since the pixels in the display device are of uniform size, any change in their size/spacing must be due to the optics.
The RF16f2.8 camera lens has a known optical barrel distortion that was digitally corrected by the camera, so the camera pixels are roughly linear. The camera and lens combination has a horizontal FOV of 98 degrees and 24,576 pixels or ~250.8ppd.
The MQP display processing pre-compensates for the optics plus adds a cylindrical curvature effect to the virtual monitors. These corrections change the shape of objects in the image but not the physical pixels.
The cropped sections below demonstrate the process. For each region, 8 by 8 pixels were marked with a grid. The horizontal and vertical extents of the 8 pixels were counted in terms of camera pixels. The MQP display is rotated by about 20 degrees to clear the nose of the user, so the rectangular grids are rotated. In addition to the optical distortion in size, chroma aberrations (color separation) and focus worsen with increasing radii.
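The arithmetic behind this measurement can be sketched as follows. The camera-pixel counts in the example are hypothetical placeholders, not my measured values:

```python
# Camera: 24,576 horizontal pixels across a ~98-degree FOV (~250.8 ppd).
CAMERA_PPD = 24_576 / 98

def display_ppd(camera_px_span, display_px=8):
    """Local display ppd, given how many camera pixels a span of
    display_px display pixels covers in the photograph."""
    degrees_spanned = camera_px_span / CAMERA_PPD
    return display_px / degrees_spanned

# Hypothetical counts: a tight span near the optical center, and a wide
# span near the edge where pincushion distortion stretches the pixels.
print(round(display_ppd(70), 1))   # ~28.7 ppd
print(round(display_ppd(200), 1))  # ~10.0 ppd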
The image below shows the ppd at a few selected radii. Unlike the Oculus Rift video that showed equal rings, the stepping between these rings below is unequal. The radii are given in terms of angular distance from the optical center.
The plots below show the ppd versus radius for the MQP (left); interestingly, the relationship turns out to be close to linear. The right-hand plot assumes the AVP has a similar distortion profile and FOV, but with three times the pixels, as reported. It should be noted that ppd is not the only factor affecting resolution; other factors include focus, chroma aberrations, and contrast, which worsen with increasing radii.
The display on the MQP is 1920×1800 pixels, and the FOV is about 90° per eye diagonally across a roughly circular image, which works out to about 22 to 22.5 ppd on average. The optical center has about 1/3rd higher ppd due to the pincushion distortion optics. For the MQP Horizon Desktop application shown, the center monitor is mostly within the 25° circle, where the ppd is at or above average.
While a bit orthogonal to the discussion of ppd and resolution, Gaze-Contingent Ocular Parallax (GCOP) is another issue that may cause problems. A reader and VR user, who claims to have noticed GCOP, brought to my attention the Stanford Computational Imaging Lab’s (SCIL) work on GCOP. SCIL has put out multiple videos and articles, including Eye Tracking Revisited by Gordon Wetzstein and Gaze-Contingent Ocular Parallax Rendering for Virtual Reality (associated paper link). I’m a big fan of Wetzstein’s general presentations; per his usual standard, his video explains the concept and related issues well.
The basic concept is that because the center of projection (where the image lands on the retina) and the center of rotation of the eye are different, the human visual system can detect some amount of 3-D depth in each eye. A parallax and occlusion difference occurs when the eye moves (stills from some video sequences below). Since the eyes constantly move and fixate (saccades), depth can be detected.
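A back-of-the-envelope model of the effect (my own sketch; the millimeter offset below is an assumed round number, not a value from the SCIL paper): rotating the eye translates the center of projection by a small amount, and that translation produces depth-dependent parallax between near and far points.

```python
import math

# Assumed distance between the eye's center of projection and its
# center of rotation (~6 mm) -- an illustrative value, not measured.
OFFSET_M = 0.006

def gcop_parallax_deg(rotation_deg, near_m, far_m):
    """Angular parallax between a near and a far point after an eye
    rotation, caused by the translation of the projection center."""
    shift_m = OFFSET_M * math.radians(rotation_deg)  # projection-center motion
    return math.degrees(shift_m * (1 / near_m - 1 / far_m))

# A 10-degree eye rotation with objects at 0.5 m and 100 m:
print(round(gcop_parallax_deg(10, 0.5, 100.0), 3))  # ~0.119 degrees
```

Even this crude model shows the parallax is a small fraction of a degree, which is consistent with GCOP being a subtle cue rather than a dominant one.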
GCOP may not be as big a factor as vergence and accommodation. I put it in the category of one of the many things that can cause people to perceive that they are not looking at the real world and may cause problems.
The marketing spin (I think I have heard this before) on VR optics is that they have “fixed foveated optics” in that there is a higher resolution in the center of the display. There is some truth that severe pincushion optical distortion improves the pixel density in the center, but it makes a mess of the rest of the display.
While MQP’s optics have a bigger sweet spot, and the optical quality falls off less rapidly than the Quest 2’s Fresnel optics, they are still very poor by camera standards (optical diagram for the 9-element RF16f2.8 lens, a very simple camera lens, used to take the main picture on the right). VR optics must compromise due to space, cost, and, perhaps most importantly, supporting a very wide FOV.
With a monitor, there is only air between the eye and the display device with no loss of image quality, and there is no need to resample the monitor’s image when the user’s head moves like there is with a VR virtual monitor.
As the MQP’s pancake optics and most, if not all, other VR optics have major pincushion distortion, I fully expect the AVP’s will also. Regardless of the ppd, the MQP virtual monitor’s far left and right sides become difficult to read due to other optical problems. The image quality can be no better than its weakest link. If the AVP has 3X the pixels and roughly 1.75x the linear ppd, the optics must be much better than the MQP’s to deliver the same small readable text that a physical monitor can deliver.
As I wrote in Apple Vision Pro (Part 1) regarding the media coverage of the Apple Vision Pro, “Unfortunately, I saw very little technical analysis and very few with deep knowledge of the issues of virtual and augmented reality. At least they didn’t mention what seemed to me to be obvious issues and questions.”
I have been working for the last month on an article to quantify why it is ridiculous to think that a VR headset, even one from Apple, will be a replacement for a physical monitor. In writing the article, I felt the need to include a lot of background material and other information as part of the explanation. As the article was getting long, I decided to break it into two parts, this being the first part.
The issues will be demonstrated using the Meta Quest Pro (MQP) because that is the closest headset available, and it also claims to be for monitor replacement and uses similar pancake optics. I will then translate these results to the higher, but still insufficient, resolution of the Apple Vision Pro (AVP). The AVP will have to address all the same issues as the MQP.
Office applications, including word processing, spreadsheets, presentations, and internet browsing, mean dealing with text. As this article will discuss, text has always been treated as a special case with some “cheating” (“hints” for grid fitting) to improve sharpness and readability. This article will also deal with resolution issues with trying to fit a virtual monitor in a 3-D space.
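The grid-fitting "cheating" can be illustrated with a toy sketch (my own simplification, not a real rasterizer like TrueType hinting or ClearType): snapping a glyph stem's edges to whole pixels turns what would be a gray smear across two pixel columns into one crisp, full-intensity column.

```python
def grid_fit_stem(left_px, width_px):
    """Snap a stem's left edge and width to whole pixels,
    keeping a minimum width of one pixel."""
    return round(left_px), max(1, round(width_px))

# Without fitting, a 1.2-px-wide stem starting at x=10.4 would be split
# roughly 60/40 across two pixel columns and render gray and fuzzy.
print(grid_fit_stem(10.4, 1.2))  # (10, 1): one crisp pixel column
```

The catch, central to this article, is that grid fitting only works when the text has a fixed pixel grid to snap to; a virtual monitor resampled in 3-D space on every head movement has no such stable grid.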
For this set of articles, I will be suspending my disbelief in the many other human factor problems caused by trying to simulate a fixed monitor in VR, to concentrate on the readability of text.
Working on this article reminded me of lessons learned in the mid-1980s when I was the technical leader of the TMS34010, the first fully programmable graphics processor. The TMS340 development started in 1982 before an Apple Macintosh (1984) or Lisa (1983) existed (and they were only 1-bit per pixel). But like those products, my work on the 34010 was influenced by Xerox PARC. At that time, only very expensive CAD and CAM systems had “bitmapped graphics,” and all PC/Home Computer text was single-size and monospaced. They were very low resolution if they had color graphics (~320×200 pixels). IBM introduced VGA (640×480) in 1987 and XGA (1024×768) in 1990, their first square-pixel color displays for the IBM PC.
The original XGA monitor, considered “high resolution” at the time, had a 16” diagonal and 82ppi, which translates to 36 to 45 pixels per degree (ppd) from 0.5 meters to 0.8 meters (typical monitor viewing distances), respectively. Factoring in the estimated FOV and resolutions, the Apple Vision Pro is between 35 and 40 ppd, or about the same as a 1987 monitor.
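The ppi-to-ppd conversion above can be sketched as follows (my derivation: one degree subtends roughly distance times tan(1°) at the eye):

```python
import math

def monitor_ppd(ppi, distance_m):
    """Pixels per degree of a flat monitor at a given viewing distance,
    using the span one degree subtends at the screen."""
    inches_per_degree = (distance_m / 0.0254) * math.tan(math.radians(1))
    return ppi * inches_per_degree

# The 82 ppi XGA-class monitor at the far end of typical viewing distance:
print(round(monitor_ppd(82, 0.8)))  # ~45 ppd
```

Note this small-angle approximation is only accurate near the center of the screen; toward the edges of a flat monitor, the effective ppd rises slightly as the viewing angle becomes oblique.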
So it is time to dust off the DeLorean and go Back to the Future of the mid-1980s and the technical issues with low ppd displays. Only it is worse this time because, in the 1980s, we didn’t have to resample/rescale everything in 3-D space when the user’s head moves to give the illusion that the monitor isn’t moving.
For more about my history in 1980s computer graphics and GPUs, see Appendix 1: My 1980s History with Bitmapped Fonts and Multiple Monitors.
With their marketing and images (below), Apple and Meta suggest that their headsets will work as a monitor replacement. Yes, they will “work” as a monitor if you are desperate and have nothing else, but having multiple terrible monitors is not a solution many people will want. These marketing concepts fail to convey that each virtual monitor will have low effective resolution, forcing the text to be blown up to be readable and thus reducing the content per monitor. They also fail to convey that the text looks grainy and shimmers (more on this in a bit).
Meta Quest Pro (left) and Apple Vision Pro (right) have similar multiple monitor concepts.
Below is a through-the-lens picture of MQP’s Horizons Virtual Desktop. It was taken through the left eye’s optics with the camera centered for best image quality, and it shows more of the left side of the binocular FOV. Almost all the horizontal FOV for the left eye is shown in the picture, but the camera slightly cuts off the top and bottom.
Below, for comparison, is my desktop setup with a 34” 22:9 3440×1440 monitor on the left and a 27” 4K monitor on the right. The combined cost of the two monitors is less than $1,000 today. The 22:9 monitor is set to 100% scale (in Windows display settings) and has 11pt fonts in the spreadsheet. The righthand monitor is set to 150% scaling with 11pt fonts, netting fonts that are physically the same size.
Sitting 0.5 to 0.8 meters away (typical desktop monitor distance), I would judge the 11pt font on either of the physical monitors as much more easily readable than the 11pt font on the Meta Quest Pro with the 150% scaling, even though the MQP’s “11pt” is angularly about 1.5x bigger (as measured via the camera). The MQP’s text is fuzzier, grainier, and scintillates/shimmers. I could fit over six times the legible text on the 34” 22:9 monitor, and over four times on the 27” 4K, as on the MQP. With higher angular resolution, the AVP will be better than the MQP but still well short of the physical monitors in the amount of legible text.
In Windows, 100% means a theoretical 96 dots per inch. Windows factors in the information reported to it by the monitor (in this case, by the MQP’s software) to give a “Scale and Layout” recommendation (right). The resolution reported to Windows by the MQP’s Horizons virtual monitor is 1920×1200, and the recommended scaling was 150%. This setting is what I used for most pictures other than the ones called out as being at 100% or 175%.
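A quick sketch of the arithmetic behind Windows scaling (using Windows' 96-DPI “100%” convention): the scaling percentage simply multiplies the effective DPI, which in turn sets how many pixels a nominal point size gets.

```python
def font_height_px(points: float, scale_percent: float, base_dpi: float = 96.0) -> float:
    """Pixel height of a nominal font size: 1 pt = 1/72 inch,
    rendered at Windows' virtual 96 DPI times the scaling factor."""
    effective_dpi = base_dpi * scale_percent / 100.0
    return points / 72.0 * effective_dpi

print(font_height_px(11, 100))  # ~14.7 px at 100%
print(font_height_px(11, 150))  # 22.0 px at the recommended 150% scaling
```

So the “11pt” font in the MQP screenshots is actually being rendered at 22 pixels tall before all the 3-D resampling.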
For more on the subject of how font “points” are defined, see Appendix 3: Confabulating typeface “points” (pt) with Pixels – A Brief History.
I’m not going to go into everything wrong with VR optics, as this article deals with being able to read text in office applications. VR optics have a lot of constraints in terms of cost, space, weight, and wide FOV. While pancake optics are a major improvement over the more common Fresnel lenses, to date, they are still optically poor (we will have to see about the AVP).
While not bad in the center of the FOV, they typically have severe pincushion distortion and chromatic (color) aberrations. Pancake optics are also prone to collecting and scattering light, causing objects to glow on dark backgrounds, contrast reduction, and ghosts (out-of-focus reflections). I discussed these issues with pancake optics in Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC. With computer monitors, there are no optics to cause these problems.
As explained in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, the Meta Quest Pro rotates the two displays for the eyes ~20° to clear the nose. The optics also have very large pincushion distortion. The display processor on the MQP pre-corrects digitally for the display optics’ severe pincushion distortion. This correction comes at some loss of fidelity in the resampling process.
The top right image shows the video feed to the displays. The distortion and rotation have been digitally corrected in the lower right image, but other optical problems are not shown (see the through-the-lens pictures in this article).
There is also an optical “cropping” of the left and right eye displays, indicated by the Cyan and Red dashed lines, respectively. The optical cropping shown is based on my observations and photographs.
The pre-distortion correction is certainly going to hurt the image quality. It is likely that the AVP, using similar pancake optics, will have a similar need for pre-correction. Even though the MQP displays are rotated (no word on the AVP), there are so many other transforms/rescalings, including the transforms in 3-D space required to make the monitor(s) appear stationary, that if the rotation is combined with them (rather than done as a separate transform), the rotation’s effect on resolution may be negligible. The optical distortion and the loss of text resolution when transformed in 3-D space are more problematic.
One of the ways to improve the overall FOV with a binocular system is to have the FOVs of the left and right eyes only partially overlap (see figure below). The paper Perceptual Guidelines for Optimizing Field of View in Stereoscopic Augmented Reality Displays and the article Understanding Binocular Overlap and Why It’s Important for VR Headsets discuss the issues with binocular overlap (also known as “Stereo Overlap”). Most optical AR/MR systems have a full or nearly full overlap, whereas VR headsets often have a significant amount of partial overlap.
Partial overlap increases the total FOV when combining both eyes. The problem with partial overlap occurs at the boundary where one eye’s FOV ends in the middle of the other eye’s FOV. One eye sees the image fade out to black, whereas the other sees the image. This is a form of binocular rivalry, and it is left to the visual cortex to sort out what is seen. The visual cortex will mostly sort it out in a desirable way, but there will be artifacts. Most often, the visual cortex will pick the eye that appears brighter (i.e., the cortex picks one and does not average), but there can be problems in the transition area. Additionally, where one is concentrating can affect what is seen/perceived.
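The FOV arithmetic of partial overlap is simple. The numbers below are purely illustrative, not measured MQP or AVP specifications:

```python
def total_horizontal_fov(per_eye_deg: float, overlap_deg: float) -> float:
    """Combined two-eye horizontal FOV when the two monocular FOVs share
    overlap_deg degrees of binocular (stereo) overlap."""
    return 2 * per_eye_deg - overlap_deg

# Illustrative numbers only: 90-degree-per-eye optics.
print(total_horizontal_fov(90, 90))  # 90: full overlap, no FOV gain
print(total_horizontal_fov(90, 70))  # 110: partial overlap widens the total FOV
```

This is why headset makers are tempted by partial overlap: every degree of overlap given up is a “free” degree of total FOV, paid for with the rivalry artifacts described above.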
In the case of the MQP, the region of binocular overlap is slightly less than the width of the center monitor in Meta’s Horizons Desktop when viewed from the starting position. Below left shows the view through the left eye when centering the monitor in the binocular FOV.
When concentrating on a cell in the center, I didn’t notice a problem, but when I took in the whole image, I could see these rings, particularly in the lighter parts of the image.
The Meta Quest 2 appears to have substantially more overlap. On the left is a view through the left eye with the camera positioned similarly to the MQP (above left). Note how the left eye’s FOV overlaps the whole central monitor. I didn’t notice the transition “rings” with the Meta Quest 2 as I did with the MQP.
Binocular overlap is not one of those things VR companies like to specify; they would rather talk about the bigger FOV.
In the case of the AVP, it will be interesting to see the amount of binocular overlap in their optics and if it affects the view of the virtual monitors. One would like the overlap to be more than the width of a “typical” virtual monitor, but what does “typical” mean if the monitors can be of arbitrary size and positioned anywhere in 3-D space, as suggested in the AVP’s marketing material?
Typical readable text has many high-resolution, high-contrast features that will be on the order of one pixel wide, such as the stroke and dot of the letter “i.” The difficulty of drawing a single-pixel-size dot in 3-D space illustrates some of the problems.
Consider drawing a small circular dot that, after all the 3-D transforms, is about the size of one pixel. In the figure below, the pixel boundaries are shown with blue lines. The four columns in the figure show a few of the infinite number of possible alignments between a rendered dot and the pixel grid.
The first row shows the four dots relative to the grid. In the second row, the nearest pixel is turned on based on the centroid. In row three, a simple average is used, where the average of the 4 affected pixels should equal the brightness of one pixel. The fourth row shows a low-pass filter of the virtual dots. The fifth row renders the pixels based on the average value of the low-pass-filtered version of the dots.
The centroid method is the sharpest and keeps the size of the dot the same, but the location will tend to jump around with the slightest head movement. If many dots formed an object, the shape would appear to wriggle. With the simple average, the “center of mass” is more accurate than the centroid method, but the dot changes shape dramatically based on alignment/movement. The average of the low-pass filter method is better in terms of center of mass, and the shape changes less based on alignment, but now a single pixel size circle is blurred out over 9 pixels.
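To make the tradeoff concrete, below is a toy simulation (my own sketch, not how any headset actually renders) of a 1-pixel-diameter dot rasterized by the centroid method and by area averaging, at two different alignments to the grid:

```python
def render_dot(cx: float, cy: float, method: str, grid: int = 4, ss: int = 32):
    """Render a circular dot of 1-pixel diameter centered at (cx, cy)
    onto a grid x grid pixel raster.

    'centroid': light only the pixel containing the dot's center.
    'average' : each pixel's value is the fraction of its area the dot
                covers, estimated by ss x ss supersampling.
    """
    img = [[0.0] * grid for _ in range(grid)]
    if method == "centroid":
        img[int(cy)][int(cx)] = 1.0
        return img
    r = 0.5  # 1-pixel-diameter dot
    for py in range(grid):
        for px in range(grid):
            hits = 0
            for sy in range(ss):
                for sx in range(ss):
                    x = px + (sx + 0.5) / ss
                    y = py + (sy + 0.5) / ss
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                        hits += 1
            img[py][px] = hits / (ss * ss)
    return img

# A dot centered on a pixel vs. the same dot landing on a 4-pixel corner:
centered = render_dot(1.5, 1.5, "average")
corner = render_dot(2.0, 2.0, "average")
print(max(max(row) for row in centered))  # one fairly bright pixel
print(max(max(row) for row in corner))    # four dim pixels at ~1/4 the brightness
```

The total light is conserved in both cases, but the dot's rendered shape changes completely with a sub-pixel shift in position, which is exactly the wriggling that head motion produces.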
There are many variations on resampling/scaling, but they all make tradeoffs. A first-order tradeoff is wiggling (changing in shape and location) with movement versus sharpness. A big problem with text rendered on low-ppd displays, including the Apple Vision Pro, is that many features, from periods to the dots of letters to the stroke widths of small fonts, will be close to 1 pixel.
Since the beginning, personal computers have dealt with low pixels-per-inch monitors, translating into low pixels per degree based on typical viewing distances. Text is full of fine detail and often has perfectly horizontal and vertical strokes that, even with today’s higher PPI monitors, cause pixel alignment issues. Text is so important and so common that it gets special treatment. Everyone “cheats” to make text look better.
The fonts need to be recognizable without making them so big that the eye has to move a lot to read words, which makes content less dense with less information on a single screen. Big fonts produce less content per display and more eye movement, making the eye muscles sore.
In the early to mid-1980s, PCs moved from rough-looking fixed-spaced text to proportionally spaced text with carefully hand-crafted fonts, and only a few font sizes were available. Font edges were also smoothed (antialiased) to make them look better. Today, most fonts are rendered from a model with “hints” that help the fonts look better on a pixel grid. TrueType, originally developed by Apple as a workaround to paying royalties to Adobe, is used by both Apple and MS Windows and includes “hints” in the font definitions for grid fitting (see: Windows hinting and Apple hinting).
Simplistically, grid fitting tries to make horizontal and vertical strokes of a font land on the pixel grid by slightly modifying the shape and location (vertical and horizontal spacing) of the font. Doing so requires less smoothing/antialiasing without making the font look jagged. This works because computer monitor pixels are on a rectangular grid, and in most text applications, the fonts are drawn in horizontal rows.
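A toy model of the snapping step (a deliberate simplification; real TrueType hinting runs interpreted per-glyph instructions and adjusts spacing too):

```python
def grid_fit_stem(left_edge: float, width: float):
    """Simplistic stem hinting: snap a vertical stroke's left edge to the
    pixel grid and round its width to a whole number of pixels (minimum 1),
    so the stroke needs little or no antialiasing."""
    return round(left_edge), max(1, round(width))

# Un-hinted, a 1.3-pixel-wide stem starting at x = 4.6 would smear across
# three pixel columns; hinted, it lands crisply on a single column:
print(grid_fit_stem(4.6, 1.3))  # (5, 1)
```

This only works because the stroke is vertical and the monitor's pixel grid is fixed, which is precisely the assumption that virtual monitors break.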
Almost all font rendering is grid fit, just some more than others (see, from 2007, Font rendering philosophies of Windows & Mac OS X). Apple (and Adobe) have historically tried to keep the text size and spacing more accurate at some loss in font sharpness and readability on low-PPI monitors (an easy choice for Apple, as they expect you to buy a higher-PPI monitor). MS Windows with ClearType and Apple with their LCD font smoothing have options to further improve fonts by taking advantage of LCDs with side-by-side red-green-blue subpixels.
But this whole grid fitting scheme falls apart when the monitors are virtualized. Horizontal and vertical strokes transform into diagonal lines. Because grid fitting won’t work, the display of a virtual monitor needs to be much higher in angular resolution than a physical monitor to show a font of the same size with similar sharpness. Yet today and for the foreseeable future, VR displays are much lower resolution.
For more on the definition of font “points” and their history with Windows and Macs, see Appendix 3: Confabulating typeface “points” (pt) with Pixels – A Brief History.
The slightest head movement means that everything has to be re-rendered. The “grid” to which you want to render text is not that of the virtual monitor but that of the headset’s display. There are at least two main approaches: render applications to a conventional virtual monitor and then re-render that monitor into 3-D space, or render applications directly in 3-D space on the headset’s display grid.
Systems will end up with a hybrid of the two approaches mixing “new” 3-D applications with legacy office applications.
The MQP’s Horizons appears to render the virtual monitor(s) and then re-render them in 3-D space along with the cylindrical effect plus pre-correction for their Pancake lens distortion.
The MQP’s desktop illustrates the basic issues of inscribing a virtual monitor into the VR FOV while keeping the monitor stationary. There is some margin for allowing head movement without cutting off the monitor, which would be distracting. Additionally, the binocular overlap cutting off the monitor is discussed above.
The MQP uses a 16:10 aspect ratio, 1920×1200 pixel “virtual monitors.” The multiple virtual monitors are mapped into the MQP’s 1920×1800 physical display. Looking straight ahead, sitting at the desktop, you see the central monitor and about 30% of the two side monitors.
The virtual monitor’s center uses about 880 pixels, or about half of the 1800 vertical pixels of the MQP’s physical display, or 64% of the 1200 vertical pixels reported to Windows, when used at the desktop.
The central monitor behaves like it is about 1.5 meters (5 feet) away or about 2 to 3 times the distance of a typical computer monitor. This makes “head zooming” (leaning in to make the image bigger) much less effective (by a factor of 2 to 3X).
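The geometry behind why “head zooming” is less effective at 1.5 meters can be sketched numerically (the 0.6 m screen width and the 0.2 m lean are my illustrative numbers, not measured values):

```python
import math

def angular_width_deg(width_m: float, distance_m: float) -> float:
    """Horizontal angle a flat screen of width_m subtends at distance_m."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Lean in 0.2 m toward a 0.6 m wide screen:
real_gain = angular_width_deg(0.6, 0.4) / angular_width_deg(0.6, 0.6)      # screen at 0.6 m
virtual_gain = angular_width_deg(0.6, 1.3) / angular_width_deg(0.6, 1.5)   # screen at 1.5 m
print(round(real_gain, 2), round(virtual_gain, 2))  # ~1.39x vs ~1.15x magnification
```

The same lean buys roughly 2.5x more magnification on a physical desktop monitor than on a virtual monitor parked 1.5 meters away.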
Apple’s AVP has a similar FOV and will have similar limitations in fitting virtual monitors. There is the inevitable compromise between showing the whole monitor, with some latitude for the user moving their head, and avoiding cutting off the sides of the monitor.
The pre-distortion correction is certainly going to hurt the image. It is possible that the AVP, using similar pancake optics, will have a similar need for pre-correction (most, if not all, VR optics have significant pincushion distortion – a side effect of trying to support a wide FOV). The MQP displays are rotated to clear the nose (no word on the AVP). However, this rotation can be rolled into the other transformations and probably does not significantly impact the processing requirements or image quality.
The image below, one cell of a test pattern with two lines of text and some 1- and 2-pixel-wide lines, shows a simulation (in Photoshop) of the scaling process. For this test, I chose a 175%-scaled 11pt font, which should have roughly the same number of pixels as an 11pt font at 100% on an Apple Vision Pro. This simulation greatly simplifies the issue but shows what is happening with the pixels. The MQP and AVP must support resampling with 6 degrees of freedom in the virtual world and pre-correcting distortion of the optics (and, in the case of MQP’s Horizons, curving the virtual monitor).
The pixels have been magnified by 600% (in the full-size image), and a grid has been shown to make the individual pixels visible. On the top right, the source has been scaled by 64%, about the same amount MQP Horizons scales the center of the 1920×1200 virtual monitor when sitting at the desktop. The bottom right image is scaled by 64% and rotated by 1° to simulate some head tilt.
If you look carefully at the scaled one- and two-pixel-wide lines in the simulation, you will notice that sometimes the one-pixel-wide lines are as wide as the 2-pixel lines but dimmer. You will also see that what started as identical fonts from line to line look different when scaled, even without any rotation. In the through-the-lens cells, the fonts have further degradation/softening as they are displayed on color subpixels.
Below is what the 11pt 175% fonts look like via the lens of the MQP in high enough resolution to see the color subpixels. By the time the fonts have gone through all the various scaling, they are pretty rounded off. If you look closely at the same font in different locations (say the “7” for the decimal point), you will notice every instance is different, whereas, on a conventional physical monitor, they would all be identical due to grid fitting.
For reference, the full test pattern and the through-the-lens picture of the virtual monitor are given below (Click on the thumbnails to see the full-resolution images). The camera’s exposure was set low so the subpixels would not blow out and lose all their color.
When looking through the MQP, the text scintillates/sparkles. This occurs because no one can keep their head perfectly still, and every text character is being redrawn on each frame with slightly different alignments to the physical pixels causing the text to wriggle and scintillate.
Scaling/resampling can be done with sharper or softer processing. Unfortunately, the sharper the image after resampling, the more it will wriggle with movement. The only way to avoid this wriggling and have sharp images is to have a much higher ppd. The MQP has only 22.5ppd; the AVP, at about 40ppd, should be better, but I think they would need about 80ppd (about the limit of good vision and what Apple’s retina monitors support) to eliminate the problems.
The MQP (and most displays) uses spatial color with individual red, green, and blue subpixels, so the wriggling is at the subpixel level. The picture below shows the same text with the headset moving slightly between shots.
Below is a video from two pictures taken with the headset moved slightly between shots to demonstrate the scintillation effect. The 14pt font on the right has about the same number of pixels as an 11pt font with the resolution of the Apple Vision Pro.
This will not be a close call: using any VR headset, including the MQP and Apple Vision Pro, as a computer monitor replacement fails any serious analysis. It might impress people who don’t understand the issues and can be wowed by a flashy short demo, and it might be better than nothing. But it will be a terrible replacement for a physical monitor/display.
I can’t believe Apple seriously thinks a headset display with about 40ppd will make a good virtual monitor. Even if some future VR headset has 80ppd and an over-100-degree FOV, double the AVP linearly or 4X the pixels, it will still have problems.
Part 5D of this series will include more examples and more on my conclusions.
All this discussion of fonts and 3-D rendering reminded me of those early days when the second-generation TMS34020 almost got designed into the color Macintosh (1985 faxed letter from Steve Perlman from that era – right). I also met with Steve Jobs at NeXT and mentioned Pixar to him before Jobs bought them (discussed in my 2011 blog article) and John Warnock, a founder of Adobe, who was interested in doing a Port of Postscript to the 34010 in that same time frame.
In the 1980s, I was the technical leader for a series of programs that led to the first fully programmable graphics processor, the TMS34010, and the Multi-ported Video DRAM (which led to today’s SDRAM and GDRAM) at Texas Instruments (TI) (discussed a bit more here and in Jon Peddie’s 2019 IEEE article and his 2022 book “The History of the GPU – Steps to Invention”).
In the early 1980s, Xerox PARC’s work influenced my development of the TMS34010, including Warnock’s 1980 paper (while still at PARC), “The Display of Characters Using Gray Level Sample Arrays,” and the series of PARC’s articles in BYTE Magazine, particularly the August 1981 edition on Smalltalk which discussed bit/pixel aligned transfers (BitBlt) and the use of a “mouse” which had to be explained to BYTE readers as, “a small mechanical box with wheels that lets you quickly move the cursor around the screen.”
When defining the 34010, I had to explain to TI managers that the mouse would be the next big input device for ergonomic reasons, not the light pen (used on CAD terminals at TI in the early 1980s), which requires the user to keep their arm floating in the air, which quickly becomes tiring. Most AR headset user interfaces make users suffer by having to float their hands to point, select, and type, so the lessons of the past are being relearned.
In the late 1980s, a systems engineer from a company I had never heard of called “Bloomberg,” who wanted to support 2 to 4 monitors per PC graphics board, came to see us at TI. In a time when a single 1024×768 graphics card could cost over $1,200 (about $3,000 in 2023 dollars), this meeting stood out. The Bloomberg engineer explained how Wall Street traders would pay a premium to get as much information as possible in front of them, and a small advantage on a single trade would pay for the system. It was my first encounter with someone wanting multiple high-resolution monitors per PC.
I used to have a life designing cutting-edge products from blank sheets of paper (back then, it was physical paper) through production and marketing; in contrast, I blog about other people’s designs today. And I have dealt with pixels and fonts for over 40 years.
Below is one of my early presentations on what was then called the “Intelligent Graphics Controller” (for internal political reasons, we could not call it a “processor”), which became the TMS34010 Graphics System Processor. You can also see the state of 1982 presentation technology with a fixed-spaced font and the need to cut and paste hand drawings. This slide was created in Feb 1982. The Apple Lisa didn’t come out until 1983, and the Mac in 1984.
We announced the TMS34010 in 1986, and our initial main competitor was the Intel 82786. But the Intel chip was hardwired and lacked the 34010’s programmability, and to top it off, it had many bugs. In just a few months, the 82786 was a non-factor. The copies of a few of the many articles below capture the events.
In 1986, we wrote two articles on the 34010 in IEEE CG&A magazine. You can see from the front pages of the articles the importance we put on drawing text. Copies of these articles are available online (click on the thumbnails below to be linked to the full articles). You may note the similarity of the IEEE CG&A article’s first figure to the one in the 1981 Byte Smalltalk article, where we discussed extending “BitBlt” to the color “PixBlt.”
We started publishing a 3rd-party guide of all the companies developing hardware and software for the 340 family of products, and the June 1990 4th Edition contained over 200 hardware and software products.
Below is a page from the TMS340 TIGA Graphics Library, including the font library. In the early 1980s, everyone had to develop their own font libraries. There was insufficient processing power to render fonts with “hints” on the fly; we did well to have bitmapped fonts with little or no antialiasing/smoothing.
Sadly, we were a bit before our time, and Texas Instruments had, by the late 1980s, fallen far behind TSMC and many other companies in semiconductor technology for making processors. Our competitors, such as ATI (Nvidia wasn’t founded until 1993), could get better semiconductor processing at a lower cost from the then-new 3rd-party semiconductor fabs such as TSMC (founded in 1987).
All the MQP pictures in these two articles were taken through the left eye’s optics using either the Canon R5 (45mp) with an RF16mm f2.8 or 28mm f2.8 “pancake” lens or the lower-resolution Olympus E-M5D-3 (20mp) with a 9-18mm zoom lens at 9mm. Both cameras feature a “pixel shift” mode that moves the sensor, giving 405mp (24,576 × 16,384) for the R5 and 80mp (10,368 × 7,776 pixels) for the M5D-3, and all the pictures used this feature as it gave better resolution, even if the images were later scaled down.
High-resolution pictures of computer monitors with color subpixels and any scaling or compression cause issues with color and intensity moiré (false patterning) due to the “beat frequency” between the camera’s color sensor and the display device. In this case, there are many different beat frequencies between both the pixels and color subpixels of the MQP’s displays and the cameras. Additionally, the issues of the MQP’s optics (which are poor compared to a camera lens) vary the resolution radially. I found for the whole FOV image, the lower-resolution Olympus camera didn’t have nearly as severe a moiré issue (only a little in intensity and almost none in color). In contrast, it was unavoidable with the R5 with the 16mm lens (see comparison below).
The R5 with the 28mm f2.8 lens and pixel-shift mode could capture the MQP’s individual red, green, and blue subpixels (right). In the picture above, the two “7s” on the far right have horizontal and diagonal strokes a little over 1 pixel wide. The two 7s are formed by different subpixels because they are slightly differently aligned in 3-D space. The MQP’s displays are rotated by about 20°; thus, the subpixels are on a 20° diagonal (about the same angle as the lower stroke of the 7s). Capturing at this resolution, where the individual red, green, and blue subpixels are visible, necessitated underexposing the overall image by about 8X (3 camera stops). Otherwise, some color dots (particularly green) would “blow out” and shift the color balance.
As seen in the full-resolution crop above, each color dot in the MQP’s display device covers about 1/8th of the area of a pixel, with the other two colors and black filling the rest of the area of a pixel. Note how the scaled-down version of the same pixels on the right look dim when the subpixels are averaged together. The camera exposure had to be set about three stops lower (8 times in brightness as stops are a power of two) to avoid blowing out the subpixels.
Making a monitor appear locked in 3-D space breaks everything about how PCs have dealt with rendering text and most other objects. Since the beginning of PC bitmap graphics, practical compromises (and shortcuts) have been made to reduce processing and to make images look better on affordable computer monitors. A classic compromise is the font “point,” defined (since 1517) as ~1/72nd of an inch.
So, in theory, when rendering text, a computer should consider the physical size of the monitor’s pixels. Early bitmapped graphics monitors in the mid-1980s had about 60 to 85 ppi, so PC developers, lacking the processing power to deal with it and needing to get on with making products, confabulated “points” with “pixels” (the exception was Adobe with their PostScript printers, whose founders came from Xerox PARC and also influenced Apple). Display font “scaling” helps correct this early transgression.
Many decades ago, MS Windows decided that a (virtual) 96 dots per inch (DPI) would be their default “100%” font scaling. An interesting Wikipedia article on the convoluted logic that led to Microsoft’s decision is discussed here. Conversely, Apple stuck with 72 PPI as their basis for fonts and compromised font readability with smaller fonts on lower-resolution monitors. Adherence to 72 PPI may explain why a modern Apple Mac 27” monitor is 5K, to reach 218 ppi (within rounding of 3×72=216). In contrast, the much more common and affordable 27” 4K monitor has 163 ppi, not an integer multiple of 72, and Macs have scaling issues with 3rd-party monitors, including the very common 27” 4K.
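The 72-versus-96 confabulation is easy to see numerically. A sketch of the two historical conventions:

```python
def point_size_px(points: float, dpi: float) -> float:
    """1 pt = 1/72 inch, so the pixel count depends on the assumed DPI."""
    return points / 72.0 * dpi

print(point_size_px(12, 72))  # classic Mac convention: 12 px (points == pixels)
print(point_size_px(12, 96))  # Windows "100%": 16 px for the same nominal 12 pt
print(round(218 / 72, 2))     # Apple's 218 ppi retina is ~3x the 72 ppi base
```

The same 12pt font occupies a different number of pixels under each convention, which is why integer multiples of the base DPI (like 218 ≈ 3×72) scale cleanly and in-between resolutions do not.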
Microsoft and Apple have tried to improve text by varying the intensity of the color subpixels. Below is an example from MS Windows with “ClearType” for a series of different-size fonts. Note particularly the horizontal strokes at the bottom of the numbers 1, 2, and 7: from Calibri 9pt to 14pt they are 1 pixel wide with no smoothing; at 18pt they jump to 2 pixels wide with a little smoothing; and at 20pt they become 2 pixels wide with no vertical smoothing.
Apple has a similar function known as “LCD Font Smoothing.” Apple has put low-ppd text rendering issues in its rearview mirror with “retinal resolution” displays for Mac laptops and monitors. “Retinal resolution” translates to more than 80ppd when viewed normally, which is from about 12” (0.3 meters) for handheld devices (e.g., iPhone) or about 0.5 to 0.8 meters for a computer.
Apple today sells “retina monitors” with a high 218 PPI, which makes text grid fitting less of an issue. But as the chart from Mac external displays for designers and developers (right) shows, Mac systems have resolution and performance issues with in-between-resolution monitors.
The Apple Vision Pro has less than 40 ppd, much lower than any of these monitors at normal viewing distance. And that is before all the issues with making the virtual monitor seem stationary as the user moves.
Part 1 and Part 2 of this series on the Apple Vision Pro (AVP) primarily covered the hardware. Over the next several articles, I plan to discuss the applications Apple (and others) suggest for AVP. I will try to show the issues with human factors and provide data where possible.
I started working in head-mounted displays in 1998, and we bought a Sony Glasstron to study. Sony’s 1998 Glasstron had an 800×600 (SVGA) display, about the same as most laptop computers in that year, and higher resolution than almost everyone’s television in the U.S. (HDTVs first went on sale in 1998). The 1998 Glasstron even had transparent (sort of) LCD and LCD shutters to support see-through operation.
In the past 25 years, many companies have introduced headsets with increasingly better displays. According to some reports, the installed base of VR headsets will be ~25 million units in 2023. Yet I have never seen anyone on an airplane or a train wear a head-mounted display. I first wrote about this issue in 2012 in an article on the then-new Google Glass with what I called “The Airplane Test.”
I can’t say I was surprised to see Apple showing the watching-movies-on-airplanes VR app, as I have seen it again and again over the last 25 years. It makes me wonder how well Apple verified the concepts they showed. As Snazzy Labs explained, Apple showed no new apps that had not failed before, and it is not clear they failed for lack of better hardware.
Since the technology for watching videos on a headset has been available for decades, there must be reasons why almost no one (Brad Lynch of SadlyItsBradley says he does) uses a headset to watch movies on a plane. I also realize that some VR fans will watch movies on their headsets, but this, like VR gaming, does not mean it will support mass-market use.
As will be shown, the total pixel angular (pixels per degree) resolution of the AVP, while not horrible, is not particularly good for watching movies. But then, the resolution has not been what has stopped people from using VR on airplanes; it has been other human factors. So the question becomes, “Has the AVP solved the human factors problems that prevent people from using headsets to watch movies on airplanes?”
In 2019, in FOV Obsession, I discussed an excellent Photonics West AR/VR/MR Conference presentation by Thad Starner of the Georgia Institute of Technology, a long-time AR advocate and user.
First, the eye only has high resolution in the fovea, which covers only ~2°. The eye goes through a series of movements and fixations known as saccades. What a person “sees” results from the human vision system piecing together a series of “snapshots” at each fixation. The saccadic movement is a function of the activity and the person’s attention. Also, vision is partially, but not completely, blanked when the eye is moving (see: We thought our eyes turned off when moving quickly, but that’s wrong, and Intrasaccadic motion streaks jump-start gaze correction).
Starner shows the results from a 2017 Thesis by Haynes, which included a study on FOV and eye discomfort. Haynes’ thesis states (page 8 of 303 pages and 275 megabytes – click here to download it):
“Thus, eye physiology provides some basic parameters for potential HWD design. A display can be no more than 55° horizontally from the normal line of sight based on oculomotor mechanical limits. However, the effective oculomotor range places a de facto limit at 45°. Further, COMR and saccadic accuracy suggest visually comfortable display locations may be no more than [plus or minus] 10-20° from the primary position of gaze.”
The encyclopedic Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets by Bernard Kress writes about a “fixed foveated region” of about 40-50° (right). But in reality, the eyes can’t see 40-50° with high resolution for more than a few minutes without becoming tired.
The bottom line is that the human eye will want to stay within about 20° of the center when watching a movie. Generally, if a user wants to see something more than about 30° from the center of their vision, they will turn their head rather than use just their eyes. This is also true when watching a movie or using a large computer monitor for office-type work.
It may shock many VR game players who want 120+ degree FOVs, but SMPTE, which sets the recommendations for movie theaters, says the optimal viewing angle for HDTV is only 30°. THX specifies 40 degrees (per Wikipedia and many other sources). These same optimum seating-location angles apply to normal movie theaters as well.
The front row of a “normal” movie theater subtends about 60°, which is about the closest most people will willingly sit. Most people don’t want to sit in the front rows of a theater because of the “head pong” (as Thad Starner called it) required to watch a movie that is ~60° wide.
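The seating angles above follow from simple geometry. As a sketch (the 40-foot screen width and the seat distances are assumed example numbers, not figures from this article), the angle a flat screen subtends can be computed as:

```python
import math

def viewing_angle_deg(screen_width, distance):
    """Horizontal angle subtended by a flat screen of width `screen_width`
    viewed head-on from `distance` (same units for both)."""
    return math.degrees(2 * math.atan((screen_width / 2) / distance))

# Hypothetical 40-foot-wide theater screen, viewed from three distances:
for d in (34.6, 55.0, 74.6):
    print(f"{d:5.1f} ft -> {viewing_angle_deg(40, d):.0f} deg")
```

With these example numbers, the three distances land at roughly 60°, 40°, and 30°, i.e., the front-row, THX, and SMPTE angles discussed above.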
While 30°-40° may seem small, it comes back to human factors and a feedback loop of content generated to work well with typical theater setups. A person in the theater will naturally see only what is happening in the center ~30° of the screen most of the time, except for some head-turning fast action.
The image content generated outside of ~30° helps give an immersive feel but costs money to create and will not be seen in any detail 99.999% of the time. If you take content generated assuming a nominal 30° to 40° viewing angle and enlarge it to fill 90°, it will cause eye and head discomfort for the user to watch it.
Another factor is “angular resolution.” The bands in the chart on the right show how far back you must sit from a TV of a given size and resolution before you can’t see the pixels. The metric they use for being “beneficial” is 60ppd or more. Also shown on the chart, with the dotted white lines, are the SMPTE 30° and THX 40° recommendations.
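The chart’s bands can be reproduced with a small calculation: find the distance at which one pixel subtends 1/60 of a degree (60ppd). This is a sketch with assumed example numbers (a 65-inch 16:9 TV), not values read off the chart:

```python
import math

def distance_for_ppd(diagonal_in, res_h, ppd=60.0, aspect=16 / 9):
    """Viewing distance (inches) at which a 16:9 display reaches `ppd`
    pixels per degree -- i.e., where ~20/20 vision (60 ppd) can no
    longer resolve individual pixels."""
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)
    pixel_pitch = width_in / res_h  # inches per pixel
    # Each pixel must subtend no more than 1/ppd of a degree:
    return pixel_pitch / math.tan(math.radians(1.0 / ppd))

# A hypothetical 65-inch TV at two resolutions:
print(f'65" 1080p: ~{distance_for_ppd(65, 1920) / 12:.1f} ft')
print(f'65" 4K:    ~{distance_for_ppd(65, 3840) / 12:.1f} ft')
```

Doubling the horizontal resolution halves the distance needed to stop seeing pixels, which is why the chart’s bands shift toward the screen as resolution rises.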
Apple has not given the exact resolution but stated 23 million pixels (for both eyes). Assuming a square display, this computes to about 3,400 pixels in each direction. The images in the video look to be about a 7:6 aspect ratio, which would work out to about ~3680 by ~3150. Also, the optics cut off some of the display’s pixels for each eye, yet companies often count all of the display’s pixels.
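The arithmetic behind those estimates is straightforward. Note the 7:6 numbers below come out slightly under the ~3680 × ~3150 estimate in the text because 23 million is itself a rounded figure:

```python
import math

total_px = 23e6            # Apple's "23 million pixels," both eyes combined
per_eye = total_px / 2

square_side = math.sqrt(per_eye)    # if the panel were square
h = math.sqrt(per_eye * 6 / 7)      # 7:6 aspect: w/h = 7/6 and w*h = per_eye
w = h * 7 / 6

print(f"square: ~{square_side:.0f} px per side")
print(f"7:6 panel: ~{w:.0f} x {h:.0f} px")
```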
Apple didn’t specify the field of view (FOV). One big point of confusion on FOV is that VR headsets are typically quoted for both eyes, including the binocular view combining both eyes. The FOV also varies with eye relief from person to person (people’s eye insets, foreheads, and other physical features differ). Reports are that the FOV is “similar” to the Meta Quest Pro, which has a binocular FOV of about 106 degrees. The single-eye FOV is about 90°.
Combining the information from various sources, the net result is about 35 to 42 pixels per degree (ppd). Good human 20/20 vision is said to be ~60ppd. Steve Jobs, with the iPhone 4, called 300 pixels per inch at reading distance (which works out to ~60ppd) “retinal resolution.” For the record, people with very good eyesight can see 80ppd.
Some people wearing the AVP commented that they could make out some screen door effect consistent with about 35-40ppd. The key point is that the AVP is below 60, so jagged line effects will be noticeable.
Using the THX 40° horizontal FOV standard and assuming the AVP is about 90° horizontally per eye (~110° for both eyes), ~3680 pixels horizontally, and almost no pixels cropped, this leaves 3680 x (40/90) = ~1635 pixels horizontally. Using the SMPTE 30° angle gives about 3680 x (30/90) = ~1227 pixels wide.
If the AVP is used for watching movies and showing the movie content “optimally,” the image will be lower than full HD (1920×1080) resolution, and since there are ~40ppd, jaggies will be visible.
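The calculations above can be packaged into a few lines, using the estimated numbers from the text (~3680 pixels across ~90° per eye) and a simple linear approximation that ignores the radial distortion of real VR optics:

```python
h_pixels = 3680     # estimated horizontal pixels per eye
h_fov_deg = 90      # estimated horizontal FOV per eye, in degrees

ppd = h_pixels / h_fov_deg   # average pixels per degree

def movie_width_px(view_angle_deg):
    """Horizontal pixels covering a movie shown at `view_angle_deg`,
    assuming pixels are spread uniformly across the FOV."""
    return h_pixels * view_angle_deg / h_fov_deg

print(f"average ppd: ~{ppd:.0f}")
print(f"THX 40 deg:   ~{movie_width_px(40):.0f} px wide")
print(f"SMPTE 30 deg: ~{movie_width_px(30):.0f} px wide")
```

Both results fall short of full HD’s 1920 horizontal pixels, which is the point: the headset’s total pixel count cannot be concentrated into the comfortable movie-viewing angle.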
While the AVP has “more pixels than a 4K TV,” as claimed, they can’t deliver those pixels to an optimally displayed movie’s 40° or 30° horizontal FOV. Using the full FOV would, in effect, put you visually closer than the front row of a movie theater, not where most people would want to watch a movie.
Still, resolution and jaggies alone are not that bad; they would not, and have not, stopped people from using a VR headset for movies.
The vestibulo-ocular reflex (VOR) stabilizes a person’s gaze during head movement. The inner ear detects the rotation, and the reflex rotates the eyes to counter the movement so they stay fixed on where the person is gazing. In this way, a person can, for example, read a document even while their head is moving. People with a VOR deficiency have problems reading.
Human vision will automatically suppress the VOR when it would be counterproductive. For example, the reflex is suppressed when one tracks a moving object with a combination of head and eye movement, where VOR would fight the tracking. The key point is that the display system must account for the combined head and eye movement when generating the image, or it risks causing a vestibular (motion sickness) problem where the inner ear does not agree with the eyes.
Quoting from the WWDC 2023 video at ~1:51:18:
Running in parallel is a brand-new chip called R1. This specialized chip was designed specifically for the challenging task of real-time sensor processing. It processes input from 12 cameras, five sensors, and six microphones.
In other head-worn systems, latency between sensors and displays can contribute to motion discomfort. R1 virtually eliminates lag, streaming new images to the displays within 12 milliseconds. That’s eight times faster than the blink of an eye!
Apple did not say if the “12 cameras” include eye-tracking cameras, as they only showed the cameras on the front, but likely they are included. Complicating matters further is the saccadic movement of the eye: eye tracking can know where the eye is aimed, but not what is seen. The AVP is reported to have superior eye tracking for selecting things from a menu. But we don’t know if the eye tracking, coupled with the head tracking, deals with VOR, and if so, whether it is accurate and fast enough to avoid causing VOR-related problems for the user.
Now consider some options for displaying a virtual screen on a headset, shown below. Apple has shown locking the screen in 3-D space. For their demos, they appear to have gone with a very large (angularly) virtual screen for the demo impact. But, as outlined below, making a very large virtual screen is not the best thing to do for more normal movie and video watching. No matter which option is chosen below, jaggies and “zipper/ripple” antialiasing artifacts will be visible at times due to the angular resolution (ppd) of the AVP.
Apple showed (above) images that might fill about 70 to 90 degrees of the FOV in its short Avatar demos (case 2 above). This will “work” in a demo as something new and different, but as discussed in #2 above, it is not what you would want for a long movie.
On top of all the other issues, the headset processing and sensor must address vestibular-related motion sickness problems caused by being in a moving vehicle while displaying an image.
You then have the ergonomic issues of wearing a somewhat heavy, warm headset sealed against your face with no air circulation for hours while on a plane. Then you have the snag hazard of the cord, which will catch on just about everything.
There will be flight attendants or others tapping you to get your attention. Certainly, you don’t want the see-through mode to come on each time somebody walks by you in the aisle.
A more basic practical problem is that a headset takes up more room/volume than a smartphone, tablet, or even a moderately sized laptop, due to its shape and the need to protect the glass front.
It is important to note that humans understand what behaves as “real” versus virtual. The AVP still cuts off much of a person’s peripheral vision. Things like VOR, Vergence-Accommodation Conflict (VAC, discussed in Part 2), and the way focus behaves are well-known issues with VR, but many more subtle issues can cause humans to sense there is something just not right.
In visual human factors, I like to bring up the 90/90 rule, which states, “it takes 90% of the effort to get 90% of the way there, and then the other 90% of the effort to solve the last 10%.” Sometimes this rule has to be applied recursively, where multiples of the “90%” effort are required. Apple could do a vastly better job of head and eye tracking with faster response time, and people would still prefer to watch movies and videos on a direct-view display.
Certainly, nobody will be the wiser in a short flashy demo. The question is whether it will work for most people watching long movies on an airplane. If it does, it will break a 25+ year losing streak for this application.
This part will primarily cover the hardware and related human physical and visual issues with the Apple Vision Pro (AVP). In Part 3, I intend to discuss my issues with the applications Apple has shown for the AVP. In many cases, I won’t be able to say that the AVP will definitely cause problems for most people, but I can see and report on many features and implementation issues and explain why they may cause problems.
It is important to note that there is wide variation between humans in their susceptibility to and discomfort with visual issues. All display technologies are based on an illusion, and different people have different issues with various imperfections in the illusions. Some people may be able to adapt to some ill effects, whereas others can’t or won’t. Based on over 40 years of working with graphics and display devices, this article points out problems I see with the hardware that might not be readily apparent in a short demo. I can’t always say there will be problems, but some things concern me.
The Appendix has some “cleanup/corrections” on Part 1 of this series on the Apple Vision Pro (AVP).
I’m constantly telling people that “Demos are Magic Shows”: what you see has been carefully selected not to show any problems and to show only what they want you to see. Additionally, it is impossible to find all the human-factor physical and optical issues in the cumulative ~30-minute demo sessions at WWDC. Each session was further broken into short “Sizzle Reels” of various potential applications.
The experience that people can tolerate and enjoy with a short theme park ride or movie clip might make them sick if they endure it for more than a few minutes. In recent history, we have seen how 3-D movies reappeared, migrated to home TVs, and later disappeared after the novelty wore off and people discovered the limitations and downsides of longer-term use.
It will take months of studies with large populations to know, as it is well established that problems with the human visual perception of display technologies vary widely from person to person. Maybe Apple has done some of these studies, but they have not released them. There are some things that Apple looks to be doing wrong from a human- and visual-factors perspective (nothing is perfect), but how severe the effects will be will vary from person to person. I will try to point out things I see that Apple is doing that may cause issues, and claims that may be “incomplete” and gloss over problems.
Apple employed a trick that gets the observer to focus on one aspect of a problem that is a known issue and where they think they do well. Quoting from the WWDC 2023 video at ~1:51:34:
In other head-worn systems, latency between sensors and displays can contribute to motion discomfort. R1 virtually eliminates lag, streaming new images to the displays within 12 milliseconds. That’s eight times faster than the blink of an eye!
I will give him credit for saying that the delay “can contribute” rather than saying it is the whole cause. But they were also very selective with the wording “streaming new images to the displays within 12 milliseconds,” which is only a part of the “motion to photon” latency problem. They didn’t discuss the camera or display latency. Assuming the camera and display both run at 90Hz frame rates and work one frame at a time, this would roughly triple the total latency, and there may be other buffering delays not mentioned. On top of that, there are whatever tracking errors occur.
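A back-of-envelope version of that “roughly triple” estimate is below. The one-frame camera and display delays are my assumptions for illustration; Apple only quoted the 12ms R1 figure:

```python
# Rough motion-to-photon latency estimate (assumed pipeline, not
# confirmed by Apple):
frame_time_ms = 1000 / 90   # ~11.1 ms per frame at 90 Hz
r1_ms = 12                  # Apple's quoted sensor-to-display number
camera_ms = frame_time_ms   # assume one frame to capture and read out
display_ms = frame_time_ms  # assume one frame to scan out to the display

total_ms = camera_ms + r1_ms + display_ms
print(f"estimated total: ~{total_ms:.0f} ms (vs. the quoted 12 ms)")
```

Any additional buffering in the pipeline would push the total even higher.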
The statement, “That’s eight times faster than the blink of an eye!” is pure marketing fluff as it does not tell you if it is fast enough.
In some applications, even 12 milliseconds could be marginal. Some very low latency systems process scan lines from the camera to the display with near-zero latency, rather than whole frames, to reduce the motion-to-photon time. But this scan-line processing becomes even more difficult when you add virtual content, and it requires special cameras and displays that work line by line, synchronously. Even systems that work on scan lines rather than frames may not be fast enough for intensive applications. Specifically, this issue is well-known in the area of night vision. The US and other militaries still prefer monochrome (green or b&w) photomultiplier tubes in Enhanced Night Vision Goggles (ENVG) over cameras with displays. They still use photomultiplier tubes (improved 1940s-era technology) rather than semiconductor cameras because the troops find even the slightest delay disorienting.
Granted, troops making military maneuvers outdoors for long periods may be an extreme case, but at least in this application, it shows that even the slightest delay causes issues. What is unknown is who, what applications, and which activities might have problems with the level of delays and tracking errors associated with the AVP.
The militaries also use photomultiplier tubes because they still work with less light (just starlight) than the best semiconductor sensors. But I have been told by night vision experts that the delay is the biggest issue.
The proper location of the cameras would be coaxial with the user’s two eyes. Still, as seen in the figure (right), the Main Cameras and all the other cameras and sensors are in fixed locations well below the eyes, which is not optimal, as will be discussed. This is very different than other passthrough headsets, where the passthrough cameras are roughly located in front of the eyes.
It appears the main cameras and all the other sensors sit so low relative to the eyes to stay out of the way of the “Eyesight Display.” The Eyesight display (right) has a glass cover, and I hear the cover is causing some calibration problems with the various cameras and sensors, as there is variation in the glass, and its placement varies from unit to unit. The glass cover also contributes significant weight to the headset while inhibiting heat from escaping, on top of the power/heat caused by the display itself.
It seems Apple wanted the Eyesight Display so much that they were willing to significantly hurt other aspects of the design.
The importance of centering the (actual or “virtual”) camera with the user’s eye for long-term comfort was a major point made by mixed reality (optical and passthrough) headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear“. Quoting from the article, “The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant results. And those troubling effects persisted long after I took the gear off. That’s because my brain had adjusted to an unnatural view, so it took a while to readjust to normal vision.”
I don’t know if or how well Apple has corrected the misalignment with “virtual cameras” (transforming the image to match what the eye should see) as Meta attempted (poorly) with the MQP. Still, they seem to have made the problem much more difficult by locating the cameras so far away from the center of the eyes.
Having the cameras and sensors in poor locations would make visual depth sensing and coordination more difficult and less accurate, particularly at short distances. Any error will be relatively magnified as things like one’s hands get close to the eyes. In the extreme case, I don’t see how it would work if the user’s hands were near and above the eyes.
The demos in the video (stills below) that indicated some level of depth sensing were contrived/simple. I have not heard of any demos stressing coordinated hand movement with a real object. Any offset error in the virtual camera location might cause coordination problems. Nobody may notice or have serious problems in a short demo, particularly if they don’t do anything close up, but I am curious about what will happen with prolonged use.
There must be on the order of a thousand papers and articles on the issue of vergence-accommodation conflict (VAC). Everyone in the AR/VR and 3-D movie industries knows about the problem. The 3-D stereo effect is caused by having a different view for each eye, which causes the eyes to rotate and “verge,” while the muscles in the eye adjust focus (“accommodate”) based on what it takes to bring the image into focus. If the vergence and accommodation distances disagree, it causes discomfort, referred to as VAC.
Like most other VR headsets, the AVP most likely has a fixed focus at about 2 meters (+/- 0.5m). From multiple developer reports, Apple seems to be telling developers to put things further away from the eyes. Two meters is a good compromise distance for video games where things are on walls or further away. VAC is more of a problem when things get inside 1m, such as when the user works with their hands, which can be 0.5m or less away.
When there is a known problem with many papers on the subject and no products solving it, it usually means there aren’t good solutions. The Magic Leap 1 tried a dual-focus-plane waveguide solution at the expense of image quality and cost, then abandoned it on the Magic Leap 2. Meta regularly presents papers and videos about their attempts to address VAC, including Half Dome 1, 2, and 3, focus surfaces, and a new paper on varifocal at Siggraph in August 2023.
There are two main approaches to VAC; one involves trying to solve for focus everywhere, including light fields, computational holograms, or simultaneous focus planes (ex. CREAL3D, VividQ, & Lightspace3D), and the other uses eye tracking to control varifocal optics. Each requires more processing, hardware complexity, and a loss of absolute image quality. But just because the problem is hard does not make it disappear.
From bits and pieces I have heard from developers at WWDC 2023, it sounds like Apple is trying to nudge developers to make objects/screens bigger but with more virtual distance. In essence, to design the interfaces to reduce the VAC issue from close-up objects.
Consider a virtual computer monitor placed 2m away; it won’t behave like a real-world monitor less than 1/2 meter away. You can blow up the monitor so the text is the same size, but if it is working properly in the virtual space, the text and other content won’t vary in size the same way when you lean in, let alone let you point at something with your finger. Many subtle things you do with a close-up monitor won’t work with a virtual, far-away large monitor. If you make the virtual monitor act like it is the size and distance of a real-world monitor, you have a VAC problem.
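The scale-up required for the far-away virtual monitor is easy to quantify. For flat screens, matching the subtended angle reduces to simple distance scaling (the ~0.6m/27-inch monitor width and 0.5m working distance below are my assumed example numbers):

```python
def virtual_width_m(real_width_m, real_dist_m, virtual_dist_m):
    """Width a virtual monitor must have at `virtual_dist_m` to subtend
    the same angle as a real monitor of `real_width_m` at `real_dist_m`.
    For flat screens, this reduces to simple distance scaling."""
    return real_width_m * virtual_dist_m / real_dist_m

# A ~0.6 m wide (27-inch class) monitor at 0.5 m, virtualized at 2 m:
w = virtual_width_m(0.6, 0.5, 2.0)
print(f"required virtual width: {w:.1f} m")  # a 4x linear scale-up
```

Note also that leaning in 10cm toward the real monitor changes its angular size substantially, while the same lean toward a 2m-away virtual monitor changes almost nothing, which is part of why the virtual monitor feels “wrong.”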
I know some people have suggested using large TVs from further away as computer monitors to relax the eyes, but I have not seen this happen much in practice; I suspect it does not work very well. I have also seen “Ice Bucket challenges,” where people have worn a VR headset as a computer monitor for a week or a month, but I have yet to see anyone say they got rid of their monitors at the end of the experiment. Granted, the AVP has more resolution and better motion sensing and tracking than other VR headsets, but these may be necessary but not sufficient. I don’t see a virtual workspace being as efficient for business applications as using one or more monitors (I am open to seeing studies that could prove otherwise).
A related point, which I plan to discuss in more detail in Part 3, is that there have been near-eye “glasses” for TVs (such as the Sony Glasstron) and computer use for the last ~30 years. Yet, in all these years, I have never seen one used on an airplane, train, or office. It is not that the displays didn’t work, were too expensive for an air traveler (who will spend $350 on noise-canceling earphones), or lacked sufficient resolution for at least watching movies. Yet essentially 100% of people decide to use a much smaller (effective) image; there must be a reason.
VAC is only one of many image generation issues I put in the class of “things not working right,” causing problems for the human visual system. The real world is also “inconvenient” because it has infinite focus distances, and objects can be any distance from the user.
The human eye works very differently from a camera or display device. The eye jumps around in “saccades,” with vision partially blanked during the movements. Where the eye looks is a combination of voluntary and involuntary movement and varies with whether one is reading or looking, for example, at a face. Only the center of vision has significant resolution and color differentiation, and a sort of variable-resolution “snapshot” is taken at each saccade. The human visual system then pieces together what a person “sees” from a combination of objective things captured at each saccade and subjective information (eyewitnesses can be highly unreliable). Sometimes human vision pieces together some display illusions “wrong,” and the person sees an artifact; often, it is just a flash of something the eye was not meant to see.
Even with great eye tracking, a computer system might know where the eye is pointing, but it does not know what was “seen” by the human visual system. So here we have the human eye taking these “snapshots,” and the virtual image presented does not change quite the way the real world does. There is a risk that the human visual system will know something is wrong at a conscious (you see an artifact that may flash, for example) or unconscious level (over time, you get a headache). And once again, everybody is different in what visual problems most affect them.
Anyone who has put on a VR headset from a major manufacturer gets bombarded with messages at power-up to make sure they are in a safe place. Most have some form of electronic “boundaries” to warn you when you are straying from your safe zone. As VR evangelist Bradley Lynch told me, the issue is known as “VR to the ER,” for when an enthusiastic VR user accidentally meets a real-world object.
I should add that the warnings and virtual boundaries with VR headsets are probably more of a “lawyer thing” than true safety. As I’m fond of saying, “No virtual boundary is small enough to keep you safe or large enough not to be annoying.”
Those in human visual factors say (to the effect), “Your peripheral vision is there to keep you from being eaten by tigers”; translated to the modern world, it keeps you from getting hit by cars and from running into things in your house. Human vision and anatomy (how your neck wants to bend) are biased in favor of looking down. As the saying goes, there are many more dangerous things on the ground than in the air.
Peripheral vision has very low resolution and almost no sense of color, but it is very motion- and flicker-sensitive. It lets you sense things you don’t consciously see so that you turn your head to see them before you run into them. The two charts on the right illustrate a typical person’s vision overlaid with the Hololens 2 and the AVP. The lightest gray areas are for the individual right and left eyes; the central rounded triangular mid-gray area is where the eyes have binocular overlap and you have stereo/depth vision. The near-black areas are where the headset blocks your vision. The green area shows the display’s FOV.
What is concerning from a safety perspective is that with the AVP, essentially all peripheral vision is lost, even if the display is in full passthrough mode with no content. It is one thing to have a demo in a safe demo room with “handlers/wranglers,” as Apple did at the WWDC; it is another thing to let people loose in a real house or workplace.
Almost as a topper on safety, the AVP has the battery on an external cable, which is a snag hazard. By all reports, the AVP does not have a small “keep-alive” battery built into the headset to cover the battery being accidentally disconnected or deliberately swapped (this seems like an oversight). So if the cable gets pulled, the user is completely blinded; you had better hope it doesn’t happen at the wrong time. Another saying I have is, “There is no release strength for a breakaway cable that is both weak enough to keep you safe and strong enough not to release when you don’t want it to break.”
Question, which is worse?:
A) To have the pull force so high that you risk pulling the head into something dangerous, or
B) To have the cord pull out needlessly, blinding the person so they trip or run into something?
This makes me wonder what warnings, if any, will occur with the AVP.
When it comes to the physical design of the headset, it appears that Apple strongly favored style over functionality. Even from largely favorable reviewers, there were many complaints about physical comfort being a problem.
About 90% of the weight of the AVP appears to be in front of the eyes, making the unit very front-heavy. The AVP’s “solution” is to clamp the headset to the face with the “Light Seal” face adapter applying pressure to the face. Many users discussed the unit’s weight and pressure on the face after just half-hour wear periods. Wall Street Journal reporter Joanna Stern discussed the problem and even showed how it left red marks on her face. Apple made the excuse that they only had limited face adapters and that better adapters would fix or improve the problem. There is no way a better Light Seal shape will fix the problem with so much weight hanging forward of the eyes and without any overhead support.
Experienced VR users who tried on the AVP report that they think the headset weighs at least 450 grams, with some thinking it might be over 500 grams. Based on its size, I think the battery cable weighs about 60 grams, pulling asymmetrically on the headset. Based on a similar-size but slightly differently shaped battery, the AVP’s battery is about 200 grams. While a detachable battery gives options for larger batteries or a direct power connection, it only saves about 200-60 = 140 grams of weight on the head in the current configuration.
Many test users commented on there being an over-the-head strap, and one was shown in the videos (see lower right above). Still, the strap shown attaches very far behind the unit’s center of gravity and will do little to take the weight off the front, which could help reduce the clamping force required against the face. This is basic physics 101.
I have seen reports that several strap types will be available, including ones made out of leather. I expect there will have to be front-to-back straps built-in to relieve pressure on the user’s face.
I thought they could clip a battery back with a shorter cable to the back of the headset, similar to the Meta Quest Pro and Hololens 2 (below), but this won’t work as the back headband is flexible and thus will not transfer the force to help balance the front. Perhaps Apple or 3rd parties will develop a different back headband without as much flexibility, incorporating a battery to help counterbalance the front. Of course, all this talk of straps will be problematic with some hairstyles (ex., right) where neither a front-to-back nor side-to-side strap will work.
Meta Quest Pro is 722 grams (including a ~20Wh battery), and Hololens 2 is 566 grams (including a ~62Wh battery). Even with the forehead pad, the Hololens 2 comes with a front-to-back strap (not shown in the picture above), and the Meta Quest Pro needs one if worn for prolonged periods (and there are multiple aftermarket straps). Even most VR headsets lighter than the AVP with face seals have overhead straps.
If Apple integrated the battery into the back headband, they would only add about 200 grams or a net 140 grams, subtracting out the weight of the cable. This would place the AVP between the Meta Quest Pro and Hololens 2 in weight.
Apple is denying physics and the shape of human heads if they think they won’t need better support than they have shown for the AVP. I don’t think the net 140 grams of a battery is the difference between needing head straps or not.
I see many of the problems with the AVP as stemming from how hard it is to do passthrough AR well and from the trade-offs and compromises Apple made between features and looks. I think Apple made some significant compromises to support the Eyesight feature, which even many fans of the technology say will have uncanny-valley problems with people.
As I wrote in Part 1, the AVP blows away the Meta Quest Pro (MQP) and has a vastly improved passthrough. The MQP is obsolete by comparison. Still, I am not convinced it is good enough for long-term use. There are also a lot of basic safety issues.
Next time, I plan to explore more about the applications Apple presented and whether they are realistic regarding hardware support and human factors.
I had made some size comparisons and estimated that the AVP’s battery was about 35Wh to 50Wh, and then I found that someone had leaked (falsely) 36Wh, so I figured that must be it. But it is not a big difference, as other reports now estimate the battery at about 37Wh. My main point was that the power is higher than some reported, and my power estimate seems close to correct.
All the pre- and post-announcement rumors suggested that the AVP uses pancake optics. I jumped to an erroneous conclusion from the WWDC 2023 video, which made the optics look like they were aspheric refractive. In watching the flurry of reports and concentrating on the applications, I missed circling back to check on this assumption. It turns out that Apple’s June 5th news release states, “This technological breakthrough, combined with custom catadioptric lenses that enable incredible sharpness and clarity . . . ” Catadioptric means a combination of refractive and reflective optical elements, a category that includes pancake optics. Apple recently bought Limbak, an optics design company known for catadioptric designs, including those used in Lynx (which are catadioptric, but not pancake optics, and not what the AVP uses). They also had what they called “super pancake” designs. Apple eschews using any word used by other companies; they avoided saying MR, XR, AR, VR, and Metaverse, and we can add “pancake optics” to that list.
Update June 14, 2023 PM: It turns out that Apple’s news release states, “This technological breakthrough, combined with custom catadioptric lenses that enable incredible sharpness and clarity . . . ” Catadioptric means a combination of refractive and reflective optical elements. This means that they are not “purely refractive” as I first guessed (wrongly). They could be pancake or some variation of pancake optics. Apple recently bought Limbak, an optics design company known for catadioptric designs including those used in Lynx. They also had what they called “super pancake” designs. Assuming Apple is using a pancake design, then the light and power output of the OLEDs will need to be about 10X higher.
UPDATE June 14, 2023 AM: The information on the battery, as posted by Twitter user Kosutami, turned out to be a hoax/fake. The battery shown was that of a Meta Quest 2 Elite, as shown in a Reddit post of a teardown of the Quest 2 Elite. I still think the battery power of the Apple Vision Pro is in the 35 to 50Wh range based on the size of the AVP’s battery pack. I want to thank reader Xuelei Zhang for pointing out the error. I have red-lined and X-ed out the incorrect information in the original article. Additionally, based on the battery’s size, Charger Labs estimates that the Apple Vision Pro could be in the 74Wh range, but I think this is likely too high based on my own comparison.
I shot a picture with a Meta Quest Pro (as a stand-in to judge size and perspective) to compare against Apple’s picture of the battery pack. In the picture is a known 37Wh battery pack. This battery pack is in a plastic case with two USB-A ports and one USB-micro port, unlike the Apple battery pack (there are likely other internal differences as well).
I tried to get the picture with a similar setup and perspective, but this is all very approximate to get a rough idea of the battery size. The Apple battery pack looks a little thinner, less wide, and longer than the 37Wh “known” battery pack. The net volume appears to be similar. Thus I would judge the Apple battery to be between about 35Wh and 50Wh.
I’ve been watching and reading the many reviews by those invited to try the Apple Vision Pro (AVP), typically for about 30 minutes. Unfortunately, I saw very little technical analysis and very few reviewers with deep knowledge of the issues of virtual and augmented reality. At the least, they didn’t mention what seemed to me to be obvious issues and questions. Much of what I saw was from people who were either fans or grateful to be selected for an early look at the AVP and wanted (or needed) to be invited back by Apple.
Unfortunately, I didn’t see a lot of “critical thinking” or understanding of the technical issues beyond having “blown minds.” Specifically, while many discussed the uncanny valley issue with the face capture and EyeSight display, no one even mentioned the issues of variable focusing and Vergence-Accommodation Conflict (VAC). The only places I have seen it mentioned are the Reddit AR/VR/MR and Y Combinator forums. On June 4th, Brad Lynch reported on Twitter that Meta would present its “VR headset with a retinal resolution varifocal display” paper at SIGGRAPH 2023.
As I mentioned in my AWE 2023 presentation video (and full slide set here), I was doubtful, based on what was rumored, that Apple would address VAC. Like many others, Apple appears to have ignored this well-known and well-documented human mechanical and visual problem with VR/MR. As I have said many times, “If all it took were money and smart people, it would be here already. Apple, Meta, etc., can’t buy different physics,” and I should add, “they are also stuck with humans as they exist, with their highly complex and varied visual systems.”
Treat the above as a “teaser” for some of what I will discuss in Part 2. Before discussing the problems I see with the Apple Vision Pro and its prospective applications in Part 2, this part will discuss what the AVP got right over the Meta Quest Pro (MQP).
I know many Apple researchers and executives read this blog; if you have the goods, how about arranging for someone who understands the technology and human factors issues to evaluate the AVP?
I want to highlight three publications that brought up some good issues and dug at least a little below the surface. SadlyItsBradley had an hour-and-49-minute live stream discussing many issues, particularly the display hardware and the applications relative to VR (the host, Brad Lynch, primarily follows VR). The Verge Podcast had a pre-WWDC discussion (which also covered the Meta Quest 3) and a post-WWDC discussion that brought up issues with the presented applications. I particularly recommend listening to Adi Robertson’s comments in the “pre” podcast; her take is hilarious. Finally, I found that Snazzy Labs’ 13-minute explanation put into words some of the problems with the applications Apple showed; in short, there was nothing new that had not failed before, and not just because the hardware was not good enough.
In just about everyone’s opinion, Apple’s AVP has shown up Meta’s MQP. The Meta Quest Pro is considered expensive, with many features poorly executed. The MQP cost less than half as much as the AVP at introduction (less than 1/3rd after the price drop) but is a bridge to nowhere. The MQP would perhaps better be called the Quest 2.5 (i.e., halfway to the Quest 3). Discussed below are specific hardware differences between the AVP and MQP.
I will be critical of many of Apple’s AVP decisions, but I think all the comments I have seen about the price being too high completely miss the point. The price is temporal and can be reduced with volume. Apple or Meta must prove that a highly useful MR passthrough headset can be made at any price. I’m certainly not convinced yet, based on what I have seen, that the AVP will succeed in proving the future of passthrough MR, but the MQP has shown that halfway measures fail.
The people commenting on the AVP’s price have been spoiled by looking at mature rather than new technology. As just one example, the original retail price of the Apple II computer with 4KB of RAM was US$1,298 (equivalent to $6,268 in 2022), and US$2,638 (equivalent to $12,739 in 2022) with the maximum 48KB of RAM (source: Wikipedia). As another example, I bought my first video tape recorder in 1979 for about $1,000, which is more than $4,400 adjusted for inflation, and a blank 1.5-hour tape was about $10 (~$44 in 2023 dollars). The problem is not price but whether the AVP is something people will use regularly.
The Meta Quest Pro (MQP) looks like a half-baked effort compared to the AVP. The MQP’s passthrough mode is comically bad, as shown in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough. Apple’s AVP passthrough will not be “perfect” (more on that in Part 2), but Apple didn’t make something with so many obvious problems.
The MQP used two IR cameras with a single high-resolution color camera in the middle to try and synthesize a “virtual camera” for each eye with 3-D depth perception. The article above shows that the MQP’s method resulted in a low-resolution and very distorted view. The AVP has a high-resolution camera per eye, with more depth-sensing cameras/sensors and much more processing to create virtual camera-per-eye views.
I should add that there are no reports I have seen on how accurately the AVP creates 3-D views of the real world, but by all reports, the AVP’s passthrough is vastly better than that of the MQP. A hint that all is not well with the AVP’s passthrough is that the forward main cameras are poorly positioned (to be discussed in Part 2).
The next issue is that if you target “business applications” and computer monitor replacement, you need at least 40 pixels per degree (ppd), preferably more. The MQP has only about 20 pixels per degree, meaning far less readable text fits in a given area. Because the fonts must be bigger, the eyes must move further to read the same amount of text, thus slowing reading speed. The FOV of the AVP has been estimated to be about the same as the MQP’s, but the AVP has more than 2X the horizontal and vertical pixels, resulting in about 40 ppd.
A note on measuring pixels per degree: Typically, VR headset FOV measurements include the binocular overlap of both eyes. When measuring “pixels per degree,” however, the measurement is based on the total visible pixels divided by the FOV in the same direction for a single eye. The single-eye FOV is often not specified, and some pixels may be cut off depending on the optics and the eye location. Additionally, the measurement varies based on the amount of eye relief assumed.
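As a rough illustration of the single-eye calculation described above, here is a minimal sketch. The per-eye pixel counts and FOV below are assumed round numbers for illustration only, not official specifications for either headset:

```python
def pixels_per_degree(visible_pixels: int, single_eye_fov_deg: float) -> float:
    """Angular resolution: total visible pixels in one direction divided
    by the single-eye FOV in that direction. Assumes roughly uniform
    pixel spacing and ignores optical distortion and eye-relief effects."""
    return visible_pixels / single_eye_fov_deg

# Illustrative, assumed numbers (not official specs):
print(pixels_per_degree(1800, 90))            # MQP-like: 20.0 ppd
print(round(pixels_per_degree(3660, 90), 1))  # AVP-like: ~40.7 ppd
```

Note that because optical distortion makes ppd vary across the FOV (as discussed in Part 5B), a single number like this is only a center-of-view approximation.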
Having at least 40 pixels per degree is “necessary but not sufficient” for supporting business applications. I believe that other visual human factors will make the AVP unsuitable for business applications beyond “emergency” situations and what I call the “Ice Bucket Challenges,” where someone wears a headset for a week or a month to “prove” it could be done and then goes back to a computer monitor/laptop. I have not seen any study (having looked for many years), and Apple presented none, suggesting that the long-term use of virtual desktops is good for humans (if you know of one, please let me know).
Ironically, in the watchOS video, only a few minutes before the AVP announcement, Apple discussed (linked in WWDC 2023 video) how they implemented features in watchOS to encourage people to go outside and stop looking at screens, as it may be a cause of myopia. I’m not the only one to catch this seeming contradiction in messaging.
The AVP’s Micro-OLED should give better black levels/contrast than the MQP’s LCD with a mini-LED locally dimmable backlight. Local dimming is problematic and depends on scene content. While the mini-LEDs are efficient at producing light, much of that light is lost going through the LCD; typically, only about 3% to 6% of the backlight makes it through.
While Apple claims to be making the Micro-OLED CMOS “backplane,” by all reports, Sony is applying the OLEDs and performing the Micro-OLED assembly. Sony has long been the leader in Micro-OLEDs used in camera viewfinders and birdbath AR headsets, including Xreal (formerly Nreal; see Nreal Teardown: Part 2, Detailed Look Inside).
The color sub-pixel arrangement shown in the WWDC videos has a decidedly smaller light-emission area, with more black space between pixels, than the older Sony ECX335 (shown with pixels roughly to scale above). This suggests that Apple didn’t need to push the light output (see optical efficiency in the next section) and supports more efficient light collection (semi-collimation) via the micro-lens arrays (MLAs) reportedly used on top of the AVP’s Micro-OLED.
John Carmack, former Meta Consulting CTO, gave some of the limitations and issues with MQP’s Local Dimming feature in his unscripted talk after the MQP’s introduction (excerpts from his discussion):
21:10 Quest Pro has a whole lot of back lights, a full grid of them, so we can kind of strobe them off in rows or columns as we scan things out, which lets us sort of get the ability of chasing a rolling shutter like we have on some other things, which should give us some extra latency. But unfortunately, some other choices in this display architecture cost us some latency, so we didn’t wind up really getting a win with that.
But one of the exciting possible things that you can do with this is do local dimming, where if you know that an area of the screen has nothing but black in it, you could literally turn off the bits of the backlight there. . . .
Now, it’s not enabled by default because to do this, we have to kind of scan over the screens and that costs us some time, and we don’t have a lot of extra time here. But a layer can choose to enable this extra local dimming. . . .
And if you’ve got an environment like I’m in right now, there’s literally no complete, maybe a little bit on one of those surfaces over there that’s a complete black. On most systems, most scenes, it doesn’t wind up actually benefiting you. . . .
There’s still limits where you’re not going to get, on an OLED, you can do super bright stars on a completely black sky. With local dimming, you can’t do that because if you’ve got a max value star in a min value black sky, it’s still gotta pick something and stretch the pixels around it. . . . We do have this one flag that we can set up for layer optimization.
John Carmack Meta Connect 2022 Unscripted Talk
Update June 14, 2023 PM: It turns out that Apple’s news release states, “This technological breakthrough, combined with custom catadioptric lenses that enable incredible sharpness and clarity . . . ” Catadioptric means a combination of refractive and reflective optical elements. This means that they are not “purely refractive” as I first guessed (wrongly). They could be pancake or some variation of pancake optics. Apple recently bought Limbak, an optics design company known for catadioptric designs, including those used in Lynx. They also had what they called “super pancake” designs. Assuming Apple is using a pancake design, the light and power output of the OLEDs will need to be about 10X higher.
Apple used a 3-element aspherical optic rather than the pancake optics used in the MQP and many other new VR designs. See this blog’s article Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC?, which discusses the efficiency issues with pancake optics. Pancake optics are particularly inefficient with Micro-OLED displays, as used in the AVP, because they require the unpolarized OLED light to be polarized for the optics to work. This polarization typically loses about 55% of the light (45% transmission). Then there is a 50% loss on the transmissive pass and another 50% loss on the reflection off the 50/50 semi-mirror in the pancake optics. Combined with the polarization loss, only about 11% of the OLED’s light makes it through the pancake optics. It should be noted that the MQP currently uses LCDs that output polarized light, so it doesn’t suffer the polarization loss with pancake optics, but it still has the 50/50 semi-mirror losses.
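To make the light budget concrete, here is a minimal sketch multiplying out the loss factors quoted above. The percentages are this article's rough approximations, not measured values for any specific headset:

```python
# Rough pancake-optics light budget using the approximate loss factors
# from the text (not measured values for any specific headset).
polarizer = 0.45   # polarizing unpolarized Micro-OLED light keeps ~45%
first_pass = 0.50  # transmissive pass through the 50/50 semi-mirror
reflection = 0.50  # reflection off the 50/50 semi-mirror

oled_throughput = polarizer * first_pass * reflection
print(f"Micro-OLED through pancake optics: ~{oled_throughput:.0%}")  # ~11%

# An LCD emits already-polarized light, so it skips the polarizer loss:
lcd_throughput = first_pass * reflection
print(f"LCD through pancake optics: ~{lcd_throughput:.0%}")  # ~25%
```

This ~11% throughput is why, as the update above notes, a pancake design would require roughly 10X the OLED light output compared to efficient refractive optics.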
The AVP uses four hand-tracking cameras, with the two extra cameras supporting the tracking of hands at about waist level. Having to hold your hand up to be tracked has been a major ergonomic complaint of mine since I first tried the HoloLens 1. Anyone who knows anything about ergonomics knows that humans are not designed to hold their hands up for long periods. Apple seems to be the first company to address this issue. Additionally, by all reports, the hand tracking is very accurate and likely much better than the MQP’s.
According to all reports, the AVP’s eye tracking is exceptionally good and accurate. Part of the reason is likely better algorithms and processing. On the hardware side, it is interesting that the AVP’s IR illuminators and cameras go through the eyepiece optics, whereas on the Meta Quest Pro, the IR illuminators and cameras sit closer to the eye on a ring outside the optics. The result is that the AVP cameras have a more straight-on view of the eyes. {Brad Lynch of SadlyItsBradley pointed out the difference in IR illuminator and camera location between the AVP and MQP in an offline discussion.}
As many others have pointed out, the AVP uses a computer-level CPU+GPU (M2) and a custom-designed R1 “vision processor,” whereas the MQP uses high-end smartphone processors. Apple has pressed its advantage in hardware design over Meta or anyone else.
The AVP (below left) has two squirrel-cage fans situated between the M2 and R1 processor chips and the optics. The AVP appears to have about a 37 Watt-hour battery (see next section) to support its two-hour rated battery life, suggesting the AVP “typically” consumes about 18.5 Watts. This is consistent with people noticing very warm/hot air coming out of the top vent holes. The MQP (below right) has a similar dual-fan cooling arrangement. The MQP has a 20.58 Watt-hour battery, which Meta rates as lasting 2-3 hours.
Because the AVP uses a Micro-OLED and a much more efficient optical design, I would expect the AVP’s OLEDs to consume less than 1W per eye, and much less when not viewing mostly white content. I, therefore, suspect that much of the power in the AVP goes to the M2 and R1 processing. In the case of Meta’s MQP, I suspect that a much higher percentage of the system power is lost driving the display through the inefficient optical architecture.
It should be noted that the AVP displays about 3.3 times the pixels, has more and higher-resolution cameras, and supports much higher-resolution passthrough. Thus, the AVP is moving massively more data, which also consumes power. So while the AVP appears to consume about double the power, the power “per pixel” is about 1/3rd less than the MQP’s, and probably much less when considering all factors. Considering that the AVP’s processing appears much more advanced, this demonstrates Apple’s processing efficiency.
CORRECTION (June 14, 2023): Based on information from reader Xuelei Zhang, I was able to confirm that the widely reported tweet of the so-called Apple Vision Pro battery was a hoax; what was shown is the battery used in a Meta Quest 2 Elite. You can see in the picture on the right how the part number is the same, and there is a metal slug with a hole, just like the supposed AVP battery. I still think the AVP’s battery pack is similar in size to, or perhaps larger than, a 37Wh battery. In an article published today, Charger Labs estimates that the Apple Vision Pro could be in the 74Wh range, which is certainly possible but appears to me to be too big. It looks to me like the battery is between 35Wh and 50Wh.
Based on the available information, I would peg the battery to be in the 35 to 50Wh range, and thus the “typical” power consumption of the AVP to be in the 17.5W to 25W range, or about two times the Meta Quest Pro’s ~10W.
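The arithmetic behind these estimates is simple: average draw is battery capacity divided by rated runtime. A minimal sketch, using this article's estimated (not official) capacity and runtime figures:

```python
def avg_power_watts(battery_wh: float, rated_hours: float) -> float:
    """Average power draw implied by battery capacity and rated runtime."""
    return battery_wh / rated_hours

# Estimated figures from this article, not official specifications:
print(avg_power_watts(37.0, 2.0))             # AVP mid-estimate: 18.5 W
print(round(avg_power_watts(20.58, 2.0), 1))  # MQP at the 2-hour end: ~10.3 W
print(round(avg_power_watts(20.58, 3.0), 1))  # MQP at the 3-hour end: ~6.9 W
```

The same formula applied to the 35Wh and 50Wh bounds at a 2-hour runtime gives the 17.5W to 25W range quoted above.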
Numerous articles and videos, which I think are erroneous, report that the AVP has a 4789mAh/18.3Wh battery. Going back to the source of those reports, a tweet by Kosutami, it appears that the word “dual” was missed. Looking at the original follow-up tweets, the report is clear that two cells are folded around a metal slug and, when added together, would total 36.6Wh. Additionally, comparing the AVP’s battery to scale with the headset, it appears to be about the same size as a 37Wh battery I own, which is what I was estimating before I saw Kosutami’s tweet.
Importantly, if the AVP’s battery capacity is doubled, as I think is correct, then the estimated power consumption of the AVP is about double what others have reported, or about 18.5 Watts.
The MQP battery was identified by iFixit (above left) to have two cells that combine to form a 20.58Wh battery pack, or just over half that of the AVP.
With both the MQP and AVP claiming similar battery life (big caveat, as both are talking “typical use”), it suggests the AVP is consuming about double the power.
Based on my quick analysis of the optics and displays, I think the AVP’s displays consume less than 1W per eye, or less than 2W total. This suggests that the bulk of the ~18W is used by the two processors (M2 and R1), data/memory movement (often ignored), the many cameras, and the IR illuminators.
In Part 2 of this series, I plan to discuss the many user problems I see with the AVP’s battery pack.
This blog does not seriously follow audio technology, but by all accounts, the AVP’s audio hardware and spatial sound processing capability will be far superior to that of the MQP.
In many ways, the AVP can be seen as the “Meta Quest Pro done much better.” If you are doing more of a “flagship/Pro product,” it better be a flagship. The AVP is 3.5 times the current price of the MQP and about seven times that of the Meta Quest 3, but that is largely irrelevant in the long run. The key to the future is whether anyone can prove that the “vision” for passthrough VR at any price is workable for a large user base. I can see significant niche applications for the AVP (support for people with low vision is just one, although the display resolution is overkill for this use). But as I will discuss next time, there are giant holes in the applications presented.
If the MQP or AVP solved the problems they purport to solve, the price would not be the major stumbling block. As Apple claimed in the WWDC 2023 video, the feature set of the AVP would be a bargain for many people. Time and volume will cure the cost issues. My problem (a teaser for Part 2) is that neither will be able to fulfill the vision they paint, and it is not a matter of a few thousand dollars or a few more years of development.