Meta Quest 3S is a disappointing half-step to Carmack’s low-cost VR vision
It's been just over two years now since soon-to-depart CTO John Carmack told a Meta Connect audience about his vision for a super low-end VR headset that came in at $250 and 250 grams. "We're not building that headset today, but I keep trying," Carmack said at the time with some exasperation.
On the pricing half of the equation, the recently released Quest 3S headset is nearly on target for Carmack's hopes and dreams. Meta's new $299 headset is a significant drop from the $499 Quest 3 and the cheapest price point for a Meta VR headset since the company raised the price of the aging Quest 2 to $400 back in the summer of 2022. When you account for a few years of inflation in there, the Quest 3S is close to the $250 headset Carmack envisioned.
Unfortunately, Meta must still seriously tackle the "250 grams" part of Carmack's vision. The 514g Quest 3S feels at least as unwieldy on your face as the 515g Quest 3, and both are still quite far from the "super light comforts" Carmack envisioned. Add in all the compromises Meta made so the Quest 3S could hit that lower price point, and you have a cheap, half-measure headset that we can only really recommend to the most price-conscious of VR consumers.
How This District Tech Coach Still Makes Time to Teach — in a Multi-Sensory Immersive Room
Miguel Quinteros spent over a decade as something of a tech-savvy teacher — one not afraid to try new things in the classroom, in hopes that they would make learning more interesting, more intuitive and more engaging for his students.
He took that proclivity to the next level a few years ago, when he accepted a position as a K-12 technology coach in a small school district in western Michigan.
Quinteros loves the work he gets to do: solving problems for teachers, students and administrators in his rural farming community, removing obstacles that come their way, and continuing to look for ways to make learning more fun and approachable for students.
And he hasn’t had to abandon teaching. In 2022, Quinteros’ district, Mason County Central School District, opened a first-of-its-kind immersive room that, with advanced augmented and virtual reality technology, allows students to deepen their learning with interactive, sensory-oriented lessons — from the World War I trenches to erupting volcanoes to ancient Greece. Quinteros manages the immersive room for the district and helps bring lessons to life for children of all ages.
“I just get to do the fun part now: teach,” he shares. “I don't do the grading and the discipline anymore.”
In any given school, a robust school staff is quietly working behind the scenes to help shape the day for kids. In our Role Call series, we spotlight staff members who sometimes go unnoticed, but whose work is integral in transforming a school into a lively community. For this installment, we’re featuring Miguel Quinteros.
The following interview has been lightly edited and condensed for clarity.
Name: Miguel Quinteros
Age: 51
Location: Scottville, Michigan
Role: K-12 technology coach
Years in the field: Three in current role, after 11 as a teacher
EdSurge: How did you get here? What brought you to your role as a technology coach?
Miguel Quinteros: Well, I'm originally from El Salvador. I came when I was 25 for medical treatment, and then I had to stay in the country and find something to do. So I became a youth minister with the Catholic Church. Then I thought, ‘Oh, I like to work with young people,’ so I decided to become a teacher. When I was studying to become a teacher, I had to choose a major and a minor, and I picked social studies as my major and computer science as my minor. With my minor being computer science, I focused a lot on how to use technology in the classroom, how to do things that we would not be able to do otherwise.
Once I became a teacher, even though I was teaching Spanish, computer science and social studies to middle and high school students, I was always using technology in the classroom. It was a small town, and word got out. After the pandemic, I think a lot of school districts realized that teachers needed more support with technology, and a lot of tech coach positions came up. So then the district where I work now actually recruited me to come take this position.
When people outside of school ask you what you do, like at a social event, how do you describe your work to them?
Most of the time, I don't like to tell people what I do. I feel like, especially being Hispanic, when people see me in social [settings], they assume that I work in the fields doing migrant work, agriculture. And the moment they know what I do, it’s almost like they give me more importance. I like people to see me for who I am as a person, not for what I do.
But if I meet somebody, and I can see that they genuinely accept me for who I am, then I open up more with them. Otherwise, I guess I'm kind of guarded with this topic. It's sad, but that's the reality, and I have to live in my skin every day.
Let’s say you met someone who was genuinely interested in you. How would you describe to them what your work entails, if you were feeling really talkative and generous that day?
I’d tell them I am a technology coach, and most people are like, ‘What is that?’ Because these are kind of new positions that have emerged. And then I explain that I go into classrooms and help teachers use technology, to make classrooms more engaging. I also order technology for the teachers and for the students — physical technology as well as learning apps. I provide teachers with training on how to use that technology.
And then they ask more questions. If they said, ‘So you don't teach kids anymore?’ then I tell them about what I do with teaching young kids, too. My position is really unique because we have, in our district, an AR/VR immersive room, which I run and I create content for when I have downtime. It’s the first of its kind in a K-12 building in the whole country, and it's open for our K-12 students. It’s this room with three big walls with projectors that become interactive to the touch and with surround sound. The floor is also interactive. It's like virtual reality without the goggles.
If I didn’t have that immersive room, I would probably miss being in the classroom, because I went to school to be a teacher. And I like that part, the teaching aspect.
When did the immersive room open in your district? And what are you teaching kids in that setting? What does that look like?
The immersive room was an initiative for the district right after the pandemic. They were brainstorming ideas on how to get kids to come back to school after such a long period of time away.
So far it has accomplished that goal. We’re a rural community. We don't have that much funding, and our kids come from very poor homes and backgrounds. A lot of children have never been to a museum, never been to cool places in the big city. With the immersive room, basically we can recreate any of that.
We can take a field trip to the deepest part of the ocean, for example. I have this one immersive experience that starts on the surface of the ocean and then lowers depending on what part of the ocean you want to visit. If you want to go to the part where the coral reefs are, or if you want to go to the deep part of the ocean where it's dark and no light gets through, you can do that. And then once we are there, in the ocean, the buttons are interactive in the walls and the children take turns touching those buttons, which gives them information about the specific aspect of the ocean. So the kids come and they get to touch the walls and interact and learn that way. And the room also has this four-dimensional aspect. If I want to bring a seashore scent into this experience, I can upload that so they can smell like they're right there in the ocean. And there's also fans that can activate and recreate different wind variance.
So that's what makes the lesson more interactive. We have other lessons to go to the moon, where we play with the gravity of the moon. There's bricks that they pull with their hands, and they fall and it simulates gravity. And then we talk about gravity. ‘What happens if we throw this brick right here on earth? How fast would that go? And look what happens if we throw this brick on the moon and how much slower it goes down.’ Then we’ll learn about the phases of the moon, how the moon interacts with the oceans and how that influences us and our daily lives on earth. This is what makes it really cool for the students.
That sounds incredible. I've never heard of anything like that. And you’re saying you teach all grade levels in the immersive room?
Yes, right now, but the way it works is the teachers schedule time with me and they bring the kids. The teachers are there in the classroom with me also. When they sign up, they give me an idea of what they expect to see in the immersive room. And then when they come, I have the lesson ready and the moment they walk in, boom, they are immersed in the lesson. That's what I like about the system.
What does a hard day look like in your role?
Sometimes, I have to make sure that rostering is OK. That means I have to spend the whole day fixing data and correcting names of students and making sure that everything is properly entered in the system and that students have access to their devices. And I have literally spent days repetitively deleting duplicate students. I guess that would be a hard day, just the monotonous work. I like variety.
What does a really good day look like?
A great day for me is when I get to do a little bit of everything: when I get to see the students, when I get to teach at least one class, when I get to interact with the teachers, helping them brainstorm ideas on how can we include students in this learning process with an app, and when I get to do some purchases too on that day, for some things that the teachers really need.
It just fills my heart when I am able to advocate for them because I tell them, ‘I like to do for you what nobody did for me when I was a teacher.’ Nobody will come and say, ‘What do you need? How are things going?’ I like to do that on a daily basis. If I find myself with the downtime, I don't stay here at my desk. I walk and I go to the other buildings, and it’s like, ‘Oh, Miguel, by the way,’ and then they need me for something. I get to interact with the principal. I get lots of hugs when I go to the lower elementary with the younger kids, like kindergarten to second grade.
So I guess a fulfilling day for me would be when I get to serve all of my clients — and in my job, my clients are students, teachers, admin, and anyone who is walking through this building — and when I get to make their lives better, a little bit lighter.
What is an unexpected way that your role shapes the day for kids?
One way is all the educational apps that they use on a daily basis. If something goes wrong with it, they call me. But if everything is running smoothly, it’s because of the job I do. I guess that's where my job gets taken for granted, when everything is running smoothly, everything is in place. We use tons of different learning apps — from Google Classroom to Clever — and I'm the person responsible for rostering them and then training the teachers.
What do you wish you could change about your school or the education system today?
I wish that the teaching profession would be more respected, that teachers would be able to get all the resources they need and the support that they need. I wish the politicians would put more money where their mouth is. Teachers are underappreciated. I wish that our society would realize that without teachers, there are no other careers. There's no doctors, there's no lawyers, there's no politicians — without teachers.
Also one of the things that I wish we could change is that we expect all students to have the same credits. In Michigan, if you want to graduate high school, you have to have three science credits, four social studies credits, four ELA. Everyone has to have the same. And I think that's seriously wrong because not all kids are the same. Everybody has different needs, everybody has different dreams, everybody has different backgrounds. We should provide students with a variety of choices.
Like OK, imagine this kid who is terrible at reading and he hates social studies, but he's a hands-on kind of kid and he likes to take things apart. Why not provide a path for this kid where he will get to graduate with a high school diploma and with skills on how to do the particular job that the kid wants?
Your role gives you unique access and insight to today's young people. What's one thing you've learned about them through your work?
I’ve learned about how life is a lot simpler in a kid’s mind, and they know the joy of living day to day. When a kid comes and gives you a hug, they really mean it. When they give you a high five, it's because they want to do that. I am touched by the sincerity of the kids and how many times they teach us that life can be fun, life is fun.
Before I became a teacher, I was doing youth ministry and I was recruiting this kid, this young man, and I was like, ‘Hey, I have some fun programs at the church. Come and join us.’ He looked at me and said, ‘What kind of fun? Your kind of fun, or my kind of fun?’ I said, ‘That is an absolutely great question.’
That kid kind of changed my life because when I became a teacher, I always kept that in mind. Still to this day, that echoes in my head: ‘What kind of fun? Is it your kind of fun, or my kind of fun?’ Learning does not have to be boring. It should be fun. And that was my passion, to make learning fun for the students, to the point that they don't realize that they are learning because they're having too much fun.
That's what I like about students. Sometimes they can challenge you, they can ask you questions, and if you listen to them, we can learn a lot from young kids. I have learned a lot from them.
Apple makes a short film for Vision Pro that takes your breath away
UbiSim, a Labster company
UbiSim is the first immersive virtual reality (VR) training platform built specifically for nurses. It is a complete simulation lab that provides nursing trainees with virtual access to a variety of clinical situations and diverse patients in a broad continuum of realistic care settings, helping institutions to overcome limited access to hospitals and other clinical sites for nursing students.
This cool tool allows institutions to create repeatable, real-life scenarios that provide engaging, standardized multi-learner experiences using VR headsets. It combines intuitive interactions, flexible modules, and immediate feedback. These contribute to developing clinical judgment, critical thinking, team interaction, clear communication, and patient engagement skills that enhance safe clinical practice and are essential to improving Next Generation NCLEX test scores.
UbiSim reduces the burden of purchasing and maintaining expensive simulation lab equipment, allowing nursing programs to scale and standardize their simulation activities. Faculty choose from 50-plus existing training scenarios created in collaboration with nursing educators and simulation experts. Educators may also customize content or create original scenarios to fit learning objectives.
Founded in 2016, UbiSim has been a Labster company since 2021. The UbiSim customer roster has grown by 117% since Fall 2022, extending its footprint to universities, community colleges, technical colleges, and medical centers in nine countries, including institutions in 21 U.S. states. UbiSim now partners with 100-plus nursing institutions in North America and Europe to advance the shared mission of addressing the nursing shortage by reducing the cost, time, and logistical challenges of traditional simulation methods and scaling high-quality nursing education. For these reasons and more, UbiSim, a Labster company, is a Cool Tool Award Winner for “Best Virtual Reality / Augmented Reality (AR/VR) Solution” as part of The EdTech Awards 2024 from EdTech Digest. Learn more.
Mixed Reality at CES & AR/VR/MR 2024 (Part 3 Display Devices)
Update 2/21/24: I added a discussion of the DLP’s new frame rates and its potential to address field sequential color breakup.
Introduction
In part 3 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies, I will discuss display companies.
As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I felt the need to add more information on the companies. So, I decided to release each video in sections with a companion blog article with added information.
Outline of the Video and Additional Information
The part of the video on display companies is only about 14 minutes long, but with my background working in displays, I had more to write about each company. The times in blue on the left of each subsection below link to the YouTube video section discussing a given company.
00:10 Lighting Silicon (Formerly Kopin Micro-OLED)
Lighting Silicon is a spinoff of Kopin’s micro-OLED development. Kopin started making micro-LCD microdisplays with its transmissive color filter “Lift-off LCOS” process in 1990. In 2011, Kopin acquired Forth Dimension Displays (FDD), a maker of high-resolution ferroelectric (reflective) LCOS. In 2016, I first reported on Kopin Entering the OLED Microdisplay Market. Lighting Silicon (as Kopin) was the first company to promote the combination of all-plastic pancake optics with micro-OLEDs (now used in the Apple Vision Pro). Panasonic picked up the Lighting/Kopin OLED with pancake optics design for their Shiftall headset (see also: Pancake Optics Kopin/Panasonic).
At CES 2024, I was invited by Chris Chinnock of Insight Media to be on a panel at Lighting Silicon’s reception. The panel’s title was “Finding the Path to a Consumer-Friendly Vision Pro Headset” (video link – remember this was made before the Apple Vision Pro was available). The panel started with Lighting Silicon’s Chairman, John Fan, explaining Lighting Silicon and its relationship with Lakeside Lighting Semiconductor. Essentially, Lighting Silicon designs the semiconductor backplane, and Lakeside Lighting does the OLED assembly (including applying the OLED material a wafer at a time, sealing the display, singulating the displays, and bonding). Currently, Lakeside Lighting is only processing 8-inch/200mm wafers, limiting Lighting Silicon to making ~2.5K resolution devices. To make ~4K devices, Lighting Silicon needs a more advanced semiconductor process that is only available in more modern 12-inch/300mm FABs. Lakeside is now building a manufacturing facility that can handle 12-inch OLED wafer assembly, enabling Lighting Silicon to offer ~4K devices.
Related info on Kopin’s history in microdisplays and micro-OLEDs:
- 2022 AWE Video Discussion with Brad Lynch Kopin (LCOS and OLED microdisplays)
- 2021 Pancake Optics Kopin/Panasonic
- 2013 Kopin Displays and Near Eye (Follow-Up to Seeking Alpha Article)
- 2013 Extended Temperature Range with LC-Based Microdisplays (about Kopin)
02:55 RaonTech
RaonTech seems to be one of the most popular LCOS makers, as I see their devices being used in many new designs/prototypes. Himax (Google Glass, Hololens 1, and many others) and Omnivision (Magic Leap 1&2 and other designs) are also LCOS makers I know are in multiple designs, but I didn’t see them at CES or the AR/VR/MR. I first reported on RaonTech at CES 2018 (Part 1 – AR Overview). RaonTech makes various LCOS devices with different pixel sizes and resolutions. More recently, they have developed a 2.15-micron pixel pitch field sequential color pixel with “embedded spatial interpolation done by the pixel circuit itself,” so (as I understand it) the 4K image is based on 2K data being sent and interpolated by the display.
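To illustrate what display-side interpolation amounts to, here is a minimal Python sketch that upscales a half-resolution frame to full resolution with ordinary bilinear interpolation. This is a conceptual stand-in only; RaonTech has not published the details of its in-pixel scheme.

```python
import numpy as np

def bilinear_upscale_2x(frame: np.ndarray) -> np.ndarray:
    """Upscale an HxW frame to 2Hx2W with bilinear interpolation.
    Conceptual stand-in only; RaonTech's actual in-pixel circuit is not public."""
    h, w = frame.shape
    ys = np.clip((np.arange(2 * h) + 0.5) / 2 - 0.5, 0, h - 1)
    xs = np.clip((np.arange(2 * w) + 0.5) / 2 - 0.5, 0, w - 1)
    y0 = np.minimum(ys.astype(int), h - 2)
    x0 = np.minimum(xs.astype(int), w - 2)
    wy = (ys - y0)[:, None]          # vertical blend weights
    wx = (xs - x0)[None, :]          # horizontal blend weights
    tl = frame[y0][:, x0]            # top-left neighbors
    tr = frame[y0][:, x0 + 1]        # top-right neighbors
    bl = frame[y0 + 1][:, x0]        # bottom-left neighbors
    br = frame[y0 + 1][:, x0 + 1]    # bottom-right neighbors
    return (1 - wy) * ((1 - wx) * tl + wx * tr) + wy * ((1 - wx) * bl + wx * br)

half_res = np.random.rand(1080, 1920)     # the "2K" data actually sent to the panel
full_res = bilinear_upscale_2x(half_res)  # the "4K" image produced on-display
print(full_res.shape)                     # (2160, 3840)
```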
In addition to LCOS, RaonTech has been designing backplanes for other companies making micro-OLED and MicroLED microdisplays.
04:01 May Display (LCOS)
May Display is a Korean LCOS company that I first saw at CES 2022. It surprised me, as I thought I knew most of the LCOS makers. May is still a bit of an enigma. They make a range of LCOS panels, their most advanced being an 8K (7680 x 4320) panel with a 3.2-micron pixel pitch. May also makes a 4K VR headset with a 75-degree FOV using their LCOS devices.
May has its own in-house LCOS manufacturing capability. May demonstrated using its LCOS devices in projectors and VR headsets and showed them being used in a (true) holographic projector (I think using phase LCOS).
May Display sounds like an impressive LCOS company, but I have not seen or heard of their LCOS devices being used in other companies’ products or prototypes.
04:16 Kopin’s Forth Dimension Displays (LCOS)
As discussed earlier with Lighting Silicon, Kopin acquired Ferroelectric LCOS maker Forth Dimension Displays (FDD) in 2011. FDD was originally founded as Micropix in 1988 as part of CRL-Opto, then renamed CRLO in 2004, and finally Forth Dimension Displays in 2005, before Kopin’s 2011 acquisition.
I started working in LCOS in 1998 as the CTO of Silicon Display, a startup developing a VR/AR monocular headset. I designed an XGA (1024 x 768) LCOS backplane and the FPGA to drive it. We were looking to work with MicroPix/CRL-Opto to do the LCOS assembly (applying the cover glass, glue seal, and liquid crystal). When MicroPix/CRL-Opto couldn’t get their backplane to work, they ended up licensing the XGA LCOS backplane design I did at Silicon Display to be their first device, which they had made for many years.
FDD has focused on higher-end display applications, with its most high-profile design win being the early 4K RED cameras. But (almost) all viewfinders today, including RED, use OLEDs. FDD’s LCOS devices have been used in military and industrial VR applications, but I haven’t seen them used in the broader AR/VR market. According to FDD, one of the biggest markets for their devices today is in “structured light” for 3-D depth sensing. FDD’s devices are also used in industrial and scientific applications such as 3D Super Resolution Microscopy and 3D Optical Metrology.
05:34 Texas Instruments (TI) DLP®
Around 2015, DLP and LCOS displays seemed to have been used in roughly equal numbers of waveguide-based AR/MR designs. However, since 2016, almost all new waveguide-based designs have used LCOS, most notably the Hololens 1 (2016) and Magic Leap One (2018). Even companies previously using DLP switched to LCOS and, more recently, MicroLEDs with new designs. Among the reasons the companies gave for switching from DLP to LCOS were pixel size and, thus, a smaller device for a given resolution, lower power consumption of the display plus ASIC, more choice in device resolutions and form factors, and cost.
DLP does not require polarized light, which is a significant efficiency advantage in room/theater projector applications that put out hundreds or thousands of lumens. But in near-eye displays, which need less than 1 to at most a few lumens because the light is aimed directly into the eye rather than illuminating a whole room, the power of the display device and its control logic/ASICs is much more of a factor. Additionally, many near-eye optical designs employ one or more reflective optics requiring polarized light.
Another issue with DLP is drive algorithm control. Texas Instruments does not give its customers direct access to the DLP’s drive algorithm, which was a major issue for CREAL (to be discussed in the next article), which switched from DLP to LCOS partly because of the need to control its unique light field driving method directly. VividQ (also to be discussed in the next article), which generates a holographic display, started with DLP and now uses LCOS. Lightspace 3D has similarly switched.
Far from giving up, TI is making a concerted effort to improve its position in the AR/VR/MR market with new, smaller, and more efficient DLP/DMD devices and chipsets and reference design optics.
Added 2/21/24: I forgot to discuss the DLP’s new frame rates and field sequential color breakup.
I find the new, much higher frame rates the most interesting. Both DLP and LCOS use field sequential color (FSC), which can be prone to color breakup with eye and/or image movement. One way to reduce the chance of breakup is to increase the frame rate and, thus, the color field sequence rate (there are nominally three color fields, R, G, & B, per frame). With DLP’s new, much higher 240Hz and 480Hz frame rates, the DLP would have 720 or 1,440 color fields per second. Some older LCOS ran as low as 60 frames/180 fields per second (I think this was used on Hololens 1), and many, if not most, LCOS devices today use 120 frames/360 fields per second. A few LCOS devices I have seen can go as high as 180 frames/540 fields per second. So, the newer DLP devices would have an advantage in that area.
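As a quick sanity check on those numbers, here is a minimal sketch of the arithmetic, assuming the nominal three color fields (R, G, and B) per frame:

```python
# Color-field rate = frame rate x color fields per frame (nominally 3: R, G, B).
def color_fields_per_second(frame_rate_hz: float, fields_per_frame: int = 3) -> float:
    return frame_rate_hz * fields_per_frame

for label, fps in [("older LCOS", 60), ("typical LCOS", 120), ("faster LCOS", 180),
                   ("new DLP", 240), ("new DLP, high rate", 480)]:
    print(f"{label:>18}: {fps:3d} frames/s -> {color_fields_per_second(fps):4.0f} color fields/s")
```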
The content below was extracted from the TI DLP presentation given at AR/VR/MR 2024 on January 29, 2024 (note that only the abstract seems available on the SPIE website).
My Background at Texas Instruments:
I worked at Texas Instruments from 1977 to 1998, becoming the youngest TI Fellow in the company’s history in 1988. However, contrary to what people may think, I never directly worked on the DLP. The closest I came was a short-lived joint development program to develop a DLP-based color copier using the TMS320C80 image processor, for which I was the lead architect.
I worked in the Microprocessor division developing the TMS9918/28/29 (the first “Sprite” video chip), the TMS9995 CPU, the TMS99000 CPU, the TMS34010 (the first programmable graphics processor), the TMS34020 (2nd generation), the TMS320C80 (the first image processor with 4 DSP CPUs and a RISC CPU), several generations of Video DRAM (starting with the TMS4161), and the first Synchronous DRAM. I designed silicon to generate or process pixels for about 17 of my 20 years at TI.
After leaving TI, I ended up working on LCOS, a rival technology to DLP, from 1998 through 2011. But when I was designing an aftermarket automotive HUD at Navdy, I chose to use a DLP engine for the projector because of its advantages in that application. I like to think of myself as product-focused: I want to use whichever technology works best for the given application. I see pros and cons in all the display technologies.
07:25 VueReal MicroLED
VueReal is a Canadian-based startup developing MicroLEDs. Their initial focus was on making single-color-per-device microdisplays.
However, perhaps VueReal’s most interesting development is their cartridge-based method of microprinting MicroLEDs. In this process, they singulate the individual LEDs, test and select them, and then transfer them to a substrate with either a passive (wired) or active (e.g., thin-film transistors on glass or plastic) backplane. They claim to have extremely high yields with this process. With it, they can make full-color rectangular displays, transparent displays (by spacing the LEDs out on a transparent substrate), and displays of various shapes, such as an automotive instrument panel or a tail light.
I was not allowed to take pictures in the VueReal suite, but Chris Chinnock of Insight Media was allowed to make a video from the suite, though he had to keep his distance from the demos. For more information on VueReal, I would also suggest going to MicroLED-Info, which has a combination of information and videos on VueReal.
08:26 MojoVision MicroLED
MojoVision is pivoting from a “Contact Lens Display Company” to a “MicroLED component company.” Its new CEO is Dr. Nikhil Balram, formerly the head of Google’s Display Group. MojoVision started saying (in private) around 2021 that it was putting more emphasis on being a MicroLED component company. Still, it didn’t publicly stop developing the contact lens display until January 2023, after spending more than $200M.
To be clear, I always thought the contact lens display concept was fatally flawed due to physics, to the point where I thought it was a scam. Some third-party NDA reasons kept me from talking about MojoVision until 2022. I outlined some fundamental problems and why I thought the contact lens display was a sham in my 2022 CES discussion video with Brad Lynch on the Mojovision contact display (if you take pleasure in my beating up on a dumb concept for about 14 minutes, it might be a fun thing to watch).
So, in my book, MojoVision the company starts with a major credibility problem. Still, they are now under new leadership and focusing on what they got to work, namely very small MicroLEDs. Their 1.75-micron LEDs are the smallest I have heard about. The “old” MojoVision had developed direct/native green MicroLEDs, but the new MojoVision is developing native blue LEDs and then using quantum dot conversion to get green and red.
I have been hearing about using quantum dots to make full-color MicroLEDs for ~10 years, and many companies have said they are working on it. Playnitride demonstrated quantum dot-converted microdisplays (via Lumus waveguides) and larger direct-view displays at AR/VR/MR 2023 (see MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)).
Mike Wiemer (CTO) gave a presentation on “Comparing Reds: QD vs InGaN vs AlInGaP” (behind the SPIE Paywall). Below are a few slides from that presentation.
Wiemer gave many of the (well-known in the industry) advantages of the blue LED with the quantum dot approach for MicroLEDs over competing approaches to full-color MicroLEDs, including:
- Blue LEDs are the most efficient color
- You only have to make a single type of LED crystal structure in a single layer.
- It is relatively easy to print small quantum dots; it is infeasible to pick and place microdisplay-sized MicroLEDs
- Quantum dot conversion of blue to green and red is much more efficient than native green and red LEDs
- Native red LEDs are inefficient in GaN crystalline structures that are moderately compatible with native green and blue LEDs.
- Stacking native LEDs of different colors on different layers is a complex crystalline growth process, and blocking light from lower layers causes efficiency issues.
- Single emitters with multiple-color LEDs (e.g., See my article on Porotech) have efficiency issues, particularly in RED, which are further exacerbated by the need to time sequence the colors. Controlling a large array of single emitters with multiple colors requires a yet-to-be-developed, complex backplane.
Some of the known big issues with quantum dot conversion with MicroLED microdisplays (not a problem for larger direct view displays):
- MicroLEDs can only have a very thin layer of quantum dots. If the layer is too thin, the light/energy is wasted, and the residual blue light must be filtered out to get good greens and reds.
- MojoVision claims to have developed quantum dots that can convert all the blue light to red or green with thin layers
- There must be some structure/isolation to prevent the blue light from adjacent cells from activating the quantum dots of a given cell, which would cause desaturation of colors. Eliminating color crosstalk/desaturation is another advantage of having thinner quantum dot layers.
- The lifetime and potential for color shifting with quantum dots, particularly if they are driven hard. Native crystalline LEDs are more durable and can be driven harder/brighter. Thus, quantum dot-converted blue LEDs, while more than 10x brighter than OLEDs, are expected to be less bright than native LEDs
- While MojoVision has a relatively small 1.37-micron LED on a 1.87-micron pitch, that still gives a 3.74-micron pixel pitch (assuming MojoVision keeps using two reds to get enough red brightness); the quick calculation below shows where that number comes from. While this is still about half the pixel pitch of the Apple Vision Pro’s ~7.5-micron-pitch OLED, a smaller pixel, such as a single emitter with multiple colors (e.g., Porotech), would be better (more efficient due to étendue; see: MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7)) for semi-collimating the light with microlenses, as waveguides need.
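A minimal sketch of that pixel-pitch arithmetic, assuming (as above) a 2x2 subpixel layout with a second red subpixel, and taking ~7.5 microns as the approximate Apple Vision Pro micro-OLED pitch:

```python
# Back-of-the-envelope pixel-pitch comparison (values taken from the discussion above).
subpixel_pitch_um = 1.87                      # MojoVision LED (subpixel) pitch
pixel_pitch_um = 2 * subpixel_pitch_um        # 2x2 subpixels (R, G, B + extra R) per pixel
avp_pixel_pitch_um = 7.5                      # approximate Apple Vision Pro micro-OLED pitch

print(f"MojoVision pixel pitch: {pixel_pitch_um:.2f} um")                            # 3.74 um
print(f"Apple Vision Pro / MojoVision: {avp_pixel_pitch_um / pixel_pitch_um:.1f}x")  # ~2.0x
```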
10:20 Porotech MicroLED
I covered Porotech’s single emitter, multiple color, MicroLED technology extensively last year in CES 2023 (Part 2) – Porotech – The Most Advanced MicroLED Technology, MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7), and my CES 2023 Video with Brad Lynch.
While technically interesting, Porotech’s single-emitter device will likely take considerable time to perfect. The single-emitter approach has the major advantage of supporting a smaller pixel since only one LED per pixel is required. It also requires only two electrical connections (power and ground) to the LED per pixel.
However, as the current level controls the color wavelength, this level must be precise. The brightness is then controlled by the duty cycle. An extremely advanced semiconductor backplane will be needed to precisely control the current and duty cycle per pixel, a backplane vastly more complex than LCOS or spatial color MicroLEDs (such as MojoVision and Playnitride) require.
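A toy model of that drive scheme may help show why the backplane is hard: per-pixel analog current must be held precisely while brightness is carried entirely by on-time. The current values below are placeholders made up purely for illustration; Porotech has not published its drive parameters.

```python
# Illustrative single-emitter drive model; NOT Porotech's actual parameters.
# Current level selects the emitted color; duty cycle (on-time) sets brightness.
ASSUMED_CURRENT_MA = {"red": 0.05, "green": 1.0, "blue": 10.0}   # made-up placeholder values

def drive_settings(color: str, brightness: float) -> tuple[float, float]:
    """Return (current_mA, duty_cycle) for one pixel during one color field."""
    current_ma = ASSUMED_CURRENT_MA[color]        # must be held very precisely per pixel
    duty_cycle = min(max(brightness, 0.0), 1.0)   # brightness is set only by on-time
    return current_ma, duty_cycle

# Full color comes from time-sequencing three color fields per frame, which further
# shrinks the per-color on-time budget (worsening the red-brightness issue noted below).
for color in ("red", "green", "blue"):
    print(color, drive_settings(color, 0.5))
```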
Using current to control the color of LEDs is well-known to experts in LEDs. Multiple LED experts have told me that based on their knowledge, they believe Porotech’s red light output will be small relative to the blue and green. To produce a full-color image, the single emitter will have to sequentially display red, green, and blue, further exacerbating the red’s brightness issues.
12:55 Brilliance Color Laser Combiner
Brilliance has developed a 3-color laser combiner on silicon. Light guides formed in/on the silicon act similarly to fiber optics to combine red, green, and blue laser diodes into a single beam. The obvious application of this technology would be a laser beam scanning (LBS) display.
While I appreciate Brilliance’s technical achievement, I don’t believe that laser beam scanning (LBS) is a competitive display technology for any known application. This blog has written dozens of articles (too many to list here) about the failure of LBS displays.
14:24 TriLite/Trixel (Laser Combiner and LBS Display Glasses)
Last and certainly least, we get to TriLite Laser Beam Scanning (LBS) glasses. LBS displays for near-eye and projector use have a perfect 25+ year record of failure. I have written about many of these failures since this blog started. I see nothing in TriLite that will change this trend. It does not matter if they shoot from the temple onto a hologram directly into the eye like North Focals or use a waveguide like TriLite; the fatal weak link is using an LBS display device.
It has reached the point that when I see a device with an LBS display, I’m pretty sure it is either part of a scam and/or the people involved are too incompetent to create a good product (and yes, I include Hololens 2 in this category). Every company with an LBS display (once again, including Hololens 2) lies about the resolution by confabulating “scan lines” with the rows of a pixel-based display. Scan lines are not the same as pixel rows because the LBS scan lines vary in spacing and follow a curved path. Thus, every pixel in the image must be resampled into a distorted and non-uniform scanning process.
Like Brilliance above, TriLite’s core technology combines three lasers for LBS. Unlike Brilliance, TriLite does not end up with the beams being coaxial; rather, they are at slightly different angles. This will cause the various colors to diverge by different amounts in the scanning process. TriLite uses its “Trajectory Control Module” (TCM) to compute how to re-sample the image to align the red, green, and blue.
TriLite then compounds its problems with LBS by using a Lissajous scanning process, about the worst possible scanning process for generating an image. I wrote about the problems with Lissajous scanning, which is also used by Oqmented (TriLite uses Infineon’s scanning mirror), in AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro. Lissajous scanning may be a good way to scan a laser beam for LiDAR (as I discussed in CES 2023 (4) – VoxelSensors 3D Perception, Fast and Accurate), but it is a horrible way to display an image.
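To see why resampling is unavoidable, here is a minimal Python sketch of a Lissajous trajectory and a crude nearest-neighbor resample of a conventional pixel grid onto it. The mirror frequencies are made-up, illustrative values, not TriLite's or Infineon's actual specifications.

```python
import numpy as np

# Made-up fast/slow mirror frequencies purely for illustration.
fx_hz, fy_hz = 22_000.0, 1_150.0
frame_t = 1.0 / 60.0                            # one 60 Hz frame
t = np.linspace(0.0, frame_t, 200_000)          # laser pulse times within the frame

# Lissajous path traced by the biaxial mirror, mapped onto a 640x480 raster.
x = (np.sin(2 * np.pi * fx_hz * t) * 0.5 + 0.5) * 639
y = (np.sin(2 * np.pi * fy_hz * t) * 0.5 + 0.5) * 479

# Each laser pulse lands on a curved, non-uniformly spaced path, so the source
# image must be resampled onto that path (here, simple nearest-neighbor).
image = np.random.rand(480, 640)                # stand-in for the frame to display
xi, yi = x.round().astype(int), y.round().astype(int)
pulse_values = image[yi, xi]

# Coverage is uneven: some pixels are hit many times per frame, others not at all.
hits = np.zeros_like(image)
np.add.at(hits, (yi, xi), 1)
print(f"pixels never hit in this frame: {(hits == 0).mean():.1%}")
```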
The information and images below have been collected from TriLite’s website.
As far as I have seen, it is a myth that LBS has any advantage in size, cost, and power over LCOS for the same image resolution and FOV. As discussed in part 1, Avegant generated the comparison below, comparing North Focals LBS glasses with a ~12-degree FOV and roughly 320×240 resolution to Avegant’s 720 x 720 30-degree LCOS-based glasses.
Below is a selection (from dozens) of related articles I have written on various LBS display devices:
- 2012 Cynic’s Guide to CES — Measuring Resolution – Discusses how LBS companies confabulate resolution with scan lines
- 2018 North’s Focals Laser Beam Scanning AR Glasses – “Color Intel Vaunt”
- 2015 Celluon Laser Beam Scanning Projector Technical Analysis – Part 1

More on LBS and Resolution:
- 2019 Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! – This article goes into the basic math behind LBS
- 2020 Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made) – This article details the Hololens 2’s very complex LBS scanning process and its problems
- 2021 AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro – Goes into the problems with Lissajous scanning in a display device.
- 2023 Humane AI – Pico Laser Projection – $230M AI Twist on an Old Scam (Title says it all)
- 2016 Wrist Projector Scams – Ritot, Cicret, the new eyeHand
- 2018 CES Haier Laser Projector Watch – (Wrist Projector Scams Revisited)
- 2018 Intel AR “Fixer-Upper” For Sale? Only $350M ???
- 2018 Magic Leap Fiber Scanning Display (FSD) – “The Big Con” at the “Core”
Next Time
I plan to cover non-display devices next in this series on CES and AR/VR/MR 2024. That will leave sections on Holograms and Lightfields, Display Measurement Companies, and finally, Jason and my discussion of the Apple Vision Pro.
If These Walls Could Talk: Pico Velasquez, Architecture and the Metaverse
In 2007 I discovered 'reflective architecture', an idea explored by Jon Brouchoud, an architect who was working in Second Life.
It was the concept that in a virtual environment buildings can move, shift, and morph based on user presence. Instead of buildings and environments as static objects, the 'affordances' of a programmable space allowed for them to have a computable relationship to the audience/user/visitor.
While today the idea might seem obvious, at the time it was a leading-edge idea that an architect could actually WORK in a virtual environment, let alone change our concept of space through his explorations.
Walking through one of Jon's experiments created a mental shift for me: first, because we didn't need to "port" standard concepts of what a space can be into virtual environments.
Later, I worked with Jon on the design of the Metanomics stage, the first serious virtual talk show:
This helped me to realize that his work also helped to open up new ways of thinking about the physical world and our relationship to space.
It took almost 15 years to achieve a similar shift in thinking.
And it happened because of Pico Velasquez.
Pico Velasquez and Walls That Talk
It doesn't happen often. I mean - how many Zoom calls, webinars and online 'events' have you been to? Especially over the last year? How many of them blur into each other?
But this session with Pico Velasquez may be the best hour you spend this year.
Sure, you might lose the sense of being there. Because one of the joys of the session was Pico's rapid-fire mind, which was able to lift off of the audience 'back chat' and questions like someone who can design a building, chat with her best friend, write a blog post and cook dinner at the same time.
Pico gave a tour of her work. And the session inverted the experience I had with Jon.
Where Jon showed that virtual environments can be living, breathing entities (with an implication for the physical world), Pico demonstrated that physical spaces can be computable, and that this has an implication for the Metaverse.
While deceptively simple, her work on Bloom, for example, was a living canvas that used a Unity game engine back end to create a narrative that responded to time of day and presence.
Pico gave us a hint of her process during the presentation:
Which resulted in a space that responds to people being nearby (watch the video for the full effect):
Her work on The Oculus, the main entrance to the new Seminole Hard Rock Casino & Hotel has a similar immersive and responsive quality:
Four Pillars for the Metaverse
Once Burning Man and the Social Galaxy (a project with Kenzo Digital for Samsung) came up, Pico started to shift into discussing the Metaverse.
Pico spoke to four main threads that challenge how we think about the spatial 'construction' of the Metaverse:
1. Multiple Layers of Content are Merging
Live streaming, gaming and social media are coming together. Whether it's streaming evolving to have a chat or a game evolving to have more social events (like concerts in Fortnite), there are now multiple 'layers' of content in virtual space.
2. We Need to Design for a New Spatial Dimension
Similar to the shift from radio to TV, it takes time to adapt to a new medium. This has long been the premise of my collaborator, Marty Keltz (who produced The Magic School Bus): that each shift in media requires a new "film grammar".
First, we port over our previous grammar and then we create a new one.
Pico points out that much of virtual/Metaverse architecture is... static buildings. And that the narrative isn't spatial but linear.
3. We Need to Think About Adaptable Spaces
On this, she really looped me right back to reflective architecture, which I spoke about at the top. But she brought some interesting new dimensions, commenting that Metaverse architecture can be adaptable across multiple variables including audience demographics.
4. Generative Design Is a Key Tool
Similar to my thinking about autonomous avatars, this is the work of a space being dynamic and generative - that forests, for example, should grow.
I'll be coming back to this a lot in the coming weeks. Because it speaks to two key ideas:
- That there will be parts of the Metaverse that exist, grow and thrive without even necessarily needing users. This will be highly relevant to mirror world contexts for enterprise, but will also create deep experiences and time scales that aren't normally visible in game or virtual worlds.
- That automation, generative design, autonomous agents, DAOs and other AI/computable experiences will lead to the Metaverse itself being sentient. We think of the Singularity as the moment when a 'computer' is as smart as a human: but I think we may be too anthropomorphic in how we view intelligence. The planet is an intelligent system. It might be that the Metaverse achieves the Turing Test for being an ecosystem before a computer passes the Turing Test for being human.
The Lines are Blurring Between the Physical and Digital
I have a feeling I'm going to circle back on Pico's talk several times. And this is a decidedly incomplete synopsis.
If nothing else, it reminds us that the lessons we're learning are now easily crossing boundaries between the physical and the 'meta' spatial world (which we're calling the Metaverse).
An architect can use a game engine to power a physical room, and then bring those tools and lessons into the Metaverse.
Tools (like Unreal 5) are evolving to allow things like fully destructible and generative spaces. This will allow for digital spaces that don't just mimic the physical world but can transcend it.
But perhaps most of all, it's a reminder that we're at a key inflection point, when cross-collaboration with other disciplines can generate profound value.
Just as fashion designers are bringing their skills into the design of digital fashion, and architects are bringing their skills in spatial development, all of us can play some role in this new world.
It has an economy, people, places, games, and work to do. Just like the real world.
It's time for all hands on deck as we shape a world that we can imagine, and what may result are lessons that can make our physical world better too.
Semi-Autonomous Avatars and the Metaverse
There's a moment when you log back in to single player mode in Grand Theft Auto.
The camera pulls back. Your character (Franklin, say) is walking out of a store and is waving goodbye to someone off camera. Then the camera slowly moves into a new position, hovering just above and behind Franklin, locking itself into third-person "game position".
It's a powerful illusion: first, that the game world was persistent: you might have logged off, but life in Los Santos went on without you. And second, that your avatar also lives a life of its own when you're not around. Sometimes you log back in and he's coming out of a movie theatre or cruising women on the street or exiting a convenience store.
The camera snaps back into place and you now inhabit the game character. You've taken over the controls.
Persistence in Games and Virtual Worlds
The GTA moment was seminal because it helped to reinforce the idea of persistent worlds. It provided a hint that when we log out, the worlds will continue without us.
And persistence is one of the key definitions of the Metaverse (a term which has otherwise become a sort of collective emblem of a shift in technologies rather than a specific destination).
More recently, a game like Rust has carried persistence into a deep game mechanic: the assets you create will be destroyed or stolen by other players when you log off. And so players band together, share calendars and set up schedules to guard their forts around the clock. The fact that the whole world gets reset once a month just adds...I don't know, a sense of existential futility or something.
An upcoming game like Seed (which I've come to believe will help us imagine new paradigms for the Metaverse) will drive that persistence deep into layers related to economies, wellness, politics and culture.
And so world persistence is profound on its own. But if your concept of persistence is driven mostly by multi-player game platforms, then you're probably missing the deeper point: that persistence speaks to it being a world, which indicates something which isn't static, which changes and does more than deliver a series of grinds and quests according to a pre-determined schedule.
Sure, when you log into GTA or RDR there are already people online doing stuff, but the world itself hasn't particularly changed since the last time you logged in.
Minecraft is more world-like because its persistence is coded right down to the atom. Everything is subject to change. Everything can be re-shaped by other players. By the time you log in again, someone will have opened a portal or built a castle.
But what kind of 'world' is it when the people who inhabit it can stop time? Why is it that if the world is persistent, its citizens can simply log in or log off?
Should our avatars be static in worlds that will increasingly have the physics, economies and environments of the physical one (or imagined versions of the same)?
Avatars and Characters: What's The Difference
For me, the GTA moment was more profound for its hint that our avatars might have lives of their own.
But before we go there, we should take a brief moment to note the difference between an avatar and a game character. And maybe it's enough to say this:
- When a 'gamer' enters a virtual world, they often think of their avatar as their 'character'. They will talk about the avatar in the third person - "my character". They will talk about playing. There is a remove between the player and their representation.
- But at some point, you see a shift (at least in a fully realized virtual environment). Their 'character' becomes YOU. It is not some third party. You might be controlling that emblem of yourself, but you're not 'playing' it.
- There are neurological reasons for this. The brain has difficulty distinguishing between the physical and virtual manifestations of our 'selves'.
- And so, when I talk about semi-autonomous avatars, I am not talking about game characters who are part of some story in which our sense of agency is limited.
Franklin in GTA is a character. You might identify with him, you might immerse in his life story, but he isn't YOU.
Set up a new 'toon for GTA Online, however, and you're getting a lot closer to being an avatar. The 'character' you use to play Fortnite is an avatar (especially when you spring for the skins) even if its life is mostly a series of grinds and the occasional Kaskade concert. And certainly when you log-in to Spatial.io your representation is clearly a version of you.
As we spend more and more of our time in synthetic worlds, these avatars, these extensions of ourselves, are US. Your avatar will have closets filled with clothes and NFT-backed sneakers, you will live in a $500,000 virtual house, and you'll head to a concert with 1.2 million other avatars (sharded, but still). Or...maybe not YOU (or ME), but someone will!
Semi-Autonomous Avatars and Why It Matters
There are ideas we have about the Metaverse. Some of them have become so firmly ingrained that we stop questioning the assumptions.
The idea of a semi-autonomous avatar challenges a few of those assumptions:
That the Metaverse is a VR-only experience
This comes, of course, from Snow Crash and Ready Player One. The idea that we slip into our avatar like a skin. That an avatar only exists when we don a pair of goggles and log-in. That the correlation between how our body moves and the movement of our avatar is one-to-one.
But the Metaverse will defy the gadgets that we use to access it. As I've written previously, many people will have their first experience of the Metaverse while sitting in their car.
A semi-autonomous avatar reminds us that there will be 'instances' and sections of the Metaverse where our digital selves can act at least semi-independently of the devices we wear. We might be able to observe or control them through things other than glasses.
That we will want to move seamlessly between worlds
This is a key tenet/conventional wisdom of the Metaverse. It's this idea that we want to log in to some kind of waiting room and use it to pop in and out of a constellation of virtual worlds.
I've never entirely understood this idea. I suppose it's driven by our experience of the Web - as if we move around Websites seamlessly (when in fact there are still a million friction points that prevent our identities, wallets, permissions and 'inventory' from travelling with us as we surf).
Regardless, it doesn't necessarily solve a clearly identified problem. WHY do I want my avatar to jump from Minecraft to Fortnite again? And if I could LEAVE a version of my avatar back in Minecraft to guard my farm, wouldn't I?
I get the idea of IDENTITY. But more often than not most users prefer to slip between identities rather than be burdened with a single one. It's no different than the personas we 'wear' as we move from home, to work, to community. We bring different selves.
This isn't to say that we shouldn't create standards or that we shouldn't be able to bring our avatar from one world to another. It IS to say that there are other use cases as well.
That the Metaverse is a "lean-forward" medium
It's a 'truth' given that there are only two types of media: lean back and lean forward.
Matthew Ball has reinforced the dichotomy between two types of companies as he looks at the future of entertainment:
"Just as gaming seeks Hollywood to adapt their stories in order to build love, Hollywood seeks out gaming to adapt theirs. But in this latter case, Hollywood faces existential threats".
Hollywood can create love. But in his calculation it hasn't mastered the art of the lean-forward experience.
In a macro sense, this division might be true. But it ignores the very messy middle: the worlds which aren't love. Which don't even require much attention.
Pop into GTA these days and listen to the chat. If you run into a group with any experience at the game, you get the sense that they're barely paying attention: they're running another supply quest for their motorcycle club but mostly trash talking other people in the channel.
Attend a virtual dance and you can't even be sure half the people have their eyes on the computer. They're probably watching Netflix or streaming onto Twitch instead where they're chatting up their superfans.
In fact, I'd propose that there is a significant majority of content that is successful because it allows for split attention. Your avatar is there, it's dancing, but there isn't really anyone home.
The semi-autonomous part? At least it has some good pre-recorded dance moves.
The Metaverse is Entertainment
Which brings us to a final myth (although I could go on): which is that the Metaverse will be entertainment.
If I can imbue my avatar with a set of automations, it can also perform tasks. An avatar that can perform tasks, in worlds which will have their own economies, is an avatar that can make money.
Large chunks of the Metaverse will have economies and auctions and shop keepers and fashion shows. It will have round table discussions on the state of bitcoin and mini stock markets where you can trade NFTs.
But even putting that aside, the speed with which automation is becoming a key underpinning of the Internet itself means that the Metaverse will adapt those same technologies.
I have a workflow which connects Tweets to Airtable and over to Notion and then back to an email reminder system and ToDoIst item. I use Automate.io to hook it all up. I take a single action and it creates a cascade of value through a series of systems.
All I need to do is hook it up to GPT-3 and it could maybe even just auto-generate these posts! I'd have a fully functioning enterprise that required almost zero human intervention.
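As a toy example of the kind of cascade described above: one trigger fans out into several downstream actions. The trigger and step functions here are hypothetical stand-ins, not Automate.io's or any other real service's API.

```python
# Hypothetical automation cascade: one trigger fans out into several downstream actions.
# None of these functions correspond to a real API; they are illustrative stubs only.
def on_new_tweet(tweet: dict) -> None:
    row_id = add_to_airtable(tweet)                  # 1. log the tweet in a base/spreadsheet
    page_id = add_to_notion(tweet, row_id)           # 2. create a linked note
    send_email_reminder(page_id)                     # 3. queue a follow-up reminder
    add_todo_item(f"Review: {tweet['text'][:40]}")   # 4. create a task

def add_to_airtable(tweet: dict) -> str: return "rec123"
def add_to_notion(tweet: dict, row_id: str) -> str: return "page456"
def send_email_reminder(page_id: str) -> None: print("reminder queued for", page_id)
def add_todo_item(title: str) -> None: print("todo:", title)

on_new_tweet({"text": "Semi-autonomous avatars will do this for us in the Metaverse."})
```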
DAOs will be set up in the Metaverse. They will mostly run themselves and exist entirely in a synthetic world. We will be able to participate in them (or our avatars will) and we will be able to vote and take actions. And some of those actions we'll be able to automate.
In other words, because there will be economic value in the Metaverse, many of us will want to maximize the value that our avatars create.
The Ambient Metaverse
Currently, the idea of a semi-autonomous avatar brings scripting hacks and automated farmers to mind. They're considered hacks because they're associated with game environments and are used to bypass the (written or unwritten) rules.
Or, they're not considered ways to make an avatar autonomous: the macros you use in Warcraft are just enhancements. Primarily because they're seen to aid the player... who is controlling a character.
But our avatars will end up with all sorts of macros and sub-routines. They will be able to act a bit on their own and give the appearance of presence.
I know people who leave their avatars logged in and resting in a virtual bed while their human controllers sleep. They feel a need to send a signal to others in the synthetic world that they are 'present' even if the human behind the avatar is asleep.
On the other end of the spectrum, our avatars may be mostly invisible. We'll move through virtual worlds seen through our glasses or while driving our cars. In those cases, the autonomy of our avatars will have some real meaning because the sub-routines that they perform will be a large point of our presence in those corners of the Metaverse.
We have lived with the myth of Ready Player One: a lean-forward, entertainment and game-focused 'Metaverse' (owned, mind you, by a benevolent dictator) that you log into when you throw on a haptic suit and some goggles.
The reality is that the Metaverse will often be ambient. We'll skim it. We'll dip in and out or barely even notice it. It will be always on and it (and our avatar) will live a life of its own whether we pay attention to it or not.