
Google’s Early VR Modeling Tool ‘Blocks’ is Getting Revived as Open Source Software

17 July 2024 at 12:12

Google announced that Blocks, the 3D asset creation tool released for VR in 2017, is following in the footsteps of Tilt Brush by going open source.

Google announced the news in a blogpost, noting that development of Open Blocks is following the example put forth by Open Brush, a version of Google’s Tilt Brush XR creation tool which was open sourced in 2021.

“We now wish to share the code behind Google Blocks, allowing for novel and rich experiences to emerge from the creativity and passion of open source contributors such as the Icosa Foundation,” Google says.

The Icosa Foundation is also known for developing Open Brush and Icosa Gallery, a replacement for Google Poly.

“Over the coming months, we’ll be working hard to bring the Open Blocks codebase up to modern standards,” Icosa Foundation says in a blogpost. “First up, we’ll be switching to use the OpenXR framework and new input system within Unity, enabling us to target Open Blocks for a much wider range of XR devices. At that point, we will be aiming to create a standalone XR port, and bring Open Blocks to the Quest and Pico platforms. Along the way, there will be plenty of opportunity to add immersive XR features such as MR passthrough.”

The team maintains its long-term roadmap will “transform Open Blocks into a full modelling suite, giving you more control over materials, adding texturing support, and enabling more powerful tools from traditional CSG pipelines.”

The open source archive of the Blocks code can be found on GitHub. Additionally, versions of Google Blocks will remain available on both Steam and the Meta PC Store, although it should be noted that the last time these received an update was in 2018.

The post Google’s Early VR Modeling Tool ‘Blocks’ is Getting Revived as Open Source Software appeared first on Road to VR.

Meta’s Social VR Platform Now Coming to Every Country Supporting Quest

25 June 2024 at 16:52

Meta’s social VR platform Horizon Worlds hasn’t been available to everyone, with the company restricting the app’s use to only a few countries. Now it’s finally rolling out to every region where Quest is supported.

Despite Horizon Worlds being available on the web since last January, geolocation restrictions only allowed Quest users access in select countries: Canada, France, Iceland, Ireland, Spain, the United Kingdom, and the United States.

Meta today announced that starting this week the company will begin rolling out Horizon Worlds “to people in all Meta Quest markets in supported languages so more people can connect with each other around the globe.”

This includes access for users 13+ across the following Quest-supported regions: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Iceland, Ireland, Italy, Japan, Netherlands, New Zealand, Norway, Poland, Sweden, Switzerland, Taiwan, the United Kingdom, and the United States. Users must be 14+ in South Korea and Spain.

This comes as the company ostensibly seeks to promote Horizon Worlds as a more fundamental social layer to its rapidly growing platform, which is soon set to include third-party VR headsets for the first time.

Horizon Worlds will come part and parcel with Horizon OS (formerly Quest OS) and the Horizon Store (formerly the Quest Store), which will be available on Quest-like headsets built by ASUS, Lenovo and Xbox.

The post Meta’s Social VR Platform Now Coming to Every Country Supporting Quest appeared first on Road to VR.

Former Head of Qualcomm’s XR Division Joins Google to Guide XR Strategy

19 June 2024 at 13:56

Hugo Swart, the former head of Qualcomm’s XR division, announced he’s joined Google, where he’ll lead the company’s XR Ecosystem Strategy and Technology efforts.

Swart shared the news in a LinkedIn update, noting the move happened a few months ago:

Happy to share that I joined Google couple months ago and am responsible for XR Ecosystem Strategy and Technology. Super excited to continue the XR journey and working with you all – great things ahead! Thank you Shahram Izadi for the opportunity! Looking forward to AWE this week!!

As General Manager and Vice President of XR at Qualcomm, Swart was a driving force behind the company’s Snapdragon XR series of chipsets, which currently power the majority of standalone headsets on the market, including all of Meta’s Quest headsets to date.

Hugo Swart introduces Snapdragon XR | Courtesy Qualcomm

Following Swart’s departure from Qualcomm in February, Alex Katouzian, Group GM of the Mobile, XR, and Compute Business Unit, is currently overseeing XR at Qualcomm.

Swart is joining Google at a pivotal moment in XR, as the company recently announced a strategic technology partnership with Magic Leap, which is seen as an effort to keep up with Meta, Apple, and others in a race to control the burgeoning AR headset market.

This follows a notable setback last year when Google reportedly shelved its Project Iris AR glasses following mass restructuring within the company, which included layoffs, reshuffles, and the departure of Clay Bavor, Google’s then-head of AR and VR.

Meanwhile, Google is developing a new Android-based platform for Samsung’s upcoming XR headset announced back in February 2023, which is set to be powered by Qualcomm silicon. Google is also rumored to be developing a “Micro XR” platform for XR glasses, which is said to use a prototyping platform internally known as “Betty.”

The post Former Head of Qualcomm’s XR Division Joins Google to Guide XR Strategy appeared first on Road to VR.

Major ‘ShapesXR’ Update Streamlines Collaborative XR Prototyping, Releases Web Editor for PC Users

18 June 2024 at 16:59

Spatial design and prototyping app ShapesXR (2021) just launched its 2.0 update, which streamlines cross-platform support, letting team members more easily edit and collaborate in mixed and virtual reality, and now also on the web.

ShapesXR 2.0 is packing in a number of new features today to enhance the cross-platform app, which not only supports Quest 1/2/3/Pro and Pico 4, but also now standard flatscreen devices with the addition of a web editor for users joining with mouse and keyboard.

Check out all of the things coming to ShapesXR 2.0 below:

Enhanced UI/UX: Shapes has been fully refreshed with an entirely new interface that takes unique advantage of depth and materials. The information architecture has been simplified to enhance ease of use and learnability.

Interactive Prototyping: New triggers and actions have been introduced to help designers explore more robust interactions, allowing them to use button presses, physical touch, and haptics to design dynamic and engaging spatial experiences.

Spatial Sound Prototyping: Users can now import sounds and add spatial audio to interaction triggers, creating more immersive experiences and prototypes that win arguments and green lights.

Procedural Primitives and New Assets Library: A new library of fully procedural primitives provides a diverse range of 3D models and templates for users to build with.

Custom Inspector: The custom inspector allows for precise adjustments, optimizing the design process.

Performance Optimization: Significant optimizations ensure smoother experiences and faster load times, enhancing overall efficiency.

Flexible Input Support: The new architecture and UI support any input type, including controllers, hands, and mouse and keyboard, making the design process smoother and more intuitive.

ShapesXR founder and CEO Inga Petryaevskaya calls the addition of the new web editor “a strategic move to extend the time users spend in the product and to enable co-design and editing with those who do not have an XR device.”

To boot, a number of VR studios have used ShapesXR over the years to collaboratively build their apps, including mixed reality piano tutor PianoVision, physics-based VR rollercoaster CoasterMania, and Nanome, an XR platform for molecular design in the drug discovery and materials science industries. You can check out the company’s full slate of case studies here.

The app is a free download on supported platforms, offering both free and subscription-based plans. ShapesXR’s free plan comes with its core creation tools, three editable spaces, 150 MB of cloud storage, a 20 MB per-file import cap, the ability to import PNG, JPG, OBJ, GLB, and glTF files, and the ability to export glTF, USDZ, and Unity files.

Both its Team and Enterprise plans include unlimited editable spaces, respective bumps in cloud storage, and a host of other features that ought to appeal to larger teams looking to integrate ShapesXR into their workflow. You can check out all of the subscription plans here.

The post Major ‘ShapesXR’ Update Streamlines Collaborative XR Prototyping, Releases Web Editor for PC Users appeared first on Road to VR.

Image courtesy ShapesXR

Apple Vision Pro Part 6 – Passthrough Mixed Reality (PtMR) Problems

27 September 2023 at 05:09

Introduction

I planned to wrap up my first-pass coverage of the Apple Vision Pro (AVP) with my summary and conclusions based on prior articles. But the more I thought about it, the more Apple’s approach to Passthrough Mixed Reality (PtMR) seemed like it would be so egregiously bad that it should be broken out and discussed separately.

Apple Prioritized EyeSight “Gimmick” Over Ergonomics and Functionality

There are some features, particularly surrounding camera passthrough, where there should have been an internal battle between those who wanted the EyeSight™ gimmick and what I would consider more important functionality. The backers of EyeSight must have won, forcing the horrible location of the passthrough cameras, causing optical distortion from the curved glass in front of all the forward-facing cameras and sensors, putting a fragile piece of hard-to-replace glass on the front where it can be easily scratched and broken, and adding weight to the front where it is least desired. Also, as discussed later, there are negative effects on the human visual system caused by misaligning the passthrough cameras with the eyes.

The negative effects of EyeSight are so bad for so many fundamental features that someone in power with little appreciation for the technical difficulties must have forced the decision (at least, that is the only way I can conceive of it happening).  People inside the design team must have known it would cause serious problems. Supporting passthrough mixed reality (PtMR) is hard enough without deliberately creating problems.

Meta Quest 3 Camera Location

As noted in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, Meta is locating the soon-to-be-released Quest 3’s main passthrough cameras closer to the center of view of the eyes. Fixed cameras in front of the eyes won’t be perfect and will still require digital correction for better functional use. It does appear that Meta is taking PtMR more seriously than it did with the Meta Quest Pro and Quest 2.

I’m going to be looking forward to getting a Meta Quest 3 to test out when it is released soon.

Definitions of AR/VR/MR and PtMR

The terms used to describe mixed reality have been very fluid over the last few years. Before the introduction of Hololens, augmented reality meant any headset that displayed virtual content on a see-through display. For example, just before Hololens went on sale, Wired in 2015 titled their article (with my bold emphasis): Microsoft Shows HoloLens’ Augmented Reality Is No Gimmick. With the introduction of Hololens, the term “Mixed Reality” was used to distinguish AR headsets with SLAM that lock the virtual to the real world. “AR” headsets without SLAM are sometimes called AR Heads-Up Displays (HUDs), but these get confused with automotive HUDs. Many today refer to a see-through headset without SLAM as “AR” and one with SLAM as “MR,” whereas previously the term “AR” covered both with and without SLAM.

Now we have the added confusion of optical see-through (e.g., Hololens) and camera passthrough “Mixed Reality.” While they may be trying to accomplish similar goals, they are radically different in their capabilities. Rather than constantly typing “passthrough” before MR, I abbreviate it as PtMR.

In Optical AR, the Virtual Content Augments the Real World – With PtMR, the Real World Augments the Virtual Content

Optical MR prioritizes seeing the real world at the expense of the virtual content. The real world is in perfect perspective, at the correct focus distance, with no limitation by a camera or display on brightness, with zero lag, etc. If done well, there is minimal light blocking and distortion of the real world and little blocking of the real-world FOV.

PtMR, on the other hand, prioritizes virtual image quality at the expense of the real world, both in how things behave in 3-D space (focus perspective) and in image quality.

We are likely many decades away, if ever, from passing what Douglas Lanman of Meta calls their Visual Turing Test (see also the video linked here).

Meta’s demonstrations at Siggraph 2023 of their Flamera, with perspective-correct passthrough, and Butterscotch, which addresses vergence-accommodation conflict, served to show how far PtMR is from optical passthrough. They can only address each problem individually, each with a large prototype, and even then there are severe restrictions. The Flamera has a very low-resolution passthrough, and Butterscotch only supports a 50-degree FOV.

It is also interesting that Butterscotch moves back from Half Dome 3’s electronic LCD variable focus to electro-mechanical focusing to address VAC. As reported in Mixed Reality News, “However, the technology presented problems with light transmission and image quality [of the electronic LCD approach], so Meta discarded it for Butterscotch Varifocal at the expense of weight and size.”

All of this work is to try and solve some of the many problems created by PtMR that don’t exist with optical MR. PtMR does not “solve” the issues with optical MR. It just creates a long list of massively hard new problems. Optical AR has issues with the image quality of the virtual world, very large FOV, and hard-edge occlusion (see my article Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users). I often say, “What is hard in optical MR is easy in PtMR and vice versa.”

Demo or Die

Meta and others seem to use Siggraph to show off research work that is far from practical. As stated by Lanman of Meta, of their Flamera and Butterscotch VAC demos at Siggraph 2023, Meta’s Reality Labs has a “Demo or Die” philosophy. They will not be tipping off their competition on concepts they will use within a few years. To be clear, I’m happy to see companies showing off their technical prowess, but at the same time, I want to put it in perspective.

Cosmetic vs. Functional Passthrough PtMR

JayzTwoCents’ video on the HTC Vive XR Elite has a presentation by Phil on what he calls “3D Depth Projection” (others refer to it as “perspective correct”). In the video (sequence of clips below), Phil demonstrates that because the passthrough video was not corrected in scale, position, and perspective in 3-D space, it deprives him of the hand-eye coordination needed to catch a bottle tossed to him.

As discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, in the section “The Method in the Madness: MQP prioritizes 3-D spatial over image quality,” the Meta Quest Pro takes the opposite approach.

Phil demonstrated in the video (and in a sequence of clips below) that with the Meta Quest Pro, even though the image quality is much worse and distorted due to the 3D projection, he can at least catch the bottle.

I would classify the HTC Vive XR Elite as having a “Cosmetic Passthrough.” While the image quality is better (but still not very good), it is non-functional. While the Meta Quest Pro’s image quality is lousy, it is at least somewhat functional.

Something else to notice in the MQP frame sequence above is that there are both lag and accuracy errors in hand tracking.

Effects on Vision with Long-Term Use

It is less obvious that the human visual system will start adapting to any camera placement and then have to re-adapt after the headset is removed. This was briefly discussed in AVP Part 2 in the section titled Centering correctly for the human visual system, which references Steve Mann in his March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear.” In Steve Mann’s early days, there was no processing power available to digitally shift the apparent position of the camera images. At the same time, I’m not sure how well the correction will work or how a distorted view will affect people’s visual perception during and after long exposure. As with most visual effects, it will vary from one individual to another.

Meta Flamera Light Field Camera at Siggraph 2023

As discussed in AVP Part 2 and Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough, having the passthrough cameras as close as possible to being coaxial to the eyes (among other things) is highly desirable.

To reduce any undesired negative effects on human vision caused by cameras not aligning with the eyes, some devices, such as the Quest 2 and Quest Pro from Meta, use processing to create what I will call “virtual cameras” with a synthesized view for each eye. The farther the physical cameras are from the eye’s location, the larger the required correction and the larger the distortion in the final result.
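To make the geometry concrete, here is a minimal sketch (in Python/NumPy) of the kind of depth-based reprojection such a “virtual camera” implies. It is not Meta’s or Apple’s actual pipeline; the intrinsics, the camera-to-eye transform, and the per-pixel depth map are all hypothetical inputs chosen for illustration.

```python
import numpy as np

def reproject_to_eye(image, depth, K_cam, K_eye, cam_to_eye):
    """Warp a passthrough camera frame into a virtual, eye-aligned view.

    image      : (H, W, 3) passthrough camera frame
    depth      : (H, W) estimated depth per camera pixel, in meters (> 0)
    K_cam/K_eye: 3x3 intrinsics of the physical camera and the virtual eye camera
    cam_to_eye : 4x4 rigid transform from camera space to eye space
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N

    # Unproject each pixel to a 3-D point in camera space using its depth.
    rays = np.linalg.inv(K_cam) @ pix
    pts_cam = rays * depth.reshape(1, -1)

    # Move the points into the virtual eye's coordinate frame.
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_eye = (cam_to_eye @ pts_h)[:3]

    # Project into the eye-aligned virtual camera.
    proj = K_eye @ pts_eye
    uv_eye = (proj[:2] / proj[2]).T.reshape(H, W, 2)

    # Forward-splat the colors (nearest neighbor for brevity).
    # A real system also has to fill the holes this leaves behind.
    out = np.zeros_like(image)
    x = np.clip(uv_eye[..., 0].round().astype(int), 0, W - 1)
    y = np.clip(uv_eye[..., 1].round().astype(int), 0, H - 1)
    out[y, x] = image[v, u]
    return out
```

The correction is driven entirely by the depth estimate and the camera-to-eye offset baked into cam_to_eye: the larger the offset and the noisier the depth, the larger the holes and warping in the synthesized eye view, which is exactly the distortion trade-off described above.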

Meta at Siggraph 2023 presented the paper “Perspective-Correct VR Passthrough Without Reprojection” (and IEEE article) and showed their Flamera prototype with a light field camera (right). The figure below shows how the camera receives light rays from the same angle as the eye with the Light Field Passthrough Camera.

Below are a couple of still frames (with my annotations) from the related video that show how, with the Meta Quest 2, the eye and camera views differ (below left), resulting in a distorted image (below right). The distortion/error increases as the distance from the eye decreases.
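A rough back-of-the-envelope calculation shows why close objects are hit hardest. The 5 cm camera-to-eye offset below is purely illustrative (not a measured value for any particular headset), but the trend holds for any fixed offset:

```python
import math

b = 0.05                                   # camera-to-eye offset in meters (illustrative)
for d in (2.0, 1.0, 0.5, 0.2):             # object distance in meters
    err = math.degrees(math.atan2(b, d))   # angular disparity vs. the true eye view
    print(f"object at {d:.1f} m -> ~{err:.1f} deg of parallax error")
# roughly 1.4, 2.9, 5.7, and 14.0 degrees: the closer the object,
# the larger the error the reprojection has to correct.
```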

It should be noted that while Flamera’s light field camera approach addresses the angular problems of the camera location, it does so with a massive loss in resolution (by at least “n,” where n is the number of light field subviews). So, while interesting in terms of research and highlighting the problem, it is still a highly impractical approach.
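As a rough illustration of that resolution penalty (the sensor size and subview count below are hypothetical, not Flamera’s actual specifications), dividing one sensor among n lenslet subviews leaves each viewpoint with only about 1/n of the pixels:

```python
# Hypothetical numbers only -- not Flamera's actual sensor or lenslet layout.
sensor_px = 4032 * 3024            # ~12.2 MP sensor behind the lenslet array
subviews  = 5 * 5                  # n = 25 subviews
per_view  = sensor_px // subviews
print(f"{per_view:,} px per subview (~{per_view / 1e6:.2f} MP)")
# -> 487,710 px per subview (~0.49 MP): each eye-aligned view gets
#    only 1/25th of the sensor's pixels.
```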

The Importance of “Perspective Correct” PtMR

In preparing this article, I returned to a thread on Hacker News about my Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough article. In that article, in the section “The Method in the Madness: MQP prioritizes 3-D spatial over image quality,” I was trying to explain why Meta was distorting the image.

Poster Zee2 took exception to my article and seemed to feel I was understating the problem of 3-D perspective. I think Zee2 missed what I meant by “pyrrhic victory.” I was trying to say they were correct to address the 3D depth issue but that doing so with a massive loss in image quality was not the solution. I was not dismissing the importance of perspective-correct passthrough.

Below, I am copying his comment from that thread (with my bold highlighting), including a quote from my article. Interestingly, Zee2 comments on Varjo having good image quality with its passthrough, but it is not perspective-correct.

I also really don’t know why he [referring to my article] decided to deemphasize the perspective and depth correctness so much. He mentions it here:

>[Quoting Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough] In this case, they were willing to sacrifice image quality to try to make the position of things in the real world agree with where virtual objects appear. To some degree, they have accomplished this goal. But the image quality and level of distortion, particularly of “close things,” which includes the user’s hands, is so bad that it seems like a pyrrhic victory.

I don’t think this is even close to capturing how important depth and perspective correct passthrough is.

Reprojecting the passthrough image onto a 3D representation of the world mesh to reconstruct a perspective-correct view is the difference between a novelty that quickly gives people headaches and something that people can actually wear and look through for an extended period of time.

Varjo, as a counterexample, uses incredibly high-resolution cameras for their passthrough. The image quality is excellent, text is readable, contrast is good, etc. However, they make no effort to reproject their passthrough in terms of depth reconstruction. The result is a passthrough image that is very sharp, but is instantly, painfully, nauseatingly uncomfortable when walking around or looking at closeup objects alongside a distant background.

The importance of depth-correct passthrough reprojection (essentially, spacewarp using the depth info of the scene reconstruction mesh) absolutely cannot be understated and is a make or break for general adoption of any MR device. Karl is doing the industry a disservice with this article.

From: Hacker News Meta Quest Pro – Bad AR Passthrough comment by Zee2 

Does the AVP have Cosmetic or Functional PtMR or Something Else?

With the AVP’s passthrough cameras being so poorly located (thanks to EyeSight™), severe distortion would seem inevitable to support functional PtMR. I don’t believe there is some magic (perhaps a pun on Magic Leap) that Apple could employ, and Meta couldn’t, that would simultaneously deliver good image quality and low distortion given the terrible camera placement forced by the EyeSight™ feature.

So, based on the placement of the cameras, I have low expectations for the functionality of the AVP’s PtMR. The “instant experts” who got to try out the AVP would be more impressed by a cosmetically better-looking passthrough. Since there are no reports of distortion like the MQP, I’m left to conclude that, at least for the demo, they were only doing a cosmetic passthrough.

As I often say, “Nobody will volunteer information, but everyone will correct you.” Thus, it is better to take a position based on the current evidence and then wait for a correction or confirmation from the many developers with AVPs who read this blog.

Conclusion

I’m not discounting the technical and financial power of Apple. But for the last ten years, I have been writing about the exaggerated claims for Mixed Reality products by giant companies such as Google, Meta, and Microsoft, not to mention the many smaller companies, including the over $3B spent by Magic Leap. The combined sunk cost of these companies is about $50B, not including Apple. As I’m fond of saying, “If all it took were money and smart people, it would already be solved.”

Apple doesn’t fully appreciate the difficulties with Passthrough Mixed Reality, or they wouldn’t prioritize the EyeSight gimmick over core capabilities. I’m not saying the AVP would work well for passthrough AR without EyeSight, but it is hard enough without digging big technical holes to support a novelty feature.

CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy

16 February 2023 at 03:52

Introduction

Bradley Lynch of the SadlyItsBradley YouTube channel hosted my presentation about CES 2023. The video was recorded about a week after CES, but it took a few weeks to edit and upload everything. There are over 2 hours of Brad and me talking about things we saw at CES 2023.

Brad was doing his usual YouTube content: fully editing the video, improving the visual content, and breaking the video down into small chunks. But it took Brad weeks to get 3 “sub videos” (part 1, part 2, and part 3) posted while continuing to release his own content. Realizing that it would take a very long time at this rate, Brad released part 4 with the rest of the recording session with only light editing as a single 1-hour and 44-minute video with chapters.

For those who follow news about AR and VR, Brad got involved in a controversy over his leaks of information about the Meta Quest Pro and Meta Quest 3. The controversy occurred between the recording and the release of the videos, so I felt I should comment on the issue.

Videos let me cover many more companies

This blog has become highly recognized in the AR/MR community, and I have many more companies wanting me to write about their technology than I have the time. I also want to do in-depth articles, including major through-the-optics studies on “interesting” AR/MR devices.

I have been experimenting with ways to get more content out quicker. I can spend from 3 days up to 2 months (such as the rest of the Meta Quest Pro series yet to be published) working on a single article about a technology or product. With CES and the AR/VR/MR conference only 3 weeks apart, and with meetings with about 20 companies at each conference, that approach simply can’t cover everything.

In the past, I only had time to write about a few companies that I thought had the most interesting technology. For the CES 2023 video, it took about 3 days to organize the photos and then about 2.5 hours to discuss about 20 companies and their products, or about 5 to 7 minutes per topic (not including all the time spent by Brad doing the video editing).

I liked working with Brad; we hope to do videos together in the future; he is fun to talk to and adds a different perspective with his deep background in VR. But in retrospect, less than half of what we discussed fits with his primary VR audience.

Working on summary articles for CES and the SPIE AR/VR/MR conference

Over 2 hours of Brad and me discussing over 20 companies and various other subjects and opinions about AR, VR, and MR technology and the industry is a lot for people to go through. Additionally, the CES video was shot in one sitting, non-stop. Unfortunately, my dog friends decided they wanted to see me behind my closed office door and barked much more than I realized as I was focused on the presentation (I should have stopped the recording and quieted them down).

I’m working on a “quick take” summary guide with pictures from the video and some comments and corrections/updates. I expect to break the guide into several parts based on broad topics. It might take a few days before this guide gets published as there is so much material.

Assuming the CES quick take guide goes well, I plan to follow up with my quick takes on the AR/VR/MR conference. I’m also looking at recording a discussion at the AR/VR/MR conference that will likely be published on the KGOnTech YouTube channel.

Links to the Various Sections of the Video

Below is a list of topics with links for the four videos.

Video 1

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

Video 2

  • 0:00 Porotech MicroLEDs

Video 3

  • 0:00 NewSight Reality’s Transparent uLEDs
  • 4:07 LetinAR Glasses (Bonus Example/Explanation)

Video 4

SadlyItsBradley’s Meta Leaks Controversy

Between the time of recording the CES 2023 video with Brad and the videos being released, there was some controversy involving Brad and Meta that I felt should be addressed because of my work with Brad.

Brad Lynch made national news when the Verge reported that Meta had caught Brad’s source for the Meta Quest Pro and Meta Quest 3 information and diagrams. Perhaps ironically, the source for the Verge article was a leaked memo by Meta’s CTO, Andrew Bosworth (who goes by Boz). According to The Verge, “In his post to Meta employees, Bosworth confirmed that the unnamed leaker was paid a small sum for sharing the materials with Lynch.”

From what was written in The Verge article and Brad’s subsequent Twitter statement, it seems clear that Brad didn’t know that paying a source is considered unethical “checkbook journalism.” It is one of those gray areas where, as I understand it (and this is not legal advice), it is not illegal unless the reporter is soliciting the leak. At the same time, if I had known Brad was going to pay a source, I would have advised him not to do it.

It is nice to know that news media that will out and out lie, distort, hide key information, and report as true information from highly biased named and unnamed sources still has one slim ethical pillar: leaks are our life’s blood but don’t get caught paying for one. It is no wonder public trust in the news media is so low.

The above said, and to be clear, I never have and would never pay a source or encourage anyone to leak confidential content. I also don’t think it was fair or right for a person under NDA to leak sensitive information except in cases of illegal or dangerous activity by the company.

KGOnTech (My) Stance on Confidentiality

Unless under contract with a significant sum of money, I won’t sign an NDA, as it means taking on a legal and, thus, financial risk. At the same time, when I meet privately with companies, I treat information and material as confidential, even if it is not marked as such, unless they want me to release it. I’m constantly asking companies, “What of this can I write about?”

My principle is that I never want to be responsible for hurting someone who shared information with me. And as stated above, I would never encourage, much less pay, someone to break a confidence. If someone shares information with me to publish, I always try to find out whether they want their name to be public, as I don’t want to either get them in trouble or take credit for their effort.

Closing

That’s it for this article. I’ve got to finish my quick take summaries on CES and the AR/VR/MR conference.

The post CES 2023 SadlyItsBradley Videos Part 1-4 and Meta Leak Controversy appeared first on KGOnTech.

CES 2023 SadlyItsBradley and KGOnTech Joint Review Video Series (Part 1)

26 January 2023 at 21:17

New Video Series on CES 2023

Brad Lynch of the SadlyItsBradley YouTube Channel and I sat down for over 2 hours a week after CES and recorded our discussion of more than 20 companies one or both of us met with at CES 2023. Today, Jan. 26, 2023, Brad released a 23-minute part 1 of the series. Brad is doing all the editing while I did much of the talking.

Brad primarily covers VR, while this blog mostly covers optical AR/MR. Our two subjects meet when we discuss “Mixed Reality,” where the virtual and the real world merge.

Brad’s title for part 1 is “XR at CES: Deep Dives #1 (Magic Leap 2, OpenBCI, Meta Materials).” While Brad describes the series as a “Deep Dive,” I, as an engineer, consider it to be more of an “overview.” It will take many more days to complete my blog series on CES 2023. This video series will briefly discuss many of the same companies I plan to write about in more detail on this blog, so consider it a look ahead at some future articles.

Brad’s description of Part 1 of the series:

There have been many AR/VR CES videos from my channel and others, and while they gave a good overview of the things that could be seen on the show floor and in private demoes, many don’t have a technical background to go into how each thing may work or not work

Therefore I decided to team up with retired Electrical Engineer and AR skeptic, Karl Guttag, to go over all things XR at CES. This first part will talk about things such as the Magic Leap 2, Open BCI’s Project Galea, Meta Materials and a few bits more!

Brad also has broken the video into chapters by subject:

  • 0:00 Ramblings About CES 2023
  • 6:36 Meta Materials Non-Polarized Dimmers
  • 8:15 Magic Leap 2
  • 14:05 AR vs. VR Use Cases/Difficulties
  • 16:47 Meta’s BCI Arm Band MIGHT Help
  • 17:43 OpenBCI Project Galea

That’s it for today. Brad expects to publish about 2 to 3 videos in the next week. I will try and post a brief note as Brad publishes each video.

The post CES 2023 SadlyItsBradley and KGOnTech Joint Review Video Series (Part 1) appeared first on KGOnTech.

The Metaverse Is Beautiful and Light

29 September 2021 at 14:00

I'm feeling good.

Being houseless I've started moving around again after...well, after the last 18 months, and we all know what that has been like.

And it's hard not to like checking email while drinking a cup of Costa Rican coffee and watching the  waves (that's my view, above). Or to be able to do a quick ocean swim before a meetup with the provocative title: Metaverse or Shmetaverse - the big debate!

The conversation was great. And honestly there wasn't that much I disagreed with.

Each of the panelists brought their own agendas and viewpoints, and I've always loved how provocative Avi can be. Abby Hunter-Syed was mostly pitching portfolio companies and the glorious deep fake/AI-driven world we're all headed towards.

But the title set the tone: this was all about the experts throwing some cool, soothing water on the Metaverse-y people.

I didn't take notes (the bloody monkeys kept distracting me), so don't take these as actual quotes or views from the session. They're more like impressions - and they're ones you can find easily enough elsewhere:

  • One of the problems with all the talk about the Metaverse is privacy is never discussed. And Facebook. And Facebook.
  • The Metaverse will trend towards ad-strewn experiences...because it will. If only we could PAY for our social media services ($20 a year was thrown around) then we could break the yoke of ad-supported digital experiences.
  • An AR version of the Metaverse is dangerous because we'll be walking down the street and suddenly surrounded by content, bombarding us with stimuli, half of it billboards. (I really have no idea who actually thinks that's what is meant by the Metaverse including AR...but there you have it)
  • Massive chunks of digital content will be generated by AI. Deep fakes are going to eat the world. And...isn't that great? Diversity. Or something.
  • NFTs are artificial scarcity and don't we actually want abundance?
  • We need new search. We need new search. Search and discovery will solve all our problems. (I don't disagree by the way...and believe many many people will help to solve this problem in truly innovative and empowering ways)
  • If you're using the term Metaverse you'll look like a dinosaur in a few years. Like those old relics who once used the term "cyberspace". (They all ended up with jobs at Walmart, I guess). But sure! Go ahead. No harm. Just realize that the really, really smart people will be able to spot you.

OK...I might be feeling a little over-caffeinated.

But actually: I don't particularly disagree with any of that.

The "Metaverse" has been attached to...everything. There's a chaos of competing claims for what it will be, how it will be developed, and how it will all be paid for. And yet it still isn't clear...when will I be able to log in?

Facebook is spending loose change to make sure the Metaverse is safe! And responsible! But the money is frankly a rounding error in their Metaverse lobbying budget (of which it should really be considered a part), and the company is now labelled the largest autocracy on earth...and a foreign hostile power.

But let's chill for a minute. Pura Vida! Because there's another story.

The Metaverse Is Human and Polite

Look, the battle isn't won. Privacy, surveillance, security and our data being vacuumed up to feed the ad machine is still a very real possibility.

But I've yet to meet a single person in the "Metaverse space" who talks about user data and its value. It is JUST NOT A TOPIC. No one is pitching new "ad supported 3D conference centers!"

I strongly believe that user privacy and protection is a cultural given. There is a vision for a decentralized Metaverse and that vision doesn't include ad networks.

Again, I'm not saying that the big companies won't eventually decide that they want to make bank on data, and I'm not naive about Facebook (or Niantic, or Epic, or anyone else with data silos).

But I'd propose that the default setting for the Metaverse is currently privacy on, surveillance off.

There is incredible innovation around avatars and permissions, sovereign identity (you own and safeguard your own identity), the right to anonymity (which generally means a right not to be under surveillance), and quantum-safe distributed data (keeping data out of the massive silos it lives in today).

Whatever you think about Facebook, they at least seem to be trying to break free of their own self-restraining (and highly profitable) yoke of data-based advertising...or at least expanding their revenue palette a bit.

And so...sure. People don't bring up privacy the first time you chat about the Metaverse.

But I honestly believe that's because as a community there's a general ethos that we want to do better. That we want a more human, a more polite, and a more person-centric way of handling information.

The Metaverse Is Democratic and Distributed

NFTs are not artificial scarcity.

An NFT is an emblem. If you understand that emblem, you understand that it represents a new way for communities to self-organize.

NFTs represent a break from content being delivered from the mountaintop of art galleries and movie studios, ad agencies and CPGs.

They are the first demonstration that by creating a distributed ledger of digital content, we can now start to do some pretty amazing things:

  • The barriers to collaboration collapse. We no longer need to hire a team of lawyers to protect our interests as we work together.
  • Instead of "open source" we're entering an era of "paid open communities".
  • When coupled with DAOs, social tokens and alternative currencies, NFTs allow for grassroots, highly organic, highly participatory and massively scalable initiatives on...well, on anything. Not just the creation of a Twitter profile photo.
  • If that's true, it means that there is intense pressure on large-scale organizations. The old gatekeepers aren't the only game in town anymore.
  • As these initiatives move beyond the few simple "prims" we see today into more elaborate tasks, they'll shift into everything from sustainability to community improvement.

By marrying digital tools to contracts, content, rights and licenses, and commerce, there is a powerful ecosystem which might just give Facebook a run for its money.

This would tend to suggest that the Metaverse is pre-disposed to being distributed, decentralized and more democratic.

If you're talking about the Metaverse and dismissing NFTs, you're going to conclude that our futures are in the hands of Epic and Facebook. And you'll ignore the Bored Apes.

Now, I don't personally think the Metaverse will be solely built on blockchain. And I believe that there will be far more free content than content that you pay for or that is backed by an NFT.

But they will be powerful fuel rods that will help shape vast continents in the Metaverse.

The Metaverse Is Building Beautiful Things

Sure, there are probably startups out there throwing the word Metaverse around because they think it will attract money.

The panelists at Metaverse or Shmetaverse seemed to collectively chuckle. One of them said the term made him cringe when startups use it. They all seemed to agree: "if it will get you cash, great! But we're smarter than the VCs and we SEE you."

What do I know? No one's pitching me.

What I DO know is that I see a dozen new launches every day, hear and see the chats across countless channels.

And the companies talking about the Metaverse are doing fricking cool stuff: from game development platforms constructed to support player economies to avatar projects with Hollywood-worthy plotlines.

If you're not impressed with the level of creativity...with the explosion of innovation, then you might be looking in the wrong place.

These aren't VR or AR or app companies. These are companies with a purpose: to bring new stories, tools, and experiences to an open, interconnected Metaverse.

The Metaverse Is People

I don't know what the Metaverse is. It's a work in progress.

I don't know what it will be called in a decade.

What I do know is that the people who use the term today are usually deeply concerned about how it's built, are running experiments to work out how to do it well, and are creating new forms of storytelling and self-expression.

They're not just talking, they're doing.

Even the big companies, from Epic to Facebook, from Niantic (with its, um, real world metaverse or whatever) to Microsoft, are all at least speaking from a common songbook: "no one can own the Metaverse, it will be built by many people, it will be big and it will unlock new human potential".

(Whether you can fully trust how they will exploit it for profit is a different question).

And so whether large or small, the folks who use the word "Metaverse" are really sending out a signal: "I want to build something beautiful. I want it to be open to everyone. I want to make some money doing it but I want there to be enough money for everyone. I want it to be more human than the digital worlds that have come before. I want to solve hard problems. Let's build this together. You in?"

And so you have a choice when you hear the word: argue whether they mean to include AR or not, get into a debate about how dangerous the whole thing seems, argue whether the word will still be around in a decade, or dismiss the possibilities of NFTs or blockchain or whatever as a passing fad.

Or do what I like to do. And ask how I can help.


How can I help? I'm honestly open to ideas! Have I had too much sun? I can be anxious too or get lost in definitions. We're figuring this out together.

Email me at doug@bureauofbrightideas.com or message me on Twitter.

Let's start a conversation.

If These Walls Could Talk: Pico Velasquez, Architecture and the Metaverse

24 July 2021 at 17:56

In 2007 I discovered 'reflective architecture', an idea explored by Jon Brouchoud, an architect who was working in Second Life.


It was the concept that in a virtual environment buildings can move, shift, and morph based on user presence. Instead of buildings and environments as static objects, the 'affordances' of a programmable space allowed for them to have a computable relationship to the audience/user/visitor.

While today the idea might seem obvious, at the time it was a leading-edge idea that an architect could actually WORK in a virtual environment, let alone change our concept of space through his explorations.


Walking through one of Jon's experiments created a mental shift for me: first, because we didn't need to "port" standard concepts of what a space can be into virtual environments.

Later, I worked with Jon on the design of the Metanomics stage, the first serious virtual talk show:


This helped me realize that his work also opened up new ways of thinking about the physical world and our relationship to space.

It took almost 15 years to achieve a similar shift in thinking.

And it happened because of Pico Velasquez.

Pico Velasquez and Walls That Talk

It doesn't happen often. I mean - how many Zoom calls, webinars and online 'events' have you been to? Especially over the last year? How many of them blur into each other?

But this session with Pico Velasquez may be the best hour you spend this year.

Sure, you might lose the sense of being there. Because one of the joys of the session was Pico's rapid-fire mind, which was able to lift off of the audience 'back chat' and questions like someone who can design a building, chat with her best friend, write a blog post and cook dinner at the same time.

Pico gave a tour of her work. And the session inverted the experience I had with Jon.

Where Jon showed that virtual environments can be living, breathing entities (with an implication for the physical world), Pico demonstrated that physical spaces can be computable, and that this has an implication for the Metaverse.

While deceptively simple, her work on Bloom, for example, was a living canvas that used a Unity game engine back end to create a narrative that responded to time of day and presence.

Pico gave us a hint of her process during the presentation:

Bloom, LAB at Rockwell Group with artist Pico Velasquez

Which resulted in a space that responds to people being nearby (watch the video for the full effect):

Bloom, LAB at Rockwell Group with artist Pico Velasquez

Her work on The Oculus, the main entrance to the new Seminole Hard Rock Casino & Hotel, has a similar immersive and responsive quality:

Seminole Hard Rock Oculus, LAB at Rockwell Group with creative Director Pico Velásquez

Four Pillars for the Metaverse

Once Burning Man and the Social Galaxy (a project with Kenzo Digital for Samsung) came up, Pico started to shift into discussing the Metaverse.

Pico spoke to four main threads that challenge how we think about the spatial 'construction' of the Metaverse:

1. Multiple Layers of Content are Merging

Live streaming, gaming and social media are coming together. Whether it's streaming evolving to have a chat or a game evolving to have more social events (like concerts in Fortnite), there are now multiple 'layers' of content in virtual space.

2. We Need to Design for a New Spatial Dimension

Similar to the shift from radio to TV, it takes time to adapt to a new medium. This has long been the premise of my collaborator, Marty Keltz (who produced The Magic School Bus): that each shift in media requires a new "film grammar".

First, we port over our previous grammar and then we create a new one.

Pico points out that much of virtual/Metaverse architecture is... static buildings. And that the narrative isn't spatial but linear.

3. We Need to Think About Adaptable Spaces

On this, she really looped me right back to reflective architecture, which I spoke about at the top. But she brought some interesting new dimensions, commenting that Metaverse architecture can be adaptable across multiple variables including audience demographics.

4. Generative Design Is a Key Tool

Similar to my thinking about autonomous avatars, this is the work of a space being dynamic and generative - that forests, for example, should grow.


I'll be coming back to this a lot in the coming weeks. Because it speaks to two key ideas:

  • That there will be parts of the Metaverse that exist, grow and thrive without even necessarily needing users. This will be highly relevant to mirror world contexts for enterprise, but will also create deep experiences and time scales that aren't normally visible in game or virtual worlds.
  • That automation, generative design, autonomous agents, DAOs and other AI/computable experiences will lead to the Metaverse itself being sentient. We think of the Singularity as the moment when a 'computer' is as smart as a human: but I think we may be too anthropomorphic in how we view intelligence. The planet is an intelligent system. It might be that the Metaverse achieves the Turing Test for being an ecosystem before a computer passes the Turing Test for being human.

The Lines are Blurring Between the Physical and Digital

I have a feeling I'm going to circle back on Pico's talk several times. And this is a decidedly incomplete synopsis.

If nothing else, it reminds us that the lessons we're learning are now easily crossing boundaries between the physical and the 'meta' spatial world (which we're calling the Metaverse).

An architect can use a game engine to power a physical room, and then bring those tools and lessons into the Metaverse.

Tools (like Unreal 5) are evolving to allow things like fully destructible and generative spaces. This will allow for digital spaces that don't just mimic the physical world but can transcend it.

But perhaps most of all, it's a reminder that we're at a key inflection point, when cross-collaboration with other disciplines can generate profound value.

Just as fashion designers are bringing their skills into the design of digital fashion, and architects are bringing their skills in spatial development, all of us can play some role in this new world.

It has an economy, people, places, games, and work to do. Just like the real world.

It's time for all hands on deck as we shape a world that we can imagine, and what may result are lessons that can make our physical world better too.


Hey...you made it this far! Are you a subscriber? If not, why not click the Subscribe button.

Did you get this via email? Please DO reply! I'd love to hear your thoughts.

Or hit me up on the social links below.

Let's start a conversation.
