The metaverse is more than just a bold statement by Facebook or anyone else. Someone, or more likely a combination of many someones, must begin building it before any of us can begin exploring it. To make that happen internationally and at scale, we'll need the right combination of devices, standards, and network technologies, none of which are fully developed yet.

We will enter the metaverse through a variety of devices, each with its own mode of entry, connectivity, and style, ranging from fully immersive headsets to fashionable spectacles that can be worn all day, every day. While device manufacturers compete for market dominance and attention with their unique user interfaces, the virtual worlds users access will require shared, standardized methods of exploration. The truth is that many of those standards and standardized approaches are currently lacking and must be developed.
A simple example is the simultaneous localization and mapping (SLAM) that will be required to produce the metaverse's blend of physical and digital augmented realities. Today, each device maker and platform uses its own proprietary data for this process, and there is no consensus on what a standard should look like. Spatial mapping, the process by which sensors acquire and combine sensory input to generate a three-dimensional picture of a space, will be used to create the metaverse's virtual and mixed reality worlds. The computation required to accomplish this can run on the device, in the network, or split between the two.

Latency is crucial to this experience: the scene must re-render in real time as you move through it, with real-world surfaces overlaid with virtualized colors, textures, and images. Edge computing is critical to delivering it, since many users experience nausea when latency in virtual reality applications rises above 20 milliseconds. Time warping is used to mask the gap, but a lower-latency connection would provide a far higher quality of experience (QoE). Likewise, when tracking or interacting with real-world objects in augmented reality, significant network and processing latency results in a poor QoE. And for the metaverse to be device- and platform-agnostic, today's patchwork of proprietary mapping solutions will need to be consolidated into accepted standards.
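To make the 20-millisecond figure concrete, here is a minimal sketch in C of a motion-to-photon latency budget. All of the stage timings are illustrative assumptions, not measurements from any real system; the point is simply that a long round trip to a distant cloud blows the budget, while a nearby edge node can keep remote rendering within it.

```c
#include <stdio.h>

/* Stages that add up to motion-to-photon latency.
 * All figures below are hypothetical, for illustration only. */
typedef struct {
    double tracking_ms;    /* sensor fusion / head-pose estimation */
    double network_rtt_ms; /* round trip to wherever rendering happens */
    double render_ms;      /* time to render the frame */
    double scanout_ms;     /* display scan-out */
} LatencyBudget;

static double total_ms(const LatencyBudget *b) {
    return b->tracking_ms + b->network_rtt_ms + b->render_ms + b->scanout_ms;
}

int main(void) {
    const double budget_ms = 20.0; /* comfort threshold discussed above */

    /* Assumed numbers: same pipeline, rendered in a distant cloud
     * vs. on a nearby edge node. */
    LatencyBudget cloud = {2.0, 40.0, 8.0, 5.0};
    LatencyBudget edge  = {2.0,  4.0, 8.0, 5.0};

    printf("cloud: %.1f ms (%s)\n", total_ms(&cloud),
           total_ms(&cloud) <= budget_ms ? "within budget" : "over budget");
    printf("edge:  %.1f ms (%s)\n", total_ms(&edge),
           total_ms(&edge) <= budget_ms ? "within budget" : "over budget");
    return 0;
}
```

In practice, runtimes use techniques such as the time warping mentioned above to reproject a late frame with a fresher head pose, but that only masks the overrun; it does not recover the lost interactivity.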
The OpenXR community, under the Khronos Group, is working on open APIs; 3GPP is working on radio and network optimization standards; and MPEG is exploring a variety of compression approaches, including spatial audio, haptics, and higher-efficiency video codecs. Whether the work involves uplink and downlink transport optimization for XR data streams (video, audio, haptics, and point clouds) or dedicated network slicing, the processes underpinning spatial mapping data need to be standardized to ensure the metaverse is a universally accessible experience rather than a proprietary, fragmented one.

The devices, too, have a long way to go before becoming ubiquitous, particularly if we're talking about attractive, wearable spectacles. The merged reality form of the metaverse entails constructing a digital copy of the real world and overlaying textures on top of it, possibly incorporating virtual objects or altering real-world ones to give them a new appearance. You could make a real city look medieval, or make New York look like Gotham City: you'd still see real buildings and people, but with a different aesthetic. Another metaverse might be wholly virtual, built for fully immersive headsets. Versions would most likely be device- or platform-specific, but all would require network technology advancements and standards to deliver those personalized experiences.

Of course, while part of this can be supplied through 5G networks, given the bandwidth, reliability, and latency they offer, the ubiquity and scale required are still a ways off. Facebook might produce AR glasses and call it the metaverse, and Apple could come out with its own fantastic device and name it the metaverse as well. Each may devise a business plan that benefits itself, but it must also benefit the (mainly mobile) network operators who will provide connectivity.
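As a concrete illustration of what a cross-vendor standard like the OpenXR API mentioned above buys developers, the sketch below uses the OpenXR C API to create an instance and query whatever head-mounted display the installed runtime exposes, without naming any vendor. It is a minimal, illustrative fragment with error handling trimmed, not a complete application.

```c
#include <stdio.h>
#include <string.h>
#include <openxr/openxr.h>

int main(void) {
    /* Describe the application to whichever OpenXR runtime is installed. */
    XrInstanceCreateInfo create_info = {XR_TYPE_INSTANCE_CREATE_INFO};
    strncpy(create_info.applicationInfo.applicationName, "MetaverseDemo",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
        fprintf(stderr, "No OpenXR runtime available\n");
        return 1;
    }

    /* Ask for a head-mounted display; the same call works on any
     * conformant runtime, regardless of vendor. */
    XrSystemGetInfo system_info = {XR_TYPE_SYSTEM_GET_INFO};
    system_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId system_id;
    if (XR_SUCCEEDED(xrGetSystem(instance, &system_info, &system_id))) {
        XrSystemProperties props = {XR_TYPE_SYSTEM_PROPERTIES};
        xrGetSystemProperties(instance, system_id, &props);
        printf("Found system: %s\n", props.systemName);
    }

    xrDestroyInstance(instance);
    return 0;
}
```

This portability is exactly what the spatial mapping layer still lacks: the same code runs against Meta's, Valve's, Microsoft's, or any other conformant OpenXR runtime, while each platform's mapping data remains proprietary.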
Furthermore, whereas Facebook's revenue model is expected to be platform-based and built around advertising, Apple's will almost certainly center on the 'cool' device and the user experience. But the plain fact is that the metaverse will stall, or at least fall short of delivering on its full potential as quickly as it could, unless the devices talk to and interact with one another, unless all of these rendered worlds use the same standards and data-sharing techniques, and unless the networks can deliver the capacity and connectivity at an affordable and sustainable price.

The internet is not owned by a single company. No single firm owns internet commerce, access to it, the user interface, innovation, or the concepts it has spawned. Yes, some businesses are internet behemoths, but the internet also houses millions of successful small businesses and individuals. To fully succeed for the largest possible global audience of enablers and users, the metaverse must follow the same template.

Chris Phillips is Senior Director of Advanced Research and Development, Media IP at Xperi. His current research interests include eXtended reality, the metaverse, and cloud gaming. Before joining Xperi, he led Ericsson's eXtended Reality research and worked in research at AT&T Laboratories and AT&T Bell Laboratories. He holds more than 100 granted patents worldwide.