The fourth part of the Metaverse Primer focuses on computation and overlaps a bit with Part III on networking. Specifically, it examines the trade-offs between local and cloud compute as they relate to bandwidth, latency and reliability. You can read our recap of Part III here.
For the purposes of this article, compute is defined as “The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.”
Matthew’s original article here.
Compute Requirements for the Metaverse
In an earlier article, we looked at the computational requirements based on a subset of data “such as haptics, facial scanning, and live environment scans”. The full computational requirements of the Metaverse, which include simulating the laws of physics, gravity and more, will be much higher.
“In totality, the Metaverse will have the greatest ongoing computational requirements in human history.”
Therefore, the progress and innovation in compute will shape, constrain and define the development of the Metaverse more than anything else.
It is that progress that brought us the battle royale genre, rich user-generated content (UGC) and experiences that were previously possible only IRL (concerts in Roblox and Fortnite, for example). The idea of virtual worlds shared with many other concurrent players is not new and dates back to the 90s, at least. However,
“It was only by the mid-2010s that millions of consumer-grade devices could process a game with 100 real players in a single match, and that enough affordable, server-side hardware was available and capable of synchronizing this information in near real-time.“
Even though these things are possible today, there are limitations and sacrifices: caps of 100 concurrent users per instance, which drops to 50 for non-standard game experiences, and graphics downgraded so that games can run on older devices. And that’s just one aspect of compute requirements. What about accessories beyond just skins? What about participation at social events, like a virtual concert, and not just attendance with limited functionality? All of these things need computational resources.
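To see why even 100 players per instance is a meaningful ceiling, consider a naive client-server sync model: every tick, the server relays each player’s state to every other player, so message volume grows roughly with the square of the player count. The numbers below are my own back-of-envelope assumptions (tick rate, state size), not figures from the Primer.

```python
# Back-of-envelope sketch: server egress for naive all-to-all state sync.
# Assumed numbers (illustrative only): 30 Hz tick rate, 40-byte state update.

def sync_bandwidth_bytes_per_sec(players: int, tick_rate_hz: int = 30,
                                 state_bytes: int = 40) -> int:
    """Each tick, each of n players receives updates for the other n-1."""
    messages_per_tick = players * (players - 1)
    return messages_per_tick * state_bytes * tick_rate_hz

for n in (10, 50, 100):
    mb = sync_bandwidth_bytes_per_sec(n) / 1e6
    print(f"{n:>3} players: ~{mb:.1f} MB/s server egress")
```

Doubling the player count roughly quadruples the sync work, which is why real games lean on interest management and delta compression to stay under these caps.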
Where to Locate and Build up Compute
In the previous article, we covered the two main schools of thought on meeting the increasing demand for compute: local and cloud. In the cloud compute model, all the processing and rendering happens away from the consumer device, on enterprise-grade machines. The local consumer device simply needs to stream data and relay inputs (shoot, move right, etc.). There are many issues with the cloud compute model, from capacity utilisation to bandwidth and latency problems.
More broadly, we know that “consumer processors improve much faster than networks as they’re far more frequently replaced and aren’t literally fighting the speed of light. This growth doesn’t mitigate all network challenges, but it suggests that we’re better off asking client-side devices to perform more computations than sending heavy video streams to these devices.”
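The asymmetry behind that argument can be made concrete with rough numbers. In the cloud model the device must receive a full video stream; in the local model only inputs and small state deltas cross the network. The bitrates below are my own illustrative assumptions, not figures from the article.

```python
# Illustrative comparison of what must cross the network under each model.
# Assumed numbers: ~15 Mbps for a 1080p60 cloud-gaming stream;
# 32-byte input packets sent 60 times per second for local rendering.

video_stream_mbps = 15  # cloud model: compressed video downstream

input_bytes_per_packet = 32
packets_per_sec = 60
input_stream_mbps = input_bytes_per_packet * packets_per_sec * 8 / 1e6

print(f"cloud model : ~{video_stream_mbps} Mbps downstream")
print(f"local model : ~{input_stream_mbps:.3f} Mbps upstream")
print(f"ratio       : ~{video_stream_mbps / input_stream_mbps:.0f}x")
```

Under these assumptions the video stream is hundreds of times heavier than the input stream, which is why pushing computation to the client tends to scale better than pushing pixels over the network.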
And while local compute might be a better option, it is still not going to be enough to power a persistent and unending virtual world that supports unlimited interactions. Matthew brings up an example of Rival Peak, a Massively Interactive Live Event on Facebook Watch that was operated by Genvid Technologies between December 2020 and March 2021. Rival Peak was a 24/7 simulation featuring 12 AI characters, with one eliminated from the game each week.
While AIs were not directly controlled, viewers were able to participate directly by “solving puzzles to aid contestants, choosing what they could do, and even influencing who survived and was booted off.”
Rival Peak ran on AWS servers, and with tens of thousands of concurrent viewers (nearly 50,000 at peak), it “once ran out of GPU servers on AWS, and, during testing, routinely exhausted available spot servers.”
It’s quite telling that without a need for any consumer-side processing, Rival Peak was running out of compute. Imagine the requirements for an interconnected, virtual mirrorworld.
Decentralised Compute
This brings us to decentralised compute: the idea of harnessing idle local compute resources, coordinated by blockchain tech, smart contracts and tokens, to bootstrap a network.
“In this conception, owners of underutilized CPUs and GPUs would be ‘paid’ in some cryptocurrency for the use of their processing capabilities, perhaps by users located ‘near’ them in network topology. There might even be a live auction for access to these resources, either those with ‘jobs’ bidding for access, or those with capacity bidding on jobs.”
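The auction described above can be sketched as a toy matching model: capacity owners post asks, jobs post bids, and the cheapest compatible ask wins, with the provider paid in tokens. This is my own minimal illustration of the concept, not a description of any real protocol; all names and prices are made up.

```python
# Toy model of a decentralised compute auction (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provider:
    name: str
    ask_tokens_per_gpu_hour: float  # minimum price the owner will accept
    balance: float = 0.0            # tokens earned so far

@dataclass
class Job:
    name: str
    bid_tokens_per_gpu_hour: float  # maximum price the job will pay
    gpu_hours: float

def match(job: Job, providers: list) -> Optional[Provider]:
    """Assign the job to the cheapest provider whose ask fits the bid."""
    candidates = [p for p in providers
                  if p.ask_tokens_per_gpu_hour <= job.bid_tokens_per_gpu_hour]
    if not candidates:
        return None
    winner = min(candidates, key=lambda p: p.ask_tokens_per_gpu_hour)
    winner.balance += winner.ask_tokens_per_gpu_hour * job.gpu_hours
    return winner

providers = [Provider("alice", 0.8), Provider("bob", 0.5), Provider("carol", 1.2)]
job = Job("render-scene-42", bid_tokens_per_gpu_hour=1.0, gpu_hours=3)
winner = match(job, providers)
print(winner.name, winner.balance)  # bob wins and earns 0.5 * 3 = 1.5 tokens
```

A real network would add the pieces this sketch omits: verifying that the work was actually done, settling payment on-chain, and weighting matches by network proximity as the quote suggests.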
Matthew uses Render and the RNDR token as an example. OTOY’s OctaneRender is a best-in-class render engine that makes it possible to modify scenes in real time. To take advantage of that, OTOY’s customers need access to powerful real-time processing capabilities. The Render protocol runs on Ethereum and uses the RNDR token to auction off idle GPU capacity, with all the negotiations handled by the protocol in the background.
A similar system can work with compute. After all, blockchains are computers.
Idle compute in our smartphones, laptops and other personal devices will be continuously auctioned off, in the background, delivering the massive amounts of processing capacity that the immersive and persistent Metaverse will require.
“You come to the realization that the blockchain is really a general mechanism for running programs, storing data, and verifiably carrying out transactions. It’s a superset of everything that exists in computing. We’ll eventually come to look at it as a computer that’s distributed and runs a billion times faster than the computer we have on our desktops, because it’s the combination of everyone’s computer.” - Tim Sweeney (2017)
It’s reminiscent of BOINC and the SETI@home project, the classic distributed-computing idea... but now, blockchain and crypto incentives can give it wings!