This post is brought to you by…
NFTfi allows borrowers to put up assets for a loan and lenders to make offers in exchange for interest. The NFT is held in escrow so lenders know for sure that they will either get their money back with interest, or receive the NFT in exchange.
The third piece in the Primer focuses on networking and its role in the Metaverse, looking specifically at three areas - bandwidth, latency and reliability.
Networking is introduced here as “provisioning of persistent, real-time connections, high bandwidth and decentralised data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers.”
But don’t let such a technical introduction put you off: this is a really interesting part of Matthew’s Primer, one that lays bare the current limitations of our networks along with the potential solutions that might enable a persistent, real-time metaverse.
It is the constraints to networking that form the bulk of this article.
Bandwidth
Bandwidth is not speed so much as volume (think of the flow of water). A river and a stream might flow at the same speed, but one carries a far greater volume of water. That volume is bandwidth.
The article uses Microsoft’s latest Flight Simulator (FS) as an example to demonstrate just how high the technical requirements for the Metaverse will be. FS is nothing short of a modern gaming marvel. It contains:
2 trillion individually rendered trees
1.5 billion buildings (mostly captured by photogrammetry)
>2.5 petabytes of data, or 2,500,000GB. Cue the download
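To get a feel for why that number rules out a full download, here is a back-of-the-envelope sketch of how long 2.5 petabytes would take to pull down over typical home connections. The connection speeds are illustrative assumptions, not figures from the Primer:

```python
# How long would it take to download all of Flight Simulator's map data?
# Uses decimal units (1 PB = 1,000,000 GB) and assumes a perfectly
# sustained connection, so these are best-case numbers.

DATA_PETABYTES = 2.5
DATA_MEGABITS = DATA_PETABYTES * 1_000_000 * 1000 * 8  # PB -> GB -> MB -> megabits

def download_days(mbps: float) -> float:
    """Days to pull the full dataset at a sustained rate of `mbps` megabits/s."""
    seconds = DATA_MEGABITS / mbps
    return seconds / 86_400  # seconds per day

for rate in (100, 1000):  # e.g. 100 Mbps cable vs 1 Gbps fibre
    print(f"{rate} Mbps -> {download_days(rate):,.0f} days")
# Even at a gigabit per second, the full dataset is months of downloading.
```

Hence the streaming approach described below is not a nice-to-have; it is the only way the game can exist on consumer hardware.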
But wait, how is it possible to run this on consumer hardware?
Microsoft Flight Simulator works by storing a core amount of data on your local device… but when users are online, Microsoft then streams immense volumes of data to the local player’s device on an as-needed basis.
The reason FS2020 makes such a good example is that this data delivery method is not one used by most online multiplayer games. They tend to work by sending only simple data relating to player position/animations. By delivering rendering data as needed, FS2020 allows for hugely diverse environments and assets. This results in a visually stunning game without untimely buffering or download interruptions (assuming you have the connection; more on this later).
In these so-called ‘mirrorworlds’ like the FS environment, it’s still not enough just to send data on climatic conditions like cloud formations; the data needs to be precise and constantly updated. The same will be true for any future Metaverse: it must be persistent and shared in real-time. You can probably see where this is going.
Many players already struggle with bandwidth and network congestion for online games that require only positional and input data. The Metaverse will only intensify these needs.
What this tells us is that as the complexity and importance of virtual simulation grows, the amount of data that needs to be streamed will increase. Any Metaverse will be full of objects like the clouds in FS, with demands far beyond the simple assets found in games like Roblox. It’s myopic to equate what we see today with what the future requires, and if we want to interact in a shared environment, we will need to receive a superabundance of cloud-streamed data. The Metaverse demands it.
One down, two to go.
Latency
Matthew tees this section up by stating that latency is the biggest, but also least understood, challenge to networking. That’s because most of us won’t notice or care about 200ms delays between sending a WhatsApp message and getting the checkmarks confirming delivery. For the gamers among you, however, you’ll know that there are situations where latency means the difference between (virtual) life and death. In racing, sports and first-person shooter (FPS) games, latency is king, and the lower the better. The faster you can send and receive information, the better chance you have of winning.
Interestingly, the average person doesn’t even notice audio/video latency until something is delivered 45ms too early, or over 125ms too late. To put that in perspective, modern games are affected by anything over 50ms of ‘lag’. The issue for game companies is how sensitive gamers are to these changes, with a Subspace study finding that a 10ms increase in latency can reduce weekly play time by 6%!
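To make the Subspace figure concrete, here is a naive projection of what it implies for engagement. Treating "10ms costs 6% of weekly play time" as a compounding rule of thumb is my assumption for illustration; the study itself only reports the single figure:

```python
# Naive projection of the Subspace finding: every extra 10ms of latency
# cuts weekly play time by ~6%. Whether the effect compounds like this
# is an assumption made purely for illustration.

BASELINE_HOURS = 10.0  # hypothetical weekly play time at baseline latency

def projected_playtime(extra_latency_ms: float) -> float:
    """Projected weekly hours after adding `extra_latency_ms` of lag."""
    steps = extra_latency_ms / 10
    return BASELINE_HOURS * (0.94 ** steps)

for added in (10, 30, 50):
    print(f"+{added}ms -> {projected_playtime(added):.1f} hours/week")
```

Even under this crude model, 50ms of added lag, right at the threshold where games are noticeably affected, erodes a quarter of play time, which is why game companies care so much.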
Now while the gaming industry has developed some partial solutions, none of them scale particularly well, and as such will not readily help an emerging Metaverse. Achieving low latency is vital though: we can reasonably expect avatar-to-avatar communication to become commonplace, so capturing and sharing facial expressions in real-time will be important. We also expect to continue communicating globally with friends and family, as the world gets smaller and we meet more people online. This renders the gaming hack of dividing servers by geographic location useless, and a new solution is required.
So while the issue of latency is currently limited to niche games and therefore isn’t prioritised by hardware or service providers, the Metaverse is primed to bring it front and centre. With a potential user count in the billions, all clamouring for high-fidelity digital interactions, the current state of our networks won’t suffice.
But what about Starlink, I hear you cry! Unfortunately, SpaceX’s satellite network doesn’t solve the issue, and even increases latency over short distances (below a certain distance, it’s faster to send data through ground-based fibre than up to space and back). A satellite relay does however vastly improve access, allowing more people to participate in the Metaverse, even if it doesn’t improve the quality of the experience for current users.
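The "faster underground than to space" point falls straight out of the geometry. This sketch compares best-case one-way latency through fibre against a crude satellite-relay path; the speeds and the ~550km LEO altitude are rough assumptions for illustration, not Starlink specifications:

```python
# Rough physics: light travels ~200,000 km/s in fibre vs ~300,000 km/s
# in vacuum, and a LEO satellite sits at ~550 km altitude (assumed).
C_VACUUM_KM_S = 300_000
C_FIBER_KM_S = 200_000
LEO_ALTITUDE_KM = 550

def fiber_ms(distance_km: float) -> float:
    """Best-case one-way latency through ground fibre, in milliseconds."""
    return distance_km / C_FIBER_KM_S * 1000

def satellite_ms(distance_km: float) -> float:
    """Crude model: up to the satellite, across, and back down, all at c."""
    path = 2 * LEO_ALTITUDE_KM + distance_km
    return path / C_VACUUM_KM_S * 1000

for d in (500, 5000):
    print(f"{d}km: fibre {fiber_ms(d):.1f}ms vs satellite {satellite_ms(d):.1f}ms")
```

Under these assumptions, the fixed cost of the 1,100km round trip to orbit dominates short hops, while over intercontinental distances the faster speed of light in vacuum starts to win, which matches the trade-off described above.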
It’s another case of ‘Show me the incentive and I’ll show you the outcome’. Predictably, there are new technologies being worked on to cater for the rising need for low latency.
Subspace - deploys hardware to develop ‘weather maps’ and show low latency network paths.
Fastly - provides a content delivery network (CDN) for low latency applications.
So much like bandwidth, addressing global latency isn’t getting much attention today because there are ways around it for the affected applications. But when we consider what the Metaverse will require, it’s clear these improvements are needed. It remains to be seen if this is enough to create a business case for upgrading/spending on solutions to the problem in the near term.
Reliability
Measured in both overall uptime and consistency of bandwidth/latency. Any real-time persistent Metaverse will need to hit the mark on these in order to function and retain users. That’s it, that’s the section.
Conclusion
To pull it all together, the current network limitations are going to be a barrier to the rise of a fully immersive Metaverse. It’s a problem because the end result will require global comms, at high fidelity and low latency, with near-perfect uptime. Matthew gives an interesting example that reiterates how low latency requirements only apply to niche applications, and the existing crop of solutions don’t scale well. In a nutshell - one might read this article and point out that Netflix runs perfectly fine in 4K most of the time! That is only possible through a ton of clever engineering from Netflix though: compression, encoding and pre-loading of content.
Because interactive gaming data can’t be pre-loaded the way a film can, it’s always harder to cloud-stream 1GB of gaming data than 1GB of Netflix. What will change that? Well, the rise of the Metaverse will make the economic incentives for solutions to latency, bandwidth and reliability challenges more attractive. While the Metaverse itself doesn’t have competitive objectives, it will likely raise the requirements for all aspects of networking.
After all, “It doesn’t matter how powerful your device is if it can’t receive all the information it needs in a timely fashion.”