The Metaverse & NFTs

About the author

Thomas Graham

What is the metaverse?

Over the last six months, the metaverse has become one of the most talked-about and frequently misunderstood topics around. So what exactly is it?

From a technical perspective, the metaverse is emerging as a set of persistent, 3D virtual environments that are linked together to create a virtual universe. A more straightforward description of the metaverse is that it is just the internet with a highly visual layer of interactive and immersive content on top. Coupled with web3 economic functionality, like decentralized blockchain payments and smart contracts, the metaverse promises a cornucopia of options for developers and creators to engage and incentivize users with exciting content experiences.
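The economic layer mentioned above can be sketched in a few lines. The class below is a purely illustrative, off-chain model of the core mapping inside an ERC-721-style NFT contract (token ID to owner), with a transfer rule enforced on every call; all names here are hypothetical, and a real smart contract enforces this logic on-chain rather than in Python.

```python
# Illustrative sketch of ERC-721-style ownership logic: a mapping from
# token ID to owner address, plus a transfer rule enforced on every call.
# Names are hypothetical; a real contract runs on-chain, not in Python.

class SimpleNFTLedger:
    def __init__(self):
        self._owners = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self._owners:
            raise ValueError("token already minted")
        self._owners[token_id] = owner

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, token_id, sender, recipient):
        # Only the current owner may transfer -- the invariant a real
        # smart contract enforces trustlessly.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

ledger = SimpleNFTLedger()
ledger.mint(1, "0xAlice")
ledger.transfer(1, "0xAlice", "0xBob")
print(ledger.owner_of(1))  # prints 0xBob
```

The point of the sketch is that ownership and transfer rules are plain data-structure invariants; what the blockchain adds is enforcing them without a trusted intermediary.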

The online game Roblox is one of the most popular metaverse platforms currently available. Credit: Roblox

From 2D to 3D

The metaverse is often linked to VR and AR because these technologies allow users to fully immerse themselves in a 3D environment. Inside these 3D worlds, users interact with each other, play games, and build communities. But the metaverse is more than 3D gaming platforms, and we don't need cumbersome VR goggles to interact with it today. A rapidly growing number of companies are working on 'metaverse' initiatives that may be built as 3D environments but will, for most users, ultimately be experienced through conventional 2D smartphone screens.

For example, large community-driven NFT projects, like Bored Ape Yacht Club and Meebits, along with the latest generation of upcoming drops, increasingly offer NFT holders and community members access to 3D assets and immersive experiences. However, most of the core user experience, including viewing persistent digital assets and chatting with distributed communities, still plays out on our smartphones. It will be some time until 3D engines, internet bandwidth, compute resources, and consumer hardware are advanced enough to let users experience fully immersive 'Ready Player One'-style metaverse content at scale.

The highly desirable Bored Ape Yacht Club NFT collection uses procedural generation across 170 character traits to create its 10,000 unique apes. Credit: Bored Ape Yacht Club
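The procedural approach behind collections like this is straightforward to sketch: define trait pools, then sample unique combinations. The trait categories and values below are hypothetical stand-ins (BAYC's real set spans roughly 170 traits), and `generate_collection` is an illustrative name, not any project's actual code.

```python
import itertools
import random

# Hypothetical trait pools -- stand-ins for the ~170 real traits spread
# across categories such as fur, eyes, hats, and clothing.
TRAITS = {
    "fur": ["brown", "golden", "zombie", "robot"],
    "eyes": ["bored", "laser", "3d-glasses", "sleepy"],
    "hat": ["none", "beanie", "crown", "halo"],
    "clothing": ["none", "sailor", "tuxedo", "hoodie"],
}

def generate_collection(size, seed=42):
    """Sample `size` unique trait combinations from the pools."""
    rng = random.Random(seed)
    # Enumerate every possible combination, then sample without
    # replacement so no two items share the exact same trait set.
    all_combos = list(itertools.product(*TRAITS.values()))
    if size > len(all_combos):
        raise ValueError("not enough combinations for requested size")
    chosen = rng.sample(all_combos, size)
    return [dict(zip(TRAITS.keys(), combo)) for combo in chosen]

collection = generate_collection(100)
print(len(collection))  # 100
print(len({tuple(a.items()) for a in collection}))  # 100 -- all unique
```

Sampling without replacement is what guarantees uniqueness; with 170 traits the combinatorial space is vast, so picking 10,000 distinct items is easy, and rarity emerges from how often each individual trait appears across the sampled set.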

Synthetic media and the metaverse

Synthetic media has a multitude of applications in the metaverse, allowing us to create user-specific content faster and with less human intervention. When we consider that the metaverse will be a more visual compute platform filled with user-generated content, it makes sense that AI models and synthetic media will play a large role in scaling immersive and interesting metaverse worlds. A good example today is how artists use AI models, such as GANs, to create generative art represented as NFTs. Other NFT collections use language models like OpenAI's GPT-3 to make their characters interactive and give them unique personalities.

We believe one of the most significant applications of synthetic media in the metaverse will be the generation of hyperreal personal avatars. Virtual representations of users are increasingly being generated with AI to produce life-like personas and developers are similarly exploring ways of wrapping AI around existing “dumb” avatars to bring them to life. To heighten the immersion of virtual beings in the metaverse, avatars can even be endowed with AI-generated voices to help users better express themselves or create a new identity entirely. Elsewhere in the metaverse, AI is being used to populate games with intelligent non-playable characters that can don clothing that was also fashioned by machine learning algorithms.

From surreal to hyperreal

Today, many new-generation metaverse avatars take cartoonish or stylized forms. However, hyperreal synthetic media will ultimately transform avatars into deeply personal reflections of our real-world selves, while still allowing us to change our appearance, such as our hair or clothing. For embodied meetings and phone calls in VR, hyperreal synthetic media provides the most compelling vision of a metaverse that can replicate the authenticity and intimacy of real-world interactions.

Facebook’s Codec Avatars provide a glimpse of how hyperreal synthetic media could change the game when it comes to metaverse interactions. Credit: Facebook

The importance of getting the metaverse right

Creating new content based on training data provided by a professional human performer, or scaling a person’s likeness beyond their original performance using AI, are hot topics of debate today. There is no doubt that significant effort is required to establish ethical, legal, and economic norms to ensure that the livelihoods, privacy, and online experience of creators and users are protected from abuse. We frequently work with policymakers and lawmakers on these issues and created Synthetic Futures to raise the level of discourse around the impact of synthetic media on society. Ultimately, we are hopeful that new web3 economic modalities and increased access to AI content creation tools will give users and creators more options to benefit from what they create, both financially and in terms of attribution and permissioning.

With the metaverse moving towards the mainstream, synthetic media will be integral to delivering the avatars, NFTs, and experiences that bring its worlds and communities to life. As the technology continues to evolve, hyperreal synthetic media, in particular, will be at the forefront of the metaverse’s evolution, providing ‘meatspace’ authenticity in a wholly digital world.
