An Introduction to Hyperreal Synthetic Media

By Thomas Graham

At Metaphysic, our mission is to bring hyperreal synthetic media to the metaverse.

Throughout history, artists have leveraged technology to push the boundaries of self-expression. From the cave paintings at Lascaux to the mass-manufactured pop art of Warhol to the computer-driven CGI of Hollywood blockbusters, artists have consistently wielded new tools to expand what is creatively possible.

Today, this trend continues at the intersection of artificial intelligence, digital audio/visual media, and emerging technologies such as VR and blockchain. These developments have ushered us into the era of synthetic media, opening a new creative frontier for artistic exploration and commercial applications.

What’s the difference between synthetic media and hyperreal synthetic media?

Synthetic media is the catch-all term for the rapidly expanding universe of digital objects and experiences generated with input from artificial intelligence. One form of synthetic media being pioneered at Metaphysic is hyperreal synthetic media: AI-generated videos so realistic that they are often indistinguishable from footage captured on camera.

The term “hyperreality” was coined 40 years ago by the French philosopher Jean Baudrillard, who conceptualized it as “the generation of models of a real without origin or reality.” Baudrillard’s exploration of hyperreality was remarkably prescient, describing representations of a reality that never existed, representations that blur the line between reality and its simulation.

Metaphysic’s DeepTomCruise is seen by many as the benchmark for hyperreal synthetic media.

The stage is set for synthetic media

At the time Baudrillard was writing, the technologies we now associate with the creation of hyperreal objects — computers, the internet, AI, VR — were still in their infancy. But the foundations were emerging, and even in the early 80s it was clear that it was only a matter of time before these technologies would deliver on their promise.

That time has come. Over the past few decades, the cost of compute power has plummeted, more than half of the world has come online, the application of artificial intelligence in our phones, cars, gadgets, and video games has become commonplace, and breakthroughs in the development of AR and VR have made these technologies increasingly accessible to the general population.

From the extraordinary to the everyday

The proliferation of advanced technologies means they inevitably become part of everyday life, which causes them to “disappear” from our awareness. We don’t think of the smart toaster or internet-connected thermostat as the vanguard of an emerging internet of things; they are just a toaster and a thermostat. We don’t think of Siri as an artificial intelligence so much as an application that uses voice commands to deliver information. Photoshop was once considered revolutionary technology, but today the use of highly sophisticated photo-editing apps is mundane.

Like synthetic media today, Photoshop was viewed as a technological marvel when it first shipped in 1990.

Synthetic media is already here

Hyperreal synthetic media is currently turning heads because it represents a new technological milestone, as the global delight around Metaphysic’s recent DeepTomCruise videos demonstrates. But its rapid development and proliferation suggest that it will soon become commonplace in our lives.

In fact, people who are not directly involved in the creation of synthetic media are often surprised to learn just how much of it already exists. From Photoshop and social media filters to the tools behind Hollywood movie magic and video games, synthetic media has already begun to seamlessly integrate with the existing media environment and vanish from our conscious awareness.

This is a positive development insofar as it lets audiences engage more deeply with the content itself rather than fixate on the technical marvel of its production. But it can also make it difficult to appreciate the incredible work being done by creators of synthetic media at both a technical and artistic level.

Popular photo filters that utilise synthetic media are ubiquitous on social media platforms such as Snapchat and Instagram. Credit: Snapchat

The future is hyperreal

At Metaphysic, our mission is to build the future of hyperreal synthetic media as the engine of the metaverse and the next generation of digital content.

From avatars and NFTs to Hollywood movies and innovative advertising, we’re building AI technologies to help creators and companies move beyond the uncanny valley to generate hyperreal experiences and identities that people love and deeply connect with.

In this series of Synthetic Snapshot posts, we’ll explore how synthetic media is already changing the world and how hyperreal synthetic media will further transform the future of the metaverse, film, music, content creation, and advertising.
