AI is coming to Hollywood with Metaphysic

Robin Wright, Tom Hanks, Robert Zemeckis, Eric Roth. Here.

About the author

Metaphysic

Metaphysic has been named the sole AI VFX provider for the highly anticipated major motion picture ‘Here’, produced by Miramax and directed by Robert Zemeckis.

Starring Tom Hanks and Robin Wright, this adaptation of Richard McGuire’s graphic novel reunites the original Oscar-winning team behind ‘Forrest Gump’, and also stars Paul Bettany and Kelly Reilly.

So what is the story behind the story?

‘Here’ and there, Metaphysic AI tools are transformative

In an industry first, ‘Here’ extensively incorporates hyperreal AI-generated face replacements and de-aging into the very fabric of its storytelling.

“I’ve always been attracted to technology that helps me to tell a story,” says director Robert Zemeckis.

“With ‘Here’, the film simply wouldn’t work without our actors seamlessly transforming into younger versions of themselves. Metaphysic’s AI tools do exactly that, in ways that were previously impossible!”

“Having tested every flavor of face-replacement and de-aging technology available today, Metaphysic are clearly the global leaders in feature-quality AI content, and the perfect choice for this incredibly challenging, emotional film.”

Our use of AI is essential to the storytelling behind ‘Here’. We believe that this project has advanced the state of the art for generative AI, both driven by and serving the performances and storytelling in the film.

Automation

Automation on set with Metaphysic AI tools for AGT 2022.

Since the tech landscape for generative AI is moving incredibly fast, one of Metaphysic’s core advantages is our research and innovation team, which works at the centre of that evolving field. One of their core technical goals is automation.

Automating AI pipelines is not difficult when the output is a response to a user prompt and only needs to be approximately right. For example, ChatGPT may give wrong-ish answers, and Stable Diffusion models generate outputs that are only approximations of what was asked for.

At Metaphysic, instead, our models must be engineered and trained to create perfect outputs. Since everyone instantly recognizes when a face is ‘weird-looking’ or uncanny, our bar for accuracy is incredibly high.

The critical driver of accuracy lies in dataset featurisation, not post-processing.
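Metaphysic’s actual pipeline is proprietary, but one standard featurisation step in almost any face-swap dataset pipeline is aligning detected facial landmarks to a canonical template before training, so the model spends its capacity on identity and expression rather than head placement. Below is a minimal, illustrative sketch of that step using the Umeyama similarity transform; the names and data are hypothetical, not Metaphysic’s code:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    mapping landmark set `src` onto `dst`, i.e. dst ≈ s * src @ R.T + t.
    This is the Umeyama least-squares solution."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t
```

In a real dataset pipeline, `dst` would be a fixed canonical landmark template and every detected face crop would be warped by the recovered transform before being written to the training set.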


Likeness Accuracy

Much of our research and development is focused on automated dataset processing, optimization tools, and models – but the overriding goal is accuracy of representation. 

In this respect, Metaphysic is far ahead of the curve in several areas, including maintaining consistent results across scenes, extreme-angle robustness, real-time streaming, and obtaining results that work at feature-film resolution.

Likeness Accuracy with Metaphysic Live - DeepTomCruise

Perfect De-Aging

In the process, for ‘Here’ and other projects, we’ve also become experts in turning back time. In early 2021, Metaphysic launched www.everyany.one, which lets anyone create a hyperreal avatar of themselves and then age, de-age, or change the gender of that avatar, powered by a Generative Adversarial Network (GAN).

Regarding de-aging, Metaphysic excels in creating AI models that can accurately represent how someone looked decades ago. The systems we’ve created use historical and synthetically-generated data to train models that seamlessly represent an actor’s younger self and can generate a faithful likeness, complete with micro-expressions.
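The internals of these systems aren’t public, but a widely used way to edit attributes such as age with a GAN is to shift the latent code along a learned “age” direction and re-synthesise the face. The sketch below derives such a direction as a simple difference of class means over randomly generated toy vectors, which stand in for real GAN latents purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for GAN latent codes with binary age labels (hypothetical data;
# in practice these would come from an encoder plus an age classifier).
latents = rng.normal(size=(200, 512))
ages = rng.integers(0, 2, size=200)          # 0 = young, 1 = old

# A simple editing direction: the normalised difference of class means.
direction = latents[ages == 1].mean(axis=0) - latents[ages == 0].mean(axis=0)
direction /= np.linalg.norm(direction)

def edit_age(w, strength):
    """Shift a latent along the age direction; negative strength de-ages."""
    return w + strength * direction
```

The edited latent would then be fed back through the GAN generator; stronger or more disentangled directions are usually learned with a classifier rather than raw class means.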

Perfect De-Aging with Antoine De Caunes.

Metaphysic builds for Hollywood - Welcome to 'Metaphysic Live'

Building on our industry-leading generative AI technology for Hollywood, we are releasing our new product, ‘Metaphysic Live’. First deployed in production on ‘Here’, Metaphysic Live creates high-resolution, photorealistic face swaps and de-aging effects on top of actors’ performances, live and in real time, without the need for further compositing or VFX work.

Streaming AI-generated photorealistic content that maps onto real-world scenes at up to 30 frames per second is a dramatic advance in face-replacement technology, and one that will be essential to creating immersive AR/VR, gaming, and entertainment experiences in the near future.
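At 30 frames per second, each generated frame must be delivered within roughly 33 ms. A toy harness for checking whether a per-frame processing call stays inside that budget might look like the following; the processing function is a placeholder, not Metaphysic Live’s actual API:

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS   # ≈33.3 ms per frame

def run(process_frame, frames):
    """Process each frame, counting how many exceed the real-time budget."""
    late = 0
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)
        if time.perf_counter() - start > FRAME_BUDGET:
            late += 1
    return late
```

A production system would additionally pipeline capture, inference, and compositing across devices, but the per-frame latency ceiling is the same.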

Kevin Baillie, Production Visual Effects Supervisor on ‘Here’, says,

“It is incredible to see Metaphysic’s AI-generated content flawlessly integrated into a shot live on set! The actors can even use the technology as a ‘youth mirror’ – testing out acting choices for their younger selves in real-time. That feedback loop and youthful performance is absolutely essential in achieving an authentic, delightful result.”  

Having already gained significant attention through our viral DeepTomCruise videos, Metaphysic went on to reach critical mass through history-making performances on America’s Got Talent (AGT), culminating in the showcasing of an AI-generated Elvis Presley. Bringing back The King showed the world the true potential and capabilities of hyperreal content, and how it can be used to push the entertainment industry forward. 

Elvis Presley back to life for AGT 2022

Subsequently, in our partnership with Miramax, we are bringing together astounding new advancements in AI and the gifts of some of the best actors of our time to create truly compelling narratives.

To the Moon and Beyond

Over the last few months, Metaphysic has continued to evolve our technologies to become the leading generative AI platform for the entertainment industry – a journey that began in earnest on AGT, where, with every round, we were able to train and improve our models, and to add new inputs that further advanced their capabilities. 

The result of Metaphysic’s work on ‘Here’ will transport audiences.

We can’t say too much, but it will be like looking into the past and future simultaneously – wait and see! 


