AI is coming to Hollywood with Metaphysic

Robin Wright, Tom Hanks, Robert Zemeckis, Eric Roth. Here.

About the author

metaphysic

Metaphysic has been named the sole AI VFX provider for the highly anticipated major motion picture ‘Here’, produced by Miramax and directed by Robert Zemeckis.

Starring Tom Hanks and Robin Wright, this adaptation of Richard McGuire’s graphic novel reunites the original Oscar-winning team behind ‘Forrest Gump’, joined by Paul Bettany and Kelly Reilly.

So what is the story behind the story?

‘Here’ and there, Metaphysic AI tools are transformative

In an industry-first, ‘Here’ extensively incorporates hyperreal AI-generated face replacements and de-aging into the very fabric of its storytelling.

“I’ve always been attracted to technology that helps me to tell a story,” says director Robert Zemeckis.

“With ‘Here’, the film simply wouldn’t work without our actors seamlessly transforming into younger versions of themselves. Metaphysic’s AI tools do exactly that, in ways that were previously impossible!”

“Having tested every flavor of face-replacement and de-aging technology available today, Metaphysic are clearly the global leaders in feature-quality AI content, and the perfect choice for this incredibly challenging, emotional film.”

Our use of AI is essential to the storytelling behind ‘Here’. We believe this project has advanced the state of the art for generative AI, both driven by and serving the performances and storytelling in ‘Here’.

Automation

Automation on set with Metaphysic AI tools for AGT 2022.

Since the tech landscape for generative AI is moving incredibly fast, one of Metaphysic’s core advantages is our research and innovation team, which is a central part of that evolving conversation. One of their core technical goals is automation.

Automating AI pipelines is not difficult when the output is a response to a user prompt, and that output is arbitrary and need not be especially accurate. For example, ChatGPT may give wrong-ish answers, and Stable Diffusion models generate outputs that are only approximations of what was asked for.

At Metaphysic, by contrast, our models must be engineered and trained to create perfect outputs. Since everyone instantly recognizes when a face is ‘weird-looking’ or uncanny, our bar for accuracy is incredibly high.

The critical driver of accuracy lies in dataset featurisation, not post-processing.
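Metaphysic's actual featurisation pipeline is proprietary, but a standard first step in preparing any face dataset is landmark-based alignment: warping every crop so the eyes land on the same canonical positions before training. The sketch below is an illustrative, numpy-only version of that idea; the canonical coordinates and function names are assumptions, not Metaphysic's implementation.

```python
import numpy as np

# Canonical eye positions for a 256x256 aligned training crop.
# These exact coordinates are illustrative placeholders.
CANONICAL_EYES = np.array([[90.0, 108.0], [166.0, 108.0]])

def similarity_from_eyes(left_eye, right_eye, canonical=CANONICAL_EYES):
    """Return a 2x3 similarity transform (uniform scale + rotation + translation)
    mapping the detected eye landmarks onto the canonical positions."""
    src = np.asarray([left_eye, right_eye], dtype=float)
    sv = src[1] - src[0]              # vector between detected eyes
    dv = canonical[1] - canonical[0]  # vector between canonical eyes
    scale = np.linalg.norm(dv) / np.linalg.norm(sv)
    angle = np.arctan2(dv[1], dv[0]) - np.arctan2(sv[1], sv[0])
    c, s = np.cos(angle) * scale, np.sin(angle) * scale
    R = np.array([[c, -s], [s, c]])
    t = canonical[0] - R @ src[0]
    return np.hstack([R, t[:, None]])

def transform_points(M, pts):
    """Apply a 2x3 affine transform to an (N, 2) array of points."""
    return np.asarray(pts, dtype=float) @ M[:, :2].T + M[:, 2]
```

In practice the resulting matrix would be handed to an image-warping routine (e.g. OpenCV's `warpAffine`) so that every training frame is geometrically consistent before the model ever sees it.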


Likeness Accuracy

Much of our research and development is focused on automated dataset processing, optimization tools, and models – but the overriding goal is accuracy of representation. 

In this respect, Metaphysic is far ahead of the curve in several areas, including maintaining consistent results across scenes, extreme-angle robustness, real-time streaming, and obtaining results that work at feature-film resolution.

Likeness Accuracy with Metaphysic Live - DeepTomCruise

Perfect De-Aging

In the process of working on ‘Here’ and other projects, we’ve become experts in turning back time. In early 2021, Metaphysic launched www.everyany.one, which lets anyone create a hyperreal avatar of themselves and then age, de-age, or change its gender, powered by a Generative Adversarial Network (GAN).

Regarding de-aging, Metaphysic excels in creating AI models that can accurately represent how someone looked decades ago. The systems we’ve created use historical and synthetically generated data to train models that faithfully represent a person’s younger self, complete with micro-expressions.
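How everyany.one edits age internally is not public, but a widely used technique for GAN-based attribute editing is to move a latent code along a learned "age direction" before decoding it with the generator. The sketch below shows that idea with numpy only; the difference-of-means estimator and both function names are illustrative assumptions, not Metaphysic's method.

```python
import numpy as np

def age_direction_from_latents(young, old):
    """Crude 'age direction' estimate: difference of class means over
    GAN latent codes labeled young vs. old, normalized to unit length.
    (Illustrative; real systems typically fit a classifier boundary.)"""
    d = np.mean(old, axis=0) - np.mean(young, axis=0)
    return d / np.linalg.norm(d)

def edit_age(latent, direction, strength):
    """Shift a latent code along the age direction: negative strength
    de-ages, positive ages. The GAN generator then decodes the edited
    code back into an image."""
    return latent + strength * direction
```

The appeal of this approach is that a single scalar (`strength`) gives continuous control over apparent age, rather than requiring a separate model per target decade.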

Perfect De-Aging with Antoine De Caunes.

Metaphysic builds for Hollywood - Welcome to 'Metaphysic Live'

Building on our industry-leading generative AI technology for Hollywood, we are releasing our new product, ‘Metaphysic Live’. First deployed in production on ‘Here’, Metaphysic Live creates high-resolution, photorealistic face-swaps and de-aging effects on top of actors’ performances, live and in real time, without the need for further compositing or VFX work.

Streaming AI-generated photorealistic content that maps onto real-world scenes at up to 30 frames per second is a dramatic advance in face-replacement technology, and one that will be essential to creating immersive AR/VR, gaming, and entertainment experiences in the near future.
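To make the 30 fps figure concrete: real-time operation means the entire per-frame pipeline must complete within a fixed latency budget. The arithmetic below uses hypothetical stage names and timings (the real pipeline's stages and numbers are not public) purely to illustrate the constraint.

```python
# Real-time target: 30 frames per second leaves roughly 33.3 ms per frame.
FPS_TARGET = 30
FRAME_BUDGET_MS = 1000.0 / FPS_TARGET

# Hypothetical per-frame stage timings in milliseconds; placeholders
# chosen only to demonstrate the budget arithmetic.
stage_ms = {
    "capture_and_track": 5.0,
    "neural_face_render": 23.0,
    "blend_and_display": 4.0,
}

total_ms = sum(stage_ms.values())
within_budget = total_ms <= FRAME_BUDGET_MS  # 32.0 ms vs. ~33.3 ms budget
```

Any stage that overruns its share pushes the whole frame past the deadline, which is why streaming face replacement is a far harder engineering target than offline rendering.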

Kevin Baillie, Production Visual Effects Supervisor on ‘Here’, says,

“It is incredible to see Metaphysic’s AI-generated content flawlessly integrated into a shot live on set! The actors can even use the technology as a ‘youth mirror’ – testing out acting choices for their younger selves in real-time. That feedback loop and youthful performance is absolutely essential in achieving an authentic, delightful result.”  

Having already gained significant attention through our viral DeepTomCruise videos, Metaphysic went on to reach critical mass through history-making performances on America’s Got Talent (AGT), culminating in the showcasing of an AI-generated Elvis Presley. Bringing back The King showed the world the true potential and capabilities of hyperreal content, and how it can be used to push the entertainment industry forward. 

Elvis Presley brought back to life for AGT 2022.

Now, in our partnership with Miramax, we are bringing together astounding new advances in AI and the gifts of some of the best actors of our time to create truly compelling narratives.

To the Moon and Beyond

Over the last few months, Metaphysic has continued to evolve our technologies to become the leading generative AI platform for the entertainment industry – a journey that began in earnest on AGT, where, with every round, we were able to train and improve our models, and to add new inputs that further advanced their capabilities. 

The result of Metaphysic’s work on ‘Here’ will transport audiences.

We can’t say too much, but it will be like looking into the past and future simultaneously – wait and see! 


