Metaphysic Joins Thorn and All Tech Is Human to Enact Strong Child Safety Commitments for Generative AI

April 23, 2024 — Metaphysic, alongside industry leaders such as Amazon, Anthropic, Civitai, Google, Meta, Microsoft, Mistral AI, OpenAI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies. This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to collectively tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children.

By adopting comprehensive Safety by Design principles, Metaphysic and its peers are setting new standards for the ethical development of AI, ensuring that child safety is prioritized at every stage. As part of the working group, Metaphysic has also agreed to release progress updates every three months.

This commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children.

“We stand with Thorn, All Tech Is Human, and the broader tech community in this essential mission, setting the standards and frameworks for responsible AI development,” said Beni Issembert of Metaphysic. “As we explore and embrace the potential of AI, we have a vital responsibility to ensure our technologies serve as a force for good, especially when it comes to protecting children. We are committed to integrating Safety by Design principles into our AI development processes.”

This collective action underscores a unified, proactive approach to child safety across the tech industry, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.

Thorn has published the principles at https://teamthorn.co/gen-ai. Additionally, select participating companies joined Thorn and All Tech Is Human to publish a comprehensive paper, “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which presents these principles alongside additional mitigations and details actionable strategies that AI developers, providers, data hosting platforms, social platforms, and search engines can take to implement them, making it more difficult for bad actors to exploit generative AI technologies for harmful purposes.

About Metaphysic

Metaphysic is the industry leader in developing AI technologies and machine learning research to create photorealistic content at internet scale. Recently named one of Fast Company’s Most Innovative Companies of 2024 and one of the TIME100 Most Influential Companies of 2023, Metaphysic is focused on the ethical development of AI to support the genius of human performance. Metaphysic’s latest innovation, Metaphysic PRO, is the world’s first digital likeness protection platform. It is designed to address critical aspects such as consent, compensation, copyright, and training data management. Metaphysic recognizes the importance of upholding ethical standards in AI and is dedicated to creating technologies that not only push boundaries but also prioritize the well-being and rights of individuals. Find out more about their technology at www.metaphysic.ai.

About Thorn

Thorn is a nonprofit that builds technology to defend children from sexual abuse. Founded in 2012, the organization creates products and programs to empower the platforms and people who have the ability to defend children. Thorn’s tools have helped the tech industry detect and report millions of child sexual abuse files on the open web, connected investigators and NGOs with critical information to help them solve cases faster and remove children from harm, and provided parents and youth with digital safety resources to prevent abuse. To learn more about Thorn’s mission to defend children from sexual abuse, visit thorn.org.

About All Tech Is Human

All Tech Is Human is a nonprofit dedicated to collectively tackling tech and society’s complex problems, such as mitigating the risks generative AI poses to children. The organization brings together key stakeholders across civil society, government, industry, and academia to understand values, trade-offs, and best practices. Based in NYC with a global audience and impact, All Tech Is Human runs a wide variety of activities focused on uniting a multistakeholder community, educating a broad audience, and diversifying the traditional tech pipeline to better match the complex problems we face. Learn more at AllTechIsHuman.org.
