The Challenge of Simulating Grain in Film Stocks of the Past

A film grain effect applied to a stock image - source: https://pxhere.com/en/photo/874104

After a while, movie lovers who appreciate classic as well as new films tend to develop an intuitive understanding of at least the rough era, certainly the decade, in which a film they are watching was made. For some eras, the cultural and technical changes wrought over just a single decade can significantly transform the look and feel of the movies made in that time-span.

For instance, the ten years constituting the 1960s took American cinema out of the era of crowd-pleasing musicals, through nuclear paranoia, and right up to a national crisis of identity, brought on by the consequences of the Vietnam war and by a shocking series of assassinations of the emerging political and spiritual leaders of the time. Arguably, no other decade wrought greater change in Hollywood’s output.

The changing faces and moods of American cinema during only eight years of the 1960s, from the booze-ridden Rat Pack era through to the rise to prominence of counter-culture.

Further developments during the 1960s, such as the relaxation of censorship laws toward the end of the decade, and the subsequent growth of profanity and nudity in USA-based movies, make it even easier to pin down the rough year in which a movie was made.

Against the Grain

But together with rapidly-changing fashions and other obvious historical signs, notable developments in film emulsion technologies likewise hallmark the decade – and most other decades since motion pictures were invented.

Besides the decline of black-and-white movies in that era, major releases from the 1960s are characterized by a distinctive look, defined by the choice of color film stock and by the efforts of cinematographers to exploit, and to work around, the limitations of these emulsions – including how ‘grainy’ the film stock is.

The history of film grain is not a linear progression away from graininess. Many more recent productions have favored a grainy appearance for stylistic reasons. In the case of James Cameron's 'Aliens' (1986), among the grainiest mainstream movies of the 1980s, the film was shot on one of the last batches of Kodak's large-halide Eastman 400T 5295 Neg. Film and Eastman Color Negative 400T 5294 Film - which may have helped VFX supervisors the Skotak Brothers to hide the many strings supporting the practical on-set models used in the movie.

There are three core characteristics at play in the history of film emulsions – light sensitivity (ISO/ASA values); color gamut; and film grain size and behavior. In many ways, these are interrelated facets.

Light-sensitive silver halide crystals are the basis of all film stocks. In the single stratum of a black-and-white stock, these darken according to the amount of light to which the gelatin emulsion suspending them is exposed; in color film stock, three layers of silver halide are additionally coated in interlayers that filter specific colors out of the incoming light, so that yellow, cyan and magenta layers are distinctly registered, creating a composite color image.

The larger the halide crystal (grain), the more sensitive it is to light. As emulsion technology developed, it became possible to create film stocks that retained reasonable sensitivity to light while using smaller halides, reducing the grainy appearance of the footage.

In color stock film, three sandwiched layers of color-sensitive monochrome halides are reinterpreted at processing time into genuine color layers (see embedded video above for more detail). However, despite the use of color in these illustrations, each layer is essentially monochrome, with color added at the developing stage, and targeted to each assigned color stratum. Source: https://www.youtube.com/watch?v=I4_7tW-cx1I

Nonetheless, the smallest halides, used in low-sensitivity film stocks (such as 64 or 25 ASA stocks), produced the crispest and least grainy images – though it was then necessary to pump far more light into the scene, or, in location shooting, to wait for very bright weather; or, in one case, as director Stanley Kubrick did for his 1975 period-piece Barry Lyndon, to utilize the fastest light-gathering lens in the world at that time, originally developed for NASA.

Conversely, as the growing trend towards location shooting developed from the late 1960s through the 1970s, and as halide size in ‘faster’ film stocks became progressively smaller, it was increasingly possible to trade off obvious film grain against the convenience of shooting in ‘available light’ – even at night, in outings such as Taxi Driver (1976).

In the mid-1970s, director Martin Scorsese used 35mm Eastman Color Negative 100T 5254/7254 film stock for 'Taxi Driver', enabling live shooting on the night-time streets of New York. 100 ASA is a fairly low-sensitivity film, and while the stock was able to adequately capture neon-lit scenes such as this, driving-based footage from the dark streets of 1970s NYC shows only the broadest detail.

The color gamut represents the range of tonalities within a color that can be captured in color film stock. In early film stocks, this tonality tended to be very limited, often giving faces a ‘comic-strip’ style appearance, as if they had been inked in with a single color.

A broad illustration of the increasing capabilities of color gamut in film stock over the decades. We can see in the earliest examples the lower number of flesh tones that stock was initially capable of rendering. We can also note the increased capability of faster, finer-grained film stocks to allow non-frontal and more nuanced lighting.

The limitations of the color gamut of a film stock are challenging, but not impossible to reproduce, since the process is destructive (i.e., you take a modern, high-tonality video source and blend the fine-grained tonalities into the more ‘monotonous’ colors that are characteristic of an older emulsion).

To facilitate imitation of the color range of these older stocks, Lookup tables (LUTs) and Color Lookup Tables (CLUTs) have long been standard tools in digital post-production and grading pipelines, and particularly in processes designed to create looks of the past.

CLUTs basically consist of mappings that direct complex color inputs into a (usually more limited) target gamut. For the retro-themed Disney+ Avengers spin-off WandaVision, director of photography Jess Hall used a series of evolving, bespoke LUTs to match the digital footage to the colors typical of 1960s and 1970s television output. Speaking to VFX Voice, Hall noted:

‘I worked meticulously with Mark Worthington and [costume designer] Mayes Rubeo to find the right balance of color in the frame and to accent this using complimentary colors. The Episode 3 LUT stretched the color separation even further, and then in the DI we used Resolve to push certain colors a little further or shift their hue to a more pastel and secondary realm.’

A total of 23 different LUTs were created for the limited series. ‘When it came to the colour work,’ Hall says, ‘I was able to take [SD and HD production stills] and actually analyse RGB values in the colours I was seeing, and basically refine them into a 20-colour palette for each era.’
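To make the principle concrete, the minimal Python sketch below remaps a modern RGB frame into a small, fixed 'period' palette by nearest-color lookup – the essential, destructive mechanism of a CLUT. The eight-entry palette, filenames and values here are purely illustrative placeholders, not measured data from any real stock or from the WandaVision grade.

```python
# A minimal sketch of CLUT-style remapping, assuming an 8-bit RGB source image.
# The palette below is a hypothetical 'period' palette, not measured stock data.
import numpy as np
from PIL import Image

def apply_palette_clut(img: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Map every RGB pixel to its nearest entry in a limited target palette."""
    flat = img.reshape(-1, 3).astype(np.float32)                              # (N, 3)
    dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)    # (N, P) distances
    nearest = np.argmin(dists, axis=1)                                        # closest palette entry per pixel
    return palette[nearest].reshape(img.shape).astype(np.uint8)

if __name__ == "__main__":
    palette_1960s = np.array([                       # illustrative 8-color 'era' palette
        [226, 188, 150], [188, 128, 96], [140, 84, 60], [96, 60, 48],
        [60, 80, 110], [40, 44, 52], [210, 200, 60], [120, 140, 90],
    ], dtype=np.float32)
    frame = np.array(Image.open("frame.png").convert("RGB"))
    Image.fromarray(apply_palette_clut(frame, palette_1960s)).save("frame_graded.png")
```

A production CLUT would contain thousands of entries and interpolate between them rather than snapping to a handful of swatches, but the many-to-few nature of the mapping is the same.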

By the time we arrive at the grain facet of a film stock’s appearance, it has become rather bound up with the prior elements. However, the way that a color stock behaves at different apertures and in different lighting conditions (bright lighting, for instance, tends to minimize the graininess effect) is part of its unique footprint, or signature effect.

The light sensitivity and dynamic range of old film stocks can be recreated, to a certain extent, by constraining the tolerance of the grayscale value in each channel, effectively degrading the amount of light/detail registered.

As with full-fledged CLUTs, it is relatively easy to simulate ISO-based detail from older images, by degrading the range displayed in a modern and high-quality digital image/frame. Source (CC0): https://stocksnap.io/photo/woman-female-JDXWHY8CIN
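A rough sketch of that kind of degradation follows, assuming a floating-point RGB frame in the 0–1 range: each channel is clipped to a narrow tonal band and re-stretched, discarding the shadow and highlight detail that an older, less sensitive stock could not have registered. The band limits are illustrative guesses rather than measured characteristics of any particular emulsion.

```python
# A minimal sketch of constraining tonal latitude, assuming a [0,1] float RGB image.
# The 'low' and 'high' limits are illustrative, not taken from any real stock data.
import numpy as np

def constrain_latitude(img: np.ndarray, low: float = 0.12, high: float = 0.85) -> np.ndarray:
    """Crush the detail outside a narrow tonal band, imitating a narrower film latitude."""
    x = np.clip(img.astype(np.float32), low, high)   # discard detail an older stock could not register
    return (x - low) / (high - low)                  # re-stretch so the result still fills the display range
```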

The color gamut of a classic film stock can, as we have seen, be imitated by CLUTs and other grading procedures, as well as by synchronizing the color palette of the production design with the limited colors of film stocks of yore. In the case of Amazon retro outing The Marvelous Mrs. Maisel, the footage was graded on set, using a consistent bespoke CLUT, in accordance with the Color Decision Lists (CDLs, a standard from the American Society of Cinematographers).

But imitating the signature graininess of a stock is a rather harder prospect, because, as we’ll see, the physical model of halide-based footage differs notably from RGB, its digital successor, and cannot be ‘downgraded’ or ‘side-graded’ into an authentic appearance from raw modern footage, as other aspects of the stock can.

Stock Footage

Films such as Woody Allen’s Zelig (1983) and Robert Zemeckis’ Forrest Gump (1994) periodically attempt to recreate the look of old film stocks, for narrative purposes; but even when nostalgia is not the intent, many modern directors, raised on (and steeped in) classic cinema, seek that ‘celluloid look’, even when they are shooting (much more economically and conveniently) on digital video.

For instance, director Denis Villeneuve shot his entries to date in the Dune franchise on digital video, transferred them to celluloid stock, and then digitized the transferred footage back into a modern digital editing pipeline – a great deal of effort, in order to capture the atmospheric qualities of analog media.

At the same time, A-list director Christopher Nolan continues to shoot all his output on celluloid, actively campaigning for the industry not to abandon the older medium in new works.

Additionally, esteemed cinematographer Darius Khondji (City Of Lost Children, Se7en, Alien Resurrection) has said that the request to add analog grain to digital output is widespread:

‘Most movies shooting digitally want to add grain. In fact, there’s only one digital feature I’ve graded that didn’t add grain.’

To add grain to the 2019 Netflix outing Uncut Gems, Khondji shot a plain grey background on film stock, which was then pushed to accentuate the grain effect. He then integrated the grain into the smoother digital footage taken for the production, using a two-minute loop of grain footage – long enough that viewers would not discern repetition in the grain rendering.

Examples from Uncut Gems, showing the effect of adding real footage of film grain to digital footage. Source: https://theasc.com/blog/the-film-book/grain-ghost-frames2-uncut-gems-with-darius-khondji-asc-afc

Therefore, quite apart from the need to recreate the style of old footage, a great deal of effort could be saved if there were a convincing method of authentically simulating film stock from any era, whether classic or modern.

One can take an RGB image (for instance, a frame of film) and apply a random grain algorithm to each channel, in an application such as Adobe Photoshop or After Effects; but the result is notably ersatz, and does not resemble classic film grain:

Left, the original image; center, a single pass with Photoshop's Grain feature from the Camera Raw plugin, applied to all channels at once; right, the same Grain filter applied separately and successively to each channel. Source image (CCO): https://stocksnap.io/photo/woman-beach-FJCOO6JWDP

In the image above, the Grain feature in Photoshop’s Camera Raw filter is used; in the central image, the filter is applied all at once to the entire image; in the right-most picture, the filter, which consists of the application of random noise, is applied separately and successively to each channel.
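For later comparison, here is roughly what such a naive digital grain pass amounts to in code – independent, zero-mean noise added to each channel at pixel scale, with no notion of halide size or of how the three emulsion layers would actually have responded:

```python
# A sketch of the naive 'overlay' approach: per-channel, pixel-scale random noise.
import numpy as np

def naive_channel_grain(img: np.ndarray, strength: float = 0.08, seed: int = 0) -> np.ndarray:
    """Add independent zero-mean uniform noise to each channel of a [0,1] float RGB image."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32).copy()
    for c in range(3):                                   # each channel gets its own, unrelated noise field
        noise = rng.uniform(-strength, strength, size=img.shape[:2])
        noisy[..., c] = np.clip(noisy[..., c] + noise, 0.0, 1.0)
    return noisy
```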

However, as has been pointed out by other critics of digital-only grain emulation, halide crystals come in varying sizes – a characteristic not reproduced by a typical ‘random noise’ digital procedure of this kind, which usually restricts grain emulation to pixel-level sizes.

In any case, even if it were possible to store and apply exactly the same random noise pattern at each application, it is obvious that this ‘grungy’ effect is a digital filter – the noise has no cohesion across channels, even when the filter is applied in a single pass, and the result does not look like authentic color negative or positive film.

That’s because this is an overlay manipulation of a picture that has already been taken, whereas genuine film grain is a literal snapshot of a moment in time when all the silver halide layers crowded into the celluloid were activated.

What’s needed is some way of guessing which parts of a picture would be attributable to each halide layer, if the picture had been real, and isolating and acting on those layers, instead of using RGB separation.

Identifying Halide Layers in Color Stock

Los Angeles-based VFX professional Beat Reichenbach has recently leveraged two of the very small number of research papers to have addressed ‘authentic’ film grain simulation, and, in a post on LinkedIn in May, demonstrated some basic examples of grain simulation. We chatted with Beat about this work; but first, let’s see what was involved in the challenge.

There is no single pass that can recreate the film grains of the past, since film does not capture imagery in a single pass, but instead registers each color stratum on a dedicated sub-layer of film. The challenge, therefore, is to identify the pertinent halide layer solely from each of the color layers – difficult, since each color layer is in itself a black-and-white image.

Therefore Reichenbach turned to a seminal 2019 paper from EPFL, KIT and the famed VFX company Weta Digital, which facilitates a zero-error mapping of textures from tristimulus colors (the three-value color representations, such as RGB, native to conventional digital imaging) into the spectral domain, where they can be acted upon.

The 2019 work facilitated a mapping from the otherwise opaque tristimulus space into modern spectral ranges. Source: https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Jakob2019Spectral_3.pdf

Once the spectral distribution has been guessed, it is possible to use a French method developed in 2017 to apply an apposite and authentic grain effect to each of the three color layers, to obtain a synthesized simulation of film grain.

The French work offers two approaches to a film grain rendering algorithm, using Monte Carlo simulation (a compute-intensive random-sampling technique that has long been a common feature of traditional CGI rendering) to determine the value of each rendered output pixel, which allows complete control over the intensity of the film grain effect.
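The heavily simplified Python sketch below conveys the core intuition of that approach: each pixel's brightness sets the density of randomly placed, disk-shaped 'grains', and the output value is a Monte Carlo estimate of how much of the pixel those disks cover. The real IPOL algorithm additionally handles grains that straddle pixel boundaries, filtering, and resolution independence, so this should be read as an illustration of the principle rather than a reimplementation of the paper (and, being pure Python loops, it is slow).

```python
# A simplified, per-pixel sketch of a Boolean grain model with Monte Carlo coverage estimation.
# Each pixel is treated as a unit square; grain_radius is in pixel units. Illustrative only.
import numpy as np

def grain_channel(channel: np.ndarray, grain_radius: float = 0.08,
                  samples: int = 64, seed: int = 0) -> np.ndarray:
    """Render one grayscale channel (values in [0,1]) as Monte Carlo grain coverage."""
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    area = np.pi * grain_radius ** 2
    u = np.clip(channel.astype(np.float32), 0.0, 0.999)
    # Choose the Poisson intensity so that expected coverage matches the input value:
    # 1 - exp(-lambda * area) = u  =>  lambda = -ln(1 - u) / area
    lam = -np.log(1.0 - u) / area
    out = np.zeros_like(u)
    for y in range(h):
        for x in range(w):
            n_grains = rng.poisson(lam[y, x])                        # number of grains in this pixel
            if n_grains == 0:
                continue
            centers = rng.uniform(0.0, 1.0, size=(n_grains, 2))      # grain centers within the pixel
            pts = rng.uniform(0.0, 1.0, size=(samples, 2))           # Monte Carlo sample points
            d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            covered = (d2 <= grain_radius ** 2).any(axis=1)          # is each point inside any grain?
            out[y, x] = covered.mean()                               # estimated covered fraction
    return out
```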

The accompanying video for the 2017 IPOL paper, which provides a method of authentically adding film grain, but not of identifying the correct pixels relating to the silver halide layers of film stock.

In the images below, kindly produced for this article by Beat Reichenbach, the 2019 method is used to individuate target layers that correspond to (what would be) the related halide layers, and the 2017 IPOL method is used to impose the film grain effect on these ‘separations’ (by addressing and ‘graining’ each color layer separately, and then concatenating each altered layer into the appropriate RGB channel).

The effect of increasing grain applied to an open source image. The grainiest effect would be equivalent to the highest-speed film stock, which would be the most light-sensitive and versatile, but the least refined, in terms of film grain. Original image source (CC0 license): https://stocksnap.io/photo/female-portrait-UOBQKFXUIG Manipulations courtesy of Beat Reichenbach, who conducted them as an illustrative aid for this article.

This is a sharp contrast to the earlier example of adding ‘overlay’ grain via Photoshop’s Grain filter.

‘An overlay,’ Beat Reichenbach explains, ‘will just make a pixel a bit darker or brighter from where it is. For example it adds random values in the range of [-0.1, 0.1], meaning the average pixel will still be the same color. So an average brightness of 0.2 is made up of all the pixels being roughly in a range of [0.1, 0.3]. There are no black areas, just gray areas that are a bit darker.’

‘The grain model from the 2017 paper,’ he continues, ‘is generating dots with a density that is based on the brightness. Those dots are either 0 or 1. The average brightness of 0.2 for example comes from 80% of space being black (0) and 20% of space being filled with dots (1).’
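The distinction is easy to verify numerically. In the toy comparison below, both approaches preserve an average brightness of 0.2, but the overlay keeps every sample hovering near 0.2, while the dot-density model produces only pure black and pure white samples in roughly an 80/20 ratio:

```python
# A tiny numerical illustration of 'overlay noise' versus 'dot density' grain.
import numpy as np

rng = np.random.default_rng(1)
brightness, n = 0.2, 100_000

overlay = np.clip(brightness + rng.uniform(-0.1, 0.1, n), 0.0, 1.0)   # values stay near 0.2
dots = (rng.uniform(0.0, 1.0, n) < brightness).astype(np.float32)     # ~20% ones, ~80% zeros

print(f"overlay: mean={overlay.mean():.3f}, min={overlay.min():.2f}, max={overlay.max():.2f}")
print(f"dots:    mean={dots.mean():.3f}, distinct values={sorted(set(dots.tolist()))}")
```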

Color gradients manipulated at varying simulated ISO intensities by Beat Reichenbach, illustrating the more decisive and authentic choices that halide layer targeting can achieve. Images courtesy of Beat Reichenbach.

To a certain extent, one can try out Reichenbach’s method using the online demo for the 2017 paper, by converting each of the RGB channels in an image into a separate monochrome image, running each through the online processor, and pasting the processed images back into the respective RGB channels of the original document.

I have done so in the example below, and was able to produce a more effective kind of film grain than by the use of Photoshop’s Grain filter alone – but this method lacks the halide layer-targeting provided by the 2019 research, which Reichenbach has implemented into a production system that he may be releasing soon.

By separating the source image into three grayscale images, each representing a constituent color channel, processing each image through the French 2017 method, and re-compositing the output into the correct channel, a more authentic grain can be obtained.

Though we can see that the example above is more convincing than the earlier example of the same image processed in the same way with the simple Photoshop Grain filter, the initial selection of pixels was simply via the existing RGB channels in the image, rather than more intelligent estimations of per-color halide layers, as Reichenbach has implemented from the 2019 work.
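For anyone who would rather script this split-and-recombine workflow than perform it by hand, a rough sketch follows. It assumes that the grain_channel() function from the earlier Monte Carlo sketch (or any other per-channel grain routine, such as the IPOL reference implementation) is available in scope to stand in for the online processor.

```python
# A sketch of the manual workflow: split RGB into three grayscale images, grain each,
# and re-composite into the original channels. grain_channel() is assumed from the sketch above.
import numpy as np
from PIL import Image

def grain_per_channel(path_in: str, path_out: str) -> None:
    rgb = np.array(Image.open(path_in).convert("RGB"), dtype=np.float32) / 255.0
    grained = np.stack(
        [grain_channel(rgb[..., c], grain_radius=0.1, samples=64, seed=c) for c in range(3)],
        axis=-1,                                          # return each grained layer to its own channel
    )
    Image.fromarray((np.clip(grained, 0.0, 1.0) * 255).astype(np.uint8)).save(path_out)
```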

Reichenbach told us that one issue with the 2019 halide layer estimation method is that highlights and specularity are likely to interfere with the process, and produce incorrect results – though the method is effective with reflectance data.

‘During this whole process,’ Reichenbach says*, ‘I do several tricks by working in ratios to always preserve the final sRGB color. So instead of reconstructing the sRGB based on the different emulsion layers, I investigate what percentage of the final result is in the red, green and blue layer.’

‘So,’ he continues, ‘even if my absolute values are off, using the ratios, I always perfectly recreate the same output as the input. It’s just essentially coloring my grain correctly and coloring my halation correctly.’

Reichenbach notes too the relatively small number of projects that address the challenge. In 2007, research from the UK’s Bournemouth University offered a simpler method of halide approximation, using a noise power-spectrum model, and also integrated a grain-imposition method – though the research does not seem to have been developed since then.

The noise power-spectrum model treats grain as noise added to an average density, and the 2007 Bournemouth research leveraged this as a method of adding more authentic grain. Source: https://eprints.bournemouth.ac.uk/10547/1/grain.pdf

One problem with film grain is that it tends to get ‘smoothed out’ by video codec compression (similarly to the way that machine learning vision systems tend to ‘discount’ grain – see ‘Machine Learning and Film Grain’ below). This can be avoided by using only minimal compression, so that the file size of the movie or TV show is very large – but this makes the output unsuitable for distribution via streaming, as it would take too much bandwidth.

Therefore, as Reichenbach points out, Netflix, among other producers, uses the film grain synthesis tool in the popular AV1 video compression codec – which removes the grain before encoding and re-synthesizes it from signaled parameters at playback – to preserve or restore film grain at deliverable sizes (i.e., a size reduction of around 30%).

Grain applied via the AV1 grain tool. Source: https://norkin.org/pdf/DCC_2018_AV1_film_grain.pdf

American cinematographer Steve Yedlin (Star Wars: The Last Jedi, Looper, Knives Out), as Reichenbach points out, has developed a bespoke method for adding film grain to digital video:

‘[For] film grain, I’ve developed my own algorithm which, unlike many grain plug-ins that try to record and then repeat apparent grain geometry that happened to occur on one specific occasion on one specific strand of film.

‘I’ve taken an empirical data set of real film, studied the probability distribution for various grain amplitudes and emulated that probabilistic distribution of those amplitudes. Thus I use a totally empirical model of real film…But the algorithm is probability-based instead of geometry-based.’

However, Reichenbach states that though this method appears similar to his own use of the 2017 and 2019 papers, the proprietary nature of the approach makes it difficult to know if this technique is truly unique, or capable of truly capturing and reproducing a film grain signature for a particular stock.

The LA-based post-production house LiveGrain offers ‘Real Time Texture Mapping’, but it’s more akin to the earlier-cited Darius Khondji method than Reichenbach’s implementation (or than any other approach that seeks to individuate interpreted halide layers) in that it effectively applies film grain to masked areas.

‘What LiveGrain is doing,’ Beat Reichenbach comments, ‘is essentially masking out dark areas of the image, and overlaying grain captured on dark plates. And masking bright areas, and overlaying bright grain.’

Other popular current solutions include Invisipro’s InvisiGrain product – though example footage shows the imposed grain as a kind of ‘sizzling’ overlay, rather than an authentic reproduction of film stock; Dehancer, whose official examples likewise appear as a ‘busy overlay’ rather than as intrinsic film grain (see video below); and the DaVinci Resolve plugins FilmBox (which maps to the Kodak Vision3 chemical film ecosystem) and GrainLab – though neither of these gets completely out of the ‘uncanny valley’ of emulated film grain, except in fleeting moments or favorable circumstances.

Like InvisiGrain, Dehancer produces an ‘overlay effect’ of grain that does not necessarily match the appearance of true film grain.

A number of other Resolve plugins address this need, including photographer Tom Bolles’ Cineprint16.

Discussing this topic, Metaphysic’s senior VFX editor Rosa Sánchez reports being a fan of Juan Melara’s FilmUnlimited Powergrade, as well as of Mononodes’ script-based procedures for the DaVinci Color Transform Language (DCTL).

Machine Learning and Film Grain

Latent Diffusion Models (LDMs) such as Stable Diffusion have gained so much influence in the research literature, and in the research efforts of visual effects companies, over the past two years, that it seems obvious that machine learning approaches of this type could be of great benefit to the task of simulating film grain.

Unfortunately, film grain effectively qualifies as ‘noise’ to an LDM like Stable Diffusion, whose job it is to produce very high-quality images from base random noise patterns – and to leave as little of that noise behind as possible. In discarding noise originated by the system itself, an LDM is very likely to also treat film grain as an irrelevant artifact of the medium.

The denoising process native to Latent Diffusion Models such as Stable Diffusion operates in a way that is unlikely to be able to authentically capture grain, since individual halide grains are generally smaller than the finest detail the model is likely to obtain, and since the training process for the model will have treated any grain in source images as ‘noise’ to be discarded in favor of shape and color.

The problem is illustrated by LoRAs and checkpoints at the popular civit.ai open source model zoo: in models that attempt to simulate grain, in styles such as pointillism, it is very difficult to obtain ‘fine-grained’ output, because the entire aim of training is to discard the medium and to distill shapes, color combinations, and distinct detail. This makes it far easier to simulate the bold delineations of Vincent Van Gogh than the grain-like dots of Seurat.

A popular pointillism LoRA at civit.ai can handle large grains, since these have been trained in as separate objects; but fine grain is a greater challenge for diffusion-based methods. Source: https://civitai.com/models/121293?modelVersionId=131956

Though a number of models offer film grain simulation, there is no evidence from the showcased work that production-quality fine control can be obtained; and no indication, based on the way that LDMs work, that image-to-image methods could ever consistently apply even a basic film grain style to video frames without distorting the content of the original image/s.

Forays into ML-based grain addition have been made. For instance, InterDigital has released a dataset of film grain applied at five different intensities to ‘clean’ digital images, with the grain added via the aforementioned 2017 method (though apparently with no specific targeting of halide layers).

The dataset consists of 148,694 grain-free original images, and the five amended versions of each bring the total to 892,164 images, which seems suitable for machine learning usage (if one can accept the relative inauthenticity of the synthetic data).

In regard to the study of grain properties, as opposed to the ineffective use of pixel-sized noise, research strands in chemical engineering and related disciplines may prove helpful in producing datasets that embody a more intelligent and accurate understanding of the size and disposition of halide grains. Grains could then be treated as ‘final objects’ at training time, opening up the possibility of stock-specific halide mappings, where available material allows.

From a 2021 study, a comparison of grain clustering characteristics (not specifically related to film or halides) from a specialized cluster dataset – the potential beginning of acknowledging that halide particles are not 'dots', but varied and interlocking entities to be studied in a training process. Source: https://arxiv.org/pdf/2112.03348

In an AI-based ‘graining’ procedure resulting from this kind of study, any halide grain equivalent in size to a pixel would simply be color-mapped, much as occurs now for the various grain-adding procedures that operate at pixel-level; but any plausible halide that exceeded the size of a pixel would have to draw multiple pixels into a single color-value ‘halide island’, representing the genuine variegation of film stock (and most particularly where an older and less sophisticated stock is being simulated).

The prospect, then, is nothing less than the creation of at least 24 mosaic-based frame re-interpretations per second, from the original source footage.
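One speculative way to picture such a mosaic re-interpretation is sketched below: random grain centers are scattered at super-pixel scale, every pixel is assigned to its nearest center, and each resulting 'halide island' receives a single averaged color. The grain density here is an arbitrary placeholder; a real system would need stock-specific size and shape statistics rather than a uniform random scatter.

```python
# A speculative sketch of 'halide islands': a Voronoi-style mosaic of single-colored grains.
# The grains_per_megapixel figure is an arbitrary placeholder, not derived from any real stock.
import numpy as np
from scipy.spatial import cKDTree

def halide_mosaic(img: np.ndarray, grains_per_megapixel: int = 400_000, seed: int = 0) -> np.ndarray:
    """Re-render a [0,1] float RGB image as irregular, single-colored grain islands."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    n_grains = max(1, int(grains_per_megapixel * h * w / 1_000_000))
    centers = rng.uniform([0.0, 0.0], [h, w], size=(n_grains, 2))        # random grain centers
    yy, xx = np.mgrid[0:h, 0:w]
    pixels = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(np.float32)
    _, labels = cKDTree(centers).query(pixels)                           # nearest grain for every pixel
    flat = img.reshape(-1, 3).astype(np.float32)
    out = np.zeros_like(flat)
    for c in range(3):
        sums = np.bincount(labels, weights=flat[:, c], minlength=n_grains)
        counts = np.bincount(labels, minlength=n_grains)
        out[:, c] = (sums / np.maximum(counts, 1))[labels]               # each island gets its average color
    return out.reshape(img.shape)
```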

While color reference tables and technical data are available for a wide variety of modern and older film stocks (and are used in CLUT development and other grading technologies), the prospect of AI-based ‘grain lookup tables’ (presumably, ‘GLUTs’) might involve the development of novel methods to study, evaluate and break down the halide characteristics of particular film stocks – a challenging prospect when, in many cases (such as long-extinct emulsions), only exposed film is available to study, and no unexposed reels are available for standardized reference generation.

Beat Reichenbach, though not currently exploring ML approaches to grain addition, believes that any AI-based approach of this kind would likely need to individually target the density curves of each targeted color channel, and learn to interpret and transform wavelength data, presumably at some scale, even for emulation of a single film stock.

‘The important step,’ he says, ‘between guessing the spectral distribution and applying the grain is that we use spectral density curves from film stock to know what wavelengths each emulsion layer reacts to, to generate the emulsion layer. Then we apply the grain model to that emulsion layer.’

Since each color channel is effectively grayscale, this kind of targeting, at scale, may require the formulation of a new kind of mapping; what the anchors would be for such a mapping, if the Monte Carlo method alone proved insufficient, is not clear.
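A sketch of the intermediate step Reichenbach describes might look like the following: a per-pixel spectral distribution (however it was estimated) is weighted by each emulsion layer's spectral sensitivity curve to yield the three grayscale 'layer exposures' that a grain model would then act upon. The Gaussian sensitivity curves here are invented placeholders, not measured curves for any real stock.

```python
# A sketch of weighting an estimated spectral distribution by per-layer sensitivity curves.
# The Gaussian curves are hypothetical stand-ins for real emulsion-layer sensitivities.
import numpy as np

wavelengths = np.linspace(400, 700, 31)                       # nm, in 10 nm steps

def gaussian_sensitivity(center: float, width: float) -> np.ndarray:
    s = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return s / s.sum()                                         # normalize so each curve integrates to 1

layer_sensitivity = np.stack([                                 # blue-, green- and red-sensitive layers
    gaussian_sensitivity(450, 30),
    gaussian_sensitivity(545, 35),
    gaussian_sensitivity(640, 40),
])                                                             # shape (3, 31)

def layer_exposures(spectral_image: np.ndarray) -> np.ndarray:
    """spectral_image: (H, W, 31) per-pixel spectral power distribution.
    Returns (H, W, 3) grayscale exposures, one per emulsion layer."""
    return np.tensordot(spectral_image, layer_sensitivity, axes=([2], [1]))
```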

At the current state of the art, the various sleight-of-hand methods available are capable of broadly simulating classic film stocks, at a pinch; and while the Villeneuve/Nolan approach undoubtedly achieves authentic film stock appearance, transferring digital output to celluloid (and back to digital) at scale is not practicable for the average production budget.

To date, as a personal observation, I have yet to see the true appearance of classic film stocks emulated with 100% effectiveness, although CLUT mapping, together with deliberate palette limitations in production design (effected in retro outings such as Ratched, The Marvelous Mrs. Maisel and Masters of Sex) go a long way to approximating the authentic experience. It remains to be seen whether AI will be able to overcome the practical considerations of the challenge, and get us to the final hurdle – completely convincing and stock-specific emulsion simulation.

Thanks to Beat Reichenbach for his cooperation and feedback, to Metaphysic’s Jerom Root for pointing out this interesting topic, and to Chris Turner, Rosa Sánchez, Jo Plaete, and other Metaphysic staff, for contributing ideas.

* My inclusion of explanatory hyperlink/s.

GrainLab’s site offers impressive static images, but avoids video samples, which is where the inauthenticity of the ‘sizzling’ effect tends to be more obvious.
