HOW TO: Correctly Sequence Your Processors
Originally published in Emusician on 9/26/15 by Michael Cooper
In many applications, the order in which processing is applied to tracks is ruled by creativity. But in post-production for film and video projects, the main objective is improving the sound quality of tracks, many of which were probably recorded in uncontrolled environments. Unfortunately, daisy-chaining plug-ins in the wrong order during post-production can actually make these tracks sound significantly worse in some respects.
In this article, I’ll show you how to order processors correctly for the best possible sound on dialog tracks, since those usually present the biggest challenges in post-production. It’s not practical to suggest an ironclad order for chaining plug-ins for every dialog track, as each one may require something different. It’s more feasible to discuss how not to arrange plug-ins, as specific configurations will consistently degrade the sound or, at the very least, make your workflow considerably more difficult. This will be my approach, and I’ll use iZotope RX4 Advanced (a leading post-production software suite) to illustrate key points.
BEWARE THE NOISE FLOOR
It’s common for untreated dialog tracks to fluctuate widely in level. Resist the temptation to maximize, limit, or compress a dialog track before removing broadband noise using RX4’s Denoiser plug-in (see Fig. 1). Compressing the dynamic range makes low-level signal—including noise—louder relative to the peaks or average levels (or both) where the desired signal resides. A higher noise floor makes it all the more difficult to remove enough noise without introducing ugly artifacts. That is, the closer the noise floor is in level to the desired signal, the harder it becomes to set a noise-reduction threshold that separates the two so you can throw out the chaff while keeping the wheat.
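The arithmetic behind this is simple to check. The toy Python sketch below (my own illustration, not iZotope code; the threshold, ratio, and levels are made-up values) models a compressor with makeup gain and shows the signal-to-noise ratio shrinking:

```python
# Hypothetical sketch: why compressing dialog BEFORE denoising raises
# the noise floor relative to the desired signal. Levels in dBFS.
# Compressor: threshold -24 dB, ratio 4:1, makeup gain restoring the
# original dialog peak (all values illustrative).

def compress(level_db, threshold_db=-24.0, ratio=4.0):
    """Downward compression of a single level, in dB."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

dialog_peak = -12.0   # desired signal
noise_floor = -60.0   # broadband room noise

comp_peak = compress(dialog_peak)             # -21.0 dBFS
makeup = dialog_peak - comp_peak              # +9 dB to restore the peak
noise_after = compress(noise_floor) + makeup  # noise rides up with makeup

snr_before = dialog_peak - noise_floor        # 48 dB
snr_after = dialog_peak - noise_after         # 39 dB
print(snr_before, snr_after)
```

The noise floor ends up 9 dB closer to the dialog, which is exactly the gap the Denoiser threshold has to squeeze into.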
Similarly, let the film producer or videographer know that you prefer they don’t ride fader levels before you process the tracks. Any track they give you that has fader automation rendered will also have a noise floor that moves up and down in tandem with their fader action. The changing noise floor will make it harder for you to set an optimal threshold for the entire track in your noise-reduction software.
WHEN TO EQ
It’s also not a good idea to apply equalization to a track before removing broadband noise. Any EQ upstream of noise reduction will shape both the signal you want and the noise you don’t. That’s not a minor point when you consider that equalizer settings are often revisited and fine-tuned—sometimes multiple times—during mixdown. If you instantiate Denoiser post-EQ, you’ll have to readjust its threshold (or the breakpoints that adjust processing depth for each frequency band, if you’ve crafted a custom noise-reduction curve across the spectrum) every time you reset an EQ parameter. There’s enough to keep track of in post-production sessions without having to go back constantly and reset a noise-reduction threshold that was previously dialed in perfectly.
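To make the workflow cost concrete, here is a toy sketch (my own illustration; the numbers and names are invented, not RX4 parameters) of how revisiting an upstream EQ boost pushes a band's noise floor past a threshold that was previously dialed in:

```python
# Hypothetical sketch: an EQ boost placed upstream of a denoiser shifts
# the noise floor in that band, invalidating a previously tuned
# threshold. All values in dB and purely illustrative.

noise_floor = -60.0  # measured noise in some frequency band
threshold = -57.0    # denoiser threshold tuned 3 dB above that noise

def band_noise_after_eq(noise_db, eq_gain_db):
    """EQ shapes noise and signal alike, so the band's noise moves too."""
    return noise_db + eq_gain_db

# Threshold was set with the EQ flat: noise sits safely below it.
flat = band_noise_after_eq(noise_floor, 0.0)       # -60 dB, under -57

# Later in mixdown you revisit the EQ and boost the band by 6 dB:
boosted = band_noise_after_eq(noise_floor, 6.0)    # -54 dB, over -57
print(boosted > threshold)  # the old threshold no longer isolates noise
```

Put the Denoiser ahead of the EQ and the noise it sees never moves, so the threshold stays valid no matter how many times you retouch the EQ.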
Interviews recorded in small, live rooms are often plagued by boomy or muddy room tone. If you need to use RX’s Dereverb plug-in (see Fig. 2) to remove excess room tone—or reverb, for that matter—from a track, be sure to place the plug-in in an insert before equalization (and possibly in front of other processing, as well). Once you’ve attenuated the bassy room tone with Dereverb, you’ll often find the dry voice doesn’t need as much low-frequency EQ cut as you previously thought. On the other hand, if you apply EQ before Dereverb you might end up cutting the dialog track’s low frequencies too much. Even a bright voice can excite bassy room resonances, and the dry component of the voice can all too easily end up being thinned too much by EQ cut before the room tone on the track is sufficiently diminished. And just as with using EQ before noise reduction, every time you fine-tune the EQ you’ll need to readjust the settings in Dereverb again. Always treat room ambience before applying EQ.
DYNAMIC GOES BEFORE STATIC
This last tip is more of a helpful suggestion than a hard and fast rule, but it should nevertheless help you produce better-sounding tracks faster (even in music production). When both static and dynamic EQ are needed on a track, you’ll likely get the best results if you apply dynamic EQ first. By applying dynamic EQ (or multiband compression, its close cousin) to fleeting peaks and troughs in isolated frequency bands, you corral the track into a more or less static spectral response. That makes it much easier to identify what’s consistently wrong with the track’s tone from start to finish. Static EQ finishes the job.
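The idea can be sketched numerically. In this toy model (my own illustration, with invented thresholds, ratios, and band levels, not any plug-in's actual behavior), dynamic EQ attenuates a band only while it spikes, shrinking the band's level spread so that one static cut can then handle what is consistently wrong:

```python
# Hypothetical sketch: dynamic EQ first tames transient band peaks,
# leaving a roughly static response that one constant (static EQ)
# correction can finish. Values in dB, purely illustrative.

def dynamic_band(level_db, threshold_db=-20.0, ratio=3.0):
    """Attenuate a band only while it exceeds the threshold."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A band's level over time: mostly steady, with two harsh flare-ups.
band = [-24.0, -23.0, -8.0, -24.0, -11.0, -23.5]

after_dynamic = [dynamic_band(x) for x in band]
# Flare-ups are pulled toward the pack: -8 -> -16, -11 -> -17,
# so the band's spread shrinks from 16 dB to 8 dB.

# With the response now roughly static, a single constant cut fixes
# the consistent excess (say the band reads about 4 dB too hot):
static_cut = -4.0
after_static = [x + static_cut for x in after_dynamic]
print(max(after_dynamic) - min(after_dynamic))  # smaller than the raw spread
```

The smaller the spread left after the dynamic stage, the easier it is to hear what a static cut should do from start to finish.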
Michael Cooper is a recording, mix, mastering and post-production engineer, and a contributing editor for Mix magazine. You can reach Michael at firstname.lastname@example.org and hear some of his mixes at soundcloud.com/michael-cooper-recording.