Some Recording/Mastering Best Practices

***** UPDATED IN 2023 *****

Though I plan to cover many different facets of music on this blog, I thought I’d kick things off with a post that’s focused on the audio-engineering side of the spectrum. (My next post will likely be more directly focused on the subject of music itself.)

Since ending my long professional/technical career in October of last year, I’ve been focusing on making new music while also diving very intensely into the finer arts of mixing and mastering audio. As I’ve become quite active online answering questions and assisting my fellow musicians and audio engineers, I thought I’d go ahead and share some of the best practices I’ve chosen to adopt in hopes they might help some other folks along the way.

With this in mind, here are some of the current standards I try to leverage while recording, mixing, and mastering audio in my DAW (REAPER in my case, but everything here should translate to other modern DAWs as well):
  1. Project sample rate:
    • For new projects: 48 kHz
    • For projects created by others, I will generally match the sample rate of the content provided (with any sample-rate conversion deferred to mastering)
  2. Audio tracking specifications:
    • Media type: WAVE (uncompressed)
    • Bit depth: 24 bits
    • Target recording levels (when using 24-bit bit depth as recommended above):
      • Average: -18 to -20 dBFS
      • Peaks: -10 dBFS or under
  3. When rendering/bouncing a mix to send off for mastering:
    • Media type: WAVE (uncompressed)
    • Bit depth: 32-bit floating point
  4. Target output levels when performing mastering work:
    • Peaks: -1 dBTP (true peak) or under
      • Exception: -2 dBTP ceiling for Spotify if program material exceeds approximately -14 LUFS (integrated)
    • Loudness: Primarily by ear but with the maximum short-term loudness generally measuring no higher than -9 to -10 LUFS
    • Side note on dithering: When rendering your masters—either to 24- or especially 16-bit WAVE (or FLAC) files—be sure to leverage appropriate dithering/noise shaping options
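As a concrete illustration of the dithering side note above, here’s a minimal sketch of TPDF (triangular probability density function) dithering in pure Python. In practice you’d simply enable your DAW’s dithering option when rendering; this is only to show the underlying mechanics, and the function name and parameters are my own:

```python
import math
import random

random.seed(0)

def quantize_16bit_tpdf(samples):
    """Quantize float samples in [-1.0, 1.0] to 16-bit integers with TPDF dither.

    TPDF dither is the sum of two independent uniform random values, each
    spanning half an LSB, added before rounding so that the quantization
    error is decorrelated from the signal (instead of becoming distortion).
    """
    scale = 2 ** 15  # 16-bit signed full scale
    out = []
    for s in samples:
        dither = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
        q = round(s * scale + dither)
        out.append(max(-scale, min(scale - 1, q)))  # clamp to the int16 range
    return out

# A very quiet 440 Hz tone: exactly the kind of low-level material where
# undithered truncation produces audibly correlated quantization error
samples = [0.001 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]
dithered = quantize_16bit_tpdf(samples)
```

Note that dithering only matters when reducing bit depth (e.g., rendering a 32-bit float mix to 16- or 24-bit files); it should be applied exactly once, as the very last step.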
Though experts don’t always agree on the finer details, I believe you’ll find these recommendations to be within the bounds of general consensus. If you’re simply looking for the data itself, please feel free to stop here—and I hope you’ve found this guidance to be helpful.

For those who are interested in the underlying reasoning behind my recommendations, however, I’m happy to share the following details (corresponding with the bullet items listed above):
  1. Project sample rate:
    • 44.1 kHz has been the established standard for digital audio, starting with the advent of the compact disc but maintained in the present day by the majority of major music streaming services (though some have been starting to talk about audiophile options as high as 192 kHz).
    • 48 kHz has long been the audio standard in video production and, while I used to recommend it only for projects destined for video, it is now my preferred sample rate for all new projects.
    • While sample-rate conversions can be performed, they should be avoided whenever possible as the process can add undesirable artifacts to the results. However, if this conversion ends up being required, it is something best handled at the very end of the process—ideally by the mastering engineer (who may also leverage analog gear in their signal path anyway).
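To see why sample-rate conversion is nontrivial, note that going from 48 kHz to 44.1 kHz is not a simple integer operation: a rational resampler must upsample, low-pass filter, and then downsample by the reduced ratio of the two rates. A quick sketch (the function name is my own) shows what that ratio works out to:

```python
from math import gcd

def resample_ratio(src_rate, dst_rate):
    """Reduce a sample-rate conversion to its simplest up/down integer ratio.

    A rational resampler upsamples by the first factor, applies a low-pass
    (anti-imaging/anti-aliasing) filter, then downsamples by the second.
    """
    g = gcd(src_rate, dst_rate)
    return dst_rate // g, src_rate // g

up, down = resample_ratio(48000, 44100)
print(up, down)  # 147 160
```

That awkward 147:160 ratio (versus a trivial 1:2 for, say, 96 kHz to 48 kHz) is part of why the quality of the filtering in the SRC algorithm matters so much.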
  2. Audio tracking specifications: Standardizing on 24-bit WAVE files for recording your tracks provides an optimal “sweet spot” with plenty of dynamic range/headroom while still keeping corresponding file sizes and processing resources down to reasonable levels. (24-bit audio resolution provides 144 dB of dynamic range, whereas 16-bit only offers 96 dB. Furthermore, as you want to avoid digital clipping if at all possible, some of this dynamic range needs to be sacrificed to allow for ample headroom.) Assuming you’re using a modern DAW, all internal audio processing should be handled at the 32-bit floating-point level or better—and the resolution of the tracks you record will represent the quality of the source audio as the entry point for this higher-order processing.
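The dynamic-range figures quoted above come straight from the standard fixed-point PCM formula, roughly 6.02 dB per bit. A quick check (the function name is my own):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of fixed-point PCM audio.

    Each bit doubles the number of representable levels, adding
    20*log10(2) ~= 6.02 dB of range.
    """
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```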
  3. When rendering/bouncing a mix for mastering: I recommend outputting to 32-bit floating point WAVE files for several reasons:
    • This format allows for a whopping 1528 dB of dynamic range, which is far more than you’ll ever need, but it also provides for optimal processing of the lowest-order bits in your DAW while avoiding any pitfalls associated with truncating, dithering, and/or noise-shaping mixes that are rendered/bounced to 24-bit (fixed-point) WAVE files.
    • Though I’d recommend avoiding having your mixes peak (including intersample peaking) over 0 dBFS as a rule, the floating-point component allows for values over 0 dB to be represented accurately, without clipping. This provides a helpful safety net should there be issues with offending peaks that might otherwise inadvertently sneak through.
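The floating-point “safety net” described above can be demonstrated with a toy conversion function (mine, for illustration only): a peak above full scale is flattened the moment it hits fixed-point, but in a float file it survives intact and can be recovered with a simple gain trim.

```python
def to_int16(sample):
    """Convert a float sample (full scale = 1.0) to 16-bit fixed point,
    clipping at the rails, as any fixed-point render must."""
    q = round(sample * 32767)
    return max(-32768, min(32767, q))

hot_peak = 1.25  # an overshoot above 0 dBFS that snuck into the mix

clipped = to_int16(hot_peak)          # fixed point has nowhere to put it
recovered = to_int16(hot_peak * 0.5)  # the float value survives a later gain trim

print(clipped)    # 32767 -- flattened at full scale, waveform destroyed
print(recovered)  # 20479 -- round(1.25 * 0.5 * 32767), waveform preserved
```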
  4. Target output levels and dithering when mastering:
    • Though quite a few mastering engineers seem to favor outputting their work with a peak threshold that’s closer to 0 dBFS (sometimes setting a ceiling as high as -0.1 dBFS), studies have shown that lossy compression can add as much as 1 dB of extra true-peak gain when compared to its uncompressed source. Additionally, some streaming services already recommend this -1 dBTP limit (for this very reason). As most of today’s music will be converted for lossy playback at some point or another anyway, why not embed the applicable headroom into your master at the outset?
      • In regard to the Spotify-specific footnote, please have a look at this reference.
    • Thankfully, the “loudness wars” generally seem to be coming to an end. For better or worse, audio streaming is the predominant method for listening these days and the major streaming services all seem to have adopted at least some level of loudness normalization—with -14 LUFS (integrated) being the most popular target level at the moment. That being said, much of the music I produce, mix, and/or master is dynamic in nature and I’d prefer to focus on ensuring the loudest sections of the music gel with one another, as opposed to worrying about overall averages. Additionally, the further you compress and limit a master, the more you are likely to sacrifice musicality, nuance, and dynamic range in the process. It’s all a balancing act, of course, but why not favor sonic purity over raw (perceived) volume?
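To make the normalization math concrete: a streaming service simply applies the dB difference between its target and your master’s measured integrated loudness. A sketch (function names are my own; the measured value would come from a loudness meter):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain (in dB) a loudness-normalizing service applies to hit its target.

    Negative means the track is turned down, so loudness squeezed out of a
    hot master is effectively given right back as attenuation on playback.
    """
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10 ** (db / 20)

gain = normalization_gain_db(-9.0)   # a loud master at -9 LUFS integrated
print(gain)                          # -5.0 dB of attenuation
print(round(db_to_linear(gain), 3))  # ~0.562x amplitude
```

This is exactly why chasing extra loudness buys little on streaming platforms: a master crushed to -9 LUFS is simply turned down about 5 dB, keeping only the reduced dynamics.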
If you’ve made it all the way to the end of this post, I hope you found these explanations to be of additional value. Should you have any questions, or possibly a differing opinion, please don’t hesitate to comment below. Thanks for reading and I wish you all the best with your recording, mixing, and mastering!

--------------------

*October 2023 update: As mainstream sample-rate conversion (SRC) algorithms have improved dramatically over the past few years, I now recommend using a 48 kHz sample rate for all new projects—even when the material is destined for release on compact disc. My recommendation for making the final conversion to 44.1 kHz (for CD release) at the end of the mastering process—whether performed yourself or by a professional mastering engineer—remains intact. If performing the conversion yourself, please check this great SRC-conversion algorithm quality reference to see how your DAW performs in this area. Additionally, you can always use this free industry-standard stand-alone conversion software (Windows only) to make the final conversions.
