Audio normalization is a process used in audio production to adjust the volume of a recording to a consistent and optimized level so that your podcast listeners or YouTube viewers can enjoy a wonderful experience.
And while it might sound boring, normalization is actually a powerful automation tool for your workflow!
In this blog you'll learn everything you need to know about audio normalization.
You’ll learn:
- What audio normalization is (and what it isn’t)
- The difference between loudness normalization and peak normalization
- When to normalize your podcast audio, and when to leave it alone
- How to normalize manually in a DAW or automatically with a tool like Resound

Here’s a short list of key takeaways:
- Normalization adjusts the volume of your entire audio file to a target level without changing its tone.
- Loudness normalization, measured in LUFS, accounts for how humans perceive volume and is the most relevant type for podcasting.
- Peak normalization scales your audio so its loudest moment hits a target level, which helps prevent clipping.
- Normalize during editing, after a clean recording, so volume stays consistent across segments, speakers, and transitions.
With that out of the way, let’s dive into the details.
Audio normalization is a process used in audio production to adjust the volume levels of a recording to a consistent and optimal level. In other words, it’s a process that raises (or lowers) the volume of your entire audio file so that it reaches a target, or normal, level. It does NOT change the tone of your audio at all but simply changes the volume of the file.
Typically, people use normalization to turn up the volume of their podcast audio without accidentally creating clipping, the distorted sound that happens when digital audio goes above the maximum allowable level (0 decibels full scale, or 0 dBFS).
Normalization can be done on a single track (your cohost’s voice) or on an entire master file (multiple speakers, music, sound effects). When combined with other audio processing techniques like compression and equalization (EQ), audio normalization can make your podcast sound truly professional.
There are a couple of main reasons to normalize your podcast audio.
1. To match the volume of different segments like music, ads, and a monologue
2. To match the volume across separate audio tracks
3. To increase the volume of a really quiet audio recording and quickly prepare it for other steps in the mixing and mastering process
There are two types of normalization: loudness normalization and peak normalization. Loudness normalization is most relevant for podcasting, so we’ll talk about that first.
Loudness normalization in audio refers to the process of adjusting the perceived loudness of audio content to a specified target level, typically measured in LUFS (Loudness Units relative to Full Scale) rather than plain dBFS (decibels relative to full scale), which only measures signal level. In other words, this method takes into account how loud audio sounds to a human ear, rather than simply scaling the signal to hit a specific decibel reading.
The primary goal of loudness normalization is to ensure that your audio plays back at a consistent and comfortable volume level for the listener.
Want to learn more about what LUFS are and how loudness is measured? Read our guide on LUFS here.
Here are the key aspects of loudness normalization:
- Measurement: loudness is measured with a meter that models human hearing (the ITU-R BS.1770 standard), and the result is reported in LUFS.
- Target level: you choose a target, such as -16 LUFS, a common recommendation for podcasts.
- Gain adjustment: the difference between the measured loudness and the target is applied as a single gain change across the entire file.
- Peak protection: if that gain change would push peaks toward clipping, a limiter keeps the true peak under a ceiling like -1 dB.

Loudness normalization ensures that audio content plays back consistently, making it easier for listeners to enjoy content without constantly adjusting their volume controls.
It’s also critical for maintaining audio quality and for meeting the loudness standards of platforms like Apple Podcasts, Spotify, and YouTube.
A quick note about LUFS: Loudness Units relative to Full Scale is a term you’ll hear in a few different contexts. At the end of the day, LUFS is simply a method of measuring how loud audio is. It’s used when normalizing audio (if you’re doing the loudness normalization described above), when mastering your final audio file (to make sure it’s at the optimal loudness for each platform), and even inside apps like Spotify, Apple Podcasts, and YouTube to manage the average playback volume of the many different audio sources uploaded there.
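If you’re curious what loudness normalization looks like under the hood, here’s a minimal sketch in Python. It assumes you have the soundfile and pyloudnorm packages installed; the filename and the -16 LUFS target are just example values:

```python
import soundfile as sf
import pyloudnorm as pyln

# Load the recording as floating-point samples
data, rate = sf.read("episode.wav")

# Measure integrated loudness with a BS.1770 meter
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

# Apply one gain change so the whole file hits -16 LUFS
normalized = pyln.normalize.loudness(data, loudness, -16.0)

sf.write("episode_normalized.wav", normalized, rate)
```

Note that this simple gain change doesn’t include a limiter, so if the file needs a big boost, peaks could still clip; dedicated tools add true peak limiting on top.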
Peak normalization is a process used to adjust the volume level of an audio recording so that the highest peak amplitude (the loudest moment) reaches a specified target level. Unlike loudness normalization, it pays no attention to how loud the file actually sounds to a human ear.
Translation? The loudest part of your podcast sets the adjustment: the entire file is turned up (or down) by exactly the amount needed to place that peak at the volume you choose, so the balance between the loud and quiet parts stays the same.
This process is primarily concerned with preventing distortion and ensuring that the audio doesn't exceed a certain maximum level.
Here's a breakdown of how peak normalization works:
1. Scan the entire file to find the single highest peak.
2. Calculate the difference between that peak and your target level (for example, -1 dBFS).
3. Apply that difference as one gain change to the whole file, so the loudest moment lands exactly at the target.
Peak normalization is most useful for preventing distortion in recorded material.
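As a rough sketch of those steps in Python (again assuming the soundfile package and a hypothetical episode.wav; this measures the sample peak, whereas professional tools also check the oversampled true peak):

```python
import numpy as np
import soundfile as sf

# Load the recording as floats in the range [-1.0, 1.0]
data, rate = sf.read("episode.wav")

target_db = -1.0                        # desired peak level in dBFS
peak = np.max(np.abs(data))             # step 1: find the highest peak
target_linear = 10 ** (target_db / 20)  # step 2: convert the dBFS target
gain = target_linear / peak             #         into a linear gain factor

# Step 3: apply one gain change to the whole file
sf.write("episode_peak_normalized.wav", data * gain, rate)
```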
A great sounding podcast starts with a great recording. And the same is true for normalization: Great normalization starts at the source.
Normalizing audio is not exclusively something you do when editing the podcast. You can guide your podcast guests on how to set up their equipment before recording so you capture good audio in the first place.
For example, we wrote a guide on how to record a podcast that shows you the best tools and techniques. If you follow that list of recommendations and try not to move around too much, you’ll already reduce the amount of normalization you need to do afterwards. After recording, in your editing process, you should always normalize the audio of the entire episode. This ensures that you don’t have sudden volume variations between segments or during transitions.
Remember that the specific timing and approach to audio normalization may vary depending on your podcast's style, content, and production workflow.
Just because a tool’s in your box doesn’t mean that you should use it.
While audio normalization is a valuable tool in podcast production, there are certain situations when you might want to avoid or limit its use.
Below are some scenarios when you should not normalize your audio:
- When dynamic range is an intentional part of the experience, such as dramatic storytelling, audio drama, or live music performances
- When the audio has already been professionally mixed and mastered to a target loudness
- When the recording is very quiet and noisy, since boosting it will raise the background noise right along with the voices
In these situations, it's essential to strike a balance between consistency and artistic intent.
Ultimately, the decision on whether to normalize audio in your podcast should align with your podcast's style, content, and goals. It's a creative choice that should serve the best interests of your podcast and its intended audience.
There are two ways you can perform audio normalization: manually or automatically through a powerful podcast editor like Resound.
Many DAWs have built-in normalization features that can serve as a starting point. For example, you can normalize your audio with Audacity or Logic Pro out of the box. But you can also use proper gain staging (setting levels), compression, and even clip gain adjustments to achieve a balanced mix. There’s more than one tool in your toolbox if you’re using a DAW, but you’ll need to learn how they work together.
Once you normalize your individual tracks, though, you also need to make sure your master output is set to the proper loudness, which we discuss in this article on LUFS.
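If you want to spot-check your master outside a DAW, here’s a minimal sketch using the same assumed packages and a hypothetical final_mix.wav (note it reads the sample peak; a true peak meter oversamples first):

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")

# Integrated loudness of the whole mix, in LUFS
loudness = pyln.Meter(rate).integrated_loudness(data)

# Sample peak in dBFS (the true peak on playback can read
# slightly higher than this)
peak_db = 20 * np.log10(np.max(np.abs(data)))

print(f"Integrated loudness: {loudness:.1f} LUFS (aim for about -16)")
print(f"Sample peak: {peak_db:.1f} dBFS (keep the true peak below -1)")
```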
Finding, downloading, and learning how to use a Digital Audio Workstation (DAW) can be daunting. Thankfully, there’s a much easier way…
Resound makes it as easy as a click on your laptop to normalize your podcast. All you have to do is create a free account, upload your audio files, and turn on Enhance to automatically normalize and master to -16 LUFS and a true peak level of -1 dB (the levels recommended by Apple Podcasts). Then you can review, export, and share your normalized file with the world!
Resound Enhance is powered by AI and runs your audio through a series of audio algorithms and signal processes that will remove background noise, make your dialogue crisp and clear, AND normalize and set the proper loudness.