What is audio mixing? A brief overview for video people.

A very important part of making a soundtrack for a video is the audio mixing phase, which is the work a sound engineer carries out after a track is recorded or produced, often alongside a producer. Mixing is quite a mysterious subject for people who don't know how music is made, so I thought I would break the subject down a bit and try to demystify this complicated process. In my own work I perform a mixdown for every recorded and produced piece, balancing all of the elements so that each one is audible and sits well amongst the others (hear my audio work here). In basic terms, a mix is the process by which dozens of individual tracks, each containing one sound, are compiled into just two tracks, commonly referred to as stereo, which is what most of us hear when watching a film or video. (There is also 5.1 surround for cinema, and other flavours besides, but that's for a different blog.) During that process, the whole piece of music or audio is balanced and enhanced in preparation for mastering.
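If you like to see things concretely, here's a rough Python sketch of what that summing process boils down to. The track names, gains and pan positions are made-up placeholders, and a real DAW does vastly more than this; it's only here to show the "many tracks in, two channels out" idea:

```python
import numpy as np

def mixdown(tracks, gains, pans):
    """Sum mono tracks into one stereo pair.

    tracks: list of equal-length mono float arrays (-1.0 to 1.0)
    gains:  per-track level, 1.0 = unity
    pans:   per-track position, -1.0 = hard left, +1.0 = hard right
    """
    stereo = np.zeros((len(tracks[0]), 2))
    for track, gain, pan in zip(tracks, gains, pans):
        angle = (pan + 1) * np.pi / 4                 # constant-power pan law
        stereo[:, 0] += track * gain * np.cos(angle)  # left channel
        stereo[:, 1] += track * gain * np.sin(angle)  # right channel
    return stereo

# Three hypothetical one-second "tracks" at 48 kHz
sr = 48_000
t = np.arange(sr) / sr
kick   = np.sin(2 * np.pi * 60 * t)    # low tone standing in for a kick
guitar = np.sin(2 * np.pi * 440 * t)   # mid-range tone
hat    = np.sin(2 * np.pi * 8000 * t)  # bright tone standing in for a hi-hat

mix = mixdown([kick, guitar, hat], gains=[0.9, 0.5, 0.2], pans=[0.0, -0.4, 0.6])
```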

It all begins with recording or production

Audio mixing for a piece of music or an audio soundtrack really begins with the very first recording. First of all, you need something to mix, and whether that's instruments, field recordings, audio FX or samples, this is where the first consideration for the mixdown begins. To put it simply, a badly recorded or produced soundtrack or piece of music will produce a bad mix, because a mix engineer can only work with what they have, so getting a clear sound is key.

When recording an instrument, for example, one thing we take into consideration is the proximity of the microphone to the instrument. If a mix engineer is working on a track where all of the instruments were recorded at a distance from the microphone, they will not have much scope to add acoustics in the later stages of production. A microphone responds much like a human ear: when a sound source is close, it sounds close. When you have presence to work with in a recording, you also have the option to take it away and be more experimental with the space the instruments and voices sit in during the mixing stage, and vice versa.

Audio software and mixing environment

Recording instruments or voiceovers for video (click here to hear some examples of my voiceover work), for example, usually takes place in a studio to minimise the audio artifacts and reflections that make a recording messy and hard to work with. The studio also houses the software used to mix all the elements together, called a Digital Audio Workstation (DAW). Sometimes the track will be mixed in the same studio; other times it moves to a separate production studio for the mixing stage. This is often determined by budget, as larger recording studios cost more and can be unnecessary for the mix, when only one room is needed. One of the most important parts of the mixdown is the listening environment, so before starting anything an engineer will consider whether the room is well acoustically treated, and will ideally have two or three sets of studio monitor speakers at their disposal (at the very minimum, one set of monitors of a high enough specification to reveal the full frequency spectrum in detail).

As well as the mixing environment, the mixing tools are important, and most of the time a good mixing engineer will use third-party audio plugins within the DAW, or in some cases analogue hardware effects processors. This variety of tools (equalisers, compressors, phasers and reverberation units, for example) gives the engineer a wide range of ways to shape the recorded or produced audio. Reverberation, for example, allows a clean recording to be placed into a physical space that matches the filmed scene it belongs to. An equaliser allows the engineer to remove unwanted artifacts from the recorded audio, or even change its tone. A compressor evens out the volume, avoiding sharp audio peaks and making for an easier listening experience.
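To make the equaliser example a little more concrete, here's a minimal sketch of the sort of thing an EQ's high-pass filter does, using Python's SciPy library. The 80 Hz cutoff and the synthetic "dialogue" signal are just illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio, sample_rate, cutoff_hz=80, order=4):
    """Remove low-frequency rumble (mic handling, traffic) below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# e.g. clean up a hypothetical dialogue recording at 48 kHz
sr = 48_000
t = np.arange(sr) / sr
dialogue = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)  # voice + rumble
cleaned = highpass(dialogue, sr)
```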

Audio mixing techniques

When it comes to mixing a full track, there is a plethora of techniques a mixing engineer will use to get a song or recording sounding loud, well balanced and full of life; too many to mention in a short blog post. Those techniques are nothing without the preparation that comes before them, so before stepping into the mixdown phase it's important to be well prepared. Part of that involves arranging a recording session or produced track into something clear and easy to follow. It can involve colour-coding the tracks so the layout is clear, and renaming them so it's completely obvious what the engineer is mixing. It also means fixing timing and pitch issues, as these complicate the mixdown and make the techniques applied less effective.

A good engineer will approach the mix in groups, starting by looking at the whole recorded or produced piece of work in major sections: all the drums routed to one group, all the guitars to another, and the atmospheres, sound FX, vocals, basses and so on each to their own group. This not only helps the engineer visually with the arrangement, but also allows them to work on each group separately and create a good balance between the groups as well as across the whole mix.
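As a rough illustration of that routing idea, here's a small Python sketch where hypothetical tracks are summed into group buses, and each whole group then gets a single balance control (all the track and group names are invented for the example):

```python
import numpy as np

# Hypothetical routing: track name -> group (bus) name
routing = {
    "kick": "drums", "snare": "drums", "hats": "drums",
    "gtr_L": "guitars", "gtr_R": "guitars",
    "lead_vox": "vocals", "harmony": "vocals",
}

# Per-group level set once, instead of nudging every track individually
group_gain = {"drums": 0.8, "guitars": 0.6, "vocals": 1.0}

def mix_by_groups(tracks, routing, group_gain):
    """tracks: dict of track name -> mono numpy array, all equal length."""
    n = len(next(iter(tracks.values())))
    buses = {g: np.zeros(n) for g in group_gain}
    for name, audio in tracks.items():
        buses[routing[name]] += audio      # sum each track into its bus
    master = np.zeros(n)
    for g, bus in buses.items():
        master += bus * group_gain[g]      # balance whole groups at once
    return master

# e.g. tracks = {name: np.zeros(48_000) for name in routing}
#      master = mix_by_groups(tracks, routing, group_gain)
```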

Balance

After mixing techniques have been employed and your mixing engineer has created a great sound across all groups, a good balance is needed among all the individual tracks and groups in the mix project. Certain song elements have a dedicated place in the song's final balance, and a good mix engineer will be able to balance multiple tracks to get it just right. Here's one example: because the human ear is much more sensitive to high-frequency audio than to low-frequency content, a mix engineer should balance a track so that instruments with these qualities sit in the correct place in the mix. A bass guitar, for instance, carries far more low-frequency content than high, so it should be louder in the mix than a ride cymbal or hi-hat from a drum kit, which should sit much lower. Although these instruments will sound as if they are at a similar level in the final balance, their measured levels won't actually be the same. This accommodates the listener's natural frequency sensitivity in the ears and the brain.
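If you want to put numbers on that sensitivity, the A-weighting curve is one standard approximation of how the ear's response changes with frequency. Here's a short Python sketch of it; the constants come from the published curve, while the frequencies checked are just examples:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting: roughly how much quieter the ear perceives
    a tone at frequency f compared with the same level at 1 kHz."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (63, 100, 1000, 3000, 10000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+6.1f} dB")
# Roughly -26 dB at 63 Hz versus about +1 dB at 3 kHz: the low end
# needs far more energy to be perceived at the same loudness.
```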

Ready for mastering

Mastering is the final stage of a track's production, and it is very often a job taken on by a separate engineer, called a mastering engineer. Their job is to take the final mixdown and process the track's loudness and tone so that it can be played on the radio, in a club, in the cinema or on TV. Not only does mastering require a very different skill set, it also requires very different equipment. Mastering engineers don't work with 20 to 100 tracks either; they will often work with just two. Their role is very important because it's the very final stage of a track's production, so the end listener hears their work just as much as the mix engineer's.

A mastering engineer's ultimate goal is to balance loudness, which is measured in LUFS (short for "Loudness Units relative to Full Scale"), a standard measurement of the average volume of audio over time. Each audio provider, for example a TV broadcaster, will have a standard LUFS level at which they play all their audio. For TV broadcast, the level used is very often -23 LUFS. This ensures that the end listener hears no major volume jumps between TV shows or audio and music.
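As an illustration, here's roughly how you could measure and correct a mix's loudness in Python, assuming the open-source pyloudnorm and soundfile packages are installed (the file names are placeholders):

```python
import soundfile as sf      # pip install soundfile
import pyloudnorm as pyln   # pip install pyloudnorm

# "final_mix.wav" is a hypothetical stereo mixdown
data, rate = sf.read("final_mix.wav")

meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # e.g. -17.3 LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Pull the programme down to the EBU R128 broadcast target of -23 LUFS
broadcast_ready = pyln.normalize.loudness(data, loudness, -23.0)
sf.write("final_mix_-23LUFS.wav", broadcast_ready, rate)
```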

I hope you found this simple breakdown informative and even interesting. If you're looking for an audio professional to work with recorded audio for your film or video project, please feel free to reach out with any questions you might have. Here's how you can reach me: by email at hello@joe.london, or via my contact form on this site.

This blog post is an adaptation of an article I wrote for my client at Cutting Factory, and has been adapted to fit my website with my client's permission.

Joe London
Loudness and its importance for your music.

What is loudness?

As someone who has engineered for almost ten years, one of the hardest concepts in sound engineering for me to grasp early on was that volume (or level) and loudness are two different things. Volume is something you can attach a numerical value to, whether it's 1–30 on a car stereo or -2.5 dBFS on a DAW channel. Loudness is something different: it's perceived volume. For example, you can have two mastered tracks, both with peak levels of -0.4 dBFS, yet one can appear louder than the other (loudness can be approximated by RMS, but RMS alone doesn't give you the full picture). This is because you're perceiving differences in the way these tracks have been mixed: a well-balanced mix will seem louder because of its frequency content, balance and stereo image.
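Here's a tiny Python experiment that makes the distinction concrete: two signals with identical peak levels but very different RMS levels. (The signals themselves are artificial examples, and as noted above, even RMS only approximates loudness.)

```python
import numpy as np

sr = 48_000
t = np.arange(sr) / sr

# Two signals with the SAME peak level...
sparse = np.zeros(sr)
sparse[::4800] = 0.95                        # short clicks, peaking at -0.4 dBFS
dense = 0.95 * np.sin(2 * np.pi * 440 * t)   # sustained tone, same peak

for name, x in (("clicks", sparse), ("tone", dense)):
    peak_db = 20 * np.log10(np.max(np.abs(x)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(x**2)))
    print(f"{name:>6}: peak {peak_db:+.1f} dBFS, RMS {rms_db:+.1f} dBFS")

# Both peak at -0.4 dBFS, but the tone's RMS is far higher,
# so it sounds much louder.
```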


In the current music market, the loudness of a song has a real impact on how your music is received by your audience. Musicians and engineers have spent the last 30 years competing for the loudest record, in what's known in the industry as "The Loudness War". As somebody who produces and engineers music, I can say that the loudness war is still very much present, with engineers pushing the perceived level of music to extreme levels. Across all genres the same rule applies: if your record isn't loud enough, it won't have an impact on the listener, won't get played by DJs and won't get picked up by record labels. Mastering is also very important for loudness, but you need a stand-out mix for mastering to produce the desired results.


What produces loudness?


There is no single answer to this, as loudness is the result of combining a few different techniques. One way to increase the perceived loudness of a record is to widen the stereo image of your mix, especially in the higher frequencies; that is, to increase the differences between the left and right channels. This gives the sense that your mix is louder without increasing the overall peak value. Plugins like the SSL Fusion Stereo Image or StereoSavage can do this.
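Under the hood, widening usually comes down to mid/side processing: split the stereo signal into what the two channels share (the mid) and what differs between them (the side), then turn the side up. Here's a bare-bones Python sketch of that idea; commercial plugins like the ones above do this far more cleverly, often only in selected frequency bands:

```python
import numpy as np

def widen(stereo, width=1.4):
    """Mid/side widening: scale the side (L-R difference) signal.

    stereo: array of shape (n, 2); width > 1.0 widens, < 1.0 narrows.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2           # what L and R share
    side = (left - right) / 2          # what differs between L and R
    side *= width                      # more difference = wider image
    return np.stack([mid + side, mid - side], axis=1)
```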

Compression reduces the dynamic range of the tracks in your mix, working by bringing the quieter elements of a track closer in volume to the louder ones. This evens out the level of the track and lets you hear its quieter parts. Plugins like the FabFilter Pro-C 2 are my go-to compressors for all-round compression, or if you're looking for something with a bit more character that introduces some subtle harmonics, try the PSP Vintage Warmer 2.
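For the curious, here's a stripped-down Python sketch of what a compressor like these is doing internally. The threshold, ratio and timing values are arbitrary examples, and real plugins add look-ahead, knee curves and character of their own:

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    """Feed-forward compressor sketch: above the threshold, every
    `ratio` dB of extra input produces only 1 dB more output."""
    eps = 1e-10
    level_db = 20 * np.log10(np.abs(x) + eps)

    # Static gain curve: how many dB to turn down at each sample
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)

    # Smooth the gain so it reacts with an attack and a release time
    smoothed = np.empty_like(gain_db)
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    g = 0.0
    for i, target in enumerate(gain_db):
        coeff = attack if target < g else release  # clamp fast, recover slowly
        g = coeff * g + (1.0 - coeff) * target
        smoothed[i] = g
    return x * 10 ** (smoothed / 20.0)
```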