So where do you go now? You had a song. You recorded all the parts. You even added some overdubs and vocal harmonies. Now all your tracks sit there in your software, waiting for your next move.
You managed to record all your tracks pretty decently and they sound good, each in their own right. But when everything plays back at once, it just seems overwhelming. All your tracks are competing for the same space. Some instruments are louder than others, and they all seem to be standing in the same place, shouting for your attention. It’s time to start mixing.
But where do you start? Where do you go? Where does everything belong? In the following Basix tutorial I’m going to go into the absolute basics of mixing a song. Nothing fancy, just the building blocks of mix engineering.
Let’s try a construction analogy. If you want to build a house, you don’t start with the fancy chandeliers and swanky kitchen interior. You start digging out a foundation. If you are an architect, you probably don’t start off by designing the Sagrada Família; you start off by designing simple houses and structures (and if you actually are an architect, I welcome you to your new hobby…). Similarly, with mixing, we don’t start off with fancy reverb tricks or complex side-chain triggering. We start off with the Basix.
Pareto’s principle is as follows: 20% of the work results in 80% of the outcome. Sounds pretty nice, doesn’t it? What this means is that if we follow just a few simple mixing guidelines, and provided the tracks were recorded well, we can get very close to a finished mix without having to resort to complex modulation effects and automation.
I’m not saying that you shouldn’t familiarize yourself with the more advanced mixing techniques out there, but for the purpose of starting a mix we don’t want to think about them. It’s like having too much on your to-do list: because you don’t know which task to start on, you end up doing none of them. By starting with an idea of where you want to go, it’ll be easier to get there. Begin with the end in mind; follow a path instead of getting lost in the woods.
And What Are the Basics Actually?
Levels – Volume, mate. Volume. Some things belong in the background, some things stay at the forefront.
Panning – Stereo anyone? I know everybody will say that you should check your mix in mono and all that, but that doesn’t mean stereo panning isn’t hugely important for spreading things out.
Compression – Here is a visual, text-based representation of a sentence that could use some compression. If I wErE tO wRiTe WitH TOO mUcH DYNaMic RaNge It WOULD be HarD To rEAd. So by compressing certain parts we keep them at a steady level that doesn’t waver too much in dynamic range. Of course, some instruments need less compression than others, but understanding compression and using it correctly is key.
EQ – Problematic frequency ranges, dull instruments, tinny guitars and boxy drums. This can all be fixed with EQ. It is the No. 1 mixing tool you use to sculpt your mixes.
Depth – If you are a one-man band, you can play a multitude of instruments in the exact same place. If you are a band with multiple instruments and players, they are going to be scattered around a room. There is a reason drums are usually positioned behind the singer at a concert: the drums aren’t supposed to be at the front. By using processors such as reverb and delay we can create depth and space in a mix, making it sound like there is actually a band spread out on stage.
Those are the basics: the mixing techniques and concepts you need to be most familiar with in order to succeed as a mixing engineer. Many of these concepts intertwine and can be combined to create more complex mixing tricks, but understanding each one helps you see how they interact with each other.
Basic Misconceptions and Questions
1. Basic Levels in the Digital World
On a side note, before you start mixing at all, be sure to check your levels regularly. If you are mixing something that will ultimately make it to a mastering engineer, be sure to allow for sufficient headroom on the master buss. Start with all your faders pulled down, and as you bring up the individual tracks, keep the fader levels low and turn up the master volume of your interface or monitors instead.
If you start mixing and down the line you see your master buss peaking with no headroom left, select all the faders and lower them together until the master buss shows normal levels. You want the peaks to always stay below -3dB, and lower is better. Put a level meter of some sort on the master buss so you can watch both the peak level and the average RMS level.
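If you are curious what those meters are actually reporting, here is a minimal Python sketch (assuming numpy and audio stored as floating-point samples between -1.0 and 1.0, the convention most audio software uses internally):

```python
import numpy as np

def peak_and_rms_dbfs(samples):
    """Return (peak, rms) in dBFS for a buffer of floats in -1.0..1.0."""
    peak = np.max(np.abs(samples))        # loudest single sample
    rms = np.sqrt(np.mean(samples ** 2))  # average energy
    to_db = lambda x: 20 * np.log10(x) if x > 0 else float("-inf")
    return to_db(peak), to_db(rms)

# Example: a sine wave peaking at half of full scale
t = np.linspace(0, 1, 44100, endpoint=False)
mix = 0.5 * np.sin(2 * np.pi * 440 * t)
peak_db, rms_db = peak_and_rms_dbfs(mix)
print(f"peak: {peak_db:.1f} dBFS, rms: {rms_db:.1f} dBFS")  # peak: -6.0, rms: -9.0
```

A dedicated metering plug-in does this far better, with proper ballistics and weighting; the point is simply that peak tracks the loudest instant while RMS tracks the average level.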
2. Levels of the Drum Kit
For example, let’s start with the drum kit. We want the kick drum nice and juicy so we throw the fader all the way up and go to work on it. Chomp, chomp. A little bass boost there, some EQ, that compression ratio is good and bam, we’ve got it! Nice and punchy kick drum. Now let’s add the snare, then the bass, and since we need some space around the drum kit we add in the overheads. Ah… now the kick drum sounds a little buried. We’ll just push the fader up a little more, that’s it. Much better.
Now we add the guitars and vocals and keyboard stabs and suddenly our bass guitar needs more volume so we add more gain to the compressor. Ah… but now it’s getting in the way of the kick drum, so we need to add more volume to that fader. But wait – we can’t push it up any further. What should we do now?
Mixing isn’t about adding more volume to each element until you’ve balanced them off. It’s about balancing the volume of each instrument within a certain volume range. You don’t have an endless amount of headroom in your software, so rather than pushing up the faders until they can’t go any farther, mentally define a specific area of the faders where they need to be balanced.
Like I said before, aim for the master buss to peak at -3dB at most, which means you need to mix the rest of the instruments much lower on their own respective faders. It’s a simple math problem: ten instruments at a high volume add up to a really hot master fader, but ten instruments with the same balance at a lower level add up to a manageable one.
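To put rough numbers on that, here is a back-of-the-envelope Python sketch of the worst case, where all ten peaks happen to line up at the same instant (in a real mix they rarely do, but the arithmetic shows why hot faders pile up on the master buss):

```python
import numpy as np

def db_to_gain(db):
    return 10 ** (db / 20)

n_tracks = 10

def summed_peak_db(per_track_peak_db):
    # Worst case: every track hits its peak at the same sample
    return 20 * np.log10(n_tracks * db_to_gain(per_track_peak_db))

print(summed_peak_db(-2.0))   # +18.0 dBFS -> slams the master buss
print(summed_peak_db(-22.0))  # -2.0 dBFS -> same balance, plenty of headroom
```

Same ten tracks, same relative balance; only the fader levels changed.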
3. Who Are the Middle Men?
When panning, some things belong in the middle, and some things can be spread out across the stereo spectrum. Anchor elements such as the lead vocal, snare, bass and kick drum should usually sit squarely in the middle, tying all the other elements together.
The snare drum is the defining element of the drum kit, supplying the steady backbeat. Therefore you want all that energy to be in the middle, driving the song forward.
The main vocal is the center of attention, and thus must be placed in the center in order to shine as much as possible.
The kick drum and bass are foundation instruments that give the track a solid grounding, and by placing them in the center you lay the groundwork for all the other instruments on top, to the sides and up above.
4. Bass-ic Panning
The reason the bass drum and bass guitar are usually panned to the center is their low frequency content. Because they carry so much power in the bass frequency range, panning them out of the center can easily confuse the perceived phase and placement of those instruments.
If one speaker has more bass information than the other, it can easily screw up your low end. That’s why it’s usually a good idea to anchor them in the middle, giving each speaker the same amount of bass to work with. Also, since it’s hard to pinpoint where low frequencies are coming from, it’s best to keep them in the middle.
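For what it’s worth, this centering falls out of the pan law itself. Here is a Python sketch of a standard equal-power pan (the exact law varies between DAWs, so treat the numbers as illustrative):

```python
import numpy as np

def equal_power_pan(mono, pan):
    """Pan a mono signal; pan runs from -1.0 (hard left) to +1.0 (hard right).
    An equal-power (sin/cos) law keeps total energy constant across the field."""
    angle = (pan + 1) * np.pi / 4  # map -1..1 onto 0..pi/2
    return np.cos(angle) * mono, np.sin(angle) * mono

kick = np.random.default_rng(1).uniform(-1, 1, 1024)  # stand-in for a kick track
left, right = equal_power_pan(kick, 0.0)
# pan = 0 scales both sides by cos(pi/4) ~ 0.707, i.e. -3dB each:
# both speakers carry exactly half of the low-end workload
```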
5. Why EQ? – Or EQ for EQ’s Sake
Like I said before, EQ is your greatest friend. EQ is to the engineer what the paintbrush is to the painter and is the most crucial tool to shaping the way you want your mix to sound.
However, many beginners slap an EQ on everything and twiddle the knobs aimlessly, thinking that just because there is some EQ on the mix it must sound better. Believe me, I used to be this guy, looking at my audio program and fiddling with all the Hertz, thinking I was making it sound better when in reality I didn’t have a clue.
Even though you can experiment with various cuts, boosts and frequency ranges, EQ is best used when you have a basic understanding of where you are going. Listen to each instrument or sound objectively and think about what it might need. Does it lack punch, or is it ill-defined and bassy?
Before you even slap an EQ plug-in on your track, think about what you want the finished track to sound like. There are certain frequency areas that have specific sounds. I’ve gone into this in detail before, and you can read about recognizing frequencies or using the audio equalizer for more information.
Bottom line? Don’t EQ for EQ’s sake. Although it’s great to experiment, I do feel that a general understanding of the inherent characteristics of frequencies is vital for efficiently EQing your tracks.
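As an aside, if you are curious what a single EQ band actually does under the hood, here is a Python sketch of a peaking filter using the widely published RBJ “Audio EQ Cookbook” biquad formulas (assuming numpy and scipy; your DAW’s EQ will be a more refined version of the same idea):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, freq, gain_db, q=1.0):
    """One peaking EQ band (RBJ biquad): boost or cut centered on `freq` Hz."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Example: tame a "boxy" drum track with a 3dB cut around 400 Hz
fs = 44100
drums = np.random.default_rng(4).uniform(-1, 1, fs)  # stand-in for a drum track
less_boxy = peaking_eq(drums, fs, freq=400, gain_db=-3.0)
```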
6. Compression Before EQ or the Other Way Around?
You might not realize that it matters which processor comes before another. Depending on which processor comes first, it will interact differently with those that come after.
Guitar players know that putting an overdrive pedal before their distortion will sound different than putting the distortion before the overdrive. They are two different effects, and the result will be either a distorted overdrive signal (which usually sounds fuzzier) or an overdriven distortion, which can sound harsher and chunkier (and in my book way better).
If you put a compressor before an EQ, it will first even out the dynamics of the signal, flattening it if you will, and you can then apply EQ to a more stable signal; whereas adding EQ before compression and boosting various frequency ranges will make your compressor more sensitive to those boosted areas.
Here is a general guideline: if you are going to EQ drastically, boosting a lot and so forth, I would recommend applying compression before the EQ. But if you want to compress drastically and only clean up the EQ a little, you can apply the compression after the EQ. Think about what you are going to do with each processor, and then experiment.
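Here is a toy Python sketch of why the order matters. The “EQ” is just a gain stage standing in for a boost, since the level change is what the compressor reacts to, and the compressor is deliberately simplified:

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Toy memoryless compressor: above the threshold, gain is reduced by `ratio`."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def eq_boost(x, gain_db=6.0):
    """Stand-in for an EQ boost: a plain +6dB gain stage."""
    return x * 10 ** (gain_db / 20)

signal = np.linspace(-1.0, 1.0, 5)
a = eq_boost(compress(signal))  # compress first: the boost rides on tamed dynamics
b = compress(eq_boost(signal))  # boost first: more of the signal trips the compressor
print(a)  # [-1.25 -1.    0.    1.    1.25] (roughly)
print(b)  # [-0.87 -0.62  0.    0.62  0.87] (roughly) -> order changes the result
```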
7. Compression for Compression’s Sake
I was recently mixing a few guitar tracks. These guitar tracks all sounded really great in their own right, crystal clean and sparkling with just a hint of tube overdrive to give them that bluesy warmth.
I thought to myself that these were pretty decent guitar tracks that didn’t need any processing, just some light buss compression to make them sound more like a whole. But when I grouped all the guitars to one track and went to apply a little compression, I accidentally started off with a preset that added way too much compression, at least judging by the numbers and what I was seeing on the meters. What I was hearing, though, was a whole different story.
For some reason, that compression push shone a light on all the little nuances of the guitar playing, making it sound even better than I ever would have thought. The only downside was that, because the compressor was squashing the signal, the noise floor was much too close for comfort and you could hear the hiss from the guitar amp. But the beautiful pro outweighed the hissy con, so I kept it that way.
Moral of the story? Accidents happen, and sometimes they can sound beautiful. And sometimes you might need to compromise a little audio fidelity for a better sounding performance. If you compress too much you raise the noise floor of the signal, but in this case the little extra noise was bearable.
8. Are You Making These Time-based Effects Mistakes?
Here’s a nugget of wisdom: Don’t use time-based effects such as reverb and delay as insert effects. Huh?
Let’s separate them. Inserts are the processors you put directly onto a track, such as a compressor or EQ, when you want to change the ORIGINAL source. Bus effects, effect returns and sends are all part of the jargon for when you SEND a COPY of your signal, already processed with INSERT effects (EQ, compression), to a separate bus or auxiliary track to be treated with TIME-BASED effects.
EQ and compression don’t add any delay or shift the signal in time in any way; they change and process the original in real time. Time-based effects, on the other hand, are used when you want to reproduce and add a separate, delayed (or reverbed or chorused or flanged) signal alongside the original source.
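Here is a rough Python sketch of the send/return idea: the dry track stays untouched, a COPY goes to a delay “bus” at a chosen send level, and the two are summed at the end (all of the values here are illustrative):

```python
import numpy as np

def delay_bus(x, time_samples, feedback=0.4, repeats=3):
    """Toy delay line: a few decaying echoes of whatever is sent to the bus."""
    out = np.zeros(len(x) + time_samples * repeats)
    for i in range(1, repeats + 1):
        start = i * time_samples
        out[start:start + len(x)] += x * feedback ** i
    return out

sr = 44100
dry = np.random.default_rng(2).uniform(-1, 1, sr)  # stands in for an insert-processed track

send_level = 0.5                                         # how much of the copy is sent
wet = delay_bus(dry * send_level, time_samples=sr // 4)  # 250ms echoes

# The dry signal stays exactly as it was; the echoes are added alongside it
mix = np.concatenate([dry, np.zeros(len(wet) - len(dry))]) + wet
```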
Now, using time-based effects is a great way to add extra space to your mixes, thicken up an instrument or give a part a different character. And, just as with EQing before compressing or the other way around, it matters which time-based effect comes first. Modulation effects use short delay times (up to around 30ms), while delays can run much longer.
Lastly, reverb is more of a “space” creator than a signal repeater. Adding a delay before a reverb will delay the reverb from starting, while putting a delay after a reverb will repeat the reverb sound. My last Basix tut, “How the Hell do I use Reverb Anyway?!?”, covers reverb in depth, so if you want to dig deeper into the subject, read up there.
9. Deep and Dry
Now, depth. How do you achieve depth? Without it, it’s like having everybody stand at the exact same distance from you and try to talk to you at the same time. Who are you going to listen to?
The same happens when you have a mix without depth. Your elements might not occupy the same stereo space, since you’ve already panned them around, and they might not all sit at the same volume, which is one way of achieving a certain level of depth. Louder instruments obviously sound relatively closer than quieter ones, but they still kind of sound like they are all standing the same distance away from you.
Will Walker has a very nice series of tutorials about 3D mixing, where he looks at a mix from a three-dimensional angle: levels and EQ as up and down, panning as left to right, and reverb, delay and effects as front to back.
You need to apply space to your instruments to create a sense of 3D, or depth, in your mixes. If you are sending an instrument to a reverb, you can control the amount of depth by balancing the dry (original) signal against the wet (reverbed) signal. If there is much more of a really wet reverb in your mix than the actual instrument itself, the instrument will sound further away. By adding more of the effected sound you are essentially drowning it, pushing it back into the space you have created with your reverb.
Of course, you can still achieve depth without drenching your mixes in reverb, and that’s where we come back to balancing our wet and dry faders. You can add roundness and space to an element by applying some reverb but keeping the reverb in the background. By adding a little reverb here, more there and even more to other parts, you position your elements front to back, kind of like placing pieces on a chess board.
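To tie the idea together, here is one last Python sketch: a toy reverb built from decaying noise, with the wet level acting as the front-to-back control (the reverb itself is deliberately crude; the dry/wet balance is the point):

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_reverb(x, length=2000, decay=400):
    """Crude reverb: convolve the signal with exponentially decaying noise."""
    impulse = rng.uniform(-1, 1, length) * np.exp(-np.arange(length) / decay)
    return np.convolve(x, impulse)[: len(x)] * 0.05

def place_in_depth(dry, wet, wet_level):
    """More wet relative to dry pushes an element back; less keeps it up front."""
    return dry + wet_level * wet

vocal = rng.uniform(-1, 1, 44100)  # stand-in for a vocal track
wet = toy_reverb(vocal)

up_front = place_in_depth(vocal, wet, 0.2)  # mostly dry: close to the listener
far_back = place_in_depth(vocal, wet, 1.5)  # reverb dominates: pushed to the rear
```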
Conclusion
Do you remember how cool that scene was in Minority Report where Tom Cruise flung computer windows across the screen with his hands? Throwing one profile that way, opening up a new one and deftly placing it somewhere else in one fell swoop? Or that scene in Iron Man 2 where he strips apart a virtual model to find the blueprints of an unknown element? That’s what I would like mixing to be: surrounding yourself with all the instruments in a virtual reality setting and placing them front to back or left to right depending on what you need.
Think about that next time you are struggling to find where your mix is going. Think about where you want your elements to be from the start, and then manoeuvre them into position as deftly and swiftly as possible. By knowing the basics of where you want to go and using the fundamental mixing principles you learn at the beginning, you can put your mix into a desirable position as quickly and as easily as possible. Only then should you try out all the other advanced mixing tricks.