
    10 Steps to Mixes That Translate: Part 2

    In part one of this guide on improving mix translation, I covered five ways that equipment, room acoustics, and speaker positioning compromise the effectiveness of all but the most ideal setups, and more importantly, what you can do about them on a reasonable budget. If you haven't read it yet, you'll definitely want to start there.

    Today, in part two of this series, we'll cover steps you can take with the equipment and room you have now to make the best of them and give your mixes the best chance of translating to the real world. It should go without saying that these latter five steps don't make the first five obsolete, or vice versa. Covering all ten will give you the greatest advantage.

     

    6) Have You Checked Your Mix Against Other Mixes? 

    This is one of two forms of reference checks, and it's very important. Because the ear adapts to what we hear so quickly, and because we sometimes engineer outside of our preferred genres, we can easily lose touch with how our mix sounds in relation to how it could sound, and how audiences expect it to sound.

    The solution is to play your mix against professionally engineered mixes in the same genre. If you're mixing country or commercial electro-pop, find a country song or electro-pop song that just sounds fantastic, and play it against your mix. Of course, you need to level-match the two songs before contrasting them against each other, to establish an even playing field. What differences do you hear in the balance among instruments that sound superior in the commercially released song? How does the entire song sound as a whole?
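    If you'd rather level-match offline than by ear, here's a minimal sketch of the idea in Python, assuming numpy and the soundfile library. The file names are placeholders, and RMS is a crude stand-in for perceived loudness (a LUFS meter is better), but it shows the math:

```python
# Level-match a mix to a reference by RMS (hypothetical file names).
import numpy as np
import soundfile as sf

def rms_db(x):
    """RMS level of an audio buffer in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

mix, fs = sf.read("my_mix.wav")          # float samples in -1.0..1.0
ref, _ = sf.read("reference_track.wav")

gain_db = rms_db(ref) - rms_db(mix)      # gain to put both at the same RMS
matched = mix * 10 ** (gain_db / 20)     # watch for clipping if gain is positive

print(f"Applied {gain_db:+.1f} dB to match the reference")
sf.write("my_mix_levelmatched.wav", matched, fs)
```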

    Examples

    If you're mixing a danceable tune with a driving beat, there's a good chance that the kick and snare and vocal deserve to be the loudest elements of the song. But suppose you balanced the snare volume early before all other instruments were mixed in, and it gradually became buried among the other instruments. It's just not going to pop anymore if a rhythm guitar and a synth part are substantially louder, particularly in the same frequency range. Yet it's very possible you didn't notice the snare was gradually disappearing during the process of adding instruments. The solution is simply to increase the snare volume, but there's a good chance you didn't realize there was a problem until you compared your mix to a commercially engineered song. This comparison reveals what you can do to improve your mix.

    Another example that I'm quite familiar with personally: it's easy to gradually build the song up, adding layer after layer, instrument after instrument, and the entire song has a good vibe and what sounds like a decent mix. Yet, without realizing it, I've built a mix that's significantly lacking in the high-end. Sure, it sounds good in the creation process, and all the richness and warmth are there, but it's still an unbalanced mix that I've become accustomed to over a period of hours. Comparing the mix to a well-engineered song can bring things back into perspective, and it becomes a simple matter of making everything sound crisper and brighter with EQ.

    It's not worth covering this in much more detail because I wrote about it at length in another post: How Reference Checks Can Save Your Song. So if you need more clarification, be sure to read that post. Reference checks are a huge part of getting mixes to translate; they're far more important than 'this vital EQ trick' or 'how that style of compression will make you sound pro', and they're very deserving of a place in this list.

     

    7) Have You Checked Your Mix on Other Stereos? 

    Equally important to checking your mix against other mixes is the other kind of reference check: checking your speakers against other speakers. After all, if you mix on Shure in-ear headphones or Rokit monitors, you can't expect all of your listeners to be listening on the same gear, can you?

    The Problem

    When we listen on one set of monitors or one set of headphones, we become very set on how things sound on them, flaws in the speakers and all. The challenge is that this leads to false impressions of how loud the mid-bass in your mix actually is, whether or not there's too much sibilance on the vocals, or whether the balance of instruments even feels right. This remains somewhat of a problem even if you have great speakers set up properly in a room with good acoustics, following part one of this topic, and I can't stress its importance enough if you haven't yet addressed some or all of those aspects.

    The Solution

    What can you do to fix this? Listen on as many different stereos as you can. The 'car test' is popular for a reason: go listen to your mix in your car, and you'll likely hear a whole new set of problems to solve that weren't apparent on your monitors. But don't stop there: it's also valuable to make sure your mix works on high-end headphones and cheap earbuds, on powerful stereos and table-top devices. So listen on each and take notes on which aspects of your mix need tweaking.

    Maybe the bass synth or bass guitar needs an EQ change, and the kick needs a level change. Maybe the background vocals need adjusting to sit just behind the lead vocals. Maybe the rhythm instruments are masking each other, or aren't given the right amount of power. You can't make your mix sound perfect on every stereo, but you can make it sound as good as possible on as many as possible. And at this point, the deficiencies of each stereo will be increasingly revealed to you. But when you have three sets of speakers telling you the bass is too loud and only one set telling you it's just right, you need to trust the majority, even if your studio monitors are the speakers telling you things are just right.

    Like the previous point, this is also covered in more depth in my post on reference checks. So be sure to read there for more insight.

     

    8) Is Your Gain-Staging in Good Order? 

    If the term "gain-staging" is new to you, then perk up your ears, because this is important. Simply put, gain-staging is making sure that at each point you have a volume control, it's set appropriately for whatever piece of software or gear is going to follow it. And if you think about it, each preamp on your interface has a volume control, and each track and send and bus in your DAW has its own volume, not to mention that the input and output of each plugin can be adjusted. That's a lot of volume controls! How can we truly know and understand all of them?

    Making It Simple

    Well, thankfully, you don't have to understand all of them. You just have to make sure each volume control is set 'about right', in that it's not so quiet that you're losing valuable subtleties to the noise floor of your equipment, and that it's not so loud that you're likely to clip, or likely to contribute to excessive overall mix volume. More often than not, people err on the side of having the volume too high, so know that it's okay to keep things a little quieter. And the easiest part is that if you start with the right volume, either at the preamp level on your interface or the output level of your virtual instrument, the rest falls in line.

    What happens if you ignore gain-staging and just keep soldiering on with your music? A severely clipped mix bus is not only the worst-case scenario, but the likely scenario. And you end up with a weak, harsh, unpleasant sound that you can never undo down the line in mastering.

    There are three main areas people tend to mess up the most with gain-staging, and I'll break down all three:

     

    a) Recording Level

    When recording with a microphone or physical instrument, if the preamp on your interface is turned too loud, you risk clipping and losing transients. Digital clipping sounds terrible and can't be undone, and many a good take has been ruined by keeping the preamp volume turned too high and hoping the performer doesn't clip the preamp. And even if you play it safe and keep the average level perhaps 8 dB away from clipping, there's a good chance you're losing the subtleties of transients, particularly if the instrument is percussive. Play it safe and record instruments at a lower volume.

    There are two reasons people struggle with this. The first is that people were warned that if you record too quietly, you'll get too much hiss in your track. This was somewhat true more than fifteen years ago, when equipment wasn't yet made to today's standards and all digital audio was 16-bit. But the real fear is still lingering from the days of recording to analog tape! This just isn't a factor anymore if you have even a cheap modern interface and are recording at 24-bit, much less 32-bit float. The second reason people struggle with this is that they compare working in digital with working in analog: aiming for 0 dB on an analog console was just what you did, and every piece of analog equipment was built with at least 12-15 dB of headroom above 0 dB. And even when exceeding that +15 dB max, clipping occurred gradually and sounded soft. With digital, however, 0 dB is the absolute maximum volume the sound can ever be, and passing it by just a millimeter results in hard, ugly digital distortion. Digital is not analog, and people working in digital need to create that safe buffer from clipping themselves. We do this by recording softer, turning our preamps down to give ourselves the 12-15 dB of headroom we need.

    I personally aim to record at -15 dBFS on average, meaning that when I'm testing the levels with a track armed to record, and the track doesn't have any plugins on it, the meter in my digital audio workstation (DAW) tends to show the incoming signal most often at about -15 dB. The nice thing about recording at this level is that even if you drift a little, in that the vocalist might move closer to or further from the microphone, or the guitarist might adjust his gear, you should still be within the range of -10 to -20 dBFS. That's still enough room not only to avoid clipping, but to preserve transients, all without flirting with the noise floor of your equipment and recording unnecessary hiss.
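    If you want to check a take's levels outside your DAW's meters, here's a rough sketch of the same measurement in Python, assuming numpy and soundfile (the file name is hypothetical):

```python
# Read peak and average level in dBFS from a recorded take, assuming a
# float WAV normalized to -1.0..1.0 (hypothetical file name).
import numpy as np
import soundfile as sf

take, fs = sf.read("vocal_take.wav")

peak = 20 * np.log10(np.max(np.abs(take)))
avg = 20 * np.log10(np.sqrt(np.mean(take ** 2)))

print(f"peak: {peak:.1f} dBFS, average: {avg:.1f} dBFS")
# An average near -15 dBFS leaves a healthy buffer for transients before
# the 0 dBFS ceiling, per the guideline above.
```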

     

    b) Virtual Instrument Output Level

    The trouble with most virtual instruments on the market is that they're incredibly loud! If it's a synth, it likely sounds punishingly loud and has next to no headroom. And if you're working with sampled digital drums, you'd better believe those samples are pushed within an inch of their lives with multiband compression and limiting before they're added to the sample pack. The result is that one instrument alone is as loud as or louder than your entire mix should be. And even something as simple as a small EQ boost can push a near-clipping sample over the cliff of 0 dB, and the result is ugly distortion.

    It's worth noting that at least a few DAWs allow you to exceed 0 dBFS inside the DAW without clipping, especially if your session is at 32-bit floating-point. However, not all DAWs can do this. The trouble is that there doesn't seem to be a list of which DAWs can handle this and which DAWs can't. And further, even if your DAW can handle it as long as the volume is reduced before physical output, you don't know for a fact that your plugins can handle it. A good number of them may be clipping internally if fed a signal higher than 0 dB. Again, all you can do to avoid this is lower the volume of the virtual instrument.
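    To see why floating-point has this safety net and fixed-point doesn't, here's a toy numpy demonstration; this isn't any particular DAW's behavior, just the underlying arithmetic:

```python
# Values above 1.0 (0 dBFS) survive in float and can be trimmed later;
# a fixed-point stage hard-clips them first, and no trim can undo that.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
hot = 2.0 * np.sin(2 * np.pi * 440 * t)       # a sine peaking ~+6 dBFS

float_path = hot * 0.25                       # trim after: clean sine at 0.5
fixed_path = np.clip(hot, -1.0, 1.0) * 0.25   # clipped before the trim

print("float path peak:", np.max(np.abs(float_path)))   # 0.5, undamaged
print(f"{np.mean(np.abs(hot) > 1.0):.0%} of samples were flat-topped "
      "in the fixed path")                    # roughly two-thirds of them
```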

    The best way to handle this, similar to recording a microphone, is to start with low volume from the very beginning. I frequently turn down the volume of virtual instruments by 15 dB, sometimes more.

    Some virtual instruments get this right: for example, Addictive Keys by XLN Audio often outputs right about the perfect volume, and I don't have to dive for the volume control immediately after adding Addictive Keys to a session. For other virtual instruments, it's a small hangup: in Omnisphere 2 by Spectrasonics, the master output volume stays fixed even when changing to a new patch, so it's a one-time step per instance of Omnisphere to lower the virtual instrument's internal master volume by perhaps 15 dB before choosing a patch.

    But other instruments make it really difficult: Battery 4 by Native Instruments not only uses ridiculously loud drum samples, but the master output volume for Battery is tied into the instrument patch, so loading a new kit resets the volume to 0 dB. You can choose to lower the volume by 15 dB or so within your DAW using the track's fader, but there can still be internal clipping in Battery from playing stacked drum hits and internal effects, and that doesn't save the plugin chain in your DAW from clipping. My only solution is to lower the volume of the track in my DAW while I'm choosing a kit, and then raise it back to 0 dB and lower the volume of the individual drums in Battery until the level sounds about right for each.

    It's a pain, and I sincerely hope the loudness war among virtual instruments dies down. They could all learn a thing or two from XLN Audio. But until then, we need to responsibly handle the gain of virtual instruments ourselves.

     

    c) Mix Bus Level

    If you combine ten or twenty or thirty high-volume tracks in your DAW, your mix bus will clip so aggressively that your song loses any hint of power and depth and control that it could have had. As I mentioned, some DAWs allow you to surpass 0 dBFS inside the DAW, particularly if you're working at 32-bit floating-point, but volume above that absolute limit of 0 dB can't be exported or bounced without clipping, and your interface can't play back your session to your speakers without the digital-to-analog converter in your interface clipping. The only way around this is to lower the volume of all tracks in your session, so even when summed together in the mix bus, there's still some headroom before clipping.
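    The arithmetic behind this is worth seeing once: in the worst case, N in-phase tracks sum to the per-track peak plus 20·log10(N). A quick sketch:

```python
# Worst-case mix bus peak when N tracks of equal peak level sum in phase.
import numpy as np

track_peak_db = -15.0   # per-track peak, as recommended above
for n in (10, 20, 30):
    bus_peak = track_peak_db + 20 * np.log10(n)
    print(f"{n:2d} tracks at {track_peak_db} dBFS -> "
          f"up to {bus_peak:+.1f} dBFS on the bus")
```

    Real material rarely aligns perfectly in phase, so actual bus peaks sit well below these figures, but even thirty tracks recorded at a sensible -15 dBFS can in principle push the bus past +14 dBFS. Headroom on individual tracks is not optional.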

    I guarantee this is an issue for you if you're not in the habit of recording quietly and reducing your virtual instrument volume: the above two points are major contributors to this. But it's also possible to back yourself into a corner with plugins. For example, if you boost aggressively with EQ, you need to remember to lower the input of the EQ so the before/after volumes match. And if you use a compressor with automatic make-up gain, it's important to reduce the output volume of the compressor. The goal is that once you start with tracks at reasonable volume levels by setting your preamp levels and virtual instrument levels low, you maintain that nice, easy volume through your plugin chain, and then mix your song with many of the faders in your DAW around 0 dB instead of the -20 dB you'd need if you didn't bother with proper gain-staging.
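    Here's a toy model of that discipline through a chain; the stage names and gain values are illustrative, not a prescription:

```python
# Every boost in the chain is paired with a matching trim, so the level
# leaving the chain equals the level entering it (values illustrative).
stages = [
    ("preamp / instrument level", -15.0),  # start quiet, per a) and b)
    ("EQ boost",                   +6.0),
    ("EQ input/output trim",       -6.0),  # undo the boost's level change
    ("compressor make-up gain",    +4.0),
    ("compressor output trim",     -4.0),  # undo the make-up gain
]

level_db = 0.0
for name, gain_db in stages:
    level_db += gain_db
    print(f"{name:28s} {gain_db:+5.1f} dB -> running level {level_db:+6.1f} dBFS")
# Ends at -15 dBFS: the fader can then sit near 0 dB, as described above.
```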

    It's worth noting that some people might respond negatively to this, under the false impression that a loud mix bus volume equals a loud master, which equals a volume advantage on the radio that will draw in listeners. I'll shoot down this myth on two fronts: first, it's very unlikely that your music will actually be heard louder than other music. Not only do radio stations heavily limit all content before broadcasting, but streaming services including Spotify, Apple Music, Tidal, and more also turn down songs that are too loud. Even YouTube aggressively turns down loud videos, and many media players use volume normalization as well. So the advantage only exists in a small number of places. And second, maxing out your mix bus has nothing to do with the final volume of your song. Instead, the final volume of your song has everything to do with how aggressively it's limited during mastering and the crest factor of the mix. You won't harm your song an ounce by mixing with headroom to spare (in fact, you'll even be saving it), and all that needs to be done to make the master nice and loud is to lower the threshold on the limiter.

     

    It can feel like a bit of a headache at first. But if gain-staging is an issue in how you work, learning a little theory now can go a long way toward ensuring your music sounds better for the rest of your engineering career. It's well worth the small amount of time required to establish these habits right now. And once your levels are in check, your mixes will have a far better chance of sounding good, and as expected, on other stereos.

     

    My Secret to Making It Automatic

    If you want to make it easy for yourself, you can use the hack that I use, adopted by most of the film industry and recommended by mastering engineer Bob Katz. Simply put, keep your speakers turned up really loud. This way, adding a virtual instrument at full volume will sound deafening, and you'll turn down the instrument by default. If your speakers are set at a good (loud) volume, all these habits become automatic.

    Briefly, the recommendation is to set your amplifier volume so that a -20 dBFS 1 kHz sine wave played through one speaker measures 83 dB in your room according to an SPL meter. 83 dB sounds good and loud, and this provides 20 dB of headroom for peaks up to 103 dB. I set mine at about 81 dB because I prefer to work a little quieter. I turn down the volume on my monitor controller when listening to commercial music or watching YouTube videos, but I always turn it up to the same spot when making music. Calibrated volume controls make it easy, but if your monitor controller isn't calibrated, you can mark the working position of your volume knob with a piece of tape or a Sharpie. And never touch your amplifier volume again. Using this method, it's uncomfortably loud to compose or mix music without enough headroom. And as you produce or mix each song to sound good and loud without sounding uncomfortably loud, good gain-staging becomes automatic just by working by ear.
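    The calibration tone itself is easy to generate. Here's a sketch in Python, assuming numpy and soundfile; the 30-second length and file name are my own choices:

```python
# Write a 1 kHz sine at -20 dBFS as a 24-bit WAV for speaker calibration.
import numpy as np
import soundfile as sf

fs = 48000
seconds = 30
amplitude = 10 ** (-20 / 20)      # -20 dBFS as a linear value (0.1)

t = np.arange(fs * seconds) / fs
tone = amplitude * np.sin(2 * np.pi * 1000 * t)

sf.write("cal_tone_1k_minus20dBFS.wav", tone, fs, subtype="PCM_24")
# Play it through one speaker at a time and adjust the amplifier until an
# SPL meter at the listening position reads 83 dB (or your chosen target).
```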

     

    9) Are You Cleaning Up Your Low End? 

    One of the bigger problems beginner mixers have is that their songs have a messy low-end. Partly, this is due to habits they haven't yet adopted. And partly, it comes from working with small- or mid-sized studio monitors without accounting for their nature. Let's start with the monitors.

    Relying Too Much on Stereo

    There's a good chance your primary monitors are revealing and have a lot of detail. You do own them with the intention of using them to hear the intricacies in the music you make, after all. And if they're even halfway set up as described in the first half of this topic, the wide stereo image is lending a hand toward increased separation. Stereo separation is a wonderful thing, but it can also be misleading in that it can make instruments sound separate despite having overlapping frequencies. For example, a mix could have a low rhythm guitar on one side, a low-mid synth on the other side, and a fuzzy bass in the center. When you mix with good stereo separation, there's a good chance you're relying on panning to distinguish the three sounds from each other. But for people listening in less ideal situations, including venues with mono playback systems, mono speakers in the ceilings of restaurants and retail stores, and even the reduced stereo of sitting on one side of a car or hearing a stereo Bluetooth speaker on the other side of the room, it becomes very difficult to separate the three instruments from each other.

    How do you fix this? With EQ, of course. If you give each instrument its own space in the frequency range of the entire mix, it becomes much easier to distinguish each instrument from the others in mono and near-mono listening. It helps if you create specific peaks with EQ for each instrument, so each gets its place to shine in the spectrum of the entire mix. Also, this is where mixing in mono becomes very useful: when you can no longer rely on stereo separation to hear each instrument distinct from the others, it forces you to fine-tune the volume balance and EQ separation for each, leading toward a mix that not only sounds better in mono, but in stereo too. So while the stereo separation of well-positioned, good-sounding monitors is important, you can begin to see how one needs to think around that advantage to better deliver a great mix that effectively translates to mono systems and near-mono listening scenarios.
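    If your DAW or monitor controller doesn't have a mono button, a fold-down is trivial to render yourself. A minimal sketch, with a hypothetical file name:

```python
# Fold a stereo mix down to mono by averaging the channels, which keeps
# the level comparable to the stereo original.
import soundfile as sf

mix, fs = sf.read("my_mix.wav")   # shape (samples, 2) for a stereo file
mono = mix.mean(axis=1)           # L and R averaged into one channel

sf.write("my_mix_mono.wav", mono, fs)
# Listen for instruments that vanish or blur together: those are the ones
# leaning on panning instead of EQ for their separation.
```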

    The Deep Bass

    But there's one more aspect of having speakers like this that can be a disadvantage. Unless your system has a very robust network of subwoofers that deliver the last word in low-frequency reproduction, you're probably not hearing a lot of what's going on in the very low end. You can expect that many kick samples and bass synth patches have a lot of deep bass, but much of it can't actually be heard, even on a good system. If you remove the frequencies that are deeper than necessary, you not only increase the headroom of the mix, allowing it to sound cleaner or louder, but you also remove a lot of the muck that clouds the low frequencies in poor mixes.

    This problem is also solved by EQ. Say your synth bass is creating valuable content at 50 Hz and also a lot of needless rumble below. My spectrum analyzer shows that many patches create strong signal down to 10 Hz and lower! Your mix will sound better if you use EQ to high-pass the sound of the synth bass just below the relevant content. You can do this by ear, but it becomes significantly easier when you use an EQ with a built-in spectrum analyzer, like Pro-Q 2 by FabFilter, or H-EQ by Waves. With the spectrum analyzer, you can easily pinpoint where the content is and shape your high-pass around the relevant frequencies. In this scenario, I would use a high-pass filter to roll off the bass at about 45 Hz with as steep a filter as I can use without creating audible problems.
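    For those who like to experiment outside the DAW, here's a sketch of that 45 Hz high-pass using scipy. The 8th-order slope (48 dB/octave) is my own stand-in for 'as steep as possible without audible problems', so tune both values by ear:

```python
# High-pass a bass track at 45 Hz with a steep Butterworth filter
# (hypothetical file name; cutoff and order should be tuned by ear).
import soundfile as sf
from scipy.signal import butter, sosfilt

bass, fs = sf.read("bass_synth.wav")

sos = butter(8, 45, btype="highpass", fs=fs, output="sos")  # 48 dB/octave
cleaned = sosfilt(sos, bass, axis=0)    # filter along time, not channels

sf.write("bass_synth_hp45.wav", cleaned, fs)
# Check a spectrum analyzer before and after: the musical content near
# 50 Hz stays, while the rumble below the cutoff falls away.
```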

    You may think you want to keep the sub-bass frequencies because you want your mix to sound full and deep. So do I! But if you give this a try, you'll realize that the mix as a whole sounds like the bass is deeper and clearer and has greater punch when you limit the low frequency of each instrument to where it belongs, including low-bass instruments like kick drums and bass synths.

    Working with Low-Mids

    This also extends to other instruments that generate a lot of low-frequency content. You may not think of pianos and guitars as bassy instruments, but they have heaps of low-frequency content depending on which octave they're played in. And while that full-spectrum sound is great for solo performances, it substantially clouds the low-end in a full mix. Rolling off the lows just below the relevant content really helps. Same with synthesizers, as many patches have more bass than they need, and far more than a good mix calls for.

    This continues to surprise me: snare samples and even clap and hat samples can be the same way! Some have loads of low-end that you wouldn't expect. Not only is the low-frequency energy not necessary in a full mix, but it's destructive, and removing it is absolutely beneficial to the mix. Establish the habit of checking each track in your song with a spectrum analyzer to visually see if there are unnecessary frequencies present, particularly in the very low frequencies that your speakers likely don't handle like a champ.

    And remember that, in the context of a full mix, you often don't want one instrument to sound full-spectrum. Though a lead synth may sound killer full-spectrum when soloed, there's a very good chance that it detracts from the mix, and that the mix would benefit from a band-limited lead synth leaving the high-end for cymbals and the consonants in the lead vocal. Likewise, rolling off the lows of the lead synth makes room for the bass and kick to shine and provide depth and balance to the song.

    Other Tips

    A few other little elements of house-cleaning can polish your low-end further. Make a habit out of rolling off the bass in your reverb buses. It just doesn't need to be there, and it muddies and clouds the bass in the rest of the mix. Likewise, delay buses generally don't need a lot of bass to sound effective, and rolling off the low frequencies of the delay can clean up the low-end of the mix.

    Also, it's a great practice to roll off the bass when layering kick samples or bass synth patches. For example, if my bass sound is made up of three layers of synths, with one for mid-range grit, one for high frequency grit, and one for a clean deep tone, it generally sounds best to high-pass the first two layers so only the clean, deep layer is providing the anchor of bass required. Remember these steps to clean up the low-end towards a mix that sounds clearer and translates across systems with greater ease.

     

    10) Do You Need More Practice Listening?

    It can be really easy to dismiss the reasons why professionally engineered music can sound better than yours. You might want to say, "He has amazing analog gear in his million-dollar studio." But the quality of a mix comes down to how tools are used far more than the tools themselves. To that, you might reply, "But those guys have golden ears and I don't."

    The Source of 'Golden Ears'

    This is the point I want to address. 'Golden ears' aren't genetic, and you can't be born with them. (Though they can be lost, so if you're a drummer or frequent concert-goer, I strongly recommend hearing protection.) But there's equality in this: the people with golden ears are the people who developed golden ears. It all comes down to what many call 'active listening'. If you're shaking your tush at a club, you're probably not actively listening. But if you notice little things while listening to the radio, like "that snare sounds really sharp" or "huh, I can only pick out these three instruments when the volume is this quiet" or "I wonder what about those grungy drums makes them sound so good", you're well on your way. This is active listening, and making a habit of it will help you hear far more into your music and all other music you listen to. This is critical to having the ears and attention to detect the nuances you need to hear to make good mixing decisions.

    Pro-tip: reference checks not only help your mixes translate, but make for superb active listening practice.

    I want to say it's like laser eye surgery for your ears, but it's really not. There is no quick fix or magic solution. Which, again, is a great equalizer in that other people don't have some magic key you lack that helps them become great engineers while you struggle. A much better analogy is learning an instrument. I do okay with guitar, and I have an okay guitar. And if I want to get better, I need to put in the hours to practice. And though Eric Clapton undoubtedly has many incredible guitars, if he and I were to trade for a day, he could still play much better music on my okay guitar than I could on any of his great guitars. Because he's put in many thousands of hours practicing and performing that I haven't. It all comes down to experience.

    If you are an engineer who only listens passively, you're not going to learn much. And if you ever become good, it will take a long time to get there.

    But if you start listening actively, whenever you're waiting in the check-out line at the store, or riding in a car with the radio on, or resting between sets at the gym, you'll begin to notice things. And just noticing is an incredible teacher. In fact, I learn more about engineering from twenty minutes of actively listening to popular songs than I do from an hour of reading a textbook on mixing.

    Fast-Tracking Your Ears

    But you should know that there is a way to kick your ears into high gear: ear training. There are two kinds of ear training. There's the kind that musicians use to better hear pitches and intervals, which I encourage if you're a singer or musician in any sense. And there's also the kind of ear training relevant to engineering, and to this blog post on helping your mixes translate better: this ear training helps you better hear small details in audio. There are loads of software titles, free and paid, computer and mobile, that can help you train your ears. The one that I've personally used the most is the free beta project made by Harman called How To Listen. It's available for PC and Mac, and can be downloaded here on their blog if you scroll down a little in the post.

    If you want to give me a run for my money, let me know what scores you can get in the various game modes. I'll share mine. A little healthy competition pushes all of us to learn.

     

    Wrapping Up

    There you have it: five tips that you can practice while mixing, after mixing, and in your downtime between mixes to help your music translate better across other stereos, sounding its best for as wide an audience as possible. And combined with the steps in the first half of this topic, you have the foundation to provide clarity and consistency across all of your future work.

    Though these tips absolutely will help your mixes translate, I admit I didn't go into the nitty-gritty of using any specific type of plugin. Let me know in the comments below if you'd like future blog posts to be focused on getting the most out of EQ or compression, or any other plugin.

    06/20/2017
