<h1>A Parable on Minimalism in Music Production</h1>
<p><em>Milo Burke, 2019-02-11</em></p>
<p>It's been some months since my last blog post. I've missed you guys. I got married. I did some traveling. I moved from one apartment to another, which is always an ordeal with my studio setup. And I've been working.</p>
<p>One of my jobs is writing for a magazine covering the hardware and software of pro audio and high-end consumer audio. Writing under an editor has been a learning experience: I'm discovering what it means to adapt my writing style to someone else's publication.</p>
<p><strong><span class="font_large">The Parable</span></strong></p>
<p>This past month, I've been putting together a review of a music streaming service entering the US market. It plans to be the best, outdoing Spotify and all the others, but that's a bold claim for a small company, and it's a crowded marketplace to be in. My editor told me to aim for 1800-2400 words, and said that a music streaming service review should always include a, b, and c; our readers would appreciate exploring d, e, and f; and also please follow up on x, y, and z. I did all that, and I ended up with 9200 words. It's insightful, educational, has some unreleased news, and some killer quotes from the streaming company's employees opening up about topics you really don't see covered in the tech media. It's a beautiful review. But there's no way it can be published at that length, and I know it.</p>
<p>I asked my writer friend what to do, and he called it the "newbie curse": beginner writers don't know how to edit themselves. I have the newbie curse. And the worst part is that I knew I was being too detailed as I wrote. A quiet voice in the back of my mind kept reminding me that I was wasting time delving into tiny details irrelevant to the final review.</p>
<p>Aiming not to be a newbie, I spent days chopping away sentences and whole paragraphs that I felt were valuable and insightful, and it made me sad. These were the things I would really want to know as a reader. But the result is a leaner, meaner review that flows better. It's not as ground-breaking as it was, but it feels "pro" somehow.</p>
<p><span class="font_large"><strong>The Moral</strong></span></p>
<p>This reminds me of how I make music. I try to include everything in my productions: super complex percussion patterns, detailed sound design, intricate chord progressions. And I still miss the main goal of keeping it simple and approachable. It doesn't matter that a track is full of ear-candy for production nerds like me if I can't strip it down to the essentials, and I never put enough focus on the melody to actually make it a good song.</p>
<p>I know in the back of my mind that I haven't been using the best approach to make great songs. But the lazy approach is fun and easy. So I ignore that inner voice. And doing so helps me create a thousand lame tracks not worth publishing.</p>
<p>I'm not going to call the inner voice a conscience because this isn't about right and wrong. But I need to listen to my inner voice warning me that my song is too complex or that I haven't put enough time into the hook of a song. It can be really hard to start muting tracks that you like the sound of, or digging into an area of weakness. But when I listen to my inner voice and mute tracks and dig into my areas of weakness, I do my best work: I create the music I'm proudest of months and even years later.</p>
<p><span class="font_large"><strong>Learning to Work Smarter</strong></span></p>
<p>You have that inner voice too. The voice that says to be lazy and only work on the aspects of production and engineering you like, ignoring the rest. Some of you, I bet, love starting songs but hate finishing them. Others love vocals but always put off writing and recording their own. And others build songs up into a mountain of instruments that can't all co-exist. Like me, you've got the "newbie curse": you can't edit out the tracks that, while interesting and creative, don't make the song sound truer. Choose to take on the aspects you're afraid of and you'll advance faster than by any other method. Learn to edit yourself, and your songs will start to feel more "pro" and more "right".</p>
<p><span class="font_large"><strong>Closing Thoughts</strong></span></p>
<p>I'm amazed by how my blog's readership has grown, even in the months I wasn't active. Truly, I'm so thankful for you guys. And I can't wait to see what 2019 brings for this community.</p>
<p>I'm considering introducing a new Q/A portion to my blog and YouTube channel. So if you have any questions you'd like answered, or topics you'd like me to focus on, write me a message or reach out with the Contact form on my website.</p>
<p>And may each of you listen to your inner voice and learn to write and produce leaner, meaner songs that sound more "pro" and more "right".</p>
<h1>How to Master Music at Home</h1>
<p><em>Milo Burke, 2018-08-27</em></p>
<h2>Introduction</h2>
<p>You're ready to publish your music and you can't afford a mastering engineer. You're familiar with <a contents="what a mastering engineer does" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/mastering-isn-t-a-process" target="_blank">what a mastering engineer does</a>, and you're ready to tackle this yourself. But there's one problem: you don't know where to begin. What's the best way to approach mastering when you've never done it before? Fortunately, that's what we're covering today.</p>
<p> </p>
<h2>Preparing Your Files</h2>
<p>Before you begin, you need to gather all the files: whether you're mastering an 18-song album or a 3-song EP, you need to have all your music in the same place.</p>
<p>For each song, open up your mix and check that everything is sounding good. You should never save problems to fix in mastering when they can be fixed more easily in the mix. If you need to make changes, now's the time.</p>
<p>For each song, remove any limiting or aggressive bus compression in the mix. These are better saved for mastering. Then export each song as a high-resolution WAV file, using the sample rate of the song and the highest bit depth you have available.</p>
<p> </p>
<h2>Arranging Your Files</h2>
<p>When all your files are assembled, open up a new session in your DAW and place each song on a different track. It can help to first decide on song order, then space the tracks out so no one track is playing on top of another. When you do this, give some thought to how much silence sounds best after a song ends before the following one begins. For each track, add a marker on the timeline for when it begins. This will help you export each song consistently with the same song length.</p>
<p>Also add a track or two by an artist in your genre that sounds really good. Music you love to listen to. This will come in handy for <a contents="reference checks" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">reference checks</a>.</p>
<p> </p>
<p style="text-align: center;"><span style="color:#999999;">Pro Tip: Turn the volume of the reference tracks down so their volume matches that of your tracks.<br>This will make comparisons much easier.</span></p>
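<p>The level-matching in that tip is plain decibel arithmetic. Here's a minimal Python sketch of it; the function names and example numbers are my own, for illustration, assuming you've already measured how many dB louder the reference is than your mix:</p>

```python
def db_to_linear(db):
    """Convert a gain in dB to a linear fader multiplier."""
    return 10 ** (db / 20)

def reference_trim_db(reference_loudness_db, mix_loudness_db):
    """Gain to apply to the reference track so it sits at the same
    level as your unmastered mix (negative = turn it down)."""
    return mix_loudness_db - reference_loudness_db

# A commercial reference measuring 7 dB louder than your mix:
trim = reference_trim_db(-9.0, -16.0)   # -7.0 dB
fader = db_to_linear(trim)              # ~0.447 as a linear multiplier
```

<p>In practice your DAW's channel fader does this conversion for you; the point is that a loudness difference in dB maps directly to the trim you dial in.</p>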
<p> </p>
<h2>Tonal Matching</h2>
<p>Play a bit of the professionally mixed song, then a bit of one of your songs. Listen to the pro mix again, and then your mix again. Is your mix sounding brighter? If so, gently use EQ to darken your song. Does your bass need to be brought up? Fix it with EQ. The goal is to tonally match your song to the reference track. When your frequency spectrum is right, your music will sound a lot better on other stereos. When you've finished EQing the first of your tracks, do the same for the rest of the tracks in your album. Now you see why it's important to have each song on a separate track: each song requires different EQ to get it to match the spectrum of your reference track.</p>
<p>Listening straight through can be tedious, and your ears will adjust too quickly. Don't be afraid to quickly jump around in your track and the reference track. You're looking for differences that are most apparent in the first second you start playing back each song, not hidden details revealed after ten minutes of intense listening.</p>
<p> </p>
<h2>Focusing on the Dynamics</h2>
<p>Remember my post on <a contents="micro-dynamics and macro-dynamics" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/micro-dynamics-and-macro-dynamics" target="_blank">micro-dynamics and macro-dynamics</a>? We're going to take it from theory to practice.</p>
<p>In each of your songs, intently listen to the volume change within a measure. If your song is energetic and has solid percussion, you can hear the volume jump up and down within each second of music. This volume change is what we're paying attention to. If there's too much difference in intensity from the loudest moment to the softest moment, now's the time to bring in a little compression with a fast attack. A light touch is best: aim to compress just 1-2 dB off the peaks. If you feel your song needs more, add a second compressor and set that one to compress the peaks no more than 2 dB. Subtlety is the name of the game.</p>
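<p>To make "1-2 dB off the peaks" concrete, here's a sketch of the static gain curve of a hard-knee compressor. This is a simplification (real compressors also have attack and release envelopes), and the threshold and ratio values are hypothetical:</p>

```python
def gain_reduction_db(level_db, threshold_db, ratio):
    """How many dB a hard-knee compressor shaves off a signal
    sitting at level_db, given a threshold and ratio (levels in dBFS)."""
    if level_db <= threshold_db:
        return 0.0                      # below threshold: untouched
    over = level_db - threshold_db
    return over - over / ratio          # input level minus compressed output level

# Peaks at -6 dBFS through a gentle 1.5:1 ratio with threshold at -10 dBFS:
reduction = gain_reduction_db(-6.0, -10.0, 1.5)   # ~1.33 dB: inside the 1-2 dB zone
```

<p>If the number comes out above 2 dB, raise the threshold or lower the ratio rather than letting one compressor work that hard.</p>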
<p>If you feel your song needs more dynamic diversity instead of less in each measure of music, you can try using an expander (a compressor with a ratio of less than 1:1). But the best place to fix this is back in the mix. If this is the case, <a contents="go back to your mix" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">go back to your mix</a>, remove the extra compression there, export the song, and start again.</p>
<p>If your goal in adding compression is to achieve a little bit of flavor, reach for a colored, vintage-style compressor. My favorites are TuCo by Sonimus and Puigchild by Waves. With character in mind, set a slower attack, so the compressor only begins working after the initial "threat" of loudness has passed. And aim for a slow release that roughly matches the time before the next drum beat comes in. Compressing this way lets the power of the drums poke through the compressor, while everything between beats gets a little squished and starts to gently pump in time with the music. This effect may be best used in parallel, so aim for a compressor that has a wet/dry control.</p>
<p>Now that the micro-dynamics are in-check, focus on the macro-dynamics as you listen to the transitions in your song. When the song goes from verse to chorus, does the chorus jump out at you and feel special? If it doesn't, it should. A great way to achieve this is to use volume automation to dim the verses by a decibel or two, which then lets the chorus pop out at the listener. If you want each song section to sound special, you can use automation to gradually dip the volume from the beginning of a verse to the end of the verse, where it then pops up for the chorus. If you're subtle, the listener will never notice the drop in volume, but each new song section will feel energetic and big.</p>
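<p>The verse-dip automation described above is just a slow gain ramp. A sketch, with hypothetical numbers, of the automation points you'd draw in:</p>

```python
def verse_dip_db(n_points, dip_db=1.5):
    """Automation curve for one verse: start at unity (0 dB) and
    glide down to -dip_db by the verse's end, so the chorus that
    follows at full volume pops out at the listener.
    Assumes n_points >= 2."""
    return [-dip_db * i / (n_points - 1) for i in range(n_points)]

curve = verse_dip_db(5)   # [0.0, -0.375, -0.75, -1.125, -1.5]
```

<p>Drawn over a 30-second verse, a ramp this shallow is invisible to the listener, but the chorus that lands back at 0 dB feels 1.5 dB bigger for free.</p>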
<p>Once you've tweaked the dynamics of one song, repeat for the rest. The album's starting to come together.</p>
<p> </p>
<p style="text-align: center;"><span style="color:#999999;">Pro Tip: If you plan on pushing your song extra hard into the limiter, you may get better results<br>using volume automation for macro-dynamics after the limiter, not before.</span></p>
<p> </p>
<h2>Loudness</h2>
<p>For a comprehensive look at how to achieve the perfect loudness for your music, check out <a contents="my blog post on the topic" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness-update" target="_blank">my blog post on the topic</a>. But for now, we're just going to skim over the functional parts of that post.</p>
<p>To find the ideal loudness for your music, you'll need a tool that can measure the integrated LUFS of an entire song; bonus points if it does this by scanning an exported file. I use MAAT Digital's DROffline MkII, though other brands make tools that are equally functional. The integrated LUFS is the value we're looking for.</p>
<p>Next, choose one of your songs to focus on first: one that's upbeat, full, and energetic. Put a limiter on your track with a ceiling set at -1.0 dB true-peak, and lower the threshold until the limiter starts nipping at your song's peaks. Then measure the integrated LUFS value over the entire length of your track.</p>
<p>A decently loud, decently dynamic song should score about -14 integrated LUFS. If you want a little more volume at the cost of a little dynamic range, you can push it to -13 LUFS. If you want to go really loud, aim for -12 LUFS. But I don't recommend pushing your loudness any further than -12: you'd be throwing away the integrity and vitality of your music for outdated reasons. Volume normalization on the playback side is now the norm, marking the end of <a contents="the loudness war" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">the loudness war</a>.</p>
<p>If your song measures quieter than your target LUFS, lower the threshold in your limiter a bit, export, and measure again. Repeat until your song matches your target integrated LUFS value.</p>
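<p>The correction step in that loop is simple arithmetic: under light limiting, loudness rises roughly dB for dB as the threshold drops. A first-pass sketch (my own helper name; the one-for-one assumption breaks down as limiting gets heavier, so always re-export and re-measure):</p>

```python
def threshold_correction_db(target_lufs, measured_lufs):
    """How much further to lower the limiter threshold (a positive
    result means lower it), assuming loudness rises ~1 dB per dB of
    threshold drop under light limiting. Re-measure after applying
    rather than trusting this estimate blindly."""
    return target_lufs - measured_lufs

step = threshold_correction_db(-14.0, -15.4)   # lower the threshold ~1.4 dB
```

<p>Two or three passes of this loop usually land within 0.1 LUFS of the target.</p>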
<p>Once your chosen song is ringing in at the integrated LUFS value of your choice, put a limiter on the second song, and the third, and compare the loudness of each to the loudness of your chosen song by ear. At this point, the numbers matter less, and matching the qualitative loudness of the first song is more important. Again, clicking randomly from this part of this song to that part of that song will help you quickly get a feel for how loud the songs sound in relation to each other.</p>
<p> </p>
<p style="text-align: center;"><span style="color:#999999;">Pro Tip: if you don't have a plugin that measures integrated LUFS, upload your song to LoudnessPenalty.com -<br>a Tidal score of -1.0 dB means your song is probably close to -13 LUFS.</span></p>
<p> </p>
<h2>Reference Checks</h2>
<p>You've already been comparing your tracks to commercially engineered tracks in your genre. But now's the time for the other type of <a contents="reference check" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">reference check</a>: listening on other speakers.</p>
<p>Export each of your songs, and start listening to them on good speakers and bad speakers, good headphones and bad headphones. Anything that's not the speakers or headphones you made the music on. Your music won't sound amazing on every playback device, but you want it to sound decently good on all of them. And if you hear issues, you may want to make some tweaks in your mastering session and then check again. Usually, these tweaks come in the form of EQ changes.</p>
<p> </p>
<p style="text-align: center;"><span style="color:#999999;">Pro Tip: you may also want to listen to your reference tracks on the other speakers, to give you a sense<br>for how they sound when playing a great sounding mix.</span></p>
<p> </p>
<h2>The Final Export</h2>
<p>When you've finished all of your changes based on the reference checks, the only thing left to do is export each song. Pick a final sample rate (usually 44.1 kHz for music listeners, 48 kHz for video work), and set your dither options to <a contents="dither the sound" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/understanding-dither" target="_blank">dither the sound</a> down to 16 bits. Then export.</p>
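<p>For the curious, here's what that dither checkbox does under the hood: a sketch of TPDF (triangular) dither applied to a single sample. Your DAW or limiter handles this for you; this is only to demystify it.</p>

```python
import random

def dither_to_16bit(sample, rng=random.random):
    """Quantize one float sample (-1.0..1.0) to a 16-bit integer,
    adding triangular-PDF noise spanning roughly +/-1 LSB before
    rounding. This trades correlated quantization distortion for
    a constant, benign noise floor."""
    lsb = 1.0 / 32768.0
    tpdf_noise = (rng() - rng()) * lsb        # difference of two uniforms = triangular
    value = round((sample + tpdf_noise) * 32767)
    return max(-32768, min(32767, value))     # clamp to the 16-bit range

out = dither_to_16bit(0.5)   # an integer near 16384
```

<p>The key design point: noise is added <em>before</em> the rounding step, which is why dither must always be the very last process in the chain.</p>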
<p> </p>
<p style="text-align: center;"><span style="color:#999999;">Pro Tip: if your DAW doesn't include dither in the export settings, your limiter probably has options for dithering.</span></p>
<p> </p>
<h2>Wrapping Up</h2>
<p>And that's about it! If you followed the steps above, your songs are tonally balanced, have solid micro- and macro-dynamics, and are at a great loudness that allows them to sound good and loud on streaming services without paying for unused loudness with distortion. Your music is mastered and ready for distribution.</p>
<p>Your first master won't sound as good as your twentieth, and it's always better to have an outsider perform the master, listening with fresh ears on fresh speakers, to achieve the perspective you can't as the creator of the song. But if you've made it this far, you know how to get your music sounding pretty close all on your own.</p>
<p>Do you have any favorite plugins for mastering? I'd love to hear about them in the comments below.</p>
<h1>How to Give Your Song the Perfect Loudness - 2018 Update!</h1>
<p><em>Milo Burke, 2018-08-13</em></p>
<p><strong><span class="font_large">Introduction</span></strong></p>
<p>I love listening to dynamic music. There's just no replacement for those clear sounds and clean, punchy drums that make the mix sound powerful. With many of the songs I critique, loudness is one of the biggest issues, and when the producer or engineer just backs off of the limiter by 3-6 dB, everything sounds cleaner and more professional.</p>
<p><a contents="In a previous pos" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">In a previous post</a>, we explored the context for measuring loudness in decibels, how to read LUFS meters, and how to arrive at the perfect loudness for your master.</p>
<p>However, there are a couple of really neat tools available now that didn't exist when I wrote that article, and finding the sweet spot for your music's loudness is easier than ever. Let's take a look at what we need to know:</p>
<p> </p>
<p><span class="font_large"><strong>We're Still Measuring in LUFS</strong></span></p>
<p>Just a refresher, Loudness Units relative to Full Scale (LUFS) is still our preferred scale. We'll be using LUFS to measure average loudness. LUFS-Integrated is a scale that gives one value for the loudness of your entire program, no matter if it's a two minute song or a two hour movie. And if this still seems confusing, be sure to check out the <a contents="more in-depth explanation" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">more in-depth explanation</a> in my previous article on loudness.</p>
<p> </p>
<p><span class="font_large"><strong>We're Still Targeting the Same Loudness</strong></span></p>
<p>Ian Shepherd's loudness observations per streaming platform are still in effect. At the time of this writing (August 2018), this is how each of the major streaming platforms reacts to loudness:</p>
<p>- YouTube's algorithms target approximately -13 LUFS-Integrated as a loudness target, and YouTube does not turn up songs that are quiet.</p>
<p>- Spotify's algorithms target approximately -14 LUFS-Integrated, and Spotify does turn up songs that are too quiet and can use limiting to do so.</p>
<p>- Tidal's algorithms target approximately -14 LUFS-Integrated, and Tidal does not turn up songs that are quiet.</p>
<p>- Pandora's algorithms target approximately -14 LUFS-Integrated, and Pandora does turn up quiet songs, but does not use limiting to do so.</p>
<p>- Apple's iTunes algorithms target approximately -16 LUFS-Integrated, and iTunes does not turn up songs that are quiet.</p>
<p style="text-align: center;"><span style="color:#999999;">All credit for these numbers goes to Ian Shepherd and his research.<br>You can read more of his writing on his website: <a contents="productionadvice.co.uk" data-link-label="" data-link-type="url" href="http://productionadvice.co.uk" target="_blank">productionadvice.co.uk</a></span></p>
<p> </p>
<p><span class="font_large"><strong>An Exciting New Tool: <a contents="DROffline Mk2" data-link-label="" data-link-type="url" href="https://www.maat.digital/dro2/" target="_blank">DROffline MkII</a></strong></span></p>
<p><a contents="MAAT Digital" data-link-label="" data-link-type="url" href="https://www.maat.digital" target="_blank">MAAT Digital</a> may be a young company, but it's already making a big splash in the mastering community. And one of the neatest tools it makes is called DROffline MkII, short for the second version of Dynamic Range Offline (abbreviated as DRO2 for the rest of the article).</p>
<p>In short, it's a stand-alone program that crunches the numbers on a WAV file to tell you exactly how loud it is according to a number of different metrics. I find this approach really useful because it generally takes less time to export a song and scan it with DRO2 than to play the song through in my DAW and read its integrated LUFS from a metering plugin. DRO2 is also a more precise measurement tool, and it shows other relevant metrics at the same time. I prefer to use it with the Modern Mastering preset, which cuts out a lot of data that's relevant for film and broadcast work, but not for music guys like me.</p>
<p><strong>How I use it: </strong></p>
<p>When I'm finishing up an audio project, I drop the exported wav file into DRO2 and let it quickly scan the file. DRO2 acts as a neat double-checker as it displays the sample rate of the file, the bit depth, and also the bits used. But more importantly, it tells me everything I need to know about my loudness: primarily, how loud my loudest inter-sample peak is, how loud the entire program is in LUFS-integrated, and as a bonus, how much dynamic range my audio has.</p>
<p>I always aim to have my inter-sample peaks hit -1.0 dBFS, no higher. If this reading shows anything else, I adjust the ceiling in my limiter to fix it. Streaming services don't play nicely with songs that peak over -1 dBFS.</p>
<p>I aim to have my LUFS-integrated read -14.0, since I prefer dynamic music. If this reading shows anything else, I adjust the threshold in my limiter.</p>
<p>And the dynamic range figure just gives me a fuzzy feeling when I look at it, knowing that I'm not crushing my music with excessive limiting or compression. It's one of the small pleasures in life.</p>
<p><strong>Why I recommend DRO2: </strong></p>
<p>I don't know of a quicker, more accurate way to get to the heart of my music's loudness, which allows me to quickly make specific changes in my limiter to set things right. And at $49, it's not only convenient, but affordable too.</p>
<p> </p>
<p><span class="font_large"><strong>Another Exciting New Tool: <a contents="LoudnessPenalty.com" data-link-label="" data-link-type="url" href="http://www.loudnesspenalty.com" target="_blank">LoudnessPenalty.com</a></strong></span></p>
<p>LoudnessPenalty.com is Ian Shepherd's latest project, and it works really simply. Drop your mp3 or wav into the website, and your browser crunches the numbers for how loud your music will seem to all of the major streaming services: YouTube, Spotify, Tidal, Pandora, and iTunes. DRO2 tells you the truth of your loudness, whereas LoudnessPenalty tells you the reality of what will happen to your volume. And, best of all, this website is free to use.</p>
<p>We already covered above how each streaming service targets loudness. But what we don't have a handle on is how the math behind each service reacts under different circumstances. My hunch is that Tidal and YouTube have the most accurate measurements in response to true volume. The trouble is that both services can turn down loud songs, but they don't turn up quiet songs: so you may not know from reading those numbers if your volume is perfect or way too quiet. Spotify and Pandora, on the other hand, seem to respond a bit more harshly to dynamic music than the others, but both are capable of turning up quiet songs, meaning both are capable of giving you some perspective on how quiet a quiet song really is. Lastly, iTunes seems a bit sluggish. It will be the first service to turn down your volume since it is geared the most conservatively, but how much it turns the volume down doesn't seem to track consistently with a song's loudness.</p>
<p><strong>How I use it: </strong></p>
<p>I pop my song into the website, wait for it to analyze the loudness, and look for problems. Although if I'm using DRO2 properly, there won't be any.</p>
<p>For me, problem-free means that YouTube and Tidal aren't touching my volume, Spotify and Pandora are turning me down just a decibel or two, and the iTunes number seems reasonable (anything between -4 and 0). As you can see, LoudnessPenalty is easier to use, but harder to interpret.</p>
<p><strong>Why I recommend LoudnessPenalty.com:</strong></p>
<p>It's great to get a picture of how each streaming service will react to the loudness of your song, it's visually easy to read, and it's free!</p>
<p> </p>
<p><span class="font_large"><strong>Choosing Your Target Loudness</strong></span></p>
<p>So you now have tools you can use. But what loudness should you actually aim for? This may seem like a cop-out, but it's up to you.</p>
<p>I like dynamic music. With this in mind, I aim to have my integrated LUFS measurement in DRO2 read as close to -14.00 as it can. If it's not reading -14, I tweak the limiter's threshold, export, and measure again.</p>
<p>For a slightly more competitive master, I'd aim for DRO2 to read as -13 LUFS-Integrated. Look for YouTube to leave the volume as it is, and for Tidal to turn the music down by 1 dB.</p>
<p>If you want a more competitive loudness still, aim for DRO2 to read as -12 LUFS-Integrated. Look for YouTube to turn the volume down by 1 dB, and for Tidal to turn it down by 2 dB.</p>
<p>I wouldn't push your loudness any more than this. You can do it, but it really won't sound pretty. I've heard arguments for making EDM painfully, punishingly loud in order to be competitive, and that the loss of quality isn't such an issue. Maybe it's even expected as part of the sound. I personally disagree, though I expect most EDM masters to push a lot louder than this. But for everyone else? Save your music!</p>
<p><strong>The rule of thumb:</strong></p>
<p>Consider -12 LUFS-Integrated as "Good", -13 as "Better", and -14 as "Best". And let's throw in a -16 as "Ultimate" if you're targeting an audiophile audience. Just for fun.</p>
<p> </p>
<p><span class="font_large"><strong>Loudness Consistency Across Multiple Tracks </strong></span></p>
<p>So we found the perfect loudness, measured in LUFS-Integrated, according to your preference. But if we apply that same number to every song on your album, the perceived loudness of each song will still vary. Unfortunately, no tool measures loudness as accurately as the ear does.</p>
<p>The best approach is to import all unmastered songs as 2-track wav files into your DAW on separate tracks. Pick just one track to measure. It should be loud and busy and sound typical of the whole album. (In other words, don't pick the slowest, quietest song.) Use the process above to bring that track to your target loudness, be it -14 LUFS-integrated, -13, or -12. You now have one song at the appropriate loudness for your entire album. For all of the other tracks, one by one, match the volume of each by ear to the first song. Remember: set your limiter's ceiling to -1.0 dBFS, and adjust your limiter's threshold to bring the volume up or down to match the loudness of the first song.</p>
<p> </p>
<p><span class="font_large"><strong>TL;DR - The Quick Guide to Achieving Perfect Loudness</strong></span></p>
<p>To find the perfect loudness for each song on your album:</p>
<p>- Start with a loud, full song that you feel represents the peak of your album. Use this song for every step below.</p>
<p>- Decide if your loudness target is ultra-conservative (-16 LUFS), conservative (-14 LUFS), moderate (-13 LUFS), or aggressive (-12 LUFS). I personally target -14.</p>
<p>- Use a quality limiter, like Ozone Maximizer, Pro-L2, Invisible Limiter, etc. Set your limiter's ceiling to -1.0 dBFS, and remember to set your limiter to True Peak mode, for it to detect inter-sample peaks. Adjust your limiter's threshold until it is just responding a little to the music, but not too much.</p>
<p>- Use DRO2 or a similar tool to discover the integrated LUFS value for your song. If your loudness isn't at your target, adjust the threshold of your limiter and measure again. Repeat this process until you nail your target loudness for this one song.</p>
<p>- Double check that things are looking right with LoudnessPenalty.com, according to your loudness goals.</p>
<p>- Use your ear to match the loudness of all the other songs to the loudness of your first song. To make another song louder or quieter, adjust the threshold of the limiter on that song's track.</p>
<p>And you're done!</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>Once again, I feel like I used a million words to describe a somewhat simple process. It's just that this process seems to be incredibly murky even after lots of research, and it took me a long time to get my head around when I was learning it. <a contents="Yet it's so important!" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">Yet it's so important!</a></p>
<p>I hope this article is useful to you, and will help you target your preferred loudness with confidence and courage.</p>
<p>If you have any questions or additions, please mention them in the comments below. I love hearing from you guys.</p>
<h1>The Trick to a Perfectly Balanced Mix</h1>
<p><em>Milo Burke, 2018-07-30</em></p>
<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>You're wrapping up production on a song and getting into the mix. Things are sounding pretty good, but when you demo your song on another set of speakers, the volumes all seem wrong. The vocals are too quiet, the snare is too loud, and you didn't realize how much a backing instrument was popping out of the mix. It doesn't just make your mix sound a little off. It makes your mix sound weak.</p>
<p>All of us want our mixes to translate. We want our <a contents="mixes to be solid" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">mixes to be solid</a>, and well-represented on every playback system they'll ever be played on. But that just doesn't seem to happen on its own. Why is that?</p>
<p> </p>
<p><span class="font_large"><strong>Understanding the Problem</strong></span></p>
<p>The fundamental responsibility you have as a mixing engineer is balancing the levels of every track. It doesn't matter how cleanly the instruments may be EQed or how creative your effects are: if your mix doesn't have good volume levels for each instrument, it's not a good mix. And your ears will tell you this. You don't need me to.</p>
<p>The trouble is that we naturally set our monitoring volume where it's comfortable to work: loud enough to hear detail, but not so loud that it's painful. That's a great place to work from during the production stage. But when mixing, we become numb to how things sound at that volume. And when we always listen on the same speakers, we forget that the frequency response and <a contents="room positioning of your speakers" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">room positioning of your speakers</a> affect not only how we EQ things, but how loud one instrument sounds against another. And further: the human ear just isn't good at discerning volume differences between elements when the volume is loud. Sure, we can hear it, but we can't nail it. And nailing it perfectly is what we're after when making a great mix.</p>
<p> </p>
<p><span class="font_large"><strong>Sidestepping the Problem</strong></span></p>
<p>Wouldn't it be nice if there were some easy technique that could eliminate this effect? Some quick workaround that doesn't boil down to 10 years of industry experience?</p>
<p>Lucky you, because there is.</p>
<p>This is going to sound so simple, and that's because it is simple. Your problem is that you're having a hard time discerning the balance of loudness across tracks when mixing at one loud monitoring volume. The trick is to stop listening loud, and to start using a number of volumes instead of just one.</p>
<p> </p>
<p><span class="font_large"><strong>Finding That Magic</strong></span></p>
<p>Just turn your volume down. Turn it down so low that you're having trouble hearing all the instruments in the mix. Because when the complexities of the mix start fighting for your attention at the edge of your hearing, suddenly the differences in volume become easier to discern.</p>
<p>Now that your volume is low, very low, start adjusting levels. You may find that your snare is getting lost and your hi-hat seems too loud. You're hearing the truth, so with the volume still really low, start making changes. Turn that hi-hat down, that snare up, and fix any other dominant elements that need adjusting.</p>
<p>Now it's time to pay attention to the fringes of the mix. Maybe that piano or synth sound isn't foundational to the track, but is just there to fill out the sound. Maybe the perfect volume for it is barely audible. Or maybe it deserves to be so quiet that you can't actually make it out as its own instrument. Sometimes this is what's best. If this is the case, you know you've found the right volume when you don't notice the instrument unmuted, but it feels like something is missing when it's muted. But if you do want to hear the instrument as its own unique sound, find the volume for it that allows it to be heard, but just barely. Let it support without dominating.</p>
<p>If your genre is modern and you utilize transition effects, like risers, reverse reverb swells, samples of drum fills, etc., this may be the moment you realize how far off their volumes are. More than likely, your transition effects are punishingly loud, and you had no idea when you were listening with your volume turned up. But now that the volume is low, you can hear how dramatically the transition effects are burying the rest of your mix. It's time to fix that.</p>
<p> </p>
<p><span class="font_large"><strong>Is This a Mistake?</strong></span></p>
<p>Now that you're mixing for low volume, are things going to sound good when you turn the volume back up?</p>
<p>In a word, yes. If the levels of your mix sound good with the volume low, they'll probably sound great with the volume high. It's just how things work out.</p>
<p> </p>
<p><span class="font_large"><strong>A Word of Warning</strong></span></p>
<p>While you're making level changes at low volume, you may notice that some things could use tweaking with EQ. Maybe your vocal sounds a little thinner or thicker than you anticipated, and very likely your kick drum is lacking that low-end power you want it to have. You're tempted to start EQing things to fix the problems you're hearing. Don't. When you're mixing the levels of your tracks, your accuracy goes up when the volume gets turned down. But the opposite happens when you're EQing tracks: what you hear gets more skewed the lower you turn your monitoring volume. If you want to learn why this happens, do a little reading on the <a contents="Fletcher-Munson Curves" data-link-label="" data-link-type="url" href="https://en.wikipedia.org/wiki/Fletcher%E2%80%93Munson_curves" target="_blank">Fletcher-Munson Curves</a>.</p>
<p>The rule of thumb? Make level changes with the volume low, and make EQ changes with the volume high.</p>
<p> </p>
<p><span class="font_large"><strong>Double-Checking and Triple-Checking</strong></span></p>
<p>At this point, your mix is sounding good at low volume, and it probably sounds good at high volume too. But you're not quite there yet. You probably chose your low volume point arbitrarily, and it may not be telling the whole story. Low volume is a far more accurate lens for inspecting your levels than high volume, but it's still not perfect.</p>
<p>Now it's time to listen for a minute with your mix medium-loud. Now ultra-silent. Play with the volume in a few different places. A solid mix sounds good when it's the anthem of a deafening movie trailer in a theater, when it's at moderate volume playing in your car over the road noise and traffic, when it's getting buried by conversation while being played over a restaurant's speaker system, and even when it's just a tinny whisper in a grocery store. You want your mix to pass all these tests too.</p>
<p>So play around with the volume. Make sure the balance of lead vocal to percussion to primary instruments to backing instruments is maintained no matter where you turn the volume knob. You'll probably have to make a few changes beyond your first low-volume tweaks. And that's okay.</p>
<p> </p>
<p><span class="font_large"><strong>What Is My Process Like?</strong></span></p>
<p>I normally produce fairly loud. Not deafening, but loud enough to hear everything going on with strength and clarity. When I'm about to export my mix to share with my mentoring group, I'll spend a minute or two listening at low volume and making changes to the mix. And when my mix is feeling close and I'm ready to do some <a contents="reference checks" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">reference checks</a>, I'll spend at least five minutes fine-tuning level changes at one low-volume spot, then another five minutes or more listening at other low-volume spots, making sure the balance of the mix stays pretty consistent. After making changes, I always check that things still sound good at high volume. They usually do, but sometimes a compromise needs to be struck. And usually, testing at all those other volume levels helps me dial in exactly where that compromise needs to be.</p>
<p>The result? A strong, solid mix that sounds professional and exciting when played loud, but still keeps its foundation no matter how quiet it's played, over any amount of background noise. And that's the goal.</p>Milo Burketag:miloburke.com,2005:Post/53666572018-07-29T15:50:20-06:002018-07-29T15:50:20-06:00Blog and Podcast Update<p>My production blog took a little hiatus as I sorted out and launched my podcast and <a contents="YouTube training series" data-link-label="" data-link-type="url" href="https://www.youtube.com/channel/UCpnXBKGbA91WLfBdReHSqhA" target="_blank">YouTube training series</a>. Both are live now, and you can listen to the Golden Wok Studios Podcast on <a contents="SoundCloud" data-link-label="" data-link-type="url" href="https://soundcloud.com/goldenwokstudiospodcast" target="_blank">SoundCloud</a> or your podcast app of choice.</p>
<p>Blog posts will resume tomorrow and will appear every other Monday. I've worked out the podcast's schedule as well: a new episode every Monday.</p>
<p>So: check for new content at the beginning of every week. =]</p>
<p> </p>
<p>Thanks to each and every one of my readers. You make this content possible, and I'm so grateful to have you as readers, and now as listeners too. =]</p>
<p>Best wishes,</p>
<p>Milo</p>Milo Burketag:miloburke.com,2005:Post/53118582018-06-22T15:12:17-06:002018-06-22T15:13:39-06:00First Podcast on YouTube and SoundCloud<p>Thanks for hanging in there during this break of content.</p>
<p>My first podcast is now live on <a contents="YouTube" data-link-label="" data-link-type="url" href="https://www.youtube.com/watch?v=Kk3YHRVvXGg" target="_blank">YouTube</a> and <a contents="SoundCloud" data-link-label="" data-link-type="url" href="https://soundcloud.com/goldenwokstudiospodcast/ep-001" target="_blank">SoundCloud</a>! With many more to follow, thanks to you, my followers.<br><br>Unfortunately, it seems that it can take up to a week for podcasts to propagate out to the various service providers. But you'll be able to find me in your favorite podcast app soon enough.</p>
<p>My YouTube teaching channel will let me demonstrate how I use plugins, how I go about composing, what effects helped me achieve this or that sound, and much more.</p>
<p>I'll catch you guys soon.</p>Milo Burketag:miloburke.com,2005:Post/52932892018-06-13T11:22:53-06:002018-06-13T11:22:53-06:00Production Podcast and YouTube Series Coming Soon!<p>I'm putting a temporary halt on my blog as I'm prepping for my upcoming podcast and YouTube training series. I can't wait to share this content with you.</p>
<p>Once the audio and video are flowing, I'll return to writing as I have been.</p>
<p>Thanks to all of my readers!</p>
<p>P.S. My first release as a solo artist is coming out on 6/15.</p>Milo Burketag:miloburke.com,2005:Post/52279242018-05-28T09:00:00-06:002018-05-28T09:00:49-06:00Understanding Dither<p><span class="font_large"><strong>Introduction </strong></span></p>
<p>Maybe your DAW comes with a dithering plugin, or maybe you've seen dither as an option in your limiter, or in the export settings of your DAW. You have the vague idea that it's <a data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/mastering-isn-t-a-process">used for mastering</a>, and you can't remember if you're supposed to always use it or never use it. </p>
<p>What is dither? Why do you want to use it? And when is it appropriate? </p>
<p>For the sake of this post, I'm going to assume you have a rudimentary understanding of sample rate and bit depth. If this isn't the case, we'll cover the basics of those in a future article.</p>
<p>If your eyes glaze over at any point in reading this and you're just thinking, "Milo, I don't want to know how it works!", I don't blame you. This is mathy and dense, and I'm not sure anyone gets particularly excited about dither. The body of the article is there for the curious. But if you want the short and sweet version, skip down to the section titled "Keeping It Simple".</p>
<p> </p>
<p><span class="font_large"><strong>Bit Depth in Your DAW</strong></span></p>
<p>So you know that CDs only allow for 16-bit audio at a 44.1 kHz sample rate, and mp3 files are typically encoded from 16-bit sources. If that's all the resolution your final song will have, why should we work with higher bit depths?</p>
<p>For every sample of audio recorded, the bit depth determines how many values are available to describe the amplitude of that sample, giving the waveform its shape. But any time you make the slightest change to the audio, you multiply or divide those stored numbers by other numbers. This isn't just for major changes, like adding a virtual guitar amp to an instrument. Even adjusting the volume by a single dB requires recalculating the value of every sample.</p>
<p>The problem we run into is rounding errors. 16-bit audio only allows for 65,536 unique values. And if you use a virtual instrument with a plugin chain containing EQ, compression, delay, saturation, and stereo-widening, your DAW is crunching many, many numbers, and all of those calculations will produce rounding errors.</p>
<p>Unfortunately, those errors add up. If you sell a car, it might be close enough to round to the nearest thousand dollars when telling someone how much you sold it for. But if a dealership sells 80 vehicles a week, rounding to the nearest thousand dollars 80 times isn't nearly specific enough for the dealership's manager to know how much revenue was made, and whether there will be any profit after paying the salespeople and covering expenses. It's the same when calculating audio: with each new rounding error, what you're hearing is less and less true to the original sound. It will probably still sound "fine", since it's not 12-bit or 8-bit, but it could sound better.</p>
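If you'd like to see those compounding errors rather than take my word for it, here's a minimal Python sketch (my own toy model, not anything a DAW actually runs) that treats bit depth as a fixed integer grid and re-rounds after each of 100 small gain changes:

```python
import random

def run_gain_chain(sample, gains, bit_depth=None):
    """Apply a chain of small volume changes. If bit_depth is given,
    snap the result onto that integer grid after every step -- one
    fresh rounding error per operation, the way a low-resolution
    fixed-point engine would behave."""
    levels = 2 ** (bit_depth - 1) if bit_depth else None
    x = sample
    for g in gains:
        x *= g
        if levels:
            x = round(x * levels) / levels
    return x

rng = random.Random(7)
tweaks = [rng.uniform(0.95, 1.05) for _ in range(100)]  # 100 tiny level rides

exact = run_gain_chain(0.5, tweaks)                     # full float precision
as_16bit = run_gain_chain(0.5, tweaks, bit_depth=16)
as_32bit = run_gain_chain(0.5, tweaks, bit_depth=32)

print(abs(as_16bit - exact))  # the 16-bit errors pile up
print(abs(as_32bit - exact))  # the 32-bit errors stay vanishingly small
```

The exact figures depend on the random gains, but the 16-bit error comes out several orders of magnitude larger than the 32-bit error, which is the whole point of working at higher bit depths.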
<p>When we step up in bit depth, we can calculate each change to the audio with greater precision. 24-bit allows for 16,777,216 unique values, and 32-bit allows for 4,294,967,296. Your DAW may support 64-bit internal processing, which carries out each calculation to a possible result of 18 quintillion unique values. By comparison, you can see how the paltry 65,000 values offered by 16-bit isn't quite enough.</p>
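Those counts are simply powers of two, one doubling per extra bit; a quick Python check confirms the arithmetic:

```python
# Each additional bit doubles the number of amplitude values available.
for bits in (16, 24, 32, 64):
    print(f"{bits}-bit: {2 ** bits:,} unique values")
# 16-bit: 65,536 ... 64-bit: 18,446,744,073,709,551,616 (about 18 quintillion)
```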
<p>When you process at a higher bit depth, the rounding errors often still exist, but they are pushed further down the string of digits, further from the core of the sound, down into the inaudible range. Perfect.</p>
<p>So even though 16-bit may be sufficient for recording decent quality audio, you can see why you should use a higher bit depth within your DAW: you're not leaving that audio alone. You're calculating numbers against it again and again. That's why you need the precision.</p>
<p> </p>
<p><span class="font_large"><strong>Exporting Your Music</strong></span></p>
<p>To avoid rounding errors, you've decided to make all future songs 32-bit floating point. Great!</p>
<p>But what happens when you finish the song and you're looking at your export settings? You remember that whether you're aiming for CD or mp3, you're limited to 16-bit audio. If you export your 32-bit song to a 16-bit file, your DAW will just chop off all those extra numbers, assuming they're not needed. It's better than having worked at 16-bit from the beginning, but it still throws detail away.</p>
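To make "chopping off" concrete, here's a tiny sketch (the values and names are mine, purely illustrative) of truncating sub-step detail onto a 16-bit grid. Notice that every sample loses in the same direction, the one-sided bias that dither exists to break up:

```python
LEVELS = 2 ** 15  # positive steps available in a 16-bit grid

def truncate_to_16bit(x):
    """Discard the extra precision outright: int() always rounds toward zero."""
    return int(x * LEVELS) / LEVELS

quiet_details = [i * 0.000013 for i in range(1, 6)]  # amplitudes between grid steps
for v in quiet_details:
    print(v, "->", truncate_to_16bit(v))  # every output is pulled downward
```

The quietest values don't just lose precision; they vanish to zero entirely, because there is no 16-bit step small enough to hold them.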
<p>If you have all this extra information in every single sample of your music, and the final container can't hold all of that extra information, what can you do to save as much of that detail as you can? What happens when you need to express a value between the numbers available in a 16-bit file?</p>
<p> </p>
<p><span class="font_large"><strong>A Tangible Example</strong></span></p>
<p>Suppose you work for McDonald's, and you're in charge of purchasing paper cups for every restaurant in your state. If ten million people live in your state, that's a lot of cups!</p>
<p>You want to place an order for 150 million paper cups to last the entire month. You know your supplier's cups are worth 2.2 cents each, and that's the price you want to buy them at. However, your currency doesn't allow for transactions of less than a cent. The supplier isn't going to sell the cups for 2 cents each, or she'll be forfeiting $300,000 on a purchase this big. And you're certainly not going to pay 3 cents per cup, costing you an extra $1,200,000. What can you do?</p>
<p>If each cup has to be priced individually using the decimals your currency allows, the solution is to buy one cup for 3 cents, and then buy four cups at 2 cents each. We're rounding the numbers, but we're not rounding in the same direction each time. You're paying a different amount per cup, but you're paying an average of 2.2 cents per cup just like you wanted. Repeat this process for every five cups and you've found a way to fill your entire order at a sale price per cup that's not represented by your currency. Once in a while, math can be pretty cool.</p>
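That cup arithmetic is the whole trick in miniature, and it's easy to verify (prices in cents; the variable names are mine):

```python
# One cup at 3 cents plus four cups at 2 cents averages 2.2 cents per cup.
five_cup_prices = [3, 2, 2, 2, 2]
print(sum(five_cup_prices))                         # 11 cents for five cups
print(sum(five_cup_prices) / len(five_cup_prices))  # 2.2 cents per cup

# Repeat the pattern across the whole 150-million-cup order:
cups = 150_000_000
order_total_cents = (cups // 5) * sum(five_cup_prices)
print(order_total_cents / 100)  # $3,300,000.00 -- exactly 2.2 cents per cup
```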
<p> </p>
<p><span class="font_large"><strong>Dither to the Rescue</strong></span></p>
<p>Dither does the same thing to audio, and it does it by introducing random values that sound like noise. If your 32-bit song wants to express the amplitude of a sample with a very high degree of accuracy, but the containing file just can't hold numbers that precise, then dither rounds up on some samples and down on others, choosing the appropriate direction for every single sample of audio, just like we did buying paper cups for McDonald's. And when you average them together, the targeted precision from your 32-bit source audio is represented in the tiny 16-bit file. Neat!</p>
<p>If you were reading carefully, you noticed I mentioned that dither adds noise. Noise is bad. But the noise added is so low level that it's extremely difficult to hear. And smart dither algorithms can shift the added noise out of the frequency range our ears are most sensitive to, around 3.5 kHz, up to a range much more difficult to hear, perhaps 18 kHz and above. The result is that you're storing more information in a container than you should be allowed to store, and you do it by adding an inaudible amount of noise.</p>
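Here's what that looks like in practice: a small Python sketch using plain rectangular dither for simplicity (real mastering dither is usually TPDF and often noise-shaped, and none of this is any particular plugin's algorithm). We quantize a constant amplitude that falls between two 16-bit steps, with and without dither:

```python
import random

LEVELS = 2 ** 15   # positive steps in a 16-bit grid
LSB = 1 / LEVELS   # the size of one step

def quantize(x):
    """Round to the nearest 16-bit step, no dither."""
    return round(x * LEVELS) / LEVELS

def quantize_dithered(x, rng):
    """Add up to half a step of random noise before rounding, so the
    direction of each rounding becomes a weighted coin flip."""
    return round(x * LEVELS + rng.uniform(-0.5, 0.5)) / LEVELS

rng = random.Random(42)
target = 10.4 * LSB  # an amplitude a 16-bit file cannot represent exactly

plain = [quantize(target) for _ in range(10_000)]
dithered = [quantize_dithered(target, rng) for _ in range(10_000)]

print(sum(plain) / len(plain) / LSB)        # stuck at 10.0: same rounding every time
print(sum(dithered) / len(dithered) / LSB)  # hovers near 10.4 on average
```

Undithered, every sample rounds the same way and the in-between value is simply gone. Dithered, individual samples land on step 10 or step 11, but their average recovers the 10.4 the file "can't" hold, exactly like the cups.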
<p> </p>
<p><span class="font_large"><strong>The Time for Dither</strong></span></p>
<p>One application of dither is not likely to add a perceptible amount of noise, considering that every piece of equipment used in recording and playback also adds a tiny amount of noise. But if you stack that noise again and again, it can become audible and irritating. So you shouldn't add dither any more times than necessary.</p>
<p>The only time you need to add dither is when you're reducing bit depth. Because it's only when extra bits are being chopped off that you need to find a way to cram all that extra information into fewer bits.</p>
<p><strong>Examples:</strong></p>
<p>A recording engineer concerned about fidelity records music in a 32-bit session. He needs to send the multi-track files to the mixing engineer, who is not particularly interested in fidelity, insisting that he can only accept 16-bit or 24-bit files. In this scenario, the recording engineer should dither each track of the multi-track session down to 24-bit audio, in order to keep the quality as high as possible and pack in as much extra information as possible.</p>
<p>If that mixing engineer also masters the song, then he needs to fit the song into a 16-bit file. In this case, he should also add a layer of dither once he's finished mastering, this time to the CD's target of 16-bits. Because once again, higher resolution audio needs to fit into a lower resolution container, and we don't want to toss away any more of that precious detail than we have to.</p>
<p>But if the mixing engineer can accept 32-bit files and mix the song in a 32-bit session, that's preferred, and the recording engineer doesn't need to dither the files down to 24-bit.</p>
<p>If you're recording or producing music at home, I recommend starting each song at 32-bit floating point and keeping it there throughout the recording, production, mixing, and mastering (if you're doing your own mastering). If your DAW has the option to turn on 64-bit internal processing, it will sound better still, though we're entering diminishing-returns territory. Only at the end do you dither. Because you control your bit depth from start to finish, only one application of dither is necessary, right before the final export of the song.</p>
<p>Always dither to the bit-depth you need. Any higher, and you end up chopping off bits. Any lower, and you're throwing away extra resolution.</p>
<p>And if you're making your music available in hi-res format, you probably want to dither an extra copy to 24-bit, not 16-bit, to preserve that extra resolution.</p>
<p> </p>
<p><span class="font_large"><strong>The Place for Dither</strong></span></p>
<p>When we add dither, we're still technically throwing some information away, because the rounding it adds can only be so precise. And we're also adding noise. So we want to be sure we're not dithering any more than necessary.</p>
<p>Unless you're exporting stems to a lower bit depth, it is never appropriate to put dither on an individual track in a session. It can only do harm, not good. Save dither for the master bus.</p>
<p>Working at a higher bit depth allows us to reduce rounding errors, and dither cuts down that bit depth. It would be a shame to add dither before all the calculations are finished, leaving room for new rounding errors and eventually chopping off bits. This is why dither comes last.</p>
<p>The limiter is usually considered the "last" plugin on your master bus. After all, you wouldn't want to limit to <a data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness">the perfect loudness</a> and then add an EQ after the limiter, boosting the song into clipping, would you? Though the limiter is the last plugin that affects the sound, dither still follows, because it needs to do its job after all the other calculations have been made, and because it won't add enough gain to your song to push it into clipping. Dither is the last plugin that affects the data in any way. The only plugins you can use after limiting and dither are metering plugins that don't change the audio in any way.</p>
<p>Your limiter may include the option to turn on dithering. It knows to do this after the limiting takes place, not before. This is just fine. Or, you can use a separate plugin to add dither after your limiter. Or, your DAW may have dither options in the export window.</p>
<p>Note: some DAWs add dither appropriate to your container file automatically on each and every export. I was recently surprised to learn that this is true for Studio One, my preferred DAW. If your DAW can automatically apply dither, you can choose to allow it, or disable it and add your own dither. But it's better not to add dither twice.</p>
<p> </p>
<p><span class="font_large"><strong>Keeping It Simple</strong></span></p>
<ul> <li>
<strong>What does </strong><strong>dither</strong><strong> do?</strong> <em>Preserve extra information from a higher bit depth for when audio is packaged into a lower bit depth file.</em><br> </li> <li>
<strong>When do you need to dither?</strong> <em>Whenever audio of any kind is going from a higher bit depth to a lower one.</em><br> </li> <li>
<strong>Where do you put dither?</strong> <em>After the very last plugin on your master bus.</em><br> </li> <li>
<strong>Do you need to dither when increasing bit depth?</strong> <em>Don't be silly! It's hard to put ten gallons of water in a five-gallon bucket, but it's easy to put five gallons of water in a ten-gallon bucket.</em><br> </li> <li>
<strong>Do you need to dither when changing sample rates?</strong> <em>I know you often see sample rates and bit depths grouped together, but no. Sample rate represents the horizontal axis of a waveform, and bit depth represents the vertical axis. If you change the sample rate, you can still leave the bit depth the same, and there is no reason to add </em><em>dither when the bit depth stays the same.</em>
</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>Even if you've made it this far, you're probably not super excited about dither. I'm not either, but it is important. Still, the theory and analogies above should have given you an understanding of what dither is and what it does. And the quick questions and answers just above tell you when to use it and where.</p>
<p>This information will always be important, and how/when you add dither will never change. Make good dithering practices a habit now and you'll be on dither-autopilot the rest of your life.</p>Milo Burketag:miloburke.com,2005:Post/52032112018-05-21T09:00:00-06:002018-05-21T09:00:51-06:00Finding Musical Collaborators<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>You make music because you're passionate about it. You wouldn't be reading this otherwise. But for many, making music can be a lonely road. And it takes a mountain of passion to carry the heavy load of learning music theory, composition, production, arrangement, lyric writing, instrument performance, mixing, mastering, and the myriad other aspects that need to be handled in order to make great music that finds an audience.</p>
<p>This is where collaborators come in. It may be unrealistic to try to conquer all of those domains on your own. The goal here is to share them with a teammate, so you can offload some domains in order to better focus on others. Or maybe you already do just about everything, but you just need another person's input to carry your ideas forward when you run out of inspiration, and to give you early drafts to improve on so you're not always staring at a blank session in your DAW, wondering where to begin.</p>
<p>Unfortunately, good musical partners can be hard to find. And if you struggle with this, then today's topic is written specifically for you.</p>
<p> </p>
<p><strong><span class="font_large">Traits to Look for in a Partner</span></strong></p>
<ul> <li>If you are going to partner as a duo or start a band with another person, you need to have common musical interests. If you're all about future bass and he only has ears for metal, it's just not going to work out.<br> </li> <li>You need to have complementary skill-sets. If you're great at production but are a poor singer and lyricist, then aim for a partner that's strong as a singer and lyricist. You're more likely to find a great partner if you can admit to yourself and others the areas you lack. Also, you'll work better together if you're not butting heads, each angling for dominance over the same domain.<br> </li> <li>You need to find someone with dreams as big as yours. If you want to be touring in three years and she doesn't think she'll ever want to leave her job, it's better to keep looking.<br> </li> <li>Make sure your partner is someone you actually like. You need to be able to agree while collaborating. But more than that, you really don't want to enter an arrangement like this with someone you could never be friends with. After all, if this goes well, you'll be spending a lot of time together.<br> </li> <li>You want a partner that can compromise. If it's his way or the highway, you'll feel stifled and wish you were doing something else.<br> </li> <li>You also want a partner that challenges you. You need more than a bandmate that tells you everything you do is golden and can't be improved in any way. A good partner can identify the areas you need to improve and can let you know in a way that doesn't feel like a personal attack.<br> </li> <li>Make sure your partner has enough time to work on music with you. It doesn't matter how much she loves the project if she can never make time to work.<br> </li> <li>Unless you're equally technically-minded and both have lots of gear, you probably won't be able to make this work long-distance. And if one of you is driving to the other to make music, it really helps when it's a short drive. 
Look close to home.<br> </li> <li>You may not be able to find someone significantly more talented and experienced than you are that's willing to work with you. But also make sure that you don't partner up with someone that's just not at your level yet. You need to be able to work, not exercise endless patience teaching and coaching.<br> </li> <li>It's always convenient when you and your partner use the same tools. It can be tricky if one of you uses PC and the other Mac. And it can be even harder if you use different DAWs, <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-october-2017">different plugins</a>, and <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/one-of-my-favorite-instruments-part-1-august-2017">different instruments</a>. You and your partner may not perfectly match in tools, but if you can compromise on which tools both of you will use together, working together becomes a lot easier.</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>Places to Find a Partner</strong></span></p>
<ul> <li>Consider meeting up with local groups focused on exactly what you do. If you produce music, MeetUp.com probably has a group or two of music producers in your area. If there aren't any groups near you, consider creating one. Every group needs someone to start it.<br> </li> <li>There may be other groups in your area. Maybe your city has a group for giving songwriting critiques to each other, or a hobbyist club for synthesizer lovers, and these may not be represented on MeetUp. Google is your friend.<br> </li> <li>If you're taking music or production lessons, you can always ask your teacher if he or she knows anyone. There may be a relationship there waiting to happen.<br> </li> <li>Don't be afraid to ask your friends, neighbors, and co-workers. They know a lot of people you don't. Some might not know that you're looking. Some might not even know that you're a musician.<br> </li> <li>BandMix.com may have archaic options for genres and instruments, but it's powerful because so many people use it. If you're not familiar with it, it's basically a dating website for musicians: you can find profiles like "guitarist seeking band" and "band seeking vocalist," etc. You have to be a paid subscriber in order to message other people, but some paid members might message you if you have a free account. That said, finding a good musical collaborator is worth paying for.<br> </li> <li>Although BandMix is probably the most widely used in your area, a Google search may reveal other websites designed for matching musicians with each other. Even if a site is sparsely used, it doesn't hurt to make a profile for future people to find. You never know when the perfect partner will start looking.<br> </li> <li>Craigslist has its uses. Be wary of anything that seems fishy, but there may be great opportunities for you there also.<br> </li> <li>There may be a Subreddit for musicians in your area. 
Or you may find people in your area within larger Subreddits for music production, songwriting, or even for your DAW.<br> </li> <li>Consider hanging out at open mics near where you live, even if your music isn't the type that can be performed at an open mic. Every person you see perform is sufficiently skilled and committed to play in front of an audience, but not yet so successful as to be making a career out of music. (If she were, she wouldn't be performing at an open mic.) This can be the perfect way to meet someone working but not yet succeeding, which can be a level playing field. If the culture of one open mic doesn't work for you, try another. OpenMikes.org has info on open mics all across the United States. If you live in the US, it may have information about an open mic near you. Other sites may track this in your area too.<br> </li> <li>If you're still in school, take advantage of your close proximity to so many other people. Find out if there are clubs related to music, or see if you can take music-related classes to meet people. Or just ask the music teachers if they know anyone that wants to be in a band.<br> </li> <li>Consider any other places musicians might congregate. Small music festivals. Open houses at recording studios. Concerts. Battle of the Bands. Even places like Guitar Center. Anywhere you suspect musicians might be found, go there and be found yourself. And don't be shy about meeting people and being the first to make a connection.</li>
</ul>
<p>These options may seem like a lot of work. They are. It takes a great profile on BandMix to attract the kind of attention you're looking for. And you may have to give your phone number to a lot of new musicians you meet before even one contacts you. But don't give up. It's a numbers game, and each rejection just means you're one person closer to finding your ideal collaborator.</p>
<p> </p>
<p><span class="font_large"><strong>Working on Yourself</strong></span></p>
<p>Be willing to be flexible. Maybe the perfect partner requires each of you to bend a little bit on genre. Maybe that mismatch of genres is exactly what's needed to help you make unique music that people want to hear. Don't say no to potential collaborators for reasons that aren't actually a deal-breaker for you.</p>
<p>If you're hoping to find a great partner, you need to become the great partner somebody else needs. It's just like dating in this regard. Of course, you want to smell nice, act nice, show up when you say you will, and not have significant problems with drugs or other dependencies. But more than that, you need to be skilled. Work at your craft, staying sharp and always improving. You need to make yourself talented enough that someone wants to collaborate with you. You need to be good enough that a musician would be excited to find you.</p>
<p>Also, don't wait until after you find someone to start making music. If you're not making music now and don't have anything that shows what you can do, nobody is likely to believe you're as talented or passionate as you say you are. And the vast majority of people saying "I'm going to work really hard and learn really fast!" just aren't going to follow through. You wouldn't wait to exercise and get a haircut until after you're in a dating relationship, would you? Of course not. You probably need to look good to find people interested in dating you in the first place. It's the same with music. Invest in yourself so working with you looks like an attractive proposition to others.</p>
<p>However, when you find one potential partner, you don't need to stop looking, and you don't need to tell other interested musicians, "Sorry, I'm taken." You don't have to be a musical monogamist, especially since you can never know which musicians will choose to show up, much less be the teammate that perfectly gels with your musical tastes, your personality, and your style of collaboration.</p>
<p>And a lot of musicians won't show up, even when they tell you they really want to meet you and really want to work. Something about being creative correlates with being a flake. I wish it weren't so, but it is. Just choose not to get discouraged and not to give up.</p>
<p>Also, it's a great idea to make music now without a partner, because you don't know how long it will be until you find that perfect partner. Maybe she'll show up in a couple of days, or maybe it will be a couple of years. It would be a shame to wait several years while making no progress, especially if your skills decay during that time, or you haven't yet made the music that will interest a potential collaborator. Put yourself out there, and never stop working at meeting someone. But also, continue working as if you don't expect to ever find a collaborator and don't need one. It's this combination of <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better">learned skill</a>, work ethic, and confidence that will help you find someone as soon as you begin to feel like you're not looking.</p>
<p> </p>
<p><span class="font_large"><strong>Do You Really Need a Partner?</strong></span></p>
<p>Consider if you really need someone. Partners can be amazing at filling roles you aren't good at filling, and they can occasionally be a great motivator, pushing you to quickly reach a deadline. But collaboration is hard. It can be slow. It can be frustrating when you don't agree. And it can be uncomfortable when neither of you has ideas. Even the most reliable people won't always be able to show up, and they won't always be at the top of their game when they do.</p>
<p>It's easy to fixate on finding a partner to be your bridge to success. It's easy to decide that this one out-of-reach component is the only thing needed to make your musical dreams come true. But if you look deep and have the courage to admit it, you already know that this isn't true. Nobody is going to bring success to you. You have to make success happen for yourself. A successful partnership is when both of you work to make success happen. It never works as a free ride for either of you. You need to decide to earn success and commit yourself to the process whether or not you find a partner.</p>
<p>A partner can't teach you how to be a better producer if you're the producer, or a better songwriter if you're the songwriter. A partner can't map out your career for you. A partner can't teach you <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/becoming-more-productive">how to be productive</a>, or how to manage your time. A partner can't do the work you need to do to become better at your craft. Those are all up to you, whether or not you have a partner. And if you're handling all those things on your own, do you really need a partner?</p>
<p> </p>
<p><span class="font_large"><strong>My Experience and Perspective</strong></span></p>
<p>I sometimes feel lost in this too. For years, I couldn't find anyone to make music with, and I didn't realize I had to make music on my own in order to find someone. My skills were decaying and I was unhappy.</p>
<p>I reawakened my passion for music and production, and I realized that if I was ever going to find someone, I needed to work hard at getting better first, not only once I'd met him or her. I did this knowing in the back of my mind that I might never find a partner to collaborate with or form a band with. But even if that was true, I decided I loved music enough to go it alone.</p>
<p>Fortunately, I didn't need to. When I got creative with finding people using the above methods, I found loads of people. But many were flaky, or rude, or unmotivated. I found some that were reliable and friendly, with dreams the same size as mine, but just not skilled enough to partner with. I had to politely decline to work with them. That was a new concept for me. And I had to market myself really well to find quality people that were interested in working with me. A lot of promising partnerships didn't work out for reasons I'll never know. That's okay. I keep making music, and I don't let discouragement stop me from working.</p>
<p>As I write this, I'm casually courting four potential duos. I don't know which of them, if any, will produce great music, find an audience, and lead to a sustainable income as an artist. If I'm willing to show up for all of them, and if I have time for all of them, how much success we find will depend on luck and the other person. I can live with that.</p>
<p>And I'm not slowing down on my personal music. In fact, I'm more driven than ever in making music as a solo artist. And maybe it will be my solo music that makes me successful. Or working with a collaborator I haven't even met yet. I can't force the unknown to become known. But I can give all promising opportunities my best effort, continually working and growing as an artist along the way, comfortable that my best project could end up being my solo work.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>And that's exactly what I recommend for you: give all promising opportunities your best effort, and continuously work at growing as an artist. That's the only way you'll have a shot at making it.</p>
<p>And warm up to the idea of working alone. It has advantages that collaborating doesn't. It takes a certain strength, but you can find that strength within you if you look for it. And whether or not you end up working alone, that strength will serve you.</p>
<p>Be willing to look for others. Be willing to make the first move, shake the first hand, write the first email, start the first group. But meanwhile, train and work as if you'll be a solo artist forever, overcoming your weaknesses and creating content despite working alone. If you stay at it long enough and if you work hard enough, you'll find success whether or not you find the perfect collaborator.</p>Milo Burketag:miloburke.com,2005:Post/52028822018-05-14T09:00:00-06:002018-05-14T09:00:49-06:00Becoming More Productive<p><span class="font_large"><strong>The Problem</strong></span></p>
<p>You have huge dreams for your music. You love music. You want it to be your career. And if you're being honest, you're pretty good at it.</p>
<p>But some days, it's just so hard to work! Each day, you start with the ambition of making an amazing song, or some other momentous achievement for your music career. And each day, the time somehow slips away. What happened?</p>
<p>It's okay, you're just human.* You are subject to the productivity traps that threaten all of us. But fortunately, there are things you can do to overcome this.</p>
<p><span class="font_regular"><em>*If you are not human, please let me know in the comments below. It may mean that my blog has much wider readership than I anticipated.</em></span></p>
<p> </p>
<p><span class="font_large"><strong>The Mindset</strong></span></p>
<p>Maybe you're not taking your music seriously.</p>
<p>How many tech start-ups do you think are created accidentally? How many successful artists do you think stumbled their way into their careers? I would wager none. When you want something to be your career, you have to consider it to be work; consider it to be your job. You're not a hobbyist anymore, but an entrepreneur.</p>
<p>At this point, your doubt may have snuck in: you think, "But I'm not good enough to do this professionally yet. I'm still a beginner." That may be true, but the best way to become better is to treat your vision as work and as your career, not as a hobby you dabble in whenever you have the time. At the point you consider yourself an entrepreneur, a true pro, you're finally ready to begin learning and working in earnest.</p>
<p>This doesn't mean that music can't be fun, or that this can't be an enjoyable career. But an entrepreneur doesn't give up when she's tired or skip a day because she doesn't feel like working. She keeps returning to her work with the reliability of an employee to a job, and with the intensity of a founder to her tech startup. And it's that repeated "showing up" that matters in the long run, for your success and for <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better">your growth in talent and skill</a>.</p>
<p>"But Milo, what about that overnight-success that everyone knows about?" I've heard it said that behind every overnight success, there are years of preparation and planning. Maybe someone truly got lucky and stumbled into success with no prep or planning. Can you count on being equally lucky? No? Then begin your preparation and planning now so you can be successful in the future.</p>
<p>Taking on the mindset of a pro is incredibly powerful. Steven Pressfield describes this concept at length in his book <a data-link-label="" data-link-type="url" href="https://www.amazon.com/War-Art-Through-Creative-Battles/dp/1936891026/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1524718830&sr=1-4">The War of Art: Break Through the Blocks and Win Your Inner Creative Battles</a>. This book is a must-read. It completely reframed how I view music as my career.</p>
<p> </p>
<p><span class="font_large"><strong>Work First, Play Later</strong></span></p>
<p>Your mom probably used this strategy when telling you to do your homework. But it is so, so important for two reasons.</p>
<p>First, when you play first, you have no idea how much time playing will take. Maybe you'll lose track of time playing a video game and not realize how much of the day has slipped by. Maybe you'll extend your hike because you're enjoying it and it's good for you. Maybe you'll repeatedly allow the next episode on Netflix to play, feeling guilt and shame each time you start a new episode because you know you're shirking what's most important.</p>
<p>There will be time to play later if you do your work first. I promise. But starting with your work lets the work determine how much time it needs. Maybe you'll finish faster than you expected and take on a second task, accomplishing twice as much as you expected. Or maybe things will go slower than you thought and you have to spend even more time working. At least you still have time to finish your work, prioritizing work over play. Your play-time can be moved a little to make room for what's really important.</p>
<p>Second, your best time really is earlier in the day. You may have heard that you make around 35,000 decisions every day. Some may be big, like "should I apply for this job" or "should I buy this car". Others may be tiny, like "should I put mayonnaise or ranch on this sandwich" or "should I go to the restroom now or can I wait another minute". And every decision you make, big or little, chips away at your energy and capacity to make more decisions for the rest of the day. It's tiring!</p>
<p>And if you fill your day with decisions like "I'm going to play this round as a sniper" or "I'm choosing not to work just yet", you're wearing down your ability to make good decisions during your work, decisions that could have made your work better.</p>
<p>You may feel you're a night-owl who prefers to stay up late. I know I do. And though there can occasionally be a bit of magic in making music while sleep-deprived, there is a power in starting early, whenever early is for you, before your brain is too tired to give its best.</p>
<p> </p>
<p><span class="font_large"><strong>Limit Distractions</strong></span></p>
<p>If you want to craft an amazing song the world will want to listen to but your phone is vibrating with a new text every twenty seconds, you're not going to be able to focus. Same with chat rooms, social media, email accounts, and anything else that robs your focus from your music. You'll make your best decisions when you're in your groove - when you've achieved the state of "flow"; when you've lost sense of everything outside your DAW. And that's a beautiful thing. Everything that pulls you away from your DAW can wait.</p>
<p>My big hangup is research. It's so easy for me to pause production in order to learn if that plugin on sale is better than this one I already have, or what processors are coming out next year, or which approach is the best to quietly cool a powerful computer. Maybe these will each be important eventually, but it would be a shame to let them rob me today of the time I need to work, particularly in those few, precious hours in the day when my mind is at its peak. Understanding the best way to silence computer fans becomes important the next time I build a computer, not months or years before. And keeping up to date on processor news is really just a hobby if I'm not actively shopping with the intention to buy.</p>
<p>Emails can be responded to later. Your friends don't need an immediate reply to their quips or a response to their plans. Social media will be there when your work is done. If you take your music seriously, that means valuing it more than those other things. Would you let your phone distract you when you're on a first date? Definitely not. Give your creative work the importance you'd give a date: your full, undivided attention.</p>
<p> </p>
<p><span class="font_large"><strong>Do What's Important, Not What's Urgent</strong></span></p>
<p>This is a sneaky one that gets me all the time if I let it. I get caught up in doing what needs to be done now instead of what's really important. I've heard this called "the tyranny of the urgent" because an unimportant-but-urgent thing can keep you from doing the work that's really important.</p>
<p>There are so many forms this can take. When I have a software bug that needs troubleshooting, I feel I need to solve it right away when that's just not true. What about an email advertising a huge plugin sale with only a day left? The email will still be there after my work for the day is done. Sometimes it feels urgent to critique a song for a friend before making a song of my own, or to chat with the members of my producers' study group before I've had a chance to make music. All of these can be positive things, but not when they come before what really needs to be done.</p>
<p>Elon Musk doesn't receive any phone calls at all when he's working. Why? A phone call feels urgent in that you have to answer it immediately. But just because somebody calls at a specific moment doesn't mean that the call is more important than the work you are doing. When people need Elon, they email him. And he checks his email at regular intervals throughout the day once meaningful pieces of work have been finished. Likewise, you can control what you work on when, in order to make sure the most important work gets done.</p>
<p> </p>
<p><span class="font_large"><strong>Do the Most Important Thing First</strong></span></p>
<p>You've probably heard of the book <a data-link-label="" data-link-type="url" href="https://www.amazon.com/ONE-Thing-Surprisingly-Extraordinary-Results/dp/1885167776">The ONE Thing: The Surprisingly Simple Truth Behind Extraordinary Results</a>, written by Gary Keller and Jay Papasan. The entire book boils down to one lesson: pick the <strong><em>single</em></strong> most important, most beneficial thing you can do now that will have the biggest positive result on your future. And then do it right now. Don't wait. Don't work from the least important task to the most important task throughout your day. Don't even start with something easy as a warm-up if it's not the most important thing.</p>
<p>When you practice finding which one thing is the most important, you develop your ability to prioritize. This is especially useful for musicians and producers like us, when there isn't a career-path or a to-do list set out in front of us.</p>
<p>And when you practice doing the one thing first, you'll find your productivity soaring, and you'll have the incredible satisfaction of knowing you did what was most important.</p>
<p>What happens once you've finished your one thing? Pick a new one thing, and then work on that until it's finished. The cycle repeats, keeping you productive, focused, and efficient. And it feels pretty refreshing to set aside the daunting nature of "plan my whole career and find a way to make it succeed" and instead focus on "doing this one thing right now".</p>
<p>The authors are on to something. And whether you read the book or not, you can benefit from the concept. Choose to use it, appreciate the simplicity it brings to your work, and enjoy the satisfaction of knowing your working time was spent the best it could have been spent.</p>
<p> </p>
<p><span class="font_large"><strong>Forgive Yourself For Being Human</strong></span></p>
<p>Sometimes you're just going to feel burned out. I often struggle with not being focused and productive enough, but once in a while, I'll find myself in that weird state of being too focused and too productive. What do I do all day? Work. What do I do during my evening's relaxation time? Work some more. If enough days of this go by, I get so mentally fatigued and emotionally discouraged that my productivity is shot.</p>
<p>What do you do when this happens? Take a vacation day. At a bare minimum, take an hour or two off for yourself. Sleeping in late, long lunches, and Netflix marathons aren't part of any recipe for success. But when you're approaching burnout, they can be exactly what you need. Celebrate your progress and productivity by rewarding yourself with one or more of these. The hardest part, once you've learned to consider yourself a pro and your music your career, is to do this guilt-free. But learn to, so the break has a chance to rejuvenate you.</p>
<p>And there will be days when you don't get any work done at all, even when you weren't intentionally taking a vacation day. Maybe you slipped up by doing your least important work first and never ended up having time to do work that's actually meaningful. Or maybe you never made it away from video games or your favorite TV show to even create the appearance of work. That's okay. Forgive yourself for being human.</p>
<p>But know that's when you have to pick yourself up. If you've been dieting to lose weight, then slip up and have a binge-meal, what's done is done. Maybe you even needed a little break from the diet for your sanity. Or maybe not. But one binge meal doesn't undo an entire diet, and it certainly doesn't mean you can't keep dieting and making progress in the future. Likewise, one lost day isn't going to kill your music career as long as you make sure it's only one day. Accept that you were defeated today, and make a plan to win tomorrow.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>Productivity and effective work time are things all of us struggle with. Maybe some of us more than others. I usually feel I must be worse than anyone at this. But it's part of being human, it's a problem that will never completely vanish, and it is something that you <em>can</em> overcome.</p>
<p>Unfortunately, it doesn't get easier. You won't feel more like focusing and being productive next week, next month, or next year. It's just not going to happen. But you do have an advantage once you make a habit out of earning your success: by working first, working smart, and doing what most needs to be done. And learning this lesson now will mean far more for your career <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/school-for-mixing-or-music-production">than any class or music program could</a>.</p>
<p>You have the power within you to make it happen for yourself. Better now than waiting until next week or next year to even begin. Make today count, so tomorrow can be even better.</p>Milo Burketag:miloburke.com,2005:Post/52201892018-05-07T09:00:00-06:002018-05-07T09:00:50-06:00Are Your Speakers Good Enough?<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>This is a tricky one where money comes in. Of course, we all can't afford world-class studio monitors. But we also can't mix everything on laptop speakers and expect our mixes to <a contents="translate" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">translate</a>. The truth is that if your speakers aren't good enough, you won't make good mixes. Why is that? </p>
<p><em>First, speakers can mislead you. </em></p>
<p>If your speakers have a mid-bass hump at 100 Hz, and if you have good taste as a mixing engineer, then your resulting mixes will come out with a dip at 100 Hz. Your speakers need to be accurate to help you mix accurately. </p>
<p><em>Second, speakers can provide you with insufficient information. </em></p>
<p>If your speakers just aren't clear or detailed enough, you can still paint the broad strokes of your mix, like <a contents="balancing the volume" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">balancing the volume</a> of the lead synth or lead guitar with the volume of the lead vocal. But you're going to miss the subtleties, like whether or not your <a contents="reverb sounds realistic" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/smooth-sounding-reverbs" target="_blank">reverb sounds realistic</a>, if your transients are too soft or too aggressive, <a contents="if your vocal needs de-essing" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-favorite-de-essing-trick" target="_blank">if your vocal needs de-essing</a>, if your EQ on the hi-hat is working or not, and if your bus compression is helping or hurting the <a contents="micro-dynamics" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/micro-dynamics-and-macro-dynamics" target="_blank">micro-dynamics</a> of your mix. All the small things add up.</p>
<p> </p>
<p><span class="font_large"><strong>What makes a good speaker?</strong></span></p>
<p>My short list is that a speaker should sound clean, detailed, natural, easy to listen to, deep, and punchy. Technically speaking, you want as flat a frequency response as possible, as extended a frequency range as possible, as little distortion as possible, and as simple a crossover as possible. But since marketing departments fudge the truth, your ears will be the best judge, not speaker specifications.</p>
<p>Many love to say that speakers are defined by one genre: that these monitors are perfect for EDM, and that those speakers only sound good with jazz, not rock. I don't adhere to this at all. In my opinion, a good speaker is a good speaker. And if it doesn't sound good in one genre or another, then there's something wrong with it. Probably a flaw that certain genres may not reveal. You want your speakers to sound good with all genres, particularly if you produce or mix multiple genres.</p>
<p>If your speakers have all the attributes I listed above, and if the mixes you make on them translate well to other playback devices, then you probably don't need new speakers.</p>
<p>But if not, then it's time to upgrade.</p>
<p> </p>
<p><span class="font_large"><strong>But Milo, you don't know what speakers my fans listen to music on</strong></span></p>
<p>That's true, I can't possibly have a comprehensive list. But I bet there's a healthy mix of laptop speakers, Bluetooth speakers, crappy earbuds, quality headphones, car stereos, and the occasional good living room stereo or pair of decent studio monitors. Most of these listening scenarios are not kind to your music.</p>
<p>The most common argument I hear "against hi-fi" is that "because my fans aren't listening on fancy speakers, my music will sound better to them if I don't mix on fancy speakers."</p>
<p>And this just isn't true. Yes, your fans will listen on bad speakers, but speakers can be bad for many different reasons: some might be too bright, and others too dark. You have to mix on speakers that are accurate enough to give yourself a balanced mix. And then, when a fan listens to your balanced mix on his too-bright speakers, it will sound just like all the other balanced mixes he listens to on his too-bright speakers. The belief that mixing on bad speakers helps you make better mixes assumes that all bad speakers are bad in the same way. And obviously, that's false.</p>
<p>So no, your entire musical fanbase won't be listening on high-quality speakers. But yes, you still need quality speakers in order to make smart mixing decisions, and to best prepare your music for the many varieties of bad speakers and headphones your fanbase will listen on. It's not pretty, but it's reality.</p>
<p>Though there can be exceptions to this now and then. For example, I love good, clean, tuneful bass, and I have a pretty weird and complex subwoofer setup. It sounds really good to me, but the bass is so clean that I often don't realize how muddy a mix will sound on other speakers. But that's why I do <a contents="reference checks" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">reference checks</a>.</p>
<p> </p>
<p><span class="font_large"><strong>What if I don't have good speakers?</strong></span></p>
<p>Then it's time to go shopping. The trouble is, the most well-known brands don't always sound the best. And for better or worse,<a contents="&nbsp;the price&nbsp;isn't always that big of a factor in the quality of the sound you get" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/rocky-mountain-audio-fest-2017" target="_blank"> the price isn't always that big of a factor in the quality of the sound you get</a>. This makes it hard in that it's tricky to find good monitors. But easy in that it can be possible to find something great in your price range. </p>
<p>Shop for speakers you can hear in person instead of ordering online based on opinions you've read. I recommend bringing along a handful of tracks that you know well, and that are engineered well. Perhaps something busy and electronic, something clean and acoustic, something with natural-sounding female vocals, and something that exercises the low-end. Each piece of music is a test for each set of speakers: do they sound cloudy in the midst of busy music? Or can you hear new clarity in the many instruments playing? How real do real instruments sound? How natural does a female voice sound? Is the bass content powerful or absent, and is it muffled or precise?</p>
<p><em><strong>Listen skeptically</strong></em></p>
<p>A few years ago when I was shopping, I started reading online about the redesign of a speaker that was well regarded for mixing electronic music. I drooled over every aspect of new tech in the redesign, and I bought the marketing claims hook, line, and sinker. Until I went into a store and actually heard them. The frequency response wasn't very flat, they didn't sound very clear at all, and voices and instruments just didn't sound natural. I'm sure glad I didn't order them online.</p>
<p>Every speaker brand has a marketing department, and the job of every employee in that department is to make customers believe they make the best speakers on the planet. Even when that's not exactly true. And even when that's a bald-faced lie. So don't shop based on marketing material or sales-talk.</p>
<p>This may not be comforting advice, but it's often better to stay away from brands the public considers to be "premium". Some of these brands make fine products, but they are severely over-priced. Some of these brands can make good products, but they choose not to in order to fit in an unusual form factor, or to look beautiful and original, or to increase the company's margins. And other "premium" brands have never made good products, but instead have bought the public's opinion through creative marketing and excessive product placement. That may work on a lot of people, but you don't have to let it fool you.</p>
<p><em><strong>Check your expectations</strong></em></p>
<p>Just this week, a friend brought me along as an extra set of ears as he looked for his next set of speakers. He's working with a good-sized budget, and he did loads of research ahead of time, forming opinions from what he read on which brands sound good and which ones don't, and which brands are good value and which ones aren't. He told me which brand and model he expected to love, and I expected to love it too. But when we heard the speakers at the store, they completely disappointed: the highs were too bright but somehow also cloudy, the bass was super muffled and rumbly, and voices just didn't sound natural through the speakers. So much for brand recognition and reading quality reviews.</p>
<p>He decided not to buy those speakers, which was a smart choice. But the same day, he discovered a couple of new brands he hadn't listened to before and found he really likes them. And I agree: they sound nice for the money. I think he'll have an easy time shopping now that he's listening to his ears instead of listening to the opinions of others.</p>
<p>I wish I could give you a list of brands that never do wrong, or a list of speakers that sound great for the price. Unfortunately, quality brands mess up designs all the time: I've heard fantastic older designs from the brand my friend expected to love, and I respected them as a designer and manufacturer. I even toured their factory once and left impressed. There are just too many brands with too many models that receive too many redesigns for me to keep up, as a producer and not a speaker reviewer.</p>
<p>And also, my tastes may not reflect your tastes. You might like speakers that sound much brighter or darker or have much more bass than I prefer. Make sure you bear in mind your preferences when shopping so you can buy the right speakers for you.</p>
<p>But, if you want a hint, I wrote about one of my favorite brands for quality and value <a contents="in another post" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/rocky-mountain-audio-fest-2017" target="_blank">in another post</a>.</p>
<p> </p>
<p><span class="font_large"><strong>What if I don't have the budget for good speakers? </strong></span></p>
<p>Within reason, this is an area you do have to invest in. But good speakers can be had for cheap. Some budget studio monitors can sound surprisingly real. And you don't need to pass up traditional stereos built for the living room. I found the smaller Pioneer bookshelf speakers designed by Andrew Jones to sound quite nice, and I bought them new on a sale for $50/pair. Some Polk or Infinity speakers in that range can sound good, though I've heard others that don't. Listen before you buy, and trust your ears. </p>
<p>Remember, you can buy used. And you certainly can buy old. Despite what marketing departments would lead you to believe, speaker technology advances very slowly, and some of the best-sounding speakers in the world have paper cones for their woofers, not exotic weaves of recently invented materials. It can be a dynamite move to see which 10-20 year old discontinued hi-fi speakers are for sale right now on Craigslist in your area. And you can pair them with an integrated amplifier or receiver, also from Craigslist. If you're smart about your purchase, you can get both speakers and amp for $100-200. And if you carefully audition before you buy, you can find speakers that sound fantastic in that price range. It just takes creativity and patience.</p>
<p>Or, maybe you already have speakers in the other room that match what I'm describing, but didn't consider them because advertising convinced you that you need modern-looking plastic speakers with glowing power lights in order to make music. You don't. You just need speakers accurate enough to let you hear what's really going on.</p>
<p>Of course, if you're buying used studio monitors or bookshelf speakers or older hi-fi speakers, you still need to listen to music you know on them first to hear how they sound, and if they'll be good for you. There's no rating system or technology or size or brand that can guarantee good sound. So again, trust your ears.</p>
<p>It's also worth considering that good headphones can be had for a lot cheaper than good speakers. You'll have issues <a contents="if you only mix on headphones" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">if you only mix on headphones</a>, but good headphones are better than bad speakers.</p>
<p>And if you don't have any money at all for better speakers at this time, you can double-down on your reference checks. It's not a replacement for great studio monitors, <a contents="but it sure does help close the gap" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">but it sure does help close the gap</a>.</p>
<p>Also, consider <a contents="tweaking your speaker placement" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">tweaking your speaker placement</a>. Honestly, a little research and a little time are as good as an upgrade.</p>
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>Speakers aren't free. Good equipment is going to require some cash. But if you think outside the box and are willing to go used, you have a lot more options than you might expect. And if your current speakers are sub-optimal or you're mixing on headphones, then there's a lot of benefit to be found in upgrading.</p>
<p>I'd love to hear about what speakers you use, and what you like and dislike about them. Also, if you have any speaker buying tips, please share them in the comments below.</p>
<p><em>Milo Burke · April 30, 2018</em></p>
<p><span class="font_large"><strong>Why I Don't Use Subtractive Arranging</strong></span></p>
<p><strong><span class="font_large">Introduction</span></strong></p>
<p>I don't know about you, but I love the raw thrill of starting a brand new production. I love picking a new instrument and finding a way to play it in a new way, fluidly writing chords and melodies in the moment, fueled by being in the creative groove. Sometimes I make something amazing, something I love and can't believe I invented. Other times, I make something terrible that never sees the light of day. That's okay. But I love the initial rush of freely creating a new song.</p>
<p>What I don't love is arranging. It's hard to make a second song section that sounds good while still sounding different from the first. And the third song section is hard too!</p>
<p>We're all looking for ways to <a contents="shortcut this process" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/shortcutting-arrangements-with-midi-skeletons" target="_blank">shortcut this process</a>, and many voices out there recommend "subtractive arranging" to give yourself a boost out of the one-song-section trap.</p>
<p> </p>
<p><span class="font_large"><strong>What is Subtractive Arranging?</strong></span></p>
<p>Whether you try to or not, the easiest way to start a new production is to create a single song section. It could be chill like a verse, anthemic like a chorus, or exciting like a drop. No matter where you start, most of us tend to start on one section.</p>
<p>And then you feel stuck. You created this awesome section with great rhythm, the instruments sound full, and the sound design tickles your imagination. This song section is often called a "super loop". But you don't know where to bring it. You don't know how to make a song out of it.</p>
<p>Subtractive arranging is the process of copying and pasting your super loop again and again until it fills about a song's length of time. Then start removing or "subtracting" elements of it at various points in time.</p>
<p>For example, maybe after 16 bars, you take the drums out. And for the first 16 bars, you leave the drums in but take out the lead melody. Keep subtracting components of your super loop to make a song section, cut out different components for another section, and before you know it, you have an arrangement.</p>
<p>Well, maybe.</p>
<p> </p>
<p><span class="font_large"><strong>The Problem with Subtractive Arranging</strong></span></p>
<p>There's one main flaw in what you just did: you made song sections, but they all sound similar to each other because they all share too many common elements. From a compositional perspective, your song lacks "sectional variety."</p>
<p>I hear and critique tracks all the time that are guilty of a lack of sectional variety. And sometimes they sound really great. But if the chorus or drop starts and you've already heard every instrument it has earlier in the song, you've got a problem. And if the chorus or drop starts and you've already heard every chord and every note it has earlier in the song, you have a massive problem.</p>
<p>As each song section progresses, a song lacking sectional variety feels like it's never truly ending one section and beginning another. Instead, it feels like one never-ending song section. It's boring. It puts listeners to sleep.</p>
<p> </p>
<p><span class="font_large"><strong>But My Genre Doesn't Rely on an Obvious Arrangement...</strong></span></p>
<p>It probably does.</p>
<p>It doesn't matter if you make electro-pop, future bass, trap, indie, folk, or even jazz. The most popular songs in your genre contain a number of elements that make them work. One of them is a strong arrangement, and another is the story of energy throughout the song.</p>
<p>A strong arrangement is critical to make the song feel like it's going somewhere, it's critical to keep the listener engaged, and it's critical in creating a journey that the listener feels he or she has just traveled.</p>
<p>In something as diverse as music, there will always be exceptions. Jam-bands like STS9 make songs without defined sections that keep evolving without ever returning to where they started. Some sub-genres of trance feature long songs that evolve almost imperceptibly throughout, setting a mood and a beat to keep club-goers moving but otherwise lacking variety. Likewise, music for meditation or the spa intends not to take the listener through a story, but just to soothe the listener with relaxing sounds.</p>
<p>But for everyone else: arrangement matters.</p>
<p> </p>
<p><span class="font_large"><strong>What Makes A Strong Arrangement</strong></span></p>
<p>It's all about creating and relieving tension.</p>
<p>A verse probably doesn't have a lot of tension, but it shouldn't feel stuck: it should feel like it's going somewhere. If you use a pre-chorus or a build, you're intentionally ramping up the energy, telling the listener that something exciting is coming. And when the peak moment of the song arrives, whether it's in the form of a chorus or a drop, the energy and euphoria should feel like it's been earned. Then, when the song transitions from that peak to the second verse, the reduction in energy should feel like a great relaxation, like you've just returned to somewhere comfortable.</p>
<p>Repetition is also a big part of a strong arrangement: the listener likes to be able to identify which song section he's in, but even more, he wants to feel the safety of returning to a familiar song section that he already knows. Except that for each return, the song section should be evolved somewhat. Some new element should be added to make it sound more exciting or more complete than it was before.</p>
<p>You may think that the <strong><em>Verse > Chorus > Verse > Chorus > Bridge > Chorus</em></strong> format feels old. It's been around since the '50s. But it's stuck around so long because it works. Whether you make vocal music or instrumental electronic music, you should pay attention to this format because listeners love hearing it.</p>
<p>There are variations, of course. An older variation might be <em><strong>Verse > Chorus > Verse > Chorus > Verse > Chorus</strong></em>.</p>
<p>And a more modern variation is <em><strong>Verse > Pre-Chorus</strong> (High-Energy) <strong>> Chorus</strong> (Low-Energy) <strong>> Drop</strong> (High-Energy, Reminiscent of Chorus) <strong>> Repeat</strong></em>.</p>
<p>Those cycles of energy rising and falling are understood by the listener. And the familiarity feels comfortable, which the listener likes.</p>
<p> </p>
<p><span class="font_large"><strong>Then How Do I Build My Song's Arrangement?</strong></span></p>
<p>This is a little tricky, because you'll have to find which option works for you. The subtractive method isn't very effective because it encourages a lack of sectional variety and discourages the forming of a song's story of energy. But I can tell you what I do.</p>
<p>I generally start by making one song section, just like most of you. And sometimes, it immediately calls to me, asking for this other song section to come after it. Sometimes, by making a verse, the chorus begs to come out. But that doesn't always happen. There are so many times I've felt stuck at the end of a verse wondering how to make a chorus, or even more often, stuck at the end of a chorus wondering how to make a bridge.</p>
<p>It can help to loop the previous song section and listen to it on repeat. I try to forget that I'm the composer and imagine myself as the listener. What do I expect to come next after what I'm hearing now? Does it sound high energy or low? Does it have drums? If so, what do they sound like? Does it have bass or chords? If so, what do they sound like? Do any melodies come to mind?</p>
<p>That may be all I need. But if nothing's coming, it may be time to give the song a break. If you don't listen to it for two days, maybe a week, there's a good chance that when you come back to it and listen all the way through, you'll begin to imagine where it will go musically.</p>
<p>Also, it really helps to know that you don't have to create perfection. In fact, you probably can't. If I make a great sounding verse and chorus, I usually feel like the pressure's on; that the bridge has to be amazing otherwise the song is a failure. But applying pressure kills creativity instead of fostering it. Tell yourself it's okay if the next song section you make sucks, or if it doesn't match the previous section at all. You can delete it and start again. In fact, you may have to. When you stop beating yourself up for making something imperfect, you free yourself to try. Sometimes you'll need those extra tries. Other times, just knowing you could have extra tries takes the pressure off so you can make something right the first time.</p>
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>I'm not going to lie: it's possible to make a great arrangement using the subtractive method. But if you use this method, know that if you want to make a compelling song that connects with its listeners, you have to force that sectional variety, and you have to work to build that story of energy that sets the scene, ramps up tension, and ends with a climax and closure, just like a great movie.</p>
<p>But if you choose not to use the subtractive method, as I choose not to, know there are alternative ways to jump in. Ways that don't make building a strong arrangement an uphill battle. You may find your own method that's totally different from mine. If you do, please share it with me in the comments below.</p>
<p>And if you already don't use the subtractive method, I hope this little exploration into arrangements and tension and story will be helpful for you in creating great music.</p>
<p><em>Milo Burke · April 16, 2018</em></p>
<p><span class="font_large"><strong>My Favorite De-Essing Trick</strong></span></p>
<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>If you have worked extensively with unedited vocals, whether recording your own or someone else's, you've probably had at least one track full of uncomfortable sibilance. You know the sound I'm talking about: powerful "S" sounds that make your head hurt. In unfortunate situations, it can happen with other consonants too, including "T" sounds and occasionally even "F" sounds.</p>
<p>Sibilance is often caused by how close the singer is to the microphone. Sometimes backing off just a little is all you need. But some people are just sibilant singers, and this becomes a real problem.</p>
<p> </p>
<p><span class="font_large"><strong>Why I Hate De-Esser Plugins</strong></span></p>
<p>There are plugins specifically designed to tame sibilance in vocal tracks. Almost every plugin developer that offers a broad selection of basic tools includes a de-esser. They all work on the same principle: a narrow-band compressor that you can move up or down to find the right frequency. You can also adjust the threshold to trim sibilance more or less aggressively, and a Q setting to dial in how wide or narrow an area the narrow-band compressor affects.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/7adebf0a24615d64635accd6cbb3fd9483d459cd/original/de-esser-plugin.jpg/!!/b:W1sic2l6ZSIsIm1lZGl1bSJdXQ==.jpg" class="size_m justify_center border_" /></p>
<p style="text-align: center;"><em>Example of a de-esser plugin</em></p>
<p> </p>
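<p>To make that principle concrete, here is a rough Python sketch of the static curve a de-esser applies to its sibilance band. This is purely illustrative, my own toy function rather than any real plugin's code: below the threshold the band passes untouched, and above it, the band is turned down according to the ratio.</p>

```python
def deess_gain(band_level_db, threshold_db, ratio):
    """Static curve of a de-esser: the sibilance band is compressed
    by `ratio` once it crosses `threshold_db`; below that, the band
    passes through untouched (0 dB of gain change)."""
    if band_level_db <= threshold_db:
        return 0.0
    over = band_level_db - threshold_db
    # Keep 1/ratio of the overshoot; the rest becomes gain reduction.
    return -(over - over / ratio)

# An "S" hitting -6 dBFS against a -20 dB threshold at 4:1
# gets the band pulled down by 10.5 dB:
print(deess_gain(-6.0, -20.0, 4.0))  # → -10.5
```

<p>Lowering the threshold or raising the ratio trims more aggressively, which is exactly the trade-off that, pushed too far, produces the lisp I complain about below.</p>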
<p>You can actually accomplish the same thing without a dedicated de-esser plugin using any flexible narrow-band compressor, such as a single band in <a contents="Izotope" data-link-label="" data-link-type="url" href="https://www.izotope.com/en.html" target="_blank">iZotope</a>'s <a contents="Ozone Dynamic EQ" data-link-label="" data-link-type="url" href="https://www.izotope.com/en/products/master-and-deliver/ozone/features-and-comparison/dynamic-eq.html" target="_blank">Ozone Dynamic EQ</a>. It's just less streamlined to use in this scenario.</p>
<p>The trouble is that no matter how fancy the plugin brand behind the de-esser, they're never effective enough until they make the singer sound like he or she has a lisp. The result is pretty gross, and it's usually even more noticeable than the sibilance was to begin with. To my ears, de-essers are ineffective until they're unusable.</p>
<p> </p>
<p><span class="font_large"><strong>De-Essing the Old Fashioned Way</strong></span></p>
<p>Fortunately, there's a solution that is very effective and sounds completely invisible. Unfortunately, it takes manual editing instead of applying a set-and-forget plugin. But follow me on the steps below, and you'll see that it's really not that complicated, and it doesn't even take very long to do.</p>
<p> </p>
<p>Here is a sample vocal phrase from a project of mine with a singer who couldn't rein in her sibilance:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/7732d992821c7549562249fbe7b773cce22fd836/original/vocal-phrase.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p>That may not mean a lot to you visually if you haven't <a contents="studied the shape of waveforms" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/identifying-audio-waveforms" target="_blank">studied the shape of waveforms</a>. So let's break down what's happening here:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/f0dfb4d1b29660295b2b5c073341303d80614545/original/phrase-anatomy.jpg" class="size_orig justify_center border_" /></p>
<ol> <li>The singer takes a breath at the beginning of the phrase</li> <li>You can see a loud vowel here, part of a short word</li> <li>Here's a word with a long vowel that ends in a pretty nasty "S" sound</li> <li>And here's a word that begins with a loud "S" sound and ends in a very soft, restrained "T" sound</li>
</ol>
<p> </p>
<p>Now, let's identify the problem areas.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/cf9805a2e651e6be0dc34a85afd2ff49d1c8814c/original/sibilance-problem-areas.jpg" class="size_orig justify_center border_" /></p>
<ol> <li>There's a "T" sound that's pretty loud and abrasive - we need to fix that</li> <li>This "S" sound is out of control</li> <li>And this "S" sound is worse</li>
</ol>
<p> </p>
<p>In order to fix it, we need to zoom way in to see what we're dealing with. The image below is a closeup of the third instance of sibilance in the image above.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/368c89bb92779fb718d46a5159300b4d9f68a5f3/original/sibilance-waveform.jpg" class="size_orig justify_center border_" /></p>
<p>If you remember from my article on visually identifying waveforms, the first and third shapes in the close-up just above have peaks and troughs that are further apart, which means they are lower frequency. These are the vowels before and after the "S". And we can tell the center waveform is high frequency because it's so dense. This is the "S" sound that we need to fix.</p>
<p> </p>
<p>The first step is to separate that "S" sound into its own clip. In Studio One, I select the area of the "S" with my mouse and use the Split hotkey (Alt+X).</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/9e00dc76c3cafc22f79f129a0afa0c543f8c5360/original/separate-sibilance.jpg" class="size_orig justify_center border_" /></p>
<p>Next, we lower the volume of that selected clip. In Studio One, I find the little square box at the top middle of the clip and drag it downward.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/15051184fa57bf22ec15d0daf90ba529f6fbada2/original/quiet-sibilance.jpg" class="size_orig justify_center border_" /></p>
<p>How much, you ask? Go by ear until it sounds right. But for a rough guideline, any sibilance that is bothersome enough to be fixed probably needs to be attenuated by at least 3 dB. Aggressive sibilance sometimes needs more than 8 dB of attenuation. Use your ears as your guide until the phrase sounds natural to you.</p>
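<p>If you like numbers, those dB figures translate to simple amplitude multipliers. Here's a tiny Python helper (my own, just for illustration) showing how much of the waveform's height each cut leaves behind:</p>

```python
def db_to_gain(db):
    """Convert a dB change into a linear amplitude multiplier."""
    return 10 ** (db / 20)

# A gentle 3 dB cut leaves about 71% of the amplitude;
# an aggressive 8 dB cut leaves only about 40%.
print(round(db_to_gain(-3), 3))  # → 0.708
print(round(db_to_gain(-8), 3))  # → 0.398
```

<p>So dragging that little clip-volume box down by even a few dB is a bigger change than it looks, which is why your ears, not the numbers, get the final say.</p>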
<p>If your DAW doesn't support clip-based volume, you can use standard volume automation on the track instead. However, for best results, you want the volume change to occur before your plugin stack, not after.</p>
<p> </p>
<p>You may or may not need to crossfade the edges of the separated clips with each other. I usually don't unless I hear a problem. I learned a long time ago that editing is only 20% as time-consuming and unbearable when you edit only the problems you can hear, compared with editing every aspect to perfection.</p>
<p>Here's what it looks like when you apply crossfades. To do this in Studio One, select each region you want to fade and use the Create Crossfades hotkey (X).</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/d913ecf3a02bbaf39dc6d32d4235534c14d9fd50/original/sibilance-crossfade.jpg" class="size_orig justify_center border_" /></p>
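<p>If you're curious what a crossfade is actually doing under the hood, it's nothing more than fading one clip out while fading the other in across their overlap. A toy linear version in Python, illustrative only since your DAW handles this for you:</p>

```python
def crossfade(tail, head):
    """Linear crossfade over the overlap of two equal-length
    sample lists: `tail` ramps from full volume down to silence
    while `head` ramps from silence up to full volume."""
    n = len(tail)
    return [tail[i] * (1 - i / (n - 1)) + head[i] * (i / (n - 1))
            for i in range(n)]

# Crossfading a constant 1.0 into silence ramps smoothly down:
print(crossfade([1.0] * 5, [0.0] * 5))  # → [1.0, 0.75, 0.5, 0.25, 0.0]
```

<p>Because the two ramps overlap, there's no click or gap at the clip boundary, only a smooth handoff.</p>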
<p> </p>
<p>And that's all there is to it. You cut the volume of the "S" sound down without altering its brightness or clarity, and you left the adjacent vowel sounds completely unchanged. Better than any de-esser plugin could ever do.</p>
<p>Here is what the entire phrase looks like after I fixed all three sibilance issues:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/f57f14c480cab8785f58cac0e72b001ea2a6579b/original/finished-de-essing.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p>In order to get things sounding right, I had to cut the "T" sound at the beginning of the phrase by 5 dB, the "S" sound in the middle of the phrase by 6.5 dB, and the "S" sound at the end of the phrase by 7.5 dB. But again, use your ears to determine how much to attenuate.</p>
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>This seems like a lot, especially when you realize you have to do this for all sibilant sounds in the entire song. But it goes a lot faster than you'd expect when you learn how to visually identify the sibilance you're after, and you memorize your DAW's hotkeys for splitting clips by selection and for creating crossfades. If a song has only one vocal layer, it doesn't take that long at all to de-ess the entire song by hand.</p>
<p>However, if your very sibilant singer recorded group vocals for each layer of multiple layers of harmony, and each vocal take needs to be de-essed ... you have my sympathy.</p>
<p>If you have a secret de-essing plugin that you feel makes this method obsolete, please share in the comments below. Also, if there are any other aspects of audio editing you'd like me to spread some light on, just let me know.</p>
<p><em>Milo Burke · April 2, 2018</em></p>
<p><span class="font_large"><strong>Minimalism in Mixing</strong></span></p>
<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>When I first started out mixing, I thought sounding pro meant using every semi-relevant plugin at every available opportunity. Compression is a perfect example to cherry-pick: clearly, compression needs to exist on every drum, every vocal, every instrument, and every bus, right? I felt the same way about EQ too: every single channel has to have EQ to improve it, right? So I'd slap EQ on every single channel and move some bands around until I felt maybe things were sounding okay. They weren't, but I didn't know it yet.</p>
<p>I hope you can see the problem: I was using tools because I felt I was supposed to. I didn't yet know how to listen to decide if a certain track needed a tool to make it sound better.</p>
<p>And this brings us to the philosophy I want to share with you today: minimalism in mixing.</p>
<p> </p>
<p><span class="font_large"><strong>The Truth</strong></span></p>
<p>A professional mix doesn't come down to stacking a whole bunch of professional tools on every single channel. Not at all. In fact, the core of mixing is a whole lot simpler than that: it's about getting all the layers of a song to fit together nicely, and then adding a little excitement here and there. If you want to learn more about this, be sure to check out my post on the <a data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/the-core-of-mixing">core of mixing</a>.</p>
<p>What happens when you engineer without knowing this? When you stack layer after layer of this and that effect? More than likely, you're just screwing up your track. If your dynamic range is already in a good place, adding compression is just going to flatten and squash the track, robbing it of power and excitement. And if you start aggressively EQing a track that already sounds pretty good because you feel a pro engineer is supposed to shape every track, you're just going to make a mess.</p>
<p>I don't consider there to be many benefits of <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/plugins-vs-hardware-gear">working with hardware effects</a>. But when your tools are physical, you can't afford to have that many tools, and using a tool takes real time to set up, requiring patching work and bouncing each track in real-time to apply the effect. These limitations encourage the engineer to be more thoughtful about when to use effects and how to use them. In this particular case, the forced minimalism is an advantage in helping you make better mixes. But if you adhere to the philosophy of today's post, you can achieve the same with software effects too.</p>
<p> </p>
<p><span class="font_large"><strong>How to Change Your Thinking</strong></span></p>
<p>The best way to move away from these preconceptions or habits is to stop thinking of effects as prerequisites for a finished song. Instead, think of them as tools that solve problems. In this mindset, why would you use a tool when you don't have a problem? A tool without a problem just makes a problem. Let's get to some examples.</p>
<p> </p>
<p><span class="font_large"><strong>Compression</strong></span></p>
<p>What does compression do? It's not a character plugin; it's not a sound-good-izer that only PhDs understand; it's not fairy dust to sprinkle on tracks to make them sound magical. A compressor simply reduces or "compresses" the <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/micro-dynamics-and-macro-dynamics">dynamic range</a> of sound.</p>
<p>Are you recording a real drummer? There's a good chance some drum hits sound too loud compared to others. The drummer's human imperfections have created a performance with too much dynamic range, and a compressor is the perfect tool to rein in some of those wilder drum hits and bring them into line with the rest of the performance.</p>
<p>What about digital drums? Whether your digital drums are meant to sound real or are clearly digital samples, there's a good chance that the sounds are already compressed and EQed to sound bright and full and even and generally problem-free. If there's no dynamic range issue, adding a compressor to the kick drum is probably a bad move.</p>
<p>Exceptions: maybe you want to compress the attack of a digital snare drum in order to bring out the body of the snare sound, by increasing the volume of the snare's sustain relative to the snare's attack. If you don't have a transient shaper, compression is probably your best tool. Or, maybe you want to compress a copy of your drum mix in parallel to the original drum mix, in order to bring out the room and emphasize the character of the kit. Or, maybe you want to emphasize the attack of the drums while minimizing the sustain: again, assuming you don't have a transient shaper, a compressor can be your best tool. With a slower attack setting, your compressor will be "alerted" to the loud noise of a snare hit right away, but it responds too slowly to actually reduce the volume of the attack. But it does respond fast enough to reduce the volume of the sustain of each snare hit.</p>
<p>All three of those scenarios are fine. But for each, there's a problem that we're trying to solve, and we're using compression as a tool to solve it. When you approach plugins with the perspective that they exist to solve problems, it not only helps you know when to use them, but it directs you on how to use them in each case. That's the goal here.</p>
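<p>The slow-attack trick from the last of those scenarios is easier to see in code. Below is a toy peak compressor in Python, a deliberate simplification of my own; real compressors use calibrated attack and release times, not these bare smoothing coefficients. With a sluggish attack, the envelope hasn't risen past the threshold when a hit begins, so the transient passes at full volume while the sustain that follows gets turned down:</p>

```python
def compress(samples, threshold, ratio, attack, release):
    """Toy peak compressor. `attack` and `release` are one-pole
    smoothing coefficients in [0, 1); values closer to 1 respond
    more slowly. A slow attack lets transients through untouched
    and clamps only the sustain."""
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level  # smoothed envelope
        if env > threshold:
            target = threshold + (env - threshold) / ratio
            gain = target / env  # pull the level back toward target
        else:
            gain = 1.0
        out.append(x * gain)
    return out

# A sustained hit at full scale: the first sample sneaks through
# at 1.0 before the envelope catches up; later samples are reduced.
hit = compress([1.0] * 50, threshold=0.5, ratio=4.0,
               attack=0.9, release=0.99)
print(hit[0], round(hit[-1], 2))
```

<p>Speed up the attack (a smaller coefficient here) and the transient gets squashed along with the sustain, which is the opposite of what these scenarios call for.</p>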
<p> </p>
<p><span class="font_large"><strong>Equalization</strong></span></p>
<p>What does equalization do? It can boost frequencies present in your mix, or it can subtract from them. In essence, it's the "bass and treble knobs" in your DAW, just with more flexibility and precision.</p>
<p>Learning to use EQ can be tricky because there are a lot of guides out there saying "boost this frequency to make your kick drum sound awesome", and "boost that frequency to make your snare crack through the mix." And it seems every EQ comes with presets for these purposes. Which is really dumb if you think about it, because what if your recorded drums or drum samples already sound awesome and crack through the mix? Doubling up on those frequencies is going to make the drums sound worse, not better. Maybe flipping through those EQ presets will help you find a weird sound you wouldn't have thought to create. But 95% of the time, you're better off starting with a clean slate, then subtracting from any frequencies that are causing issues. If the situation even calls for EQ. It may not.</p>
<p>This took me a while to accept and put into practice, but often removing frequencies with EQ is more important than adding. Usually, it's not some frequency a sound is missing, but it's a frequency the sound has too much of that's messing up the mix and making it sound awkward and less than professional. Before you make any changes, listen to the sound and imagine what might need to change to make it sound better. The easiest way to find a frequency that needs to be removed is to make a big boost with a parametric EQ, then sweep that frequency up and down. Somewhere in that sweep, you'll find the area where the boost sounds the worst. That's because these frequencies need to be cut, not boosted. Once you've found that frequency, subtract that frequency instead of adding to it.</p>
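<p>That boost-and-sweep trick is essentially dragging a bell-shaped gain curve across the spectrum and listening for where it hurts. Here's an idealized bell in Python for intuition only; the shape is a made-up Gaussian-over-octaves of my own, not the response of a real biquad EQ filter:</p>

```python
import math

def bell_gain_db(freq, center, gain_db, q):
    """Idealized parametric-EQ bell: full `gain_db` at `center`,
    falling off faster for higher `q` (narrower bandwidth).
    Gaussian over octaves, for illustration only."""
    octaves = math.log2(freq / center)
    bandwidth = 1.0 / q
    return gain_db * math.exp(-(octaves / bandwidth) ** 2)

# Sweep a +12 dB bell across the spectrum and watch how much
# boost a 1 kHz tone receives at each stop. Wherever the boost
# sounds worst as it passes is a frequency to cut, not boost.
for center in [250, 500, 1000, 2000, 4000]:
    print(center, "Hz:", round(bell_gain_db(1000, center, 12.0, 2.0), 2))
```

<p>In practice you do all of this by ear with the EQ in your DAW; once the ugly spot reveals itself, flip the boost into a cut at that frequency.</p>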
<p>I use EQ when I hear a problem I want to fix. If there's low-end rumble in a track that doesn't belong, I add an EQ plugin and roll off the lows to better allow the kick drum and bass to speak. If a vocal is too bright, I cut the highs. If a vocal isn't bright enough, I add some highs, or sometimes cut some mids that might be overpowering the highs. I don't use EQ by habit to "sweeten" the vocal, but just to solve problems as I hear them. And if there isn't a problem to solve with EQ, why use it?</p>
<p>I use a minimalistic approach to EQing vocals, only touching where needed. And I do the same thing with instruments too. My music is largely electronic, which means I mostly use sounds from synths and virtual drums and other virtual instruments. These instruments are designed and built to sound problem free with no extra tweaking. Because of this, virtual instruments don't require heavy sculpting, unlike the many problems that need fixing when you're working with a poor microphone that was poorly placed near a physical instrument you're recording. Because virtual instruments often don't have problems that need fixing, I usually don't use EQ on virtual instruments at all. And when I do, it's to directly solve a problem: like when an instrument has too much low-end energy, or too much harsh sizzle, or too much mid-range bloat.</p>
<p> </p>
<p><span class="font_large"><strong>Other Effects</strong></span></p>
<p><a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/smooth-sounding-reverbs">Good mixes often have reverb</a>. Good mixes often have delay. Good mixes may utilize a lot of other effects too numerous to mention. But the minimalist mixer adds these effects when he or she identifies that one is needed, not by default or out of habit.</p>
<p>It's okay to start adding effects like crazy: a little of this, a little of that. That's how you experiment with what sounds good in a mix. And more importantly, this is how you learn what plugins do, how to use them, and which plugins you like more than others. Experimentation is great. But for each experimental plugin you add to your mix, really listen to make sure that you like what it's adding. Otherwise, it doesn't belong, and your mix will sound stronger if you remove it.</p>
<p><em>Pro tip: don't be fooled by volume. A lot of plugins make things louder while not necessarily making them better. Adjust the output volume of a plugin you're working with so it sounds the same volume with the plugin active or bypassed. Flip back and forth a few times between active and bypassed to listen to which you truly prefer. The answer might surprise you.</em></p>
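<p>One way to take the guesswork out of that volume matching is to match the loudness (RMS) of the processed signal to the dry one before comparing. A small Python helper, my own sketch rather than any plugin's built-in feature:</p>

```python
import math

def match_gain_db(dry, wet):
    """dB adjustment to apply to `wet` so its RMS level matches
    `dry`, for an honest active-vs-bypassed comparison."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20 * math.log10(rms(dry) / rms(wet))

# A plugin that doubled the amplitude needs about 6 dB of trim
# before you can judge it fairly:
print(round(match_gain_db([0.5] * 8, [1.0] * 8), 1))  # → -6.0
```

<p>Many plugins have an output trim for exactly this; set it so the A/B flip is level-matched, then trust what you hear.</p>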
<p>It's also a good idea not to use the same effect on many or all channels in your song. You may feel a mix sounds "enhanced" if you put reverb or delay or chorus on the master bus. It sure will sound different, but I bet it doesn't sound even half as clear as it does without. It's better not to put heavy effects like these on the master bus.</p>
<p>Likewise, if you use reverb in your mix, not every instrument should have reverb. If you use delay, not every instrument should have delay. Let those instruments with added effects sound different and special in contrast to the instruments that don't have them.</p>
<p>Distortion can be a really powerful tool for giving an instrument "grit" or "bite" or "edge". Sometimes I use it on pianos, sometimes on synths, sometimes on drums, often on bass guitar, and often on vocal chops. Using distortion on guitars is a no-brainer choice for many. But for a distorted instrument to sound good, it needs to be heard in the presence of clean instruments that aren't distorted. If you add distortion to the master bus, or to so many instruments it might as well be on the master bus, your mix might sound "edgier", but it will also sound a lot less clear and will be extremely fatiguing to listen to. Just like with the rest of your effects, carefully select which instrument needs the extra excitement, and try to keep the effect limited to that instrument.</p>
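<p>If you've wondered what simple distortion actually does to a signal, here's a bare-bones Python sketch of a tanh soft clipper. Real distortion plugins do far more (filtering, oversampling, character modeling); this shows just the core waveshaping idea, with a made-up "drive" parameter:</p>

```python
import math

def soft_clip(samples, drive=4.0):
    """tanh waveshaper: higher drive pushes more of the wave into saturation.
    Output is rescaled so a full-scale (1.0) input still lands at 1.0."""
    norm = math.tanh(drive)
    return [math.tanh(drive * s) / norm for s in samples]
```

<p>The curve squashes peaks and pushes mid-level content upward, which is what generates the added harmonics you hear as grit. It's also why distortion everywhere is fatiguing: every track loses its dynamic contrast at once.</p>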
<p> </p>
<p><span class="font_large"><strong>Why We're Talking Minimalism</strong></span></p>
<p>There are a few reasons:</p>
<p>First, eliminating unnecessary plugins is one of the quickest and easiest ways for beginners to level up their mixing chops. I often hear better mixes from amateur producers who "don't really bother" with mixing than from producers who have been practicing for a while. Aspiring mixers often feel they have to use every tool in every case, while people who "don't mix" never add the problematic plugins in the first place.</p>
<p>Second, it transforms mixing from a nebulous task, or a hazy list of expectations to live up to, into a series of problems you fix with a variety of tools. Listen for a problem, use a tool to fix it, and repeat until you don't hear any more problems. When there are no more problems to solve, the mix is done.</p>
<p>Third, it's just faster. Experimentation and creative exploration of plugins is a wonderful thing, and I encourage you to delve into it from time to time. But when it comes down to mixing a song, the results you'll get will be better and quicker when you use plugins just to solve problems you hear.</p>
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>I hope this helps you think about your approach to mixing. Have you discovered similar lessons during your journey? I'd love to hear about them in the comments below.</p>Milo Burketag:miloburke.com,2005:Post/51907242018-03-19T09:00:00-06:002018-04-19T17:22:41-06:00Identifying Audio Waveforms<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>Whether you're organizing a poorly-labeled mix or editing audio tracks in a production, you'll work a lot faster when you can recognize what a track is at a glance from its waveform alone. This is just one of those skills that help experienced audio engineers do what they do, and do it quickly.</p>
<p>But not everyone has spent a few hundred hours editing audio to have those visuals ingrained in memory. Today's blog post feels like a dry topic, so I'll keep it short. But I hope it helps at least a couple of my readers skip a few steps of tedious learning and level up the efficiency of their work.</p>
<p> </p>
<p><span class="font_large"><strong>The Basic Structure of Your DAW</strong></span></p>
<p>This probably feels self-explanatory to most of you, but just in case, we'll spend a few moments on the basics.</p>
<p>In every common DAW, tracks are stacked vertically, and each track contains a different instrument or sound. In the image below, you can see separate tracks for Kick, Hard Snare, Light Snare, and Hat Groove.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/ba70910baed5ef2471e28c0c3189853fdddd3e87/original/tracks.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p>And in each track, there are clips running horizontally, marking where audio or MIDI exist in the timeline of the song.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/62c3a27a565f94ae0a4985afb462a5c9438426e8/original/timeline.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p> </p>
<p><span class="font_large"><strong>Volume Visualized</strong></span></p>
<p><span class="font_regular"><strong>Amplitude</strong></span></p>
<p>In each audio clip, waveforms (squiggle shapes) mark where sound is present. The taller the waveforms are, the louder the sound is. The "Lead Vox" track shows a single waveform lane because it is a mono signal. But because the "Chorus Drone" track originated from a stereo synth, it shows separate lanes for the left and right channels.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/2d345544b91fd70dd95a46396ad46224c7485729/original/amplitude.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p><strong>A Quick Note on Gain-Staging</strong></p>
<p>Anytime you record audio from a microphone, you want to keep the gain low on your interface. There is a common misconception that you have to record loud in order to have the recording sound better. But usually the opposite is true: recording at a quieter level keeps your audio a safe distance from clipping and helps preserve momentary transients. If you want to learn more about this, check out the gain-staging section on my <a contents="gain-staging section on my guide to better mixes" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/10-steps-to-mixes-that-translate-part-2" target="_blank">guide to better mixes</a>. But in short, my rule of thumb is to never let a recorded signal exceed -15 dBFS.</p>
<p>You may also have noticed that many virtual instruments output ridiculously loud signals; their samples are often created at or near full scale. Not only can this clip the instrument itself or plugins in your processing chain, it will also clip your DAW's master bus unless you pull the virtual instruments' faders down very low. My recommendation is to turn down the volume inside each virtual instrument as soon as you open it. You can then fine-tune each instrument's level later with the faders in your mixer.</p>
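<p>If you like numbers, the -15 dBFS rule of thumb is easy to check programmatically. In this Python sketch (which assumes float samples where 1.0 is full scale, i.e. 0 dBFS), we measure a peak level and the trim needed to land it at a chosen ceiling:</p>

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS, where a full-scale sample of 1.0 reads 0 dBFS."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak)

def trim_to_ceiling_db(samples, ceiling_dbfs=-15.0):
    """Gain in dB needed so the peak lands at the ceiling (negative = turn down)."""
    return ceiling_dbfs - peak_dbfs(samples)
```

<p>A recording peaking at half scale sits around -6 dBFS, so by my rule of thumb it would still need roughly 9 dB of trim.</p>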
<p> </p>
<p><strong>Using Data Zoom</strong></p>
<p>But you'll notice that when you follow proper gain-staging rules like the ones I outlined above, your waveforms look smaller and are harder to identify, such as in this image:<img src="//d10j3mvrs1suex.cloudfront.net/u/210695/d776e6dde96844a705d14378b29296133ac6c8ce/original/data-zoom-before.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p>Fortunately, many DAWs offer a zoom tool to enlarge the waveform visuals without actually increasing the volume of the sounds they represent. In Studio One, this is called Data Zoom, and it can be found in the bottom right corner of the primary session view. </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/989a781b41b255b820a5caa06c561aa0d3d65604/original/data-zoom-control.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p>Don't the waveforms look so much easier to read now that I've adjusted the data zoom?</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/4f43ab0173c55a638ab281de1612cafaf54f620a/original/data-zoom-after.jpg" class="size_orig justify_center border_" /></p>
<p> </p>
<p> </p>
<p><span class="font_large"><strong>Frequency Visualized</strong></span></p>
<p>But when you look closely at a waveform, you can see a whole lot more than just how loud it is. Individual waves are wider, with more horizontal distance between successive peaks, when the frequency they represent is lower. In this blue waveform of a kick drum, you can see that the peaks are far apart, so the frequency is very low:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/f4f1ddccca666333980686686a85474338d28529/original/kick.jpg" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>This kick waveform is shown at the same zoom-level as the snare and hi-hat below.</em></p>
<p> </p>
<p>In this green waveform of a snare drum, you can see the waves are much smaller and closer together. This means that the frequency of the sound is higher.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/2f14d0b06a25bed7e16b5e8a0644df827645cf24/original/snare.jpg" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>This snare waveform is shown at the same zoom-level as the kick and hi-hat.</em></p>
<p> </p>
<p>You can also note that the snare drum sound begins very abruptly. The beginning of a sound is called the transient, and snare drums are often defined by their sharp transients.</p>
<p>You can use a transient shaper to accentuate or diminish the intensity of transients relative to the sustain of the sound. If I were to use a transient shaper to cut the attack and boost the sustain of this snare, it would give the snare more body and character. Or, I could use the same plugin to boost the attack and cut the sustain, helping the snare drum sound more impactful and enabling it to better cut through a dense mix without perceptually sounding louder than it should.</p>
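<p>Transient shapers are usually built on a simple idea: compare a fast envelope follower against a slow one, and let the gap between them drive gain. Here's a rough Python sketch of that idea; the time constants and the gain mapping are my own illustrative choices, not how any particular plugin works:</p>

```python
import math

def envelope(samples, fs, attack_ms, release_ms):
    """One-pole envelope follower on the rectified signal."""
    att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coeff = att if x > env else rel   # rise with attack speed, fall with release speed
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

def transient_shape(samples, fs, attack_gain_db=6.0):
    """Boost (or, with a negative value, cut) transients relative to the sustain."""
    fast = envelope(samples, fs, attack_ms=1.0, release_ms=50.0)
    slow = envelope(samples, fs, attack_ms=30.0, release_ms=50.0)
    out = []
    for s, f, sl in zip(samples, fast, slow):
        # The fast envelope leads the slow one only during attacks.
        t = max(0.0, f - sl)
        gain = 10.0 ** (attack_gain_db * t / 20.0)
        out.append(s * gain)
    return out
```

<p>Because the detector only fires while the fast envelope leads the slow one, the sustain is left alone, which is exactly the attack-versus-sustain split described above.</p>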
<p> </p>
<p>Below is the waveform of a hi-hat. Because the waves are much smaller and closer together, we can see that the frequency is very high. Also, the rounded start of the waveform shows that this hi-hat has a much softer transient than the snare drum above.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/ed508bc8664f027c2f8ee764da1b6f0615f24038/original/hi-hat.jpg" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>This hi-hat waveform is shown at the same zoom-level as the kick and snare above.</em></p>
<p style="text-align: center;"> </p>
<p style="text-align: center;"> </p>
<p><span class="font_large"><strong>Identifying Instruments by Their Waveforms</strong></span></p>
<p>In the same way that different instruments sound different, the waveforms of different instruments look different.</p>
<p> </p>
<p><strong>Piano</strong></p>
<p>Here is a wide-zoom waveform of a piano:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/3180a5892ce141e246c29a3eef26bfbe446c0070/original/piano.jpg" class="size_orig justify_center border_" /></p>
<p>You can see that each chord begins abruptly, and that the sound gently fades until the next chord.</p>
<p> </p>
<p>Here is a closer zoom on just one chord of the same piano:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/b1671c7b5371957a2e542204df1ad19ea6720ff5/original/piano-close.jpg" class="size_orig justify_center border_" /></p>
<p>You can see that the sound very gradually falls off. This is very typical for sustained instruments. The waveform of a strummed guitar would look very similar.</p>
<p> </p>
<p><strong>Drums</strong></p>
<p>Here, I have the waveform for a drum kit at the same zoom-level as the wide-zoom piano.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/58ff8966000771b7af233a1f39e84672bc605b03/original/drum-kit.jpg" class="size_orig justify_center border_" /></p>
<p>By comparison, you can see that the waveform consists of a bunch of spikes without much sustain. These spikes represent the individual drum hits and the percussive, sharp-transient nature of each sound.</p>
<p>Let's take a closer look:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/147b619384a9d6639c105d7eec3b0e7389c490a0/original/drum-kit-close-labeled.jpg" class="size_orig justify_center border_" /></p>
<p>When you zoom in tighter, you can see the individual drum hits more clearly. We can also visually identify which waveforms belong to which drums:</p>
<ol> <li>This drum groove begins with loud kick drum notes that have a little space between them. Note the fat waves representing low frequency.</li> <li>A snare drum follows. We know it's a snare because of the sharp transient, because the waves are tighter indicating higher frequency, and because snare drums are often very loud.</li> <li>The kick drum is hit twice again, but this time the hits are softer and closer together.</li> <li>At the end of the groove, we can see two soft taps on the hi-hat. They're too small to properly see how tight the waves are at this zoom-level, but the frequency doesn't look low, the transients look rounded, and the hi-hat is generally played much softer than the kick and snare.</li>
</ol>
<p> </p>
<p><strong>Vocals</strong></p>
<p>Below is the waveform of a singer's vocal phrase, shown at the same zoom level as the wide-zoom piano and drums:</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/baf4d52689a521d77c30d08be5c547788d8875c4/original/vocal-close.jpg" class="size_orig justify_center border_" /></p>
<p>We can't learn a whole lot just from looking at the waveform because the shapes are far more random than the predictable waveform shapes of the piano and drums. And that randomness actually helps us identify it. The human voice can create loud and soft vowels that sustain or even grow in intensity. Vowels can appear medium-low frequency while some consonants appear very high frequency. I explore this much more in my <a contents="de-essing guide" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-favorite-de-essing-trick" target="_blank">de-essing guide</a>.</p>
<p> </p>
<p>It takes more experience to break down what is going on in a vocal waveform. But I can get you started.</p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/9d311807f1974cc8e3232fd829948ee23d94e723/original/vocal-close-labeled.jpg" class="size_orig justify_center border_" /></p>
<ol> <li>Here we can see a long, sustained vowel that gets softer near the end.</li> <li>Here marks a sibilant sound, probably an "S". You can tell because the waveforms are smaller and more tightly packed together.</li> <li>After several shorter vowels and softer consonants, we see another sibilant consonant. Possibly an "F".</li> <li>At the very end of the phrase, we can see the singer take a soft breath.</li>
</ol>
<p>Vocal waveforms are the hardest to study, but the more vocal editing you do, the more you'll pick up on the signs to help you find exactly what you're looking for while editing.</p>
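<p>One visual cue from the labeled image, the tightly packed waves of an "S", also has a simple numeric counterpart: sibilant stretches have a far higher zero-crossing rate than vowels. Here's a rough Python sketch of that idea; the frame size and threshold are arbitrary illustrative values, not a de-essing recipe:</p>

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign flips; high for noisy 's' sounds."""
    flips = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0.0) != (b < 0.0))
    return flips / max(1, len(frame) - 1)

def flag_sibilant_frames(samples, frame_len=512, threshold=0.2):
    """Indices of fixed-size frames whose zero-crossing rate suggests sibilance."""
    flagged = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        if zero_crossing_rate(samples[i:i + frame_len]) > threshold:
            flagged.append(i // frame_len)
    return flagged
```

<p>This is essentially what your eye learns to do when scanning a vocal waveform for the small, dense bursts that mark an "S" or "F".</p>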
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>Yep, that was a dry one. Studying waveforms isn't very fun. And once you can do it, you can do it. You don't need me to point things out to you.</p>
<p>But for those growing in their experience that want a kick-start in editing audio, I hope today's guide helps you edit more quickly by paying attention to the visual cues in waveforms.</p>Milo Burketag:miloburke.com,2005:Post/48229972018-03-12T09:00:00-06:002018-06-05T15:42:51-06:00Shortcutting Arrangements with MIDI Skeletons<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>One of the most amazing aspects of music production is just how fun it is to start throwing down layer after layer, adding new instruments and melodies and samples, building up a song until it blooms into life. It's one of the reasons production is so compelling: it's entertaining like a great video game, it's engaging like television never can be, you are building real skills as you produce, and you have something to show for your work after each production session.</p>
<p>One of the challenges, however, is that while stacking on new instruments and sounds can be great fun, arrangement is not. If you've been producing for even a short length of time, I'm sure you've experienced the exhilaration of packing on layers closely followed by the crash, as you realize you have one giant loop of music that all sounds the same. You hit the brick wall.</p>
<p>How long you stay at the wall is up to you. It could be a few minutes, it could be a few months.</p>
<p>But making arrangements doesn't have to be intimidating. I'm going to show you a tool you can use to remove the roadblocks of arranging and get back to the fun of producing.</p>
<p> </p>
<p><span class="font_large"><strong>What Is a MIDI Skeleton?</strong></span></p>
<p>A MIDI skeleton is a map for the arrangement of a song. It charts out not only what the song sections are, but which instruments are in the song, and which section each instrument plays in. And where do you get all of this information? You borrow it.</p>
<p>Pick a song that you love and want to emulate. There are probably a lot of reasons you love this song, and I'd be willing to bet that a strong arrangement is a big contributor to its sound, whether you realize it or not. We're going to start taking notes on how the song's arrangement does what it does, so you can learn to use these methods too.</p>
<p> </p>
<p><span class="font_large"><strong>How to Build the Skeleton</strong></span></p>
<ul> <li>Create a brand new, empty session in your DAW. Import the song you want to learn from into the session.<br> </li> <li>Match the session's tempo to the tempo of the song. If your DAW doesn't have beat detection, use an online metronome: I've had good luck using the average across 20 beats using <a data-link-label="" data-link-type="url" href="http://a.bestmetronome.com/" style="" target="_blank">this metronome</a>.<br> </li> <li>Line up the first beat of the song further into your timeline than bar 1. I like to line it up with bar 9, which is especially useful if the song has a longer, arrhythmic intro.<br> </li> <li>Listen to the entirety of the song, adding markers to your session to mark each song section. For example, pop songs virtually always have verses and choruses. And they often have intros, outros, bridges, and interludes too. It gets a little tricky if the song you're learning from doesn't have vocals, but sections like the build and the drop are usually obvious. Feel free to describe sections of instrumental songs as verses and choruses if it seems to fit. Use whatever terminology makes sense to you for describing each section's place in the song.<br> </li> <li>Create a blank MIDI track for each dominant instrument you hear in the song. For example, an electronic pop song probably has vocals, drums, synth bass, synth chords, and a synth pad. It also probably has something special, like a rhythm guitar or mallet percussion or string plucks. Add a blank MIDI track for each major instrument the song contains.<br> </li> <li>For each blank MIDI track, create empty MIDI regions for each place in the song you can hear the instrument. For example, maybe the synth bass is present in the choruses but not the verses: add a MIDI region without notes to the synth bass track to occupy that track for each chorus, but not for each verse. Continue doing this for all primary instruments. It will take some time.</li>
</ul>
<p>This process usually takes me about a half hour if I'm being really thorough. Though I can do it in less than fifteen minutes if I'm aiming just for the broader points.</p>
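<p>If you'd rather not rely on an online metronome, averaging across 20 beats is simple arithmetic you can do yourself: tap along, note the timestamps, and average the intervals. A quick Python sketch of that step (the tap times here are hypothetical):</p>

```python
def bpm_from_taps(tap_times):
    """Average tempo in BPM from a list of beat timestamps in seconds."""
    if len(tap_times) < 2:
        raise ValueError("need at least two taps")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg = sum(intervals) / len(intervals)
    return 60.0 / avg
```

<p>Averaging over many beats smooths out the jitter in your tapping, which is exactly why 20 beats works better than 2.</p>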
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/bedbe3e0ba5992d3d175f515fa97f1ffb05230e3/original/midi-skeleton-1-800wide.jpg?1503956753" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>Here's one of my MIDI Skeletons. I tend to make mine thorough.</em></p>
<p> </p>
<p><span class="font_large"><strong>Finished - Now What?</strong></span></p>
<p>You now have a map of the song charted out in your DAW, or a "MIDI skeleton". For starters, it's a great teacher.</p>
<p>What can you learn?</p>
<ul> <li>I used to wonder what was lacking in my arrangements until I started creating MIDI skeletons. After making maps for several songs I really enjoy, I realized that all of them transitioned from a high-energy pre-chorus to a low-energy chorus to a high-energy drop that sounded reminiscent of the chorus. Yet I was still writing in a medium-energy pre-chorus, high-energy chorus song structure that didn't have a drop. This now feels like such an obvious thing, but I just never noticed it until I did the homework, and my songs took on a dramatically different feel when I started writing in this arrangement. Likewise, there are probably pretty bold differences between your music and the music you love that you haven't yet noticed.<br> </li> <li>Get specific and learn from the little details. Did you know that dropping out the bass or drums at the end of a loud section is a great transitionary tool? Whether you lead into a soft section or another loud section from there, the listener's interest is sustained as the song's energy was broken up and the listener knew a transition was coming. This can be really effective even if the only thing you cut out is the kick drum. Take note of which instruments drop out within a song section and exactly when this happens.</li>
</ul>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/2ab2397c3f0c0553c3c266fac210d6c8642b3874/original/midi-skeleton-2-800wide.jpg?1503956752" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>Staggering start/end timing for layers can contribute to killer transitions</em></p>
<ul> <li>Also, take note of what is added over time. What instruments are present in the chorus that aren't in the verse? What instruments make the second verse more special than the first verse, and the second chorus more special than the first chorus? Does the song you love make use of risers or transitionary instruments specifically for ramping up the energy between two song sections?<br> </li> <li>Your MIDI skeletons will likely teach you very different things than mine taught me. After all, we'll be learning from different songs, and often, completely different genres. Let your musical tastes guide you towards which songs you want to break down in this fashion, which will in turn teach you how to sound more like your own musical tastes. There's no downside.</li>
</ul>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/bf12d436266cd1cf57bbf4817c31bd931b19e78d/original/midi-skeleton-3.jpg?1503956753" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>A study in arranging guitar layers throughout a song</em></p>
<p> </p>
<p><span class="font_large"><strong>Making Music</strong></span></p>
<p>While this has been a great teacher, there's more that you can do with a MIDI skeleton than just learn from it. You can use it.</p>
<ul> <li>It's best if you save your MIDI skeleton session not as a session, but as a template for future sessions.<br> </li> <li>Open up a new session from a MIDI skeleton template and change the tempo a little or a lot.<br> </li> <li>You already have all the song sections mapped out, and a list of the instruments too. But no audio tracks and no MIDI notes. This makes the perfect place to begin a new session.<br> </li> <li>Is the chorus marked with synth bass? Great. Choose your own bass patch and write your own bass melody for the chorus. You can fly this bass melody over to each chorus in the song that has synth bass in it.<br> </li> <li>Are the drums soft in the verses, absent in the choruses, and super heavy in the drops? Great. Make your own drum parts that follow the same rules, but are your own sounds played in your own patterns.<br> </li> <li>Does the MIDI skeleton have a piano or guitar part as a featured instrument? Write your own piano or guitar part to fit in that place.</li>
</ul>
<p>You're on the fast-track to making music similar to what you love, borrowing its strong arrangement, primary instrument selection, and transitionary tools.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/bcc2f211e79cda6dfd5585203d72c6b42805a74c/original/midi-skeleton-4.jpg?1503956753" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>Start creating your own instrument layers according to your MIDI skeleton</em></p>
<p> </p>
<p><strong><span class="font_large">But Isn't This Wrong?</span></strong></p>
<p>If the melodies are yours and the song is your own, no one will be able to tell if you borrowed heavily from the arrangement of another song. Especially if you use a different BPM, your song is in a different key, and you use different chord progressions. After all, copyright violations stem from lyrics and melodies that are borrowed too closely. But you can't copyright an arrangement, or the tools you used to draw attention to a transition. These are just indications of craft shared by all artists.</p>
<p>Had Mozart directly copied melodies from Haydn, or Beethoven from Mozart, it would be plagiarism. Definitely not a line to cross. Yet all three were composers in the same Classical period of music with shared conventions: melodies were more homophonic than polyphonic, dynamics were used liberally, and orchestras were large and included woodwind sections. All three wrote three-movement symphonies common for the time, with a fast movement, a slow movement, and another fast movement. And all three moved toward four-movement symphonies (the later composers more fully), often with an allegro, adagio, minuet, and another allegro. This was not plagiarism: it was just the standard format of music that composers commonly followed, in the same way that rock and pop music have for decades followed the verse>chorus>verse>chorus>bridge>chorus format.</p>
<p>Likewise, you could get in trouble if you aim to paint your own copy of a famous painting you really enjoy. But you're not doing anything wrong if you learn from it, taking note of how the original artist used bold or subtle colors, what type of brush strokes she employed, and perhaps stylistic details like how shadows are portrayed. Any original paintings you make with these techniques are still yours. Art is ripe for learning without doing anything leading to a copyright violation, or even wrong-doing in the inspiring artist's mind.</p>
<p> </p>
<p><span class="font_large"><strong>Changing How You Work</strong></span></p>
<p>What you do with your MIDI skeletons and how many of them you make is up to you.</p>
<p>My friend has well over 100 MIDI skeletons nested in folders and subfolders arranged by genre and by aspects of the track, such as long-build and short-build songs, double-drop and single-drop songs, long-intro and short-intro songs, etc. For each new song, he picks one based on his mood and jumps in. All the hard decisions about arrangement are already made, and he's free to just produce. If you produce in this style, it may be worth building a super-loop in the place of the chorus or the drop. You already know each place in the song it belongs, so copy and paste as you need to. And the MIDI skeleton template tells you where to deviate from your super-loop as you build out the verses, etc. It's a great system.</p>
<p>Personally, I've only made about 15 MIDI skeletons, though I did make them extremely specific and detailed. Along the way, I learned a lot about how different instruments interact with each other in the same song section in addition to learning which song sections in which order speak powerfully to me, and which instruments seem to work in each song section. A couple of my songs are built on these MIDI skeletons of other songs, though I bet even the artists themselves wouldn't recognize them if they heard them.</p>
<p>However, I stopped making MIDI skeletons, and I've largely stopped using the small library that I created as session templates. Why? Because I no longer feel my songs suffer from poor arrangements. I spent enough time studying the big picture and the subtle techniques in making those MIDI skeletons that I already know what to do for my own songs. I don't feel stuck in super-loops for long periods of time anymore. And I like the freedom of being able to determine a song's structure on my own, based on what I feel it needs. That said, I will probably make more MIDI skeletons in the future when there's some new technique or style I want to study and emulate.</p>
<p>Either approach is good. Create a library of templates that foster creativity through controlled restriction, or just use this as an exercise until arrangements are no longer a problem for you.</p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>I'd love to hear what you guys think. Have you made MIDI skeletons before? Did you find them helpful?</p>
<p>What arrangement tricks have you recently learned? Which transition technique do you rely on the most?</p>Milo Burketag:miloburke.com,2005:Post/48030842017-10-24T09:00:00-06:002018-05-24T19:41:41-06:00Three of My Favorite Plugins: Part 2 (October 2017)<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>It's time for a second round of covering the plugins Milo uses all the time and is currently excited about. Why waste more time? Let's jump in:</p>
<p> </p>
<p><span class="font_large"><strong>1. <a data-link-label="" data-link-type="url" href="https://valhalladsp.com/">Valhalla DSP</a> <a data-link-label="" data-link-type="url" href="https://valhalladsp.com/shop/reverb/valhalla-vintage-verb/">VintageVerb</a></strong></span></p>
<p>Valhalla DSP is the vision of plugin designer Sean Costello. It's a tiny company that specializes in great sounding reverbs and other effects similar to reverb. Its pricing has no room for frills: all plugins cost the same, all plugins are cheap, there are no bundles, there are never sales, and there are no promotional offers to get famous people using the plugins. If you want to use it, you pay for it, and you pay the same price as everyone else.</p>
<p>VintageVerb is an incredible-sounding reverb with a lot of tricks up its sleeve. But even better, it just doesn't have that weird slappy sound so many reverb units and plugins have, which makes them sound like no room that ever existed. If you're looking for reverb that sounds creamy, lush, and real, I highly recommend VintageVerb.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/021a8818f3493425b596f327f3202055f38ea78f/original/vintageverb-2-800wide.jpg?1508806633" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>VintageVerb set to 70's style giving a warm, roomy sound</em></p>
<p style="text-align: center;"> </p>
<p>I also own <a data-link-label="" data-link-type="url" href="https://valhalladsp.com/shop/reverb/valhalla-room/">Valhalla Room</a>, and I like it and use it as well. It has a certain cleanness to it that translates really well for recreating the authenticity of a large space. But VintageVerb's decade selector is a really neat way to add character and vibe to reverb, and its design makes it fit that much better into the popular styles of music we've all heard for decades.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/f5024c4e3e35423de7d7830dfe4e12290529114a/original/vintageverb-1-800wide.jpg?1508806633" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>VintageVerb set to 80's style, giving presence to the vocal</em></p>
<p style="text-align: center;"> </p>
<p>Some people really don't like Valhalla DSP's interface style. But the truth of it is that having a killer graphical user interface that looks exactly like a famous hardware piece or has spinning tape wheels or glowing tubes doesn't help your music sound better. VintageVerb looks like it was crafted out of construction paper. I'm sure Trey Parker and Matt Stone would be proud. But it does the job and sounds great, and I can't fault it for that.</p>
<p> </p>
<p><span class="font_large"><strong>2. <a data-link-label="" data-link-type="url" href="https://www.waves.com/">Waves</a> <a data-link-label="" data-link-type="url" href="https://www.waves.com/plugins/greg-wells-voicecentric#greg-wells-voicecentric-vocal-plugin-demo" style="">Greg Wells VoiceCentric</a></strong></span></p>
<p>I know, it's horrifying. I use one of those "one-knob" amazingifyer plugins branded after a celebrity.</p>
<p>But here's the deal: there's a lot going on inside this plugin, and I really like the sound I get out of it. More importantly, it helps me achieve the sound I'm looking for quickly, which matters when I'm just trying to establish a vibe so I can lay down vocals before I lose an idea, or when I'm rushing a mix out the door.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/36ec84358dce1f77c67ec41b9789933d5eae0574/original/voice-centric-1.jpg?1508806633" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>This is pretty close to my starting place, tweaked slightly for one of my songs</em></p>
<p style="text-align: center;"> </p>
<p>VoiceCentric is designed to be the centerpiece of vocal mixing, and to help you get results quickly. The big knob in the middle helps you dial in how much "effect" you want the plugin to have, from less EQ sculpting and less compression to a lot of both. If your compression is out of whack, you can trim the input gain to get things balanced, and then adjust the output gain to keep your plugin volume-neutral.</p>
<p>I really like the built-in effects. The reverb is nice and dark, and good at getting out of the way of the sound. It's even detuned a little, to better help it separate from the primary sound. The delay is dark and spacious too. And the doubling adds a certain strength and weight to a vocal that I just couldn't find in other plugins. My voice sounds pretty reedy without it. The mix of these three effect knobs varies by song, but throwing on my basic preset gets me close, and fine-tuning the effects for the song gets me even closer.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/c8022b6cc51a14ba73cb06d9672cc5e4658b1355/original/voice-centric-2.jpg?1508806633" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here I'm aiming for more vocal crispness and a dryer sound</em></p>
<p style="text-align: center;"> </p>
<p>Because there are no adjustable parameters for the reverb and delay, I don't view them as a replacement for a proper reverb or delay bus, but I also don't view them as a nuisance to shut off. I keep a little in the mix to give the vocal space and strength and vibe. But I still rely on proper effects buses to bring out the full depth of sound, often using VintageVerb and <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017">EchoBoy Jr</a>.</p>
<p>In full disclosure, I own a lot more Waves plugins than I use. And I don't think I'd ever buy one of their products if it wasn't part of some tremendous sale. But this is a tool I bought (on sale) and get a lot of mileage out of. And if I didn't have it, I'd go out and buy it today.</p>
<p> </p>
<p><span class="font_large"><strong>3. <a data-link-label="" data-link-type="url" href="https://www.wavesfactory.com/">Wavesfactory</a> <a data-link-label="" data-link-type="url" href="https://www.wavesfactory.com/trackspacer/">Trackspacer</a></strong></span></p>
<p>I was confused at first too, but Wavesfactory is a different company from Waves that just happens to have a similar name. It focuses on instruments and has just a few audio processing plugins. I don't own much from Wavesfactory, but what I do have, I love.</p>
<p>Trackspacer is designed to be your secret weapon in getting your mix to sit just right. What does it do? Simply put, it helps you eliminate masking without mucking up the integrity of your instrument's frequency balance or your mix balance too much.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/74ba43b3ac77a5896826bdb8d322a2bea8306e1d/original/trackspacer-1.jpg?1508806632" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>When used gently, you don't even hear it working - you just hear the extra space</em></p>
<p style="text-align: center;"> </p>
<p>When you have two instruments that are masking each other, say voice and guitar, decide which you consider dominant. For me, this would be voice. If the guitar needs to be bright and rich in order to sound right, but then obscures the intelligibility of the voice with its bright and rich tone, place Trackspacer on the guitar (or guitar bus) and sidechain the vocal channel to it.</p>
<p>Trackspacer reads the spectrum of the sidechain input and applies the inverse of it with a 32-band equalizer in real-time. In our example, if the body and clarity and sibilance of the voice is getting obscured by the guitar, those frequency ranges present in the voice get cut from the guitar only while the voice is present. And the instant the singer's phrase is over, the guitar sounds full range again. It's so quick that you don't realize something was missing.</p>
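<p>If you're curious what "apply the inverse of the sidechain's spectrum" looks like in practice, here's a toy offline sketch in Python. To be clear, this is my own illustration of the general idea, not Trackspacer's actual algorithm: the function name, parameters, and the log-spaced 32-band split are all assumptions for the demo.</p>

```python
import numpy as np

def spectral_duck(main, sidechain, sr=44100, n_bands=32, depth=0.5,
                  lo=100.0, hi=8000.0):
    """Attenuate bands of `main` where `sidechain` has energy.

    A toy, offline sketch of sidechain-driven inverse EQ -- the idea
    behind Trackspacer, not its actual algorithm.
    """
    n = len(main)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)       # bin center frequencies
    main_spec = np.fft.rfft(main)              # spectrum to be ducked
    side_mag = np.abs(np.fft.rfft(sidechain))  # sidechain analysis
    peak = max(side_mag.max(), 1e-12)          # avoid divide-by-zero

    # Split [lo, hi] Hz into log-spaced bands, like a 32-band graphic EQ.
    edges = np.geomspace(lo, hi, n_bands + 1)
    gains = np.ones_like(freqs)
    for b in range(n_bands):
        band = (freqs >= edges[b]) & (freqs < edges[b + 1])
        if band.any():
            # The louder the sidechain is in this band, the deeper the cut.
            level = side_mag[band].max() / peak
            gains[band] = 1.0 - depth * level

    return np.fft.irfft(main_spec * gains, n)
```

<p>The real plugin does this continuously, frame by frame, so the cuts appear only while the vocal is sounding and vanish the instant the phrase ends.</p>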
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/af353dfb03998be9a920ff7248941f5f0f6e7a1c/original/trackspacer-2.jpg?1508806635" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Trackspacer is a great tool for creating space in the mids and upper bass</em></p>
<p style="text-align: center;"> </p>
<p>Trackspacer comes with dials to adjust the aggressiveness of the EQ ducking, plus an input low-pass and high-pass to set the range of frequencies the plugin operates on. The result is a neat little tool that, when dialed in appropriately, unmasks offending instruments and creates space in the mix almost invisibly. It's processor-intensive, so I generally don't use it until I'm past production and into the mixing phase, when I can lean heavily on freezing tracks. But ever since I bought it, Trackspacer invariably finds a place in my mixes, solving a little issue here and creating a little breathing room there.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>There you have it: another three months gone by and another three of my favorite plugins that I use regularly. If you want more, you'll probably want to check out my <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017">previous round-up here</a>. And also, I write about <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/one-of-my-favorite-instruments-part-1-august-2017">one of my favorite instruments here</a>.</p>
<p>I've shared my part, but I certainly haven't tried even a tenth of the plugins on the market. Do you have any killer plugins you feel I should know about? Share about them in the comments below.</p>Milo Burketag:miloburke.com,2005:Post/48841402017-10-10T09:00:00-06:002018-04-25T20:51:22-06:00Rocky Mountain Audio Fest 2017<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>This past weekend, I had a blast exploring the <a data-link-label="" data-link-type="url" href="https://www.audiofest.net/">Rocky Mountain Audio Fest</a>, hosted by the <a data-link-label="" data-link-type="url" href="https://coloradoaudio.com/">Colorado Audio Society</a>. It was my third year attending, and I hope it won't be my last.</p>
<p>If you're not in the know, it's a big convention for high-end stereos aimed at consumers. The exhibitors are brands that make things like speakers, stereo amplifiers, speaker cables, <a data-link-label="" data-link-type="url" href="https://www.crutchfield.com/S-FXZV3voHM9r/m_308950/Digital-to-Analog-Converters.html">DACs</a>, and other components important to home audio. Hosting this in an open convention hall would be madness, so they essentially rent out an entire hotel. Each room has all the beds and furniture removed, and a speaker brand will team up with an amplifier brand and a cable brand to outfit the room with a great-sounding stereo. Attendees go from room to room hearing each new stereo, and you can even bring your own music to hear how each system performs on material you know.</p>
<p>For those who are interested, many bloggers and hi-fi magazines are covering the convention and the exciting products major brands unveiled there. I'll let them cover that beat. But I will offer a few takeaways from the show that feel significant to me:</p>
<p> </p>
<p><span class="font_large"><strong>1) Speakers Matter - Other Equipment Matters Less</strong></span></p>
<p>It's hard to say with 100% certainty that if a room sounded good or bad, it was because of this component or that component. You would need to do in-depth shootouts to determine that, which just isn't possible from one room to another when all components are changed instead of just one at a time.</p>
<p>That said, speakers are king. Maybe a room was set up with $30,000 of cabling or just $30 of cabling, but when I walked in and heard a new system, it was the speakers I was hearing. Bad speakers kill the sound, and great speakers make the sound. Everything else is a bonus, chasing incremental improvements. Quality speakers are easily the most important investment you can make in a stereo, so as far as electronics are concerned, spend the lion's share of your budget on them.</p>
<p> </p>
<p><span class="font_large"><strong>2) Placement and Acoustics Matter Just as Much</strong></span></p>
<p>The rooms and stereos that really impressed me had three things in common: the speakers were great, the speakers were carefully placed, and the exhibitors worked with the acoustics of the room. On the contrary, I heard some really great speakers that just sounded like mush because they were haphazardly placed and acoustics weren't taken into account.</p>
<p>You're right to place a lot of emphasis on good speakers, whether you're producing and mixing on them or just using them to enjoy music made by others. A lot of people have this emphasis already. But equally important is your attention to <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/an-acoustic-primer-the-secret-to-better-mix-decisions">room acoustics</a>, which most people forgo entirely. A little research and a little budget <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-1">go a long way</a>. And equally important is <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating">speaker placement</a>: if your speakers aren't in the right place in your room, they're just not going to sound very good, no matter how fancy and expensive they are. Do yourself a favor and pay attention to acoustics, especially since it's cheap, and speaker placement, especially since it's free.</p>
<p> </p>
<p><span class="font_large"><strong>3) Listening Means Nothing Without Reference Tracks</strong></span></p>
<p>A lot of rooms played some really cool songs I'd never heard before. And, for some reason, a lot of rooms played some really terrible music I hope never to hear again. (Why they would do this baffles me.) But it's not really fair to judge a system on music you don't know. Maybe the terrible music is masking a great-sounding stereo. And maybe the cool music that sounds good just happens to undercompensate exactly where the stereo overcompensates, covering for its shortcomings. That one song may sound good, but it's not representative of how the speakers will sound across all music.</p>
<p>The only way to fairly judge systems is to listen to the same piece of music on as many stereo systems as you can, ideally music that sounds really good and that you know really well. One of my favorite test tracks is the song "Love Is A Verb" by John Mayer (off of his album <a data-link-label="" data-link-type="url" href="https://www.amazon.com/Born-Raised-John-Mayer/dp/B01EUKLOO6/ref=sr_1_1_twi_aud_2?ie=UTF8&qid=1507589654&sr=8-1&keywords=born+and+raised"><em>Born and Raised</em></a>). The instruments are lush, the vocals are precise, the bass has impact, the mix is great, and the frequency response is broad. And, as bonuses to the many other listeners to be found at such shows, the genre is unoffensive to most all, the lyrics are interesting, and the song is very short.</p>
<p> </p>
<p><span class="font_large"><strong>4) Agnostic Listening</strong></span></p>
<p>I approach speakers (and other gear) with a certain degree of skepticism. If someone tells me that they use adamantium magnets to drive a woofer made out of Hindenburg fabric in an enclosure constructed entirely from Apollo-program heat shields, I don't care. If it sounds good, it sounds good. Too many times have I heard the hype over speakers with some radical new tech or construction material that just doesn't pan out into great sound. Maybe a great pair of speakers have tweeters made from the Shroud of Turin, or maybe it's just paper for the tweeter and woofer. That's okay if the sound is there.</p>
<p>Likewise, I don't care if a speaker is cheap or expensive. This past weekend, I heard a pair of speakers that cost over $250,000 and thought they sounded like a cruel joke. And <a data-link-label="" data-link-type="url" href="http://www.vanatoo.com/store/speakers/new-transparent-zero-black#.Wdv-WTCQyUk">a pair of $360 speakers</a>, somehow including amplifiers and DAC and signal selection and even a Bluetooth radio, sounded surprisingly good. Quality of sound speaks, not style or marketing or price. I encourage you to let your ears guide you.</p>
<p>I don't buy it when people say "this speaker is great for EDM" or "that speaker is only good with classical". If a speaker is limited to a certain genre, there must be some flaw that the nature of that genre obscures. Maybe that flaw is as simple as limited low-frequency output, in which case adding a subwoofer or two is the miracle solution. Or maybe the problem is much worse. But a great speaker is a great speaker, whether you're playing jazz or classical or rock or pop or EDM.</p>
<p>Also, I don't really care whether I end up using hi-fi speakers for my studio or studio monitors for my living room. A great speaker is a great speaker, whether it's self-powered or not, and whether it's designed for and marketed to pros or consumers. Again, let your ears guide you, not marketing.</p>
<p>I encourage you to listen as a skeptical agnostic: the sound is all that matters, not what it's made of, who it's made by, who it's made for, or how it's marketed. Believe nothing but your ears.</p>
<p> </p>
<p><span class="font_large"><strong>5) My Budget Champion for Speakers</strong></span></p>
<p>My favorite speakers of the show were made by <a data-link-label="" data-link-type="url" href="https://www.elac.com/">ELAC</a>, and designed by Andrew Jones. I have something of a man-crush on <a data-link-label="" data-link-type="url" href="https://elac-content.s3.amazonaws.com/uploads/2015/12/3000x1600_AndrewJonesImage-1500x800.jpg">Mr. Jones</a>, and it's not because of his winning personality or his fame or his incredible accent (though those are pluses). It's because when he designs speakers, even when working with a minimal budget, he builds killer speakers that sound incredible. He's famous within hi-fi circles for a reason, and you don't have to be a fanboy to buy into his work.</p>
<p>If I were shopping with the appropriate budget, I'd buy the <a data-link-label="" data-link-type="url" href="https://www.elac.com/series/adante/">Adante series</a> of ELAC speakers, which he designed. They're superb in every way, possibly the best at the show regardless of budget, despite being cheaper than 95% of speakers I saw. The floor-standing model would be ideal, though the bookshelf model is half the price and should deliver nearly all of the performance.</p>
<p>But that's still real money we're talking about, and the <a data-link-label="" data-link-type="url" href="https://www.elac.com/series/uni-fi/">Uni-Fi series</a> of ELAC speakers he also designed is only 1/5th the price of the Adante. They still sound incredible, better than 100% of speakers by other brands up to 10x the price, and better than 95% of speakers up to 100x the price. I truly mean that. A pair of these properly set up in your room would sound killer, and the price is unbeatable.</p>
<p>Note: the <a data-link-label="" data-link-type="url" href="https://www.elac.com/series/debut/">Debut series</a> of ELAC speakers is cheaper still, but I can't recommend them based on the sound. Upgrading to the Uni-Fi series is worth it 10x over.</p>
<p> </p>
<p><span class="font_large"><strong>6) Bonus Recommendation</strong></span></p>
<p>The internet-direct <a data-link-label="" data-link-type="url" href="http://www.hsuresearch.com/" style="">Hsu Research</a> (pronounced Shoe Research) wasn't actually at the show this year, though it has been in the past. I've heard some good and some bad things about its traditional bookshelf speakers, but its subwoofers are incredible, especially for the money. I've personally installed three different models in peoples' homes, and each time, I was amazed by the quality and quantity of bass for the money. If you need a good subwoofer and you have a terrestrial budget, I'd recommend buying from Hsu Research - whether that means you get a single <a data-link-label="" data-link-type="url" href="http://www.hsuresearch.com/products/vtf-1mk3.html">modest Hsu subwoofer</a> or a pair of <a data-link-label="" data-link-type="url" href="http://www.hsuresearch.com/products/vtf-15hmk2.html">beastly Hsu subwoofers</a>, I doubt you could equal the value with any brand, much less surpass it.</p>
<p>Pro tip: buying two subwoofers not only improves headroom and works toward negating room nulls, but it provides the opportunity to reference your "pair of Hsus", which is always fun.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>If you couldn't tell, I had fun at the show. I love this stuff, even though most of it is geared toward the rich.</p>
<p>I shared with you what I learned at the show that applies to every stereo used for every purpose in every room. One could get lost in the world of expensive DACs and amps and speaker cables, and though they provide some benefit, they're not nearly as important as the basics: speakers, placement, and acoustics.</p>
<p>Don't be influenced by price. In fact, don't even pay attention to it until after you've listened for yourself and made your own decisions. Marketing is marketing: it will always tout its product, even when the claims are false or at least blatantly misleading. Let your ears decide instead.</p>
<p>And though I'm not paid a cent by anyone to say this, you now know my personal value favorites to assemble a killer stereo, whether for production or engineering or home theater or living room music enjoyment: either the Uni-Fi or Adante ELAC speakers, according to your budget, and any single or pair of Hsu Research subwoofers, according to your budget. Of course, trust your own ears more than my recommendation. But know I make my recommendation based on what my ears tell me.</p>
<p>Have you made it out to an audio convention? What was your experience? Perhaps equally important: what songs did you discover at the show? Share in the comments below.</p>Milo Burketag:miloburke.com,2005:Post/48229962017-10-03T09:00:00-06:002018-04-25T23:39:24-06:00Plugins vs Hardware Gear<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>If you want to use compression on a vocal track, there are two routes you can take: run audio out of your interface, through a hardware compressor, and back into your computer, or just slap on a plugin compressor. Same if you want to use EQ, or delay, or reverb; you face the same two choices: hardware or software?</p>
<p>I used to get bummed by this choice: I didn't have the money for expensive hardware gear, or even a place to put it if I did. The mere mention of hardware effects felt like a reminder of poverty to me, and like a death sentence for the quality of my music. But I don't feel that way anymore, and I'll tell you why.</p>
<p> </p>
<p><span class="font_large"><strong>Advantages of Software Effects</strong></span></p>
<p>There are a number of features that make software plugins a compelling choice compared to hardware. These are the ones that come to mind for me:</p>
<ul> <li>Plugins are cheaper. Everybody loves the compressor from <a data-link-label="" data-link-type="url" href="http://www.empiricallabs.com/index.html">Empirical Labs</a> called the <a data-link-label="" data-link-type="url" href="https://www.sweetwater.com/store/detail/ELI8M">Distressor</a>. You want one of these? It costs $1,350 in hardware form. The software version, also by Empirical Labs, is only $250, and there are alternatives from other companies that are even cheaper. This is just one example, but anywhere you look, the plugin version is cheaper. There's no arguing: it costs less to duplicate and sell software than hardware.<br> </li> <li>Plugins don't require routing and bouncing. Routing an output through your interface to a hardware unit and back takes time to set up and time to bounce, not to mention conversion losses unless you have some of the best converters on the planet.<br> </li> <li>Plugins support automation. If you want the parameters of an effect to change through the song, isn't it nice to be able to precisely draw in those changes with automation instead of having to "perform" the effects unit like an instrument?<br> </li> <li>With software, you can make changes or remove plugins later. If you use hardware effects, the audio is printed back in. That means no changes without a lot of hassle.<br> </li> <li>Plugins have recall: when you open a session a week or a year later, each plugin remembers how it was last configured. If you ever decide that the lead vocal was compressed a bit too much in last week's session, you'd have to reconfigure the hardware compressor from scratch to try again, instead of just raising the threshold a bit in the plugin. The way around that is to take notes of all your settings each time you use hardware gear, and that isn't fun either.<br> </li> <li>A plugin can be used more than once in the same session at the same time. Hardware is limited to how many units you buy.<br> </li> <li>Virtually all plugins can process in stereo, whereas your favorite hardware effects unit might be restricted to mono unless you buy a second one.<br> </li> <li>Plugins have presets. You can use stock presets, or make your own as you discover how you like things to sound.<br> </li> <li>Software inserts can be rearranged later: a huge advantage for sound design and creative mixing.<br> </li> <li>Virtually all plugins can lock to the tempo of your session, for delays based on rhythm instead of absolute time. A huge convenience.<br> </li>
</ul>
<p>Wow, so maybe the benefits of software are something to consider. But don't they sound bad compared to rack-mounted gear?</p>
<p> </p>
<p><span class="font_large"><strong>Sound Quality</strong></span></p>
<p>I'm not going to lie: there are a lot of bad sounding plugins out there. But there are also bad sounding hardware effects. There are great sounding hardware units, of course, but there are great sounding software plugins too. It comes down to the talent and team size and philosophy of the people designing the product, not the form it takes.</p>
<p>A lot of people endlessly pursue vintage: it's got to be a vintage bass passed through vintage effects to a vintage amp captured by a vintage mic sent to a vintage mic preamp and routed through more vintage effects to be captured by a ... vintage interface? Okay, maybe that last one's not a thing. But why are we so stuck on vintage? And why are seemingly all new plugins just emulations of vintage gear?</p>
<p>There are a few reasons:</p>
<p>First, it's the sound a lot of people grew up with. People love to reminisce, and it's easy to wear rose-colored glasses when remembering the favorite bands of your youth. A lot of people want to try to replicate that, as if it's the only way. The truth is that a lot of that gear didn't sound that good, but nostalgia pushes us to try to recreate it anyway.</p>
<p>Second, digital sounded really bad for a long time. Early DAWs and software effects were just nasty. Even early digital-to-analog converters sounded nasty. The technology just wasn't there. We're well past that now, but it seems to have left a coppery taste in people's mouths for a long time.</p>
<p>Third, there's the weirdness aspect. A lot of beloved hardware units weren't that perfect and freaked out in unexpected ways. And sometimes those freakout settings were exactly what a sound or song needed. People came to love these units because of the warts they had, not because they were perfect. Maybe software designers are too careful to remove all the warts in a plugin, robbing us of some that might be useful or sound special. Or maybe there are warts, but all the engineers are too hooked on old gear to learn new gear enough to find them.</p>
<p>Fourth, sometimes sounding "too clean" isn't a good thing, and music needs a little grit to sound interesting and real. A little distortion can add a lot of character. The same goes for carefully sculpted noise, or other flavors of effect "failure". Saturation is the addition of new harmonics to a sound, and old hardware gear added tons of them, even when that wasn't the purpose of any given piece of gear. Those little duplicates of the sound, repeated at new frequencies, added a thickness and character that was often subtle but very real. Digital doesn't have that on its own, and as a result can sound too "sterile". Because of this, adding a saturation plugin or two to the most important elements in a song can add a desirable thickness or realness. We're learning to move music away from sounding too perfect, because too perfect doesn't always sound good. I share a lot more about this in my earlier post, <a data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/creative-processing-with-effects">Creative Processing with Effects</a>.</p>
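<p>The claim that saturation adds harmonics is easy to verify yourself. Here's a minimal Python sketch of a tanh soft-clipper, one common digital saturation model; it's my own illustration, not a model of any particular plugin or hardware unit, and the <code>drive</code> parameter name is an assumption for the demo.</p>

```python
import numpy as np

def saturate(x, drive=3.0):
    """A minimal tanh soft-clipper: push the signal into a smooth
    curve, then normalize so peak level stays comparable."""
    return np.tanh(drive * x) / np.tanh(drive)

# Feed a pure 440 Hz sine through it and inspect the spectrum.
sr = 48000
t = np.arange(sr) / sr                 # 1 second of audio -> 1 Hz per FFT bin
clean = np.sin(2 * np.pi * 440 * t)
spec = np.abs(np.fft.rfft(saturate(clean)))

fund = spec[440]       # the fundamental survives
third = spec[3 * 440]  # a brand-new 3rd harmonic appears (the sine had none)
even = spec[2 * 440]   # tanh is symmetric, so even harmonics stay negligible
```

<p>Those new odd harmonics are exactly the "little duplicates of the sound at new frequencies" described above; gear with asymmetric behavior adds even harmonics too, which is part of why different boxes have different flavors.</p>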
<p>Fifth, as a plugin designer, it's a lot easier to pick the concept for your next plugin by recreating something old than by inventing something new. I suspect this is why the plugin market is flooded with software emulations of this or that piece of vintage gear. Some of the original gear was great, but we don't need thirteen different emulations of it made by eleven different brands. Maybe that laziness in design concept is keeping plugin developers from thinking more freely and creatively. And when 90% of the new plugins coming to market are emulations of vintage hardware, it's a way of telling those of us who buy plugins that the old way is the only good way. And that's just not true.</p>
<p> </p>
<p><span class="font_large"><strong>When Hardware is an Option</strong></span></p>
<p>Maybe you have an unholy amount of money and a preference for analog gear. If you have the space to store it all, go for it. Who am I to tell anyone that what they love is wrong?</p>
<p>There is something to be said for creative limitations. Having fewer options at your fingertips for how to process sounds can help you come up with new ways to use old options. So restricting yourself, whether by owning a few choice pieces of rack mounted gear or slimming down your plugin collection, could help you engineer. It also might encourage you to lean less on engineering as a crutch and to shift your weight further onto songwriting. That's never a bad thing.</p>
<p>Engineering with fewer choices can be faster. There's no room for analysis-paralysis when you only have one option for a desired effect. And sometimes, having an effect printed onto audio can just help you move past that decision and onto what needs to be done next.</p>
<p>I met an engineer with a truly impressive studio. Money didn't seem to be an issue, and he acquired whatever he wanted when he wanted it. He told me he uses software or hardware based on each client: recording a techy young kid making modern music? The client will probably feel better seeing plugins being used over hardware. And when he's recording an old fart that doesn't trust computers? He leans on rack mounted gear for all of his processing needs. That's one way to keep the client happy.</p>
<p>If I had a couple of million to spend on a studio, I'd probably get a few fancy mics, and I'm sure I'd put some real money into preamps and converters. That said, past a certain threshold, microphones are less about sounding "better" and more about sounding "different". And if my preamps and converters were replaced by things costing 20x the price, would I be able to hear it? Would my listeners? I'm honestly not sure. Maybe I'd be able to hear it in controlled tests, so why not upgrade if I had the money? But the quality of my music would still come down to the song, the production, the performance, and the engineering. Not a fancier microphone, or better quality preamps or converters.</p>
<p>And that's it. Even if I had a couple of million to spend on a studio, I don't think I'd buy any more hardware than that.</p>
<p>To be sure, a bundle of money would go towards <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating">upgraded monitoring</a> and <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-1">room treatment</a>. Those are the gifts that keep on giving.</p>
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>When I was working out of a real, physical studio, I had access to hardware. But I realized pretty quickly that I didn't like it. The permanence clashed with my need for flexibility, the inconvenience clashed with my need to stay in the creative groove, and I never found a hardware benefit over what plugins could provide me. I write this post fully aware that I'm biased, but I'm sharing the bias I've slowly developed over years because it's the truth to me. Software lets me do more work more quickly with a greater degree of control, and I no longer feel doomed to using inferior tools. That's why I'm sharing this with you.</p>
<p>If you have analog gear and love it, that's great. I'm glad it's working for you, and I'm glad you know how to use it to its fullest extent.</p>
<p>But for those of you feeling like you're missing out because you're poor, or you don't have the space, or you have to stay mobile: you're not missing out. The greatest tools you have are <a contents="your experience" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better" target="_blank">your experience</a>, your tastes, and your creativity. Everything else is supporting, even the world's best gear. And plugins have sufficient quality and convenience to be all you need while you grow your experience, your tastes, and your creativity.</p>
<p>Are there any pros to plugins or hardware that you feel I missed? Do you have any one plugin or one rack-mounted unit you feel you can't live without? Share about it below.</p>Milo Burketag:miloburke.com,2005:Post/48421032017-09-19T10:00:00-06:002018-05-26T08:22:20-06:00Should You Go to School for Mixing or Music Production?<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>What's holding you back from making the music you want to make? Do you need education to learn the skills you need? Is it worth going to school to learn the production or engineering you want to do professionally? These are questions a lot of us have. What's the magic key to getting to where you want to be?</p>
<p> </p>
<p><strong><span class="font_large">My Background</span></strong></p>
<p>I went to school for audio engineering. It was a longer program than most - a full four-year degree. I was so hyped to begin the program. I thought it would change my life.</p>
<p>All the way through school, my professors and advisors and of course the other students all told me that getting hired by a studio afterward would be easy. It's just the way of things.</p>
<p>But here's what nobody ever told me when I was in school: the audio engineering industry is saturated. And since the advent of cheap recording gear and the concept of the 99 cent song, there have been fewer customers willing to pay professionals, the remaining customers have less money to spend as record labels take fewer risks supporting new artists, and high craftsmanship isn't valued like it used to be.</p>
<p>Sadly, music production isn't any better. Want to be a producer-for-hire? Congratulations, so do millions of other people all over the world, and the price for production is in a race to the bottom. Want to be your own artist? Nobody is going to make it easy for you: not record labels, not band members, not music bloggers or taste-makers.</p>
<p>It killed me when I couldn't find work in music. I quit entirely for five years, believing it wasn't possible for me. Those were soul-crushing years because I knew what I loved, but I wasn't pursuing it even a little. It wasn't until years after I graduated that an engineer leveled with me: I was waiting for someone to make it happen for me. But nobody does that anymore. Nobody was going to be my angel, making my career become reality for me. <a contents="People make it happen for themselves" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/becoming-more-productive" target="_blank">People make it happen for themselves</a> or it doesn't happen at all. I needed to find a way to DIY my way into the music industry, pursuing what I really love doing even if it meant I was doing it for free. Do that long enough and you become good enough for money and acclaim to find their way to you.</p>
<p> </p>
<p><span class="font_large"><strong>What Didn't Happen for Me in School</strong></span></p>
<p>Back to the topic of school. I didn't learn much at all in my classes. Almost all of the content was dumbed down to a level anyone could understand, aiming to bring laymen up to half of my working knowledge. I had spent my formative years drooling over gear I couldn't afford in every category of the Musician's Friend catalog. I wasn't a prodigy. But I already knew what a DI box was and what the various knobs on a mixing console do, which put my knowledge beyond the scope of the program, it seemed.</p>
<p>My internships and sucking up to potential bosses never panned out into jobs. Partly because I had the wrong expectations, that I was desirable enough to be a paid employee instead of an entrepreneur paving his own way. And partly because, again, the music industry is saturated. People with 20 years of experience were also looking for work, and still are, so why would anybody hire a kid fresh out of school? What are a few classes compared to years of hands-on work and practical experience delivering results?</p>
<p>And I didn't connect with other students either. I was branded as an ultra-nerd because I loved the stuff too much, and my self-accumulated knowledge extended far past the basics. My passion scared away potential collaborations and friendships. I felt disappointed in other students because none seemed as hungry as I was. And they avoided me because they probably felt I had a superiority complex. Maybe I did.</p>
<p> </p>
<p><span class="font_large"><strong>Where I Did My Learning</strong></span></p>
<p>That isn't to say I didn't learn or grow during school. It's just that the learning and growing didn't come from the directions I expected. These are the five ways I learned the most, from least to most helpful:</p>
<p>5) Asking professors questions after or outside of class. Some of them had remarkable stories and could answer questions far more involved than the curriculum covered.</p>
<p>4) Forums and websites that fueled my curiosity. Sites like GearSlutz are terrible and wonderful. Swim, but don't dive too deep.</p>
<p>3) Recording other students outside of class. This made for better hands-on experience than my curriculum did, and helped stretch me outside of what I thought I could do into different genres and different roles.</p>
<p>2) Reading textbooks that weren't assigned. Do you realize you can buy textbooks on any subject that you want to learn? You can, and you don't need to take a class to learn! Make your own education.</p>
<p>1) Making my own music outside of the school studios in my dorm room, alone, unrelated to my assignments. This is where I learned my way around a DAW better than my classes prepping me for Pro Tools certification could, where I began to understand the core of composing and producing with virtual instruments, and <a contents="where I discovered my mixing style" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/minimalism-in-mixing" target="_blank">where I discovered my mixing style</a>. Time spent alone, late at night, freely creating was the best teacher I've ever had. And you <a contents="don't need great gear or expensive tools" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/plugins-vs-hardware-gear" target="_blank">don't need great gear or expensive tools</a> for that. I certainly didn't have any. You just need passion, creativity, and time.</p>
<p> </p>
<p><span class="font_large"><strong>How School Might Be Different for You</strong></span></p>
<p>Maybe you'll attend a better school with a stronger program than I did. Maybe the classes will contain significantly more content, and the faculty will be more in tune with the students and the program. But be skeptical as you read about schools: in their own advertising, every school has the best program, the best professors, the best classes, and the best studios.</p>
<p>Maybe the school you find will have structures in place to help you find work in your chosen field after you graduate. Perhaps even better, maybe the school you find will set realistic expectations about what the industry looks like, and how each member of the industry needs to think and act like an entrepreneur in order to be successful. The sooner you start thinking of your career as "Me Incorporated", the more likely you are to survive and find your way through it.</p>
<p>Maybe you'll connect better with the other students than I did and start to make things happen, like forming a band that can make it, or starting a company that offers real value. People view college dropouts as failures. But if you drop out because you thought of the company you want to start and you found the business partners that you need, you're better prepared for the real world than the rest of the students that finish school without developing those plans and connections. A degree doesn't mean much anymore, but a career plan is everything.</p>
<p> </p>
<p><span class="font_large"><strong>The Takeaway</strong></span></p>
<p>I'm writing this in part to warn you: attending a school probably won't open a lot of doors for you, despite what the school's marketing material says. And there's a fair chance it won't even teach you that much.</p>
<p>I'm writing this to let you know that how much you learn is in your control, in or out of school. You can find educational tools anywhere, from textbooks ordered through Amazon to YouTube channels (perhaps like mine). And the best teacher of all will be experience - the kind earned through <a contents="hundreds to thousands of hours of hands-on, brain-engaged activity" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better" target="_blank">hundreds to thousands of hours of hands-on, brain-engaged activity</a>, pursued passionately around the requirements of living.</p>
<p>Maybe school will help you. If so, dive in and make the most of it, in and out of the classroom. Or maybe you don't need school. You just need time to incubate your skills, and time to discover where you want to be in the industry in five years so you can DIY your way there.</p>
<p>If you've been to production or engineering school, are attending now, or are thinking about going, I'd love to hear about it. My story isn't the only one. Whether you've had good or bad experiences, please share in the comments below.</p>Milo Burketag:miloburke.com,2005:Post/48229522017-09-04T09:00:00-06:002018-07-29T16:04:17-06:00The Core of Mixing<p><strong><span class="font_large">Introduction</span></strong></p>
<p>Mixing audio can seem like a black art. There are millions of tips around the internet on how to do this or that to create a great mix. And I'm sure you've seen just as many tutorials by the masters as I have. The problem is that 99.9% of these are based on some minute little trick to handle this tiny little situation, or it's using gear you don't have to fix problems you also don't have.</p>
<p>The truth of the matter is that there's far too much information on the specifics, but not enough on the generals. That's what we're going to focus on today: the big picture of mixing. Though I'll include a few specifics for examples, and to spur your creativity.</p>
<p>So what is the view from 10,000 feet? Where do we begin?</p>
<p> </p>
<p><span class="font_large"><strong>1) Levels</strong></span></p>
<p>Like it or not, the core of a mix comes down to the <a contents="levels of each track in the mix" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-trick-to-perfect-volume-balancing" target="_blank">levels of each track in the mix</a>. Getting the levels right is far more important than some magical plugin chain you like to use on this or that instrument.</p>
<p>What do we aim for? Two things:</p>
<p>First, make sure every instrument can be heard. Even in dense mixes with a lot of layers, you should be able to pick out each specific instrument and hear what it's doing. To do this, you'll naturally have to mix percussive elements louder than steady elements: if drums are the same volume as the organ or pad, you're going to hear all organ/pad and no drums. It's good practice to mix steady instruments like strings and pads low in the mix since they can be heard between drum beats, and the drums come in loud to carry the song. When mixing, make sure you can hear each and every instrument. Each instrument doesn't need to be bold, or to carry the interest all the time. In fact, most listeners can't easily keep track of more than three elements at a time, so it's okay for smaller sounds introduced earlier to fade into the background of the mix. But they should still be audible to you and others if you're listening closely.</p>
<p>Second, you want to let the powerful instruments speak when they need to. Are the vocals popping out during the chorus? Is the guitar carrying the song during the solo, or the synth hook carrying the song during the drop? They should be. While you want everything to be heard, you want the most important parts to be heard the loudest.</p>
<ul> <li>Tip: it can be a lot easier to hear the balance of instruments when things are really quiet. Turn your speakers down very, very low. You know your mix is sitting right when you can still hear everything when the volume is low, but the most important parts still stick out of the mix and sound special. If the song sounds good quiet, it will sound great loud.</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>2) Equalization</strong></span></p>
<p>I'll be honest: I don't believe in trademark EQ curves for specific instruments or to somehow give a certain signature sound. That all happens elsewhere. If you're using EQ right, it's more of a cleanup tool than a space for creativity. Though it sometimes takes creativity to best clean up a mess with EQ.</p>
<p>What is EQ for? Two things:</p>
<p>First, make sure each track doesn't have any problems when soloed (or when not-soloed, if you're an experienced mixer). If a voice is honky or a hi-hat is sizzly or a kick drum is boomy, these are things you can clean up with EQ. You want to make sure each instrument and layer in your mix sounds good in isolation. Though boosting what sounds good can seem easy, usually cutting what isn't needed is the better and faster route to fixing your problems.</p>
<p>If I hear a problem but don't know where to begin, I insert a big boost with my parametric EQ and sweep around the frequency until I find the heart of the frequency range that's giving me problems. Then I cut where that big boost is, removing the problem I was looking for. Special use cases aside, I strongly recommend using a parametric EQ over a graphic EQ. You just have more control. And it can be a big help if your parametric EQ has a spectrum analyzer built in: it makes it super easy to see where unwanted peaks live, and also to know at a glance if there's too much very high or very low energy in your song.</p>
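<p>To make the sweep-and-cut idea concrete, here's a minimal sketch of the math behind a single parametric (peaking) band, using Robert Bristow-Johnson's widely shared "Audio EQ Cookbook" formulas. The 800 Hz center, -6 dB cut, and Q of 2 are just illustrative values, not a recipe:</p>

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for a peaking EQ band, per the RBJ cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]   # normalize so a[0] == 1

def magnitude_at(b, a, f, fs):
    """Linear gain of the filter at frequency f, from its transfer function H(z)."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z**2
    den = a[0] + a[1] / z + a[2] / z**2
    return abs(num / den)

# A -6 dB cut at 800 Hz: exactly -6 dB at the center, near 0 dB far away from it.
b, a = peaking_eq(fs=48000, f0=800, gain_db=-6.0, q=2.0)
print(round(20 * math.log10(magnitude_at(b, a, 800, 48000)), 2))   # -6.0
print(round(20 * math.log10(magnitude_at(b, a, 100, 48000)), 2))   # close to 0.0
```

<p>In practice your DAW's EQ does all of this for you; the point is that a cut is nothing more mysterious than a dip in the filter's gain curve, centered where your sweep found the problem.</p>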
<p>Second, EQ is a great tool for getting the many layers in a song to sit well together. Sure, an electric guitar might sound great when it's played full-spectrum while soloed. But there's a better than good chance it will blend with the low-end and high-end of the song better if you band-limit the electric guitar to the frequencies relevant to where it sits in the mix: if you cut the low-end out of the electric guitar, the synth bass or electric bass will sound that much cleaner and more powerful because of it. And that's a worthwhile trade. For the same reason, make sure no two dominant instruments are masking each other in frequency, and that no background instruments are masking dominant instruments.</p>
<p>Another example of using EQ to help instruments fit nicely with each other is to roll off unneeded frequencies. If my cymbals or drums are sounding a bit too shrill, I like to add a low-pass filter at the very top of the frequency spectrum, just to tame the extreme highs a little. And it's common practice for me to roll off the lows on most every instrument that has them. I even use a high-pass filter on my kick drums and bass synths to cut out the extreme lows: getting rid of the stuff below 30-50 Hz (depending on the song) not only frees up headroom to give the song more perceived volume, but it just makes the bass sound punchier and cleaner, even when played on a system that can produce evenly down to 20 Hz. Oftentimes those lowest frequencies just aren't adding anything to the song.</p>
<ul> <li>Tip: EQing in mono is a fantastic trick to help get your mix sounding right. Not only does it help your song's mono-compatibility, but it gives you an edge on instrument separation. Stereo sound is a magnificent thing, and of course, your mix will sound its best when it's good and wide, full of exciting stereo content. <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating">But if your speakers are set up even halfway right</a>, you're mixing with a lot more stereo separation than the average listener ever will hear. All that separation can make frequency carving for specific instruments seem less valuable, but that simply isn't true. Set your monitor controller to mono, or put a plugin on the end of your master fader that converts your mix to mono, then do your EQing for separation. When you turn stereo back on, your mix will sound better than ever.</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>3) Panning</strong></span></p>
<p>As I mentioned, the invention of stereo is an incredible thing. Nothing beats having a wide, clean mix that just sings from the speakers. This might be second-nature to some of you, but maybe others could use a little direction here.</p>
<p>Generally speaking, the lead elements of the song and the bassy elements of the song are panned center. There's a very good chance you want your lead vocal, your snare drum, your kick drum, and your bass panned to the middle. And if there's a lead guitar solo, for example, that would probably sound good centered too.</p>
<p>Auxiliary instruments usually sound good spread out. Maybe a piano layer is panned off to one side, and a rhythm instrument to another side. In the context of the lead instruments in the center, the mix will start to sound bigger with these less important instruments panned out. There are two ways to do this: the natural way and the artificial way. The natural way is to pan all instruments as if you were looking at the musicians on a stage: the lead vocalist stands center, the keyboard player is off to one side, the rhythm guitarist is off to the other side, etc. Try to replicate how you see bands arranged on stage. The artificial way is more common for largely digital music, where the song may be many layers of synths, all of which are stereo. Lead elements should sit in the middle, important wide elements can be panned as wide as the sky, and less important stereo layers can still be panned directionally. For example, a stereo synth's two channels could be panned 90% left and 20% left to give it a left-side bias. Then pan a different synth to the opposite side to balance out the mix.</p>
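<p>Under the hood, a pan knob on a mono source is usually just a pair of gain multipliers. Here's a quick sketch of the common -3 dB constant-power pan law; the pan positions are arbitrary examples:</p>

```python
import math

def constant_power_pan(sample, pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Constant-power (-3 dB) pan law: left and right gains trace a quarter
    circle, so perceived loudness stays roughly even across the field."""
    angle = (pan + 1) * math.pi / 4          # maps pan to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

l, r = constant_power_pan(1.0, 0.0)          # centered
print(round(l, 4), round(r, 4))              # 0.7071 0.7071 (-3 dB per side)
l, r = constant_power_pan(1.0, -0.8)         # biased well to the left
print(l > r)                                 # True
```

<p>At center, each channel carries about 71% of the signal, which is what keeps the overall level from dipping as a sound sweeps across the stereo field.</p>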
<p>Generally speaking, the mix will sound best when the volume from left and right are about equal. My meters show that a lot of my mixes aren't exactly balanced, especially not all the time. But as long as both sides of the mix sound balanced to the ear, it all works out.</p>
<p>Also, doubling is a powerful tool for adding width and dimension to a track. Recording two layers of a rhythm guitar part and panning them 100% left and 100% right is the oldest trick in the book for adding stereo power: if the performances are tight, it will sound like one guitar part, but your ear hears that the two takes are slightly different and perceives one instrument with incredible space and size. Doubling vocals and panning them is also a great trick to add weight, power, and width, especially with harmonies and layers in addition to the lead vocal.</p>
<ul> <li>Tip: don't forget about stereo effects. You might be accidentally routing an instrument to mono reverb when stereo reverb likely sounds better. Guitar effects emulators can sound a lot different in stereo too, even when fed a mono guitar signal. You can add a lot of perceived space by sending a left-panned instrument to a right-panned reverb. And adding ping-pong delay can really amp up the width and space of your mix.</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>4) Excitement</strong></span></p>
<p>The last job of the mixing engineer is to make the track exciting. One might say this is optional, but all of the best mixes add a little spice to keep the interest flowing.</p>
<p>In modern electronic music, sidechain compression or volume-shaping can add a lot of excitement to the mix. Dialed in correctly, you get a core element or the entire song pumping and moving to the rhythm of the song. The classic technique is to put a compressor on a lead element or a bus of multiple elements sidechained to the kick drum, so the other instruments duck out of the way each time the kick drum is heard. I prefer using a volume-shaping plugin instead: it's easier to implement than setting up a sidechain, it's far faster to get a desirable shape dialed in, and you can intentionally use a volume-shaping pattern separate from the kick drum. For example, if you have the kick hitting every quarter note, using volume shaping on the pad synth or upper bass synth layers set to dotted quarter notes could sound really interesting.</p>
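<p>If you're curious what a volume-shaping plugin is doing internally, it's essentially a periodic gain envelope synced to the tempo. Here's a toy sketch assuming 120 BPM and a simple duck-then-ramp shape; real plugins let you draw arbitrary curves:</p>

```python
def duck_envelope(t, bpm=120, depth=0.6, recover=0.25):
    """Gain (0..1) at time t in seconds: drop to (1 - depth) at each quarter
    note, then ramp linearly back to full volume over `recover` of the beat."""
    beat_len = 60.0 / bpm                  # seconds per quarter note
    phase = (t % beat_len) / beat_len      # 0.0 at the beat, approaching 1.0 before the next
    if phase >= recover:
        return 1.0                         # fully recovered until the next beat
    return (1 - depth) + depth * (phase / recover)   # ramping back up

print(duck_envelope(0.0))    # 0.4  (ducked hard right on the beat)
print(duck_envelope(0.25))   # 1.0  (recovered by the half-beat)
```

<p>For the dotted-quarter idea above, you'd simply stretch the envelope's period to one and a half beats instead of one, so the pumping pattern runs against the kick.</p>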
<p>Reverb is a staple tool in mixing. Use it to add dimension and space to your mix. This might take the mix from a dull, tight room to a small hall or a large hall, depending on your preference. Used subtly, it can add glue to the mix and presence to the vocals and key layers while still sounding dry. Used moderately, it can make lead synths and guitars sound huge and anthemic. Used aggressively, it can make instruments sound muffled and pushed into the background, contributing to a vintage, lo-fi sound.</p>
<p>If you want to learn more about reverb, <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/smooth-sounding-reverbs">check out my post on maximizing reverb</a>. And be creative: remember that you can put effect plugins on a reverb bus: maybe distortion, maybe amp emulation, maybe volume shaping, maybe sidechain compression to the source so the reverb swells only when the source goes quiet.</p>
<p>Delay is another staple tool in mixing. Used subtly, it bolsters the strength and warmth of vocals and keyboard instruments. Used more aggressively, it can fill up holes in the mix: for example, vocal delay is a great way to add interest to a pause after a vocal phrase. Get creative and see what options your delay plugin gives you. I love the flexibility and control my <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017">favorite delay plugin</a> offers.</p>
<p>These are just examples. But if the instruments in the mix aren't enough to make the song exciting, then reach into your toolbox to find an effect that can help. Which effect you use and how you use it is up to you. Just be sure that it doesn't significantly alter the levels, EQ balance, and panning that you worked so hard to achieve.</p>
<ul> <li>Tip: the output of a virtual instrument or the recorded track from a physical instrument doesn't have to be the final sound. Get creative by throwing amp emulators or distortion plugins or filtering plugins or multi-effects plugins onto instruments. A lot of what you try won't sound good, but once in a while, you'll stumble across a killer effect that adds incredible character to the instrument. In my own music, often the bite and character of the hook owe all of their interest to the happy accident of finding the perfect preset in an effects plugin added after the instrument was recorded. If this interests you, <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/creative-processing-with-effects">check out my guide on adding character to your tracks</a>.</li>
</ul>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>If you've made it this far, you know the core of mixing. Get the levels sounding right, subtly shape with EQ to solve problems and create space, pan things around for width and separation, and add excitement through effects. I can't promise your mixes will sound 100% better after reading this. After all, a beginner drummer can't suddenly become a master after reading a single how-to article. Learning takes time, and it almost entirely comes down to how much experience you have: how many hours you've spent mixing, how many mixes you've made, and how proactive you are in learning from pro mixes as you hear them.</p>
<p>It's quite possible that in your experimentation with mixing, you've developed some bad habits. <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/minimalism-in-mixing">Approaching the mix from a minimalist's perspective</a> can free you from those bad habits.</p>
<p>Also, you'll probably have to use <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song">reference checks</a> to hear your song in perspective.</p>
<p>That said, the purpose of today's article is to largely bypass the subtle tricks here and there in order to focus on the heart of mixing. And if you get these elements right, your mixes will sound really strong.</p>
<p>Now that we've covered the basics, is there anything that you'd like to add? Any favorite tricks you'd like me or my followers to know? Please write them in the comments below.</p>
<p> </p>
<p>P.S. You'll notice I didn't include a section for compression. I don't believe compression plays a major role in the fundamentals of mixing. Sure, it can even out the volume of a dynamic instrument to better keep the levels of your mix sounding consistent, or it can add sustain to percussion or pumping to the mix for extra character. But all of these uses fall under the Levels and Excitement components of mixing. Compression is just another tool to be used only when it's needed and otherwise ignored. Not a core component of mixing.</p>Milo Burketag:miloburke.com,2005:Post/48186152017-08-22T07:45:00-06:002018-08-11T19:28:34-06:00How to Give Your Song the Perfect Loudness<p style="text-align: center;"><strong><em><span class="font_large"><span style="color:#999999;">Note: there is an</span> <a contents="there is an update to this article" data-link-label="" data-link-type="url" href="https://www.miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness-update" target="_self">update to this article</a> <span style="color:#999999;">(August 2018). </span></span></em></strong></p>
<p> </p>
<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>In a previous post, I covered <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war">how the Loudness War started, and why it's bad for the sound of music from increased distortion and decreased dynamic range</a>. Fortunately, there's a glimmer of hope in that many of the popular streaming services are turning down songs that are too loud in order to even out playback volume for listeners. As of this writing, YouTube, Spotify, Apple Music, and Tidal all turn down songs that are too loud, eliminating any benefit from exceedingly loud mastering.</p>
<p>That's great, but if you don't know how loud to make your music under the new rules of the streaming services, it won't do you any good. That's what we're covering today.</p>
<p>I'll warn you right now: this probably won't be fun to read. I'll have to write a lot more digits and acronyms than I'd like. But the volume of your song can't be taken back after release. Getting your head around this one sticky topic will pay off permanently: you'll know to turn your music up if you've been mastering too quietly, or, more likely, you'll improve its sound quality with no penalty for the vast majority of listeners who hear it through streaming services, radio, and modern media players.</p>
<p> </p>
<p><span class="font_large"><strong>Identifying Your Target Loudness</strong></span></p>
<p>This is where the supremely helpful Ian Shepherd (British mastering engineer and author of the <a data-link-label="" data-link-type="url" href="http://productionadvice.co.uk/blog/">Production Advice</a> blog) comes in. He has a very nifty chart outlining the loudness recommendations from the Audio Engineering Society, along with the target loudness of four streaming services, and how each one handles songs that are louder and softer than the target loudness.</p>
<p><a data-link-label="" data-link-type="url" href="http://productionadvice.co.uk/online-loudness/">Check out this super useful chart that Ian made, visualizing all of this</a>.</p>
<p> </p>
<p><span class="font_large"><strong>Coming to Terms with Loudness</strong></span></p>
<p>That chart is really helpful if you know what LUFS are and how to measure them. Otherwise, it's just further muddying the waters!</p>
<p>As you probably know, the standard unit of loudness is the decibel, or dB for short. One decibel is roughly the smallest change in level a person can reliably hear, though trained engineers can often pick out even smaller differences. A quiet fan in your computer might whir at 35 dB, a loud restaurant may be 80 dB, and a loud concert may be 110 dB. It's not a linear scale: an increase of about 10 dB is perceived as roughly twice as loud, so 80 dB sounds about twice as loud as 70 dB, and 90 dB about four times as loud. Meanwhile, an amplifier requires double the wattage to play just 3 dB louder; an amplifier pulling 25 watts would need to pull 100 watts to play 6 dB louder. The loudest possible sound in air by this scale is around 194 dB, limited not by the sound source, but by the physics of air and Earth's atmospheric pressure: any louder, and the sound would behave closer to a shockwave than a sound wave. That said, sounds much louder than 194 dB are physically possible underwater or in the atmospheres of other planets.</p>
<p>In the digital realm, it's much easier to achieve the loudest possible sound: decibels are relative to "full scale", or the loudest that a digital file can store a sound. In a session, 0 dBFS (0 decibels full scale) is that max ceiling, since peaks can't exceed 0 dBFS without being written to the file as pure distortion. As an example, recording vocals at quiet levels might have you record in at -20 dBFS, and you may turn them up by 5 dB in your session to play back at -15 dBFS. Everything we do in our DAWs is in the negative scale, working down from 0 dBFS. It's just how digital audio works.</p>
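<p>The relationship between a linear sample value and dBFS is a single logarithm. A quick sketch:</p>

```python
import math

def to_dbfs(linear):
    """Peak level in dBFS for a linear sample value (1.0 = full scale)."""
    if linear <= 0:
        return float("-inf")              # digital silence
    return 20 * math.log10(linear)

def from_dbfs(db):
    """Linear amplitude for a dBFS value."""
    return 10 ** (db / 20)

print(round(to_dbfs(1.0), 2))    # 0.0    full scale
print(round(to_dbfs(0.5), 2))    # -6.02  half amplitude is about -6 dBFS
print(round(from_dbfs(-20), 4))  # 0.1    the quiet vocal example above
```

<p>This is why everything in a DAW lives on a negative scale: 1.0 is the hard ceiling of the file format, and every usable level sits some number of dB below it.</p>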
<p>Loudness in music is tricky to measure. Sure, we can measure the peaks according to dBFS, but that doesn't really tell us anything about the average loudness of a song, but rather just how loud a single moment was. Classic VU (volume unit) meters measure average volume levels over time, but they often don't respond the same to different styles of music or even different songs of the same style of music. And how much time is averaged can vary the reading significantly. The same problems apply to RMS meters and their readings.</p>
<p> </p>
<p><span class="font_large"><strong>Understanding LUFS</strong></span></p>
<p>Enter the solution to this problem: LUFS, short for Loudness Units relative to Full Scale. The history of LUFS traces back to the International Telecommunication Union's BS.1770 recommendation, first published in 2006, and more recent revisions have clarified and expanded upon what has gradually become the unequivocal new standard for average loudness measurement.</p>
<p>Fortunately, it's easy to understand: one LUFS unit is the same size as one decibel, so understanding the scale becomes simple if you're familiar with any traditional meters relying on the decibel for scale. The value of the scale, however, is two-fold: there's a lot of math happening behind the scenes that makes LUFS a much more accurate and consistent measure of average loudness than VU meters and RMS meters, and it's been widely adopted by streaming services and government broadcast regulations alike as the one metering algorithm to pay attention to.</p>
<p>Also, fortunately, reading it is easy once you get your head around it. LUFS can be tracked by the momentary value (averaged over 400 ms), and by the short-term value (averaged over 3 seconds). But the integrated loudness is tracked and averaged over the entire length of your program, meaning that whether you're working on a three-minute song or a forty-five-minute TV episode, the integrated LUFS is a single value that sums up the loudness of your entire project.</p>
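<p>To see how the time windows relate, here's a deliberately simplified sketch of windowed loudness measurement. One big caveat: the real BS.1770 algorithm adds K-weighting (a frequency-dependent ear model) and gating, so this toy version will not agree with an actual LUFS meter; it only illustrates the momentary-versus-integrated idea:</p>

```python
import math

def pseudo_lufs(samples, fs, window_s):
    """Mean-square level over consecutive windows, in dB relative to full scale.
    Simplified: no K-weighting, no gating, mono only; purely illustrative.
    Returns (per-window values, one overall 'integrated' value)."""
    n = int(fs * window_s)
    windows = [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]
    def db(chunk):
        ms = sum(x * x for x in chunk) / len(chunk)   # mean square power
        return 10 * math.log10(ms) if ms > 0 else float("-inf")
    return [db(w) for w in windows], db(samples)

# Three seconds of a full-scale 997 Hz sine (the tone loudness specs test with)
fs = 48000
sine = [math.sin(2 * math.pi * 997 * t / fs) for t in range(fs * 3)]
momentary, integrated = pseudo_lufs(sine, fs, 0.4)    # 400 ms windows
print(round(integrated, 1))   # -3.0: a full-scale sine has about -3 dB mean square
```

<p>The per-window list is what a meter's bouncing "momentary" readout shows; the single number over the whole file is the integrated figure the streaming services care about.</p>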
<p>A lot of different brands of plugins are now offering tools that can measure LUFS. iZotope, Waves, TC Electronic, MeterPlugs, Dolby, Avid, NuGen, Mastering the Mix, Klangfreund, and HOFA are just a few. Youlean even makes a free plugin that measures LUFS, though I haven't used it personally. I use <a data-link-label="" data-link-type="url" href="https://www.izotope.com/en/products/mix/insight.html">iZotope Insight</a>, <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017">which I share more about here</a>. Pick a plugin to do the job, and find where the short-term LUFS value and the integrated LUFS value are displayed. These are the primary numbers to pay attention to.</p>
<p> </p>
<p><span style="font-size: 16.8px;"><b>The Advantages of Each</b></span></p>
<p>Since LUFS-integrated captures the loudness of the entire song with one value, it's easy to read. And since streaming services use algorithms closest to LUFS-integrated to determine how much to turn down loud songs, there's a built-in argument for focusing exclusively on integrated LUFS.</p>
<p>When using LUFS-integrated, targeting a value of -13 to -14 for your entire song makes finalizing your song's loudness easy. At these levels, you're likely not compressing so much as to cause audible issues, but you're also not giving up useful loudness without cause.</p>
<p>But what the integrated figure doesn't take into account is how dynamic your song is. If your song is high energy throughout, with instruments playing from 7 to 8 intensity on a scale of 1-10, the <a contents="macro-dynamics" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/micro-dynamics-and-macro-dynamics" target="_blank">macro-dynamics</a> of your song will naturally be very understated. By comparison, another song that starts out at an intensity of 1 and rises all the way to an intensity of 10 will sound much louder during the loud sections, assuming you set your limiter for each song to target the same integrated LUFS value.</p>
<p>The other approach is to measure using the short-term LUFS value. If the loudest three seconds of your song ring in at -9 short-term LUFS, then your song is likely loud enough to be reasonably competitive while being quiet enough to maintain dynamic range.</p>
<p> </p>
<p><span class="font_large"><strong>Deciding Between Short-Term LUFS and Integrated LUFS</strong></span></p>
<p>Some mastering engineers recommend using the short-term LUFS value to determine the final loudness of your song. Proponents of this method are right: using integrated LUFS doesn't take into account the differences between a song that's consistently loud and a song that has occasional momentary loudness. In this way, measuring with integrated LUFS can mislead you.</p>
<p>Other mastering engineers recommend using the integrated LUFS value to determine your song's loudness. After all, momentary loudness doesn't define the loudness of the whole song, and this measurement is closer to the algorithms the streaming services use to determine how much to turn down loud songs.</p>
<p>You can use whichever method makes more sense to you. Ian Shepherd prefers to measure by short-term LUFS. I personally find more sense in using integrated LUFS. Either way, you'll have to adjust the volume of the rest of the songs on your album by ear to have them best match the first song's loudness.</p>
<p> </p>
<p><strong><span class="font_large">The Answer to Loudness Measuring</span></strong></p>
<p>So there you have it: employ very light limiting at the end of your mastering chain. I recommend setting a target true-peak ceiling of -1.0 dBTP, in part because volume normalization algorithms are sensitive to peaks above that level and would only turn down the volume of your song further, and in part because you don't want the peak overshoot from mp3 conversion to push your song into distortion.</p>
<p>Then place your LUFS-aware metering plugin after your limiter, and play your song from start to finish to discover the peak value of the short-term LUFS and the integrated LUFS. The values may start low and then climb higher during louder portions of your song. That's normal; we're just looking for the final values after your song is finished playing. If you're measuring by short-term LUFS and your metering plugin shows a value of -11.7 after the song is finished, then you can likely crank your limiter up by an extra 2.7 dB (targeting -9 LUFS short-term). And if you're measuring by LUFS-integrated and your metering plugin shows a value of -15.4, you can ratchet up your limiter by an extra 1.4 dB (targeting -14 LUFS-integrated).</p>
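<p>The arithmetic in this step is simple enough to sketch. The function name below is mine, not any plugin's, and it bakes in the working assumption from the text that pushing the limiter roughly 1 dB harder raises the measured loudness by roughly 1 dB, which is exactly why a verification pass by ear and meter still matters.</p>

```python
def limiter_gain_adjustment(measured_lufs, target_lufs):
    """dB of extra limiting needed to move a measured loudness to a target.
    Assumes ~1 dB of extra limiter drive yields ~1 dB of extra loudness."""
    return target_lufs - measured_lufs

# The two examples from the text:
short_term = limiter_gain_adjustment(-11.7, -9.0)    # 2.7 dB more limiting
integrated = limiter_gain_adjustment(-15.4, -14.0)   # 1.4 dB more limiting
```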
<p>Play your song from start to finish again to check that the result of changing your limiter's threshold is as expected and that you met the final LUFS-integrated or LUFS short-term value you were targeting.</p>
<p>Limit any more aggressively than that and the majority of your audience won't hear the volume advantage anyway. Using this method, your music will sound as loud and clear as any on the popular music streaming services, and the dynamics of your music will likely sound superior since your music was mastered in the post-Loudness War era. Congratulations. </p>
<p> </p>
<p><strong><span class="font_large">Wrapping Up</span></strong></p>
<p>You probably need a nap after that. I know I do.</p>
<p>But if you've made it this far, you know which brands make meter plugins that are useful, which loudness scales to pay attention to, and which target loudness is ideal for your music. That's a major win.</p>
<p>Now start releasing dynamic music! And do it without fear. This is the death of the Loudness War, and it's a beautiful thing.</p>Milo Burketag:miloburke.com,2005:Post/48030852017-08-08T12:40:00-06:002018-05-03T08:41:45-06:00One of My Favorite Instruments: Part 1 (August 2017)<p>I'm something of a nerd for audio software made with a little creativity and a little love. Hopefully you caught my previous post <a contents="sharing three effects plugins that I rely on heavily" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017" target="_blank">sharing three effects plugins that I rely on heavily</a>. Today, I'll be giving you a glimpse at one of my favorite instruments: <a contents="Addictive Keys" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/addictivekeys" target="_blank">Addictive Keys</a> by <a contents="XLN Audio" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/" target="_blank">XLN Audio</a>.</p>
<p>Yeah, I know. It's just another virtual piano plugin. Hooray. But Addictive Keys is so much more than that. I'd be surprised if any songs I've made since I bought it don't use it in some way. Many of my songs have seven or more layers of it. And a recent track I made for a client used<em> sixteen</em> layers of Addictive Keys. Not all for piano sounds, of course. I'll share more below.</p>
<p> </p>
<p><span class="font_large"><strong>But First, the Basics</strong></span></p>
<p>Addictive Keys offers four different piano styles: Studio Grand, Modern Upright, Electric Grand, and Mark One. You can purchase any one of these individually, or pay more for a larger package including more pianos. I started small and worked my way up to owning all four.</p>
<p>Each instrument comes with three pages of stock presets, ranging from basic sounds to more creative patches.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/9e33b015c94a1b292e9caa99a1f403251568cf7e/original/electric-grand-1-800wide.jpg?1502136127" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>The first page of Electric Grand presets offers sounds you might expect</em></p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/0b8c8d21a0466da8815c2e418b92ce03c0c99c05/original/electric-grand-2-800wide.jpg?1502136127" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>The second page of presets for Electric Grand gets a little more creative</em></p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/922fec09e759a31986616a0eaac8f6b5ebecbed0/original/electric-grand-3-800wide.jpg?1502136128" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>And the third page of presets for Electric Grand is downright adventurous</em></p>
<p>Things get interesting when the presets delve more deeply into the effects package. Even just from the stock presets, you can hear a wealth of different sounds based on those templates, from creamy crunch to delicate side-chain pumping to the truly bizarre. When loaded with effects, the pianos are rich with character while still sounding high-fidelity. This, in my opinion, is what Addictive Keys truly excels at.</p>
<p> </p>
<p><span class="font_large"><strong>The Sounds</strong></span></p>
<p>I wouldn't say that the standard, piano-like patches for Studio Grand and Modern Upright sound the most stunning of all virtual pianos I've heard. Some by Native Instruments glisten and sound larger than life. But unlike those Native Instruments pianos, I find the Addictive Keys pianos far more functional in a mix: within the proper context, they sound more real, they fit better in the mix, and they lack the phasey quality many Native Instruments pianos get when they're not panned as wide as the sky.</p>
<p>Studio Grand and Modern Upright sound as expected. They get you clean, bold piano sounds or vintage, distressed piano sounds. But XLN Audio ups the game with Mark One, one of the best sounding electric piano emulations I've played, and by far the most usable in my experience. And Electric Grand offers whole new palettes of piano-like sounds that I find just fascinating and include in many of my songs.</p>
<p>One nice aspect is that you can select from different microphones and input sources at different locations. Especially with the electric pianos, playing with arrangements of input sources gets really fun.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/9e50e9d662d2b9a976bcd864c70b9594a644ff5d/original/sound-sources-800wide.jpg?1502136128" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>A list of available sound sources and inputs</em></p>
<p> </p>
<p><span class="font_large"><strong>The Effects</strong></span></p>
<p>Addictive Keys ships with quality base sounds, but the effects package is where it shines. I know, I know, effects are normally the part of a virtual instrument to gloss over. I usually do, since my DAW already holds a wider variety of better-sounding effects plugins that I'm familiar with. But what XLN Audio does with effects is something special.</p>
<p> </p>
<p>First, they have a layer of pitch and filter effects on the root instrument. Dissonance and vibrato add character to the instrument that you don't get in a lot of virtual pianos that just sound too clean or too flat. You can do a lot of wacky stuff with volume envelopes and filter envelopes too.</p>
<p>And one particular knob I've never found in a virtual instrument before: "Sample Shift" pitches the samples up or down by semitones, which is then offset by using MIDI to transpose notes in the opposite direction. For example, setting this knob to -10 semitones and playing C3 would actually play D2 pitch-shifted up 10 semitones to sound like C3, which results in a lovely, altered, otherworldly piano with an unnatural but fascinating overtone response. Twisting this one knob brings out fantastic character that I've never heard in another virtual piano.</p>
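<p>The note math behind that knob can be sketched like so. This is just my model of the behavior described above, not XLN Audio's implementation: the function name is mine, and the MIDI numbering assumes the common C4 = 60 convention.</p>

```python
def sample_shift(played_midi_note, shift_semitones):
    """Model of the Sample Shift knob: trigger a different stored sample,
    then pitch-shift it back so the sounding note is unchanged."""
    source_note = played_midi_note + shift_semitones  # the sample actually triggered
    pitch_ratio = 2 ** (-shift_semitones / 12)        # shift applied to that sample
    return source_note, pitch_ratio

# Knob at -10 while playing C3 (MIDI 48): the D2 sample (MIDI 38) is triggered,
# pitched up 10 semitones (a ratio of about 1.78) to sound like C3 again.
source, ratio = sample_shift(48, -10)
```

<p>The sounding pitch is "correct" either way; what changes is the overtone structure of the stretched sample, which is where the otherworldly character comes from.</p>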
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/1cb7dc5c9c69fa383ac5fad5ea943199876c8a84/original/note-fx-800wide.jpg?1502136127" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>The note effects, including Sample Shift</em></p>
<p style="text-align: center;"> </p>
<p>Second, each of the sound inputs (microphones, etc.) has its own rack of effects for you to EQ, distort, and create separate effects sends from. I love layering different characters of sound on top of each other, and subtle balance tweaks between these layers dramatically alter the resulting sound.</p>
<p> </p>
<p>Third, XLN Audio developed something that's a hybrid between reverb and delay that sounds very interesting and quite rich. It's easy to dial in a fresh effects blend, especially compared to the limited controls most delay and reverb plugins offer.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/36e4661bd1435d679c1d93c85029e51876bab02d/original/reverb-delay-800wide.jpg?1502136128" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Addictive Keys' spatial effects page, with a unique fader ranging from delay to reverb</em></p>
<p> </p>
<p>And fourth, the master channel inside Addictive Keys comes with its own little suite of effects for altering the sound. Tremolo is very useful here, the distortion is rich and textured, the addition of noise adds realistic grit to the sound, and filtering down the frequency is great for giving the instrument even more of a lo-fi vibe.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/d4f4300332cd522044e89fa9d0c267a744bf52ba/original/effects-800wide.jpg?1502136128" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><i>Just some of the effects options on the master channel</i></p>
<p> </p>
<p> </p>
<p><span class="font_large"><strong>Why I Love It</strong></span></p>
<p>It's just so unique. I thought it was all about pianos, but now I'm making patches that sound like distortion leads, freaky flutes, synth leads, fuzzy side-chain cream, smooth pads, and overtone layers for sub bass synths, all in an instrument that brands itself as so much less. It's really something of a secret weapon for me. At least until I publish this blog.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/005c5715b7550b3d757f84f4612dfc3853958de7/original/my-presets-800wide.jpg?1502136127" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>These are just a few of the presets I've made in Addictive Keys</em></p>
<p> </p>
<p>I can't praise XLN Audio enough for creating such a unique instrument that's so easy to manipulate into fresh sounds that are really inspiring to play. Now that you know my secret weapon, I'm sure I'll start hearing similar sounds from my fans. And I can't wait.</p>Milo Burketag:miloburke.com,2005:Post/48032612017-07-25T13:45:00-06:002018-05-17T09:07:06-06:00Dynamic Range and the Loudness War<p><span class="font_large"><strong>Why We're Talking Loudness</strong></span></p>
<p>Releasing music is complicated. One of the reasons it's complicated is that it can be really hard to know how loud to release a song. If you make your song too quiet, many listeners might check out and skip to the next song because they feel it sounds "boring". But if you make your song too loud, the audio quality really suffers.</p>
<p>This is pretty easy to solve for those with money: hire a great (loudness-war-aware) mastering engineer and let him/her decide. But for those of us who do our own mastering, usually for budget reasons, this is something we have to get a handle on.</p>
<p> </p>
<p><span class="font_large"><strong>A Brief History Lesson</strong></span></p>
<p>For better or worse, when you hear two songs side-by-side, the louder one tends to sound better. It pops out more, it seems to have more layers, and it somehow just pleases the ear more. You can experiment with this at home, even with two copies of the same song. It's just part of how we hear music.</p>
<p>When records were the predominant media format for music, engineers had to keep levels quiet to prevent turntable needles from skipping grooves, and to increase how much playback time could fit on a record. And when cassettes were at their height, most engineers aimed for a happy medium of volume, facilitated by analog gear designed with headroom in mind. As a result, music was distributed to the consumer without much reason to compress the dynamic range.</p>
<p><em>And dynamic range is important</em>: when your drums can pop out of your song, the mix just sounds bigger and better. There need to be loud instruments for soft instruments to sound quiet by comparison. And there need to be quiet sections in a song for a loud section to feel powerful. To a point, dynamic range is a wonderful thing that makes music sound louder and better when you control the playback volume. Without quiet, there can be no loud.</p>
<p>The CD was the first digital music media format with widespread adoption. And in digital, volume works fundamentally differently: instead of gradually sounding worse with added volume, as with cassettes and analog gear, digital gear sounds perfectly clear with added volume until it reaches its maximum, called 0 decibels full-scale, or 0 dBFS for short. It is impossible to store a digital sound louder than 0 dBFS. When engineers began working in the digital domain, they lost the reference point of 0 dB in the analog domain, a reference point created with lots of headroom for louder sounds to poke through.</p>
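<p>You can see that hard ceiling in miniature in how digital samples are stored. A toy sketch (16-bit conversion chosen purely for illustration):</p>

```python
def to_16bit(sample):
    """Convert a float sample to 16-bit PCM; anything past full scale (1.0) clips flat."""
    clamped = max(-1.0, min(1.0, sample))
    return int(clamped * 32767)

quiet = to_16bit(0.5)  # stored faithfully
loud = to_16bit(1.5)   # cannot exist louder than 0 dBFS: clipped to full scale
```

<p>No matter how hard the signal is pushed, the stored value never exceeds full scale; the excess simply becomes flat-topped distortion.</p>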
<p> </p>
<p><span class="font_large"><strong>The Loudness War</strong></span></p>
<p>When working with CD as the final destination for music, it quickly became common practice to "normalize" the volume of a CD, bringing the volume of the entire album up until its loudest moment became exactly the loudest possible volume the CD could contain: 0 dBFS.</p>
<p>Engineers quickly noticed that a song on a normalized CD popped out more from the speakers than a song from a non-normalized CD. Louder could sound better, and could be better at grabbing the attention of listeners. A louder song would pop out more than other songs when heard on the radio.</p>
<p>Surprising to some, soft acoustic music, especially music without drums, seemed to sound louder than percussive music. How could that be? Without strong transients in the audio, normalizing music to the loudest possible volume brought the average level of a folk song higher than the average level of a rock song. Artists, record labels, and engineers couldn't have this, releasing rock albums quieter than folk albums, so they started pushing limiters harder in the mastering chain, to achieve a louder perceived sound at the expense of dynamics and with the addition of distortion.</p>
<p>New technologies only encouraged this race to loudness: multi-band compression became a dangerous tool for loudness, and look-ahead brick-wall limiters allowed music to be made louder than ever before.</p>
<p>When one artist at one label had a louder sounding album, other artists and labels scrambled to produce even louder albums. And the loudness war was born, reducing the dynamic range and increasing the distortion of most all recorded music over the last twenty years and counting, getting worse every year.</p>
<p>You really need to hear it to understand. <a contents="This video explains it far better than I ever could." data-link-label="" data-link-type="url" href="https://www.youtube.com/watch?v=3Gmex_4hreQ" target="_blank">This video explains it far better than I ever could</a>. And if you want to make a difference, signing <a contents="this petition at Change.org" data-link-label="" data-link-type="url" href="https://www.change.org/p/music-streaming-services-bring-peace-to-the-loudness-war" target="_blank">this petition at Change.org</a> will help.</p>
<p> </p>
<p><span class="font_large"><strong>Hope for Relief</strong></span></p>
<p>For decades, radio stations have managed the loudness of all the songs played on the air: for them, it's a delicate balance of playing music as loud as possible while not incurring fines from the government. Unfortunately, the audio quality of radio stations is quite bad, with very compressed, very distorted music as the end result.</p>
<p>Things got easier for some when a few media players began including volume-normalization functionality. A well-designed player could scan an album for its average volume level, then turn playback volume down to hit a target average level. If CD #1 was on average 3 dB louder than CD #2, then CD #2 might be turned down 10 dB for playback, and CD #1 might be turned down 13 dB. This was great for minimizing volume variation while playing music from your computer, if you used such a media player. But it did nothing to encourage artists and record labels to release quieter music.</p>
<p>I don't know the full history of which streaming service went first, but as of this writing, YouTube, Spotify, Apple Music, and Tidal all stream with normalized volume. What this means is that if you release a song at a somewhat conservative -12 LUFS or a very loud -10 LUFS or a punishingly loud -8 LUFS, it will sound the same volume through these streaming services after their systems work their magic, robbing you of any volume playback advantages from pushing your music too hard into a limiter. Hopefully this is enough to get the attention of record labels, towards the goal of having all artists release music with less limiter-induced distortion and more dynamic range.</p>
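<p>The leveling these services apply can be sketched as follows. The -14 reference is an illustrative figure only (each service picks its own target), and real systems differ in whether they also turn quiet tracks up; this sketch, like the scenario above, only turns loud tracks down.</p>

```python
def playback_gain_db(track_lufs, reference_lufs=-14.0):
    """Gain a normalizing player applies to a track; only loud tracks get turned down."""
    return min(0.0, reference_lufs - track_lufs)

# Masters pushed to -12, -10, and -8 LUFS all play back at the same loudness:
leveled = [lufs + playback_gain_db(lufs) for lufs in (-12.0, -10.0, -8.0)]
```

<p>The louder master gains nothing in playback volume; it only keeps the extra limiting distortion it was mastered with.</p>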
<p> </p>
<p><span class="font_large"><strong>Where You Come In</strong></span></p>
<p>You've made it this far. You know a fair bit about how music started becoming louder, and why that's not a good thing. But if you don't have proper metering, and if you haven't established a measurable loudness goal for your own music, you're still in the dark. Your releases could still end up far too loud, with all the consequences of loud music, or too quiet to foster much audience engagement.</p>
<p>What can you do to keep your music from sounding bad? For the love of donuts, don't push it hard with a limiter! That's the "Keep it simple, stupid" version.</p>
<p>And if you want to learn how to better strike that fine balance, you'll definitely want to read my post <a contents="explaining LUFS and loudness targeting for mastering" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">explaining LUFS and loudness targeting for mastering</a>. This article provides the "why", that article provides the "how".</p>Milo Burketag:miloburke.com,2005:Post/47500582017-07-11T14:25:00-06:002017-08-22T10:36:10-06:00Three of My Favorite Plugins: Part 1 (July 2017)<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>If you're a reader of my blog, there's a good chance you use software plugin effects. Digital effects are an integral part of making music in this decade and the decades to come, and they're likely your only choice if you don't have access to a million-dollar studio and the large effect racks such studios often house.</p>
<p>This is the first in a series of posts I'll release quarterly. In each, I'll pick out three plugins I'm currently using and currently loving. Maybe they're new toys for me, or maybe they're old-hat plugins that I rely on heavily. But I'll only feature plugins that I use, like, and recommend.</p>
<p>If you like my music, this series will give you a taste of what goes into making my songs. If you're an experienced producer or engineer, I hope you appreciate taking a look at which tools a fellow producer relies on. And if you're relatively new to production, you may be looking for some of your very first plugins to purchase above and beyond the stock plugins in your DAW.</p>
<p> </p>
<p><span class="font_large"><strong>1. <a contents="iZotope" data-link-label="" data-link-type="url" href="https://www.izotope.com/en.html" target="_blank">iZotope</a> <a contents="Insight" data-link-label="" data-link-type="url" href="https://www.izotope.com/en/products/mix/insight.html" target="_blank">Insight</a></strong></span></p>
<p>Let's get the boring one out of the way first. As a metering plugin, Insight doesn't actually <em>do</em> anything to the sound of your song. But it does give a lot of relevant information that can help you make your song sound better.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/124607910a8ab3bb8e75b247783bab99b1c0f428/original/insight-1-800wide.jpg?1501879645" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here is Insight loaded with one of its broadcast presets</em></p>
<p style="text-align: center;"> </p>
<p>Insight was developed with broadcasters in mind: content creators and TV stations need to dial in their volumes to sound sufficiently loud, but quiet enough to not incur fines from their governments. Yet Insight's wealth of customizable mini displays is useful to mixing engineers as well, and especially to mastering engineers (and those who dare to master at home, like myself at present).</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/27c34f3788e0dc921fc5944fd814914639347cf4/original/insight-2-800wide.jpg?1501879646" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here is Insight loaded with my custom arrangement of displays and levels</em></p>
<p style="text-align: center;"> </p>
<p>I like being able to see at a glance if the left and right sides of a mix are balanced, and if the general frequency spectrum is balanced. More importantly, Insight's displays help me gauge a song's dynamic range and identify if there are any phase issues with the mix.</p>
<p>But most importantly of all, Insight helps me dial in the exact loudness I want: -14.0 LUFS integrated. Any louder and my song loses quality, only to be turned down by most playback services anyway; any quieter and I lose bite to most listeners compared to other songs. If you don't know what LUFS are or want to know how I arrived at this target, check out my article <a contents="covering the Loudness War and&nbsp;dynamic range" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">covering the Loudness War and dynamic range</a>, and my article <a contents="explaining LUFS and loudness targeting for mastering" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">explaining LUFS and loudness targeting for mastering</a>.</p>
<p>Metering plugins aren't the most exciting. But Insight is a new tool in my arsenal, and for the first time, finding the appropriate loudness for a track is no longer guesswork. It's changed the way I finalize songs.</p>
<p>There are other metering plugins out there that display loudness, but few have the vitally important integrated LUFS measurement, and few are as full-featured and well made as Insight. The price is pretty steep at $500, but it also comes in a beastly bundle with many amazing iZotope plugins for the same price of $500 (somehow), and less during sales, of course.</p>
<p> </p>
<p><span class="font_large"><strong>2. <a contents="Soundtoys" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/" target="_blank">Soundtoys</a> <a contents="EchoBoy Jr" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/echoboy-jr/" target="_blank">EchoBoy Jr</a></strong></span></p>
<p>I have sixteen delay plugins. I just counted. It takes something pretty special for me to even use a delay plugin regularly, much less purchase a new one. Fortunately, EchoBoy Jr is that special. I also love its big brother, <a contents="EchoBoy" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/echoboy/" target="_blank">EchoBoy</a>, which I've heard is a "desert island" plugin for more than a few producers. But the newer EchoBoy Jr has a simpler interface, more knobs to shape the character of the sound, and I find it easier to quickly arrive at a sound that I love. Not to mention, it's half the price.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/1e845ff115a74dce0548c9527a461fc853f8e787/original/echoboy-jr-1-800wide.jpg?1501879645" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here is one of the stock presets in EchoBoy Jr</em></p>
<p style="text-align: center;"> </p>
<p>In my opinion, the ability to filter out the highs and lows of both delay and reverb is critical to get the effected sound to blend with the mix. I love my delays and reverbs to sound very mid-rangy, and it's much simpler when those controls are built into the interface.</p>
<p>Ping-pong is always an interesting sound, and I love using it in my music. Wide is another style of delay that you don't see or hear very often, but it adds a nice character that I enjoy.</p>
<p>The dial for delay-style according to technology is super useful for quickly changing the delayed sound into something truly unique. And the saturation knob adds a little flavor and a hint of bite that makes up the final polish.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/369b81e7d0891e9519bf19d260bd8f340d9fb535/original/echoboy-jr-2-800wide.jpg?1501879645" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here is a preset I made, and it's heard at least once in over half of my songs</em></p>
<p style="text-align: center;"> </p>
<p>If you didn't know, I'm a pretty big fan of Soundtoys as a company. They tend to stick to the simpler core effects of audio, instead of branching out in bizarre ways like <a contents="Adaptiverb" data-link-label="" data-link-type="url" href="http://www.zynaptiq.com/adaptiverb/" target="_blank">Adaptiverb</a> or <a contents="Relayer" data-link-label="" data-link-type="url" href="https://www.uvi.net/relayer.html" target="_blank">Relayer</a>. But the company consistently packages those core effects into simple interfaces that still provide depth to control, effects are combined in creative ways, and they just sound really, really good.</p>
<p> </p>
<p><span class="font_large"><strong>3. <a contents="XLN Audio" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/">XLN Audio</a> <a contents="RC-20 Retro Color" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/products/effect/rc-20_retro_color" target="_blank">RC-20 Retro Color</a></strong></span></p>
<p>I used to record towards perfection. Every piano had to sound clean and pristine, in perfect condition and perfect tune. Every drum had to be crisp. Every instrument clear. But at some point in my journey, that just got boring. Every virtual piano sounds perfect. Every drum sounds bombastic, with a huge smile-EQ. It all starts to sound the same, and it just doesn't jive with the crunchy, thick sounds I love in music.</p>
<p>So I started adding color to my instruments. Personally, I don't get a lot of vibe out of "vintage compressors", and especially not from "vintage EQs". What else is there? Sometimes I turn to tape emulation, I regularly add saturation, and I often add quality analog-style distortion to many layers in my mixes.</p>
<p>But RC-20 Retro Color is something of a new secret weapon for me. All it does is add dirt and grime, but it does it in such a lovely way, with heaps of knobs you can twist to achieve a ton of different flavors.</p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/f45245d16a86e6fee64875f882d50035252e7e6a/original/retro-color-2-800wide.jpg?1501879645" class="size_orig justify_center border_" /></p>
<p style="text-align: center;"><em>Here is one of the stock presets</em></p>
<p style="text-align: center;"> </p>
<p>There are separate modules for six different types of effects. I generally use Wobble, Digital, and Magnetic pretty sparingly. But I'm always surprised by how Noise, Distort, and Space can radically alter a sound. All the right controls are there: for example, adding noise without controls for Follow and Duck is downright messy, but with them, I can make smooth sounds with character. The high-pass and low-pass settings are valuable on their own for any type of lo-fi effect, and the Magnitude slider helps you dial in exactly how much distress your sound requires to shine.</p>
<p>Whenever I have an instrument that is just sounding too bland, I reach for either distortion or Retro Color. And it seems to have found a permanent use for me in processing my drums in parallel, adding thickness and texture in almost all of my songs, from the grungiest to the cleanest mix.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>As always, thanks for being my reader. If you like using these plugins too, I'd love to hear how you use them. And if you have a few favorite plugins of your own that you can't live without, please share them in the comments below. I read every single one.</p>
<p><span class="font_large"><strong>My Journey With Acoustics: Part 2</strong></span></p>
<p><em>Milo Burke, June 27, 2017</em></p>
<p>If you've been following along, I dropped a healthy dose of theory on <a contents="which acoustic problems your room is likely to have and how to cost-effectively solve them" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/an-acoustic-primer-the-secret-to-better-mix-decisions" target="_blank">which acoustic problems your room is likely to have and how to cost-effectively solve them</a>. Then I began to do the very same for myself.</p>
<p><a contents="Treating my sidewall first reflection points" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-1" target="_blank">Treating my sidewall first reflection points</a> was very valuable, and the sound I was hearing was much improved. But still, that's only the beginning. If one aims to significantly clean up the bass, there's more work to be done.</p>
<p>In this post, I'll describe the second stage in my room treatment project, directly aimed at addressing the low-frequency issues in my room.</p>
<p> </p>
<p><span class="font_large"><strong>What I Built</strong></span></p>
<p>If you're not familiar with "super-chunk bass traps", it may be worth skipping over to <a contents="Google Images" data-link-label="" data-link-type="url" href="https://www.google.com/search?q=super+chunk+bass+trap&safe=off&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiPwNrX8NvUAhVV7GMKHceuC3oQ_AUICygC&biw=1920&bih=974" target="_blank">Google Images</a> for a moment to see what I'm referring to. The physics involved dictate that low-frequency energy builds up in the corners, and the only thing required to prove this is to stick your head in the corner of your room while your subwoofer is exercising. The strategy behind super-chunk traps is to put as much absorptive material in the corners as possible.</p>
<p>The theory is that in doing so, kick drums will start and stop more percussively, according to the source material; bass notes will sound more tuneful, according to the notes played in the recording; room dimension-shaped bass buildups at specific frequencies will be reduced; and room dimension-shaped bass nulls at other specific frequencies will also be reduced. The result is a flatter low-frequency response that's more accurate to the sound coming out of the speakers.</p>
<p>I don't put much faith in the dainty corner traps many companies like to sell. How much low-frequency energy is absorbed is directly related to the thickness of the absorption placed in the corners. Towards this end, most super-chunk builders aim for fiberglass or mineral wool triangles of about 24" by 24" by 33". These triangles are stacked floor-to-ceiling in the corner of the room, ideally in as many corners as are available and can be treated within the budget.</p>
<p>To avoid insulation sag, to save my apartment walls, and to keep my super-chunk traps portable in expectation of my next move, I opted to build four triangular frames for the two corners I treated. I blazed my own trail in this regard, making my own measurements and my own plan, since most available plans are designed for more permanent installs.</p>
<p>Home Depot cut the lengths of 1x4 common board for me, but they don't cut diagonally, so I had to branch out to cut the 2' by 2' plywood sheets used to make the top and bottom triangles of each frame. If you have a woodworking shop, this likely won't be a problem for you. If you don't, you'll likely have to hire a woodworker like I did to make the cuts. Fortunately, it was a fast job that took no more than ten minutes of an expert's time.</p>
<p>Cutting the insulation was the arduous part. Owens Corning sells 703 fiberglass most affordably in 2' by 4' sheets that are each 2" thick. It was my job to cut each sheet three times: once in half to make 2' by 2' squares, and twice more to cut each of those squares diagonally. I'm not particularly concerned about handling fiberglass with bare hands, but the dust and airborne particles from cutting fiberglass are cause for concern. I made the cuts outside on my apartment's concrete patio wearing a breathing mask. I used a long insulation knife to make a series of increasingly deep scores in the direction needed for each cut, with a sheet of cardboard beneath the cuts to protect the concrete and my knife.</p>
<p>My ceilings are just a hair under 9' high. Leaving room for the frames, I was able to fit a floor-to-ceiling stack of 53 of the 2"-thick triangles in each of the two corners I treated. This required 26.5 sheets of fiberglass, and the 80 cuts took me most of a day.</p>
<p>As before, I bought silky textured fabric from Jo-Ann Fabric and Crafts for the frames. I built simple wooden frames out of 1x4 common board and stapled the fabric to the frames with a staple gun.</p>
<p> </p>
<p><span class="font_large"><strong>Where I Placed Them</strong></span></p>
<p>I only had the budget for two super-chunk bass traps. Fortunately, the two corners in the front of the room were available, and that's where I stacked the frames. I had the best luck filling each frame about halfway before putting it in place, then filling the remaining space with fiberglass triangles by hand. This created an unavoidable fiberglass mess on the studio floor, as I discovered the hard way that the frames were just too heavy to handle alone once completely filled.</p>
<p> </p>
<p><span class="font_large"><strong>How It Sounds</strong></span></p>
<p>In the last three rooms I've had my studio in, I've had a devil of a time getting the bass to sound right. Partly, I was confounded by a defective or broken measurement mic that was leading me to EQ out my bass at a slope of about 6 dB per octave, but that's another story. The other part was that things just weren't congealing in the middle. Two rooms ago, I had loads of deep bass 20' behind the listening position in the kitchen of my apartment, but not in the listening seat. And in my last room, I could hear deep bass while standing in the doorway, but not inside the room. I tried more than a dozen sub arrangements in that room, and more than a dozen in the room I'm in now. But the best I could do was to have somewhat of a bass suck-out around 50-60 Hz that left bass guitar and bass synths sounding mostly okay, but kick drums just weren't rewarding until I stood up and took a couple of steps towards the door.</p>
<p>Well, problem no more. With the super-chunks installed, the kick drum finally has slam in the listening seat. Fantastic! Also, bass guitars and bass synths sound more tuneful, in that I'm able to more clearly discern which pitches they are playing even when there aren't high frequency cues to give it away. I'm hearing the interaction between the kick and bass more clearly, and both the kick and bass are more discernible from each other than they were before. This is all excellent, and as expected.</p>
<p>Another aspect to note is that I'm now able to hear the reverb and decay on the kick drum, on songs the engineers decided needed reverb on the kick drum. It gives a more prolonged, cinematic thud than I'm accustomed to hearing. And now that I can hear the difference between percussive kicks and spacious kicks, I have yet another tool to add to my production arsenal to control the emotion and feel of my own music.</p>
<p>There are two aspects that confound me, however. First, and I have no idea how, the flutter echo in my room is significantly worse with the super-chunks installed. Not subtly, but dramatically. This doesn't make sense to me at all, considering I just added a massive amount of absorption. That said, I'm not recording percussive sounds in my space, and flutter echo from clapping in my listening seat doesn't affect the speakers' job of producing full-spectrum sound from their stands. With this in mind, I'm not exceedingly concerned, but I still find it a little odd.</p>
<p>Second, and this also surprises me, the bass now actually sounds louder. By absorbing bass energy from the room, by more quickly killing the low-frequency decay in my room, bass now sounds louder?? I expected it to be the opposite: that much of the low-frequency sound I was hearing was due to low-frequency energy bouncing around the room, being reflected back into the room at each wall. Before, my subs were turned a little low so as not to bother my neighbors more than necessary. But now, I'm tempted to turn my subs down a little.</p>
<p>I have a guess as to why: every room has peaks and nulls in the low frequency according to the dimensions of the room. And by soaking up the low-frequency energy before it has time to peak and null, I'm hearing the bass more directly from the subs to my ears without it being masked and cancelled by bass reflections. And maybe reducing my room's ability to null is what results in the louder-sounding bass.</p>
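<p>My guess can be made a little more concrete. The frequencies where a room dimension piles up or cancels energy follow the standard axial-mode formula, f<sub>n</sub> = n · c / (2L). Here's a minimal sketch in plain Python (the function name and the 4 m example dimension are mine, purely for illustration):</p>

```python
def axial_modes(dimension_m, speed_of_sound=343.0, count=4):
    """First few axial room-mode frequencies (Hz) along one room
    dimension, using f_n = n * c / (2 * L)."""
    return [n * speed_of_sound / (2 * dimension_m)
            for n in range(1, count + 1)]

# A 4 m wall-to-wall dimension stacks up energy near roughly 43, 86,
# 129, and 171 Hz. Those are the frequencies where corner absorption
# along that dimension does its work.
modes = axial_modes(4.0)
```

<p>Run it for each of your room's three dimensions and you get a rough map of where your peaks and nulls should fall, which you can compare against what a measurement mic shows.</p>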
<p> </p>
<p><span class="font_large"><strong>Conclusion</strong></span></p>
<p>I'm only two steps into my room treatment journey, but already I'm hooked. I'm motivated to keep working on the project, and I'm not sure I could ever go without my new beloved bass traps. And though my monitoring system is somewhat humble, I'm already afraid that I won't care for most of the exotic stereos I'll hear at Rocky Mountain Audio Fest this year, after growing accustomed to how a more modest playback system can sound in a room with effective treatment. I'm reminded more than ever that room acoustics matter as much as the speakers themselves in determining how a stereo sounds. By comparison, amplifier quality seems a minor factor next to how much acoustic treatment can improve the sound.</p>
<p>I'll keep you posted on the next steps in my journey.</p>
<p><span class="font_large"><strong>10 Steps to Mixes That Translate: Part 2</strong></span></p>
<p><em>Milo Burke, June 20, 2017</em></p>
<p>In part one of this guide on improving mix translation, <a contents="I covered five aspects of how&nbsp;equipment, room acoustics, and speaker positioning are compromising the effectiveness of all but the most ideal setups" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">I covered five aspects of how equipment, room acoustics, and speaker positioning are compromising the effectiveness of all but the most ideal setups</a>, and more importantly, what you can do about it on a reasonable budget. If you haven't read it yet, you'll definitely want to give it a read.</p>
<p>Today, in part two of this series, we'll be covering steps you can take working with the equipment and room you have now in order to make the best of it and give your mixes the best chance of translating well to the real world. It should go without saying that these latter five steps don't make the first five obsolete, and vice-versa. Encompassing all ten will give you the greatest advantage.</p>
<p> </p>
<p><strong><span class="font_large">6) Have You Checked Your Mix Against Other Mixes? </span></strong></p>
<p>This is one of two forms of reference checks, and it's very important. Because the ear adapts to what we hear so quickly, and because we sometimes engineer outside of our preferred genres, we can easily lose touch with how our mix sounds in relation to how it could sound, and how audiences expect it to sound.</p>
<p>The solution to overcome this is to play your mix against professionally engineered mixes in the same genre. If you're mixing country or commercial electro-pop, find a country song or electro-pop song that just sounds fantastic, and play it against your mix. Of course, you need to level-match the two songs before contrasting them against each other, to establish an even playing field. What differences do you hear in the balance among instruments that sounds superior in the commercially released song? How does the entire song sound as a whole?</p>
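<p>To make that even playing field concrete: here's a crude way to compute the trim needed to match levels. This is a plain-Python RMS match (the function names are my own, and real loudness matching would use a LUFS meter rather than raw RMS), but it shows the idea:</p>

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples (full scale = 1.0)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_gain_db(my_mix, reference):
    """Gain in dB to apply to my_mix so its RMS level matches the
    reference track's, before comparing the two by ear."""
    return 20 * math.log10(rms(reference) / rms(my_mix))
```

<p>If the reference is twice the amplitude of your mix, this returns about +6 dB, which is exactly the trim your mix needs before the comparison is fair.</p>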
<p><strong>Examples</strong></p>
<p>If you're mixing a danceable tune with a driving beat, there's a good chance that the kick and snare and vocal deserve to be the loudest elements of the song. But suppose you balanced the snare volume early before all other instruments were mixed in, and it gradually became buried among the other instruments. It's just not going to pop anymore if a rhythm guitar and a synth part are substantially louder, particularly in the same frequency range. Yet it's very possible you didn't notice the snare was gradually disappearing during the process of adding instruments. The solution is simply to increase the snare volume, but there's a good chance you didn't realize there was a problem until you compared your mix to a commercially engineered song. This comparison reveals what you can do to improve your mix.</p>
<p>Another example that I'm quite familiar with personally: it's easy to gradually build the song up, adding layer after layer, instrument after instrument, and the entire song has a good vibe and what sounds like a decent mix. Yet, without realizing it, I've built a mix that's significantly lacking in the high-end. Sure, it sounds good in the creation process, and all the richness and warmth are there, but it's still an unbalanced mix that I've become accustomed to over a period of hours. Comparing the mix to a well-engineered song can bring things back into perspective, and it becomes a simple matter of making everything sound crisper and brighter with EQ.</p>
<p>It's not worth covering this in much more detail because I wrote about it at length in another post: <a contents="How Reference Checks Can Save Your Song" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">How Reference Checks Can Save Your Song</a>. So if you need more clarification, be sure to read that post. Reference checks are a huge part of getting mixes to translate, far more important than 'this vital EQ trick' or 'how that style of compression will make you sound pro', and they're very deserving of a place in this list.</p>
<p> </p>
<p><strong><span class="font_large">7) Have You Checked Your Mix on Other Stereos? </span></strong></p>
<p>Equally important to checking your mix against other mixes is the other kind of reference check: checking your speakers against other speakers. After all, if you mix on Shure in-ear headphones or Rokit monitors, you can't expect that all of your listeners will be listening on the same gear, can you?</p>
<p><strong>The Problem</strong></p>
<p>When we listen on one set of monitors or one set of headphones, we become very set on how things sound on them, flaws in the speakers and all. The challenge is that this leads to false impressions of how loud the mid-bass in your mix actually is, whether there's too much sibilance on the vocals, or whether the balance of instruments even feels right. This remains somewhat of a problem even if you have great speakers set up properly in a room with good acoustics, following part one of this topic, but I can't stress its importance enough when you haven't yet addressed some or all of those aspects.</p>
<p><strong>The Solution</strong></p>
<p>What can you do to fix this? Listen on as many different stereos as you can. The popular 'car test' is popular for a reason: go listen to your mix in your car, and you'll likely hear a whole new set of problems you have to solve that weren't apparent on your monitors. But don't stop there: it's also valuable to make sure your mix works on high-end headphones and cheap earbuds, on powerful stereos and table-top devices. So listen on each and take notes on which aspects of your mix need tweaking.</p>
<p>Maybe the bass synth or bass guitar needs an EQ change, and the kick needs a level change. Maybe the background vocals need adjusting to sit just behind the lead vocals. Maybe the rhythm instruments are masking each other, or aren't given the right amount of power. You can't make your mix sound perfect on every stereo, but you can make it sound as good as it can on as many as it can. And at this point, the deficiencies of each stereo will be increasingly revealed to you. But when you have three sets of speakers telling you the bass is too loud and only one set of speakers telling you it's just right, you need to trust the majority, even if your studio monitors are the speakers telling you things are just right.</p>
<p>Like the previous point, this is also covered in more depth in my <a contents="post on reference checks" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">post on reference checks</a>. So be sure to read there for more insight.</p>
<p> </p>
<p><span class="font_large"><strong>8) Is Your Gain-Staging in Good Order? </strong></span></p>
<p>If the term "gain-staging" is new to you, then perk up your ears, because this is important. Simply put, gain-staging is making sure that at each point you have a volume control, it's set appropriately for whatever piece of software or gear is going to follow it. And if you think about it, each preamp on your interface has a volume control, and each track and send and bus in your DAW has its own volume, not to mention that the input and output of each plugin can be adjusted. That's a lot of volume controls! How can we truly know and understand all of them??</p>
<p><strong>Making It Simple</strong></p>
<p>Well, thankfully, you don't have to understand all of them. You just have to make sure each volume control is set 'about right', in that it's not so quiet that you're losing valuable subtleties to the noise floor of your equipment, and that it's not so loud that you're likely to clip, or likely to contribute to excessive overall mix volume. More often than not, people err on the side of having the volume too high, so know that it's okay to keep things a little quieter. And the easiest part is that if you start with the right volume, either at the preamp level on your interface or the output level of your virtual instrument, the rest falls in line.</p>
<p>What happens if you ignore gain-staging and just keep soldiering on with your music? A severely clipped mix bus is not only the worst-case scenario, but the likely scenario. And you end up with a weak, harsh, unpleasant sound that you can never undo down the line in mastering.</p>
<p>There are three main areas people tend to mess up the most with gain-staging, and I'll break down all three:</p>
<p> </p>
<p><em><strong>a) Recording Level</strong></em></p>
<p>When recording with a microphone or physical instrument, if the preamp on your interface is turned too loud, you risk clipping and losing transients. Digital clipping sounds terrible and can't be undone, and many a good take has been ruined by keeping the preamp volume turned too high and hoping the performer doesn't clip the preamp. And even if you play it safe and keep the average level perhaps 8 dB away from clipping, there's a good chance you're losing the subtleties of transients, particularly if the instrument is percussive. Play it safe and record instruments at a lower volume.</p>
<p>There are two reasons people struggle with this. The first is that people were warned that if you record too quiet, you'll get too much hiss in your track. This was somewhat true more than fifteen years ago, when equipment wasn't yet made to today's standards and all digital audio was 16-bit. But the real fear is still lingering from the days of recording to analog tape! This just isn't a factor anymore if you have even a cheap modern interface and are recording at 24-bit, much less 32-bit float. The second reason people struggle with this is that they compare working in digital with working in analog: aiming for 0 dB on an analog console was just what you did, and every piece of analog equipment was built with at least 12-15 dB of headroom above 0 dB. And even when exceeding that +15 dB max, clipping occurred gradually and sounded soft. However, with digital, 0 dB becomes the absolute maximum volume the sound could ever be, and passing it by just a millimeter results in hard, ugly digital distortion. Digital is not analog, and people working in digital need to create that safe buffer from clipping themselves. We do this by recording softer; turning our preamps down to give ourselves that 12-15 dB of headroom that we need.</p>
<p>I personally aim for recording at -15 dBFS as an average, meaning that when I'm testing the levels with a track armed to record, and the track doesn't have any plugins on it, the meter in my digital audio workstation (DAW) tends to show the incoming signal most often at about -15 dB. The nice thing about recording at this level is that even if you drift a little, in that the vocalist might move closer or further from the microphone, or the guitarist might adjust his gear, you should still be within the range of -10 and -20 dBFS. That's still enough room to not only avoid clipping, but to preserve transients, all without flirting with the noise-floor of your equipment and recording unnecessary hiss.</p>
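<p>If the dBFS math feels abstract, it's simple enough to sketch. Here's a minimal plain-Python converter between linear amplitude and dBFS (the helper names are my own, not from any DAW):</p>

```python
import math

def dbfs(amplitude: float) -> float:
    """Convert a linear amplitude (full scale = 1.0) to dBFS."""
    return 20 * math.log10(abs(amplitude))

def amplitude(level_dbfs: float) -> float:
    """Convert a dBFS level back to linear amplitude."""
    return 10 ** (level_dbfs / 20)

# An average of -15 dBFS is only about 18% of full scale. It looks
# "low" on a meter, yet it still leaves roughly 15 dB of safety
# before hard digital clipping at 0 dBFS.
avg_level = amplitude(-15.0)
```

<p>The takeaway: the dB scale is logarithmic, so a meter reading that looks timid still represents plenty of signal, and the headroom above it is what saves your transients.</p>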
<p> </p>
<p><em><strong>b) Virtual Instrument Output Level</strong></em></p>
<p>The trouble with most every virtual instrument on the market is that they're incredibly loud! If it's a synth, it likely sounds punishingly loud and has next to no headroom. And if you're working with sampled digital drums, you better believe those samples are pushed within an inch of their lives with multiband compression and limiting before they're added to the sample pack. The result is that one instrument alone is as loud or louder than your entire mix should be. And even something as simple as a small EQ boost can push a near-clipping sample over the cliff of 0 dB, and the result is ugly distortion.</p>
<p>It's worth noting that at least a few DAWs allow you to exceed 0 dBFS inside the DAW without clipping, especially if your session is at 32-bit floating-point. However, not all DAWs can do this. The trouble is that there doesn't seem to be a list of which DAWs can handle this and which DAWs can't. And further, even if your DAW can handle it as long as the volume is reduced before physical output, you don't know for a fact that your plugins can handle it. A good number of them may be clipping internally if fed a signal higher than 0 dB. Again, all you can do to avoid this is lower the volume of the virtual instrument.</p>
<p>The best way to handle this, similar to recording a microphone, is to start with low volume from the very beginning. I frequently turn down the volume of virtual instruments by 15 dB, sometimes more.</p>
<p>Some virtual instruments are done right: for example, <a contents="Addictive Keys" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/addictivekeys" target="_blank">Addictive Keys</a> by <a contents="XLN Audio" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/" target="_blank">XLN Audio</a> often outputs right about the perfect volume, and I don't have to dive for the volume control immediately after adding Addictive Keys to a session. For other virtual instruments, it's a small hangup: in <a contents="Omnisphere 2" data-link-label="" data-link-type="url" href="http://Omnisphere%202" target="_blank">Omnisphere 2</a> by <a contents="Spectrasonics" data-link-label="" data-link-type="url" href="https://www.spectrasonics.net/index.php" target="_blank">Spectrasonics</a>, the master output volume stays fixed even when changing to a new patch, so it's a one-time step per instance of Omnisphere to lower the virtual instrument's internal master volume by perhaps 15 dB and then choose a patch.</p>
<p>But other instruments make it really difficult: <a contents="Battery 4" data-link-label="" data-link-type="url" href="https://www.native-instruments.com/en/products/komplete/drums/battery-4/" target="_blank">Battery 4</a> by <a contents="Native Instruments" data-link-label="" data-link-type="url" href="https://www.native-instruments.com/en/" target="_blank">Native Instruments</a> not only uses ridiculously loud drum samples, but the master output volume for Battery is tied into the instrument patch, so loading a new kit resets the volume to 0 dB. You can choose to lower the volume by 15 dB or so within your DAW using the track's fader, but there can still be internal clipping in Battery from playing stacked drum hits and internal effects, and that doesn't save the plugin chain in your DAW from clipping. My only solution is to lower the volume of the track in my DAW while I'm choosing a kit, and then raise it back to 0 dB and lower the volume of the individual drums in Battery until the level sounds about right for each.</p>
<p>It's a pain, and I sincerely hope the <a contents="loudness war" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">loudness war</a> among virtual instruments dies down. They could all learn a thing or two from XLN Audio. But until then, we need to responsibly handle the gain of virtual instruments ourselves.</p>
<p> </p>
<p><em><strong>c) Mix Bus Level</strong></em></p>
<p>If you combine ten or twenty or thirty high-volume tracks in your DAW, your mix bus will clip so aggressively that your song loses any hint of power and depth and control that it could have had. As I mentioned, some DAWs allow you to surpass 0 dBFS inside the DAW, particularly if you're working at 32-bit floating-point, but volume above that absolute limit of 0 dB can't be exported or bounced without clipping, and your interface can't play back your session to your speakers without the digital-to-analog converter in your interface clipping. The only way around this is to lower the volume of all tracks in your session, so even when summed together in the mix bus, there's still some headroom before clipping.</p>
<p>I guarantee this is an issue for you if you're not in the habit of recording quietly and reducing your virtual instrument volume: the above two points are major contributors to this. But it's also possible to back yourself into a corner with plugins. For example, if you boost aggressively with EQ, you need to remember to lower the input of the EQ to match the before/after EQ volumes. And if you use a compressor with automatic make-up gain, it's important to reduce the output volume of the compressor. The goal is that once you start with tracks that have reasonable volume levels by setting your preamp levels and virtual instrument levels low, you maintain that nice, easy volume through your plugin chain, and then mix your song by having many of the faders in your DAW around 0 dB instead of the -20 dB you'd need if you don't bother with proper gain-staging.</p>
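<p>To see just how quickly tracks stack up on the bus, consider the worst case where every track happens to peak at the same instant, so the linear amplitudes sum directly. A back-of-the-envelope sketch in plain Python (the track counts and levels are illustrative, not from any particular session):</p>

```python
import math

def bus_peak_dbfs(track_peaks_dbfs):
    """Worst-case mix-bus peak (dBFS) if every track hits its peak at
    the same instant, so the linear amplitudes add up directly."""
    total = sum(10 ** (p / 20) for p in track_peaks_dbfs)
    return 20 * math.log10(total)

# Twenty tracks each peaking at -6 dBFS can, in theory, sum to about
# +20 dBFS on the bus: massive clipping. The same twenty tracks
# trimmed to -20 dBFS each top out around +6 dBFS in this worst case,
# and real material, whose peaks rarely line up, stays comfortably
# below 0 dBFS.
hot = bus_peak_dbfs([-6.0] * 20)
safe = bus_peak_dbfs([-20.0] * 20)
```

<p>The worst case almost never happens in practice, but the math shows why quiet individual tracks are the only reliable way to keep the sum out of trouble.</p>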
<p>It's worth noting that some people might respond negatively to this, under the false impression that a loud mix bus volume equals a loud master, which equals a volume advantage on the radio that will draw in listeners. I'll shoot down this myth on two fronts: first, it's very unlikely that your music will actually be heard louder than other music. Not only do radio stations heavily limit all content before broadcasting, but streaming services including Spotify, Apple Music, Tidal, and more also turn down songs that are too loud. Even YouTube aggressively turns down loud videos, and many media players use volume normalization as well. So the advantage only exists in a small number of places. And second, maxing out your mix bus has nothing to do with the final volume of your song. Instead, the final volume of your song has everything to do with how aggressively it's <a contents="limited during mastering" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">limited during </a><a contents="limited during mastering" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">mastering</a> and the crest-factor of the mix. You won't harm your song an ounce by mixing with headroom to spare (in fact, you'll even be saving it), and all that needs to be done to make the master nice and loud is to lower the threshold on the limiter.</p>
<p> </p>
<p>It can feel like a little bit of a headache at first. But if gain-staging is an issue for how you work, learning a little theory now can go a long way for ensuring your music sounds better for the rest of your engineering career. It's well worth the small amount of time required to establish these as habits right now. And it will go a long way towards your mixes sounding good and as expected on other stereos as your levels are in-check.</p>
<p> </p>
<p><strong>My Secret to Making It Automatic</strong></p>
<p>If you want to make it easy for yourself, you can use the hack that I use, adopted by most of the film industry and recommended by mastering engineer Bob Katz. Simply put, keep your speakers turned up really loud. This way, adding a virtual instrument at full volume will sound deafening, and you'll turn down the instrument by default. If your speakers are set at a good (loud) volume, all these habits become automatic.</p>
<p>Briefly, the recommendation is to set your amplifier volume so that a -20 dBFS 1 kHz sine wave played through one speaker measures 83 dB in your room according to an SPL meter. 83 dB sounds good and loud, and this provides 20 dB of headroom for peaks up to 103 dB. I set mine to about 81 dB because I prefer to work a little quieter. I turn down the volume on my monitor controller when listening to commercial music or watching YouTube videos, but I always turn it up to the same spot when making music. Calibrated volume controls make it easy, but if your monitor controller isn't calibrated, you can mark where to turn your volume control for working using a piece of tape or a Sharpie. And never touch your amplifier volume again. Using this method, it's uncomfortably loud to compose or mix music without enough headroom. And as you produce or mix each song to sound good and loud without sounding uncomfortably loud, good gain-staging becomes automatic just working by ear.</p>
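<p>The arithmetic behind the calibration is worth spelling out: once the -20 dBFS reference is pinned to 83 dB SPL, every dBFS level maps to a fixed playback SPL. A small sketch (the function is my own illustration, not Katz's specification):</p>

```python
def monitor_spl(level_dbfs, calibration_spl=83.0, reference_dbfs=-20.0):
    """SPL a calibrated monitoring chain produces for a signal at
    level_dbfs, given that reference_dbfs was calibrated to
    calibration_spl with an SPL meter."""
    return calibration_spl + (level_dbfs - reference_dbfs)

# Full-scale peaks play back at 103 dB SPL, and a -15 dBFS average
# sits at 88 dB SPL. A virtual instrument screaming near 0 dBFS is
# therefore painfully loud, and you turn it down by reflex.
```

<p>That reflex is the whole point of the method: the loudspeaker volume does the gain-staging policing for you.</p>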
<p> </p>
<p><span class="font_large"><strong>9) Are You Cleaning Up Your Low End? </strong></span></p>
<p>One of the bigger problems beginner mixers have is that their songs have messy low-end. Partly, this is due to habits they haven't yet adopted. And partly, this is due to the nature of working with small- or mid-sized studio monitors without accounting for their nature. Let's start with the monitors.</p>
<p><strong>Relying Too Much on Stereo</strong></p>
<p>There's a good chance your primary monitors are revealing and have a lot of detail. You do own them with the intention of using them to hear the intricacies in the music you make, after all. And if they're even halfway set up as described in the <a contents="first half of this topic" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">first half of this topic</a>, the wide stereo image is lending its hand towards increasing separation. Stereo separation is a wonderful thing, but it can also be misleading in that it can encourage instruments to sound separate despite having overlapping frequencies. For example, a mix could have a low rhythm guitar on one side, a low-mid synth on the other side, and a fuzzy bass in the center. When you mix with good stereo separation, there's a good chance you're relying on panning to distinguish the three sounds from each other. But for people listening in less ideal situations, including venues with mono playback systems, mono systems in the ceilings of restaurants and retail stores, and even the reduced stereo of sitting on one side of a car or hearing a stereo Bluetooth speaker on the other side of the room, it becomes very difficult to separate the three instruments from each other.</p>
<p>How do you fix this? With EQ, of course. If you give each instrument its own space in the frequency range of the entire mix, it becomes much easier to distinguish each instrument from the others in mono and near-mono listening. It helps if you create specific peaks with EQ for each instrument, so each gets its place to shine in the spectrum of the entire mix. Also, this is where mixing in mono becomes very useful: when you can no longer rely on stereo separation to hear each instrument distinct from others, it forces you to fine-tune the volume balance and EQ separation for each, leading towards a mix that not only sounds better in mono, but stereo too. So while the stereo separation of well positioned, good-sounding monitors is important, you can begin to see how one needs to think around that advantage to better deliver a great mix that effectively translates to mono systems and near-mono listening scenarios.</p>
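<p>To make the mono check concrete, here's a rough sketch in plain Python of how you could quantify what a mono fold-down does to a signal's level. The (L+R)/2 fold and the function names are illustrative, not any DAW's actual implementation:</p>

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mono_fold_change_db(left, right):
    """dB level change when a stereo pair is folded to mono as (L+R)/2.
    Near 0 dB means the content survives the fold; large negative values
    mean phase cancellation is eating it."""
    mono = [(l + r) / 2 for l, r in zip(left, right)]
    stereo = math.sqrt((rms(left) ** 2 + rms(right) ** 2) / 2)
    m = rms(mono)
    if m == 0:
        return float("-inf")  # total cancellation
    return 20 * math.log10(m / stereo)

# Identical L/R content folds with no loss; a 90-degree phase offset
# between channels loses about 3 dB; fully inverted channels vanish.
n, fs, f = 4800, 48000, 100.0  # exactly 10 cycles of a 100 Hz sine
centered = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
shifted = [math.cos(2 * math.pi * f * i / fs) for i in range(n)]
```

<p>Running your busiest low-end elements through a check like this (or simply a mono button on your monitor controller) quickly exposes panning tricks that won't survive a club PA or a ceiling speaker.</p>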
<p><strong>The Deep Bass</strong></p>
<p>But there's one more aspect to having speakers like this that can be a disadvantage. Unless your system has a very robust network of subwoofers that deliver the last word in low-frequency reproduction, you're probably not hearing a lot of what's going on in the very low end. You can expect that many kick samples and bass synth patches have a lot of deep bass, but a lot of it can't actually be heard, even on a good system. If you remove the frequencies that are deeper than necessary, you not only increase the headroom of the mix, allowing it to sound cleaner or louder, but you also end up removing a lot of the muck that clouds the low frequencies in poor mixes.</p>
<p>This problem is also solved by EQ. Say your synth bass is creating valuable content at 50 Hz and also a lot of needless rumble below. My spectrum analyzer shows that many patches create strong signal down to 10 Hz and lower! Your mix will sound better if you use EQ to high-pass the sound of the synth bass just below the relevant content. You can do this by ear, but it becomes significantly easier when you use an EQ with a built-in spectrum analyzer, like <a contents="Pro-Q 2" data-link-label="" data-link-type="url" href="https://www.fabfilter.com/products/pro-q-2-equalizer-plug-in" target="_blank">Pro-Q 2</a> by <a contents="FabFilter" data-link-label="" data-link-type="url" href="https://www.fabfilter.com/" target="_blank">FabFilter</a>, or <a contents="H-EQ" data-link-label="" data-link-type="url" href="https://www.waves.com/plugins/h-eq-hybrid-equalizer#h-eq-hybrid-equalizer" target="_blank">H-EQ</a> by <a contents="Waves" data-link-label="" data-link-type="url" href="https://www.waves.com/" target="_blank">Waves</a>. With the spectrum analyzer, you can easily pinpoint where the content is and shape your high-pass around the relevant frequencies. In this scenario, I would use a high-pass filter to roll off the bass at about 45 Hz with as steep a filter as I can use without creating audible problems.</p>
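<p>Plugins like Pro-Q 2 do all of this with a click, but to make the filter itself concrete, here's a minimal sketch of the standard second-order (12 dB/octave) high-pass from Robert Bristow-Johnson's widely used Audio EQ Cookbook. Cascading two or more of these gives the steeper slopes discussed above; the 45 Hz cutoff matches my example.</p>

```python
import cmath
import math

def highpass_biquad(f0_hz, fs_hz, q=0.7071):
    """Normalized coefficients for an RBJ-cookbook 2nd-order high-pass."""
    w0 = 2 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b = [(1 + cosw) / (2 * a0), -(1 + cosw) / a0, (1 + cosw) / (2 * a0)]
    a = [1.0, -2 * cosw / a0, (1 - alpha) / a0]
    return b, a

def gain_at(f_hz, fs_hz, b, a):
    """Magnitude response of the biquad at a given frequency."""
    z = cmath.exp(-2j * math.pi * f_hz / fs_hz)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)
```

<p>With a 45 Hz cutoff at 48 kHz, 10 Hz rumble is cut by roughly 26 dB while 200 Hz content passes essentially untouched, and the cutoff itself sits at the familiar -3 dB point.</p>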
<p>You may think you want to keep the sub-bass frequencies because you want your mix to sound full and deep. So do I! But if you give this a try, you'll realize that the mix as a whole sounds like the bass is deeper and clearer and has greater punch when you limit the low frequency of each instrument to where it belongs, including low-bass instruments like kick drums and bass synths.</p>
<p><strong>Working with Low-Mids</strong></p>
<p>This also extends to other instruments that generate a lot of low-frequency content. You may not think of pianos and guitars as bassy instruments, but they have heaps of low-frequency content depending on which octave they're played in. And while that full-spectrum sound is great for solo performances, it substantially clouds the low-end in a full mix. Rolling off the lows just below the relevant content really helps. Same with synthesizers, as many patches have more bass than they need, and far more than a good mix calls for.</p>
<p>This continues to surprise me: snare samples and even clap and hat samples can be the same way! Some have loads of low-end that you wouldn't expect. Not only is the low-frequency energy not necessary in a full mix, but it's destructive, and removing it is absolutely beneficial to the mix. Establish the habit of checking each track in your song with a spectrum analyzer to visually see if there are unnecessary frequencies present, particularly in the very low frequencies that your speakers likely don't handle like a champ.</p>
<p>And remember that, in the context of a full mix, you often don't want one instrument to sound full-spectrum. Though a lead synth may sound killer full-spectrum when soloed, there's a very good chance that it detracts from the mix, and that the mix would benefit from a band-limited lead synth leaving the high-end for cymbals and the consonants in the lead vocal. Likewise, rolling off the lows of the lead synth makes room for the bass and kick to shine and provide depth and balance to the song.</p>
<p><strong>Other Tips</strong></p>
<p>A few other little elements of house-cleaning can polish your low-end further. Make a habit out of rolling off the bass in your reverb buses. It just doesn't need to be there, and it muddies and clouds the bass in the rest of the mix. Likewise, delay buses generally don't need a lot of bass to sound effective, and rolling off the low frequencies of the delay can clean up the low-end of the mix.</p>
<p>Also, it's a great practice to roll off the bass when layering kick samples or bass synth patches. For example, if my bass sound is made up of three layers of synths, with one for mid-range grit, one for high frequency grit, and one for a clean deep tone, it generally sounds best to high-pass the first two layers so only the clean, deep layer is providing the anchor of bass required. Remember these steps to clean up the low-end towards a mix that sounds clearer and translates across systems with greater ease.</p>
<p> </p>
<p><span class="font_large"><strong>10) Do You Need More Practice Listening?</strong></span></p>
<p>It can be really easy to dismiss the reasons why professionally engineered music can sound better than yours. You might want to say, "He has <a contents="amazing analog gear" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/plugins-vs-hardware-gear" target="_blank">amazing analog gear</a> in his million dollar studio." But the quality of a mix comes down to how tools are used far more than the tools themselves. To that, you might reply, "But those guys have golden ears and I don't."</p>
<p><strong>The Source of 'Golden Ears'</strong></p>
<p>This is the point I want to address. 'Golden ears' aren't genetic; you can't be born with them. (Though they can be lost, so if you're a drummer or frequent concert-goer, I strongly recommend hearing protection.) But there's equality in this: the people with golden ears are the people who developed golden ears. It all comes down to what many call 'active listening'. If you're shaking your tush at a club, you're probably not actively listening. But if you notice little things while listening to the radio, like "that snare sounds really sharp" or "huh, I can only pick out these three instruments when the volume is this quiet" or "I wonder what about those grungy drums makes them sound so good", you're well on your way. This is active listening, and making a habit of it will help you hear far more into your music and all other music you listen to. This is critical to having the ears and attention to detect the nuances you need to hear to make good mixing decisions.</p>
<p style="text-align: center;"><em>Pro-tip: reference checks not only help your mixes translate, but make for superb active listening practice.</em></p>
<p>I want to say it's like laser eye surgery for your ears, but it's really not. There is no quick fix or magic solution. Which, again, is a great equalizer in that other people don't have some magic key you don't to help them become great engineers while you struggle. A much better example is learning an instrument. I do okay with guitar, and I have an okay-guitar. And if I want to get better, I need to put in the hours to practice. And though undoubtedly Eric Clapton has many incredible guitars, if he and I were to trade for a day, he could still play much better music off of my okay-guitar than I could off of any of his great guitars. Because he's put in many thousands of hours practicing and performing that I haven't. <a contents="It all comes down to experience" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better" target="_blank">It all comes down to experience</a>.</p>
<p>If you are an engineer that only listens passively, you're not going to learn much. And if you ever become good, it will take a long time to get there.</p>
<p>But if you start listening actively, whenever you're waiting in the check-out line at the store, or riding in a car with the radio on, or resting between sets at the gym, you'll begin to notice things. And just noticing is an incredible teacher. In fact, I learn more about engineering from twenty minutes of actively listening to popular songs than I do from an hour of reading a textbook on mixing.</p>
<p><strong>Fast-Tracking Your Ears</strong></p>
<p>But you should know that there is a way to kick your ears into high gear: ear training. There are two kinds of ear training. There's the kind that musicians use to better hear pitches and intervals, which I encourage if you're a singer or musician in any sense. And there's also the kind of ear training relevant to engineering, and to this blog post on helping your mixes translate better: this ear training helps you better hear small details in audio. There are loads of software titles, free and paid, computer and mobile, that can help you train your ears. The one that I've personally used the most is the free beta project made by Harman called How To Listen. It's available for PC and Mac, and can be downloaded <a contents="here on their blog" data-link-label="" data-link-type="url" href="http://harmanhowtolisten.blogspot.com/" target="_blank">here on their blog</a> if you scroll down a little in the post.</p>
<p>If you want to give me a run for my money, let me know what scores you can get in the various game modes. I'll share mine. A little healthy competition pushes all of us to learn.</p>
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>There you have it: five tips that you can practice while mixing, after mixing, and in your downtime between mixes to better help your music translate across other stereos, sounding its best for as wide an audience as possible. And combined with <a contents="the steps in the first half of this topic" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">the steps in the first half of this topic</a>, you have the foundation to provide clarity and consistency across all of your future work.</p>
<p>Though these tips absolutely will help your mixes translate, I admit I didn't go into the nitty-gritty of using any specific type of plugin. Let me know in the comments below if you'd like future blog posts to be focused on getting the most out of EQ or compression, or any other plugin.</p>
<p>Milo Burke</p>
<p><span class="font_large"><strong>10 Steps to Mixes That Translate: Part 1</strong></span> (2017-06-13)</p>
<p> </p>
<p>So you've just <a contents="finished your final mix" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">finished your final mix</a>, and it sounds amazing on your monitors or headphones. Excitedly, you play it for your friend on his stereo, and the mix just falls apart. Maybe the frequency balance is all wrong. Maybe the balance of instruments is completely off. Maybe it sounds weak and hollow instead of full and powerful. Maybe it's enough to make your ears cringe.</p>
<p><em>What happened??</em></p>
<p>The short of it is that your mix sounds good on your speakers or headphones, but doesn't translate to other speakers or headphones. Why? Well, it could have been one of many things. Or many of many things, potentially, if your mix is very challenged. We're going to take a look at what could have gone wrong.</p>
<p>In Part 1 of this topic, we're going to explore five ways your equipment and your room may not be set up to help you deliver mixes that translate. In Part 2, <a contents="we'll cover what you can do as an engineer to create mixes that translate better." data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/10-steps-to-mixes-that-translate-part-2" target="_blank">we'll cover what you can do as an engineer to create mixes that translate better.</a></p>
<p> </p>
<p><strong><span class="font_large">1) Are Your Speakers Holding You Back?</span></strong><strong><span class="font_large"> </span></strong></p>
<p>When you mix music on studio monitors, you rely on those speakers and the information they give you to make several thousand decisions throughout the course of your work. But if your monitors just aren't up to snuff, the inaccurate and incomplete information they provide you will guide you to make wrong decisions. Enough wrong decisions and your mix just won't sound good on speakers other than your studio monitors. This is why you need clear, uncolored studio monitors to help you create better mixes.</p>
<p>After all, you wouldn't expect to make a great painting while wearing tinted sunglasses, would you?</p>
<p><em><strong>How monitors affect your mix</strong></em></p>
<p>Your speakers need to have a flat enough frequency response that you don't feel the need to correct your music's spectrum by boosting here and cutting there. Your speakers need to play deep enough to show you what's going on in the low-end of your song. Similarly, they need to play high enough to show you whether or not you have a shrillness problem in your mix.</p>
<p>And it's not just the obvious limits of frequency response that affect your mix. Speakers need to sound clear and sharp in order to help you make decisions. If your speakers aren't clear enough, you may not hear swallowing noises that need to be edited out between vocal phrases, or bad fades that sound like a click or a thump on speakers precise enough to reveal them. Also, when listening on cloudy speakers, it can be really hard to fine-tune the amount of reverb an instrument needs, or exactly how much delay supports an instrument without overwhelming it. Even balancing the levels of your mix can be a challenge when you can't accurately dial in how loud a supporting instrument needs to be so that it's clearly heard on great speakers while merely adding strength on lesser ones.</p>
<p><em><strong>Assessing speakers and how they translate</strong></em></p>
<p>The most accurate way to determine if speakers translate well is to do a few mixes on them and see how the mixes sound elsewhere. Put them to the test doing exactly what you need speakers to do for you. But you can only do this with speakers you own, and it's less than helpful advice if you're reading this because your mixes already aren't translating.</p>
<p>The easiest way to tell if speakers are likely to translate well is to listen to how good they sound when you play well-engineered music on them. Well-engineered music, no matter the genre, should sound reasonably good on just about any speakers or headphones, and it will sound better as it's played on better speakers. But if you know a song sounds good on other speakers yet you don't like how it sounds on this specific pair of speakers, then you know you have a problem. Because if you make your music sound good on these speakers that you now identify as flawed, then your mixes just won't translate.</p>
<p>There are a dozen more ways to evaluate if speakers are good for mixing, like how much distortion they add to the music, how fatiguing they are to listen to, and how they integrate into your room. But you can choose a great pair of speakers without factoring in those more nebulous criteria.</p>
<p>If you need help identifying if you need new speakers, or help navigating the purchase decision, <a contents="I cover these aspects in detail in another post." data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/are-your-speakers-good-enough" target="_blank">I cover these aspects in detail in another post.</a></p>
<p> </p>
<p><span class="font_large"><strong>2) Are You Listening Only on Headphones?</strong></span></p>
<p>This is a tricky one. On one hand, headphones can sound surprisingly detailed, can have great frequency extension, aren't affected by less-than-optimal room acoustics, and good headphones can cost a lot less than good speakers. On the other hand, even great headphones won't necessarily lead to good mixes.</p>
<p><strong><em>The Challenges</em></strong></p>
<p>One of the biggest reasons is that headphones tend to skew your perspective of stereo imaging. The phantom-center doesn't act as it should, you can be tricked into making decisions that cancel in mono playback scenarios, extreme separation can lead you to rely on EQing less than you should, and headphones tend to encourage mixes that sound narrow on traditional speakers.</p>
<p>But stereo-field aside, headphones can be dangerous in that they don't always convey the low-end how it's heard at high volume in a large room. Also, the volume balance of instruments can sound skewed, and you'll miss out on the tactile slam of transients if you mix on headphones alone.</p>
<p>Don't get me wrong, headphones are an important check to know that your headphone listeners won't be left out in the cold. But mixing on headphones is often a poor choice compared to mixing on speakers.</p>
<p><strong><em>The Workaround</em></strong></p>
<p>That said, if you're enormously budget constrained, or you simply can't make much noise at all, or your work is completely mobile, headphone mixing can work. But you need to learn the disadvantages of your headphones, and headphones in general, so you can compensate. And you need to be extra thorough with your <a contents="reference checks" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">reference checks</a> too.</p>
<p> </p>
<p><span class="font_large"><strong>3) Are Your Speakers Positioned Well?</strong></span></p>
<p>Just having good speakers doesn't mean they'll sound good. Have you experimented with speaker placement? It has a night-and-day effect on the sound of any speaker. And, in my experience, any speaker regardless of price, when positioned well, can sound equivalent to speakers costing four times as much that aren't positioned well.</p>
<p><em><strong>Starting Small</strong></em></p>
<p>The basics are that you want to make an equilateral triangle between the speakers and your head while you're in the listening position. If one speaker is 3' away from you, the other should be 3' away from you as well, and they should be 3' away from each other. Aim to keep your speakers positioned with the tweeter at ear-height.</p>
<p><em><strong>Support</strong></em></p>
<p>Though it matters where in the room you place this equilateral triangle of speakers and you, I find the surface speakers are resting on to be more important. If your speakers are resting on a desk or cabinet or shelf, it's very likely that the surface is resonating in sympathy to the vibrations of the speakers. This is especially true with low-end. The problem is that the desk or cabinet or shelf begins to act like a mid-bass speaker too, except with terrible frequency response and sluggish timing. This can lead to a really cloudy low-end. The best way to avoid this is to keep your speakers on speaker stands: preferably heavy, dense stands of the appropriate height to keep the tweeter at ear level. If this isn't an option for you, vibration-absorbing platforms can help, or monitor isolation pads, at a minimum.</p>
<p><em><strong>Location</strong></em></p>
<p>If you do a search for speaker placement guides, you'll find more information than you'll be able to absorb. Many recommend setting up the speakers on a short wall to fire lengthwise into the room, and to position the speaker/listener triangle in the middle of the room equally between the side-walls. These are good starting tips. Using calculators and measurements for placing speakers at specific points in the room can help even out your bass response, assuming you're relying on your primary monitors for bass and not separate subwoofers.</p>
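<p>Those calculators mostly start from the room's modal frequencies. As a rough sketch (the rigid-wall rectangular-room formula is standard acoustics; the room dimensions below are made up for illustration), the axial modes along each dimension are:</p>

```python
SPEED_OF_SOUND_MS = 343.0  # speed of sound in air at ~20 degrees C

def axial_modes(dimension_m, count=4):
    """First few axial room-mode frequencies (Hz) along one dimension:
    f_n = n * c / (2 * L) for a rigid-walled rectangular room."""
    return [n * SPEED_OF_SOUND_MS / (2 * dimension_m)
            for n in range(1, count + 1)]

# Hypothetical 5 m x 4 m x 2.5 m room
for name, size_m in [("length", 5.0), ("width", 4.0), ("height", 2.5)]:
    print(name, [round(f, 1) for f in axial_modes(size_m)])
```

<p>A placement calculator then tries to put the speakers and listening position where these build-ups and cancellations are least severe, which is why measurements in your actual room still beat any calculator.</p>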
<p>In my experience, it's especially important to bring the speakers out from the wall. Low-end muddiness aside, I find that speakers just sound truer when you get them at least 2-3' from the nearest wall, maybe even further.</p>
<p><em><strong>Getting Technical</strong></em></p>
<p>The full story is that an equilateral triangle might not be ideal: maybe your listening position should be a little closer or a little further. Some speakers sound best pointed at the wall behind you, others directly at you, and others somewhere in-between. Some speakers don't actually sound best with the tweeter at ear-height. Finding the ideal place for your speakers and the listening seat is more complicated than a computer can predict, and it is best found through trial and error. But using the simplified tips I mentioned above should get you 90% of the way there in 5% of the time.</p>
<p>Whether your speakers cost $15,000 or $50, you can make the most of them by learning about speaker placement and finding the spots in your room that they sound best through trial and error. Think of it as a free speaker upgrade, and it absolutely will improve the translation of your mixes when you can better hear what's going on in each of your mixes.</p>
<p> </p>
<p><strong><span class="font_large">4) What Are Your Room Acoustics Like?</span></strong></p>
<p>This isn't a fun one to think about, but room acoustics are a huge deal. An as-big-a-deal-as-your-speakers kind of huge deal.</p>
<p><em><strong>The Problem</strong></em></p>
<p>The truth is that most every room suffers from some very serious issues including bass build-ups on some frequencies, bass suck-outs on other frequencies, masking, comb-filtering, flutter-echo, and more. It may be nice to have speakers that have a frequency response flat to ±3 dB, and an interface that has a frequency response flat to ±0.1 dB, but that doesn't mean much when your room very likely has a bass response of ±15 dB. And early reflections off of hard surfaces smear what you hear in a way that blurs detail and masks the information you need to make precise mixing decisions.</p>
<p><em><strong>The Good News</strong></em></p>
<p>The first bit of good news is that using just one tool, broadband absorption, can help solve most issues present in most rooms.</p>
<p>And the second bit of good news is that, while it may cost more than $10,000 to ideally treat the acoustics of a large mix space, you can make substantial progress in your space for less than the price of your interface, or far less if you're willing to get creative.</p>
<p><em><strong>Resources</strong></em></p>
<p>Room acoustics is too complex of a topic to cover in-depth in this brief look at improving mix translation. But if you're not up to speed on acoustics and acoustic treatment already, check out my previous post on <a contents="the fundamentals of room acoustics and how to solve problems" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/an-acoustic-primer-the-secret-to-better-mix-decisions" target="_blank">the fundamentals of room acoustics and how to solve problems</a>, and also take a look at <a contents="my personal journey through room acoustics" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-1" target="_blank">my personal journey through room acoustics</a> using a medium-budget, medium-complexity approach to dramatically improve the acoustics of my bedroom studio in an apartment-friendly way.</p>
<p>That said, room acoustics are just as important as your speakers, and even more important than their placement: tackling at least the fundamentals will provide the clarity and precision of sound you need to properly inform your mix decisions towards creating better-translating mixes.</p>
<p> </p>
<p><span class="font_large"><strong>5) Do Your Monitors Reflect Your Listening Preferences?</strong></span></p>
<p>This is one that most guides on mix translation miss, as do most recommendations for buying speakers. But it matters very much that your speakers sound how you like them to sound.</p>
<p><em><strong>A Few Stories</strong></em></p>
<p>First, as an example, recently at a Meetup group for producers, I met a gentleman that fundamentally disagreed with me on what "accurate bass" sounds like. To my ears, the system we were listening on and the way it was calibrated resulted in a massive "smile EQ" in that the bass and the treble were both tremendously boosted (in the shape of a smile, if you imagine a graphic equalizer). I would find it super difficult to mix on such a system because, to my ears, the speakers were grossly inaccurate. Yet he asserted that the bass sounded right, sounded as it should, and that his mixes sounded on those speakers like they did in his home studio.</p>
<p>While I know the monitors were off, I think it's best that the fellow continue to mix on a system like that. Though I'm positive the frequency response of that system isn't accurate, what would happen if he purchased monitors that are? More than likely, he'd be eternally dissatisfied with the bass response of music on his speakers, and he would almost certainly mix far too much low-end into his tracks according to his preference. After all, all we can really do while engineering is to mix according to our preferences.</p>
<p>Second, some years ago, I had an internship with a well-regarded mastering engineer in Florida. He quickly brought me up to speed on how tremendously important it is to him to have a playback system with a razor-flat frequency response. After months of working for him in his studio, I became accustomed to what a big-dollar system with a razor-flat frequency response sounds like. The same mastering engineer had a trusted friend that built his career out of positioning speakers and calibrating systems in high-end studios around the world. And for one afternoon, I was privileged to spend one-on-one time with this studio consultant listening to music and talking shop in his home listening room. After hearing the first song for just a few moments, my eyes went wide, and I told him that the bass wasn't flat! In fact, it was boosted. He turned to me with a smile and said, "I know. I like it this way." And I realized I liked it too. It was a small boost, but I just found it more satisfying than listening to the mastering engineer's flat system. There were other inaccuracies in the studio consultant's system too, which he acknowledged and said were his preference.</p>
<p>At the time, that was so freeing for me. It meant I could like the sound of stereos that I like. And that I'm not wrong or uneducated or misinformed to like what I like. And to this day, I prefer listening and engineering on speakers that have a little bass bloom. For me, this means that if it were time to go shopping for speakers again, I should shop for speakers that have just enough bass bloom to make me smile while listening to commercially engineered music I love.</p>
<p>Third, I have a friend who is a serious bass-head. If bass were a drug, he'd belong in rehab. He has a monstrous Hsu Research subwoofer for his powerful stereo in his tiny room. I've never measured exactly, but I estimate he has his sub turned 12-15 dB louder than even I, who enjoys a little bass boost, could appreciate. It's truly overkill. Yet he loves it, and that makes it the perfect system for him.</p>
<p><em><strong>Why This Matters</strong></em></p>
<p>I tell these stories and use these examples to help you, my reader, embrace what you like. And, more to the point, you can shop and calibrate accordingly. If you like a little bass bloom like me, or if you like your music sounding bass-shy, or even if you also belong in bass-rehab, don't be afraid to buy speakers that can deliver that for you. Same with the rest of the frequency spectrum, or any other aspects of speakers that you find you're drawn towards.</p>
<p>And if you already have your speakers, don't be afraid to calibrate according to your preferences. If your monitors have switches or calibration knobs, play with them while listening to well-engineered tracks you know and love. Adjust until things sound perfect to you. Same goes for your subwoofer's volume and crossover controls. Or tone controls, if you have an amplifier with them. And if you don't have any knobs to turn and your speakers' or headphones' frequency response is very far from your liking, it may be worth building a custom EQ preset to sit on your master bus while you work (though be sure to remove it before bouncing your track).</p>
<p>I delved more into story-telling for this point than I ever have before in my blog. But I wanted to emphasize this aspect of hearing speakers that most overlook, and I wanted to give examples that show character. If you like your food a little spicy and a little salty like I do, you may find it challenging to prepare meals appropriate for other people. Likewise, if you prefer extra bass and your monitors are bass-shy, you're very likely to produce booming mixes that translate poorly. This can be overcome by careful planning and restraint, but it's tedious and not conducive to creativity. But if you purchase or calibrate towards your preferences, you can create and mix freely without distraction or restraint, and you'll end up with a mix that translates better because of it.</p>
<p> </p>
<p><span class="font_large"><strong>Part 1 Conclusion</strong></span></p>
<p>I hope these first five tips can get you well on your way towards mixes that translate by taking a closer look at the elements of your system and room that are influencing your decisions. In Part 2,<a contents=" we'll cover aspects more related to your decisions themselves and things you can do within a session to create better translating mixes." data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/10-steps-to-mixes-that-translate-part-2" target="_blank"> we'll cover aspects more related to your decisions themselves and things you can do within a session to create better translating mixes.</a></p>
<p>I'd love to hear your stories. Do you have any speaker buying tips or placement tips that you want to share? What are you doing to address the acoustics in your room, and what are your listening preferences in speakers? Feel free to share in the comments below. I do my best to read every post.</p>
<p>Milo Burke</p>
<p><span class="font_large"><strong>Micro-Dynamics and Macro-Dynamics</strong></span> (2017-06-06)</p>
<p>In an earlier post, I wrote about <a contents="how mastering isn't a process" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/mastering-isn-t-a-process">how mastering isn't a process</a>. It's definitely worth noting that a mastering engineer has to approach each song differently in order to make it sound its best. That said, there are of course some systematic things a mastering engineer should pay attention to, some of which many readers may not already know. Two biggies are micro-dynamics and macro-dynamics.</p>
<p> </p>
<p><span class="font_large"><strong>Why Dynamics Matter</strong></span></p>
<p>If you're not caught up on the lingo, dynamics simply refers to the volume variation within a song. Dynamics are a good thing, in that <a contents="skillful use of loud and soft adds interest and emotion to music" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">skillful use of loud and soft adds interest and emotion to music</a>. An experienced pianist will play certain sections of music with more force or more grace than others in order to wring extra emotion out of the music. Likewise, a skilled engineer or producer can keep one section of a song quieter in order to emphasize the impact of another section of the song.</p>
<p>But poor dynamics can also be a problem. When a vocalist isn't in complete control of his or her singing volume, using a compressor can help rein in the dynamics to keep things sounding even and expected. And while a good drummer knows how to perform at a consistent volume level without playing accidentally exaggerated or accidentally soft drum hits, if you've ever heard music with a drummer who struggled with dynamics, you'll understand how distracting it can sound. So we want to maintain dynamics within music, but we want controlled, intentional dynamics; not sloppy, erratic dynamics.</p>
<p> </p>
<p>Dynamics are a big part of what makes a song sound good. Just to list a few reasons:</p>
<ul> <li>A good song has dynamics, in that loudness comes and goes in a way that emphasizes the emotion and the story of a song.<br> </li> <li>A good song has contained dynamics, in that too much dynamic range in the performance of any one instrument or all instruments makes things sound sloppy.<br> </li> <li>A strong rhythm gets its movement and strength through the use of dynamics. This is the part that makes you want to move.<br> </li> <li>A great arrangement makes a great song when it brings you from moments of peace to moments of emotion. Without quiet, there can be no loud.</li>
</ul>
<p>And as the title and introduction of this blog post suggest, we're going to be looking deeper into two of these forms today, the two that can really help shape the master of a song.</p>
<p> </p>
<p><span class="font_large"><strong>Micro-Dynamics</strong></span></p>
<p>Simply put, micro-dynamics are the volume changes within a small portion of a song. Think of them as the loud and soft moments in a single measure of music. A good mix emphasizes the groove of the music, especially in the chorus or the drop. And a good mastering engineer aims to make sure the song sings, even within a small moment, because the dynamics are just right.</p>
<p>What are the elements of good micro-dynamics?</p>
<ul> <li>The drums have slam and power. They shouldn't be over-compressed. And when compressing drums, it's critical to use a slow attack in order to maintain the proper snap of the drums through the mix.<br> </li> <li>The rhythm of the song seems to pulse. This has a lot to do with the <a contents="volume balance between the instruments in the mix" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">volume balance between the instruments in the mix</a>, in that steady-state instruments like pads are mixed low, while percussive instruments like drums are mixed loud.<br> </li> <li>Compression feels rhythmic, too. With a slow attack and slow release, the drums poke through, the mix ducks, and then the mix rises back up in volume before the next drum hit, when it all happens again. When the compressor is set just so, the mix pulses and moves with the rhythm of the drums.</li>
</ul>
<p>How can a mastering engineer improve the micro-dynamics of a song?</p>
<ul> <li>If the micro-dynamics are out of control, the mastering engineer can use compression to get the instruments to gel together more smoothly, and to avoid excessive volume variation.<br> </li> <li>If the micro-dynamics of the song are too squashed from the mix, the mastering engineer can use expansion to bring out volume variation within a measure.<br> </li> <li>If the transients of the drums aren't getting enough attention, the mastering engineer can compress with a slower attack.<br> </li> <li>If the transients of the drums are too excessive, the mastering engineer can tame them by using a compressor with very fast attack and release times.<br> </li> <li>A mastering engineer can bring out the groove and shape of each measure by balancing a compressor just right, to have the level of the mix subtly bounce and sway around the rhythm of the drums.</li>
</ul>
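<p>For readers who like to see the mechanics, here's a minimal sketch (in Python, not any particular plugin's algorithm) of the attack and release behavior described above. The threshold, ratio, and time constants are illustrative numbers I made up, not recommendations; the point is that a slow attack lets the first milliseconds of a drum hit pass at nearly full level before gain reduction clamps down.</p>

```python
import math

def compress(samples, sample_rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=30.0, release_ms=200.0):
    """Toy feed-forward compressor with a one-pole envelope follower."""
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    envelope = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # The envelope rises at the attack rate and falls at the release rate.
        coeff = attack if level > envelope else release
        envelope = coeff * envelope + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(envelope, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce only the excess above threshold
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out

# A "drum hit": a loud 10 ms burst followed by a quieter sustain.
hit = [1.0] * 480 + [0.1] * 4800
slow = compress(hit, 48000, attack_ms=30.0)   # transient passes almost untouched
fast = compress(hit, 48000, attack_ms=0.1)    # transient is clamped immediately
```

<p>Run with a slow attack, the very first samples come through at nearly unity gain before the compressor catches up; with a near-instant attack, the same transient is squashed from the first sample, which is exactly the "over-compressed drums" problem described above.</p>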
<p> </p>
<p><strong><span class="font_large">Macro-Dynamics</span></strong></p>
<p>While micro-dynamics focus on the little details of a moment in the song, macro-dynamics take in the larger picture of volume over the whole of the song. How much power does the chorus have compared to the verse? What can be done with volume to control the emotion of the song? These things are subtle to the listener, but they make a song sound better when intentionally controlled.</p>
<p>What are the elements of good macro-dynamics?</p>
<ul> <li>The chorus or drop feels surprisingly powerful when it begins.<br> </li> <li>The verses save power for the choruses without sounding weak in their own right.<br> </li> <li>The emotion and tension of the song varies by section.<br> </li> <li>The emotion and tension of the song matches the emotion and tension of the lyrics and instrumentation and arrangement of the song.</li>
</ul>
<p>How can a mastering engineer improve the macro-dynamics of a song?</p>
<ul> <li>The mastering engineer can create a near-final bounce of the song with all of the previous steps of mastering accounted for, and then use simple volume automation to adjust the loudness by song section.<br> </li> <li>To make the choruses feel more powerful, the engineer can duck the volume of the verses a little in order to make the chorus arrive with surprise and weight.<br> </li> <li>To make each new song section feel as if it arrives with interest, the engineer can start a verse fairly loud, then use a long, soft fade to gradually lower the volume of the verse before the chorus. If done right, the loss of volume sounds imperceptible, yet provides more room for the power of the chorus.<br> </li> <li>Automation adjustments are valuable changes that can really make a song sing, but subtlety is the name of the game. The audience shouldn't be able to notice reductions in volume, but can still appreciate the power of the chorus when it arrives.</li>
</ul>
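<p>To make the "long, soft fade" concrete, here's a small Python sketch with hypothetical numbers: a linear-in-dB automation ramp that eases the verse down by 2 dB over eight seconds so the chorus arrives with that much extra contrast. The exact depth and timing are taste, not a rule.</p>

```python
def fade_gain_db(t, fade_start, fade_end, total_cut_db=2.0):
    """Gain automation in dB at time t (seconds): 0 dB before the fade,
    ramping linearly (in dB) down to -total_cut_db by fade_end."""
    if t <= fade_start:
        return 0.0
    if t >= fade_end:
        return -total_cut_db
    frac = (t - fade_start) / (fade_end - fade_start)
    return -total_cut_db * frac

def db_to_linear(db):
    """Convert a dB gain into a linear multiplier for the audio samples."""
    return 10.0 ** (db / 20.0)

# Verse runs 0-10 s; fade from 2 s to 10 s, then the chorus hits at full level.
for t in (0, 4, 8, 10):
    db = fade_gain_db(t, 2.0, 10.0)
    print(t, round(db, 2), round(db_to_linear(db), 3))
```

<p>A 2 dB cut ends up around a 0.79x multiplier on the samples, which is the kind of reduction listeners feel as chorus impact without ever noticing the verse got quieter.</p>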
<p> </p>
<p><span class="font_large"><strong>Wrapping Up</strong></span></p>
<p>It's easy to overlook aspects of the song like this. Especially if you're <a contents="limiting your music far more than it needs to be" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">limiting your music far more than it needs to be</a>. But proper use of macro- and micro-dynamics really does help a song speak in a stronger voice. They may seem subtle, but that's what we're looking for: not one magic gimmick to somehow make music have more sparkle, but meaningful aspects of a song to tune into while engineering, and valuable ways to add polish to an otherwise strong song.</p>
<p>If focusing on the dynamics has helped your engineering, or if you feel I've left out a part of handling the dynamics, please write in the comments below. I love reading your replies.</p>
<p><em>Milo Burke, tag:miloburke.com,2005:Post/4722787, published 2017-05-30T11:35:00-06:00, updated 2018-04-25T20:24:31-06:00</em></p>
<p><span class="font_large"><strong>My Journey With Acoustics: Part 1</strong></span></p>
<p>Last week, I wrote about acoustics for the first time in this blog. Everything I wrote about was theory, based on the books I've read, my first-hand impressions of various rooms and arrangements, as many podcast episodes on acoustics as I could find, and an unhealthy number of forum threads digested. However, this week, the rubber meets the road. I finally made substantial headway on the room treatment project I've been putting off for longer than I care to admit. Let me tell you about what I built, where I put it, and how it sounds.</p>
<p> </p>
<p><span class="font_large"><strong>What I Built</strong></span></p>
<p>In last week's post, I stressed the importance of <a contents="using absorption to treat the first reflection points" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/an-acoustic-primer-the-secret-to-better-mix-decisions" target="_blank">using absorption to treat the first reflection points</a> in your room. This is exactly where I started. I used Owens Corning 703 rigid fiberglass, a mix of four-inch and two-inch panels, to build six-inch-thick broadband absorbers. The panels are 48" by 24", providing generously sized traps without requiring any cutting. I built a narrow frame of 1x3" common board <em>behind</em> the insulation instead of around it, in order to maximize the effectiveness of the traps. This is a method many overlook, but Nenne Effe does a neat job demonstrating the technique <a contents="in her video on YouTube" data-link-label="" data-link-type="url" href="https://www.youtube.com/watch?v=_6g13taN9Lc" target="_blank">in her video on YouTube</a>. In fact, her method is one of the best I've seen, and I only deviated in a few ways, all mentioned in this post. I even used her technique for achieving tidy fabric corners.</p>
<p>I built four of these traps, with the four-inch panel closer to the listener and the two-inch panel closer to the wall, addressing the theory that using multiple layers of insulation can cause reflections, particularly if adhesive spray is used between the layers. I figure that if reflections occur, traveling through a solid four inches of insulation before the adhesive spray and four inches after reflecting off the adhesive spray should be enough to absorb any frequency high enough to be subject to that reflection.</p>
<p>I blazed my own trail for the looks: I bought the niftiest looking fabric I could find at Jo-Ann Fabrics that was breathable, and I ended up with a product line called Silkessence, which is shiny and silky and breathable and textured and available in a variety of colors. And learning from earlier mistakes, I emphasized the corner edges by using corner guards cut to the depth of the trap, so the corners don't sag very much.</p>
<p>Six-inch traps are pretty heavy, even without a beefy wooden frame covering all the sides. I struggled to find a method to place them at height, ideally with a gap between the traps and the wall, to increase low-frequency absorption. I landed on Ikea's <a contents="Molger Wall Shelf" data-link-label="" data-link-type="url" href="http://www.ikea.com/us/en/catalog/products/00242358/" target="_blank">Molger Wall Shelf</a>, one per trap, placed on the floor. For stability, I used "hook and eye" packages from the hardware store, with one side screwed into the upper wooden back of each trap, and the other side screwed into the wall. The hooks-and-eyes are not load-bearing, but just a safety measure to keep the traps from tipping over. The four-inch hook screwed into the one-inch board, and the one-inch eye screwed into the wall, effectively spacing the six-inch traps six inches from the wall, which is about the ideal distance for bonus low-frequency absorption.</p>
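<p>If you're curious why the air gap helps, a common rule of thumb (a simplification, not a full acoustic model) says a porous absorber stays effective down to roughly the frequency whose quarter wavelength fits within the distance from the wall to the trap's front face. A quick Python sketch of that arithmetic, using my trap dimensions as the example:</p>

```python
SPEED_OF_SOUND = 343.0  # meters per second, at room temperature
INCH = 0.0254           # meters per inch

def quarter_wave_freq(front_face_depth_m):
    """Rough lowest effective frequency for a porous absorber whose front
    face sits front_face_depth_m from the wall (quarter-wavelength rule)."""
    return SPEED_OF_SOUND / (4.0 * front_face_depth_m)

flush = quarter_wave_freq(6 * INCH)    # 6" trap mounted flat against the wall
gapped = quarter_wave_freq(12 * INCH)  # same trap spaced 6" off the wall
print(round(flush), round(gapped))     # the air gap roughly halves the lower limit
```

<p>Doubling the front-face distance halves the lowest frequency the trap usefully absorbs, which is why a gap equal to the trap's own thickness is such cheap bonus low-frequency absorption.</p>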
<p> </p>
<p><strong><span class="font_large">Where I Placed Them</span></strong></p>
<p>I decided to position the traps vertically, to make the most of the vast quantity of 703 fiberglass I bought and the floor space available. This provides effective absorption at the height of a seated guitar player and a standing singer, along with the primary goal of treating at the height of the speakers. And, it was a perfect fit for the height of the Molger shelves from Ikea.</p>
<p>Deciding where the first two traps belong was easy: vertically positioned at the side-wall first reflection points. I used the "mirror trick" to find out exactly where they belong. With the traps as thick as they are, and positioned six inches from the wall, my sidewalls are now absorbing lower frequencies than my monitors can produce, subwoofers not included. Based on all the theory I've read and learned, sidewall absorption provides the best bang for the buck if you only have a couple of traps to place.</p>
<p>The second two were also easy. If I was only planning to build four traps, I would have put the second pair behind the listener at the first reflection points on the back wall. But since I planned a more extensive acoustic treatment project, I placed them immediately next to the other traps on the side walls, slightly behind the listener. After all, I'm already losing the wall space on the sides, so I might as well make the most of the space that instruments can't fit in due to the first two traps. This also creates a wider reflection-free zone for recording vocals and guitars.</p>
<p>Due to the width of the Molger shelves I'm using as stands, there's a neat three-inch gap between the traps on each wall, small enough to be acoustically invisible at the angle the traps are from the speakers.</p>
<p> </p>
<p><span class="font_large"><strong>Want To See?</strong></span></p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/5322d46c3f09724690c34d41548bd86ac0725667/original/left-wall-traps-2017-may.jpg?1496253553" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>The left wall of my studio space. Thankfully, the gap between the traps lets in a little natural light.</em></p>
<p style="text-align: center;"><em>Though I'll still likely require vitamin D supplements.</em></p>
<p> </p>
<p><img src="//d10j3mvrs1suex.cloudfront.net/u/210695/be5d846262ed3f8f465a509760a0b050966d26c3/original/right-wall-traps-2017-may.jpg?1496253553" class="size_l justify_center border_" /></p>
<p style="text-align: center;"><em>The right wall of my studio.</em></p>
<p> </p>
<p> </p>
<p><span class="font_large"><strong>How It Sounds</strong></span></p>
<p>My first impression is clarity. Things are just clearer. I hear more detail and definition in every song I play. I'm picking up on new little bits of percussion, and more granular attributes in synths that I had never noticed before. It's the audio equivalent of cleaning your glasses.</p>
<p>My second impression is stereo separation. Everything is suddenly so wide! In songs with doubled vocals or doubled guitars, the width and power pop more than I'm accustomed to. I'm also realizing synth parts have significantly more stereo interest than I've ever heard before. Not to mention, I'm hearing some panning beyond the speakers in some songs - a neat trick I've never heard from my own system before.</p>
<p>My third impression is clarified bass. Though the traps are placed at the first reflection points, not in the corners, they are still the first elements of low-frequency absorption I have in my room. I can hear noticeably more power in the kick and bass, and more separation between the two. Somewhat subtle, but it's enough to make me smile.</p>
<p>My fourth impression is the depth of the phantom-center. I wouldn't say I'm getting a pinpoint phantom-center, but on many songs, it sounds like the singer is no longer at my screen, but tucked a few feet behind it. This tells me I'm heading in the right direction. I'm not worried about the phantom-center not sounding razor-sharp just yet: after all, I still have the back wall, front wall, and ceiling reflection points to absorb before the listening seat is truly in a reflection-free zone.</p>
<p>And my major takeaway after listening through almost a hundred songs that I previously liked, buried in playlists on Google Music, is that I feel I now have a much better handle on which mixes from songs I love are lacking, and which are fully developed. Some songs now bother me with how the hi-hat frequencies are EQed, or the vocal now sounds problematic, or the entire mix sounds too narrow, or the entire mix sounds too hazy. I'm picking up on a shrill high-end sheen on a lot of digital songs that I'm not enjoying at all. The remarkable thing to me is that these are songs I've heard dozens of times; songs I enjoy. But now, more than ever before, I feel like I know what I don't like about how they're engineered in very defined terms. Or, sometimes, I'm totally blown away by how clear and full and defined things sound, when I'm listening to a great mix.</p>
<p> </p>
<p><span class="font_large"><strong>Final Thoughts</strong></span></p>
<p>What this means to me is that I now feel I have a secret weapon in assuring that my mixes are solid and that they will translate. Of course, I'll still do reference checks, especially initially after these changes. But I feel like I'm hearing music finally in focus, and I'm now that much better able to do what needs to be done in my own music to make things sound right. And the journey continues: <a contents="keep reading here" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-2" target="_blank">keep reading here</a> to hear about the next stage in my room treatment project.</p>
<p>As I was hoping, I'm observing that acoustic treatment is not only a tremendous upgrade to the monitoring chain of a studio, but a powerful tool in helping one hear what is lacking and what works in an unfinished mix. I recommend everyone reading this blog to consider what options you have for acoustic treatment, so you can take advantage of the benefits too.</p>
<p>And if you are thinking you don't know where to begin, I recommend checking out my previous blog entry, <a contents="An Acoustic Primer: The Secret to Better Mix Decisions" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/an-acoustic-primer-the-secret-to-better-mix-decisions" target="_blank">An Acoustic Primer: The Secret to Better Mix Decisions</a>, to get you started.</p>
<p><em>Milo Burke, tag:miloburke.com,2005:Post/4719447, published 2017-05-23T12:55:00-06:00, updated 2018-04-25T20:41:12-06:00</em></p>
<p><span class="font_large"><strong>An Acoustic Primer: The Secret to Better Mix Decisions</strong></span></p>
<p>We all want to make mixes that sound cleaner, clearer, have more slam, more power, and more color. We all want interesting, professional sounding mixes, right? What can we do to close that gap between our mixes and the songs we hear from the pros?</p>
<p>I shared in a previous post on how to use reference checks to better <a contents="help your mix sound pro and powerful" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song">help your mix sound pro and powerful</a>. It's a tool I rely on with every mix I make. And I shared in a different post that <a contents="mastering is more about fixing problems the mix contains" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/mastering-isn-t-a-process">mastering is more about fixing problems the mix contains</a> than it is a defined process of its own. But is there a secret to making mixes that require less fixing in mastering? This is especially important for people mastering their own work.</p>
<p>There is. And the secret is to control the acoustics of your room.</p>
<p> </p>
<p><span class="font_large"><strong>Why Acoustics Matter</strong></span></p>
<p>I know it's not fun for most of us to think about. Acoustic treatment sounds complicated and expensive, and it doesn't have the cool-factor that a shiny new piece of gear has, or the immediacy of downloading a hot new plugin. But the acoustics of your room shape your music far more than you realize.</p>
<p>Imagine a photographer has a big grease smear on his camera's lens. He'll still have a good idea what the subject of his picture is, but because his equipment is smeared with grease, he can't act with precision to capture exactly the shots he wants. Not to mention, there's a major flaw in each shot. And further, he probably can't even tell if each picture is in focus. Yikes!</p>
<p>It's the same with room acoustics. The average room accentuates frequencies that shouldn't be accentuated and minimizes other frequencies that shouldn't be minimized. Further, poor room acoustics cloud the details, preventing you from hearing with clarity. But when you fix some of these problems, you can begin to hear your mix with so much more detail and precision. Then you can home in on the problems, easily hearing what needs to be fixed, which provides the direction you need to take a large step towards more professional-sounding, better-translating mixes.</p>
<p> </p>
<p><span class="font_large"><strong>Why Does This Matter When My Fans Don't Have Acoustic Treatment?</strong></span></p>
<p>It's true, most of your audience won't listen to your music in a treated environment. But consider that many of your listeners will be listening on headphones, where room acoustics have no bearing. They'll be listening to your music naked, not through a heavy parka. That's important to know.</p>
<p>But also, not all rooms with bad acoustics are the same. In fact, no two rooms are the same. All have slightly different problems. You need to be able to hear with clarity what needs to be done in your mixes so you can make good mix decisions, regardless of the environment your music will be heard in. You can't prepare for every stereo and every room that will play your music, but you can aim to make your music as clean and clear and balanced as possible, so it has the best chance of sounding good to every listener on every stereo.</p>
<p>And more to the point: you as the engineer strive to make perfect music, no matter who will listen back and what they will listen on. Good acoustics will help you zero in on what needs fixing.</p>
<p> </p>
<p><span class="font_large"><strong>Common Room Problems</strong></span></p>
<p>What problems does your room likely have?</p>
<p><strong>1) Masking of detail</strong></p>
<ul> <li>Sound travels directly from the speakers to your ears, of course. But sound also bounces off nearby hard surfaces and reflects back to your ears slightly later than the direct sound. This is a problem, particularly in small rooms. The result is that the sound emanating from your speakers sounds cloudy and indistinct, even a little phasey. High-frequency detail is significantly reduced, panning sounds less distinct, and this problem often adds a bright, ugly sheen to what you hear.<br> </li> <li>To remedy this, place absorption at each of the first reflection points in your room. In other words, put something soft and spongy wherever the sound bounces from your speakers before reaching your ears. You can find the 'first reflection points' by using the 'mirror trick': sit in your mix position, and have a friend hold a mirror flat against the wall, moving it around. Wherever that mirror can be placed that allows you to see one of the speakers in the mirror, you should place absorption.</li>
</ul>
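<p>The mirror trick works because a wall reflection behaves exactly like direct sound arriving from a mirror image of the speaker behind the wall. If you'd rather calculate than hold a mirror, here's a Python sketch of that image-source idea; the room coordinates in the example are made up for illustration:</p>

```python
def sidewall_reflection_point(speaker, listener):
    """First-reflection point on the wall y = 0, given (x, y) positions in
    meters, where x runs along the wall and y is the distance out from it.
    Mirrors the speaker across the wall, then intersects the straight line
    from that image source to the listener with the wall."""
    sx, sy = speaker
    lx, ly = listener
    image_y = -sy                      # mirror image of the speaker behind the wall
    t = (0.0 - image_y) / (ly - image_y)
    return sx + t * (lx - sx)          # x-coordinate of the bounce on the wall

# Hypothetical layout: speaker 1.0 m along the side wall and 0.8 m out from it,
# listening position 2.5 m along and 1.2 m out.
x = sidewall_reflection_point(speaker=(1.0, 0.8), listener=(2.5, 1.2))
print(round(x, 2))  # center the absorber at this distance along the wall
```

<p>In practice the mirror is faster and accounts for ear and tweeter height too, but the calculation is a nice sanity check before you commit screw holes.</p>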
<p> </p>
<p><strong>2) Uneven bass response</strong></p>
<ul> <li>The dimensions of all rooms cause problems: the walls reflect low-frequency sound waves back into the room. And the physical width and length of the room determine which wavelengths of bass get doubled up and which get canceled out, meaning that some frequencies are portrayed as artificially loud while other frequencies are artificially reduced in volume. This leads to very skewed mix decisions in the low-frequency realm, encouraging mixes that don't translate.<br> </li> <li>The solution for this is to absorb as much of the bass as you can by using bass traps. Bass traps absorb bass energy, keeping it from bouncing around the room to further boost and null frequencies. The sooner in time your bass energy is absorbed, the less boosting and canceling occurs in your room. 'Trapping bass' sounds undesirable, since we all love the sound and feel of clean, deep bass. But that's exactly what we achieve when we absorb bass. Bass traps can't absorb bass before sound passes from the speakers to your ears, but only after bass has been bouncing around the room - and it's those extra bounces that lead to that messy, uneven bass sound. By soaking up the surplus bass energy reflecting around the room, we can hear more of the original sound from the speakers and subs, which gives us that strong, clean bass sound that we love to hear. Bass traps can be placed anywhere in the room, though they are most effective in the corners where bass energy builds up the most.</li>
</ul>
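<p>The boost-and-cancel behavior described above comes from room modes. For the axial modes (the strongest ones, standing between one pair of parallel surfaces), the math is simple enough to sketch in a few lines of Python; the 4 m room dimension is just an example:</p>

```python
SPEED_OF_SOUND = 343.0  # meters per second

def axial_modes(dimension_m, count=4):
    """First few axial mode frequencies between one pair of parallel
    surfaces a distance dimension_m apart: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m) for n in range(1, count + 1)]

# A 4 m room length piles up bass energy near roughly 43, 86, 129, and 171 Hz.
print([round(f, 1) for f in axial_modes(4.0)])
```

<p>Each room dimension contributes its own series like this, which is why differently sized rooms misrepresent different bass notes, and why broadband bass trapping beats trying to fix one frequency at a time.</p>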
<p> </p>
<p><strong>3) Tubby bass response</strong></p>
<ul> <li>Bass energy is very hard to absorb: it requires a lot of soft material to pass through before the sonic energy of bass can be transferred into kinetic or thermal energy. What happens when there isn't enough soft material in the room? The bass continues to bounce around and around, taking a long time to die. This sounds very ugly in music, in that the kick drum sounds more like a whoosh than the impact it's supposed to be. And it makes it difficult to discern the tone and even the pitch of synth bass and bass guitar notes. If you have a hard time hearing if your bassline is in tune, or if you're unsure if it's your drum samples or your room that makes your kick drum sound wooly and soft, you're going to have a hard time making music that sounds good on other speakers in other rooms too.<br> </li> <li>Fortunately, the solution to this is the same as the solution to uneven bass response: absorb as much of the bass as you can as early as you can with bass traps. Bass traps are best built thick, a minimum of 4", but hopefully 6" or 8" thick, or even thicker in the corners. Absorbing the superfluous reflections of bass around your room will help clarify and tune up melodic bass content, and provide the slam that percussive bass deserves.</li>
</ul>
<p> </p>
<p><strong>4) Ugly reverb</strong></p>
<ul> <li>Quite simply, this is caused by having too many bare, hard surfaces in your room. You can hear it by clapping your hands, or even just speaking, in a lot of rooms. I know I could when I first moved into my current studio space, an extra bedroom in my apartment. When the high frequencies are allowed to bounce around endlessly from bare wall to bare wall, from hard floor to bare ceiling, the room takes on an artificially bright sheen that isn't pleasant sounding at all. And when your room has its own reverb going on, you'll tend to mix without enough reverb, which will sound funny on headphones or in other rooms. You also won't hear your music with the clarity you need to make the right mixing decisions, as the reverb in your music conflicts with the reverb in your room.<br> </li> <li>The trick to squash this, yet again, is more absorption. But this time, you need to aim for more coverage rather than extra-thick absorption, since high frequencies don't require much thickness or mass to absorb. Place absorption on your walls wherever there is room.</li>
</ul>
<p> </p>
<p><strong>5) Flutter-echo</strong></p>
<ul> <li>Flutter echo is the high-frequency zinginess that many rooms inherently have. Virtually all bedrooms and small rooms suffer from flutter echo. It's caused by having hard parallel surfaces, like walls, that are left bare. You can often notice something is off just from spoken voice, but it's very easy to identify by clapping your hands once and listening for little rapid zings back and forth. This problem sounds less like reverb and more like weird delay.<br> </li> <li>Like all the other problems I've mentioned today, the solution is absorption. It doesn't need to be thick to combat high frequencies, but it does need to cover as much surface area as possible, particularly in asymmetrical patterns. After all, you only need to treat one side of two opposing walls to kill flutter echo.</li>
</ul>
<p> </p>
<p><strong><span class="font_large">Important Notes</span></strong></p>
<p><em>First</em>, I should mention that every solution I'm offering today is accomplished with absorption. Diffusion is also a powerful tool for acoustic treatment. In short, diffusion disperses sound waves minutely in a myriad of directions instead of strongly in one direct reflection, accomplished without absorbing the sound. Used in quantity, it maintains or even lengthens the decay of sound in a room.</p>
<p>However, good diffusion is difficult and expensive to implement. Also, it is only suitable for large to very-large rooms, while many commercial studios and the vast majority of project studios exist in small rooms. And most importantly, the five common room problems I mentioned above are the five you're most likely to have in your room, and the five that are most important to treat. And it so happens that all of them are most easily and effectively treated with absorption, not diffusion.</p>
<p><em>Second</em>, though I mentioned that detail masking and ugly reverb and flutter echo can all be absorbed by thin acoustic treatment, it's vitally important not to <strong><em>only</em></strong> use thin absorption. If you only use thin absorption, like 1" pyramid foam, you'll suck all the high-frequency reflections out of the room while doing nothing to combat the bass problems and almost nothing to combat the mid-range problems in your room. To avoid this pitfall, treat with a mix of absorption depths, with a minimum thickness of 2" for any absorber. It's more important to cover 30% of your walls with absorbers twice as thick than to cover 60% of your walls with absorbers half as thick.</p>
<p> </p>
<p><span class="font_large"><strong>Milo, I Want To Do This Right</strong></span></p>
<p>Good, so do I. You can buy absorption from companies like <a contents="RealTraps" data-link-label="" data-link-type="url" href="http://realtraps.com/products.htm" target="_blank">RealTraps</a>, <a contents="GIK Acoustics" data-link-label="" data-link-type="url" href="http://www.gikacoustics.com/" target="_blank">GIK Acoustics</a>, <a contents="Ready Acoustics" data-link-label="" data-link-type="url" href="https://www.readyacoustics.com/" target="_blank">Ready Acoustics</a>, and more. Or you can build your own, like I did, to save money. If you're building your own, use rigid fiberglass (I used Owens Corning 703) backed by a wooden frame and wrapped in breathable fabric. Cheaper alternatives like mineral wool are also an option, though they may not maintain their shape as neatly. My favorite construction video is <a contents="by Nenne Effe" data-link-label="" data-link-type="url" href="https://www.youtube.com/watch?v=_6g13taN9Lc" target="_blank">by Nenne Effe</a> on YouTube: she makes it look easy, but her clever design avoids many of the pitfalls of other absorption recipes, and she does it all on as tight a budget as possible.</p>
<p>A good starting point is to place panels 4" thick at the first reflection points: the side walls, behind the listening position, and ideally in front of the listening position and on the ceiling as well. Add thicker 6" traps, or even better, 'super chunk bass traps' in the corners of the room. And add miscellaneous 2" traps in any areas left over to soak up as much ugly room reverb and flutter echo as you can.</p>
<p> </p>
<p><span class="font_large"><strong>Milo, I Can't Afford To Do This Right</strong></span></p>
<p>No worries, neither could I for many years. You can check your mixes often on headphones, since acoustics have no bearing on the sound you hear with headphones. And you can sit very near to your speakers or monitors: the closer you are to your monitors, the quieter the room's acoustics will sound to your ears by comparison.</p>
<p>Egg cartons are a virtually useless acoustic solution and a fire hazard, so don't start lining your walls with them just because the shape looks similar to pictures you've seen of "studio foam". And those pyramid-shaped studio foam products are a bad choice anyway: the material is too porous to reflect or diffuse sound, so the pyramid shapes don't scatter anything. All they do is dig into the thickness the product would otherwise have, which just reduces its low- and even mid-frequency absorption.</p>
<p>You can hang blankets on your walls. The thicker the better. Even the thickest blankets won't do much for your bass problems, but they can help tighten up the highs and high-mids, giving you a bit more instrument clarity and a bit more precision with panning and location.</p>
<p>Things get trickier if you're recording instruments or voice with microphones, because acoustics are paramount for recording too, not just the mixing environment. If you have a large, irregularly shaped room to record in, do your recording there. If you can afford the <a contents="Portable Vocal Booth" data-link-label="" data-link-type="url" href="http://realtraps.com/p_pvb.htm" target="_blank">Portable Vocal Booth</a> made by RealTraps, that can go a long way toward helping you achieve a clean and dry vocal sound. If you can't afford one, see if you can assemble a temporary version out of couch cushions or anything else that is plush. And if you're recording something other than vocals, bring as many cushions and pillows and comforters as you can into the room you're recording in, to help soak up some of the room's ugly reverb.</p>
<p> </p>
<p><span class="font_large"><strong>Closing</strong></span></p>
<p>That's about it for the basics. Write in the comments below if you'd like me to explain an area more clearly or go further in depth. If you want to dig deeper, I shared about <a contents="my own journey with acoustic treatment" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/my-journey-with-acoustics-part-1" target="_blank">my own journey with acoustic treatment</a> in another post. Also, if you haven't put thought into speaker placement, that can have a huge impact on what you hear as well. <a contents="I wrote more about that here" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/why-isn-t-my-mix-translating" target="_blank">I wrote more about that here</a>.</p>
<p>Also, feel free to let me know if there are any aspects you feel I missed, or if you'd like to share your experiences with acoustics or room treatment with me: I'd love to hear them.</p>Milo Burketag:miloburke.com,2005:Post/47094322017-05-16T11:47:30-06:002018-08-19T14:00:54-06:00How Reference Checks Will Save Your Music<p>You know the drill. The song you're working on sounds amazing while you're working on it. But when you hear it the next day, or on a different stereo, it just sucks. What happened??</p>
<p>A number of things, to be honest.<br> </p>
<p><span class="font_large"><strong>Sources of the Damage</strong></span><br><br>First, our hearing adapts very quickly. Within seconds. And if you mixed your track yesterday to sound very fatiguing, but acclimated to it before you fixed it, you likely became used to the problem. This happens to me all the time.<br><br>Second, you're likely not as skilled and talented as a commercial mix engineer with a pattern of chart-topping hits. I know I'm not. You and I do our best, but sometimes we need a reality check to compare our music against what we know sounds good.<br><br>Third, we become so accustomed to hearing the song we're working on from only one set of speakers or headphones that we forget to listen outside those speakers. Maybe the speakers add a little too much boom at 110 Hz and a little too little at 50 Hz, or maybe there's too much 8 kHz or not enough. Despite what the marketing material on speakers or headphones would indicate, there's no such thing as a perfect speaker or headphone. And many that you and I own probably aren't even in the ballpark. How can we make great sounding music on flawed equipment?<br><br>Let me introduce you to my two secret weapons, two different types of reference checks that save my bacon every time.<br><br><br><strong><span class="font_large">Start Comparing Against Commercial Mixes</span></strong><br><br>The first kind of reference check can be done on the same speakers/monitors/headphones you're engineering on. Find two or three songs in the same genre as the song that you're making that you think sound superb. These songs just sing from the speakers! Right? What makes them do that?<br><br>Well, a good sounding song doesn't sound good because of one or two elements: it's the sum of many, many good decisions that leads to a great sounding song. You can't break down all of these elements in a reference check. But here's what you can do:<br><br>Listen to the professional song's chorus for a few seconds, then listen to yours. 
Listen to the reference song again, then listen to yours. What's different?</p>
<ul> <li>Is their kick drum louder or softer than yours? Does it sound more clicky or more tubby or more plush than yours? To the extent that making changes doesn't detract from your song, mirror that song's engineering choices in your track to get your kick sounding more like their kick.<br> </li> <li>Same for the snare: does their snare sit louder or quieter in the track than yours? Is their snare more snappy or dull or spacious than yours? Try making changes to get your snare to sit in the mix more like theirs.<br> </li> <li>Same for the vocals: are the professionally mixed vocals brighter or darker than yours? More dynamic or more contained? Cleaner or dirtier vocal effects? How is the vocal level relative to the vocal level in your track?<br> </li> <li>What other instruments or elements sound different than in your song? What can you do to minimize the differences in your mix?</li>
</ul>
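<p>One technical note on the comparisons above: they only mean anything at matched loudness, because the louder of two otherwise-similar tracks almost always sounds "better". Here's a minimal Python sketch of the idea, using plain RMS as a crude stand-in for perceived loudness (real metering tools measure LUFS instead); the function names and toy sample values are mine, purely for illustration:</p>

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples (floats in -1.0..1.0)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_match_db(track, target):
    """Gain in dB to apply to `track` so its RMS level matches `target`.
    Negative means turn it down; for reference checks, `track` is usually
    the louder commercial song and `target` is your mix in progress."""
    return 20 * math.log10(rms(target) / rms(track))

# Toy signals: the 'commercial' reference is twice the amplitude of our mix.
reference = [0.5, -0.5, 0.5, -0.5]
my_mix = [0.25, -0.25, 0.25, -0.25]
print(round(gain_to_match_db(reference, my_mix), 2))
```

<p>This prints -6.02: the louder "commercial" track gets pulled down by about 6 dB before the A/B comparison is fair.</p>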
<p><br>You can take notes, digitally or on paper, on what needs to be done, or just try fixing things in your song in real time. And while you still retain creative control over what the elements of your song sound like, it can be a huge boost to hear what commercial tracks are doing to get the mix to blend together, particularly relating to the volume of each element compared to the others, and the frequency balance of the entire mix.<br><br>I know this can feel like busywork at times. It can be a bit tedious when you just want to share your music with the world. But if you want to shock and awe the world with how professional and polished and simply good-sounding your music is, learning to hear what makes music sound professional and polished and good-sounding is an important step.<br><br>A word of caution: it's not a fair fight if you're hearing one song louder than the other. Likely, it's the commercial track you need to play at a lower volume in order to match the volume of your mix in progress. Be sure to match the volume levels to each other before taking notes.<br><br><br>Now that our mix is sounding closer to a commercial mix on our primary speakers, what can we do to ensure our music sounds good everywhere?<br><br><br><span class="font_large"><strong>Start Comparing On Several Stereos</strong></span><br><br>The second kind of reference check takes you outside of your music space. You already know how your mix sounds on your favorite speakers/monitors/headphones. That's where you made the song, and that's what has influenced all the decisions you made. But since the entire world doesn't listen on the same speakers, you need to make sure the great-sounding mix you achieved translates to other speakers too. Make sure it sounds great on everything, right?<br><br>This is when you take your prized mix and play it back on your Bluetooth speaker. Play it back from your computer. Your home theater setup. Your crappy earbuds. Your best sounding headphones. 
Your buddy's hi-fi system. The classic "car stereo test". You need to know what your mix sounds like on as many stereos as you have access to.<br><br>Whether physically or digitally, take notes on what sounds different to you; what sounds wrong to you. These are the elements of your mix that you need to spend a little more time on.<br><br>Now this is tricky. No matter how much love you give your mix, it's never going to sound like a million bucks from a pair of crappy $12 earbud headphones. You can't make up for them being crappy $12 earbuds. But what you can do is make sure that your mix sounds as good as can be expected from $12 earbuds, particularly since that is how a lot of your listeners will be hearing your music.<br><br>Acknowledging that no stereo can make your mix sound like a sparkling gem, what does each stereo tell you about your song? Is the kick drum louder or softer than you realized? Does the bass of the entire song need to be ratcheted up or tamed? How are the vocals sitting in the mix? Is the percussion too loud or not cutting through enough? What aspects of your mix sound squirrely and need to be brought into balance?<br><br>Once you have your notes, go back to your mix and start making changes towards eliminating the issues you discovered. Hopefully the changes also sound good on your primary speakers, but you just needed perspective to realize what needed to change. It's also possible that the changes don't sound as ideal on your primary speakers. But if four other stereos told you the same thing, that your kick drum is too loud in the mix and needs to be lowered, you need to trust those four stereos, not your primary speakers. 
It's more likely that those other speakers are telling you the truth.<br><br>Also, it may be worth it to listen to some of your reference tracks on the same headphones and stereos, so you get a feel for how good each stereo can sound, and what each sounds like when presented with well-engineered, great sounding songs in your genre. <br><br><br><strong><span class="font_large">The Result</span></strong><br><br>When your mixes more closely match the engineering decisions of a fantastic-sounding commercially-produced song, you'll find your song just sounds much more polished. Particularly over time, as you go without hearing your track for a while and then hear it again later. This is a great thing. It means you engineered a better song. And it's totally acceptable to use a couple of pro songs as a guide on how to get there.<br><br>And when your mixes sound better on a wide variety of stereos, that is to say your mixes "translate better", you'll be much happier hearing the songs wherever you hear them, and so will your fans.<br><br>It's an awkward step in the process, but it's a big boost to your music's quality, and one I make sure never to skip when I'm finalizing a track.<br><br><br><span class="font_large"><strong>Refining the Process</strong></span><br><br>But wait, Milo - you're telling me that I have to listen to all these songs, take all these notes, and make all these mix revisions? And then listen on all these stereos and make all these other mix revisions? That will take forever! I don't have time to listen to my song on thirty-eight stereos!<br><br>No question, this is a time-consuming process.<br><br>But it's immeasurably valuable because it directly teaches you how to make your music sound more consistent and more professional. That is the dream, right?<br><br>And ... it gets easier and faster over time. 
The more you make comparisons to great songs in your genre, the quicker you'll get at identifying what needs changing, and the quicker you'll become at making the changes. Not to mention, you'll probably identify some themes. One of my themes is that I always create songs darker and duller than commercially released tracks. And brightening song after song following my reference checks is teaching me to create brighter mixes to begin with, which is more suitable for my genre. You as an engineer will start to realize your biases and compensate for them as you create. And that's a wonderful thing.<br><br>And ... reference checks on other stereos get easier and faster too. The more you do it, the better attuned your ear will become for spotting key differences. And the more accustomed you'll become to fixing the same trouble areas. For example, when I mix on my beloved AKG K702 headphones, I find I virtually always add in too much low bass, in the 40-60 Hz range. But reference checks on other stereos tell me that these mixing decisions do not translate! When I mix on them now, I know to aim for less energy from 40-60 Hz than I enjoy in the kick drum and bass synth. And I create a better song for it.<br><br>As you do more reference checks, you'll need to compare against fewer songs, and you'll need to check your mix against fewer systems. Because you're becoming a better, more polished engineer who's accustomed to delivering more professional sounding mixes.<br><br>And that's something to celebrate.</p>Milo Burketag:miloburke.com,2005:Post/47007302017-05-09T11:35:51-06:002019-10-12T21:09:00-06:00The Truth About Mastering<p><span class="font_large"><strong>The Big Misunderstanding</strong></span></p>
<p>I wanted to share this because there is so much confusion around the internet on what mastering really is. "Mastering is making your track loud." "Mastering is part of mixing; the engineer always mixes and masters." "Mastering is when you put these six plugins on your master channel."<br><br>I can see where the confusion stems from. It's really hard to describe a process that may be different every time, or may occasionally be doing nothing at all. And even the masters of mastering, while giving useful tips on what they do, rarely explain in simple terms what mastering actually is.<br><br>But before we start:<br><br><br><strong><span class="font_large">What<em> Isn't</em> Mastering?</span></strong></p>
<ul> <li>
<em>Mastering isn't mixing.</em> Mixing is combining all the tracks in a multi-track session to sound appropriate and interesting with each other and to tell a story. <a contents="Read here if you want to learn more" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">Read here if you want to learn more</a>. Mastering is about working with the 2-track export of that mix to polish and finalize it, presenting the mix in its best light.<br> </li> <li>
<em>Mastering isn't just making your song louder.</em> Mastering often includes increasing the final volume, but there is far more to mastering than just turning the volume up.<br> </li> <li>
<em>Mastering isn't a repeatable plugin-chain.</em> Not every song needs the same things to sound its best. Some songs need to be brighter, others darker. Some need to be more compressed, others more dynamic.</li>
</ul>
<p style="text-align: center;"><br>Now that we have a couple of the biggest myths out of the way,<br><br><br><strong><span class="font_large">What <em>Is</em> Mastering?</span></strong><br><br>In the simplest way I can describe it, mastering is the quality control check for your finished mix. Mastering refines the mix in the ways it still needs refinement. Mastering compensates for mistakes the mixing engineer made or inaccuracies in the mixing engineer's playback system. Mastering smooths and sands down songs to sound similar to each other and similar to what people expect commercially made music to sound like.<br><br>Even to me, that doesn't sound very helpful. Why can't we just know exactly what a mastering engineer does?<br><br><span class="font_large"><em>Because a mastering engineer adapts.</em> </span></p>
<p>A mastering engineer does something different every time. And he doesn't know what needs to be done until he hears the song.<br><br>When you tune a guitar, not every string needs to be tightened: some may need to be loosened, and some may need to be left as is. And when a string needs to be tightened, more tightening isn't always better than less. After all, the goal of tuning isn't to tighten all strings by arbitrary amounts. Of course, the goal is to have all strings in tune, tightened just enough to ring true on specific notes.<br><br>Likewise, mastering a song doesn't require the same action for all songs. But of course, mastering is more complex than tuning a guitar, because a machine can't determine the subjective qualities of what makes a good song in the same way that a machine can determine good tuning, and because mastering covers many variables instead of the one variable per string of tuning a guitar.<br><br>Let's explore a few of these variables:<br><br><br><span class="font_large"><strong>Examples of What a Mastering Engineer Might Do</strong></span></p>
<ul> <li>A mix may arrive that the mastering engineer finds too dynamic. It's the mastering engineer's job to compress the peaks of the mix to smooth out the overall volume over time, so the song feels more contained. In this case, the mastering engineer will use a compressor, or a series of compressors.<br> </li> <li>A mix may arrive that's too compressed. In this opposite scenario, the mastering engineer needs to provide more dynamic range to give the song life and energy. Assuming the compression wasn't too extreme, the mastering engineer can use expansion (the reverse of compression, using a compressor with a ratio of less than 1:1) to bring out peaks and provide punchiness and motion to the track.<br> </li> <li>However, if the mix arrived with far too much compression, expansion won't be a suitable band-aid. The mastering engineer will need to request that the mixing engineer back off on compression, particularly on the mix bus, and then send a new version of the song for mastering with this revision. The mastering engineer may not actually be doing anything to the audio in this case, but he's still doing his job by acting as the quality control check for the music, making sure it doesn't get published with that major flaw.<br> </li> <li>A song may have been mixed by a poor engineer that isn't skilled enough to realize he left a lot of 200 Hz mud in the mix, or a lot of harshness at 6 kHz. That's okay: we're all at different points in our learning curve and are learning things in a different order. The mastering engineer would EQ out the peak at 200 Hz or 6 kHz, according to what needs cutting. And boost with EQ if the song needs boosting.<br> </li> <li>A song may have been mixed by an engineer that loves the punch and power of drums, and he mixed the drums 6 dB too high relative to the rest of the song. 
In this case, a mastering engineer may send the song back to the mix engineer instructing him to lower the drums by 6 dB before it can be ready for mastering.<br> </li>
</ul>
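<p>The compression and expansion examples above can be pictured as one static gain curve. Here's a simplified Python sketch (attack, release, and knee smoothing are all ignored, and the function is illustrative rather than any real plugin's math) showing why a ratio below 1:1 flips a compressor into an expander:</p>

```python
def static_gain(in_db, threshold_db, ratio):
    """Static compressor/expander curve, attack and release ignored:
    below the threshold the signal passes unchanged; above it, the
    overshoot is scaled by 1/ratio."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# Ratio 4:1 -- compression: a peak 8 dB over the -10 dB threshold
# comes out only 2 dB over.
print(static_gain(-2.0, -10.0, 4.0))
# Ratio 0.5:1 -- expansion: the same peak comes out 16 dB over,
# adding back punch that heavy compression took away.
print(static_gain(-2.0, -10.0, 0.5))
```

<p>With a 4:1 ratio, a peak 8 dB over the threshold comes out only 2 dB over; with a 0.5:1 ratio, the same peak comes out 16 dB over, restoring dynamics instead of removing them.</p>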
<p>We are beginning to see that the mastering engineer compensates for having a less-than-stellar mixing engineer. This can be a big part of a mastering engineer's job, but what if the mix engineer is talented?<br> </p>
<ul> <li>A song may have been mixed by a great engineer on a poor playback system. Say the mixing engineer's monitors are very sizzly and bright at 12 kHz, and he has his subwoofer too quiet by 3 dB, and his room has a null at 50 Hz. Even though the mixing engineer is good, his monitors and his room and his subwoofer settings aren't, and he likely compensated by EQing out too much 12 kHz, mixing all bass too loud by 3 dB, and especially boosting 50 Hz until it sounds right to his ears. The more experienced mastering engineer in a more carefully treated room with higher caliber speakers can compensate for these shortcomings by adding frequencies missing at 12 kHz, cutting all bass by 3 dB, and especially cutting at 50 Hz, bringing the song to a closer degree of perfection and balance.<br> </li> <li>An album of songs may have been mixed by a good engineer, but the songs likely sound different from each other since they were mixed on different days. Some are brighter, others are darker. Some are louder and others are quieter. A mastering engineer would brighten the dark songs, darken the bright songs, and generally do whatever needs to be done so all songs sound similar dynamically, spectrally, and in volume.<br> </li> <li>An album of songs may have been mixed by a great engineer and all sound consistent with each other, but they may not spectrally fit with what is appropriate for commercial music. For example, most modern pop music has abundant high frequencies on the drums, synths, and particularly on the vocals. A mastering engineer may spectrally shape each song with EQ to achieve the balance appropriate for the genre of the album. This is especially important when the mixing engineer is unaccustomed to mixing in that specific genre.<br> </li> <li>A song may not need EQ or compression, but it might sound a little flat or sterile. 
A mastering engineer may reach for saturation or an exciter or tape emulation or stereo enhancement to subtly enhance the texture and character of a song.<br> </li> <li>A superb engineer mixing on a great playback system may have introduced all kinds of inaccuracies because he's simply heard the song for too many hours. The current mix is emotional to him. The song is connected to him. He no longer has fresh ears for this music, and he may not have fresh ears for it for months. A mastering engineer may have all kinds of problems to fix not because the mix engineer is bad, but because the mix engineer is too close to the song. A mastering engineer will begin working on the song with fresh ears and quickly assess what it needs, then do it.<br> </li> <li>A song may have been mixed by a superb engineer on a great playback system. It may not need more or less dynamics, or boosts or cuts with EQ. It may have life and character and texture already that really make it sing. If it's a quiet mix, the mastering engineer may merely raise the volume after careful compression and limiting. Or if the mix is already loud and has already been carefully compressed and limited, the mastering engineer may merely say, "This song doesn't need anything." And that is still worth his services: getting the stamp of approval from an expert with fresh ears listening on high-caliber speakers in a carefully treated room.<br> </li>
</ul>
<p><br>But all mastered songs are loud, right? Isn't loudness a big component of mastering? Let's explore that:<br> </p>
<ul> <li>A louder song may catch your attention more than a quiet song if you leave your volume control in the same place. That is, assuming you're not listening on the radio, YouTube, Spotify, Apple Music, or any other service or layer of software that manages the final volume level for you. It can be tempting to make music as loud as possible for listeners not using those services. But making a song loud past a certain point introduces distortion and audible unpleasantries, <a contents="making it uncomfortable to listen to and stripping the dynamics and fullness out of the music" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/dynamic-range-and-the-loudness-war" target="_blank">making it uncomfortable to listen to and stripping the dynamics and fullness out of the music</a>. You can achieve volume only by trading the integrity of the music: more volume comes with more distortion, and at an exponential rate. A mixing engineer may not be the best person to establish the intended final volume of a song, and likely isn't familiar with the methods to gain the most volume for the least amount of audible distortion. The mastering engineer is accustomed to how loud music is expected to be for each genre to be considered commercially viable, and how to achieve the desired loudness by adding the least amount of distortion possible. Also, the mastering engineer has likely had a discussion with the artist and producer on how loud a given song or album should be pushed.<br> </li> <li>A song may have been mixed too loud, and has already been heavily limited or possibly even clipped. In this case, the mastering engineer would ask the mix engineer to export a new mix without limiting, potentially with all the tracks turned down. After all, a loud mix is not a necessary ingredient in creating a competitively loud master. 
And as we covered, the mastering engineer is likely more familiar with the genre's expectation for loudness, the artist's expectations for loudness, and which tools are best and how to use them to achieve this loudness. Understanding this, it makes perfect sense for the mixing engineer to deliver a quiet, dynamic, full-sounding mix for the mastering engineer to master and worry about final loudness.<br> </li> <li>When making music loud, a mastering engineer will almost certainly employ a limiter. He may also use several layers of broadband compression to decrease the crest-factor of the mix, likely with very different settings than he'd use when simply aiming to make a mix sound less dynamic. Multi-band compressors are powerful and dangerous tools used to increase volume while maintaining balance across the frequency spectrum. Also, a mastering engineer could alter the frequency balance of a song to bring up the perceived volume, often by reducing fullness by lowering the bass relative to the high-mid frequencies. Other tools used to increase perceived volume are saturation, deliberate subtle distortion, exciters, and sometimes even intentionally clipping audio in the analog domain before converting back to digital. A mastering engineer will find the best mix of these many tools and how they are used to deliver volume appropriate to the song and album according to the genre and the wishes of the artist and producer. But as a lover of music, I sincerely hope mastering engineers will stop pushing loudness past <a contents="this point" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-give-your-song-the-perfect-loudness" target="_blank">this point</a>.</li>
</ul>
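<p>Since "crest factor" came up above: it's simply the gap between a signal's peak level and its average (RMS) level, in dB. This toy Python sketch uses hard clipping as a crude stand-in for limiting (a real limiter is far gentler), with made-up sample values, just to show how shaving the peaks shrinks the crest factor and makes a signal denser and louder-sounding:</p>

```python
import math

def crest_factor_db(samples):
    """Peak level over RMS level, in dB. Lower = denser, louder-sounding."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

def hard_clip(samples, ceiling):
    """Crude stand-in for a limiter: clamp every sample to +/- ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

# Made-up signal with tall, sparse peaks.
signal = [0.1, 0.9, -0.1, -0.9, 0.1, 0.2, -0.1, -0.2]
print(round(crest_factor_db(signal), 1))                   # original
print(round(crest_factor_db(hard_clip(signal, 0.3)), 1))   # peaks shaved
```

<p>The clipped version measures a lower crest factor than the original: less dynamic range, more density, and, past a point, more audible distortion.</p>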
<p><br><span class="font_large"><strong>Making Sense of This</strong></span><br><br>Can you master your own music? Well, yes and no.<br><br>"No" in that it's really hard for the person who mixed the song to know how to overcome his own biased preferences on frequency balance and mix balance. Also, it's really hard for the person who mixed the song to give the song what it needs to sound good on every stereo and set of headphones, not just his own playback system in his own room.<br><br>And "yes" in that an engineer can do what needs to be done to a song to make it loud and clean and problem free on his own. But he may need to compensate by abandoning the song for a while to gain fresh ears for it. Or to <a contents="use many reference tracks" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-reference-checks-can-save-your-song" target="_blank">use many reference tracks</a> to keep the balance and dynamics and mix of the song in perspective. Or to listen to the song on many different playback systems in different spaces to be sure the song effectively translates to other systems.<br><br>None of these workarounds are as ideal as sending your song to a trusted master of mastering with a superb system in a superb room, but it can get you a lot closer. And this is important for people who can't afford to hire a mastering engineer for their music.<br><br><br><span class="font_large"><strong>Clear As Mud?</strong></span><br><br>I wish I could provide a clear, step-by-step outline of what a mastering engineer does. Instead, all I can do is make clear why that isn't possible.<br><br>My hope is that my examples offered some perspective on why mastering is so important despite it being something that one can't specifically explain. And it goes without saying that I hope the examples above can give you, my reader, some perspective on what kinds of problems a mastering engineer may fix. 
So that you can learn to avoid these problems before sending your music off to mastering or deciding to master your music yourself.</p>
<p>If you're interested in learning more about mastering your audio yourself, be sure to check out my guide on <a contents="how to&nbsp;effectively master your music&nbsp;at home" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-master-music-at-home" target="_blank">how to effectively master your music at home</a>, for when hiring a mastering engineer isn't an option.<br><br>Now go make some music!</p>Milo Burketag:miloburke.com,2005:Post/46936692017-05-02T12:03:16-06:002018-08-05T16:21:15-06:00The Trick to Perfect Reverb<p>To some, reverb doesn't seem like that exciting of an effect. It's a very old effect. But to be honest: all effects are old. We really don't have that many unique processes to manipulate audio. With rare exceptions, ingenuity surfaces as plugin developers combine old effects in ways we haven't combined them before. But we're still left with the same old tools, just with new controls and new use-case scenarios.<br><br>So yes, reverb is an old tool. But if you're not familiar with it or don't turn to it often, it may be worth giving it another look. There are far more interesting presets than "Cathedral". And I've had friends tell me, "Yeah, the song's okay, but your vocals sound so raw..." after showing them a draft I'm working on, just because I hadn't added reverb yet at that stage of the song. Reverb can be a core component in a parallel drum mix, and is often a key ingredient in powerful synth leads sounding big. And though my mixes often are on the drier side compared to many, all of them use reverb in at least several of the ways below.<br><br><br><strong><span class="font_large">Where to Start</span></strong><br><br>I don't intend to make my readers feel stuck if they don't have many reverbs to choose from. A bad sounding reverb plugin can be made to sound much better through careful mixing and integration, which we'll cover below. But it makes sense to start with the best reverb you have. 
Personally, I'm a fan of "<a contents="Room" data-link-label="" data-link-type="url" href="https://valhalladsp.com/shop/reverb/valhalla-room/" target="_blank">Room</a>" and "<a contents="VintageVerb" data-link-label="" data-link-type="url" href="https://valhalladsp.com/shop/reverb/valhalla-vintage-verb/" target="_blank">VintageVerb</a>" by <a contents="Valhalla DSP" data-link-label="" data-link-type="url" href="https://valhalladsp.com/" target="_blank">Valhalla DSP</a>. They sound very clean and natural and open to me. I've also heard good things about impulse-response style reverbs. And plates can sound quite pleasing too, although that often is a preset style more than a specific plugin.<br><br>The only reverb that's on my shopping list currently is "<a contents="Adaptiverb" data-link-label="" data-link-type="url" href="http://www.zynaptiq.com/adaptiverb/" target="_blank">Adaptiverb</a>" by <a contents="Zynaptiq" data-link-label="" data-link-type="url" href="https://www.zynaptiq.com/" target="_blank">Zynaptiq</a>. I haven't used it first hand. But I'm really intrigued by the approach of modeling the frequency spectrum around the spectrum of the sound passing through it. And its ability to drop out notes from the verb as the notes or chords of the signal change seems super usable. But it's expensive at $250, and as I said, I haven't personally used it yet. I'm vigilantly waiting for it to go on sale.<br><br>If you're not in the market for a new reverb, assess what you have. Play a simple sound through a reverb and listen to how real it sounds, how spacious it sounds. You kind of want it to have that dark, smooth cathedral-like sound; not that zingy, applause-like slapping of sounds. Listen for creaminess and convincingly delivering the sound of a larger room. To my ears, this makes a good reverb. 
And if you have two reverb plugins or twelve to choose from, I recommend testing all to identify your best (or best 20%, if you have many) and sticking with what you've found.<br><br><br><strong><span class="font_large">Proper Signal Routing</span></strong><br><br>I know, it's so easy to put a reverb as an insert directly on the audio or instrument channel you want to affect. Particularly if you only want it to affect one channel. But it's critical to put the reverb on a send or a bus if you want to maintain the clarity of the sound through the reverb. When you slap the reverb directly on the track, the initial instrument and all its transients get buried and lost in the mix as you turn the wet/dry up enough to actually hear the reverb.<br><br>Let me use a real-world example to explain. What happens when a singer without amplification steps back in a large room while singing? The room gain of her voice increases relative to the "dry signal" of being close to her, but you still hear all the clarity of her voice if she's in direct line-of-sight. This is what you want to achieve when you set up your reverb.<br><br>How do you do this? Put the reverb plugin on a bus, not directly on an audio or instrument channel. And then create a send for the track or tracks you want to feed into the reverb. Set your reverb plugin to 100% wet. Control the overall volume of the reverb by moving the fader for your new reverb bus. And if you have multiple sounds feeding to it, you can adjust their balance in the reverb by adjusting the send level from each track. It doesn't just sound better, it's more CPU efficient to feed multiple tracks to the same reverb bus. And it also allows you to adjust how much reverb you want in a mix without altering the perceived volume of the dry signal.<br><br>Like everything that's an art, there are exceptions. 
A few that come to mind for me: I might want reverb to be integrated into a guitar sound before further processing, or I might want to chop up the reverb on a synth with pumping or volume shaping down the line, or I might want to process the instrument and reverb of a particular sound together. In these cases, it makes sense to put the reverb directly on the track being affected, or perhaps earlier in the chain, inside of a guitar multi-effects plugin or within the synth itself. I do this maybe one time in ten; the other nine times, I send audio from the track to a dedicated reverb bus.<br><br><br><strong><span class="font_large">Using Reverb for Glue</span></strong><br><br>Reverb can be a powerful tool to get instruments to sound more cohesive. There are two ways to approach this:<br><br>The first is to send a group of instruments, perhaps all drums or all guitars, through reverb together. This is especially useful for blending with parallel processing, when you may want to create a separate bus for more extreme processing that's kept lower in the mix alongside the clean bus. For example, I may send all drums to the drum bus, which outputs to the master fader. The drum bus might have EQ and compression and other subtle processing on it. But that drum bus could also send to a drum vibe bus which outputs to the master fader. And that drum vibe bus might start with reverb, then get slammed by aggressive compression, then have distortion thrown on top, then heavy saturation, then lo-fi processing. The drum vibe bus likely belongs much lower in the mix, but it can provide character and interest alongside the clean drum bus.<br><br>The second way to use reverb for glue is when you want the entire song to feel like the instruments were performed together, all in the same space. It works really well to create a reverb bus or two for all instruments to share. 
All instruments feed to it, though some more than others depending on the sound you're looking for and how you want to achieve depth in the mix. Another way to do this is to set up two reverb buses with identical settings for the room sound you want to achieve, but pan one reverb bus left and the other right. Instruments panned to the left get sent to the opposite, right-panned reverb. And instruments panned to the right get sent to the opposite, left-panned reverb. This is a useful trick to keep reverb volume lower in the mix while still having it be audible, and it's super useful for giving instruments a sense of space and belonging, as if they really were recorded in a room together.<br><br><br><strong><span class="font_large">Different Styles of Reverb</span></strong><br><br>It's definitely okay to blend different reverb sounds together. For example, I very often create two reverb buses right away for my vocals, though I may send other instruments to them later. One is a bit shorter, perhaps 1.5-2 seconds; adjust length to taste. And this shorter reverb receives a higher send volume to sound louder. And the second reverb is longer, perhaps 5-6 seconds; adjust length to taste. This longer reverb receives a lower send volume to sit quieter in the mix. The result is a nice blend of long and short reverb that tends to sound very natural and real to my ears. I'm not sure why I like it so much. Maybe because it sounds like you're just in front of a stage, and you hear the shorter, louder reverb bouncing off the back of the stage; and the longer, quieter reverb bouncing off the rest of the concert hall. Try mixing your own reverbs and see what works for you.<br><br>Reverb can also change based on the song section. It sounds good for lower, shorter syllables in verses to be sent to a shorter reverb. And the longer, higher, more legato and anthemic lines of a chorus can ring out with longer, louder reverb. 
You can do this with automation, but it may be easier to just put the lead vocals for the verses on one track routed to one reverb, and the lead vocals for the choruses on another track routed to a different reverb.<br><br>Reverb can also add a certain presence when it's super short. It's been a long time since I mixed rap, and even longer since I worked with radio voice, but if you're aiming for something spoken that needs a little presence and power, try adding a super short reverb, perhaps with the decay time around 0.2 seconds. Adjust to taste. It can add a certain command and sparkle that's perfect for some situations.<br><br><br><strong><span class="font_large">Dirty Reverb</span></strong><br><br>I've discovered that I really like dark-sounding reverbs. I very commonly put an EQ on my reverb bus and roll off both the lows and the highs. Too much high end in the reverb and it sounds cheesy and fake. Too much low end and it gets muddy, masking the instruments you want to be audible. To be honest, I felt a little guilty shaping the reverb like this, figuring that purity was the ultimate goal and I was abandoning it. But purity is a funny concept when mixing audio. Maybe a guitar layer needs to sound tinny and lean instead of rich and full, because the voice will add the richness and the bass will add the fullness; and there just isn't room for a super full-sounding guitar in the mix. 
Don't feel you need to honor each track by making it sound its purest and biggest and fullest - instead, honor the entire mix by <a contents="helping each instrument or layer sit where it needs to sit for the benefit of the entire mix" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">helping each instrument or layer sit where it needs to sit for the benefit of the entire mix</a>.<br><br>Regarding filtering reverb, I felt less guilty as I learned that a number of famous producers and engineers also like their reverbs sounding very mid-rangy. See if it's a style you enjoy too.<br><br>But reverb can be made significantly dirtier still. For example, a few famous producers like to detune vocal reverbs just a little, to set off the original from the reverb and to make it sound darker and cloudier. I've begun using this on my mixes, and I have to admit, I like it.</p>
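<p>If you like to prototype ideas outside the DAW, here's a rough sketch in Python (with numpy and scipy, which is purely my own illustration, not anything from my sessions) of the dark-reverb EQ move above: roll the lows and highs off a reverb return so only the mids survive. The 250 Hz and 4 kHz corner frequencies are illustrative starting points, not recommendations.</p>

```python
# Sketch of "dark" reverb-return filtering: keep the mids, lose the
# rumble and the sizzle. Corner frequencies are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt

def band_limit(signal, sr, low_hz=250.0, high_hz=4000.0):
    """High-pass below low_hz and low-pass above high_hz (2nd-order bandpass)."""
    sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

sr = 44100
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 60 * t)       # low-end mud
mids   = np.sin(2 * np.pi * 1000 * t)     # the body of the reverb
sizzle = np.sin(2 * np.pi * 12000 * t)    # zingy top end

# After filtering, the mids pass through while rumble and sizzle drop away
dark_rumble = band_limit(rumble, sr)
dark_mids   = band_limit(mids, sr)
dark_sizzle = band_limit(sizzle, sr)
```

<p>In a real session you'd do this with an EQ plugin on the reverb bus; the sketch just shows why the move works: the mids come through nearly untouched while the extremes are heavily attenuated.</p>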
<p>Another way to make reverb dirty is to side-chain it to the metronome, or the kick drum, or just about anything else. Get the reverb to move and dance with the song.</p>
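<p>If you're curious what that side-chaining actually does under the hood, here's a toy sketch (Python/numpy, my own illustration rather than anything from a plugin): an envelope follower on a kick pattern ducks the gain of a reverb return, so the verb dips on each hit and swells back in between. Every number here is arbitrary.</p>

```python
# Toy side-chain ducking: a kick-driven envelope pulls the reverb bus down.
import numpy as np

def envelope(x, sr, release_ms=120.0):
    """One-pole peak follower: instant attack, exponential release."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        level = s if s > level else level * coeff
        env[i] = level
    return env

sr = 8000                          # low rate keeps the Python loop cheap
n = sr                             # one second of audio
kick = np.zeros(n)
kick[::sr // 4] = 1.0              # four kick "hits" per second
reverb_tail = np.ones(n) * 0.5     # stand-in for a steady reverb return

depth = 0.9                        # how hard the reverb ducks (0..1)
gain = 1.0 - depth * envelope(kick, sr)
ducked = reverb_tail * gain        # dips on every kick, recovers after
```

<p>The same shape is what a ducking/side-chain compressor on the reverb bus gives you: the verb gets out of the way when the kick speaks, then blooms back to fill the space.</p>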
<p>Yet another way to make reverb dirty is to throw a distortion plugin on it, or add heavy saturation, or vintage simulation, bit-crushers, vinyl crackle, tape hiss, etc. Noise can add so much character, though I find I like adding noise that follows the original signal instead of noise that's purely random or purely constant. Blending is important. But <a contents="adding dirt just wins" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/creative-processing-with-effects" target="_blank">adding dirt just wins</a> in my opinion, when you get the balance right.<br><br><br><strong><span class="font_large">Wrapping Up</span></strong><br><br>I don't expect that all of these tricks and techniques are new to you. But, hopefully I covered a method of using reverb that inspires you to produce more. And if you have a killer technique I didn't mention, I'd love it if you'd share it below so I can try it too. We're all in this together.</p>Milo Burketag:miloburke.com,2005:Post/46652812017-04-25T11:45:00-06:002018-04-26T00:34:38-06:00Empowering Plugin Organization<p>Hey, guys. Another week in the life of a producer. Another week learning to better do what I do, as I'm sure all of you are doing as well.<br><br>I recently switched DAWs from Pro Tools to Studio One for a handful of reasons. This may seem a bit silly, but one of the features I was most excited about when I started using Studio One is that (like many DAWs, I'm sure) it allows me to sort plugins however I want. This is really freeing, but also really handy for organization.<br><br>Before I begin, let me preface by saying that I think the variety of DAWs out there is a wonderful thing. And I know people love to start discussions and even arguments over which DAW is the best. But I firmly believe that the best DAW is the DAW you're most comfortable working in. Learning to use the tools you have is far more valuable than searching for the world's best tool. 
You could give me a $3,000 guitar, but Eric Clapton could still crush my playing abilities on a $100 guitar from the closest Big Box Mart. So stick with what works well for you, and only switch if it's an upgrade for what you do.</p>
<p><br><span class="font_large"><strong>The Problem</strong></span><br><br>The plugins in my DAW were sorted automatically by whatever tags the developers saw fit to use to describe their plugins. This is handier than viewing one large list of plugins. But it starts to show problems as your plugin library grows. I had plugins showing up in many different categories: for example, a distortion-geared compressor like <a contents="Decapitator" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/decapitator/" target="_blank">Decapitator</a> showed up in the "Dynamics" category and also the "Harmonics" category. And a multi-effect plugin like <a contents="Effect&nbsp;Rack" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/soundtoys-5/" target="_blank">Effect Rack</a> was kind enough to show up in almost every category because it contained plugins that met the description of almost every category. Also, some newer plugins from less-established developers often wouldn't appear in the proper folder. It was bad enough to have them show up in "Other", but worse when a compressor might show up under "Modulation" because it was inappropriately tagged.<br><br>Maybe you split your time between a few DAWs and haven't yet tapped into the features some provide. Maybe you're looking for a new DAW and haven't considered plugin organization to be very important. Maybe it's built into your DAW already and you never noticed. If so, take notice!</p>
<p> </p>
<p><span class="font_large"><strong>The Solution</strong></span><br><br>Let me tell you what I did: I took a few hours to sort all of my plugins into customized folders. First, I established my priorities. I had to make a list of all the things I might be looking for as I'm searching for a plugin. Each of these deserves a separate folder. Second, I had to make it possible to quickly find my most-used plugins.<br><br>To accomplish the first priority, I made plugin folders according to how I would use them, not technically what they are. That one compressor I only use for mastering? By my rules, it belongs in the Mastering folder, not the Compressor folder. Do I use a multi-effect plugin more for sound design than anything else? Make a plugin folder called Sound Design and put it there. These need to be the terms I think of while searching, not the most technically accurate descriptions. Because this is about improving workflow, not building an encyclopedia, right? That's why I filed the above-mentioned Decapitator as "Distortion", not "Compressor" or "Saturation" or "Dynamics" or "Harmonics". Because I use it when I want distortion, not any of those other words.<br><br>And to accomplish the second priority, I had to get a little bit more creative. I sorted my plugins into three tiers: most used, occasionally used, and never used. This isn't necessary for smaller categories. For example, I have maybe only six saturation plugins and don't need further sorting than a "Saturation" folder. But EQ plugins accumulate faster than NYC generates garbage, it seems. I made separate folders: "EQ - Favorite" and "EQ - Other". If I use an EQ in most every project, it goes into the favorite folder. If I don't use it often but I like it for specific uses or I want to better learn how to use it, I put it into the other folder. And for the rest? EQ plugins I'll never use? Badly dated design? Poor sound quality? VST2 version of a plugin I have a VST3 version for? 
Mono version of a plugin I'm likely to only use in stereo? I hide it. I'm not throwing away my license for it and can always unhide it if I decide I need it later. But for now, when the odds are quite low that it's the plugin I'm ever going to look for, it's best to keep it out of sight.</p>
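<p>The scheme above boils down to a simple lookup: plugin name in, workflow folder out, plus a hidden set for the plugins you never reach for. Here's a hypothetical sketch in Python; the mapping entries illustrate the idea (Decapitator filed under Distortion, as described above), and the hidden entries are placeholders, not a real library.</p>

```python
# Hypothetical workflow-first plugin map: file each plugin by how you
# reach for it, not by what it technically is. Entries are examples only.
FOLDERS = {
    "Decapitator": "Distortion - Favorite",  # technically a compressor, used for dirt
    "Effect Rack": "Sound Design",           # multi-effect, filed by how it's used
    "H-Delay":     "Delay - Other",
    "EchoBoy":     "Delay - Favorite",
}

# Placeholder names for plugins worth hiding (duplicates, mono-only versions)
HIDDEN = {"Some EQ (VST2 duplicate)", "Some EQ (mono-only)"}

def folder_for(plugin):
    """Return the folder to look in, or None if the plugin is hidden."""
    if plugin in HIDDEN:
        return None
    return FOLDERS.get(plugin, "Uncategorized")
```

<p>The point isn't the code, it's the rule it encodes: one deliberate decision per plugin, made once, so you never have to think about it mid-session again.</p>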
<p><br><br><strong><span class="font_large">Examples</span></strong><br><br>I've found that having more folders helps me find what I need more quickly than scrolling through long lists of plugins in fewer folders. Here's my complete list of plugin folders:<br><br>Bass Enhancement<br>Compressor - Favorite<br>Compressor - Other<br>Delay - Favorite<br>Delay - Other<br>Distortion - Favorite<br>Distortion - Other<br>Drum Enhancement<br>Dynamics<br>EQ - Favorite<br>EQ - Other<br>Filtering<br>Guitar Processing<br>Mastering - Favorite<br>Mastering - Other<br>Mixing<br>Modulation<br>Pitch<br>Restoration<br>Reverb - Favorite<br>Reverb - Other<br>Saturation<br>Sound Design<br>Stereo Enhancement - Favorite<br>Stereo Enhancement - Other<br>Utility<br>Vintage<br>Vocal Processing<br>Volume Shaping<br><br>As long as we're being honest, this list will change over time. I'll be adding new folders, removing others. I may find I'm really thinking Y instead of X when I want to find that plugin, and therefore the plugin belongs in folder Y instead of folder X where it is now.<br><br>And though this took me several hours to put together, and though I have to initially spend time with each new plugin to decide where it belongs, I found the benefits twofold: first, it saves me time again and again, every single day, as I spend less time searching for my plugins and more time using them. And second, having the tools right at my fingertips instead of lost and buried means that my creative groove remains uninterrupted by what used to be a very common distraction. And as any creator could tell you, staying in your creative process is critical.<br><br>Yeah, organizing plugins isn't fun. Unless you get an organizational high. I don't. 
But this time spent sorting was well worth it for me, and it may be for you too.<br><br>Happy producing.</p>Milo Burketag:miloburke.com,2005:Post/46770602017-04-18T11:55:00-06:002018-04-25T23:24:05-06:00Developing Talent: How to Become Better<p><span class="font_large"><strong>Introduction</strong></span></p>
<p>I'm just going to be up-front on this one. I don't believe in prodigies. In fact, I'm not 100% sure that one person can innately have more talent than another. I think our ability to create and create well comes entirely down to the experience we've accumulated for ourselves.<br><br>Wait a second, Milo, you say. You mean to tell me that my singer friend who sounds so amazing isn't better than I am? What about my piano-playing buddy who can just hear a melody and play it?<br><br>Well, obviously, no. Lots of people have talents that they can use to accomplish things more quickly or with greater quality. But what I'm getting at is that there isn't some gene for recognizing musical intervals that your parents didn't give to you, or some genetic marker that lets your engineer friend with golden ears recognize specific frequencies way more accurately than you can.<br> </p>
<div style="text-align: center;"><span class="font_large"><em>The way I see it, it's all about practice.</em></span></div>
<p><br>You've probably heard the theory that it takes 10,000 hours of practice to master something - that if you really want to be good, you have to put in the time. But how long does it take to work on something for 10,000 hours? If you work at your craft for four hours a day, five days a week, it will take you ten years! And I've heard tweaks to this theory, that it takes 10,000 hours of focused, directed practice specifically learning things you don't already know or can't already do. And that makes it even more challenging.<br><br>First off, that's daunting. You mean I can't be good at something until I make it a huge priority consistently for ten years?<br><br>Second, how does one even begin with something like this?</p>
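<p>If you want to sanity-check that arithmetic, here it is spelled out (the fifty working weeks per year is my own assumption; the post just says "ten years"):</p>

```python
# Quick check of the 10,000-hour math: four hours a day, five days a
# week, for roughly fifty weeks a year, lands at about ten years.
HOURS_PER_DAY = 4
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 50  # assumption: a couple of weeks off each year

hours_per_year = HOURS_PER_DAY * DAYS_PER_WEEK * WEEKS_PER_YEAR  # 1000
years_to_mastery = 10_000 / hours_per_year                       # 10.0
```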
<p><br><strong><span class="font_large">Making It Simple</span></strong><br><br>I want to make the case for consistently putting in time. Don't think of it as 10,000-hours-or-bust. Instead, think of it as four hours, or one hour, or whatever time you have today. Big habits start small.</p>
<p>You know what you wish you were better at: you want to be a better songwriter? Then write more songs. You want to be a better producer? Then create more tracks. You already know what you want to be good at, and you already know the truth of what you need to do to get better. It's just a matter of choosing to start small, start today, instead of waiting for some magical moment in the future. That moment won't come on its own, so choose to make it happen today.</p>
<p><br><span class="font_large"><strong>Focus on the Right Things</strong></span></p>
<p>I love listening to production podcasts and songwriting podcasts, reading tips on forums, and watching tutorials on YouTube. But in my opinion, while these are useful to a degree, they're not nearly as beneficial as simply working. You need to build a volume of work for yourself if you really want to learn. And you'll learn so much more from actually doing than just learning about doing. Tutorials have their place, but that should be 10% of your creating time, no more. It's easy to rationalize 50%, and then watch in guilt as that slowly balloons to 95% of your creative time as distractions come up or creative sessions get cut short. But you can control this by <a contents="making the actual work a priority" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/becoming-more-productive" target="_blank">making the actual work a priority</a>. Learning is helpful, but make sure that you're "doing" more than you're "learning". Way more.</p>
<p><br><span class="font_large"><strong>Start Small</strong></span><br><br>In full disclosure, I get stage fright all the time regarding making music, even in the privacy of my own home. I'm afraid to write a song or produce a track because I feel it's not going to be good enough.<br><br>Before I share any more on this, I want you to watch a video of <a contents="the best advice for musicians I've ever heard" data-link-label="" data-link-type="url" href="https://www.youtube.com/watch?v=RDyg_41QF1w" target="_blank">the best advice for musicians I've ever heard</a>. Don't worry, it's only two minutes long.<br><br>When I first watched this little clip, I found Sheeran's perspective so encouraging. You mean I'm not broken because I'm writing bad songs? And that writing bad songs leads to good songs? That's such a refreshing perspective, but when you have enough distance from something to see it clearly, it makes perfect sense: anybody starting out isn't going to be as good as someone who has been doing it for a while. Just like how someone just starting out at the gym shouldn't be discouraged if he's lifting small weights. Same with cooking, novel writing, painting, basketball, anything really. It takes time and dedication to slowly work up to the big stuff.<br><br>Here's the thing about crappy songs: first, no one is going to think they're as crappy as you think they are. You are your own biggest critic. Second, if the song truly is crappy, you don't have to share it. But you do still have to make it if you want to grow as a songwriter or producer. And third, you'll never learn to write good songs or produce good tracks if you don't get your crappy songs and crappy tracks out of your system first.<br><br>So freely create! Do it! Stop letting fear of quality keep you from creating, because it will if you let it. Put in the time doing your thing, doing the thing you hope you'll be paid handsomely for in five years and will be famous for in ten. 
Yes, that thing you imagine yourself being successful doing in the future? If you want to be that good and get to that place, you need to start doing it now. Start small, and just begin.</p>
<p><br><br><span class="font_large"><strong>Back to Prodigies</strong></span><br><br>I'm sure there's some stickler still reading who wants to point out that so-and-so is so much better than he is, or that Mozart started composing at a ridiculously young age. I fully admit some people are more talented than others. But why? It comes down to experience and time, in my opinion. I suspect wee-little-Mozart didn't blindly stumble into music, but was surrounded by it and coached in it since before he could form permanent memories. A lot of things are easier to learn when the brain is still developing: language is a big one, and music is too. Not that adults can't learn languages or music.<br><br>And I think another piece of the puzzle is what you're encouraged in and discouraged from at a young age. If your family frowns on academics yet praises you for your dancing and rhythm from a young age, you'll probably grow to believe that you're a good dancer and bad in school. And that belief, built on the feedback received, can focus and drive someone to make it come true: being willing to take risks dancing in front of more people, and taking fewer risks in school by giving up on a hard problem before another student might, just short of the answer or the moment of understanding. And all of these experiences stack on each other and build deep ruts that we believe define us.</p>
<p>I grew up being told I can't dance, and my fear of humiliation kept me from trying, which made it true. The truth is that I can dance: because of my musical experience, I have a great sense of rhythm. I'm not a great dancer, but I'm not incapable of ever dancing, like I thought I was. And what if I want to get better? Try dancing more often, either out where there's music or in the safety of my home. And if I want to get a lot better? Take lessons. The skill of dancing wasn't given to me. I got a slow start, but I can choose to make up for lost time if it's important to me.<br><br>So maybe your friend with an amazing voice was told she was good from the age of five, and that inspired confidence to sing in front of others and motivation to practice more. Maybe you started with the same raw material, but because someone told you at age five that you stink, you've been avoiding it till now. I'm not going to lie, your friend has an advantage. Probably over me too. But it's not too late for you to start learning. Put in the time yourself, spend the hours practicing, and learn to find confidence from within yourself. That's what you need to grow.</p>
<p><br><br><span class="font_large"><strong>Keeping Perspective</strong></span><br><br>And lastly, 10,000 hours? That's a lot. How are we ever expected to achieve that?<br><br>Ten thousand may be a valuable benchmark for mastery, but that doesn't mean that someone with 9,999 hours of practice has no skill and can't create meaningfully.<br><br>The way I see it, if you want to start playing guitar, the first ten hours of practice are going to teach you a lot. And anyone would be able to listen to you before and after that first ten hours of practice and hear a marked improvement. Sure, you're not great yet, but those first critical hours have laid the foundation for greater skill. And 50 hours of practice later, I bet your guitar playing will start to sound more like music. And 100 hours after that, you may have friends start telling you, "You're good!"<br><br>And by the time you reach 1,000 hours? I'm sure you could still find faults if you're a perfectionist and pessimist. But if you're honest with yourself? You'll be pretty good at guitar, or whatever you started, by the 1,000-hour point.<br><br>As I said, it's not 10,000 hours or nothing. Every hour counts, and the early hours count far more than the later hours. Just put in your time and watch the results come to you. And if you're consistent and dedicated and continually pushing yourself to grow instead of retreading old ground, the results absolutely will come to you.</p>Milo Burketag:miloburke.com,2005:Post/46668002017-04-11T12:44:44-06:002018-06-28T15:34:33-06:00Polishing Your Mixes with Dirt<p><span class="font_large"><strong>What I Hear in Beginner Tracks</strong></span><br><br>When I meet new producers and hear their stuff for the first time, I'm often floored by the musical and instrumental creativity they bring to the table. 
<em>How come I didn't think of using that instrument in that way??</em> I love the creative differences new people, advanced and beginner alike, bring to their music.<br><br>But one thing that often sticks out to me with a lot of tracks I hear is that they sound pretty raw and unprocessed.<br><br>Now, <a contents="mixing is its own art form," data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/the-core-of-mixing" target="_blank">mixing is its own art form,</a> and YouTubers around the web are already covering that in depth. I won't claim any groundbreaking truths here, but I view mixing as finding the balance between all instruments, giving space for each to speak; controlling each instrument to sound clean and consistent with EQ and compression; and adding interest and sparkle where appropriate, with delay and reverb and other effects. Really, <a contents="less can be more" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/minimalism-in-mixing" target="_blank">less can be more</a>.<br><br>But even a good mix can sound pretty raw when the instruments have a very vanilla, untouched sound to them. And this is where the dirt comes in.</p>
<p><br><br><em><strong><span class="font_large">Dirt?</span></strong></em><br><br>When I first started out, I frowned upon intentionally adding distortion to things: <em>why would someone deliberately want to make something sound worse?</em> But the more music I hear and the more music I make, the more I'm finding that I love the vibe of adding grunge and distortion to otherwise clean sounds. I no longer view it as a pursuit of showcasing instruments in their purest form, but as mangling them a bit to make the entire song have more texture and grit to it. And in the world of software instruments that are generally recorded perfectly, even <em>too</em> perfectly, we can accomplish this by destroying the purity of the sounds through dirty effects.</p>
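<p>To make the idea concrete, here's a minimal sketch (Python/numpy, my own illustration rather than any plugin's algorithm) of one common way to add dirt without losing the clean sound: soft-clip a copy of the signal, darken it with a crude low-pass, and tuck it under the original in parallel. Every number here is just a starting point to tweak.</p>

```python
# Parallel dirt: distort a copy, roll off its highs, blend it underneath.
import numpy as np

def distort(x, drive=8.0):
    """Soft-clip via tanh; higher drive = more grit."""
    return np.tanh(drive * x)

def darken(x, coeff=0.9):
    """Crude one-pole low-pass to tame the sizzly highs of the distortion."""
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - coeff) * s + coeff * acc
        y[i] = acc
    return y

sr = 8000
t = np.arange(sr // 4) / sr
clean = 0.3 * np.sin(2 * np.pi * 220 * t)   # stand-in "keyboard" tone
dirt = darken(distort(clean))                # crushed, then warmed
mix = clean + 0.25 * dirt                    # dirt tucked under the clean signal
```

<p>In the DAW, that's the send-to-a-crush-bus trick described below: the clean track stays intact, and you ride the dirty bus fader until the blend feels right.</p>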
<p> </p>
<p><strong><span class="font_large">How to Switch Things Up</span></strong><br><br>At the risk of giving away too many secrets, I'll share a few things I do to get instruments fitting neatly with the vibe of a song:<br><br>- If an instrument is sounding too clean, reaching for straight-up distortion can really add some grit to it. I like funky, vintage, analog-style distortion for this. A couple of my favorites are <a contents="Decapitator" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/decapitator/" target="_blank">Decapitator</a> and <a contents="Devil-loc" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/devil-loc-deluxe/" target="_blank">Devil-Loc</a>, though there are many contenders by many brands. Sometimes it sounds best to lay this straight on an instrument, sometimes with the highs rolled off, and sometimes in parallel with the original instead of replacing the original.<br><br>For example, I might have a keyboard in a track that is sounding a little too clean. If I create a send from that keyboard to a bus, I can crush the vitality out of the sound of the keyboard with distortion, and then significantly roll off the highs so it sounds warm and funky instead of sizzly and harsh. Mix a little of this bus in to taste with the original keyboard sound and it takes on a new character.<br><br>- You can get even crazier with multi-band distortion and freaky presets, like with <a contents="Trash" data-link-label="" data-link-type="url" href="https://www.izotope.com/en/products/create-and-design/trash.html" target="_blank">Trash</a> by <a contents="iZotope" data-link-label="" data-link-type="url" href="https://www.izotope.com/" target="_blank">iZotope</a>. If you ever see this plugin on sale, buy it! 
I use it all the time on synths and basses to get thickness and grit that elaborate on synth patches in a way they just can't achieve on their own.<br><br>- This is a super basic tip, but just rolling off the highs and lows of a track with EQ can give a really different vibe to an instrument. Lo-fi is a versatile effect, and band-limiting the frequencies with an EQ is one of the most powerful tools to get there. With each instrument you apply this to, you have to play around with where you want the low-pass and high-pass to be set, and how aggressive of a slope sounds best to your ears. But experimentation is the fun part, right?<br><br>- I love throwing delays on instruments too, particularly geared towards width or ping-pong. It can add space to a synth patch, thickness to a keyboard, or complexity to drums. I especially like when a delay chain is even more band-limited by EQ than the original instrument, and also if it has some heavy saturation to it. <a contents="Some delay plugins" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017" target="_blank">Some delay plugins</a> have this built in, but you can always accomplish this with other plugins in the chain.<br><br>- Super aggressive compression can play a role too. If you compress something hard enough, particularly with compressors modeled after vintage hardware gear, it can begin to take on a life of its own. You know you're getting there when compression brings out something like reverb that you don't recall hearing in the original track. Of course, this is often too much on its own, so try bringing it gradually into the mix in parallel to the original sound.<br><br>- Vintage has its sound, and there is so much you can do other than band-limiting your signal with an EQ. Tape wow and flutter effects, aggressive tape saturation, and vinyl crackles are just a few examples. 
<a contents="One of my new favorite plugins" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017" target="_blank">One of my new favorite plugins</a> is <a contents="XLN Audio" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/" target="_blank">XLN Audio</a>'s <a contents="RC-20 Retro Color" data-link-label="" data-link-type="url" href="https://www.xlnaudio.com/products/effect/rc-20_retro_color" target="_blank">RC-20 Retro Color</a>, a plugin that combines many different flavors of vintage emulation, each of which can be uniquely tailored. And it's great to flip through presets to find some wacky sort of sonic degradation combination I would never have come up with on my own.<br><br><br><strong><span class="font_large">Make Your Own Rules</span></strong><br><br>These are just a few examples. But I highly recommend experimenting. Try running non-guitars through guitar amp simulators and pedalboard-style processing. Try splitting a synth or bass or drum track into two or three layers and using different distortions or vintage processing on each. Try processing the highs and lows of the same instrument or drum bus with different types of distortions or vintage processing. And here's what I love about this: you could be making future bass or EDM or PBRNB or alt rock, and these tips will still be relevant in helping you find a unique character for your tracks.<br><br>There is so much you can do to help break away from the clean, boring, stock sounds that instruments give you. This definitely applies to synths, but even more so with sampled instruments that sound too clean to be usable. So get creative!<br><br>Thanks for reading. I really love building this connection with you. And if you have any processing tips to get instruments sounding dirty and alive that I didn't mention, please post in the comments below. 
I'd love to hear what you guys have come up with.</p>Milo Burketag:miloburke.com,2005:Post/46560292017-04-04T09:24:04-06:002018-06-07T14:36:01-06:00Leveraging Presets to Maximize Productivity<p><span class="font_large"><strong>Introduction</strong></span><br><br>Anybody who sits down to do creative work knows that time is not an unlimited resource. That's unfortunate, because <a contents="time invested is the number one influence on our abilities" data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/how-to-become-better" target="_blank">time invested is the number one influence on our abilities</a> and our freedom to come up with ideas.<br><br>This is the very first post in my production blog, and I decided to spend it on a tip for getting the most out of our time, because time spent really is king, and maximizing the time we have unlocks both technique and results.<br><br><br><strong><span class="font_large">The Problem</span></strong><br><br>I can't speak for most of you, but I'm a preset hopper. When I get a new synth, I immediately start jumping through presets to see what the instrument can do. When I get a <a data-link-label="" data-link-type="url" href="https://miloburke.com/production-blog/blog/three-of-my-favorite-plugins-july-2017">new delay plugin</a>, I click through the presets to get a feel for the style of sounds it can make. When I get a new multi-plugin, like <a contents="Effect Rack" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/soundtoys-5/" target="_blank">Effect Rack</a> by <a contents="Soundtoys" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/" target="_blank">Soundtoys</a>, I embrace the creative possibilities of having someone else choose which plugins appear in which order and how each is set, then I slap that combination on a sound the creators never dreamed it might process.
This is a big part of my creative discovery and sound design process.<br><br>The trouble starts when I run out of "creative presets" and start recycling the limited few stock presets I really love, putting them in heavy rotation. Or worse, I decide I don't like a particular delay plugin or synth because I don't care for its stock presets, or because it makes it too hard to find the presets I might actually use.<br><br>Trouble begets trouble, and I start forming bad habits. I stuck with <a contents="AIR" data-link-label="" data-link-type="url" href="http://www.airmusictech.com/" target="_blank">AIR</a> <a contents="Multi-Delay" data-link-label="" data-link-type="url" href="http://www.airmusictech.com/product/creative-fx-collection-plus" target="_blank">Multi-Delay</a> for way too long over more advanced, colorful, powerful delays because I could easily use it to jump to a ping-pong preset that gave an instrument more life and movement. I decided <a contents="H-Delay" data-link-label="" data-link-type="url" href="https://www.waves.com/plugins/h-delay-hybrid-delay#delay-on-drums-electronic-music-production" target="_blank">H-Delay</a> was a better solution for me than the mighty <a contents="EchoBoy" data-link-label="" data-link-type="url" href="http://www.soundtoys.com/product/echoboy/" target="_blank">EchoBoy</a> because H-Delay put its most interesting presets right up front. And I can't tell you how much time I waste in <a contents="Omnisphere" data-link-label="" data-link-type="url" href="https://www.spectrasonics.net/products/omnisphere/" target="_blank">Omnisphere</a> looking for patches that fit the sound I want and not finding them.
It's a major creativity buzz-kill.<br><br><br><span class="font_large"><strong>The Solution</strong></span><br><br>Let me tell you what I do now:<br><br>When a song is finishing the production stage and moving on to the mixing stage, I go into each of the virtual instruments I used and save the preset of what I was using. Whether it was a patch I created or, more likely, a preset I heavily mangled and modified to achieve the sound I was looking for, that was valuable time spent creating a sound I want to hear, and it doesn't need to be repeated for this type of sound in future projects. So I save it as a preset.<br><br>And when I wrap up the mixing stage of a project, I do the same for my effects plugins. Clearly, compressors and EQs need to be dialed in on a case-by-case basis, but it's definitely worth saving your intricately tweaked delay and reverb settings as presets for future use. Same with distortion plugins you've tweaked for just the right vintage crunch, tape emulation you've coaxed into freaking out in just the right way, or the exact settings that get your filtering or modulation plugins to breathe a little life and movement into an otherwise stale sound.<br><br>And this works for entire plugin chains too: many DAWs allow you to save an entire insert chain of multiple plugins, along with the settings for each, as a single preset. That can save a heap of time, helping you quickly recreate just the right vocal sound for the same singer on the same mic, or just the right vibe and grunge to mix in parallel on your drums. We're talking major time savers here, on top of the ability to recreate your favorite sounds from the past.<br><br><br><strong><span class="font_large">Why It Works</span></strong><br><br>Now, I'm like you. I'm not super eager to start one more clean-up task, one more maintenance routine.
Particularly when it seems to take me out of the moment of making music, deferring work on a song even a little. It feels like a musical buzz-kill for a moment, no doubt. That's exactly why I do it after completing a major phase of a song, not in the middle of creating. That said, you don't have to have been saving presets for months to gain the benefit. Try setting aside even a couple of delay presets that you know please you and achieve a sound you love hearing: they'll help you stay in your groove, and they'll help you create faster and more freely next time.<br><br>And as for synths: yeah, it will take a while to build a bank of patches you like the sound of. But imagine how freeing it will be to flip through presets knowing that every one of them matches your genre, your particular influences, and your unique sound. Of course, you likely won't want to reuse old patches exactly as they are, at least not often. But now you have a starting point that is more original and more your style than anything available to everyone else with the same synth. And with a few tweaks, you'll have a completely new-sounding patch to use in your next project. It builds and builds, and it's a huge ingredient in sounding like you, a big leap forward in getting your music to sound the way you imagine it during creation.<br><br>Give it a try for a week, or for one full project. And see if you're not tempted to reach for your own delay presets or synth patches as a starting point next time you're in the zone and not immediately finding the sound you're looking for. I can tell you now: it's addictive.</p>
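<p>If it helps to see why saving a whole chain works so well, a saved insert-chain preset is conceptually just an ordered list of plugins plus the parameter values you dialed in for each. Here's a minimal sketch in Python; the plugin names, parameters, and file format are all made up for illustration, not any real DAW's preset format:</p>

```python
import json

# A hypothetical vocal insert chain: an ordered list of effects, each
# with the parameter values you dialed in. (Names and parameters here
# are invented for illustration, not a real DAW's preset format.)
vocal_chain = [
    {"plugin": "EQ", "params": {"high_pass_hz": 90, "presence_db": 2.5}},
    {"plugin": "Compressor", "params": {"ratio": 4.0, "attack_ms": 10}},
    {"plugin": "Delay", "params": {"mode": "ping-pong", "feedback": 0.35}},
]

def save_preset(chain, path):
    """Write the whole chain to disk as one preset file."""
    with open(path, "w") as f:
        json.dump(chain, f, indent=2)

def load_preset(path):
    """Recall the chain later, exactly as you left it."""
    with open(path) as f:
        return json.load(f)

save_preset(vocal_chain, "vocal_chain.json")
assert load_preset("vocal_chain.json") == vocal_chain
```

<p>The point is just that everything you tweaked lives in one recallable unit, which is why a single chain preset can bring back an entire vocal sound in one click.</p>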
Milo Burke