Mixing help needed.

  • I've got my first full mix about 90% done, and I'm working hard on it. I really like how the mix sounds on my monitors, but when I played it back through my regular stereo speakers, I found I needed to drop some bass EQ so the bass track isn't as muddy and thumpin', and raise the upper mids to let the guitar track stand out from the rest of the mix (as shown in the attachment below).


    Do sound engineers ever run a global EQ over the final cut after rendering, or should I mod those frequencies inside the mix for each instrument before rendering?


    I would like to run vocals over this track, so if I do a global EQ for the mix then I would have to import the global EQ'd track into a new project as a whole, and then do vocals accordingly. Is this acceptable?


    I want to make sure I do mixes the right way, or the common practiced way.


    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • What you've just encountered is the concept of "mix portability," the scourge of sound engineers everywhere. Sounds good in your studio, but play it in the living room and the bass disappears, play it in your car and the bass blows out your rear window, etc. It's a non-trivial consideration and among the more difficult things to get right when mixing music.


    The first and most important thing is making sure that your mixing environment is telling the truth. If you're playing back via speakers, then the speakers could lie, but even if they're honest, your room acoustics get a vote. Studio reference monitors are allegedly designed to be completely flat (FRFR to you guitar type guys). Most of them are pretty good these days, but there can certainly be variations. However, room acoustics are a huge issue. Your monitors can be absolutely state of the art, but sound bounces around your room like a cue ball on a pool table. Even sitting close to the monitors (they're often referred to as "near field monitors" for that reason), you're still not immune to what the room acoustics do to the sound before it gets to your ears.


    Lest you think you can solve the problem by just mixing with headphones (assuming you can stand the ear fatigue), it's much more difficult to find headphones that aren't hyped in some range or another, so once again your source of truth is of dubious integrity.


    Treating the acoustics in your room can get expensive, and if you really want to do it "correctly" it can involve no small amount of rocket science. Acoustics is an in-depth field of study. Now, having said all that, here are some real world things you can do without spending a gazillion dollars on hiring an acoustical engineer.


    The first trick is using reference mixes. Chances are good that a major label recording artist got that album mixed and mastered by pros in a pro, acoustically treated environment. Mix portability is foremost in their minds. So, if you play a CD (uncompressed is a better choice than mp3, since it's your reference point) of a band that's close to the vibe of your music, you can match your EQ to what you hear. For instance, if the bass is too heavy through your regular stereo speakers, it's a good bet that if you compare your bass to the CD's bass on your studio monitors, you'll find that they used less. So, even if it doesn't sound as punchy in the bass when you're mixing, you can be fairly certain that if you have a similar level, your mix will sound right on your stereo, in your car, etc. Same for all other frequencies, of course.


    Add a track in your DAW for the reference song. You may have to jump through a couple of hoops, but remember it's important that the reference track does not go through any processing like compressors, limiters, etc. that you might have on your master bus. You want to hear your song through your master bus, but the reference completely unaltered. Then you can mute / unmute to compare the two when mixing. You'll do frequent "car tests" as it's known, taking your mix to different environments to see how it works until you get it right. Bring your reference CD as well, and try to play them in as many different environments as you can to really get a feel for what makes a portable mix.
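
    If you like to sanity check things outside the DAW as well, here's a rough Python sketch of the same comparison idea. The file names my_mix.wav and reference.wav are just placeholders; it reports the average energy in a few broad bands so you can see roughly where your mix diverges from the reference:

    ```python
    # Compare broad-band energy of your mix against a reference track.
    # Assumes two placeholder WAV files at the same sample rate.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    BANDS = [(20, 120), (120, 500), (500, 2000), (2000, 6000), (6000, 16000)]

    def band_levels(path):
        rate, data = wavfile.read(path)
        if data.ndim > 1:                       # fold stereo to mono
            data = data.mean(axis=1)
        data = data.astype(np.float64)
        data /= np.max(np.abs(data)) or 1.0     # normalize so overall level doesn't skew things
        freqs, psd = welch(data, fs=rate, nperseg=8192)
        return [10 * np.log10(psd[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                for lo, hi in BANDS]

    mine = band_levels("my_mix.wav")            # placeholder file names
    ref = band_levels("reference.wav")
    for (lo, hi), m, r in zip(BANDS, mine, ref):
        print(f"{lo:>5}-{hi:<5} Hz   mix {m:6.1f} dB   ref {r:6.1f} dB   diff {m - r:+5.1f} dB")
    ```

    It's no substitute for your ears, but a few dB of extra energy in the bottom band is a pretty strong hint about where the mud is coming from.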


    Next, regarding room acoustics, there are a few things you can do. I have professionally manufactured fiberglass panels in my control room and I have a drop ceiling with fiberglass panels to stop the bounce in that direction. That helped a lot, but the frequency response is still not flat. My weapon of choice was the DBX DriveRack PA2. It's a one-rack-space hardware unit that sits between the console and the speakers. You plug in a special mic, run the analysis program, and it determines the frequency response at your mix position. It has an eight band parametric EQ that it automatically sets to compensate for too much / too little in a given frequency range to nudge the curve back to flat. That made a huge difference in the reliability of my room.
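
    For what it's worth, I don't know the DriveRack's actual algorithm; the toy sketch below (with made-up measurement numbers) just shows the general idea: compare each band's measured level at the mix position to a target and derive the cut or boost needed to nudge that band back toward flat.

    ```python
    # Toy room-correction illustration, NOT the DriveRack's algorithm.
    # Hypothetical pink-noise measurement, dB per band at the mix position.
    measured_db = {
        63: -2.0, 125: +4.5, 250: +1.0, 500: -0.5,
        1000: 0.0, 2000: -1.5, 4000: -3.0, 8000: +0.5,
    }

    target = sum(measured_db.values()) / len(measured_db)   # aim for the average level
    correction = {f: round(target - level, 1) for f, level in measured_db.items()}

    for f, gain in correction.items():
        print(f"{f:>5} Hz: {gain:+.1f} dB")                 # EQ move per band
    ```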


    Sonarworks has a Reference plugin that does the same kind of thing and is less expensive, around $250. I tried their demo and it was a bit twitchy for my taste, and it's the last thing in your output bus, so you also have to remember to disable it when you're printing a mix. Still, a lot of people have good results with it.


    As for your studio monitors, while there are certainly superstar quality monitors that you can mortgage your home to buy, what's really more important is to just know the speakers you have. Yamaha NS-10s were industry standards, and they sounded absolutely horrible. However, the point was that you knew how a good mix would sound on them. The same applies to better quality speakers. Just know what a good mix sounds like on them and you're 99% there. Again, reference mixes are your friends. Eventually you'll just instinctively mix the bass lower than you want to hear it in the studio (until you get your environment treated), knowing that this sound in the studio translates to perfectly rockin' bass in the living room, the car, etc.


    One final note regarding test mixes - don't forget your phone! It needs to sound decent through the phone's speakers. You can forget about having much bass response there, so you also need to mix your bass so that upper frequencies for transients, etc. will still give a sense of the bass "being there" when played on a device that can't move air in the lower range. Also, and this is even more of a crap shoot, phone earbuds are often massively hyped on the low end. Once again, listen to the mp3 version of your reference mix on your phone and see how they managed it.
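
    One common trick for that is blending a little parallel saturation under the bass so it generates upper harmonics the phone speaker can actually reproduce. Here's a toy Python sketch; the bass.wav file name, the drive and the blend amounts are all hypothetical:

    ```python
    # Toy parallel-saturation sketch: soft clipping generates upper harmonics
    # so the bass still "reads" on speakers that can't reproduce the fundamental.
    import numpy as np
    from scipy.io import wavfile

    rate, bass = wavfile.read("bass.wav")       # hypothetical bass track
    bass = bass.astype(np.float64)
    if bass.ndim > 1:
        bass = bass.mean(axis=1)
    bass /= np.max(np.abs(bass)) or 1.0

    drive = 4.0
    harmonics = np.tanh(drive * bass) / np.tanh(drive)   # soft clip adds harmonics
    blended = 0.85 * bass + 0.15 * harmonics             # subtle blend underneath

    wavfile.write("bass_harmonics.wav", rate, blended.astype(np.float32))
    ```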


    If you feel like you just fell down the rabbit hole, don't worry. You'll have lots of company there. These are issues we've all grappled with, and gaining these skills is not an overnight thing. There's no "Portable Mix" button on your computer. You just have to slug it out like the rest of us. Still, if you at least know the battle you're fighting, you can avoid at least some of the nicks and cuts.

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • Excellent Chris Duncan! I truly appreciate your detailed response!


    Now, I won't feel so bad rendering and deleting, rendering and deleting, rinse and repeat this mix. I'm so dang close though. UGH!

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • Do sound engineers ever run a global EQ over the final cut after rendering, or should I mod those frequencies inside the mix for each instrument before rendering?


    I would like to run vocals over this track, so if I do a global EQ for the mix then I would have to import the global EQ'd track into a new project as a whole, and then do vocals accordingly. Is this acceptable?

    It occurs to me that I didn't speak to these issues. While there's no one "right" way to mix, a common practice is a three-step approach.


    First, "do your eq with the microphone" is a common saying. It means capturing the source so that it properly sits in the mix without needing to EQ it after the fact. While we're here because we use Kempers, if you've ever miked a guitar cab you'll know that even half an inch difference in placement towards or away from the cone will have a very significant effect on the frequency, i.e. more treble closer to the cone, etc. While we don't need to mic a cab, remember that for a guitar, what sounds good when you're rocking out alone isn't necessarily the best tone for the mix. For example, that low end thump you like to hear can get into the bass player's turf and a mix engineer will probably put a high pass filter on it.


    Apply the same thinking to vocals and anything else you track. There's only so much frequency real estate to go around in your mix. If you decide who's going to own what turf and capture your tracks with that in mind, you save a lot of "fixing in the mix" when there's too much of a frequency range on a track that you then have to carve out with eq. And that brings us to the second step. Once you have a good track, most people will typically make eq and other adjustments on the individual tracks rather than applying a curve to the entire mix. It's much easier that way as you get fine grained control.


    Once you've got your tracks tweaked and the faders where you want them, you can consider your master bus for overall processing. It's common to have compressors / limiters at this point to smooth out the entire thing and also to make sure your levels are comparable with other pro mixes. You can certainly strap an eq across the master bus as well, but if you find yourself making big movements there it's usually an indication that you have a problem track or two and should really address it at the source. Personally I don't use an eq on the master bus, and while there's no right or wrong, I'd recommend that you don't, either. Mostly as a training tool to make you fix the problems on the tracks.


    As for vocals, they're just one (or more) tracks in your DAW project for a given song. It's good to get at least a scratch vocal track down as early as possible, as that will help you get a feel for how to keep other stuff out of the way so that it doesn't get pushed into the background. Many in the industry consider the vocal to be the most important part of the song (yes, we all know it's really the guitar, but just sayin'). Consequently, some mixers will bring up the vocal fader first, and then incrementally bring up the bass, drums, guitar, etc. in a "supporting role" kind of mindset. That can be a useful approach. Others want to get the bass & drums up first, build off the rhythm section, then drop the vocal on top at the end. You should experiment with both.


    A big thing to remember when reaching for eq is "cut, don't boost." If the upper mids of your vocal aren't loud enough, instead of boosting that frequency on the vocals, figure out who's getting in the way (likely it's guitar and / or keyboards), and cut that frequency on those tracks. Another general rule of thumb is to try keeping your moves to around 3 dB. You can certainly do more (or less), but again, if you're having to make a 12 dB adjustment on one track, it should really make you step back and ask what the real problem is.
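
    For the curious, the kind of gentle cut I mean can be sketched with the standard RBJ peaking EQ biquad. The keys.wav file, the 3 kHz center and the 3 dB depth are all hypothetical, just to show the scale of a typical move:

    ```python
    # Gentle peaking cut ("cut, don't boost"), using the RBJ cookbook biquad.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q=1.0):
        """Biquad (b, a) coefficients for a peaking EQ, RBJ cookbook style."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    rate, keys = wavfile.read("keys.wav")       # hypothetical offending track
    keys = keys.astype(np.float64)
    if keys.ndim > 1:
        keys = keys.mean(axis=1)

    b, a = peaking_eq(rate, f0=3000, gain_db=-3.0)   # modest 3 dB cut at 3 kHz
    wavfile.write("keys_cut.wav", rate, lfilter(b, a, keys).astype(np.float32))
    ```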


    I don't know which DAW you use, but there's a recent feature in Cubase that I just love. The eq window lets you bring up another track and overlays the frequencies of both in different colors, so you can easily see who's getting in the way, overcrowded frequencies, etc. I think they got that idea from the FabFilter plugin, but it's really handy.
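
    If your DAW doesn't have something like that, you can get a rough approximation with Python and matplotlib. This isn't Cubase's implementation, just the same general idea: overlay the smoothed spectra of two hypothetical tracks, vocal.wav and guitar.wav, so crowded ranges stand out:

    ```python
    # Overlay the spectra of two tracks to spot frequency ranges they fight over.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import welch

    def spectrum(path):
        rate, data = wavfile.read(path)
        if data.ndim > 1:
            data = data.mean(axis=1)            # fold to mono
        freqs, psd = welch(data.astype(np.float64), fs=rate, nperseg=8192)
        return freqs, 10 * np.log10(psd + 1e-12)

    for path, color in [("vocal.wav", "tab:blue"), ("guitar.wav", "tab:orange")]:
        freqs, db = spectrum(path)
        plt.semilogx(freqs, db, color=color, label=path)

    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Level (dB)")
    plt.legend()
    plt.show()
    ```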


    So, get the sound right at the source, do most of your corrections on a track by track basis, and learn to hear who's getting into someone else's turf and get rid of the problem there instead of boosting the victim's frequencies.


    There are a lot of resources out there, but here's a good one for getting familiar with the basics. His audience is home recording folks like us, and he offers a lot of down to earth, practical advice that might help you get up and running.


    Recording Revolution

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • Excellent Chris Duncan! I truly appreciate your detailed response!


    Now, I won't feel so bad rendering and deleting, rendering and deleting, rinse and repeat this mix. I'm so dang close though. UGH!

    Dude, if you ever get to the point where you never have to go through that, teach a course. We'll all show up! :)

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • EVERYTHING that Chris said.


    The Reference Mix may be the main starting point. Find similar songs, in the same or similar genre, and make your instruments and mix sound as much like the reference mix as you can. Doing that can lessen the impact of your individual room and speakers, and the more you do it, the more you will be attuned to how to get broadcast quality mixes with your speakers in your room.

  • EVERYTHING that Chris said.


    The Reference Mix may be the main starting point. Find similar songs, in the same or similar genre, and make your instruments and mix sound as much like the reference mix as you can. Doing that can lessen the impact of your individual room and speakers, and the more you do it, the more you will be attuned to how to get broadcast quality mixes with your speakers in your room.

    Absolutely! I have a reference mix, but I was previously trying to match up with it outside of the DAW, with stereo speakers after rendering, and never thought it would be necessary to add that mix inside the DAW with the studio monitors to compare that way.

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • One of the most common mistakes (oh, have I been guilty of it too) is bass on the bass guitar. If you have ever heard raw bass tracks, DIs or both, from mixes made by pros, there's no bass. Bass on the bass guitar is something you build up in the mix. Make sure everything sounds as separate as possible before you even think of mixing, and before you start mixing, have a vision of what you would like it to sound like when finished. Don't just mix and believe it will work out. That profile (or profiles) you love might be wrong for this project. If they clash with the vocal, for example, change to other profiles. The number one mistake we've all made is to take the sound we like on every instrument, think it will be fixed in the mix, and then wonder why it doesn't sound as good as we hoped. And ride the faders a lot, too.

    IK Multimedia has a cheaper thing called ARC https://www.ikmultimedia.com/p…V=Other%20Filter&PSEL=arc

    Think for yourself, or others will think for you without thinking of you

    Henry David Thoreau

  • Using it inside the DAW, routed directly to the master outputs, bypassing all master bus effects, is really the key to ending up with a mix that is as portable as your reference mix.

    Agree! I did that yesterday and I am super close to finalizing the mix.

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • One of the most common mistakes (oh, have I been guilty of it too) is bass on the bass guitar. If you have ever heard raw bass tracks, DIs or both, from mixes made by pros, there's no bass. Bass on the bass guitar is something you build up in the mix. Make sure everything sounds as separate as possible before you even think of mixing, and before you start mixing, have a vision of what you would like it to sound like when finished. Don't just mix and believe it will work out. That profile (or profiles) you love might be wrong for this project. If they clash with the vocal, for example, change to other profiles. The number one mistake we've all made is to take the sound we like on every instrument, think it will be fixed in the mix, and then wonder why it doesn't sound as good as we hoped. And ride the faders a lot, too.

    IK Multimedia has a cheaper thing called ARC https://www.ikmultimedia.com/p…V=Other%20Filter&PSEL=arc

    Yes. I was so surprised at how much bass I had to kill from the bass. It's almost like I have to make the bass sound like a guitar in the mix to keep it from muddying up everything. So far, it's been the hardest instrument to get right - and it's like the simplest instrument of all. (Sorry, Geddy Lee.) ;)

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • Yes. I was so surprised at how much bass I had to kill from the bass. It's almost like I have to make the bass sound like a guitar in the mix to keep it from muddying up everything. So far, it's been the hardest instrument to get right - and it's like the simplest instrument of all. (Sorry, Geddy Lee.) ;)

    Yep, nailing the bass has always been one of the hardest things. It's difficult enough in a rock context where the bass tends to be fairly focused. I can't imagine what a battle it must be for the rap and hip hop guys who have those big, wide, expansive "whoomp" bass lines.

    I am super close to finalizing the mix.

    Famous last words. :)

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • I tell ya, this is exhausting. :wacko: I am happy with the mix so far. I added some tiny add-on tracks today that really define the theme of the song. I am about to add vocals to it but I'm afraid I'm going to nuke it to death. I wish I had Paul Rodgers' voice, because that's what I hear in my head singing this. Chad Kroeger might work too, with a bit of Harry Connick Jr. for smoothness.


    My wife wants to sing backing on it but I told her no girls allowed at this time.


    BTW, this song is a merger of two opposing genres into one. I am hoping the audience gets it.

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • Like anything new, it's less fun while you're trying to get up to speed, but that happens quickly.


    Mixing is a completely separate thing from playing guitar or writing the song, and I enjoy it as its own pursuit. I tend to be a bit modal with creativity, so if I'm in a guitar playing headspace I won't feel like fooling with a computer, and when I'm in the zone sitting at the console I'd rather do that than pick up the guitar. When I'm doing either of those I absolutely don't feel like working on writing a book. They all get their own exclusive chunk of real estate and I tend to visit them individually.


    I actually enjoy mixing a helluva lot more than tracking. When I'm recording, it often feels like the take is never good enough and can be frustrating. Mixing is more of an incremental adventure, a constant tweaking to make the song sound better. Even back in the clumsy and noisy days of tape, analog, hiss, ground loops and twitchy cables, it's always been fun for me. It sounds like you're starting to dig it, too.


    My wife wants to sing backing on it but I told her no girls allowed at this time.

    If you're lucky enough to have a wife who enjoys your musical side, let alone one who wants to participate, you've hit the lottery. Let the girl sing! :)

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • Well, I think I am ready to finalize the mix - at least to let you hear it and get some opinions on what to do next. I only have a very tiny intro solo and did not think the song needed an ending solo for how I want it to be. Then I started playing with a Wah profile over the bridge during a break... and now I might want to put a solo in. :pinch:


    I had the song at 3m38s, which is kind of where I like it to be, but adding a solo would put it over 4 minutes and maybe longer, so I have a conundrum. The bridge can allow me to go really crazy on an extended solo - like forever. :D


    I remember reading somewhere that songs should be limited to 3.5 - 4.25 minutes or something like that. But I guess that was for "radio time"?

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • I think the play time thing is more about your intended audience. For some genres, people expect long songs and short ones might feel out of step with everyone else. In others, the quick three minute pop / radio rule is the norm.


    If you're crossing genres as you mentioned, perhaps there's math involved. :)

    Kemper remote -> Powered toaster -> Yamaha DXR-10

  • I think the play time thing is more about your intended audience. For some genres, people expect long songs and short ones might feel out of step with everyone else. In others, the quick three minute pop / radio rule is the norm.


    If you're crossing genres as you mentioned, perhaps there's math involved. :)

    I just ran over the song again and played it as is, and then I looped the bridge and played a solo - and it only makes the song better, so it has to get added in and the song extended. I was trying to just focus in on the most punctuated solo phrases I could come up with, but I am certain now the solo belongs there and the profile I am using will add to both genres.


    You guys got me literally sweating this thing out. I normally would have shut the studio down 2 hours ago. LOL.

    Larry Mar @ Lonegun Studios. Neither one famous yet.

  • I think the play time thing is more about your intended audience.

    I think the song length standards probably grew out of analog recording limitations. Vinyl singles could only hold so much music. Albums were also constrained by the ability to cut grooves in a 12” piece of plastic. Those limits became the de facto standard even though technological improvements meant they no longer had any relevance.


    Obviously radio airplay, audience attention limits and the need to squeeze in adverts also played a part in song length conventions. Mind you, the audience attention concept doesn't really stack up in the digital generation. Technology allows almost limitless song length, but audience attention spans seem to have shrunk to about 4 seconds if my kids and their pals are anything to go by?

  • ...


    Technology allows almost limitless song length, but audience attention spans seem to have shrunk to about 4 seconds if my kids and their pals are anything to go by?

    I have to agree if I happen to be browsing on the internet. Even when reading an online article, I will scroll quickly to the part that draws my attention the most. We live in a super fast "I want it now" world. Everyone needs to relax, but not me. 8o


    I'm getting ready to post this song soon, so y'all can make the recommendations of what I have to change or adjust.

    Larry Mar @ Lonegun Studios. Neither one famous yet.