Tips for better recording sessions and projects

  • After many false starts that often had me re-recording over and over again, I put together a list of things I think will benefit guys who are just starting out with their recordings (pros, kindly excuse how basic this is, but feel free to add).


    1) Make sure you set the clock and other settings right at the beginning: There's nothing worse than finding out after recording that you didn't set the Kemper to master and your interface to slave. This tip also extends to figuring out what settings you want to use across all the songs in a given project.

    For example, if you want to record for an audio CD, 44.1 kHz is fine, but if you want to sync audio with video (especially if you're not using high-end video editing software), set your audio rate to 48 kHz, which was made available in one of the recent updates.

    Also remember that latency goes down at higher sample rates (theoretically, if you can even detect it), but processing load and the space taken up by recordings go up. Some people say they can hear a difference between sample rates. Most say they can't.
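    To put rough numbers on that trade-off (a quick sketch; the 256-sample buffer size is just an assumed example):

```python
# One-way latency of a single audio buffer at different sample rates.
# Higher sample rates shrink the buffer's duration, hence lower latency.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Milliseconds of audio one buffer of the given size holds."""
    return buffer_samples / sample_rate_hz * 1000

for rate in (44_100, 48_000, 96_000):
    print(f"{rate} Hz: {buffer_latency_ms(256, rate):.2f} ms per 256-sample buffer")
```

    At the same 256-sample buffer, 96 kHz roughly halves the buffer latency compared to 44.1 kHz, which is where the "latency goes down at higher sample rates" idea comes from.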

    Make it a point to also change the settings in your DAW to see what works best. For example, in Cubase, do you want ASIO Guard? It adds latency, but prevents audio dropouts and glitches. Do you want to use multiple cores?

    The bottom line is to think it through before putting the proverbial pen to paper. Choosing the right settings is like choosing the kind of pen, the type of paper and where you are going to write.


    2) Direct monitoring: I wasted the last several months recording guitar for 10 songs and was wondering, "Why am I so sloppy while recording?"

    Turns out, I was recording using software monitoring. So basically, the signal from the Kemper goes into the interface, which sends it to the computer, which sends the signal back to the interface, which routes it to the speakers, which send the sound to your ears.

    On its own, this latency can be handled, right? Now imagine you are running a VST drum machine and a synth as well. On top of that, you are recording guitar once, then again, then maybe again and again. As your computer plays back more and more tracks, VSTs et al., the CPU load increases, necessitating a higher buffer, and as a consequence you experience higher latency.

    The end result: compensating (under or over; we aren't machines) for that latency. You'll see this if you record DI tracks. Stuff will line up sometimes, but most of the time there will be a shift here or there.

    The solution: don't use software monitoring; monitor through your interface's direct-monitoring path instead. I'm re-recording (sigh) a project and it is audibly tighter, and visually I can make out that I am playing each chord at the same time. It really helps, even on a computer with 32 gigs of RAM.

    With direct monitoring, even at higher buffer settings, you will hear your guitar without any delay, and at the same time the recording will be in perfect time.
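    A rough sketch of why software monitoring feels laggier than direct monitoring (the ~1.5 ms converter figure here is an assumption for illustration; real interfaces vary):

```python
# With software monitoring the signal crosses the buffer twice (input and
# output) plus A/D and D/A conversion; direct monitoring bypasses all of it.

def software_monitoring_latency_ms(buffer_samples: int,
                                   sample_rate_hz: int,
                                   converter_ms: float = 1.5) -> float:
    one_way_ms = buffer_samples / sample_rate_hz * 1000
    return 2 * one_way_ms + converter_ms

for buf in (64, 256, 1024):
    print(f"buffer {buf}: ~{software_monitoring_latency_ms(buf, 44_100):.1f} ms round trip")
```

    With direct monitoring the buffer setting no longer affects what you hear, so you can raise it as high as playback demands.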


    3) Listen to what you just played: In the interests of speed, some of us just power through track after track without giving it a good listen. It might take up some time, but after you finish recording a track, or even a section, make sure you listen to it again. This is especially important for those of you who might be recording each section bit by bit and then slicing things up and copy-pasting. Don't just listen to the segment on its own; listen to it in conjunction with the parts both before and after it, otherwise you might find that the transitions are so abrupt that cross-fading doesn't help. There's nothing like having packed away your guitars and then realising that there's a bum note or string noise somewhere and you can't remember what profile you were using. Heck, even the way you strum the guitar could change from day to day (I'm-happy, I'm-sad syndrome).


    4) Always record a DI track: Even if you are committing to a certain sound on a take, with the Kemper there is absolutely no reason not to keep a DI as backup. Why? Suppose that when all the elements of the track are in, you find that the two guitars don't sound as good together in the complete mix. With a DI track, you can simply reamp without needing to re-record a perfect take. Heck, you may even find a better profile a day or a month later, before you have finished mixing. The DI will come in handy. What's more, let's say you finished a project and released the album today. Ten years from now, you may decide you want the song professionally reamped and mastered at a big studio. All you have to do is send them the DIs and you are golden. This is one reason I always suggest getting an interface that has more than two inputs.


    5) Save early, save constantly: There have been times when I am nearing the end of a project and suddenly something goes wrong. Often, this sends hours of work down the drain, along with your inspiration. There's a solution: save early in the project and save constantly. Most DAWs have an auto-save feature. Enable it and set it to a low value. You can also specify how many backup versions of the project are kept, so that you can go back to a version that was saved, say, 10 minutes ago if you need to. It might seem unnecessary, but you'll find that computer crashes happen at the damnedest times.


    6) Don't record too hot: When recording guitars, it's important that you don't send too hot a signal to the DAW. What happens when you do is that the signal gets squashed and there's no headroom for it to breathe. When this happens across multiple tracks, the recording gets tinnier and tinnier and the gain gets muddier. You ideally want to track at about -12 dB or even -18 dB. Don't fall into the trap of thinking louder is better; what you should really aim for is a dynamic recording. This will also make mixing much, much easier, as you aren't constantly trying to stop the master meter from clipping. You can always bring the volume up later, in the mastering stages.
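    If you want to check where a recorded peak sits, the conversion from a linear sample value to dBFS is simple (a minimal sketch; 1.0 is taken as digital full scale):

```python
import math

def peak_dbfs(peak_linear: float) -> float:
    """Peak level in dB relative to full scale (1.0 == 0 dBFS)."""
    return 20 * math.log10(peak_linear)

print(f"{peak_dbfs(0.25):.1f} dBFS")   # ~-12 dBFS: a comfortable tracking level
print(f"{peak_dbfs(0.125):.1f} dBFS")  # ~-18 dBFS: even more headroom
```

    A peak of about a quarter of full scale already gives you the -12 dB headroom mentioned above.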


    7) Make notes: You may or may not have this feature in your DAW, but you should definitely keep a notebook or a file on your computer that has details of the settings used in a specific project, such as the profiles used, tunings, even string gauges and guitars used. Maybe even take a few photos and keep them in the folder where the audio files are stored. This will save you grief many years later, or even a few weeks later when you want to touch something up or revisit a project.


    8) Make a backup: At least twice I have been near the culmination of the Mechanevil project only to have my hard disk crash or some strange bug crop up that prevented me from opening a project. These have been devastating setbacks that can damage your morale as well as that of your bandmates. It's often difficult to get back into the same frame of mind and keep things fun: after all, who wants to keep playing their songs again and again unless they're on stage? Making a backup is therefore one of the most important things you can do as a musician. Do it after you record every song, onto an external hard disk or an Apple Time Capsule. DVDs are another good option. Don't get caught with your pants down.


    9) Have a deadline and do your best to keep to it: There's nothing worse than not feeling it when you are recording. In that sense, I'm blessed that I can record at my own pace without having to watch the studio clock and do a rush job. At the same time, procrastination is not your friend when it comes to recording your music. You may record for a while, then think, "I could improve this, I could improve that", and in the end you will just be second-guessing yourself. You may even lose the frame of mind you were in when you started recording (and trust me, this is the hardest thing to get back), resulting in "not feeling it". Days turn to weeks, turn to months, turn to years. Seriously, don't get caught in that trap. Set a realistic time frame. Nobody's saying you have to do it all in one week or even 10. But set a target and work towards it. Doing so will work wonders for your motivation and will ensure that, at the end of the day, you have something to show for the time spent hunched over your guitar in front of a computer while everybody else was partying like there was no tomorrow. Fun! :S


    Feel free to share your own recording tips, guys. :thumbup:

  • Record in analogue instead of via spdif.

    Then Kemper clocking is a non-issue.


    Record everything, not just guitars, with ample headroom, and at 32-bit float if you can.

    Use the highest sample rate practical.

    I do almost everything at 96k.


    Monitor with as good a rough balance as you can. Some people call this 'always be mixing', but it's important to have perspective on the context as you overdub; don't just throw stuff on and hope to sort it out later.


    It’s far better to commit to a sound and make your subsequent choices BASED on those you’ve already made. Otherwise you can endlessly chase your tail.


    “Perfection” is both unattainable by humans and highly overrated. Don’t “fix” everything. PERFORM.

  • I really don't think 32-bit floating point is necessary or practically discernible for individual tracks in the real world (mixing in DAWs offers something like 1,500 dB of dynamic range even with 24-bit tracks, which is stupidly adequate already), but that's just MHO and I'm not going to argue with Will.
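    For the curious, those dynamic-range figures can be checked back-of-envelope (illustration only; the float limits used are the standard IEEE 754 single-precision extremes):

```python
import math

# 24-bit fixed point: roughly 6 dB of dynamic range per bit.
fixed_range_db = 20 * math.log10(2 ** 24)

# 32-bit float: ratio of the largest to smallest normal magnitudes.
float_range_db = 20 * math.log10(3.4e38 / 1.18e-38)

print(f"24-bit fixed:  ~{fixed_range_db:.0f} dB")
print(f"32-bit float: ~{float_range_db:.0f} dB")
```

    Around 144 dB already exceeds what any converter or listening environment can resolve, which is why 24-bit tracks are "stupidly adequate" for individual recordings.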

    As your computer plays back more and more tracks, the latency imperceptibly (or perhaps perceptibly) increases.

    I disagree, AJ. The delay between when you hit play / record and hear the programme material will increase, and that's 'cause it takes ever-longer to fill the playback buffers as you add more tracks. This can lead some to conclude that overall latency has increased, but in fact the record-monitoring latency, be it for "real" instruments or virtual ones, remains fixed at the number of samples one sets one's project to use (buffer setting).


    To add to point "1", once those setup settings have been arrived at, save the empty project as a template.


    On that subject, you might want to add, as a separate point, that templates are your friends. Create one template for each basic scenario, be it genre-driven, track-count-determined, defined by the mix of virtual vs. real instrument tracks, or whatever, so one can dive straight into recording or composing without having to scratch one's head and shift focus from one side of the brain to the other. A real creative-flow killer, that one is.


    Great effort, AJ. Very thoughtful of you mate. Great points overall I reckon, and it will be appreciated by many here IMHO. Thank you bud!

  • I disagree, AJ. The delay between when you hit play / record and hear the programme material will increase, and that's 'cause it takes ever-longer to fill the playback buffers as you add more tracks. This can lead some to conclude that overall latency has increased, but in fact the record-monitoring latency, be it for "real" instruments or virtual ones, remains fixed at the number of samples one sets one's project to use (buffer setting).


    There's definitely added latency due to more tracks being added, Nicky. Look at it another way.


    Try creating a project with a huge number of tracks and then compare to a project which has fewer tracks.


    At some point, you'll have to bump up that buffer from 32 or 64 or whatever. More load = greater buffer required.


    And as you increase the buffer, the latency also increases.


    Also, the esoteric example aside, try this with software monitoring, not direct monitoring. I guarantee the latency gets worse with each successive layer.

  • We'll have to agree to disagree, AJ.


    IMHO, more "track load" = greater CPU load, not "greater buffer required".


    The DAW automatically assigns playback buffers as you add tracks. That chews up more RAM (only a little, mind you), and the actual playback obviously demands a greater CPU hit, but the buffer size remains constant even 'though it's being applied to ever-more tracks. Only once one starts to push the CPU too-hard, save for muting / rendering tracks to conserve cycles, should one consider increasing project buffer settings.


    Maybe it's different for you in Cubase. For me, playback-buffer settings, usually defined in s/ms and selectable from within an audio-setup menu, literally determine how much of each track is loaded into RAM as a buffer to ensure smooth playback.


    Project-buffer settings, OTOH, are what constrain the speed at which signal can pass through the DAW, IOW, monitoring latency.


    EDIT:

    The dudes using 2000-track templates in DP (scorers and so on) don't have to adjust buffers regardless of the number of tracks they're running... unless they're hitting the CPU too-hard.

  • Pretty sure those 2,000-track presets have external processors running, rather than stuff on the computer, for one.


    Two, if I have project X and load it with 2,000 tracks and a buffer of 32, will it start to cough and spit?


    If the same project had 20 tracks, would it cough and spit at the same buffer?


    If it does, would I increase my buffer setting?

    Why would I do that? So that more samples are loaded into my RAM before playback of course.


    I’m running an i7 with 32 gigs of RAM and an off-board graphics card. While I appreciate the theory of how it works, there’s a rationale for choosing a buffer setting, and it has to do with balancing recording latency against ensuring playback doesn’t screw up.


    Like I said, just take the exaggerated example of 2,000 tracks you provided and assign any random buffer, like 32, to it. Somewhere or other, there’s a trade-off to ensure that my tracks play back without stuttering.


    So obviously, the CPU load cannot be dissociated from recording latency.

  • Mate, you're confusing playback buffers with processing ones, which is why I highlighted those terms previously.


    Here's the simplest way I can explain it:


    You define the number of samples as a fixed-time window during which the DAW has to do its thing. It cannot, nor will it, alter that fixed time; you've set the limit. The more tracks you add, and especially VI's, as you'd know, the greater the demand on the CPU becomes to perform the necessary calculations within the time constraint of, let's say, 256 samples (typical working-buffer setting).


    It will do its utmost to fulfil this requirement, to the point of causing clicks and pops in the monitored audio and eventually probably freezing. At no point will it attempt to increase the buffer size. So, as you pointed out, you increase the buffer size yourself, thus allowing it more room to breathe. It's the CPU hit alone that drives the need to increase this size. Track counts are only relevant in that they contribute to this CPU load, but always, the buffer setting determines the monitoring delay. Always.


    Playback buffers are a different story, and I already explained them adequately I feel.


    EDIT:

    All your reasoning in your last post is correct IMHO, BTW. The only "error" is in assuming that the latency increases as tracks are added; it doesn't. This only happens when you manually increase the buffer size. It simply cannot happen any other way. Remember, as I said initially, some folks perceive an increase due to increased DAW-transport sluggishness, and this has nothing to do with the project / audio-driver buffer and everything to do with having to fill more track buffers prior to responding to transport commands.
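    The two positions in this exchange can actually be reconciled in a toy model (every number below is made up purely for illustration): the buffer fixes a per-callback processing deadline, tracks add CPU work per callback, and latency only changes when you raise the buffer.

```python
def deadline_ms(buffer_samples: int, sample_rate_hz: int = 44_100) -> float:
    """The fixed time window the CPU has to process each audio callback."""
    return buffer_samples / sample_rate_hz * 1000

def smallest_workable_buffer(n_tracks: int, cost_per_track_ms: float = 0.4):
    """Smallest standard buffer whose deadline exceeds the per-callback work."""
    work_ms = n_tracks * cost_per_track_ms
    for buf in (64, 128, 256, 512, 1024, 2048):
        if deadline_ms(buf) > work_ms:
            return buf
    return None  # CPU can't keep up even at the largest buffer

for tracks in (4, 12, 40):
    buf = smallest_workable_buffer(tracks)
    print(f"{tracks} tracks -> buffer {buf} ({deadline_ms(buf):.1f} ms monitoring delay)")
```

    In this sketch both posters are right: more tracks eventually force a bigger buffer, but at any given buffer setting the monitoring latency stays fixed regardless of track count.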

  • Playback buffers are a different story, and I already explained them adequately I feel.

    For the benefit of those who might be interested:


    I feel that playback buffers almost certainly have their origins in the need, back in the day, to provide a cushion for tiny gaps when streaming playback from disk drives, and to make navigating projects feel more responsive. Disks weren't able to provide "instant" audio when called upon to do so. Not only were their abilities to locate data less sophisticated, but this slower response, coupled with the bandwidth demands of streaming many tracks at once (all from different locations on the drive, obviously), meant that without buffers pre-filled with the anticipated data, playback would've been slow to respond and increasingly choppy and unreliable as more tracks were added to projects and their data became fragmented through edits and whatnot.


    When VI-based sample playback entered the fray, the very same limitations demanded playback buffers for them, except in this case the requirement for instant-audio responses (when you hit a key you expect the instrument to play instantly) was critical and not just a feature for a more-snappy transport control.


    In the case of samplers, the attack portions of the audio files on disk, the length of which can usually be set in the preferences area of a sampler plugin, are loaded into RAM so they can be instantly heard on-demand whilst the hard disk locates its read head/s and begins streaming the rest of the sample/s concerned.


    As stated earlier, in the case of DAW's, it's simply a means to prepare tracks' data for an instant response when you hit the play button. If you jump around the timeline quickly-enough, there won't be sufficient time to fill these buffers and the transport's response time will suffer. The advent of SSD's should have in-theory ameliorated this phenomenon to an extent, but the principle is sound(!).


  • Ah, I understand what you’re saying, Nicky. I agree 100%.

    To explain my point better, take the example of a drum VST or a synthesiser VST. In order to hear the sounds that you are playing, you will have to use software monitoring.

    I have firsthand experience of how that higher latency *involved in software monitoring* really destroys the performance. While it may be constant, we often overcompensate and undercompensate, as we are not machines with fixed timing.

    With guitar, it isn’t very different, especially as most of us can’t use the 64-sample buffer option. After a few tracks, the system almost inevitably starts to choke and we have to raise the buffer, and with it the latency.

    With direct monitoring, I can set my buffer to whatever ensures proper playback, even 1024 samples. When I record, there is no latency at all while I am monitoring, and the recordings are on time.

    With software monitoring, there is the whole added latency of round tripping, as well as the latency required for the buffer setting.

    Consequently, coming to my point about the latency being perceptible (or imperceptible): in my experience, my first DI track was played to one perception of latency, the second to another, and the third and fourth to something else.

    It isn’t something CPU load contributes to, until it becomes a question of how much CPU headroom you have and whether you need to raise the buffer to ensure proper playback.

    I’ll edit the OP to reflect that.

  • 100% correct, AJ! Phew! I was wondering if I'd exhausted all the obvious ways of explaining it or not. :D


    You might want to add the suggestion of a small mixer for monitoring. I was going to say something about that 'cause whilst you said, "Don't use software monitoring. Don't monitor your interface outputs and direct monitor", you didn't provide a tangible description of any alternatives for the typical noob.


    I've had a monitoring desk (Mackie Onyx 1620) since, like, 2002-4, and although I've still not been able to use it in-practice, I'm sure I'll be able to count on it to provide latency-free monitoring regardless of the buffer settings in DP. Can't wait. Hopefully sitting there unused, covered in pillow cases like almost all my gear, will see it operate as-new when the time comes, and it's a-comin' brah!


    EDIT: You posted 1 minute after me, so you might've missed my little playback-buffer explanation for those who didn't understand it. I didn't go into much detail earlier as it wasn't relevant to the processing-buffer issue. Take a gander if you like.


  • Already have, Nicky! Thanks for the primer!

  • Ironic that you should say that, AJ, 'cause the term used to describe the filling of playback buffers for tracks is... wait for it... priming.


    My pleasure, as always, bud. <3

  • Great stuff, AJ, as well as the additional clarifications from Nicky and others. I promise you I've made each and every one of those mistakes over the years, usually at the worst possible time.


    As Nicky mentioned, having a mixer / desk is invaluable. The setup I currently have is more than most need (for that matter it's probably more than I need), but the upgrade I've enjoyed the most in recent years is the Yamaha TF5 32-channel mixer, which also comes in 24- and 16-channel flavors. Yamaha is among the big dogs for concert mixers and their preamps are always good quality. The TF5's pres are simply clean and don't have a particular personality; when I want personality, that's what outboard pres are for. But here's the thing I enjoy the most about it - it's a 32-channel USB audio interface as well as a mixer.


    This means whether I'm tracking voice, electric guitar, acoustic, drums, etc. it all goes into the mixer. The mixer shows up as 32 ins and outs in my DAW. I point the USB out from my DAW main mix to two channels of the mixer. There are also ample aux busses, etc. so it's very easy to get a radio quality mix in your ears / headphones as you track, all with zero latency on your instrument. As AJ pointed out, this is really a very big deal because it absolutely makes a difference in feel. Frankly, I'm just not talented enough to ignore latency when I track without it screwing with me.


    These days there's a wealth of digital mixers in all shapes and sizes that have this same architecture, coupling analog mixing with a USB audio interface, to fit many budgets. Even if you never track anything more than guitar and vocals, a small mixer, especially if it's also an audio interface, is a huge workflow enhancement in a wide variety of ways.

  • I also prefer the simplicity of recording in analogue and I'm very much a fan of always be mixing. When I'm tracking I don't want to have the green screen experience, where I have to imagine what it will be fitting into. That's why I use EZ Drummer for songwriting, because I simply can't find the groove with just a click track.


    So, I like to get a solid mix going so that when I'm tracking, it feels like I'm playing the song rather than adding another widget. As you mentioned, it's all about the performance, so anything I can do to keep myself in the moment makes for a better end result.

    with careful gain staging and recording, 24-bit is indeed 'just as good' as 32


    what 32 gets you is mostly a kind of clipping safety factor

    Exactly, Will.


    I felt bad "contradicting" what you said, so I didn't say any more, but what I would have liked to add is that IMHO the place for 32-bit float is the final mix being sent to a professional mastering house / engineer. The additional resolution can apparently come in handy for those guys as they dabble and tweak in the rarefied air of the uppermost few dB before clipping. Not mandatory, but possibly helpful.


    At any rate, nothing for a noob to worry about, and let's face it, many a great recording has been released that never saw 32bit resolution at any stage in its development. One for the "sticklers" / OCD types / audiophiles / creative obsessives... folks like you and me and a few others here. :D


    Full disclosure:

    I'll bet that when I finally get to that stage in a project again, I'll be so blown away by the overall quality I'll be hearing, given that my last effort was on an ATARI / ADAT / Mackie CR-1604 with ROMplers all going into a portable DAT machine, and given how far technology's come in the intervening 25 years, that I probably won't give two hoots and will settle for a 24-bit final mix and master the ruddy thing myself at that bit depth. 8o


    Chris, totally with you on everything you said, brother. I've always had the option of buying a digital slot-in module for the Mackie Onyx 1620, but didn't want to be tied to its spec, drivers and whatnot; every purchase is made with the long haul in mind. So, I run it all-analogue and use dedicated interfaces separately. One can either use the interfaces' internal near-zero-latency routing to send copies of live inputs to the desk for monitoring, or hit the desk directly and use the direct outs via a DB25 connector snake to an interface. The former route induces a few ms of latency due to the A/D and D/A conversion that must take place in the interface, and the latter introduces a tiny amount of colouration because the signal has to travel through the desk's mic pre's, even 'though no gain, and therefore no obvious flavour, is imparted. This is the big dilemma for me, especially when it comes to the Slate mic sim and the Kemper. Still not sure quite which I/O's will go direct-to-interface and which via the desk, but I'm fully aware that it's a first-world headache, so I refuse to sweat it and figure I'll just go with the flow when the time comes.


    Hopefully our musings are helping noobs get a handle on this stuff. :wacko:

  • ADAT / Mackie CR-1604 and ROMPlers all going into a portable DAT machine

    We've clearly chewed some of the same turf. In a previous lifetime my world was guitars / JV-2080 -> 24 track ADAT -> Mackie 2408 -> Sony DAT. The dust cover for that mixer still serves today on the Yamaha, as it did on the D8B before it. Patchbays and outboard effects, twitchy gear (Alesis compressors!), ground loops, rewind buttons... I don't miss that stuff even a little.


    And to touch on AJ's comment about always including a DI track, at this point that's the only thing I'm recording. As long as the profile I'm playing through for monitoring has the gain / feel of what I'm going for I can just focus on the performance. When I'm done, I can reamp, drink coffee and scroll through Rig Manager until I find the exact tone I'm looking for, then just press record to render the actual audio. Reamping with this thing is a dream. I press Input, turn one knob and I'm done.

  • Same turf indeed, brother!


    100% how I see the DI thing too. I'll probably end up recording the processed signal along with the DI one just in case it works, but fully expecting it not to. :D

  • We've clearly chewed some of the same turf. In a previous lifetime my world was guitars / JV-2080 -> 24 track ADAT -> Mackie 2408 -> Sony DAT. The dust cover for that mixer still serves today on the Yamaha, as it did on the D8B before it. Patchbays and outboard effects, twitchy gear (Alesis compressors!), ground loops, rewind buttons... I don't miss that stuff even a little.


    And to touch on AJ's comment about always including a DI track, at this point that's the only thing I'm recording. As long as the profile I'm playing through for monitoring has the gain / feel of what I'm going for I can just focus on the performance. When I'm done, I can reamp, drink coffee and scroll through Rig Manager until I find the exact tone I'm looking for, then just press record to render the actual audio. Reamping with this thing is a dream. I press Input, turn one knob and I'm done.


    Why I would always suggest recording the stack as well as the DI is simple: sometimes you can't tell from a DI track whether a distorted section has been recorded the right way. In that sense, you are flying blind during the recording itself.


    Better is to record the stack or master mono/stereo as well. That way, when you listen back, you will have an idea of whether you played a part accurately. Sometimes a DI track won't be suitable for that, for example during fast playing or palm muting or complex melodies.

  • In my logic, recording DIs (with the exception of bass guitar) is for when you don’t know where you want to go with a track. If you have a clear idea of where you’re headed, you don’t need DIs, and that clarity is something that comes with experience. For me, I don’t have the time to spend trying out different guitar sounds after the fact. I’d rather find the sound, get the performance with that sound and move on, all with the “mixing as you go” mindset. When I first got the KPA, I dabbled with recording DIs simultaneously, but found that I never used them, since I’d spent the time to find a suitable sound in the first place, which I always do anyway; otherwise I’m not inspired to play.