Preparing your audio files for working with a mix engineer / producer

Congratulations! You’ve decided that you want to take your music to the next level so you’ve hired a mixing engineer or producer. You’ve discussed how you want your track to sound, negotiated fees and worked out what particular brand of fairy dust the engineer is going to use… all you’ve got to do now is send over the audio so they can work their magic, right?

Almost. 

I decided to write this post after reflecting on some of the experiences I’ve had working with audio brought to me by my clients. Sometimes it’s audio they’ve recorded themselves – they may even have done a bit of editing or mixing on it too – and sometimes it’s come from another studio where they did the tracking. Either way, there is usually some work involved in getting it ready to mix.

Now I get it, as an independent artist you’re often on a tight budget and need to make sure you get the most out of your sessions. So let me help you with that right now: here are a few things you can do ahead of the session to ensure that whoever you hire doesn’t spend half the session tidying it up instead of mixing it for you.

Don’t get me wrong though: if you really don’t want to tackle any of these and are happy to pay someone to do it for you, that’s fine – I for one will happily take on these prep tasks. But if you’d rather the person you choose to work with could get down to doing their mix thing as quickly and easily as possible, then follow these steps:

Tracks or Session files?

Find out whether the mix engineer uses the same DAW as you and can therefore use your session files directly. Most engineers will support a range of DAWs: for instance, I have Pro Tools, Logic, Ableton and Reaper. If they don’t, you will have to export audio stems for each track in your project. In many ways this is easier for the mix engineer, as they don’t need to decipher someone else’s project setup, but most of the following observations still apply before you render.

Rendering stems in Reaper

If you are exporting individual tracks, make sure each audio file is clearly named. If you have used any processing, export a dry (unprocessed) track as well as one with all the processing on. Check with the engineer on their preferred format – usually 24-bit WAV at the sample rate of the project. Don’t automatically normalise the stems, but watch for digital clipping and, if necessary, reduce the gain on the track. Logic has a handy feature which normalises the track only if clipping is detected during bounce-down.
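To see what that “only touch the gain if it clips” behaviour looks like numerically, here’s a hypothetical sketch in Python – the function name and ceiling value are my own choices, not anything from a specific DAW, and it assumes floating-point samples in the -1.0 to 1.0 range:

```python
# Hypothetical sketch: lower a stem's gain only if its peak would clip,
# rather than blindly normalising everything. Assumes float samples
# in the range -1.0..1.0, as most DAWs use internally.
import numpy as np

def tame_clipping(samples: np.ndarray, ceiling: float = 0.99) -> np.ndarray:
    """Scale the whole stem down only when its peak exceeds the ceiling."""
    peak = np.max(np.abs(samples))
    if peak <= ceiling:
        return samples  # no clipping, so leave the level alone
    return samples * (ceiling / peak)

clipping_stem = np.array([0.2, -1.3, 0.8])  # peaks beyond full scale
safe_stem = tame_clipping(clipping_stem)
print(np.max(np.abs(safe_stem)))  # peak now sits at the ceiling
```

The point is that a stem which already peaks below full scale is passed through untouched, preserving the balance between your stems.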

Fade / x-fade your edits

Wherever you have edited an audio clip, fade in the start and fade out the end of the clip, and wherever you have edited two clips together, make sure there is a crossfade across the join. I can’t emphasise this enough: I’ve had audio stems delivered with clicks and pops embedded in them because the edits hadn’t been crossfaded. It’s simple to do at the time, and not doing it eats up way more time and effort further down the line – time that could be spent mixing.

Using Fades on the boundaries of audio clips in Pro Tools
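If you’re curious what a fade or crossfade actually does to the samples, here’s a rough sketch in Python. The function names and fade lengths are arbitrary, and real DAWs offer various fade curves – this uses simple linear ramps:

```python
# Sketch: linear fades at clip boundaries - the maths behind what a DAW's
# fade and crossfade tools do to avoid clicks at edit points.
import numpy as np

def fade_edges(clip: np.ndarray, fade_len: int) -> np.ndarray:
    """Fade in the start and fade out the end of a clip."""
    out = clip.astype(float).copy()
    ramp = np.linspace(0.0, 1.0, fade_len)
    out[:fade_len] *= ramp          # fade-in ramps up from silence
    out[-fade_len:] *= ramp[::-1]   # fade-out ramps back down to silence
    return out

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Join two clips with a linear crossfade over `overlap` samples."""
    ramp = np.linspace(0.0, 1.0, overlap)
    joined = a[-overlap:] * (1.0 - ramp) + b[:overlap] * ramp
    return np.concatenate([a[:-overlap], joined, b[overlap:]])
```

Because the two gain ramps in the crossfade always sum to one across the overlap, there’s no sudden jump in level at the join – which is exactly where clicks and pops come from on un-faded edits.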

Name tracks meaningfully

Your mixer won’t have worked on this project with you for the last three years, so they won’t know the channel labelled ‘Bob’s wanger’ is actually a guitar part. So name the channels in a way that makes sense, and if a channel is labelled Peruvian nose flute then make sure there are only Peruvian nose flute parts on it. Obvious maybe, but it can and does happen, for instance when you’re recording through a single input channel and adding different instruments without creating new tracks.

Clear and meaningful track naming in Logic Pro X

Group your tracks by instrument

Keep the same or similar instruments together on adjacent tracks: all the drum kit tracks together, or multiple layers of guitar, for instance. You don’t necessarily need to group them through busses or set up mix/edit groups, as this is something the mix engineer may or may not do depending on their workflow.

Track organisation in Ableton

Comps

Assuming you know which takes of each instrument or vocal are best, assemble them into a single comp and consolidate it, making sure you check your fades / crossfades too. You can keep the old takes on a separate track in case you need to go back in there for something later on.

Comping with playlists in Pro Tools

Effects

Generally, get rid of any EQ / compression / reverb added on the tracks, unless there are specific sounds you want to keep or illustrate to the mix engineer (for instance, that gnarly delay you love or the way that filter sweep moves). Also remember that the engineer may not have the same plug-ins as you, so if there’s something you can’t live without, freeze / commit / bounce it down to a new track so it can be incorporated in the mix. This way the engineer has both the unprocessed audio and the effected sounds to play with.

Unless it’s an integral part of the sound, remove all unnecessary plugins

Automation

Unless absolutely necessary to demonstrate a particular effect or sound, remove all automation, especially channel volume / pan / mute automation as this can really confuse things. It’s usually not immediately obvious if there’s active automation on any parameters unless you specifically view the automation lanes. Make sure the automation settings are not in write mode either.

If you do need to automate volume (e.g. to balance gain between sections), ideally use clip gain (see the next point), or place a gain / trim plug-in in the channel and automate that.

Automation in Logic Pro X

Balance clip gains

Most DAWs support clip gain – use it to balance the levels of the clips on each track, so that the channel faders don’t need to be pushed to extremes to hear anything, and all the parts on a track sit at roughly the right volume relative to each other.
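As a rough illustration of what balancing clip gain means numerically, here’s a sketch that computes the gain offset (in dB) needed to bring each clip to a common target peak. The function name and the 0.5 target are arbitrary choices of mine:

```python
# Sketch: how much clip gain (in dB) each clip needs to reach a common
# target peak level. Assumes float samples in the -1.0..1.0 range.
import numpy as np

def clip_gain_db(clip: np.ndarray, target_peak: float = 0.5) -> float:
    """Gain in dB that brings the clip's peak to `target_peak`."""
    peak = np.max(np.abs(clip))
    return float(20.0 * np.log10(target_peak / peak))

quiet_clip = np.array([0.05, -0.04, 0.03])
loud_clip = np.array([0.9, -0.95, 0.7])
print(clip_gain_db(quiet_clip))  # positive: this clip needs a boost
print(clip_gain_db(loud_clip))   # negative: this clip needs a cut
```

Applying that offset (multiplying the samples by `10 ** (gain_db / 20)`) brings every clip to the same peak, so the channel fader can stay near unity.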


Clean it up

Remove any unused channels or muted / unused audio from the project, unless it’s stuff you think you may need later in the mixing process (e.g. that banshee wail you’re not quite sure about, or those 50 extra guitar / synth layers that you might want in the mix but can’t quite decide on yet). There’s no point in transferring 120GB of data when 110GB of that is out-takes that aren’t going to be used. This especially applies if you are transferring your files over the internet.

Use the ‘strip silence’, ‘compact’ and ‘save a copy in’ features of your DAW if it has them to remove unnecessary audio and files – most DAWs have good file management features these days. Committing or consolidating tracks is a good option here, as it creates a single file per track containing only the audio you want to hear.


Use Comments / Labels / Notes

Help the mix engineer navigate the song by using labels / comments / notes. This is especially important if you are not going to be attending the mix session – any accompanying notes will always be useful.

Remove any tracking-specific routing

Sometimes the audio will have been recorded in a studio with a console or other hardware for monitoring, or you may have a quirky recording set-up at home, and the project will have its I/O routing set up for that specific system. Remove those routings and set all the tracks to play back to a single master bus – this ensures that everything that has been recorded will be heard.

Fade / X-Fade your edits

Did I mention this? Well let me say it again, Fade / Cross-Fade your edits!

Do all this, and your mixer will be able to spend their time doing what they do best – and you can get the most out of your budget.  

Capturing the Sound of a Space

I’ve been interested in the use of convolution reverbs for a while, and was particularly inspired listening to this interview with Nikolay Georgiev by Lij Shaw for his brilliant Recording Studio Rockstars podcast series. During the interview Nikolay explains how he has fine-tuned his process of capturing the acoustics of a space using a mobile recording rig and various means of generating an impulse (including sine sweeps, a starter pistol and bursting inflated condoms!).

The basic concept is simple enough: excite the space with a burst of acoustic energy and record the resulting response. The recorded audio is an acoustic signature of that space, which can be applied to any other sound through the process of convolution (literally ‘folding’ the sounds together).

There are two ways of performing this process. The first, which I will explore in this blog post, uses a popping balloon (or other short, loud burst of sound) to approximate an impulse. This is the easiest method, as the recorded file can be used directly in the convolution plug-in. The other method involves recording a sine sweep played back through a speaker into the space; the resulting recording then needs to be de-convolved to create the impulse response. Although both methods are widely used, the sine sweep is considered better, and there are some very good reasons why, but in practical terms you can still achieve great-sounding results with a bursting balloon, even if it doesn’t offer the most accurate representation of the space.
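The convolution itself involves surprisingly little maths. Here’s a minimal sketch in Python with numpy – real convolution plug-ins use far more efficient FFT-based, partitioned algorithms, but the underlying operation is just this (the function name and wet/dry handling are my own):

```python
# Sketch: apply a recorded impulse response to a dry signal by direct
# convolution, with a simple wet/dry blend. Real plug-ins do the same
# thing with much faster FFT-based algorithms.
import numpy as np

def convolve_reverb(dry: np.ndarray, ir: np.ndarray, mix: float = 1.0) -> np.ndarray:
    """Return the dry signal 'folded' with the IR; mix=1.0 is fully wet."""
    wet = np.convolve(dry, ir)                  # every dry sample triggers the IR
    dry_padded = np.pad(dry, (0, len(ir) - 1))  # match the wet signal's length
    return (1.0 - mix) * dry_padded + mix * wet

impulse = np.array([1.0, 0.0, 0.0])    # a single click as the dry signal
room = np.array([0.8, 0.4, 0.2, 0.1])  # a toy 'impulse response'
print(convolve_reverb(impulse, room))  # the click plays back the whole IR
```

Feeding a single click through the process reproduces the IR itself, which is exactly why a recorded pop can stand in for the room: every sample of your dry audio triggers a scaled copy of that response.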

It was my son’s 10th birthday the other day, and whilst clearing up the aftermath I found myself with a whole load of inflated balloons that needed disposing of. Perfect – I’d been meaning to try recording some impulse responses, and this would be a great way to give it a go. I thus equipped myself with the following items:

  • Inflated party balloons (the round ones, not the sausage type)
  • A pin
  • A Tascam DR-70D

The Tascam is a handy portable recorder that can capture up to four simultaneous tracks in uncompressed, high-quality format, with the added bonus of having two built-in small-diaphragm condenser mics. You could equally use a computer soundcard and one or two mics.

One of the spaces I was going to capture was a small stairway that runs along the back wall of our house. I have previously had some good results using the corridor for re-amping drums, placing a mic there and playing back sampled drums into the space to add ambience.

The lower part of the stairway has a sloping ceiling which gives a bit of a flutter echo.


The upper stairway has a dense, bassy sound to it.


I also wanted to try capturing the bathroom downstairs: it’s about 2m x 2m, with a completely tiled floor and walls and a textured ceiling, and is quite reverberant.


The recorder was mounted on a mini-tripod and sat about 6 inches off the floor for the stairway captures, and on the window ledge for the tiled bathroom. One pitfall I encountered was that the initial pop of the balloon is very loud compared with the resulting echoes, and can cause clipping in the recorder. It took a few goes to get the levels just right, but once they were set the actual process was very simple: hit record, hold the balloon up and prick it with a pin. I didn’t experiment a great deal with the effects of location, but I generally had the balloon above the recorder when I burst it.

Post-processing was very simple. I trimmed the impulses so that they started just before the initial transient, then faded them out shortly afterwards. I used Audacity, but any DAW or editor will do the job. Once edited, just export them as WAV or AIFF files, ready to load into the convolution plug-in of your choice.
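The same trim-and-fade edit can be sketched in code. This is a hypothetical helper, not what Audacity does internally – the threshold, pre-roll and fade length are arbitrary values I’ve picked, and a real edit would be done by ear:

```python
# Sketch: trim a recorded balloon pop so the IR starts just before the
# initial transient, then fade the tail out. Threshold and lengths are
# arbitrary illustrative values.
import numpy as np

def trim_ir(recording: np.ndarray, threshold: float = 0.1,
            pre_roll: int = 8, fade_len: int = 16) -> np.ndarray:
    onset = int(np.argmax(np.abs(recording) > threshold))  # first loud sample
    start = max(0, onset - pre_roll)                       # keep a little pre-roll
    ir = recording[start:].astype(float).copy()
    ir[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)      # fade out the tail
    return ir

# A toy 'recording': leading silence, then a pop with a decaying tail.
take = np.concatenate([np.zeros(40), np.linspace(1.0, 0.0, 60)])
ir = trim_ir(take)
print(len(take), len(ir))  # the leading silence has been trimmed away
```

Trimming the dead air matters because any silence before the transient becomes pre-delay when the IR is used, and an abrupt, un-faded tail can add a click to every note you play through the reverb.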


I used the Convolution Reverb Pro device in Ableton Live Suite (a Max for Live device). I like this plug-in because you can drag and drop the IR file straight from the Finder onto the device, but it’s just as easy with other convolution plug-ins such as Space Designer in Logic Pro or Waves IR-1.


Once loaded into the convolution plugin, simply pass audio through it and hear how it sounds.

To demonstrate I’ll use a simple drum loop, here it is dry:

Now 100% wet with the ‘Stairway 1’ IR:

I like the nice low-end thump to that particular reverb.

With 100% wet ‘Stairway 2’ IR:

This one is a bit more present and in your face.

With 100% wet ‘Bathroom 1’ IR:

This one has a splashy, hard character.

Notice how the character of the space is imparted onto the loop. By adjusting the wet/dry blend you can easily dial in as much or as little of the ambience as you like.

Another thing you can do is mess with the actual impulse response to change the character of the reverb. For example, here is the ‘Bathroom 1’ IR where I’ve altered the envelope of the sound using volume curves in Audacity:

With 100% wet ‘Bathroom 1 Processed’ IR:


It gives it a more non-linear type of sound.

You can also apply other processing to the IRs to create more complex and interesting effects. Here is an example where the same IR has been passed through a phaser, then an envelope filter, and finally high-passed:

Although there are lots of ways you can refine the process, and further subtleties I haven’t explored here, I was surprised how easy it was to capture the reverbs and use them pretty much straight away. I’m looking forward to getting out and about and sampling some more interesting spaces – there’s an old railway bridge near where I live with a cool echo that I’d like to capture. So if you see a bloke with a balloon hanging around tunnel entrances, it’s probably just an audio geek capturing impulse responses…