Taking your tracks down the street may involve more than finding the car keys
By Linda Taylor
I recently completed the music for an edgy feature-length documentary. Music is prominently featured—there are about 24 cues, ranging from solo guitar to over-the-top, huge techno cues. Enough tracks to keep any engineer busy.
Originally, I planned to mix the music myself, delivering two-track versions of the music to the film’s sound engineer. Then the film’s producers decided to do both a stereo and a surround mix. While that’s great for the music, it’s bad news for me, as I really don’t consider myself a Mix Maestra. After I met with the engineer to go through the score, he volunteered to mix both the initial stereo version and subsequent surround versions.
But I couldn’t pop the cork off the champagne yet. Getting from the composer’s studio to the mix studio held some challenges, and what I learned in dealing with these challenges will prove useful to anyone planning to take their tracks to another studio for mixing.
The first hurdle I faced was track transfer: getting the mix studio and mine to peacefully coexist. I work on a Mac running Avid Pro Tools, and the mix engineer is working on a PC using Steinberg Nuendo (increasingly popular with surround engineers). In a perfect world, everyone is using the same gear, so it’s simply a matter of saving the project and burning some discs.
We discussed the possibility of exporting my files to OMF, a Nuendo-compatible interchange format. But given the various disaster stories I'd heard about OMF transfers not always arriving intact, I decided to forgo that method. We briefly entertained the idea of renting a Pro Tools rig for the mix room, but budget constraints prevailed (as always).
The most time-consuming solution was ultimately the most accurate: we decided to export everything as interleaved Broadcast Wave Format (BWF) files. BWF retains the original timecode information and is compatible with both Macs and PCs. We used interleaved files so the engineer wouldn't have to import separated (.L and .R) stereo tracks.
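For anyone scripting the same kind of conversion, here's a rough sketch of merging split-stereo mono files into one interleaved stereo file, using only Python's standard library. The function name and file layout are my own invention; note that it writes plain WAV, whereas a true BWF carries its timestamp in an extra "bext" chunk that the stdlib `wave` module doesn't handle.

```python
import wave

def interleave(left_path, right_path, out_path):
    """Merge split-stereo (.L/.R) mono WAVs into one interleaved stereo WAV.
    Simplified sketch: assumes 16-bit mono sources at the same sample rate."""
    with wave.open(left_path, "rb") as lf, wave.open(right_path, "rb") as rf:
        assert lf.getnchannels() == 1 and rf.getnchannels() == 1
        assert lf.getframerate() == rf.getframerate()
        assert lf.getsampwidth() == 2 and rf.getsampwidth() == 2
        rate = lf.getframerate()
        n = min(lf.getnframes(), rf.getnframes())
        ldata, rdata = lf.readframes(n), rf.readframes(n)
    frames = bytearray()
    for i in range(0, len(ldata), 2):  # alternate one 16-bit sample L, then R
        frames += ldata[i:i + 2] + rdata[i:i + 2]
    with wave.open(out_path, "wb") as out:
        out.setnchannels(2)
        out.setsampwidth(2)
        out.setframerate(rate)
        out.writeframes(bytes(frames))
```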
I further prepared each track so that the engineer could bring it up “as is” on his system, without worrying about any edits or other specifics I had dealt with on mine. The process goes by different names in different programs (consolidating, merging, and so on); it turns a track inside a given program into a more “neutral” audio file that other programs can recognize easily. I made any pending edits permanent in the new audio files, and made sure crossfades were click-free before consolidating. For simplicity, I consolidated every track with the same start and end time.
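Conceptually, consolidating renders every edit and crossfade into one contiguous file starting at a common timestamp. The click-free splice part boils down to simple arithmetic; here's a toy illustration of a linear crossfade in plain Python (float samples, made-up clip data), not the actual DSP any particular DAW uses:

```python
def crossfade(clip_a, clip_b, fade_len):
    """Join two clips with a linear crossfade so the splice is click-free.
    Clips are lists of float samples; the last fade_len samples of clip_a
    fade out while the first fade_len samples of clip_b fade in."""
    fade = [
        clip_a[len(clip_a) - fade_len + i] * (1 - i / fade_len)
        + clip_b[i] * (i / fade_len)
        for i in range(fade_len)
    ]
    return clip_a[:len(clip_a) - fade_len] + fade + clip_b[fade_len:]
```

With two clips sitting at the same level, the two fade ramps sum to unity, so the joined region plays back at a constant level instead of dipping or clicking.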
My next task was to prepare the stems for the mixdown. A stem mix is sort of like a shorthand mix. You EQ and pan tracks, add effects, all the normal mix procedures… but instead of bouncing everything to two tracks, you keep different groups (stems) separate. For example, a stem mix might include drums on channels 1–2, bass on channel 3, keyboards on channels 4–5, guitars on channels 6–7, etc. The idea is that when you put all these faders at unity, the stem mix will play back exactly like the stereo version. Stem mixes are favored in post production for their flexibility in dealing with last-minute changes.
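That unity-gain guarantee is easy to demonstrate with toy numbers. In this sketch the track names, sample values, and stem groupings are all made up; the point is that summing the stems sample-by-sample gives back exactly what summing all the tracks would:

```python
# Hypothetical mini-session: each track is one channel of float samples.
tracks = {
    "kick": [0.50, 0.20, -0.10],
    "bass": [0.10, 0.10, 0.10],
    "keys": [0.00, 0.30, 0.20],
}
stems = {"drums": ["kick"], "rhythm": ["bass", "keys"]}

def mix(names):
    """Sum the named tracks sample-by-sample (all faders at unity)."""
    return [sum(tracks[n][i] for n in names) for i in range(3)]

stem_mixes = {name: mix(members) for name, members in stems.items()}
full_mix = mix(tracks.keys())

# With every stem fader at unity, the stems recombine into the full mix:
recombined = [sum(s[i] for s in stem_mixes.values()) for i in range(3)]
```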
Sonically this score is anything but “organic.” Several cues feature a number of low-end pads, moving counter to each other and taking up a lot of space. There are harsh industrial elements, soft acoustic guitars, wheezy mangled Rhodes-type sounds, and lots of motion from all instruments.
My usual method in any mix is to roll off the low end at about 50–60 Hz, with about 6 dB of cut. On this project there was a lot to lose with that approach. Since we’d ultimately be mixing in surround, the engineer asked that I not roll off any low end. This documentary has lots of dialog, but no huge explosions, car crashes, or dinosaurs pumping subsonic content into the bass channel…which leaves the LFE speaker (the “.1” in 5.1) reserved almost exclusively for music.
I tend to pre-mix while composing, so I have a good idea of which tracks should be combined in stems. I run 16 outputs from my Pro Tools interface to a Dangerous 2-Bus summing mixer. To my ears, this setup sounds similar to an analog console and helps keep the openness and space around each instrument. I can group a variety of instruments in different stems to see which will give the best result.
Considering the final surround mix, I didn’t want to limit the instruments to only the stereo field. Certain pads lent themselves to creative panning, and I wanted the luxury of making that decision during the surround mix, not before.
Further, while preparing the stem mixes I had no way of knowing exactly where the dialog would live, level-wise. Though I’d already composed around the dialog, I hadn’t heard the final sound mix. Some elements could have been added at the last minute. For this project, the best choice was to deliver the frequency-hogs and panning-hogs as individual tracks.
I rendered all delays and reverbs, and printed any effects that were inherent to the sound, like filters and creative EQ. I bused most tracks individually to a compressor (mostly Crane Song’s Phoenix plug-in) and rendered the result.
Essentially, I pre-treated the tracks as if I were the mix engineer. This gave the engineer room to perform any necessary audio surgery, while not having to re-invent the wheel. I left out housekeeping EQ, but took note of settings I used so I would have a point of reference when I went to the mix studio. I also printed a stereo rough mix for reference.
In keeping with the ‘do unto others’ mantra, I tried to make the transfer from my studio to the mix room as smooth as possible. Communication was essential at every step of the process.
By not working in the same DAW, we lost some of the session management niceties I find important, like comments in the mixing board, or markers to designate different versions of tracks and playlists. These studio aids keep things tidy. So I made track sheets for every cue. Tedious work, but worthwhile not just for the engineer but for myself as well. All session information was noted, including timecodes, sample rate and bit depth, tempo, anything I thought would make the process smoother.
The engineer asked for reference tones printed at 400 Hz at –20 dBFS for 1 minute. I rendered these tones using the Signal Generator AudioSuite plug-in in Pro Tools, and exported them with every cue. Calibration isn’t just for the analog world.
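My tones came from Pro Tools’ Signal Generator, but if you don’t have a signal-generator plug-in handy, the same thing can be rendered with a few lines of code. This sketch uses only Python’s standard library and assumes a 48 kHz, 16-bit mono file (the sample rate, bit depth, and filename here are my assumptions, not a spec):

```python
import math
import struct
import wave

RATE = 48000    # assumed sample rate; match your session
FREQ = 400      # Hz, as the engineer requested
DBFS = -20.0    # reference level
SECONDS = 60

amp = 10 ** (DBFS / 20)     # -20 dBFS works out to 0.1 of full scale
peak = round(amp * 32767)   # scale to 16-bit full scale

frames = b"".join(
    struct.pack("<h", round(peak * math.sin(2 * math.pi * FREQ * n / RATE)))
    for n in range(RATE * SECONDS)
)
with wave.open("ref_tone_400Hz_-20dBFS.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(frames)
```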
This was definitely not the place to slam levels, and it was requested I aim for –8 dBFS as my high-water mark. We wanted to ‘use the bits,’ but leave a little headroom for any additional processing.
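The arithmetic behind those levels is straightforward: dBFS is just 20 times the base-10 log of a sample’s fraction of full scale, so –8 dBFS corresponds to peaks at roughly 40% of the available range. A minimal sketch (the function names are mine):

```python
import math

def dbfs_to_linear(db):
    """Convert a dBFS level to a linear fraction of full scale."""
    return 10 ** (db / 20)

def peak_dbfs(samples):
    """Peak level of a float clip (samples in -1.0..1.0), in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

ceiling = dbfs_to_linear(-8)  # about 0.398: the -8 dBFS high-water mark
```

So a –8 dBFS ceiling gives up less than 1.5 bits of a 24-bit word while leaving comfortable room for downstream processing.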
To keep things as clear as possible, I saved each cue to a different session, or EDL (Edit Decision List). It’s a lot easier to make edits or use tempo-based effects when your files line up to a beat grid, as opposed to working within the timecode grid. Even though I had already printed the effects I knew we’d need, I like to plan for changes.
I then spent a beautiful, sunny LA weekend in front of a computer doing batch exports and labeling DVDs. Surf’s up.
The engineer’s studio is in a generously sized loft, with vaulted ceilings and plenty of room between the speakers and the walls. Low-frequency information has time to develop, and I was looking forward to hearing this score in full bloom.
The importing went well, every track spotting correctly to its original timestamp. Since all cues had been saved to their own folder, it was a simple matter of just opening a new session in Nuendo and dragging the files in, using the track sheets as the master plan.
Although all tracks transferred accurately, we uncovered a kink in the system. A couple of tracks contained full-scale digital noise in the blank spots—whether it happened during the export or import, I don’t know. Out of approximately 400 tracks, it only happened on two or three. We simply edited out the problem areas.
The engineer dialed up a starting point for all cues, using my stereo rough mixes as reference. We started by playing each cue against picture, just to make sure there were no surprises.
I was pleased to discover that the tracks needed minimal tweaking. Most tracks sat well in the mix and required only the slightest amount of EQ work. I suggested certain midrange frequency cuts that I had used in my studio, and found that the same numbers applied in the mix room.
The acoustical difference between our studios became quickly apparent during the first cue. To the eye, the engineer had the bass faders much lower than I had them in my studio. But to the ear, the low frequency material was just as loud, full and rich as I expected. Lesson: Never underestimate the value of a good-sounding room.
A couple of cues sounded quite flat, lifeless, and not as “3-D” as I knew they could be. It turned out that the engineer had inserted a multiband compressor across the master bus. While I have certainly used one at the end of the mix, it was an interesting approach for the start of the mix. I asked him to raise the threshold, but the comp was still kicking in much heavier than I wanted. My next idea was to get rid of the compressor and simply lower the overall levels. Not as hot, but much better sounding. He placated me and removed the compressor from all cues (at least until I left).
We moved through the mixes quickly. I did need to redo a couple of exports—places where I missed an automation pass and other small fixes—but for the most part it was a smooth ride.
Now for the all-important checking on other systems. We burned some audio CDs, just to get a quick ‘earball’ on the mix through a TV, boom box, etc. I played the test CD through my home theater system, an unforgiving Bose 5.1 that doesn’t like any of my mixes. Everything sounded great: the low end (my biggest concern) was warm and clear, not boomy at all. A couple of weeks later I got one of my favorite phone calls… the one where the engineer is really, genuinely happy with the final product.