Attention Screen: The Making of Live at Merkin Hall
If there is one aspect of modern rock recording techniques that tires me, it is the overuse of compressors and limiters. Not only are songs dynamically squashed to the point where they sound uniformly and fatiguingly loud throughout, even when played quietly, but the natural dynamics of the drum and bass guitar tracks, in particular, are reduced so that they are reproduced at an unrealistically uniform level.
The situation is similar in the frequency domain: modern rock recording techniques appear to demand that every instrument be equalized so that it occupies its own unique frequency range. Bass guitar is thinned out in the bottom octave so that it doesn't fight with the kick drum. Vocals are made thinner and brighter so that they don't overlap the region occupied by the drums and keyboards. The highs of cymbals are boosted so that they cut through on top of everything else in the mix.
The result is that when you examine a modern rock recording on a spectrum analyzer, every frequency band is at the maximum possible level all the time. This is an extraordinarily efficient use of the recording medium, but it hardly serves the music. As I have written many times in the past decade or so, when all the dynamic and frequency contrast is removed from the music, the music itself is damaged. The notes—the sounds played by the musicians—are not the music, but merely the framework for the music. As Miles Davis commented, the music exists in the spaces between the notes. If that is the case, it hardly seems appropriate for recording and mastering engineers to fill up those spaces, even if, in their defense, they are forced to do so by the record-company suits' incessant demands to "make it louder."
So, when Bob Reina and I decided to record his band Attention Screen at Merkin Concert Hall, it would be my chance to put my (and the band's) money where my mouth was. In the same way that the musicians in Attention Screen play in uncharted waters, I would make as honest a recording as I could. I may have a myriad of effects plug-ins for my digital-audio workstation software, but I would use none of them. Instead, I would rely on each musician's ability to create music-defining sounds that left room for the three others in the dynamic, spatial, and frequency domains.
And there was another issue, a personal one. The recording projects in which I have been involved the past few years have all required a great deal of time in postproduction, assembling the recorded performances from a large number of session takes. This is not because the musicians and singers I choose to work with are not talented, or don't have the pieces sufficiently under their fingers or in their throats. But the recording process itself both interferes with and changes the act of making music. Rather than the recording session being a documentary of a live performance, it instead becomes a voyage of discovery toward the performance. The musician approaches a passage from different directions in different takes, each take contributing its small part to an eventual viewpoint that the musician deems to be what he was striving for. My job in the editing is to work with the musician to assemble the individual passages so that this vision of the performance emerges intact.
I have no philosophical problem with this way of recording. In fact, I feel privileged to be allowed to assume the role of midwife in this process of discovery. But it undoubtedly consumes time and energy, and the thought of recording a band that creates its music on the fly, where the later assembly of the performance would be impossible, held a great deal of appeal for me. The band was to play two sets of three improvisations at Merkin, with the soundcheck and encore adding two more improvisations. Given that, at a live Attention Screen concert, each improvisation lasts from 8 to 18 minutes, for the CD release we would be able to pick the best five or six from a total of eight improvisations. Following the mixdown, all I would have to do in postproduction would be to fade each track in and out—a very attractive notion.
Bob, Don, and I visited Merkin Hall just before Christmas 2006, so that Bob could try out the hall's two Steinways. While he did, I sat in various seats in the hall, listening to the balance and thinking about how I would mike the concert. For my classical recordings, I tend to use two or three different distant pairs of mikes, each optimized for a different aspect of the final sound. But with an audience present, it was going to be difficult to place distant mikes where I wanted to. I decided, therefore, to use a single spaced pair of Earthworks QTC-40 omnis as my distant pickup, combining that in the mix with close mikes on each of the instruments, and, in the case of the bass guitar, taking a direct feed from Chris's Trace Elliot bass amp.
For the guitar, I used a single Shure SM57 cardioid, placed about 1" from one of the Fender Vibrolux amplifier's twin speakers, positioned midway between the dustcap and the surround. For the piano, I used a pair of Neumann TLM103 solid-state cardioids, arranged as an ORTF stereo pair (7" apart, angled at 110°) about 15" above the strings in the center of the piano's soundboard. This gave me equal coverage of the instrument's treble, midrange, and bass registers.
I've seen as many as 15 microphones used on a drum kit, but given that some of the great recorded drum sounds from the 1970s—Rolling Stones, Led Zeppelin, Pink Floyd—were captured with as few as three mikes, I used just four for the drums. An AKG D112 was placed just in front of the hole in the kick drum's front skin, and a tiny Shure Beta 98 capacitor mike was clipped to the snare-drum shell so that its capsule was around 2" above the top skin. These close mikes would give me a good representation of the drums' "body tone," while the snare-drum mike would also capture some attack on the hi-hat cymbals. I then used a pair of DPA 4011 cardioids as an ORTF pair over the drums, around 3' above the cymbals. This would give a good pickup of cymbals, toms, and snare wires, but also some hall sound and a stereo picture of the drums that I would use as a backdrop in the mix.
Because of the minimal time for load-out at the end of the concert—the band would finish their encore around 10:10pm, and we had to be out the front door by 11pm or else we would owe the hall and its staff several hundred dollars in overtime charges—my recording rig was also minimal. The base unit was the Metric Halo MIO2882, an extremely versatile eight-channel preamp-A/D converter, connected via FireWire to a host computer, in this case a Mac mini, which would store the data on its internal hard drive. All the close-miked signals were processed with the '2882; the signals from the two distant omni mikes were amplified and converted to digital with a Metric Halo ULN-2, a lower-noise, two-channel sibling of the '2882. This was word-clock–slaved to the '2882, and also connected to the Mac mini via FireWire.
My recording software was the beta version of Metric Halo's MIO console. All 10 channels were digitized at 88.2kHz with 24-bit word length, with the level of each set to give as close to 0dBFS as I dared get. This is because I was going to have to attenuate the individual tracks in the mix, and I wanted to start with as high a resolution as possible. I asked the musicians to play as loudly as they could in the sound check, and then allowed 6dB headroom.
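The arithmetic behind that headroom allowance is simple, and worth sketching for readers who haven't worked with dBFS. Full scale (0dBFS) is the largest sample value the converter can represent, and the decibel level of any sample is 20 times the base-10 log of its ratio to full scale, so 6dB of headroom amounts to keeping peaks at roughly half of full scale (this is an illustrative sketch, not Metric Halo's metering code):

```python
import math

def dbfs(sample: float) -> float:
    """Level of a normalized sample (full scale = 1.0) in dBFS."""
    return 20.0 * math.log10(abs(sample))

def headroom_factor(db: float) -> float:
    """Linear gain factor corresponding to a given number of dB of headroom."""
    return 10.0 ** (-db / 20.0)

# 6dB of headroom means peaks sit at roughly half of full scale:
peak = headroom_factor(6.0)   # about 0.501 of full scale
print(round(dbfs(peak), 1))   # -6.0
```

With 24-bit words, even peaks held 6dB below full scale leave far more resolution than a 16-bit release format can carry, which is why starting "hot" and attenuating in the mix costs nothing audible.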
On a small number of occasions, this wasn't sufficient. Mark's mighty blows on his Korean puk drum clipped the drum overheads before I could adjust the input trim, and one of Chris's repeat-echo'd bass techno-freakouts in "Lap Dawg" drove the A/D converter for that channel into overload. I could fix the instantaneous waveform flat-tops in the drum channels—there were only a handful—by reducing the level of each affected half-cycle by 3–4dB and redrawing the waveform peak at the sample level with Bias Peak's pencil tool, using the unclipped waveforms in the distant mike channels as a guide. The bass-guitar passage was more difficult, as all the peaks were clipped for about a second. However, as Chris was using his Moog effect pedal to produce a ringing sound, I used FFT analysis to determine the frequency range of the signal he was playing, reduced the level of just the affected section by 3dB, then passed it through a high-Q digital bandpass filter set to the same range. This nicely reconstructed the signal waveform.
One thing I couldn't do anything about was when Mark hit the snare-drum mike with a drumstick. I left that in as a mark of authenticity, though I defy anyone to tell me where it occurred!
I had two things in mind when I mixed the tracks using Adobe Audition. Our webmaster, Jon Iverson, tells me that the art of successful mixing is to balance the authentic aspect of the sound against the emotional impact, to create a sound that is convincingly real yet at the same time conveys the music's full impact. And years ago I read something written by A-list engineer Glyn Johns: that unless you're recording in a studio, with each musician in an isolation booth—something that is hardly conducive to inspired music-making, in my experience—bleedthrough between mike channels is unavoidable. To some extent, all the microphones pick up all the instruments. The trick in mixing, according to Johns, is to exploit that bleedthrough, so that it aids rather than fights the mix.
In the same way that some photographers have already composed the photograph before they put the camera's viewfinder to their eyes, I had the eventual mixdown in mind when I was setting up mikes at Merkin and the musicians were settling down in the positions where they were going to play. I allowed the latter to take precedence over my needs—the musicians need to be able to see and hear each other, particularly as no stage-monitor mix would be available to them. (The last thing I wanted was for my mikes to pick up the sound of a loud, distorted boombox of a wedge monitor.) Once they'd got settled, I adjusted my mike positions and orientations to give me what I felt I needed.
The basic mix has the distant omnis fed to the left and right channels, the piano's stereo image panned from stage center to far right, the stereo drum image covering the full width of the stage (a departure from strict accuracy that I felt appropriate), the guitar panned hard left, and the kick drum and bass guitar placed at the center of the stage. The snare-drum mike was initially also placed in the center, but I eventually moved it a little to the right of center so that the hi-hat image was appropriately placed in the soundstage.
The key to a successful live mix is to echo the musicians' positions on the stage. However, with the bleedthrough, particularly of the bass into the drum and piano mikes and the drums into the piano mikes, the mixdown was still complicated. I couldn't adjust an instrument's level without affecting the soundstage. Nor could I adjust a panpot without affecting the image depth. The bass-guitar image, for example, comprises the direct signal from the instrument's preamp overlaid with a phantom image from the distant omnis and a phantom mirror image from the right drum mike and the left piano mike. What I was aiming for in the mix was the impact of the direct-injected instrument, but with the spatial modeling of the bass guitar's image given by the bleedthrough. However, adjusting the piano level would thus also pull the bass-guitar image to the left or right, depending on whether I was increasing or decreasing the level. Adjusting the level of the drum overheads would also make the bass sound more or less reverberant, as well as pull the bass image in the other direction from the piano tracks.
The bleedthrough also set limits on how much gain I could add when a musician was underplaying. For example, Bob did play quietly some of the time—deliberate choices on his part. If I tried to boost his level by more than a couple of dB, the bass-guitar image shifted to the left and the drums started to sound too reverberant. I couldn't depart too far from what the musicians had intended when they were playing live. Which is a wisdom of a kind.
But other than some slight gain-riding—almost never more than a dB or so—on the guitar or piano, to emphasize when each was taking the lead role, and the muting of the kick- and snare-drum tracks when they weren't playing, again to keep the mix clean, what you hear on Live at Merkin Hall is basically what you would have heard had you been sitting on the front of the stage, in the center of the apron: a wide-angle view of the band, with the guitar and piano sounding rather more dry and immediate than the more distant drums and bass, and the hall providing a halo of ambience.
For the downconversion from 24-bit/88.2kHz to 16-bit/44.1kHz, I used Bias Peak 5.2 with the POWR-2 redithering algorithm, which I have found to be sonically transparent. The Red Book files were then converted to a DDP CD master file set with Sonic Studio's PMCD 2.0. The dynamic range of this recording is extreme—the average level is around 10dB lower than a typical high-quality rock or jazz CD. To the naïve listener, it will sound too quiet; below a certain threshold of loudness, the balance won't gel. However, if you play this CD loud, so that the average level is reasonably satisfying, the instantaneous peaks will both reflect what you would have heard in the hall last February and blow you out of your listening chair. This is a true high-end recording: the louder you play it, the more there is to hear.
I must admit that I was well pleased with the sound—but even more with the music. I tease Bob that, to echo Laurie Anderson, his band's music is Difficult Listening Hour. But the more you listen, the more there is to hear.
Attention Screen's music is like good wine: evocations both familiar and unfamiliar surface at different times. Take "Nell's Bells": Don leads off with some rocking between tonic and dominant on his amplified eight-string ukulele, over which Chris rhapsodizes while Mark underpins both of them with some four-in-the-bar rimshots. Bob eventually creeps in with a figure based on the dominant, to emphasize the tonal argument that's taking place, which is later echoed by Bob hammering down offbeat dominants while Mark and Chris pay tonic homage to Bo Diddley. And all the while Don strums on, raising a Hawaiian smile, which triggers a quote from Bernstein's "America" from Bob, which then degenerates into a chromatic fantasy, before Chris finally gives in to Bob's chordal pleading and ends on the dominant, only to have Don resolve the issue with a final tonic plunk.
And even though the band gets "out there," there are flashes of straightness to remind you that you need to be able to draw from life before you can paint in the abstract. For example, at 7:30 in the final track, "Lap Dawg," which was the encore, a passage of swing time from Mark and Chris emerges under Don's lap steel, following an anarchic breakdown of harmony and rhythm instigated by Bob and Chris. And then there's Mark's solo drum outing at the start of "Blizzard Limbs," and Chris's mystical bass solo at the start of "Fruit Forward," and Don's lotar musings on "Mansour's Gift" . . . it's all good!
No equalization, compression, or artificial reverberation (other than Chris Jones's Moog and Boss effects pedals) was used in the mixing and mastering of this CD. The tonal colors, the stereo image, and the range of dynamic expression you hear faithfully reflect what the audience experienced that magic night in Merkin Hall. (You can find Wes Phillips' review of the concert here.) You can buy Attention Screen: Live at Merkin Hall from our e-commerce page. I hope you enjoy listening to it as much as we did making it.—John Atkinson