Stereo & the Soundstage
Yes, you're right. It hardly ever happens!
In general, classical recordings from major companies are produced as if they were rock recordings. Each instrument or group of instruments is allocated its own microphone, and the balance engineer panpots the mike outputs into an arc across the stereo stage, sometimes adjusting their levels to approximate the real-life balance, very often to "improve" on what the composer thought correct. Ambience microphones introduce pools of reverberation at strategic locations; sometimes even electronic reverberation is added to an otherwise completed picture to fill in the gaps and create an artificial soundstage. The result, given gifted producers and engineers, can bear a surprisingly close resemblance to a real soundstage. When you listen critically, however, the joins between the elements of the collage can become all too clear, particularly if the playback medium—how can I say it?—throws away some of the fine detail obscuring those joins. Again we have a true stereo image, but a soundstage? Not nearly often enough.
But we're audiophiles, aren't we? We know all about the horrors of misjudged multimiking, don't we? We believe in "simple" microphone techniques, and in honesty in recording. We believe in stereo imaging and the existence of the soundstage.
Simple miking. A simple concept, surely? Put up two or three microphones in front of the orchestra, in positions where the sound from the monitors resembles that of the real thing, and everything should be hunky-dory, right?
Well...yes and no. Remember that a true stereo image is produced by a two-channel recording consisting of amplitude information only. What if the microphones are separated in space, not by a small distance, but by a distance larger than the wavelength of most of the musical sounds—10', say? Unless an instrument or voice is exactly halfway between the two microphones, there will be, in addition to the amplitude information, a time delay introduced between the electrical signal it produces in one channel and the signal it produces in the other. Such time information pulls the image of the source further toward the nearest speaker, resulting in an instability of central imaging and a tendency for sources to "clump" around the speakers. Add to that the fact that the inter-channel amplitude differences produced by spaced microphones do not have a linear relationship with angular direction of the sound sources, and it is hard to see how a pair of spaced microphones can produce any image at all.
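The size of that time disparity is easy to estimate with a little geometry. The sketch below is my own illustration, not anything from a real session: it assumes a pair of omnis 10' apart and a source a few feet off-center, and simply divides the path-length difference by the speed of sound, roughly 1130 feet per second.

```python
import math

SPEED_OF_SOUND = 1130.0  # feet per second, approximately

def interchannel_delay_ms(src_x, src_y, spacing=10.0):
    """Delay in ms between the two mic outputs for a source at (src_x, src_y).

    The mics sit on the x axis at +/- spacing/2; all positions are in feet.
    A negative result means the left mic hears the source first.
    """
    to_left = math.hypot(src_x + spacing / 2, src_y)
    to_right = math.hypot(src_x - spacing / 2, src_y)
    return (to_left - to_right) / SPEED_OF_SOUND * 1000.0

# A hypothetical violin 3' left of center and 15' from the mic line:
print(round(interchannel_delay_ms(-3.0, 15.0), 2))  # → -1.65
```

A delay of over a millisecond and a half is comfortably in the region where the precedence effect takes over, which is why the image snaps toward the nearer loudspeaker rather than sitting between them.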
This can easily be shown with the Delos recording of the Ravel String Quartet, recorded by Stan Ricker with two omni mikes spaced about 12' apart. The recording is rich, alive, and has a great feeling of space. But there is so much time disparity between the two channels that it cannot be considered a stereo recording at all; rather, it is two different recordings of the same performance that happen to be played back pretty much simultaneously.
"Different?" Yes, and it is easy to prove. Speakers for the playback of stereo recordings should be in phase, as a true stereo recording has a considerable degree of correlation between its two channels; ie, the same signal appears to differing extents in both. When the speakers are out of phase, this correlation results in cancellation and phasey effects, something with which I am sure you are all familiar. Connect your speakers out of phase with this Delos recording and you hear no difference between this sound and the sound with the speakers in phase! The recorded time information totally swamps the fact that the speakers are out of phase.
If two spaced microphones do not have sufficient correlation between the two channels, why not add a third, central microphone, the output of which can be fed to both left and right channels? Won't this pull the image in from the sides and firm up the central image?
Well...yes and no. This is the classic Mercury Living Presence technique, echoed in recent years with much success by Telarc's Jack Renner. There is now a greater sense of solidity to the sound, with stable center images, while the effortless sense of space and depth from the use of omni microphones is true to the needs of the music. But is there a true stereo image? No, as the original directions of sources are only approximately preserved. And neither, therefore, is there a true soundstage. Recordings made with widely spaced microphones are analogous to Impressionist paintings: wonderfully pleasing to the eye and capable of evoking a deep emotional response, but when you get right down to it, hardly realistic in literal terms.
To make a recording capable of producing a true soundstage, we have to go back to Blumlein's 1931 paper, in which he outlined two microphone techniques producing outputs containing amplitude information in the correct linear relationship with the direction of the sound sources.
First is the M-S technique, where a sideways-facing velocity (figure-eight) microphone is spatially coincident with a forward-facing mike. Rediscovered in the '50s, this presents the two outputs in matrix form, as sum and difference signals (footnote 2), and is the basic philosophy behind the Soundfield microphone.
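In practice the matrixing is just a sum and a difference. A minimal sketch (conventions vary; some engineers apply a scaling factor of 1/√2, omitted here for clarity):

```python
def ms_to_lr(mid, side):
    """Decode one M-S sample pair to left/right stereo: L = M+S, R = M-S."""
    return mid + side, mid - side

# A sound picked up only by the forward-facing (M) capsule decodes dead center:
print(ms_to_lr(1.0, 0.0))  # → (1.0, 1.0)
# Side (S) information alone appears with opposite polarity in the two channels:
print(ms_to_lr(0.0, 0.5))  # → (0.5, -0.5)
```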
Second is to arrange two figure-eight microphones horizontally coincident at 90°, each positioned at 45° to the forward direction. This is the classic "coincident" technique as used for early EMI stereo recordings, by James Boyk for his Performance Recordings piano records, and by Sheffield for their recent Firebird.
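Because a figure-eight capsule's sensitivity follows the cosine of the angle of incidence, the amplitude relationship a Blumlein pair encodes can be sketched directly. The sign convention below (positive angles to the left of front) is my own choice for illustration:

```python
import math

def blumlein_gains(theta_deg):
    """Left/right capsule gains for a source at theta_deg (positive = left).

    Each figure-eight has a cosine polar pattern and is aimed 45 degrees
    off the forward direction, the two capsules crossed at 90 degrees.
    """
    t = math.radians(theta_deg)
    left = math.cos(t - math.pi / 4)
    right = math.cos(t + math.pi / 4)
    return left, right

print(blumlein_gains(0.0))   # front center: equal gains, hence a central image
print(blumlein_gains(45.0))  # on the left capsule's axis: right gain falls to zero
```

Note that the interchannel difference is purely one of amplitude: both capsules occupy the same point in space, so no time disparity is encoded, which is exactly Blumlein's requirement for a true stereo image.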
Of all the "simple" techniques used to capture live acoustic music, these two, in all their ramifications, are the only ones to produce real stereo imaging from loudspeakers. As a result, they are the only techniques for recording classical music to give good soundstage.
I am certainly not saying that only recordings made in this manner are worth listening to. This discussion of stereo imaging has ignored such equally important aspects of miking as frequency response and balance; microphone coloration, distortion, and noise; capturing the true dynamics of the music; the quality of the concert-hall acoustic; getting the most musically desirable ratio between direct and reverberant sound; and even the time available to find the best places in which to position the mikes. When all these are taken into consideration, any good recording engineer will tell you that he often has to sacrifice the potential for true stereo imaging, in Blumlein's amplitude sense, to gain benefits elsewhere. And if the results are still musically justifiable, why not?
There's no reason at all why not. In art, you can always make a good case for the ends justifying the means. But...those who write about recordings, and how well technical aspects of those recordings are preserved or changed by hi-fi equipment, have a duty to ensure that they do not mistake attributes of the hardware for those of the software. The next time you read a review in which the writer enthuses over the soundstaging performance of a piece of equipment, ask yourself whether he has been using recordings possessing the ability to throw a real soundstage, or whether, in fact, he has been led by his ears into an aural non sequitur.