The Acoustical Standard (with follow-up)
This topic is a bit controversial, in that my views on it are not shared by John Atkinson and Larry Archibald.
Ever since 1877, when Edison shouted "Mary Had a Little Lamb" into the horn of the first phonograph, the history of audio has seen continuous technological advance, most of it evolutionary in nature (qualitative improvements), some of it revolutionary (breakthroughs such as electrical recording, the LP, stereo, and digital audio). All of these developments were aimed at improving the realism with which sound is reproduced. The phrase "just like the real thing" was a mainstay of phonograph advertising for years before the first electrically cut recordings were introduced (after which the sound was, presumably, "even more just like the real thing"). But the concept of an original and a reproduction was not really formalized until the early 1940s, when someone coined the term "high fidelity." (No one is certain who it was, but the term is generally attributed to Gilbert A. Briggs, founder of Wharfedale.)
By the time the hi-fi fad was in full swing, during the mid '50s, high fidelity had taken on the trappings of a religion. Perfection was considered to be worth pursuing even though everyone knew it was unattainable, like the Holy Grail so fervently sought by the legendary King Arthur. Audio was driven by an ideal that was shared by all, and perfection in audio was easily defined and universally accepted: It was the sound of a live orchestra (or any other producer of musical sounds) in its natural acoustical habitat. The ability to reproduce that sound was the measure of fidelity—the yardstick by which equipment designers gauged the success of their efforts and set the goals for future designs.
Lately, though, there has been a growing reaction against fidelity in sound reproduction. The idea that the reproduction should sound like something has devolved into the idea that it should sound good, regardless of whether it sounds right.
Most of today's audio "perfectionists" are phonies, spouting pieties about realism and naturalness and accuracy when in fact they don't really give a damn about any of these things. I recall a visit a few years back by a well-known high-end audio manufacturer, who took one look at the Shure V-15 pickup I was using and said "Why don't you get yourself a decent cartridge?" This was during the early days of MC cartridges, when they had truly abominable frequency responses, and I explained that I preferred the V-15 because it was more accurate than any MC. To demonstrate what I was talking about, I played him a couple of 15ips one-to-one copies of original master tapes, alongside the V-15 playbacks of the equivalent discs. He agreed that they were virtually indistinguishable, then reiterated "But why don't you use a decent cartridge?" I muttered something impolite and gave up.
He was not concerned with the demonstrated accuracy (fidelity) of the V-15, but only with its lack of certain qualities that he liked in MC cartridges. All he cared about was how "good" it sounded—to him. This is okay up to a point; no one should be expected to listen to a system that sounds bad. But the problem with this "sounds good" approach is that it is so subjective that anything goes. It makes audio quality nothing more than a matter of opinion, in which anyone's opinion is as valid (or otherwise) as anyone else's.
This is not to imply that real music doesn't sound pleasant; just that there is an almost limitless range of "pleasant" reproduced sounds, most of which don't relate to realism. But what is "realism" in 1988? It's not what it was in 1955.
Until relatively recently—until the early '50s—all music was acoustically produced, by people bowing and blowing and pounding on devices which resonate in ways that previous generations of listeners had found pleasant to the ear. Those devices were called instruments. Today, we call them acoustical instruments, to distinguish them from electrical instruments, which have come to dominate all nonclassical music in America.
There's nothing intrinsically bad about electronically produced music, as long as it doesn't try to imitate acoustical instruments (which it does poorly). On the other hand, it has no relevance to the audio concept of fidelity or accuracy or realism, because it has no existence in non-electronic reality. You cannot hear it except through amplifiers and loudspeakers, and to use the sound of amplifiers and loudspeakers for the evaluation of amplifiers and loudspeakers is ridiculous.
One of the attractions of electronic music is the almost limitless variety of sounds that can be gotten from a single "instrument," through what is called signal processing. But this means there is no longer any such thing as the "correct" sound for any instrument, and without that, there can be no way of judging what a musical sound is supposed to sound like.
Of course, reproduced realism isn't all that tangible a quality either. Another "underground" audio magazine has taken a lot of flak through the years for its name, The Absolute Sound, because that seems to imply the existence of an absolute standard for the sound of live music. There isn't, but then, TAS has never claimed that there is. Different orchestras sound different, different halls sound different, different seating locations sound different, and microphones are placed where no one ever sits anyway. The "absolute sound" of an orchestra is, rather, a range of sounds, but the limits of this range define rather unequivocally what it is possible for a real, live orchestra to sound like.
This is the only criterion by which the fidelity of sound reproduction can be assessed. You can use a musically accurate system to evaluate the sound quality of an electronically produced recording, but you cannot use an electronically produced recording to evaluate a system unless you are already familiar enough with what the recording really sounds like to make such a judgment.
Bear this in mind when you read in a Stereophile record review that an electronic recording has "excellent" sound. It does not mean you can use that record as a system evaluation tool; merely that the record will sound very good on an already-very-good system. But this is the very trap that the audio industry has fallen into of late. Since electronic music has become the predominant source of America's music, we have succumbed to the temptation to use it as a yardstick for evaluating audio systems, and it is simply not adequate for that purpose.
Unfortunately, the pursuit of realism does complicate matters, because it requires that the pursuer make certain value judgments about the rightness and wrongness of reproduced sounds. You cannot have too much bass or treble range or smoothness or detail, and neither can you have too much freedom from distortion. But harmonic correctness, which is at the core of sonic fidelity, is not a matter of more-is-better, it is one of middle-of-the-roadness.
The perfect reproduction of an acoustical instrument lies at the top of the statistician's bell-shaped curve, with varying degrees of inaccuracy to the left or right of that peak. So the only way you can judge tonal accuracy is by comparing what you hear with what you remember about the sound of the live, acoustical instrument. And the only way to develop that memory is by attending live concerts regularly.
"The sound of the real thing" may mean different things to different people, but, vague standard or not, it is the only consistent standard we have for judging an audio system's performance. Without it as a goal, the whole idea of sonic perfection is meaningless, "perfectionist" audio has nowhere to go, and there can be little meaningful advancement in the future.