Deeper Meanings

The primary weapon used by the audio engineering establishment to support its position that no differences exist between competently designed and manufactured products is the blind listening test. The blind test, in which the subject must identify sonic differences without knowing the identities of the devices under test, is considered the ultimate weapon in the war against those flat-earthers, the audiophiles. According to the objectivists, if one cannot pass such a test, one's entire philosophical foundation—the validity of listening—is rendered a heap of rubble. In addition, there is widespread condemnation of subjective review magazines like Stereophile purely because we report our impressions of products without blind testing (see "Letters," for example).

Before discussing blind tests in detail, and why I feel they are fundamentally flawed, I offer the following observations on the nature of the recording process and on the role of value judgments in that process.

Many of the papers presented at the AES conference addressed the acoustic design of concert halls and playback rooms. In presenting these papers, their authors expressed countless value judgments about the sound quality of the concert halls or playback rooms under discussion. Ironically, there was no outcry from the audio engineering establishment that these conclusions had not been reached under blind conditions. And these are the designers of the concert halls themselves, whose potential bias is far greater than that alleged to be held by the audio reviewer.

The whole idea of value judgments, selecting "what you like," needs some elucidation. To the scientist, "what you like" is unimportant because it is not based on any measurable, objective standard. "What you like" is merely the reflection of an individual's biases, limitations, and caprice, and thus has no validity. In reality, however, it is just these judgments of "what you like" that profoundly influence the entire music recording chain.

For example, a recording engineer selects microphones based on how they sound to him, not on any technical specifications. If they don't sound good (again, not assessed under blind conditions), he tries other microphones until the sound is judged "good." The process is clearly one of iterative "subjective" value judgments—"what you like"—yet it is hard to imagine any other way of making recordings. Why are the value judgments made during a recording valid, and those made during a component review questioned? And why are subjective audio reviewers harangued to take part in blind tests to substantiate their value judgments and not everyone else in the chain who has to make decisions—value judgments—based on what they hear? Why the double standard? And why are value judgments made in other fields, film criticism for example, not attacked as merely a reflection of the critic's capriciousness?

To the objectivist, listening for oneself to discover if differences exist is ruled out. Again, I quote Dr. Lipshitz: "...I would like to comment briefly on the frequently-heard but nonsensical request which the 'subjectivists' make of us 'objectivists'—namely that we undertake tests to substantiate their claims for the audibility of a certain effect. How can you expect someone who professes not to be able to hear something to demonstrate its audibility? The onus clearly falls on those who claim that they can hear the difference to subject their claims to the harsh reality of a blind test."

Blind tests nearly universally appear to indicate that no differences exist between electronics, cables, capacitors, etc. In fact, one infamous test "revealed" that no sonic differences exist between power amplifiers. Mark Levinson, NYAL Futterman OTL tube monoblock, NAD, Hafler, and Counterpoint power amplifiers were all judged to be sonically identical to each other and to a $219 Japanese receiver (footnote 7). This very test, wielded by the objectivists as proof that all amplifiers sound alike, in fact calls into question the entire blind methodology because of the conclusion's absurdity. Who really believes that a pair of Futterman OTL tube amplifiers, a Mark Levinson, and a Japanese receiver are sonically identical? Rather than bolster the objectivist's case, the "all amplifiers sound the same" conclusion of this blind test in fact discredits the very methodology on which hangs the objectivist's entire belief structure.

Blind tests which have demonstrated audible differences between components have been attacked on the basis of alleged poor methodology. Martin Colloms's blind amplifier tests in Hi-Fi News & Record Review (May '86, p.67, and August '86, pp.7 & 9) showed 63% correct identification, yet the results were dismissed because some of the subjects refused to respond to each presentation. In addition, Stereophile's blind amplifier tests conducted during the Stereophile Bay Area Hi-Fi Show in 1989 revealed sonic differences between an Adcom GFA-555 and VTL 300W Deluxe monoblocks. However, the results were incorrectly reported—in a paper given at the AES conference attacking the audiophile position—to be of no statistical significance. I refer the reader to the letter in Vol.12 No.10 from Professor Herman Burstein, one of the country's foremost statisticians, who did find a degree of statistical significance in the test results. When JA informed Tom Nousaine, author of the paper that incorrectly ascribed no significance to our results, that Professor Burstein had indeed found small but nevertheless statistically significant evidence of differences between the amplifiers, Mr. Nousaine responded that the established acceptance criterion for recognizing a difference was therefore too low. Furthermore, he attempted to discredit even statistically significant evidence of differences in this quote from his paper: "It is wise to be suspicious of results which are barely significant in any endeavor."
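As an aside for readers who want to see what "statistical significance" means in this context, the short Python sketch below computes the one-sided binomial probability of a listener scoring a given number of correct identifications purely by guessing. It is an illustration only; the trial counts in the example are hypothetical and are not the actual data from the Colloms or Stereophile tests.

    from math import comb

    def guessing_probability(trials: int, correct: int, chance: float = 0.5) -> float:
        """One-sided p-value: the probability of getting `correct` or more
        identifications right out of `trials` presentations by guessing alone."""
        return sum(
            comb(trials, k) * chance**k * (1 - chance)**(trials - k)
            for k in range(correct, trials + 1)
        )

    # Hypothetical example: 50 correct answers out of 80 presentations (62.5%).
    # These numbers are illustrative, not the published test data.
    p = guessing_probability(trials=80, correct=50)
    print(f"p = {p:.3f}")  # roughly 0.02: hard to attribute to guessing at the usual 0.05 criterion

A score like that would be "significant" at the conventional 5% level; yet, as Mr. Nousaine's remark shows, where that criterion is set is itself a judgment call.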

If differences do exist between components, why don't blind tests conclusively establish the audibility of these differences? I believe that blind listening tests, rather than moving us toward the truth, actually lead us away from reality.

First, the preponderance of blind tests has been conducted by "objectivists" who arrange the tests in such a way that audible differences are more difficult to detect. Rapid switching between components, for example, will always make differences harder to hear. A component's subtleties are not revealed in a few seconds or minutes, but slowly over the course of days or weeks. When reviewing a product, I find that I don't really get to know it until after several weeks of daily listening. Toward the end of the review process, I am still learning aspects of the product's character. Furthermore, the stress of the situation—usually an unfamiliar environment (both music and playback system), an adversarial relationship between tester and listener, and the prospect of being ridiculed—imposes an artificiality on the process that reduces one's sensitivity to musical nuances.

Going beyond the nuts and bolts of blind listening tests, I believe they are fundamentally flawed in that they seek to turn an emotional experience—listening to music—into an intellectual exercise. It is well documented that musical perception takes place in the right half of the brain and analytical reasoning in the left half. This process can be observed through PET (Positron-Emission Tomography) scans in which subjects listening to music exhibit increased right-brain metabolism. Those with musical training show activity in both halves of the brain, fluctuating constantly as the music is simultaneously experienced and analyzed (footnote 8). Forcing the brain into an unnatural condition (one that doesn't occur during normal music listening) during blind testing violates a sacrosanct law of science: change only one variable at a time. By introducing another variable—the way the brain processes music—blind listening tests are rendered worthless.

This intellectualization of the musical experience obliterates the emotional responses where the real differences between components are revealed. Taking this train of thought further, I quote from Zen and the Art of Motorcycle Maintenance:

"He simply meant that at the cutting edge of time, before an object can be distinguished, there must be a kind of nonintellectual awareness, which he called awareness of Quality. You can't be aware that you've seen a tree until after you've seen the tree and between the instant of vision and instant of awareness there must be a time lag. We sometimes think of that time lag as unimportant. But there's no justification for thinking that the time lag is unimportant—none whatsoever.

"The past exists only in our memories, the future only in our plans. The present is our only reality. The tree that you are aware of intellectually, because of that small time lag, is always in the past and therefore is always unreal. Any intellectually conceived object is always in the past and therefore unreal. Reality is always the moment of vision before the intellectualization takes place. There is no other reality. This preintellectual reality is what Phaedrus felt he had properly identified as Quality. Since all intellectually identifiable things must emerge from this preintellectual reality, Quality is the parent, the source of all subjects and objects.

"He felt that intellectuals usually have the greatest trouble seeing this Quality, precisely because they are so swift and absolute about snapping everything into intellectual form."



Footnote 7: "Do All Amplifiers Sound the Same?," Stereo Review, January 1987.

Footnote 8: This information comes from an excellent article called "Toward a Biology of Music," by Andrew Stiller, The Audio Amateur, No.1 1990. Ed Dell's introduction is also insightful.
