Working in the Front Line

Metallurgy: Many establishment audio engineers consider that Ohm's Law is wholly sufficient to describe current flow in a wire, and that all metallic conductors must therefore sound the same, owing to the fundamental property of free-electron mobility in this class of material. However, there is now strong evidence that the choice of element or alloy for a conductor, its metallurgical history, and its absolute purity all affect the sound quality. This finding, unwelcome as it is to many working in the field, cannot be ignored. It seems a cruel twist of fate that of the many conducting materials tried, high-purity silver sounds the most accurate, for it costs approximately 100 times as much as the substantially effective and most widely used alternative: copper.

Some physicists approached on this subject have invoked quantum theory to analyze the behavior of metallic conductors in varying states of practical purity, particularly with respect to the boundaries between metallic crystals.

Geometry: The physical design of a cable is a variable which affects sound quality. There is a strong association between a balanced, symmetrical twisted-pair or twisted-quad construction and a sound quality judged superior to that of a coaxial construction. The form of the conductor also matters: whether it is solid-core or stranded. Generally, the single strand is preferable unless the stranded wire is of unusually high purity and its strands are bound in intimate electrical contact.

Cable assessment
For cable assessment, the reference should be taken to be an absence of cable in that particular link, achieved by positioning the program source very close to the next unit and joining them with pure-silver wire links barely 20mm long. The cables under test are substituted for this near-perfect link and their negative sound-quality characteristics assessed. In a recent test (footnote 5), 50 interconnect cables were successfully assessed subjectively using the single-presentation method. Occasional returns to the reference helped refresh the memory, while repeats constituted 25% of all the tests and gave reasonable confidence in the reliability of the judgments.
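The repeat trials described above amount to a simple consistency check: if a listener's repeat score for the same cable lands close to the original, the judgment can be trusted. A minimal sketch of that check, with entirely invented scores (the article does not publish raw data), might look like this:

```python
# Hypothetical illustration of the repeat-reliability check described in
# the text. All scores below are invented; only the use of repeated
# presentations (25% of trials) comes from the article.

def repeat_reliability(first, repeat, tolerance=0.5):
    """Fraction of repeated trials whose score falls within `tolerance`
    points of the original score for the same cable."""
    assert len(first) == len(repeat)
    agree = sum(1 for a, b in zip(first, repeat) if abs(a - b) <= tolerance)
    return agree / len(first)

# Invented 10-point sound-quality scores for five cables, scored twice.
first_pass  = [6.0, 7.5, 5.0, 8.0, 6.5]
second_pass = [6.5, 7.0, 5.5, 8.0, 7.5]

print(repeat_reliability(first_pass, second_pass))  # 0.8
```

A high agreement fraction is what justifies the article's claim of "reasonable confidence in the reliability of the judgments"; a low one would flag listener fatigue or guesswork.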

Conclusions
This article has barely touched on the wide scope of the judgment of sound quality of audio components for review. While it is readily acknowledged that the bulk of the listening tests mentioned are not based on established "scientific" procedure, control methods have been used as far as is possible. However, if "scientific" methodologies had been adopted, many of these results would not have been observed—not, as the cynics would have you believe, because the differences do not exist, but because rigorous subjective testing requires an inordinate time scale, often imposing sufficient stress to desensitize the subjects.

In one well-researched case, however, a pair of good-performing, extensively measured amplifiers was found to be easy to differentiate by ear under normal review conditions, one being clearly more accurate than the other (footnote 6). A single-presentation test was subsequently devised for a meeting of the London AES, where a large number of listeners (90 or so) participated in a controlled listening experiment to see a) whether the two amplifiers could be differentiated, and b) whether one was preferred to the other. The judgment method required the audience to score each presentation as a new trial; these scores constituted the database. On first publication of the results (footnote 7), some colleagues helpfully pointed out certain analytical weaknesses (footnote 8).

Sufficiently good data were obtained, however, for a statistician to confirm the validity of the test and find that while the aural sensitivity of the unscreened AES members under the difficult conditions of a public meeting was not very good, they nonetheless were able to collectively discriminate, and moreover did prefer one amplifier to the other. This agreed with the original review findings. The test was exhaustively researched with regard to load matching, absolute level, and the like. CD was the program source, and no switch box was involved.
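The statistician's question here is essentially binomial: if the listeners could not discriminate at all, their pooled responses would behave like fair-coin tosses, so the evidence for discrimination is the probability of the observed success count arising by chance. A sketch of that calculation follows; the counts are invented for illustration (the article reports only that roughly 90 listeners took part, not the raw tallies):

```python
# Illustrative only: the kind of binomial significance calculation a
# statistician might apply to pooled listening-test responses. The
# counts (58 of 90) are invented; the article does not publish them.

from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes in n trials when each trial
    succeeds with probability p (p = 0.5 models pure guessing)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose 58 of 90 pooled responses correctly identified the amplifier.
print(p_at_least(58, 90))  # a small p-value, well under 0.01
```

A p-value this far below the conventional 0.05 threshold is what would let a statistician "confirm the validity of the test" even when individual listeners, under poor public-meeting conditions, perform only modestly above chance.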

Good hi-fi reviewing has moved beyond the basic framework of a comprehensive lab test and engineering analysis coupled with descriptions of finish, facilities, and ergonomics and a cursory listening check to make sure all is in order. Extensive listening work using consistent and methodical techniques, especially numerical scoring, has shown that many engineering factors are responsible for audible changes in reproduced sound quality, not least in absolute merit. Many of these factors are at present dismissed by the electronic and acoustic establishment.

Subjective assessment is a learned skill, one which is greatly helped by a familiarity with and an understanding of music. Frequent acquaintance with live, natural sound is also vital. Such a skill may be used routinely to judge fidelity, without persistent calls to statistically prove the results.

The best high-fidelity products are seen to be the result of an alliance between good scientific engineering and the art of high-quality reproduced music. Top audio designers make no secret of the absolute necessity for them to practice or purchase the skill necessary to judge the sound quality of their creations at every stage, from conception to production.

Fundamental research is necessary to track down and quantify the many causes of sound-quality variations now familiar to dedicated reviewers.



Footnote 5: Martin Colloms "Fifty Cables," HFN/RR, July 1990. See also Martin Colloms "Cable Talk," HFN/RR, June 1987, and "Cable Considerations," HFN/RR, December 1985.

Footnote 6: Martin Colloms, "Pot Pourri," HFN/RR, January 1985.

Footnote 7: Martin Colloms and Rosamund Weatherall, "Amplifiers Do Sound Different," HFN/RR, May 1986; see also Martin Colloms and M.E. Le Voi's further analysis of the London AES amplifier test results, "Views" (Letters to the Editor), HFN/RR, August 1986.

Footnote 8: Notably Stanley Lipshitz in a private communication.
