The Great Distortion Delusion

Hey, kids, here's the Big News. We've been deluding ourselves all along, worrying about piddling little bits of distortion that we can't hear at all. How's your preamp distortion? 1% at 1 volt out? You have a perfect preamp—a veritable straight wire with gain! That ear-shattering shrillness is all in your mind, because it has now been demonstrated that the human ear cannot perceive distortion levels of less than 6–12% on "normally complex music." If you think you can hear 0.1%, you are deluding yourself.

That, believe it or not, is the gist of an article by Robert Carver of Phase Linear Corp., in the May 1973 issue of Stereo Review.

Unfortunately, Mr. Carver was not just indulging in wishful thinking. (Phase Linear makes amplifiers and preamps!) He was reporting on the results of some listening tests that he conducted in collaboration with four golden ears from Stereo Review's staff: Julian Hirsch, Larry Klein, Ralph Hodges, and Craig Stark.

In a nutshell, here's how they went about it. Mr. Carver had had a Phase Linear 400 amplifier rigged up with a switch that introduced predetermined amounts of "notch" ("crossover") distortion to its sound. First, this group of stalwart listeners compared the Phase Linear, at the "no-distortion" setting, with "an excellent 300W amplifier (made by a competing manufacturer)," and assured themselves that "there was absolutely no audible difference" between them. Then they started the distortion tests.

First, they listened to a 60Hz tone, and found they could "just barely" detect 0.15% distortion. Yoicks!

Next, they mixed 60Hz and 7kHz, and distortion became "obvious" at 2.5%. With a mix of 60Hz, 3kHz, and 7kHz, it took 4% distortion to be audible. Then they tried some music.

On solo voice, it took 6% distortion to be audible, and on percussion instruments, they managed to get the distortion up to 12% before it "began to affect the sound..."

Their conclusion, which they then proceeded to "prove," was that the simpler the sound, the more audible the distortion, and conversely, the more complex the sound, the more distortion was needed in order to be audible.

The implication of this is more ludicrous than mind-boggling. What these clowns are saying, in effect, is that if you have a signal of sufficient complexity—a passage with full chorus and orchestra, for instance—the ear will become almost oblivious to distortion of any magnitude. And vanishingly low distortion is meaningful only if you wish to listen at length to sinewaves.

To those of us who know we can perceive very small amounts of distortion (and can hear differences between power amplifiers, too), all of this has a slightly Alice-In-Wonderland quality to it. The baby is really a squalling pig, and the smile lives on after the cat has gone. Things are not as they seem; perceived reality is self-delusion. In this case, though, it is Mr. Carver and his panelists who have managed to do a first-rate job of deluding themselves.

To begin with, it was obvious that the panelists, and apparently Mr. Carver too, undertook the tests with some strong preconceptions as to the outcome. The manner in which it was reported, almost smugly, that there was no audible difference between amplifiers at the outset suggests that, as a group, they were already unconvinced of anyone's ability to hear such things. This attitude, it seems to us, would tend to cast doubts on the impartiality of the listening panelists, as well as on their hearing acuity.

Perhaps it was for that reason that they chose, consciously or not, to employ experimental procedures that were of dubious validity from the start.

For example, the test tones that were used were not related to one another in any musical manner. They bore no more harmonic relationship than the components of white noise, which is the worst signal source one can possibly use for detecting distortion. There must be a harmonic and harmonious relationship between tones—as there is in musical sounds—in order for the ear to perceive the inharmonic products of amplifier distortion.

They were right when they attributed the rise in perceptible-distortion threshold with increasing signal complexity to masking effects, but they were wrong in assuming that the same thing happens when an amplifier is reproducing musical sounds.

In complex musical material, the vast majority of signal information at a given instant comprises overtones, many of which are weak enough in comparison with the fundamental tones that they should, according to Mr. Carver's observations, be effectively masked by the fundamentals. They are not, and the reason they are not is because they occupy so much more of the audio spectrum than do the fundamentals. Yet most of these overtones are mathematically related in frequency to the fundamentals of the instruments producing them, which is why live musical sound, even at tremendous volume levels, is so readily accepted as "pleasant" to our ears.

But add a little distortion—a very little distortion—to the sound of a full orchestra going full-tilt, and see what happens. Intermodulation produces sum-and-difference tones, most of which are not harmonically related to the fundamentals. The result is what the ear interprets as dissonance. The sound becomes irritating. Harmonic distortion adds new overtones that were not there originally. The sound becomes brighter and "hotter."
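To make this concrete, here is a minimal sketch in Python (with NumPy) of where those new components land. The two tone frequencies and the gentle second- and third-order curvature standing in for an amplifier's transfer characteristic are illustrative assumptions on my part, not figures from Mr. Carver's tests.

```python
import numpy as np

fs = 48000                          # sample rate, Hz
t = np.arange(fs) / fs              # one second of signal
f1, f2 = 440.0, 1000.0              # illustrative tones, not Carver's test frequencies

clean = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# Mild second- and third-order curvature standing in for amplifier nonlinearity
distorted = clean + 0.02 * clean**2 + 0.01 * clean**3

window = np.hanning(len(distorted))
spectrum = np.abs(np.fft.rfft(distorted * window))
freqs = np.fft.rfftfreq(len(distorted), d=1.0 / fs)

# List every spectral peak within 80dB of the strongest component
floor = spectrum.max() * 1e-4
for i in range(1, len(spectrum) - 1):
    if spectrum[i] > floor and spectrum[i] >= spectrum[i - 1] and spectrum[i] > spectrum[i + 1]:
        print(f"{freqs[i]:7.1f} Hz  {20 * np.log10(spectrum[i] / spectrum.max()):6.1f} dB")
```

With these coefficients the printout shows energy at 880, 1320, 2000, and 3000Hz (the new harmonics) and at 120, 560, 1440, 1560, 1880, and 2440Hz (the sum-and-difference products), none of which is present in the clean signal.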

Mix in a typical amount of disc mistracking distortion, and the high-frequency breakup energy intermodulates against the program frequencies and splatters the distortion all the way down through the range below it.

Why didn't Mr. Carver's group observe this? Because the "musical" material they chose to listen to was either inharmonic in structure (percussion) or was simple enough (voice) to be relatively unaffected.

It would seem to us that the project was improperly conducted from the start. Instead of trying to prove that a thing cannot be done (ie, hearing minuscule amounts of distortion), they should have challenged some of the people who claim they can do it to prove that they can. Otherwise, the "experiment" becomes no more meaningful than a group of deaf people proving to their mutual satisfaction that, although sounds can be measured, people who claim to hear them are deluding themselves.

Who's deluding whom?—J. Gordon Holt

Postscript: To allow audiophiles to test for themselves how much distortion of various kinds they can hear on pure tones, I included a series of tracks on Stereophile's Test CD 2. Tracks 21, 22, and 23 on this CD allow the listener to compare different levels of second-, third-, and seventh-harmonic distortion superimposed on a pure 500Hz tone, so that they can test for themselves how much of each kind of distortion they can hear. In each track, the distortion steps down from 10% to 0.1% (0.03% in the case of the seventh harmonic), with each step-down in distortion alternating with the pure tone.
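For readers who would rather roll their own test tones, here is a minimal sketch in Python (with NumPy). The function and its parameters are illustrative assumptions, not the process used to create the CD, and it takes the stated percentage to mean the harmonic's amplitude relative to the fundamental's.

```python
import numpy as np

def tone_with_harmonic(fundamental=500.0, harmonic=2, percent=1.0,
                       fs=44100, seconds=2.0):
    """Pure sinewave plus one harmonic whose amplitude is `percent` percent
    of the fundamental's (illustrative only; not the actual Test CD 2 signals)."""
    t = np.arange(int(fs * seconds)) / fs
    fund = np.sin(2 * np.pi * fundamental * t)
    overtone = (percent / 100.0) * np.sin(2 * np.pi * harmonic * fundamental * t)
    signal = fund + overtone
    return signal / np.max(np.abs(signal))   # keep peaks below full scale

# 500Hz with 0.1% third-harmonic distortion, matching the CD's lowest third-harmonic step
low_step = tone_with_harmonic(fundamental=500.0, harmonic=3, percent=0.1)
```

Alternating the returned tone with the pure fundamental at matched levels approximates, in rough outline, the structure of the tracks described above.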

The CD also has tracks where the pure tone can be compared with typical tube amplifier distortion, with typical solid-state amplifier distortion, and with typical panel speaker distortion.

Test CD 2 can be purchased from this website's secure e-commerce page.—John Atkinson
