Is It Real? Or Is It...

However, in general, I feel that to deny the importance of an undeniable aspect of live sound—its time integrity across the whole audio band—is to tread on dangerous ground: the transient impact of real sound is always lacking in reproduced sound.

In real life, all the frequency components making up a waveform arrive at the ears at the same time. In the bass, listening tests have shown that changing the time relationship of a low-frequency fundamental with respect to its harmonics—by a high-pass filter, for example—is audible. In fact, KEF's Laurie Fincham has hypothesized that it is the cumulative effect of all the high-pass filters between recording microphone and playback loudspeaker that leads to the fundamental difference between live and recorded sound. At higher frequencies, however, whenever listening tests have been carried out using continuous waveforms to investigate what happens when the time relationship between a fundamental and its harmonics is changed, it is found that people cannot detect even gross phase distortion. If the high-frequency components of a squarewave in the midband are rotated several times through 360°, the waveform is no longer recognizably square, yet the sound is apparently unchanged.
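Readers who want to see this effect for themselves can do so numerically. The following sketch, written in Python with NumPy (tools used neither in the listening tests described above nor in this article, and offered purely as an illustration), builds a midband "squarewave" from its odd harmonics and then rotates each harmonic's phase; the fundamental frequency, harmonic count, and amount of rotation are arbitrary choices for the example. The amplitude spectrum, which is what a wave analyzer responds to, comes out numerically identical in both cases, yet the rotated waveform is no longer remotely square.

```python
import numpy as np

# Build a midband "square wave" from its odd harmonics, optionally rotating
# each harmonic's phase. A textbook square wave has every harmonic at zero
# phase; rotating the phases leaves the amplitude spectrum (what a wave
# analyzer sees) untouched but destroys the square shape of the waveform.
def harmonic_square(f0=500.0, n_harmonics=15, rotation_per_order=0.0,
                    fs=48000, duration=0.02):
    t = np.arange(int(fs * duration)) / fs
    x = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):   # odd harmonic orders 1, 3, 5, ...
        phase = rotation_per_order * k       # higher harmonics rotated further
        x += (4.0 / np.pi) * np.sin(2.0 * np.pi * k * f0 * t + phase) / k
    return t, x

t, square  = harmonic_square(rotation_per_order=0.0)   # recognizably square
t, rotated = harmonic_square(rotation_per_order=2.5)   # gross "phase distortion"

# The amplitude spectra are numerically identical (f0 and duration are chosen
# so each harmonic completes an integer number of cycles, avoiding leakage)...
print("max spectrum difference:",
      np.max(np.abs(np.abs(np.fft.rfft(square)) - np.abs(np.fft.rfft(rotated)))))

# ...but the time-domain waveforms differ grossly.
print("max waveform difference:", np.max(np.abs(square - rotated)))
```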

As it appears that the ear/brain acts as a wave analyzer, not as a waveform-shape detector, am I swimming against the tide of scientific opinion here?

A genuflection to Fourier aside, music is not made up of the continuous waveforms beloved by test-bench engineers. Rather, it consists of an instantaneous sound-pressure level which changes according to the logical demands of two things which have no physical reality: the way in which music is structured in time and pitch, and how that structure is ordered by the composer/musician. The causal relationship between the sound-pressure level at any one instant and the next is thus considerably more complex than that typical of continuous test waveforms.

The waveform which one sees on, for example, an oscilloscope screen, is merely the necessary physical carrier, the continuous-waveform "scaffolding," for the musical reality. Engineers measure the properties of this waveform, assuming that if the carrier is perfect, then so will be the music. But as a carrier, by definition, cannot in itself carry any information, the properties of the music must be more than the properties of the carrier. This will be self-evident if you consider that you could measure every physical aspect of a performance of, say, a Beethoven piano sonata, but even if you spent the next ten years analyzing the data, you would still not know whether the performance was outstanding or merely competent.

I first heard this multidimensional nature of the reproduction of music described by Richard Heyser, loudspeaker reviewer for Audio magazine, President-Elect of the Audio Engineering Society, the onlie begetter of Time Delay Spectrometry, and full-time physicist/mathematician at JPL, who tragically died of cancer in March 1987. I had not known that Heyser was terminally ill when he presented two papers from his hospital bed at the Los Angeles AES Convention last November, but was puzzled by his impatience at the tardiness with which his ideas were being taken up and expanded upon by other workers. Now, sadly, his impatience is only too understandable. A talk presented by Heyser to the London branch of the AES in March 1986 had been a formative experience: I felt like a child who had been led up to a closed door by an adult and then made aware of the existence of incomprehensible wonders on the other side. Richard Heyser will be sorely missed.

In his London talk, Heyser had explained that when it came to correlating what is heard with what is measured, "There are a lot of loose ends!" When it comes to examining why recorded sound is fundamentally different from live sound, I feel that inconvenient loose ends should not be ignored. Live sound is free from high-Q resonant problems, has phase integrity from DC to above the limit of human hearing, is distinguished by unambiguous imaging, and has a dynamic range greater than that of any recording system. When you read that any or all of these things are not important to reproduced sound, I suggest you look at the author's reasons for saying so. Accept it on a temporary basis if there is no alternative—hardly any loudspeakers are phase-coherent, for example—but reserve judgment regarding its ultimate irrelevance.

If there is to be any chance of reproduced sound getting closer to the real thing, we should preserve a healthy scepticism. To paraphrase Arthur C. Clarke, "When a distinguished scientist states that something doesn't matter, he is very probably wrong."
