Ed Meitner: Audio Maverick

Meitner: When you record something as simple as an electric guitar with a standard microphone, and you realize you can't even play it back properly through a 10" speaker even though the sound was produced by a 10" speaker, it makes you wonder what is missing. That sound is a simple event. It doesn't take big technology to pick that up. There's nothing mysterious about what's coming out of this 10" speaker. There won't be anything fast in there. There won't be any complex events happening. It will probably be a bandpass of some kind. So why can't we get that back? Close-miked, far-miked, it doesn't make any difference. When you can't reproduce that with the same intensity or vibrancy, you've got to think that something is getting lost.

We're recording the wrong part of the signal. Many recordings sound okay as far as timbre, but the intensity part is missing—the part that gives you the real feeling. Once we capture that, a lot of problems down the line will go away.

The proof of the pudding is the Brüel & Kjaer paper (footnote 1). The trick is to take two pressure transducers and space them at a precise distance. You then know the time between signals from the transducers, and from that you can calculate velocity. From there you perform a matrixing function to recover the [intensity] signal. It would be a piece of cake in DSP. This would prove the point of what we've been missing in recorded sound.
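To make the idea concrete, the two-transducer (p-p) intensity method Meitner describes can be sketched in a few lines of Python. This is an illustration only, not code from the Brüel & Kjaer paper; the air-density value, the simple rectangular integration, and the function and variable names are assumptions. It estimates particle velocity from the pressure gradient between the two capsules via Euler's equation, then averages pressure times velocity to get intensity.

```python
import numpy as np

def sound_intensity(p1, p2, spacing, fs, rho=1.21):
    """Time-averaged sound intensity (W/m^2) from two closely spaced
    pressure signals, using the two-microphone (p-p) method.

    p1, p2  : pressure signals in pascals from the two transducers
    spacing : distance between the transducers in metres
    fs      : sample rate in Hz
    rho     : density of air in kg/m^3
    """
    dt = 1.0 / fs
    p = 0.5 * (p1 + p2)               # pressure at the midpoint of the probe
    grad = (p2 - p1) / spacing        # pressure gradient along the probe axis
    u = -np.cumsum(grad) * dt / rho   # particle velocity via Euler's equation
    return np.mean(p * u)             # average of instantaneous intensity
```

The transducer spacing is also what limits the bandwidth Meitner mentions next: once the wavelength gets close to the spacing, the finite-difference estimate of the gradient falls apart, so the probe trades high-frequency reach for low-frequency accuracy.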

When you go through the paper, you see that the bandwidth of this system is only 8kHz. But what we really want is the part that makes up the intensity of music, not extended frequency response. On a sinewave, it makes no difference at all. But on a transient, it makes a big difference. The intensity part of the wave is what we want; the pressure part is what we get.

With standard microphones, we're recording the pressure part of the wave, which doesn't give us any information about the intensity. Say you listen to a sinewave by itself at a low sound pressure and it doesn't sound very loud. Then you start to add a higher-frequency event in the velocity region, like at zero crossing. You keep the level so low that it doesn't show up in the pressure part. The intensity of the sound will increase. That's the part that we're not recording.

In playback, we're trying to get the intensity part, so we turn up the playback to a volume that has nothing to do with the natural level. The discrepancy between them is as much as 15 or 20dB. It's like we're listening to the real part when what we want is the imaginary part, but we don't get it.

Harley: Is this why you think recorded music is never mistaken for live music?

Meitner: That's a big, big reason for it.

Harley: So you're working on making a whole recording chain based on these principles?

Meitner: Oh yeah. We hope to do some tests in the coming year and make a recording that will prove the technique. I'd like to expose this principle that may bring recordings a little closer to reality.

Harley: I thought your paper on LIM and the LIM Detector (footnote 2) was a real breakthrough. Yet very little attention was paid to the paper; it was almost ignored at the 1991 AES convention. Did you feel that way about it?

Meitner: Well, I figured that it was like most new things. I remember when Matti Otala came out with TIM [Transient Intermodulation] distortion, he pretty much got the same reception. If you keep supplying more and more data, it slowly starts to sink in. The AES is not an organization that will quickly latch on to something. It is a committee-driven organization. No, I wasn't surprised.

Harley: Why has it taken so long for awareness of jitter to develop? We've had digital audio now for 15 years, and it's only recently that jitter has been discussed.

Meitner: Wasn't there also the perception in the analog world that phase response didn't matter? I think those two things are born out of the same mentality: Phase response doesn't matter, group delay doesn't matter, but frequency response—20Hz–20kHz—matters a lot. It's analogous to when you shift from talking about amplitude response into time response. A lot of people don't know about it or don't want to know about it.

I can see a piece of audio equipment being specified conventionally in terms of amplitude dynamic range, but also in terms of its dynamic range in time resolution, which probably makes a lot more sense to our hearing mechanism. And we should see 120dB figures there.
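One way to put a rough number on that idea (the arithmetic here is an illustration, not something Meitner specifies): for a full-scale sinewave at frequency f, rms sampling jitter t_j limits the achievable signal-to-noise ratio to about -20·log10(2π·f·t_j). A short Python sketch:

```python
import math

def jitter_limited_snr(freq_hz, jitter_rms_s):
    """SNR ceiling (dB) that rms sampling jitter imposes on a full-scale sine."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

def jitter_for_snr(freq_hz, snr_db):
    """Rms jitter (seconds) needed to keep that ceiling at or above snr_db."""
    return 10 ** (-snr_db / 20) / (2 * math.pi * freq_hz)

# A 120dB "time-resolution dynamic range" on a 20kHz full-scale tone
# implies rms jitter of roughly 8 picoseconds.
print(jitter_for_snr(20_000, 120))   # ~8e-12
```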

Harley: How do you feel about the standard measurements used today to evaluate audio equipment?

Meitner: They're very inadequate. We should at least understand the blindness of a lot of these measurements. They're all good in one certain area, but they're blind to many other things.

IMD [intermodulation distortion] was a measurement for in-band distortion. It was meaningless to look at harmonic distortion because the harmonics were above the passband. So you developed IMD measurements that would allow you to measure in the passband. There is your warning sign: why do we need both? What does the one tell us that the other doesn't?
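As a rough sketch of the kind of in-band, two-tone test he's referring to, here is a simplified SMPTE-style IMD measurement in Python. The 60Hz/7kHz tone pair, the 4:1 amplitude ratio, and the `device` callable are illustrative assumptions, and the analysis is stripped down to the first two sideband pairs around the high tone.

```python
import numpy as np

def smpte_imd_percent(device, fs=48000, dur=1.0, f_low=60.0, f_high=7000.0):
    """Drive `device` (any callable mapping input samples to output samples)
    with a 60Hz + 7kHz two-tone at a 4:1 amplitude ratio, then compare the
    sideband energy around 7kHz with the 7kHz carrier itself."""
    t = np.arange(int(fs * dur)) / fs
    x = 0.8 * np.sin(2 * np.pi * f_low * t) + 0.2 * np.sin(2 * np.pi * f_high * t)
    y = np.asarray(device(x))

    window = np.hanning(len(y))
    spec = np.abs(np.fft.rfft(y * window))
    freqs = np.fft.rfftfreq(len(y), 1.0 / fs)

    def amp(f):
        # amplitude of the FFT bin nearest to frequency f
        return spec[np.argmin(np.abs(freqs - f))]

    carrier = amp(f_high)
    sidebands = [amp(f_high + n * f_low) for n in (-2, -1, 1, 2)]
    return 100.0 * np.sqrt(sum(a * a for a in sidebands)) / carrier
```

Something like `smpte_imd_percent(lambda x: np.tanh(2 * x) / 2)` shows how even a mild soft-clipping stage produces difference products that land squarely in the passband, which is exactly what the measurement was invented to catch.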

So we look at the imprecisions in measurements. We know that amplitude precision is covered 100%. Frequency response is covered 100%. But time response is not covered at all. So what's the obvious solution? Let's look at it. I'm sure this way of thinking was instrumental in me trying to go after jitter.

Harley: How important is it to you to quantify through measurement new tricks that make an audible improvement in a design? Is it important to measure that difference, or can you just accept that it sounds better and move on?

Meitner: I'd like to clarify that a little better. If someone says to me that something measures terribly, I'd first like to know how it measures terribly and still sounds good. There's got to be a reason for it. And we're not that far away from correlating a measurement to what we hear.

We've taken some measurements to the ridiculous limit. I'm sure that no one hears the difference between 0.001% and 0.1% distortion—we know that. But it really depends on what the measurement is and what we hear. And when we get the measurements that are relevant or based on timing—phase, group delay, or jitter—then yes, I would like to know what the measurement is and be happy if it correlates to good sound. If it doesn't correlate to good sound, I would look somewhere else. To me, it's important that it measure well according to all common measurements. But it has to measure well in terms of my criteria for measurement, and of course, it has to sound good. Then I feel comfortable about the design.

Harley: Where do you think digital audio sound quality stands in relation to analog?

Meitner: Delta-sigma [oversampling] converters get very close, and in a lot of ways exceed analog. But having said that, there were fleeting magic moments with a turntable when the cartridge was just right. But the next day it may not do the same thing anymore. At least with digital we can have more stability and repeatability.

What's sad about it is that a lot of the digital transfers on CDs were done before the advent of delta-sigma converters and have gone through some horrible, horrible processing. I have a feeling that this is part of history lost.

Harley: The poor digital transfers may be the only surviving sources for future generations. The analog tapes may be lost or destroyed.

Meitner: That's right. And recordings made digitally still went through brickwall filters and other stuff. That's the grief of digital audio. But the future is quite rosy.



Footnote 1: Svend Gade, "Sound Intensity, Part 1: Theory," and "Sound Intensity, Part 2: Instrumentation and Applications."

Footnote 2: "Time Distortions Within Digital Audio Equipment Due to Integrated Circuit Logic Induced Modulation," presented at the October 1991 AES convention in New York.
