Super Audio CD: The Rich Report

Compounding the inherent data inefficiency of a delta-sigma datastream is the fact that lossless coding of DSD is less effective than the MLP coding used by the PCM-based DVD-A. Because of all the high-frequency energy in the stream (energy that would be rejected by the decimators used in a PCM-based system), the DSD data are less correlated and therefore pack less efficiently.
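For context, here is a minimal sketch (no compression is modeled; the 64 x 44.1kHz DSD rate and the 24-bit/96kHz DVD-A figure are the standard ones) of the raw per-channel data rates the two lossless coders have to work with:

```python
# Raw, uncompressed per-channel bit rates for DSD (SACD) and a typical
# DVD-Audio PCM format. Compression (DSD lossless packing vs. MLP) is
# deliberately not modeled; the point is only the raw-data context.

dsd_rate_hz = 64 * 44_100          # DSD runs at 64x the CD sample rate
dsd_bits_per_sample = 1
dvda_rate_hz = 96_000              # a common DVD-A choice: 24-bit/96kHz
dvda_bits_per_sample = 24

dsd_bps  = dsd_rate_hz * dsd_bits_per_sample     # 2,822,400 bits/s
dvda_bps = dvda_rate_hz * dvda_bits_per_sample   # 2,304,000 bits/s

print(f"DSD:   {dsd_bps:,} bits/s per channel")
print(f"DVD-A: {dvda_bps:,} bits/s per channel (24/96)")
print(f"Ratio: {dsd_bps / dvda_bps:.2f}x")
```

The raw rates are comparable, which is exactly why the relative effectiveness of the packing matters.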

Why would Sony and Philips introduce a medium that, while offering higher resolution than CD, uses an inefficient coding scheme, needs great care to avoid stability problems with that coding, and requires analog low-pass filtering to ensure there are no compatibility problems with the playback system? Because, they claim, the DACs to decode the stream can be made cheaply, and it might be possible to pass the datastream all the way to the power amplifier.

Nice marketing talk, but this can never be allowed to happen: If the 1-bit stream came out of the box, it could be perfectly copied. It is important to understand that the data stored on a SACD are encrypted, and that the encryption must be reversed before the 1-bit stream can be sent to the D/A converter. Sony's first SACD players, the SCD-1 and SCD-777ES, have separate chips for their digital operations and their DAC stages, and their product literature discusses the advantages, beyond simple cost considerations, of putting these functions on separate chips.

We can guess, however, that the goal for SACD players is for the LSI chip that removes the encryption to also produce the analog output. That way, the in-the-clear datastream need never be taken outside that chip. But this is hardly a low-cost solution: analog and digital electronics must be made to coexist on the same chip, which adds manufacturing complexity. The problems Sony now overcomes by using separate chips will have to be solved in some other way, or the resulting degradation in performance will have to be lived with.
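As a rough illustration of that design goal (this is not Sony's silicon or firmware, which are not public; every name and interface here is hypothetical), consider a sketch of the pattern: decryption and conversion live behind a single interface, and the decrypted 1-bit stream is never handed back to the caller.

```python
# Hypothetical sketch: decryption and D/A conversion encapsulated in one
# module, standing in for a single LSI. The decrypted 1-bit stream exists
# only inside the object and is never returned to the caller.

class SingleChipDecoder:
    def __init__(self, key: bytes):
        self._key = key                      # decryption key (hypothetical)

    def _decrypt(self, encrypted_block: bytes) -> bytes:
        # Placeholder for the real cipher; a repeating-key XOR is used only
        # to keep the sketch self-contained.
        keystream = self._key * (len(encrypted_block) // len(self._key) + 1)
        return bytes(b ^ k for b, k in zip(encrypted_block, keystream))

    def play(self, encrypted_block: bytes) -> list[float]:
        clear = self._decrypt(encrypted_block)   # never leaves this method
        # "Analog output": a crude 1-bit-to-level mapping standing in for
        # the DAC and low-pass stage. Only this result crosses the boundary.
        out = []
        for byte in clear:
            for bit in range(8):
                out.append(1.0 if (byte >> (7 - bit)) & 1 else -1.0)
        return out   # think of this as the analog pin, not a digital port

dac = SingleChipDecoder(key=b"\x5a\x3c")
samples = dac.play(encrypted_block=b"\x12\x34\x56\x78")  # only the "analog" result is exposed
```

The point of the pattern is simply that nothing outside the module ever receives the decrypted bitstream; putting the analog stage on the same die is what would make that enforceable in hardware rather than merely in firmware.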

Note that Sony understands these problems clearly, and discusses them in depth in its current literature on the SCD-1 and SCD-777ES. It points out problems with a 1-bit system on what it calls the amplitude and time axes, and discusses complex and expensive solutions to overcome them.

One very significant problem Sony does not discuss is the aliasing of the very-high-level noise near half the sampling rate back into the audio baseband. To approach the theoretical performance of the seventh-order modulator (true 20-bit resolution), interfering clock signals on the reference voltages must be attenuated by as much as a million-fold. This is clearly impossible. To prevent noise or tones in the high-frequency region of the 1-bit spectrum from being folded into the audio baseband, op-amp performance must also be exceptional: nonlinearities in the op-amps, including those caused by slewing, can cause this folding.
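The million-fold figure is just the linear-ratio arithmetic behind 20-bit resolution (ignoring the small quantization-noise correction); a quick check:

```python
import math

bits = 20
# Each bit of resolution is worth about 6.02dB of dynamic range.
dynamic_range_db = 20 * math.log10(2 ** bits)    # ~120.4 dB
required_attenuation = 2 ** bits                  # 1,048,576: about a million-fold

print(f"20-bit dynamic range:      {dynamic_range_db:.1f} dB")
print(f"Linear attenuation needed: {required_attenuation:,}x")
```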

So if circuit complexity is not necessarily reduced by storing DSD-encoded data on a SACD, and there are no cost savings to be found, then why, again, would Sony and Philips do this? Here is my guess, informed by some deep-background information given to me by a Sony employee:

Why use a 1-bit system rather than LPCM? SACD advocates like to point out that the digital signal path is significantly simplified by the elimination of the decimator, interpolator, and DAC noise-shaper. However, this may be a cover for the real reason for moving to SACD: storing the data as a single-bit datastream allows better copy protection than PCM does. Because all bits are equally important in DSD (unlike in PCM, where the MSB matters most and the LSB least), intentional bit errors, properly executed, would be inaudible but detectable. Perhaps they can even be made detectable after the signal has been copied through analog systems. These bit errors could be SACD's equivalent of the perceptually coded "watermark" so often talked about in connection with DVD-A, or the scheme could be something else entirely. (Information is impossible to come by.)
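To make the idea concrete, and only the idea, since no details of any actual SACD scheme are available, here is a toy sketch of how keyed bit errors in a 1-bit stream could be detected statistically. Everything in it (the function names, the number of marked bits) is invented for illustration.

```python
# Toy illustration only: keyed bit-flips in a 1-bit stream, detected by
# checking agreement with a key-derived pattern. Not any real SACD scheme.
import random

def embed(bits: list[int], key: int, n_marks: int = 200) -> list[int]:
    rng = random.Random(key)
    marked = bits.copy()
    for _ in range(n_marks):
        pos = rng.randrange(len(bits))   # key-selected position
        val = rng.randrange(2)           # key-selected target value
        marked[pos] = val                # force the bit: an intentional "error"
    return marked

def detect(bits: list[int], key: int, n_marks: int = 200) -> float:
    rng = random.Random(key)             # same key -> same positions/values
    hits = 0
    for _ in range(n_marks):
        pos = rng.randrange(len(bits))
        val = rng.randrange(2)
        hits += (bits[pos] == val)
    return hits / n_marks                # ~0.5 without the mark, ~1.0 with it

stream = [random.getrandbits(1) for _ in range(1_000_000)]  # stand-in for DSD data
print(f"unmarked agreement: {detect(stream, key=1234):.2f}")                    # about 0.50
print(f"marked agreement:   {detect(embed(stream, key=1234), key=1234):.2f}")   # about 1.00
```

That is the asymmetry that makes the approach attractive: each forced bit is a single-sample disturbance in a heavily oversampled stream, far below audibility, yet a detector that knows the key sees agreement well above the 50% expected by chance.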
