Autonomic Controls Mirage MMS-5A media server Measurements
I examined the behavior of the Autonomic Controls Mirage MMS-5A's analog and digital outputs with Stereophile's loan sample of the top-of-the-line Audio Precision SYS2722 system (see www.ap.com and the January 2008 "As We See It").
The most important characteristic of an S/PDIF output is that it have low datastream jitter. With streamed data, once jitter is introduced, it can only be low-pass filtered, not eliminated; the lower the level of jitter from the outset, the better. With the MMS-5A's S/PDIF output connected to the SYS2722 with a 6'-long Apature cable with an RCA-BNC adapter at the analyzer end, I got somewhat paradoxical results. (All my test files were uncompressed WAVs.) With 16-bit/44.1kHz J-Test data and the MMS-5A set to output 16/44.1 data, the measured datastream jitter was a high 1.429 nanoseconds (700Hz–100kHz measurement bandwidth). The spectrum of the J-Test signal as transmitted by the MMS-5A and analyzed in the digital domain is shown in fig.1. The noise floor is about 10dB higher than it should be, burying the odd-order harmonics of the low-frequency, LSB-level squarewave, and implying that with the MMS-5A set to output 16-bit data, at least a bit's worth of resolution is lost.
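For readers curious about the stimulus itself, the J-Test signal is conventionally described as a quarter-sample-rate sinewave combined with an LSB-level squarewave at 1/192 of the sample rate. The sketch below shows one way such a 16/44.1 stimulus can be constructed; the −3dBFS tone level and the exact rounding are my assumptions for illustration, not a description of the Audio Precision's own generator.

```python
import numpy as np

FS = 44100                  # CD sample rate, Hz
N = FS                      # one second of samples
n = np.arange(N)

# Quarter-sample-rate tone (11.025kHz at 44.1kHz); -3dBFS is the
# level commonly cited for the J-Test sinewave component.
tone = np.round((32767 / np.sqrt(2)) * np.sin(2 * np.pi * (FS / 4) * n / FS))

# LSB-level squarewave at FS/192 (229.6875Hz): the least-significant
# bit toggles every 96 samples, giving a 192-sample period.
lsb_square = (n // 96) % 2

jtest = (tone + lsb_square).astype(np.int16)
```

Because the tone sits exactly at one-quarter of the sample rate, its samples cycle through a fixed repeating pattern, which is what makes any data-correlated jitter show up as discrete sidebands in the spectrum.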
Resetting the Mirage to output 24/192 reduced the datastream jitter with J-Test data by a factor of three, to just 498 picoseconds. Fig.2 shows the digital-domain spectrum of the Mirage's upsampled data output. The odd harmonics are now clearly defined, with no noise between them, though there is some inconsistency in their levels, which should gradually decline from left to right in this graph. The J-Test signal is very much a worst case, however; with music upsampled to 24/192, the jitter was even lower, at 135.8ps.
With the Mirage's output set to 16/44.1, I then transmitted to the Audio Precision undithered 16-bit data representing a 1kHz tone at exactly −90.31dBFS. The waveform described by the data is shown in fig.3. It should consist of three DC voltage levels, one at 0V and the others at ±1 LSB. Yes, three levels can be discerned, but they are overlaid with noise, thus reducing the resolution. Remember that there is no D/A conversion involved in this measurement; this noise is digitally encoded in the datastream. Resetting the MMS-5A's output to 24 bits didn't change this picture. It looks as if the MMS-5A is adding dither at the 16th bit, for some reason. It was only when I reset the Mirage's output to 24/192 that I got the expected perfect reproduction (fig.4).
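The odd-looking level of −90.31dBFS is not arbitrary: it is the level of a sinewave whose peak amplitude is exactly 1 LSB in 16-bit data, which is why, undithered, it should describe only the three levels mentioned above. A quick sketch of the arithmetic (pure illustration, not part of the Audio Precision's test suite):

```python
import math

# Peak amplitude of 1 LSB relative to 16-bit full scale (2^15 = 32768):
level_db = 20 * math.log10(1 / 2**15)
print(round(level_db, 2))          # -90.31

# Undithered, a 1kHz sine of that amplitude quantizes to just
# the three codes -1, 0, and +1 (here over ten cycles at 44.1kHz):
samples = [round(math.sin(2 * math.pi * 1000 * i / 44100)) for i in range(441)]
print(sorted(set(samples)))        # [-1, 0, 1]
```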
This supports both my listening experience and Autonomic's recommendation that the MMS-5A's digital output be set to the highest possible rate and bit depth. The question then becomes, how transparent is the sample-rate conversion? Looking first at how well the digital output preserved ultrasonic information, I played a series of tones sampled at 192kHz and ranging up to 95.5kHz. All was well. I then played high-level signals, which will tax the Realtek ALC892 audio chip's mathematical precision. Fig.5 shows the audioband spectrum, analyzed in the digital domain, of the Mirage's data output upsampling a 16-bit, 1kHz tone sampled at 44.1kHz to 24/192. The digital-domain noise floor rises a little at frequencies below 8kHz or so, and a pair of sidebands at ±344Hz has appeared around the spectral spike representing the tone. But above 8kHz, the noise has dropped to the correct 16-bit level.
I repeated the test with a worst-case conversion, upsampling to 192kHz a 24-bit, 1kHz tone sampled at 88.2kHz. The result is shown in fig.6. The noise below the original signal's Nyquist Frequency (half the sample rate) has risen roughly to the 16-bit level, and while the noise above 44.1kHz has dropped below −150dBFS, some low-level aliasing tones can be seen. Remember that this analysis is performed directly on the digital data output by the Mirage. The D/A processor downstream that turns these data into analog can't put back the resolution that has been lost upstream.
The final test I performed on the MMS-5A's digital output was a spectral analysis with data representing a dithered 1kHz tone at −90dBFS with 16-bit data, setting the output first to 16 and then to 24 bits, but without upsampling. The spectra shown in fig.7 reveal that with the Mirage's digital output set to 16 bits (cyan and magenta traces), the noise floor is 1 to 2 bits higher than it should be, which is why the HDCD flag was obscured in my auditioning of the Joni Mitchell files. With the digital output set to 24 bits (blue and red), the noise floor lies closer to the correct 16-bit level, though upsampling to 192kHz with a 24-bit version of the same signal (not shown) suggested that the 24th bit was being truncated. This is academic, perhaps, given how low a signal level this represents, but it is still an indication that the MMS-5A's DSP engine has some mathematical limitations.
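To put that "1 to 2 bits" in perspective: each bit of word length corresponds to roughly 6dB of dynamic range, so a noise floor 1 to 2 bits too high sits some 6-12dB above where it should be. The one-line arithmetic (illustrative only):

```python
import math

# Dynamic range gained per bit of word length:
db_per_bit = 20 * math.log10(2)
print(round(db_per_bit, 2))        # 6.02

# So a noise floor "1 to 2 bits higher than it should be"
# is roughly 6dB to 12dB above the ideal 16-bit level:
print(round(db_per_bit, 1), "to", round(2 * db_per_bit, 1), "dB")
```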
Although Autonomic Controls had emphasized that I should use the MMS-5A's digital output, I did look at the performance of the Mirage's "standard grade" analog outputs. These are definitely for utility use only. Set to Fixed Gain, where the maximum output level was 1.2V, sourced from an impedance of 222 ohms at 1kHz, these outputs offer a noise floor at approximately the 14-bit level, a frequency response that rolls off rapidly above 16kHz, and a fairly high level of random low-frequency jitter: fine for background use, but not for critical listening.

John Atkinson