Bruce-in-Philly
24-bit output
struts

Bruce,

It's a great question and one I have spent a lot of time wondering about. I don't claim to have the complete answer, but here are a couple of thoughts.

Firstly, in an open-systems world there are so many independent variables in the equation (PC, O/S, sound card, media player including arbitrary plug-ins, source file, etc.) that it is extremely difficult to assert for the generic case that a given file is being replayed bit-perfectly. I saw a post on another forum that put it very nicely: "with PCs there are very few absolutes". It is even harder to prove in practice, requiring a fair degree of expertise and specialist equipment, not to mention a lot of time! But as usual, wherever there is an absence of fact, mumbo jumbo races in to fill the vacuum.

Secondly, so few people appear to care! Witness the fact that afaik there isn't a single sound card on the market designed simply to produce an S/PDIF stream with the least amount of interface jitter possible. As you pointed out in your original post, compared to gamers and home-theater users the PC world clearly does not consider audiophiles a significant target market.

It may not be the Rosetta Stone, but there are the beginnings of what looks like a really promising digital audio wiki on the Benchmark Media website. It includes a set-up guide for computer audio and deals with issues like bit transparency and word length, with even some surprising revelations about kmixer! It might go some way towards answering your questions; I found it very useful.

Rather than regurgitate the usual superstitions and half-truths, these folks have taken nothing for granted and actually tested things themselves. For instance, they have measured the bit transparency of a number of media players using an Audio Precision digital signal generator and analyzer. Hallelujah!

Talk about a breath of fresh air. The site is "under construction" so some of the articles are only stubs, but given its solid roots in sound-engineering practice and empirical measurement I think this is a really welcome injection of fact into this murky debate. Way to go Benchmark!

For my own part, I bought a 24/96 capable card in order to:

  • Play back hi res programme like the Linn Messiah I mentioned in this thread.
  • Experiment with upsampling, the benefits of which I became a firm believer in after 7 years with dCS Purcells in my big rig. As I reported in the other thread, I found that upsampling to 96kS/s with the Secret Rabbit Code resampler (a rough sketch of the sort of thing I mean follows below) made a worthwhile improvement to the sound of FLAC-encoded CDs replayed through my Grace m902 DAC. I certainly don't claim to fully understand this curious effect, no matter how many times I reread this paper. I realize that this is pretty much the exact opposite of bit transparency, however it works for me, and at $10 (voluntary donation) it is certainly cheaper to try than a dCS Purcell!
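
For anyone curious to try the same experiment, this is roughly the idea, expressed as an illustrative Python sketch (not my actual setup; it assumes the soundfile and samplerate packages, and the file names are just placeholders):

Code:
import soundfile as sf     # reads and writes FLAC
import samplerate          # Python binding for libsamplerate, a.k.a. Secret Rabbit Code

data, rate = sf.read("ripped_cd_track.flac")        # e.g. a 16/44.1 rip
ratio = 96000 / rate                                # 44.1 kS/s -> 96 kS/s
upsampled = samplerate.resample(data, ratio, "sinc_best")
sf.write("track_96k.flac", upsampled, 96000, subtype="PCM_24")
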
Editor


Quote:
24-bit output
struts

Bruce,

Just saw this piece in the Benchmark Media E-Update from May (Okay, I'm a little behind on my reading).


Quote:

Engineer Q & A - Why do I need a 24-bit USB audio device to play 16-bit 44.1 kHz music files?

The reason is that digital volume controls and digital mixers increase the word-length of the audio. The longer word-length is a result of multiplication and addition. These arithmetic operations produce long word-lengths that must be squeezed back into a shorter word length. Word-length reduction adds noise and/or distortion to the audio. The amount that is added is determined by the output word length.

The noise and/or distortion added by word-length reduction decreases by 6.02 dB for every additional bit that can be retained. Reduction to 16-bits adds 48 dB more noise than reduction to 24-bits. In general, 16-bit word-length reduction is very audible; while 24-bit word-length reduction produces noise levels that are well below audibility.

Our tests show that 24-bit output devices deliver a dramatic improvement in sound quality even when playing 16-bit material. Native USB output devices have had a reputation for poor sound quality. This is primarily due to the 16-bit limitation.

Benchmark's Advanced USB Audio technology breaks the 16-bit barrier and delivers pristine digital audio to the D/A converter. Benchmark's UltraLock
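
(The arithmetic behind the 48 dB figure above, for anyone wondering: the word-length-reduction noise falls 6.02 dB for every extra bit retained, so going from 16-bit to 24-bit output is (24 - 16) x 6.02 dB ≈ 48 dB less added noise.)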

Elk

Bruce,

This is a great question. It is also a great example of why it is initially quite hard to understand how to use a PC as a music server.

I have not seen a good, easy to understand article on how to make sure you are getting a bit-for-bit accurate data stream from a PC's digital output, whether it is S/PDIF, TosLink, USB, Firewire, whatever. All the manufacturers appear to want us to assume that their equipment is perfect. It is digital after all and digital is perfect, right?

JA's quick response is very good. You want to avoid Windows' basic driver, make sure that all "enhancement features", such as equalizers, are switched off, and make sure not to use any form of digital volume control.

Benchmark Media's guide is helpful and may be enough to answer your question on 16-bit transparency.

As to why we care about 24-bit: we are having fun trying to get the best possible sound, of course! Seriously, it matters to me as I record at 24/88.2 (or 176.4) and I want to be able to monitor the sound accurately, without truncation, as well as put together DVD-As that play back everything I was able to capture in the recording. Thus, I care whether I can get all 24 bits out of my computer into a DAC, and out of my DVD player as well. Now if only I were as good a recording engineer as my equipment...

Trivia: I record at multiples of 44.1k as getting the best sound for a CD is most important to me. Those who are most concerned with DVD audio record at multiples of 48k, as this is the standard audio sample rate for DVDs. I don't know the history behind the difference.

I think that a good, solid article on the basics of using a computer as a transport would be great, including affordable, accurate sound cards, suggested players, etc.

Bruce-in-Philly

Sounds like rubbish to me. Again, as you noted, how can anything be "better" than the straight output of what went in, just by adding some zeros? I can believe that interpolating (extrapolating?) a larger word from preceding and subsequent words may actually improve the sound, but I am not sure, given my contradictory experience with it.

My Audio Alchemy DTI Pro32, which I have between the computer and my Accuphase CD player (used only for its DAC), can be set to interpolate to a 24-bit word length. The Accuphase can receive it without truncation. So of course I tried it, and I did not like what it did to the sound. It was definitely doing something, but I did not like it, so I use it in its "straight through" mode.

But I also have experience that tells me it can improve the sound. A few years ago I had the same AA unit, but between an old CAL CD player modded to put out digital and a Sonic Frontiers DAC (I forget the model, but it was their top-end, Stereophile Class A unit). The SF unit could receive a 24-bit word length, and the AA in 24-bit mode did improve the sound. Go figure. From these two experiences, and given that the Accuphase is such a superior-sounding machine, I can only conclude that what poops out the back should be exactly what was fed in. In other words, maybe the bit manipulation was masking or compensating for flaws in the SF unit. I dunno, but I defer to my experience with the superior equipment. Of course I could be wrong, as there are more than a few variables here, and I do want to believe that these inference engines can and should be able to calculate better data, but that is not my experience with my own, better equipment.

Now the AA unit does make a big improvement in sound in my system because of its jitter-reduction function, as that Audigy card apparently puts out quite a bit of jitter. I was shocked, actually, as the improvement was greater than the correction from the old modded CAL unit (at least as I remember). Anyway, jitter is evil and must be addressed in these computer-as-a-transport systems. 24-bit? I just dunno.

struts

Bruce,

I don't doubt your observations but I think your rationale is a bit off. You appear to be mixing up word length (bit depth) and sampling frequency, which are two different things.

Each sample encodes the voltage of the signal at the instant the sample was taken, so the word length defines the accuracy with which the voltage is measured. A longer word is like finer graduations on a ruler. If a distance is only measured to the nearest mile (cf. 16-bit) there is no way on earth to subsequently figure out exactly how many inches (cf. 24-bit) it was.

Neither interpolation nor extrapolation will help here; there is no way that increasing the word length can improve the resolution of the sample post hoc. Note that if the sample is being manipulated, e.g. digital gain or DSP is being applied, the extra 8 bits in a 24-bit word act like "decimal places" which can more accurately capture the result of an arithmetic operation applied to a 16-bit word. The resolution of the sample is not increased, but because the manipulation is applied more accurately it is degraded as little as possible. If the playback chain is completely bit-transparent, however, all you end up with is a payload of 8 zeros at the LSB!
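
To make the "8 zeros at the LSB" point concrete, here is a toy sketch of my own (plain Python, nothing to do with any particular player or DAC; the sample value and the roughly -6dB gain are arbitrary examples):

Code:
# A 16-bit sample carried in a 24-bit word: append eight zero LSBs.
sample_16 = -12345                      # some 16-bit PCM sample value
padded_24 = sample_16 << 8              # bit-transparent "24-bit output"
assert (padded_24 >> 8) == sample_16    # nothing gained, nothing lost

# Now apply a digital volume control of roughly -6 dB (gain of 0.501).
# The exact result falls between integer steps and must be quantized.
gain = 0.501
exact = sample_16 * gain                # not an integer
as_16 = round(exact)                    # quantized to a 16-bit step
as_24 = round(exact * 256) / 256        # quantized to a 24-bit step, 1/256 the size
print(exact, as_16, as_24)              # the 24-bit result sits much closer to exact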

I would still be very interested to hear Benchmark's explanation for asserting that unmanipulated 16-bit programme sounds better played back at 24-bit resolution. I'm not quite ready to call it rubbish, but if true it would certainly debunk all this theory!

Successive samples capture the frequency information, i.e. how the voltage varies with time, and additional samples can indeed be interpolated from their neighbours. Converting to a higher sampling frequency ("up-sampling") cannot add any high-frequency information (other than distortion) to the signal above the Nyquist frequency; for example, a 16/44.1 ADC will have "brick-walled" the original signal somewhere below 22.05kHz. It can, however, help to ameliorate some of the distortion artifacts caused by the real-world imperfections of the DAC filter, such as aliasing and amplitude ripple.

Or, summa summarum, increasing bit depth cannot really help you in a truly bit-transparent replay chain whereas up-sampling potentially can.

The problem with digital audio is just that there are so many moving parts, many of which are inter-dependent, that it is often really hard to tie effect back to cause with any certainty; you can just never eliminate enough of the variables! In other words, your observations could easily be explained by completely different phenomena.

EliasGwinn

Hello! My apologies for the delay in response.

Elk

Thank you, Elias.

Editor

Welcome to our forum, Elias.

John Atkinson
Editor, Stereophile

struts

Thanks for that clarification, Elias; it makes perfect sense now.


Quote:
Ideally, we would want these long words dithered (NOT truncated) to 24-bits and sent along a 24-bit path. Or, if our path is limited to 16-bits, then we would want it dithered to 16-bits.

The problem occurs when the software dithers to 24-bit, but the path is limited to 16-bits.

To any non-gEEks out there curious to understand the concept of dither, I would warmly recommend the Wikipedia piece on dither in digital audio. I have struggled to explain this concept without getting bogged down in the math myself on a number of occasions. IMHO this disarmingly clear and elegant explanation sweeps the board.
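
For anyone who would rather see it than read about it, here is a quick toy demo of my own (NumPy; nothing to do with Benchmark or any particular product): quantize a very quiet tone with and without TPDF dither and compare the error spectra. Without dither the error is correlated with the signal and shows up as distortion; with dither it becomes benign, signal-independent noise.

Code:
import numpy as np

fs = 44100
t = np.arange(fs) / fs
signal = 1.5 * np.sin(2 * np.pi * 1000 * t)       # quiet 1 kHz tone, ~1.5 LSBs peak

undithered = np.round(signal)                     # word-length reduction, no dither
tpdf = np.random.rand(fs) - np.random.rand(fs)    # triangular (TPDF) dither, +/-1 LSB
dithered = np.round(signal + tpdf)                # add dither first, then quantize

# The undithered error spectrum has large spikes at harmonics of the tone
# (distortion); the dithered error is spread evenly as low-level noise.
for name, out in (("undithered", undithered), ("dithered", dithered)):
    err = out - signal
    print(name, np.abs(np.fft.rfft(err)).max())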

Unfortunately not all Wikipedia articles reach the same heights, as in the case of this choice piece of drivel from the entry for Bit Depth:


Quote:
By reducing the bit depth, data is lost in each sample using a method of 'a little of everything'; rather than completely strip away, for example, the dynamic range or the frequency range, a compromise is made and each part of that sample drops an amount of data.

Frequency information encoded in one discrete sample? Hmmm...

Caveat lector!

EliasGwinn

I've edited the Wikipedia entry on bit depth: Bit Depth.

If you get a chance, take a look at it and see if anything is still unclear.

struts

Thanks Elias, your rewrite is perfectly clear. I noticed one minor error which I fixed.

EliasGwinn

Oh yeah! We do have 0's in our decimal system these days, don't we? I remember when we had to use 'aught' before we had 0's.

EliasGwinn


Quote:
Welcome to our forum, Elias.

John Atkinson
Editor, Stereophile

Thanks John! I love 'talking shop', so the pleasure is mine.
