lwhitefl
lwhitefl's picture
Offline
Last seen: 11 years 1 month ago
Joined: Jul 10 2006 - 10:46am
SACD player resolution
Editor
Editor's picture
Offline
Last seen: 13 years 3 months ago
Joined: Sep 1 2005 - 8:56am


Quote:
Is 18 - 19 bits ultimate resolution typical of all current SACD players?

It is extraordinarily difficult to get a player's analog noise floor at a level equivalent to better than 20 bits without heroic measures, such as using very low-impedance circuitry. 20 bits of resolution is about the limit that human hearing can resolve, I believe.

It can be argued that resolution extends beneath the noise floor, that a change in the input data will still produce a detectable change in the output analog signal that will be obscured by noise, but I prefer to take a more realistic view.

John Atkinson
Editor, Stereophile

lwhitefl
lwhitefl's picture
Offline
Last seen: 11 years 1 month ago
Joined: Jul 10 2006 - 10:46am


Quote:
It is extraordinarily difficult to get a player's analog noise floor at a level equivalent to better than 20 bits without heroic measures, such as using very low-impedance circuitry. 20 bits of resolution is about the limit that human hearing can resolve, I believe.

It can be argued that resolution extends beneath the noise floor, that a change in the input data will still produce a detectable change in the output analog signal that will be obscured by noise, but I prefer to take a more realistic view.

John Atkinson
Editor, Stereophile

Then I really don't understand all the emphasis on 24-bit music downloads?

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
Then I really don't understand all the emphasis on 24-bit music downloads?


24-bit provides extra headroom for recording and processing. We really can't use it on playback; the inherent thermal noise of the components in our equipment prevents it.

However, leaving the noise floor and dither in those last four bits may have its own advantages.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

Ok

What is dither ?

Editor
Editor's picture
Offline
Last seen: 13 years 3 months ago
Joined: Sep 1 2005 - 8:56am


Quote:
Then I really don't understand all the emphasis on 24-bit music downloads?

Computers deal with data in 8-bit bytes. So the next increment after 16 bits for data handling and storage is 24 bits. But the bottom 4 bits will most likely be random noise.

However, if you create a 16-bit version of, say, a Linn 24-bit recording by simply chopping off the bottom 8 bits, then subtract that 16-bit file from the original, you are left with just the bottom 8 bits. If you listen to those 8 bits, you hear a noisy version of the music, meaning that the hi-rez file does offer more information than the 16-bit CD version.
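A quick way to try this experiment yourself (a toy Python sketch on a synthetic 440Hz tone, not Linn's actual files; the sample values are made up for illustration):

```python
import math

# Hypothetical 24-bit samples: signed integers in [-2**23, 2**23 - 1].
hirez = [round(3_000_000 * math.sin(2 * math.pi * 440 * n / 96000))
         for n in range(256)]

# "Chop off" the bottom 8 bits to make the 16-bit version (plain truncation).
cd_version = [(s >> 8) << 8 for s in hirez]

# Subtract the 16-bit file from the original: only the bottom 8 bits remain.
residual = [h - c for h, c in zip(hirez, cd_version)]

# The residual is smaller than one 16-bit step but still signal-correlated,
# which is why it sounds like a noisy version of the music.
assert all(0 <= r < 256 for r in residual)
```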

John Atkinson
Editor, Stereophile

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm

Dither is the intentional addition of noise to a signal; http://www.earlevel.com/Digital%20Audio/Dither.html

It has to do with the most significant and least significant bits, and it relies on the concept that while our brains process information in a semi-digital fashion, our ears deliver it in an analog fashion. Dither fools the system into thinking the lowest-level bits (you know, things like nuance, inflection, decay and ambience) are "more significant" than they would be in the live performance or in analog.

When digital systems throw out the last few least significant bits (because they are the least significant and therefore the most difficult for a digital system to retrieve [you think they didn't work long hours coming up with a name that didn't outright say "most difficult to retrieve"?]) you have Redbook CD (whose priorities were written in the mid 1970's at about the same time that the Atari 400 and "Pong" were the dominant digital media) at about 12 bit resolution - not sufficient for high quality music reproduction. Raise the total number of bits and you can afford to throw away those same least significant bits and still arrive at a level that is closer to "high fidelity". How you arrive at those extra bits is the trick here. You can't go back twenty years later and add information that was never there.

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
What is dither ?

When a digital system tries to capture the amplitude of an analog waveform it can only assign a fixed numerical value to it.

By way of example, it can only assign a 1, 2 or 3 but cannot assign 1.3, 2.7 or 3.1.

The waveform, however, has values that fall between those whole numbers. The digital system therefore must round up or down, assigning a whole-number value, as this is all that it can do.

This rounding up and down at the lowest amplitude level of the audio signal leads to errors in the reconstruction of the waveform on playback. Think of trying to represent the gentle curve of a jump rope by laying it on a flight of stairs.

These errors introduce distortion. They are most easily heard on the end of decaying notes and in reverb tails.

Dither is extremely low level random noise that is added to the signal. Our ears find the random noise much less objectionable than the non-random rounding errors. To make the noise less objectionable, the noise is typically frequency limited so that little noise is added where our ears are most sensitive.
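Here is the rounding example in runnable form (a minimal Python sketch with made-up values; TPDF dither is one common choice, not the only one, and real dither is also frequency-shaped as noted above):

```python
import math
import random

def quantize(x, step=1.0):
    """Round a sample to the nearest quantization step -- the 1.3 -> 1 example."""
    return round(x / step) * step

def quantize_dithered(x, step=1.0):
    """Add triangular (TPDF) dither of +/-1 step peak before rounding."""
    dither = (random.random() - random.random()) * step  # zero-mean, triangular
    return round((x + dither) / step) * step

assert quantize(1.3) == 1.0 and quantize(2.7) == 3.0

# A low-level tone spanning only a few steps: undithered rounding error is
# correlated with the signal (distortion); dithered error behaves like noise.
tone = [2.3 * math.sin(2 * math.pi * n / 50) for n in range(500)]
plain_err = [quantize(s) - s for s in tone]
dith_err = [quantize_dithered(s) - s for s in tone]
assert max(abs(e) for e in plain_err) <= 0.5
assert max(abs(e) for e in dith_err) <= 1.5
```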

I hope this makes sense.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

Elk

What I think you are referring to is what's called quantization error in telecomms. It's the error between the resolvable bit level and the instantaneous analog sample. I remember a thread where everyone dismissed digital noise as not existing, and this effectively is what it is. What I'm still not quite understanding is this idea of only having a limited bit capability, i.e. 17 bits or whatever. Surely the more bits, the higher the resolution and therefore the lower the quantization error? The closer you get to the analog sample the better, so how can this be limited?

Alan

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
What I think you are referring to is what's called quantization error in telecomms.


This is exactly the issue.


Quote:
What I'm still not quite understanding is this idea of only having a limited bit capability, i.e. 17 bits or whatever. Surely the more bits, the higher the resolution and therefore the lower the quantization error?


As I understand it this is correct; the greater the bit depth, the lower in amplitude the quantization error sits.

The effective bit depth limitation is based on the inherent thermal noise of the physical components, not on the math.

I don't recall the exact numbers, but we are unable to build circuits that are physically quieter than somewhere around -115 or -120dBFS. This is between 19 and 20 bit resolution (each bit gains a little over 6dB).
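The rule of thumb behind those numbers, in runnable form (the 6.02N + 1.76 dB formula is the standard ideal-converter figure; the -115/-120 dBFS circuit limits are estimates, not measurements):

```python
def dynamic_range_db(bits):
    """Ideal quantization SNR of an N-bit converter: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def bits_supported(noise_floor_dbfs):
    """Invert the formula: how many bits a given analog noise floor can resolve."""
    return (abs(noise_floor_dbfs) - 1.76) / 6.02

# 16 bits gives roughly 98 dB; a -115 to -120 dBFS analog noise floor
# supports roughly 19 to 20 bits, matching the figures above.
```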

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

I'm missing something.

At its fundamental level a bit sample is nothing more than a digital representation of an analog value; the more accurately you can capture this analog value, in theory the closer the reproduced signal is to the original.

If the electronics are not capable of reproducing this level of detail past, say, 20 bits due to thermal noise, then this is an analog limitation, not a digital one. The same argument should apply equally to the original sampling device as to the D-A stage.

This does therefore back up the theory, if I'm understanding it, that 24-bit audio is a scam and the extra resolution over, say, 20 bits has no value, because analog components are not electrically capable of operating with that little thermal noise.

Alan

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

One more thing that has always puzzled me about A-D & D-A: how does a single digital sample contain the harmonics of an analog signal? A bell, for example, is a complex mix of a base frequency with complex harmonics. How do you accurately digitize complex harmonics?

Alan

john curl
john curl's picture
Offline
Last seen: 13 years 2 months ago
Joined: Jan 20 2010 - 8:01am

Alan, have you ever thought of reading up on the subject, first? I am no digital expert, but these problems and trade-offs have been discussed in GREAT DETAIL over many decades.
You are correct, 24 bit is NOT all that it is cracked up to be, because it is so difficult to effectively 'use' those last bits, but it IS better than 16 bit, because it has more to work with, AND it has significantly increased effective high frequency bandwidth that allows harmonics to be effectively recorded and not just chopped off. Is it perfect? No. I am looking toward 32 bit, myself, and 384K sampling, or even better.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
A bell, for example, is a complex mix of a base frequency with complex harmonics. How do you accurately digitize complex harmonics?

Same ol' Alan, always seeing the scam in hifi. Why don't you find yourself another hobby to get pissed about? You've spent your $12.95 on gear, that's all anything is worth to you so move on or finally accept the fact some things do work and are worth money.

jc is right, read about how this is done. Your question assumes the fundamental and harmonics exist as separate signals distinct in time from each other. They are not. Nor does one "bit" represent one frequency.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
If the electronics are not capable of reproducing this level of detail past, say, 20 bits due to thermal noise, then this is an analog limitation, not a digital one.

The electronics don't know analog from digital. They are operating in the digital circuitry; that makes it a digital problem.

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
One more thing that has always puzzled me about A-D & D-A: how does a single digital sample contain the harmonics of an analog signal?

This is non-intuitive, isn't it?

Everything we listen to, no matter how complex, is a single waveform. This waveform may be made up of many different sounds made by many different things from bulldozers to flutes, but it is still a single waveform.

While the waveform may be complex, it can be represented accurately in digital format if enough samples are taken close enough together in time to capture all the little wiggles.

This is analogous to a single vibrating string on a violin. This string is vibrating at many different frequencies simultaneously but it is still a single string that is vibrating.
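A small sketch of the idea (toy Python, with made-up partial frequencies rather than a real bell's or violin's):

```python
import math

def bell_like(t):
    """One 'instrument': a fundamental plus two overtones, summed in the air."""
    return (1.00 * math.sin(2 * math.pi * 440 * t)
            + 0.50 * math.sin(2 * math.pi * 880 * t)
            + 0.25 * math.sin(2 * math.pi * 1320 * t))

fs = 44100
samples = [bell_like(n / fs) for n in range(64)]

# Three frequencies are present, yet each sample is just one number --
# the single combined waveform the ADC sees at that instant.
assert all(isinstance(s, float) for s in samples)
```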


Quote:
This does therefore backup the theory if im understanding that 24 bit audio is a scam and the extra resolution over say 20 bits has no value because analog components are not electrically capable of operating with that little themal noise.


Scam may be too strong a term, but yes, the last four or five bits are buried in noise.

24 bit resolution is great when recording as it provides headroom for the unexpected sudden loud transient. It is also great for post production processing as it provides more "room" for the math.

The same debate exists for sampling rates. Some, such as Michael Grace, posit that we need not sample any quicker than 60,000 samples/second and that 88.2 and 96 are just wasted data.

Some of these numbers (such as 88.2 and 96) became standard simply because they are mathematically easy (twice the 44.1 and 48 used by CDs and DVDs), which was important with the early chips. We could now easily design for sampling at 60 and downconvert to 44.1, but no one will bother as digital storage is now cheap.

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
If the electronics are not capable of reproducing this level of detail past, say, 20 bits due to thermal noise, then this is an analog limitation, not a digital one.

Correct.

We cannot build an analog circuit that will provide an input to an ADC or output from a DAC which is quieter than -120dBFS.

This is also pretty much the theoretical maximum, as it is the movement of subatomic particles that causes the thermal noise. Supercooled circuits are quieter but obviously not practical.
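For a sense of scale, the Johnson-noise formula shows why (the 1 kOhm source impedance, 20 kHz bandwidth, and 2 V rms full scale below are illustrative assumptions, not any particular DAC's figures):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(resistance_ohms, bandwidth_hz, temp_k=300.0):
    """Thermal (Johnson) noise voltage of a resistor: sqrt(4 * k * T * R * B)."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohms * bandwidth_hz)

v_noise = johnson_noise_vrms(1e3, 20e3)   # a 1 kOhm resistor over 20 kHz
dbfs = 20 * math.log10(v_noise / 2.0)     # vs. an assumed 2 V rms full scale
# Roughly -131 dBFS for the bare resistor alone; real circuits with gain
# stages and higher impedances land nearer the -120 dBFS figure quoted above.
```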

It's neat stuff.

john curl
john curl's picture
Offline
Last seen: 13 years 2 months ago
Joined: Jan 20 2010 - 8:01am

I think that a point of confusion here is resolution vs noise. They can be very different things. One rationalization that early digital designers made was that low level information would be masked or lost due to the limitations of the human ear, so the 'subtle' clues that seem to separate mid fi from hi fi are not necessary or important. More working bits gives more resolution, EVEN IF the noise floor is set by other factors.

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am

Thanks, John. Well said.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

JV

This is a polite conversation, Elk understands that im not attacking anything im just stating my thoughts and we are discussing them.

If you want to partake in the polite nature of this thread great otherwise butt out.

Alan

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

You still don't answer the fundamental question of how you record a complex harmonic with a single value?

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
This is a polite conversation, Elk understands that im not attacking anything im just stating my thoughts and we are discussing them.

If you want to partake in the polite nature of this thread great otherwise butt out.

Tell ya'what, Alan, when you stop posting that everything in audio is a scam, I'll start being more polite to you. "Polite" is not something you've done well in the past when I've disagreed with you (even when a half dozen others were disagreeing with you), or do I need to remind everyone of just how crazy you can get? So, let's just keep the "polite" crap to a minimum, eh?

You and Elk can continue since you prefer that he agrees with you and I do not. But this is a forum so take your "butt out" and shove it in your first word there.

Editor
Editor's picture
Offline
Last seen: 13 years 3 months ago
Joined: Sep 1 2005 - 8:56am


Quote:
You still don't answer the fundamental question of how you record a complex harmonic with a single value?

You can't, but no-one is saying you can. It is the _stream_ of sample values that defines the analog waveform. And that single 2-dimensional waveform contains within itself all the fundamentals and harmonics of the original sound in just the same way that the varying pressure wave hitting your ear does.

The limitation of a time-sampled system is that it cannot encode frequencies above half the sample rate, so the original signal must be band-limited to that frequency.
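A toy numerical illustration of that limit (Python, with an arbitrary 1 kHz sample rate chosen for convenience):

```python
import math

fs = 1000  # toy sample rate; the Nyquist frequency is fs / 2 = 500 Hz
low = [math.cos(2 * math.pi * 200 * n / fs) for n in range(50)]
high = [math.cos(2 * math.pi * 800 * n / fs) for n in range(50)]

# 800 Hz is above Nyquist, so its samples are identical to those of
# 1000 - 800 = 200 Hz: without band-limiting first, the two are confused.
assert all(abs(a - b) < 1e-9 for a, b in zip(low, high))
```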

John Atkinson
Editor, Stereophile

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

Glad to see the break from the forum hasn't changed your sole purpose of picking a fight for no reason.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

John

I understand that traditional theory says the sampling rate must be twice the highest frequency.

It brings up this basic question. I've always understood that the hardest single sound to digitize is a bell, strictly because of the harmonics. If you placed this sound on a scope you would see lots and lots of concurrent analog waveforms; it is this combination of multiple waveforms that gives the bell its complex sound. Are you saying that the human ear only actually hears a single waveform at a time? I always believed a sound was made up of multiple waveforms.

It therefore seems to imply that a good D-A system actually induces those complex harmonics electronically somehow? Otherwise it would only be producing a single waveform strictly based on the sample. I understand that you are saying that the sum of complex harmonics can be a single waveform, but is that how it actually works?

I know some people think I'm being thick and argumentative, but I'm genuinely just trying to understand how this works. I will agree that my professional life with digital systems does push me not to just accept things without understanding why.

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm

Alan,

It may help to think of it this way. Imagine a long rope laid out like a big sine wave with a wavelength of say 20ft. Then imagine that you 'wiggle' a much smaller sine wave into the rope (with a wavelength of say a couple of inches) in such a way that it still also follows the path of the original sine wave. The second waveform has been added to, or superimposed onto the original one.

Now is the result one waveform or two? I suppose in a way you could think of it as two, but since you only have one rope it is really just one (complex) waveform. Just like when you add two numbers the result is a number, when you add two waveforms the result is a waveform. Extend this idea to add another waveform to the same rope, and another, and another. You can add as many as you like but since you have one rope the result is always one waveform.

Now if the frequency of the second waveform is an exact multiple of the first then it is called a harmonic of the first. For instance if it is double the frequency (twice as many wiggles) it is the second harmonic of the first one, if it is three times the frequency of the first it is the third harmonic, etc.

The rope is analogous to the music signal. Regardless how many instruments are playing and how harmonically rich their sounds, each microphone only produces one voltage which varies through time, i.e. one waveform. The voltage at any instant just records the amplitude of the sound at that instant and the way it varies with time encodes the frequency (or frequencies). An actual musical signal contains multiple, possibly thousands, of frequencies overlaid on each other in this way, but there is only one waveform coming out of the mic, only one driving each speaker, and ultimately only one hitting your ear.

In digital terms each sample only encodes amplitude information, in other words the voltage in the wire at the instant the sample was taken. The frequency information is all encoded in how that voltage varies over time. The higher the sampling rate the higher the frequency information that can be encoded. The longer the wordlength the more accurately the amplitude can be measured.
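That last point can be demonstrated: each sample is one amplitude, yet a bit of analysis (a naive DFT here, sketched in Python over a made-up two-tone signal) pulls the individual frequencies back out of how the samples vary over time:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: magnitude of each frequency bin."""
    N = len(samples)
    return [abs(sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N // 2)]

N = 64  # one analysis window; tones at 3 and 7 cycles per window
wave = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 7 * n / N)
        for n in range(N)]

mags = dft_magnitudes(wave)
# Energy shows up only in bins 3 and 7: both frequencies were carried by
# the one stream of single-valued samples.
assert abs(mags[3] - 0.5) < 1e-9 and abs(mags[7] - 0.25) < 1e-9
assert mags[1] < 1e-9
```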

Hope this helps.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
Glad to see the break from the forum hasn't changed your sole purpose of picking a fight for no reason.

Said the kettle to the spoon.

I remember several threads (which I can call up if needed) where you were told by numerous individuals that you had your facts wrong and that you were being intentionally ... "contentious" ... yet when the facts were clearly provided by those numerous individuals you simply ignored them and continued to "pick a fight". You seem to have particular problems with me since I had those facts straight while you simply didn't need facts; you had made up your mind that all of audio is a scam - a point you once again raised in this thread. Don't try to dish out this BS again, Alan.

I remember one thread where you got so childishly frustrated you had to retract a few posts which were purely garbage slurs aimed at me personally.

So, Alan, you are certifiably nuts as far as I'm concerned. Since your opinion is that all of this is a scam, you have nothing to offer that is of any value beyond the parts count of the words you place on the screen. And I don't assume much truth is going to come from someone who simply cannot control their anger any better than you. You are nuts!

Now, I've spent time on several threads accepting the welcome back from so many here. Would you all kindly now KMA! I'm here to discuss hifi not to spar with fools like you.

Lick-T
Lick-T's picture
Offline
Last seen: Never ago
Joined: May 14 2006 - 8:04pm

Great analogy Struts.

This also shows why higher sampling rates equal better sound. The higher the sampling rate, the more complex the waveform can be. As I edit music, I can always tell if I'm working with an 88.2kHz sample rate or 44.1. Visually, the 88.2 rate has greater complexity in the waveform; it also looks smoother. Funny how it also sounds that way...

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
Great analogy Struts.

Yes. I was trying to come up with something like this myself.

Struts' rope illustrates how a single waveform contains multiple frequencies at the same time.

I think it is hard to grasp that while various physical parts of the bell may be vibrating at different frequencies in the real world, these sounds combine into a single physical waveform in the air.

The same thing happens even if it is 120 musicians playing in a symphony orchestra. Lots of sounds combine into a single waveform. It's truly incredible.

It is this single combined waveform that we hear and which is sampled by digital equipment.

The sound of a bell is incredibly complex, but as far as the ADC is concerned the waveform that makes up the bell's sound has but a single value at each specific moment in time. This value is what is captured by a single digital sample.

Erick also brings up a great analogy between what a sampled waveform looks like and how it sounds.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

Good

Just don't bother responding to any of my posts and I will gladly ignore yours.

Kal Rubinson
Kal Rubinson's picture
Offline
Last seen: 1 day 14 hours ago
Joined: Sep 1 2005 - 9:34am


Quote:
The sound of a bell is incredibly complex, but as far as the ADC is concerned the waveform that makes up the bell's sound has but a single value at each specific moment in time.......

...and from the specific point in space at which it is sampled.

Lick-T
Lick-T's picture
Offline
Last seen: Never ago
Joined: May 14 2006 - 8:04pm


Quote:
I think it is hard to grasp that while various physical parts of the bell may be vibrating at different frequencies in the real world, these sounds combine into a single physical waveform in the air.

THEN, that singular waveform gets to your loudspeakers, where ALL of those complex waveforms need to be recreated. We typically ask two to three radiating areas (a tweeter, a midrange and a woofer) to act exactly like that bell; a midrange driver has to accurately vibrate in almost infinite ways at the same time in order to recreate the initial sound captured by the microphone.

It is truly amazing to me that the whole process of recording and playback works as well as it does.

Elk
Elk's picture
Offline
Last seen: 3 years 6 months ago
Joined: Dec 26 2006 - 6:32am


Quote:
It is truly amazing to me that the whole process of recording and playback works as well as it does.


Yes!

This is one of the enduring fascinations of the hobby.

Arthur Clarke notes "Any sufficiently advanced technology is indistinguishable from magic."

No matter how much I learn, both music and audio remain magic.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
Just don't bother responding to any of my posts and I will gladly ignore yours.

Then why aren't you?

You didn't need to get the last word. Just ignore me. Go to the spot on the forum where you click the button and place me on ignore.

Done! You don't see my posts from that point forward.

Otherwise, I will be around when you continue this "all hifi is a scam" BS. And since you're nuts we'll do this again.

Beyond that, you don't have much I care to comment on.

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm

To understand the difference in resolution between 16-bit and 24-bit audio, consider the following great analogy from one of the Pohlmann books:

Imagine that each bit is a sheet of typing paper. A 16-bit word is then equivalent to a stack of paper up to 22 feet high and a 24-bit word is equivalent to a stack of paper up to 5632 feet (over a mile!) high. Removing one sheet of paper changes the sample value. That is the resolution that 24-bit ADCs and DACs have to work to!

Another way to think of it is that if the distance from NY to LA was measured with 24-bit resolution the sampling accuracy (i.e. quantization error) would be 9 inches!! As Elk and Erick point out, it is amazing this stuff works at all, let alone that it works as well as it does.
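For the curious, the arithmetic behind both analogies checks out (assuming roughly 0.004 in per sheet of typing paper and about 2,450 miles from NY to LA):

```python
SHEET_IN = 0.004                      # assumed thickness of one sheet, inches
stack_16_ft = 2**16 * SHEET_IN / 12   # a 16-bit stack: about 22 feet
stack_24_ft = 2**24 * SHEET_IN / 12   # a 24-bit stack: ~5,600 feet, over a mile

NY_LA_IN = 2450 * 5280 * 12           # assumed NY-to-LA distance, in inches
step_in = NY_LA_IN / 2**24            # one 24-bit quantization step: ~9 inches
```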

Anyway, I think we have got to the bottom of it now:

  • To capture the dynamic range of human hearing (120 dB, equivalent to 19.64 LPCM bits), 16 bits (98 dB) are theoretically not quite enough and 24 bits (146 dB) are theoretically more than we need. The choice of 24 bits is a convenience since computers work with 8-bit bytes.
  • In practice however, the dynamic range of audio systems can be further constrained by factors such as the self-noise of electronic circuits.
  • Frequency information such as harmonics is captured in the time domain, not the amplitude domain. Each sample encodes only one value, which represents the sound pressure seen by the microphone at that instant. This is the sum of all the waves at all the frequencies that were playing at that instant.

Alan, I can warmly recommend this book if you are interested in learning more.

Welshsox
Welshsox's picture
Offline
Last seen: 12 years 2 months ago
Joined: Dec 13 2006 - 7:27pm

Struts

Thanks

I definitely get the resolution of the extra bits and totally understand that the higher the bit depth, the lower the quantization error, hence a more accurate analog output from the decoder.

I'm still struggling with the harmonics thing; I'll do some more reading on the subject.

Even from my telecom days with RF-based time division multiplexors this was a very difficult subject to get to grips with. So when you refer to the harmonics being in the time domain instead of the digital sample domain, I sort of get it, but I have to admit not enough that it's sunk into my thick skull.

Appreciate the patience though !!

Alan

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm


Quote:
I'm still struggling with the harmonics thing; I'll do some more reading on the subject.


Alan,

I've been noodling this on-and-off and I'm afraid I've been unable to come up with anything better than my wiggly rope. Please let me know if you find a simpler explanation, I'd love to see it.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
Now if the frequency of the second waveform is an exact multiple of the first then it is called a harmonic of the first. For instance if it is double the frequency (twice as many wiggles) it is the second harmonic of the first one, if it is three times the frequency of the first it is the third harmonic, etc.

The rope is analagous to the music signal. Regardless how many instruments are playing and how harmonically rich their sounds, each microphone only produces one voltage which varies through time, i.e. one waveform. The voltage at any instant just records the amplitude of the sound at that instant and the way it varies with time encodes the frequency (or frequencies). An actual musical signal contains multiple, possibly thousands, of frequencies overlaid on each other in this way, but there is only one waveform coming out of the mic, only one driving each speaker, and ultimately only one hitting your ear.

That is the answer: "Regardless how many instruments are playing ... " Should each instrument contain only a single sine wave, the digital signal captures numerous instruments in each instant in exactly the same way it captures the harmonics of each instrument. A series of slices is presented by each sample; IOW, time is sliced into samples which capture multitudes of sine waves in each instant.

Set the display to "three harmonics" and press "play" to see the slices compressed into a time dependent signal.

http://education.tm.agilent.com/index.cgi?CONTENT_ID=13

http://www.highfrequencyelectronics.com/Archives/May05/HFE0505_Tutorial.pdf


Quote:
This is the sum of all the waves at all the frequencies that were playing at that instant.

Not "the sum" as in added together, but all of the various sine waves which are contained in any one moment of information - captured in the time domain. Not much different than watching a dance group perform under a strobe light. You do not see each individual performer but the group as a whole moving in time, sliced into time differentiated segments by each "capture" performed by the flashing light. Each flash of light represents a time segment captured and each time segment represents a displacement of some parameter as captured by each pulse of light. In the case of the dance group, the parameter is motion and space. In the case of a digital recording, the "parameter" is frequency and amplitude.


Quote:
An actual musical signal contains multiple, possibly thousands, of frequencies overlaid on each other in this way, but there is only one waveform coming out of the mic.

The digital recorder operates on voltage pulses and captures the complex voltages which represent the "sum" of all frequencies - fundamental and harmonic plus ambient - as they are transduced by the motion of the microphone diaphragm. Not much different than an analog recording in that respect should you be able to place a strobe light on your speaker's cone and observe its complex dance of simultaneous frequencies rippling across the surface with each flash representing a slice of the time domain. The digital system is only responding to incoming or outgoing voltage and sees nothing in the way of specific frequencies in the time domain, only voltage pulses one after the other. This is what you would see if you froze the oscilloscope's screen view at any one instant in time, as the "Tutorial" above explains.

"Time" is how we think, "frequency" is how we perceive.


Quote:
Your question assumes the fundamental and harmonics exist as separate signals distinct in time from each other. They are not. Nor does one "bit" represent one frequency.

http://cp.literature.agilent.com/litweb/pdf/5988-6765EN.pdf

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm


Quote:
Not "the sum" as in added together


I believe it is exactly that. The vector sum, to be more precise.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm

I meant not "added" as in "2+2=4" as the lump "sum".

"Sum" as in, "I have 'two' apples and you have 'two' apples. If you care to photograph the "sum" of all the apples, you'll have to include both groups."

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm


Quote:
I meant not "added" as in "2+2=4" as the lump "sum".

"Sum" as in, "I have 'two' apples and you have 'two' apples. If you care to photograph the "sum" of all the apples, you'll have to include both groups."


Sorry, but I have to confess you've lost me with the 'photographing apples' analogy. According to my understanding of vector arithmetic the vector components do indeed get added together, exactly as in "2+2=4".

We may have to agree to differ on this one!
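A small sketch of the point under debate, with made-up frequencies and amplitudes: at any single instant, the composite signal value is plain arithmetic addition of each component's value at that instant (superposition), which is the "2+2=4" sense of "sum".

```python
import math

# Superposition: at one instant t, the diaphragm displacement is the
# plain arithmetic sum of each component's contribution at that instant.
# Hypothetical components: 440 Hz at amplitude 1.0 and 880 Hz at 0.5.
def component(freq_hz, amplitude, t):
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

t = 1 / 4_000                      # an arbitrary instant, in seconds
a = component(440, 1.0, t)
b = component(880, 0.5, t)
total = a + b                      # ordinary addition, component by component
print(round(total, 4))
```

The "vector sum" wording matters once phase enters: components can add to less than either one alone when they are out of phase, but the operation at each instant is still ordinary addition.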

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm

Or, possibly, someone else can straighten this out.

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm

Jan,

Would this do?

Alan,

This is kind of what I was trying to explain with the rope analogy, I think it helps being able to see an example. The animation Jan linked to above is great and may help too.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm


Quote:
The animation Jan linked to above is great and may help too.

The animation might make more sense if you pulse between "play" and "stop". Each time you stop the "time" element of the progression, you are in essence capturing a slice of the time-domain signal equivalent to a single digital sample. The only difference is that in an actual digital signal the slices are independent of one another and do not contain the previous slice, as is shown in the animation. IOW, if you toggle between "play" and "stop", each time you hit "stop" you would view only the new additional information and ignore the previous slices. Joined together, the sampled slices form a series of continuous waveforms which represent fundamentals and harmonics occurring in time.

The difference between bit rates is the resolution of the animation. Rather than completing the entire trip from end to end in, say, 100 slices, a higher bit rate would make the trip in 1,000 slices, with each being of finer resolution in the time domain. Along with placing more of the signal above the dithered noise, my understanding of "bit rate" is more to the point of read errors and how many can be discarded with a higher bit count while still maintaining the correct informational plot than could be afforded with a "redbook"-quality sample. With more bits being sampled at a higher rate, errors become less important to the overall composition and the chances of a successful conversion to analog rise significantly. Pretty much the same explanation as the stack of papers: remove two papers from a stack two stories tall and you'll still have a stack very much like the original. Remove two papers from a stack two millimeters tall and the difference is far more noticeable to the final result when printing that Wikipedia entry for "time domain".

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm


Quote:
The difference between bit rates is the resolution of the animation. Rather than completing the entire trip from end to end in, say, 100 slices, a higher bit rate would make the trip in 1,000 slices, with each being of finer resolution in the time domain. Along with placing more of the signal above the dithered noise, my understanding of "bit rate" is more to the point of read errors and how many can be discarded with a higher bit count while still maintaining the correct informational plot than could be afforded with a "redbook"-quality sample. With more bits being sampled at a higher rate, errors become less important to the overall composition and the chances of a successful conversion to analog rise significantly. Pretty much the same explanation as the stack of papers: remove two papers from a stack two stories tall and you'll still have a stack very much like the original. Remove two papers from a stack two millimeters tall and the difference is far more noticeable to the final result when printing that Wikipedia entry for "time domain".


Sorry Jan but this is just a bunch of confused pseudoscience.

  • Bit rate is just that, a measure of the number of bits per unit time, e.g. second. The bit rate of an audio file is defined by the sample size (aka bit depth, word length, resolution) and the sampling rate (aka sampling frequency) - as well as compression, if used (but we can ignore that here). You seem to be mixing these two concepts together freely in the remainder of your text.
  • The term 'resolution' refers to bit depth (amplitude) whereas your example of 100/1000 time slices refers to sample rate (frequency). Apples and oranges.
  • Bit rate really has nothing to do with errors, error correction or the robustness of the protocol. Apples and Cadillacs (as my engineering manager used to say).
  • The stack-of-paper analogy doesn't work like that at all. Remove one sheet and regardless of the height of the stack you just toggle the LSB which is pretty much inaudible in a 16- or 24-bit system. I measured a 2mm stack of typing paper and it came to 16 sheets, analogous to a 4-bit system, not quite the resolution we are talking about here.
  • Your analogy would make better sense if you were talking about the time domain. High-frequency information is far more susceptible to data errors because the fewer datapoints you have, the less accurately you can interpolate over errors. In the limiting case, at the Nyquist frequency, interpolation doesn't work at all.
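As a quick check of the arithmetic in the first and fourth bullets (a sketch assuming Red Book CD values, with the usual ~6dB-per-bit rule of thumb):

```python
# Bit rate is sample size x sampling rate x channels (no compression).
# Red Book CD assumed: 16 bits, 44,100 samples/s, stereo.
bit_depth = 16
sample_rate = 44_100
channels = 2
bit_rate = bit_depth * sample_rate * channels
print(bit_rate)                    # bits per second for CD audio

# Toggling the LSB changes the output by one quantization step; the
# common rule of thumb puts that step roughly 6.02 dB-per-bit below
# full scale, so a 16-bit system spans about 96 dB.
dynamic_range_db = 6.02 * bit_depth
print(round(dynamic_range_db, 2))
```

That ~96dB figure is why removing one "sheet" (one LSB step) is inaudible regardless of the height of the stack.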

I suggest we leave it here, the discussion could get really unproductive if we carry on down this rathole. You are welcome to the last word if you would like it.

Jan Vigne
Jan Vigne's picture
Offline
Last seen: Never ago
Joined: Mar 18 2006 - 12:57pm

No need. I don't discuss digital to any extent at this time, and what I know, I know from memory - and it is now going on several decades since I read Pohlmann's book. Mixing metaphors is easy and frequent at this point.

struts
struts's picture
Offline
Last seen: 2 years 9 months ago
Joined: Feb 1 2007 - 12:02pm

I know the feeling, although my brain is wired slightly differently. I seem to be able to remember surprising amounts of what I learned back in school, but can't remember for the life of me which days I promised to pick up the kids.
