The 2011 Richard C. Heyser Memorial Lecture: "Where Did the Negative Frequencies Go?" Measuring Sound Quality, The Art of Reviewing

Measuring Sound Quality

Table 1: What is heard vs what is measurable

This is a table I prepared for my 1997 AES paper on measuring loudspeakers. On the left are the typical measurements I perform in my reviews; on the right are the areas of subjective judgment. It is immediately obvious that there is no direct mapping between any specific measurement and what we perceive. Not one of the parameters in the first column appears to bear any direct correlation with one of the subjective attributes in the second column. If, for example, an engineer needs to measure a loudspeaker's perceived "transparency," there isn't any single two- or three-dimensional graph that can be plotted to show "objective" performance parameters that correlate with the subjective attribute. Everything a loudspeaker does affects the concept of transparency to some degree or other. You need to examine all the measurements simultaneously.

This was touched on by Richard Heyser in his 1986 presentation to the London AES. While developing Time Delay Spectrometry, he became convinced that traditional measurements, where one parameter is plotted against another, fail to provide a complete picture of a component's sound quality. What we hear is a multidimensional array of information in which the whole is greater than the sum of the routinely measured parts.

And this is without considering that all the measurements listed examine changes in the voltage or pressure signals in just one of the information channels. Yet the defects of recording and reproduction systems affect not just one of those channels but both simultaneously. We measure in mono but listen in stereo, where such matters as directional unmasking—where the aberration appears to come from a different point in the soundstage than the acoustic model associated with it, thus making it more audible than a mono-dimensional measurement would predict—can have a significant effect. (This was a subject discussed by Richard Heyser.)

Most important, the audible effect of measurable defects is not heard as their direct effect on the signals but as changes in the perceived character of the oh-so-fragile acoustic models. And that is without considering the higher-order constructs that concern the music that those acoustic models convey, and the even higher-order constructs involving the listener's relationship to the musical message. The engineer measures changes in a voltage or pressure wave; the listener is concerned with abstractions based on constructs based on models!

Again, this was something I first heard described by Richard Heyser in 1986. He gave, as an example of these layers of abstraction, something with which we are all familiar yet cannot be measured: the concept of "Chopin-ness." Any music student can churn out a piece of music which a human listener will recognize as being similar to what Chopin would have written; it is hard to conceive of a set of audio measurements that a computer could use to come to the same conclusion.

Once you are concerned with a model-based view of sound quality, this leads to the realization that the nature of what a component does wrong is of greater importance than the level of what it does wrong: 1% of one kind of distortion can be innocuous, even musically appropriate, whereas 0.01% of a different kind of distortion can be musical anathema.

Consider the sounds of the clarinet I was playing in that 1975 album track. You hear it unambiguously as a clarinet, which means that enough of the small wrinkles in its original live sound that identify it as a clarinet are preserved by the recording and playback systems. Without those wrinkles in the sound, you would be unable to perceive that a clarinet was playing at that point in the music, yet those wrinkles represent a tiny proportion of the total energy that reaches your ears. System distortions that may be thought to be inconsequential compared with the total sound level can become enormously significant when referenced to the stereo signal's "clarinet-ness" content, if you will: the only way to judge whether or not they are significant is to listen.

But what if you are not familiar with the sound of the clarinet? From the acoustic-model–based view, it seems self-evident that the listener can construct an internal model only from what he or she is already familiar with. When the listener is presented with truly novel data, the internal models lose contact with reality. For example, in 1915 Edison conducted a live-vs-recorded demonstration, comparing the voice of soprano Anna Case with his Diamond Disc Phonograph. To everyone's surprise, reported Ms. Case, "Everybody, including myself, was astonished to find that it was impossible to distinguish between my own voice, and Mr. Edison's re-creation of it."

Much later, Anna Case admitted that she had toned down her voice to better match the phonograph. Still, the point is not that those early audiophiles were hard of hearing or just plain dumb, but that, without prior experience of the phonograph, the failings we would now find so obvious just didn't fit into the acoustic model those listeners were constructing of Ms. Case's voice.

I had a similar experience back in early 1983, when I was auditioning an early orchestral CD with the late Raymond Cooke, founder of KEF. I remarked that the CD sounded pretty good to me—no surface noise or tracing distortion, the speed stability, the clarity of the low frequencies—when Raymond metaphorically shook me by the shoulders: "Can't you hear that quality of high frequencies? It sounds like grains of rice being dropped onto a taut paper sheet." And up to that point, no, I had not noticed anything amiss with the high frequencies (footnote 3). My internal models were based on my decades of experience of listening to LPs. I had yet to learn the signature of the PCM system's failings—all I heard was the absence of the all-too-familiar failings of the LP. Until Raymond opened the door for me, I had no means of constructing a model that allowed for the failings of the CD medium.

An apparently opposite example: In a public lecture in November 1982, I played both an all-digital CD of Rimsky-Korsakov's Scheherazade and Beecham's 1957 LP with the Royal Philharmonic Orchestra of the same work, without telling the audience which was which. (Actually, to avoid the "Clever Hans" effect, an assistant behind a curtain played the discs.) When I asked the listeners to tell me, by a show of hands, which they thought was the CD, they overwhelmingly voted for what turned out to be the analog LP as being the sound of the brave new digital world!

I went home puzzled by the conflict between what I knew must be the superior medium and what the audience preferred. Of course, the LP is based on an elegant concept: RIAA equalization. As Bob Stuart has explained, this results in the LP having better resolution than CD where it is most important—in the presence region, where the ear is most sensitive—but not as good where it doesn't matter, in the top or bottom octaves. But with hindsight, it was clear that I had asked the wrong question: instead of asking what the listeners had preferred, I had asked them to identify which they thought was the new medium. They had voted for the presentation with which they were most familiar, that had allowed them to more easily construct their internal models, and that ease had led them to the wrong conclusion.

When people say they like or dislike what they are hearing, therefore, you can't discard this information, or say that their preference is wrong. The listeners are describing the fundamental state of their internal constructs, and that is real, if not always useful, data. This makes audio testing very complex, particularly when you consider that the brain will construct those internal acoustic models with incomplete data (footnote 4).

So how do you test how effectively a change in the external stimulus facilitates the construction of those internal models?

In his keynote address at the London AES Conference in 2007, for example, Peter Craven discussed the improvement in sound quality of a digital transfer of a 78rpm disc of a live electrical recording of an aria from Puccini's La Bohème when the sample rate was increased from 44.1 to 192kHz. Even 16-bit PCM is overkill for the 1926 recording's limited dynamic range, and though the original's bandwidth was surprisingly wide, given its vintage, 44.1kHz sampling would be more than enough to capture everything in the music, according to conventional information theory.

But as Peter pointed out, with such a recording there is more to the sound than only the music. Specifically, there is the surface noise of the original shellac disc. The improvement in sound quality resulting from the use of a high-sampling-rate transfer involved this noise appearing to float more free of the music; with lower sample rates, it sounded more integrated into the music, and thus degraded it more.

Peter offered a hypothesis to explain this perception: "the ear as detective." "A police detective searches for clues in the evidence; the ear/brain searches for cues in the recording," he explained, referring to the Barry Blesser paper I mentioned earlier. Given that audio reproduction is, almost by definition, "partial input," Peter wondered whether the reason listeners respond positively to higher sample rates and greater bit depths is that these better preserve the cues that aid listeners in the creation of internal models of what they perceive. If that is so, then it becomes easier for listeners to distinguish between desired acoustic objects (the music) and unwanted objects (noise and distortion). And if these can be more easily differentiated, they can then be more easily ignored.

Once you have wrapped your head around the internal-model–based view of perception, it becomes clear why quick-switched blind testing so often produces null results. Such blind tests can differentiate between sounds, but they are not efficient at differentiating the quality of the first-, second-, and third-order internal constructs outlined earlier, particularly if the listener is not in control of the switch.

I'll give an example: Your partner has the TV's remote control and flashes up the program guide, but before you can make sense of the screen, she scrolls down, leaving you confused. And so on. In other words, you have been presented with a sensory stimulus but have not been given enough time to form the appropriate internal model. Many of the blind tests in which I have participated echo this problem: the proctor switches faster than you have time to form a model, which in the end produces a result no different from chance.

The fact that the listener is therefore in a different state of mind in a quick-switched blind test than he would be when listening to music becomes a significant interfering variable. Rigorous blind testing, if it is to produce valid results, thus becomes a lengthy and time-consuming affair using listeners who are experienced and comfortable with the test procedure.

There is also the problem that when it comes to forming an internal model, everything matters, including the listener's cultural expectations and experience of the test itself. The listener in a blind test develops expectations based on previous trials, and the test designer needs to take those expectations into account.

For example, in 1989 I organized a large-scale blind comparison of two amplifiers using the attendees at a Stereophile Hi-Fi Show as my listeners. We carried out 56 tests, each of which consisted of seven forced-choice A/B-type comparisons in which the amplifiers would be Same or Different. To decide the Sames and Differents, I used a random number generator. However, if you think about this, sessions containing six or seven Sames or Differents in a row will not be uncommon. Concerned that, presented with such a sequence, my listeners would stop trusting their ears and start to guess, whenever the random number generator indicated that a session of seven presentations should include six or seven consecutive Differents or Sames, I discarded it. Think about it: If you took part in a listening test and you got seven presentations where the amplifiers appeared to be the same, wouldn't you start to doubt what you were hearing?
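To put rough numbers on that concern, here is a minimal Python sketch—not my original 1989 procedure—that assumes a fair 50/50 Same/Different generator and the discard threshold described above (a run of six or seven consecutive identical presentations within a seven-trial session); the variable names and the simulation itself are mine, for illustration only:

import random
from itertools import product

TRIALS = 7      # presentations per session
SESSIONS = 56   # sessions over the Hi-Fi Show weekend

def longest_run(seq):
    """Length of the longest run of consecutive identical items in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Exact count: of the 2**7 = 128 equally likely sessions, how many contain
# a run of six or more identical presentations?
extreme = sum(1 for s in product("SD", repeat=TRIALS) if longest_run(s) >= 6)
print(f"{extreme}/128 = {extreme/128:.3f}")   # 6/128 = 0.047

# Simulate one weekend: how many of 56 randomly generated sessions would
# have tripped the discard rule?
random.seed(1989)
tripped = sum(
    longest_run([random.choice("SD") for _ in range(TRIALS)]) >= 6
    for _ in range(SESSIONS)
)
print(f"{tripped} of {SESSIONS} sessions would have been discarded")

With 56 sessions and a per-session probability of 6/128, roughly two or three such extreme sequences can be expected over a weekend, so the discard rule was not hypothetical.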

I felt it important to reduce this history effect in each test. However, this inadvertently subjected the listeners to more Differents than Sames—224 vs 168—which I didn't realize until the weekend's worth of tests was over. As critics pointed out, this in itself became an interfering variable.

The best blind test, therefore, is when the listener is not aware he is taking part in a test. A mindwipe before each trial, if not actually illegal, would inconvenience the listeners—what would you do with the army of zombies that you had created?—but an elegant test of hi-rez digital performed by Philip Hobbs at the 2007 AES Conference in London achieved just this goal.

To cut a long story short, the listeners in Hobbs's test believed that they were being given a straightforward demo of his hi-rez digital recordings. However, while the music started out at 24-bit word lengths and 88.2kHz sample rates, it was sequentially degraded while preserving the format until, at the end, we were listening to a 16-bit MP3 version sampled at 44.1kHz at a 192kbps bit rate.
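For readers wondering what "sequentially degraded while preserving the format" might involve, here is a hypothetical sketch of one such chain. Hobbs's actual processing is not documented here; the function name, the SciPy-based resampling, and the omission of the final MP3 encode (which requires an external encoder such as LAME) are all my assumptions:

# Hypothetical degradation chain: 24-bit/88.2kHz source -> 44.1kHz -> 16 bits.
# An illustrative sketch only, not Hobbs's actual signal chain.

import numpy as np
from scipy.signal import resample_poly

def degrade(x: np.ndarray) -> np.ndarray:
    """x: mono float signal in [-1, 1], notionally a 24-bit/88.2kHz capture."""
    # 88.2kHz -> 44.1kHz: factor-of-2 decimation with polyphase anti-alias filtering.
    x = resample_poly(x, up=1, down=2)
    # Requantize to 16 bits by rounding to the nearest of 2**16 output levels.
    x = np.round(np.clip(x, -1.0, 1.0) * 32767.0) / 32767.0
    # A 192kbps MP3 encode would follow here via an external encoder (e.g. LAME);
    # the decoded result would still present itself to the playback system as
    # ordinary 16-bit/44.1kHz PCM, which is the point of the demonstration.
    return x

# Usage example: one second of a 1kHz tone sampled at 88.2kHz.
t = np.arange(88200) / 88200.0
print(degrade(0.5 * np.sin(2 * np.pi * 1000.0 * t)).shape)   # (44100,)

The point of such a chain is that every stage hands the playback system a file that still plays normally, so nothing about the presentation itself alerts the listener that the data have been progressively impoverished.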

This was a cannily designed test. Not only was the fact that it was a test concealed from the listeners, but organizing the presentation so that the best-sounding version of the data was heard first, followed by progressively degraded versions, worked against the usual tendency of listeners hearing a strange system in a strange room: to increasingly like the sound the more they hear of it. The listeners in Philip's demo would thus become aware of their own cognitive dissonance. Which, indeed, we did.

Philip's test worked with his listeners' internal models, not with the sound, which is why I felt it elegant. And, as a publisher and writer of audio component reviews, I am interested only peripherally in "sound" as such (footnote 5); what matters more is the quality of the reviewer's internal constructs. And how do you test the quality of those constructs?

The Art of Reviewing
That 1982 test of preference of LP vs CD forced me to examine what exactly it is that reviewers do. When people say they like something, they are being true to their feelings, and that like or dislike cannot be falsified by someone else's incomplete description of "reality." My fundamental approach to reviewing since then has been to, in effect, have the reviewer answer the binary question "Do you like this component, yes or no?" Of course, he is then obliged to support that answer. I insist that my reviewers include all relevant information because, as I have said, when it comes to someone's ability to construct his or her internal model of the world outside, everything matters.

For example: in a recent study of wine evaluation, when people were told they were drinking expensive wine, they didn't just say they liked it more than the same wine when they were told it was cheap; brain scans showed that the pleasure centers of their brains lit up more. Some have interpreted the results of this study as meaning that the subjects were being snobs—that they decided that if the wine cost more, it must be better. But what I found interesting about this study was that this wasn't a conscious decision; instead, the low-level functioning of the subjects' brains was affected by their knowledge of the price. In other words, the perceptive process itself was being changed. When it comes to perception, everything matters; nothing can safely be discarded.

In my twin careers in publishing and recorded music, the goal is to produce something that people will want to buy. This is not pandering, but a reality of life—if you produce something that is theoretically perfect, but no one wants it or appreciates it enough to fork over their hard-earned cash, you become locked in a solipsistic bubble. The problem is that you can't persuade people that they are wrong to dislike something. Instead, you have to find out why they like or dislike something. Perhaps there is something you have overlooked.

For the second part of this lecture, I will examine some "case studies" in which the perception doesn't turn out as expected from theory. I will start with recording and microphone techniques, an area in which I began as a dyed-in-the-wool purist, and have since become more pragmatic.



Footnote 3: For a long time, I've felt that the difference between an "objectivist" and a "subjectivist" is that the latter has had, at one time in his or her life, a mentor who could show them what to listen for. Raymond was just one of the many from whom I learned what to listen for.

Footnote 4: This is a familiar problem in publishing, where it is well known that the writer of an article will be that article's worst proofreader. The author knows what he meant to write and what he meant to say, and will actually perceive words to be there that are not there, and miss words that are there but shouldn't be. The ideal proofreader is someone with no preconceptions of what the article is supposed to say.

Footnote 5: My use of the word sound here is meant to describe the properties of the stimulus. But strictly speaking, sound implies the existence of an observer. As the philosophical saw asks, "If a tree falls in the forest without anyone to observe it falling, does it make a sound?" Siegfried Linkwitz offered the best answer to this question on his website: "If a tree falls in the forest, does it make any sound? No, except when a person is nearby that interprets the change in air particle movement at his/her ear drums as sound coming from a falling tree. Perception takes place in the brain in response to changing electrical stimuli coming from the inner ears. Patterns are matched in the brain. If the person has never heard or seen a tree falling, they are not likely to identify the sound. There is no memory to compare the electrical stimuli to."

COMMENTS
JohnnyR's picture

......blind testing mentioned by a person that doesn't believe in doing them. Aren't they "too difficult" according to yourself? Wait......nevermind just don't answer that because I know it will end up with you Mr Atkinson making some EXCUSE as to why you can't do blind tests for the magazine when it comes to reviewing products. ZzzzzzzzzzZzzzzz.

John Atkinson's picture

JohnnyR wrote:
Sure is a lot of blind testing mentioned by a person that doesn't believe in doing them.

A man sees what he wants to see and disregards the rest, eh? In the 15,000 words of this preprint, there are just a few paragraphs devoted to a discussion of blind testing.

JohnnyR wrote:
Aren't they "too difficult" according to yourself?

As I say, I took part in my first blind test 35 years ago and since then have been involved in well over 100 such tests, as listener, proctor, or organizer. My opinion on their efficacy and how difficult it is to get valid results and not false negatives—ie, reporting that no difference could be heard when a small but real audible difference exists—was formed as the result of that experience.

John Atkinson

Editor, Stereophile

GeorgeHolland's picture

In other words JohnnyR, Mr Atkinson DID find excuses not to use blind testing but found them worthwhile enough to mention them in his Heyser speech funnily enough.

JohnnyR's picture

A. Join a band as a second rate bass player

B. Fail to get a record contract

C. Pretend he's an electrical engineer

D. Get an offer to ruin.....ermm I meant run Stereophile

E. Proceed to ruin........ermmmmm excuse me......run Stereophile like Hitler would.

F. Spend his time talking online instead of actually doing any real work.

G. Make excuses about his busy schedule and why he can't do DBTs

H Profit!!!

Regadude's picture

Johnny makes plywood boxes in his mother's basement, and then claims he is a speaker expert.

Trolls audio websites. 

 

That is all.

ChrisS's picture

JRusskie seems to have become a failed audio (and DBT) expert in one easy step...

 

A. Failed...

dalethorn's picture

Hitler? I don't know whether to laugh or cry.

Ariel Bitran's picture

also, brings both of these to mind:

http://en.wikipedia.org/wiki/Reductio_ad_Hitlerum

http://en.wikipedia.org/wiki/Godwin's_law

ChrisS's picture

There it is, again...

ChrisS's picture

I think a synapse just exploded...

dalethorn's picture

I asked a question of Harman Testing Lab in the Computer Audiophile forum, and post #11 is the answer, which I thought was perfect: i.e., let the testee control the switch, so whatever adjustment time they need, they can accommodate.

http://www.computeraudiophile.com/f12-headphones-and-speakers/behind-har...

Other observation: The comments in this Stereophile article about the noise floating free of the recording, due apparently to having a higher playback bandwidth—that made my day.

On R.C. Heyser: His description of the 1971 Sylmar quake, hearing a low-frequency "tone" prior to the shaking, got him started on comparing earthquakes and woofers and low-frequency playback in rooms, and he got beat up on a lot for those writings. But having been involved with both, those observations of his were like gold in the bank. Saved me lots of money.

Regadude's picture

Aren't you the dude who "tested" the Shure SRH 940 headphones and claimed they were almost as good as the 800HDs from Sennheiser?

I own the SRH940s. I was somewhat willing to believe you, but when I saw that your source was an Ipod, I just flicked your review in the trash.

You chose a 200$ source to test 325$ and 1500$ headphones?  And you believe your results mean something?

 

no

dalethorn's picture

My review of the 940, agreed upon by more than 95 percent of respondents, was vetted with specific music tracks listed in the review and subsequent comments, using ipods alone, ipods with analog amps, and desktop DACs and amps. Some of the naysayers are either impatient or just grumpy and disagreeable, which the comments in this Stereophile article perfectly illustrate.

Regadude's picture

How nice of you to call people who question your methodology grumpy, disagreeable, etc. I question your methodology because it makes no sense to test 325$ headphones with an ipod. It makes even less sense to test 1500$ headphones with an ipod...

I actually listened to the Shure SRH940 and HD800 today, using my ipod Nano. You are correct, they both sound almost the same. They both sound like shit!

I know from experience that these headphones really shine with a good source, and especially, proper amplification. As good as the Shure is (I own them), it is no match for the HD800 when both are hooked up to decent equipment.

Your test is like testing the acceleration of a Ferrari and a Prius in your 12' driveway. You won't get much better results from the Ferrari in such a limited distance.

As for the 95% who agree with you, what gear did they use (ipod)? Were those 95% SRH940 owners and fanboys? 

dalethorn's picture

Saying "they both sound like shit" when one of those is a revered $1500 headphone of great pedigree and popularity among actual audiophiles is telling of your judgement. I would suggest taking a deep breath, pick a headphone and some music tracks you like, then tune out the matrix and just enjoy your music. You'll be less grumpy that way.

Regadude's picture

Don't try and hide behind my words! They both sound like shit through an ipod! What a weasel you are. I did not say the 800HD sounded like shit... BUT IT SOUNDED LIKE SHIT THROUGH AN IPOD. You rigged your test, Dale. 

Again, you use insults...

 

dalethorn's picture

A long time ago one person on a big audio site challenged people like you to be more specific instead of throwing words like s**t around. There were just two persons who offered test tracks with which to compare those headphones, but everyone else (like you) just threw out expletives, and a couple members even made serious threats. Now I have over 30 years experience comparing headphones like the Stax and Sennheiser series, and quite a bit of time invested with the Audio Engineering Society as well as reading Stereophile magazine, so even if nobody in the world were to agree with my conclusions, I'm certainly qualified to test and evaluate these devices. I'd suggest you refrain from purely ranting with words like s**t and offer specific testable facts, or just go away.

This article may help you:

http://dalethorn.com/Headphone_Ipod_Versus_Amp.txt

dalethorn's picture

BTW, and speaking of Prius and Ferrari, you may be one of the better off who don't mind paying $1500 for a headphone and then shelling out $2000 for a headphone amp. But let's pretend for a moment that you have to pinch pennies to get those luxurious goods, so when you finally have them you really appreciate what you have. In this latter case, were you to stumble across an ipod touch or iphone of recent manufacture, with the new Earpods ($29 purchased variety only) being driven by the Dirac DSP player, and had an opportunity to listen at length, it might make you ill to realize what you got for your $3500.

Regadude's picture

My headphone amp is a whopping 400$ (CI Audio VHP 2).

I do pinch pennies. The VHP 2 and the Shure 940 are mine. The Sennheiser 800HD I borrowed from a friend.

Stop trying to cloud the issue with irrelevant issues.

Regadude's picture

How about we let Stereophile test both headphones! First through an ipod, then with decent equipment!

Both headphones sound similar through an ipod.

Plug them into a receiver, they sound less like shit.

Plug them into a decent headphone amp, and they both sound MUCH BETTER. BUT the Sennheiser pulls away and wipes the floor with the Shure.

Buy yourself a CI Audio headphone amp (the one I use), or any other decent headphone amp Dale... Before buying a 2500$ DAC to install on an ipod.

DUH...........................

GeorgeHolland's picture

The continuing saga of ChrisS being a troll and all around noncontributing poster. Classy act there ChrisS.

Mostly what I got from Mr Atkinson's article was the amount of photos of himself rather than Mr Heyser. I suppose a memorial lecture should be all about the presenter and how he sees fit to turn it into reasons why Stereophile does what it does regarding testing and the lack thereof.

"Yes, what you think you are hearing might by dismissed as being imagination, but as the ghost of Professor Dumbledore says in Harry Potter and the Deathly Hallows, "Of course it's all happening in your head, Harry Potter, but why on earth should that mean that it is not real?"

Oh I see now. Stereophile's philosophy is based upon a fictitious Harry Potter character who spouts that what we hear is real. I'll be sure to let those people being treated for such voices in their heads know that all is well and medication is not necessary.

"Footnote 2: For a long time, I've felt that the difference between an "objectivist" and a "subjectivist" is that the latter has had, at one time in his or her life, a mentor who could show them what to listen for. Raymond was just one of the many from whom I learned what to listen for."

Well if only that was true. Your readers tend to all be subjectivists who aren't trained what to listen to but blindly (I made a pun there since blind testing is not wanted at Stereophile) believe whatever they think they hear as the truth when in fact measurements will show that either they heard nothing related to the measurements or perhaps it's Dumbledore once more in their heads, make believe or wishful hearing.

All in all I found your presentation 90% about yourself and Stereophile and reasons why you defend subjective listening and gloss over things like cables and doing serious testing and throwing in various quotes and ambiguous stories barely related to the subject you were supposed to be talking about, unless that subject was muddling the understanding of Stereophile and telling everyone your own personal history instead of Mr Heyser's.

"Third, one well-known skeptic sitting in the audience tonight criticized my abstract a few weeks back on the grounds that I am just offering "hypotheses about stuff that might be just to stir the pot, while offering no real explanations."

Class act there Mr Atkinson, speak out and criticise someone you don't like or agree with in the audience who then has no opportunity to speak up and address the audience themselves. Very childish and immature. I already see how you do the same on here. "My way or the highway"

JohnnyR is once more sadly correct: blind testing is not and will not be used by Stereophile. Just because you Mr Atkinson, know how to push a few buttons to measure speakers, does not make you even close to the same league as Mr Heyser was. He furthered the cause of objectivity and made Audio magazine the best audio magazine there ever was. The Audio Critic is a very close second. Mr Aczel used to be one of the worst subjectivists around until he admitted to being such and saw the light and spoke out against foolish subjective reviews. Testing will always be the most important part of audio magazines. Simply letting reviewers spout off about "blacker backgrounds" "lifted veils" or other such nonsense belittles both the reader and the so called reviewer not to mention the product under "test". Truth in testing is what Mr Heyser strove for, not some silly review about rainbow foil, magic bowls or pebbles.

John Atkinson's picture

George Holland wrote:
Mostly what I got from Mr Atkinson's article was the amount of photos of himself rather than Mr Heyser. I suppose a memorial lecture should be all about the presenter and how he sees fit to turn it into reasons why Stereophile does what it does regarding testing and the lack thereof. I found your presentation 90% about yourself and Stereophile...throwing in various quotes and ambiguous stories barely related to the subject you were supposed to be talking about, unless that subject was muddling the understanding of Stereophile and telling everyone your own personal history instead of Mr Heyser's.

You have misunderstood the nature of the lecture, GeorgeHolland. The Richard C. Heyser Memorial Lectures are not intended to be _about_ Dick Heyser but instead are offered to honor his memory. The invitation for me to be the October 2011 lecturer clearly stated that I was free to choose any subject I felt appropriate; my triple career as a musician/audio engineer/audio editor and the conclusions I have formed as a result of my 4 decades' experience in those careers were felt by the AES Technical Council members to be an eminently suitable subject. 

George Holland wrote:
Class act there Mr Atkinson, speak out and criticise someone you don't like or agree with in the audience who then has no opportunity to speak up and address the audience themselves. Very childish and immature.

The skeptic in question had emailed me before the lecture to let me know he would be in the audience and would take an active part in the anticipated Q&A session. I included this mention in the preprint to give him the necessary opening. As it turned out, he didn't attend, but I saw no reason to delete the point he made.

George Holland wrote:
Just because you Mr Atkinson, know how to push a few buttons to measure speakers, does not make you even close to the same league as Mr Heyser was.

I agree. As you can tell from my discussions of Dick Heyser's writing and thoughts in the preprint, I have an enormous amount of respect for what he achieved. I have never claimed to be his equal, nor would I. But again I must emphasize that the AES Heyser lectures are not intended to be _about_ Dick Heyser. That is your misunderstanding, I am afraid.

John Atkinson

Editor, Stereophile

Rolli666's picture

I am not sure as to why a person would even want to respond to these posts. Some problems really are better ignored, so as to invoke the spirit of "go away." I for one appreciated this lecture for what it is, thank you.

John Atkinson's picture

Rolli666 wrote:
I for one appreciated this lecture for what it is thank you.

Appreciate your comment. It is rare that a magazine writer has the opportunity to gather all his thoughts in one place and I owe a debt of thanks to the Audio Engineering Society for inviting me to do so.

John Atkinson

Editor, Stereophile

 

ChrisS's picture

Whoa! There goes another synapse! Now they're both brain dead... I duelled extensively with JRusskie re: blind testing on the Forums and I daresay, he shot himself many times with his own arguments.

Saga? Can't wait for the movie version!

Sorry, Ariel... I'll try not to let it happen again.

Dunadan's picture

...which is a valuable insight, I think, from this article that Mr. Atkinson has generously preprinted. His stories--which, though I am not a longtime reader, I presume also reflect the history of Stereophile--mirror the tensions found in the philosophy of science between "reality" and "perception", and the question of whether our observations are truly disinterested or if our minds shape how and what we see (or hear). And, really, what exactly does a blind test resolve? That the reviewer likes one piece of equipment over another, presumably without factoring in aesthetics or cost? But "liking" one thing better than another is still a subjective judgment, and who's to say YOU would share that judgment? Never mind the problem of translating sounds heard by another into words on the magazine's page and then back again into the sounds heard by the reader when he or she auditions the gear. Or the issue of whether sound quality can be adequately described by a series of measurements, any more than colour can be described strictly in terms of wavelength, or consciousness in terms of vibrating atoms in your brain. That's not to say these things aren't significant, but to say that they add up to the sum of our human experience is absurd.

Thanks for a reflective and enjoyable article (at least in my mind).

GeorgeHolland's picture

The purpose of blind testing is solely to see if you can hear a difference between two units. After all the subjective rants of "oh this lifted several veils of crud from the liquid sound" or "I heard a definite improvement" I find it sad that Stereophile keeps saying that DBT or even SBT are not a valid way to test those claims. Either you can hear a difference or you can't, yet those that claim they do shy away from proving that fact with a blind test. Afraid they might be shown to be wrong is my opinion, so Stereophile allows them to go merrily along in their own self delusions while spending way too much money on sham products.

ChrisS's picture

In a properly set up DBT, let's say for two amplifiers, what exactly will the listener be listening for?

 

(And what if the lamb heard a difference, but not the shepherd boy?)

GeorgeHolland's picture

Are you seriously so dense that you can't figure it out for yourself? Perhaps that's why you depend upon Stereophile to tell you what to buy?

http://home.provide.net/~djcarlst/abx.htm

http://www.matrixhifi.com/contenedor_ppec_eng.htm

Read and learn.

ChrisS's picture

So Georgie,

If the lamb can hear that Amp A can make the test system sing in a range of 10Hz-60kHz, but the shepherd boy can't hear in that range so can't tell the difference between Amp A and Amp B, which amp should the shepherd boy buy?
