The 2011 Richard C. Heyser Memorial Lecture: "Where Did the Negative Frequencies Go?" Measuring Sound Quality, The Art of Reviewing

Measuring Sound Quality

Table 1: What is heard vs what is measurable

This is a table I prepared for my 1997 AES paper on measuring loudspeakers. On the left are the typical measurements I perform in my reviews; on the right are the areas of subjective judgment. It is immediately obvious that there is no direct mapping between any specific measurement and what we perceive. Not one of the parameters in the first column appears to bear any direct correlation with any of the subjective attributes in the second column. If, for example, an engineer needs to measure a loudspeaker's perceived "transparency," there isn't any single two- or three-dimensional graph that can be plotted to show "objective" performance parameters that correlate with the subjective attribute. Everything a loudspeaker does affects the concept of transparency to some degree or other. You need to examine all the measurements simultaneously.

This was touched on by Richard Heyser in his 1986 presentation to the London AES. While developing Time Delay Spectrometry, he became convinced that traditional measurements, where one parameter is plotted against another, fail to provide a complete picture of a component's sound quality. What we hear is a multidimensional array of information in which the whole is greater than the sum of the routinely measured parts.

And this is without considering that all the measurements listed examine changes in the voltage or pressure signals in just one of the information channels. Yet the defects of recording and reproduction systems affect not just one of those channels but both simultaneously. We measure in mono but listen in stereo, where such matters as directional unmasking—where the aberration appears to come from a different point in the soundstage than the acoustic model associated with it, thus making it more audible than a mono-dimensional measurement would predict—can have a significant effect. (This was a subject discussed by Richard Heyser.)

Most important, the audible effect of measurable defects is not heard as their direct effect on the signals but as changes in the perceived character of the oh-so-fragile acoustic models. And that is without considering the higher-order constructs that concern the music that those acoustic models convey, and the even higher-order constructs involving the listener's relationship to the musical message. The engineer measures changes in a voltage or pressure wave; the listener is concerned with abstractions based on constructs based on models!

Again, this was something I first heard described by Richard Heyser in 1986. He gave, as an example of these layers of abstraction, something with which we are all familiar yet cannot be measured: the concept of "Chopin-ness." Any music student can churn out a piece of music which a human listener will recognize as being similar to what Chopin would have written; it is hard to conceive of a set of audio measurements that a computer could use to come to the same conclusion.

Once you are concerned with a model-based view of sound quality, this leads to the realization that the nature of what a component does wrong is of greater importance than the level of what it does wrong: 1% of one kind of distortion can be innocuous, even musically appropriate, whereas 0.01% of a different kind of distortion can be musical anathema.

Consider the sounds of the clarinet I was playing in that 1975 album track. You hear it unambiguously as a clarinet, which means that enough of the small wrinkles in its original live sound that identify it as a clarinet are preserved by the recording and playback systems. Without those wrinkles in the sound, you would be unable to perceive that a clarinet was playing at that point in the music, yet those wrinkles represent a tiny proportion of the total energy that reaches your ears. System distortions that may be thought to be inconsequential compared with the total sound level can become enormously significant when referenced to the stereo signal's "clarinet-ness" content, if you will: the only way to judge whether or not they are significant is to listen.

But what if you are not familiar with the sound of the clarinet? From the acoustic-model–based view, it seems self-evident that the listener can construct an internal model only from what he or she is already familiar with. When the listener is presented with truly novel data, the internal models lose contact with reality. For example, in 1915 Edison conducted a live-vs-recorded demonstration, pitting the voice of soprano Anna Case against his Diamond Disc Phonograph. As Ms. Case later reported, "Everybody, including myself, was astonished to find that it was impossible to distinguish between my own voice, and Mr. Edison's re-creation of it."

Much later, Anna Case admitted that she had toned down her voice to better match the phonograph. Still, the point is not that those early audiophiles were hard of hearing or just plain dumb, but that, without prior experience of the phonograph, the failings we would now find so obvious just didn't fit into the acoustic model those listeners were constructing of Ms. Case's voice.

I had a similar experience back in early 1983, when I was auditioning an early orchestral CD with the late Raymond Cooke, founder of KEF. I remarked that the CD sounded pretty good to me—no surface noise or tracing distortion, the speed stability, the clarity of the low frequencies—when Raymond metaphorically shook me by the shoulders: "Can't you hear that quality of high frequencies? It sounds like grains of rice being dropped onto a taut paper sheet." And up to that point, no, I had not noticed anything amiss with the high frequencies (footnote 3). My internal models were based on my decades of experience of listening to LPs. I had yet to learn the signature of the PCM system's failings—all I heard was the absence of the all-too-familiar failings of the LP. Until Raymond opened the door for me, I had no means of constructing a model that allowed for the failings of the CD medium.

An apparently opposite example: In a public lecture in November 1982, I played both an all-digital CD of Rimsky-Korsakov's Scheherazade and Beecham's 1957 LP with the Royal Philharmonic Orchestra of the same work, without telling the audience which was which. (Actually, to avoid the "Clever Hans" effect, an assistant behind a curtain played the discs.) When I asked the listeners to tell me, by a show of hands, which they thought was the CD, they overwhelmingly voted for what turned out to be the analog LP as being the sound of the brave new digital world!

I went home puzzled by the conflict between what I knew must be the superior medium and what the audience preferred. Of course, the LP is based on an elegant concept: RIAA equalization. As Bob Stuart has explained, this results in the LP having better resolution than CD where it is most important—in the presence region, where the ear is most sensitive—but not as good where it doesn't matter, in the top or bottom octaves. But with hindsight, it was clear that I had asked the wrong question: instead of asking what the listeners had preferred, I had asked them to identify which they thought was the new medium. They had voted for the presentation with which they were most familiar, that had allowed them to more easily construct their internal models, and that ease had led them to the wrong conclusion.
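Stuart's point rests on the shape of the RIAA playback curve, which can be computed directly from its three standard time constants (3180µs, 318µs, 75µs). As a minimal sketch (an illustration of the curve itself, not of Stuart's resolution analysis), the de-emphasis applied on playback amounts, relative to 1kHz, to roughly +19.3dB of gain at 20Hz and 19.6dB of attenuation at 20kHz:

```python
import math

# Standard RIAA playback (de-emphasis) time constants, in seconds:
# poles near 50Hz and 2122Hz, a zero near 500Hz.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(freq_hz):
    """Magnitude of the RIAA playback EQ at freq_hz, in dB (unnormalized)."""
    w = 2 * math.pi * freq_hz
    # |1 + jwT2| / (|1 + jwT1| * |1 + jwT3|)
    mag = math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
    return 20 * math.log10(mag)

# Report gain relative to the conventional 1kHz reference point.
ref = riaa_playback_db(1000)
for f in (20, 1000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_db(f) - ref:+.1f} dB re 1kHz")
```

The presence-region behavior Stuart discusses falls between those two endpoints, where the curve transitions through its zero and upper pole.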

When people say they like or dislike what they are hearing, therefore, you can't discard this information, or say that their preference is wrong. The listeners are describing the fundamental state of their internal constructs, and that is real, if not always useful, data. This makes audio testing very complex, particularly when you consider that the brain will construct those internal acoustic models with incomplete data (footnote 4).

So how do you test how effectively a change in the external stimulus facilitates the construction of those internal models?

In his keynote address at the London AES Conference in 2007, for example, Peter Craven discussed the improvement in sound quality of a digital transfer of a 78rpm disc, a live electrical recording of an aria from Puccini's La Bohème, when the sample rate was increased from 44.1 to 192kHz. Even 16-bit PCM is overkill for the 1926 recording's limited dynamic range, and though the original's bandwidth was surprisingly wide, given its vintage, 44.1kHz sampling would be more than enough to capture everything in the music, according to conventional information theory.

But as Peter pointed out, with such a recording there is more to the sound than only the music. Specifically, there is the surface noise of the original shellac disc. The improvement in sound quality resulting from the use of a high-sampling-rate transfer involved this noise appearing to float more free of the music; with lower sample rates, it sounded more integrated into the music, and thus degraded it more.

Peter offered a hypothesis to explain this perception: "the ear as detective." "A police detective searches for clues in the evidence; the ear/brain searches for cues in the recording," he explained, referring to the Barry Blesser paper I mentioned earlier. Given that audio reproduction is, almost by definition, "partial input," Peter wondered whether the reason listeners respond positively to higher sample rates and greater bit depths is that these better preserve the cues that aid listeners in the creation of internal models of what they perceive. If that is so, then it becomes easier for listeners to distinguish between desired acoustic objects (the music) and unwanted objects (noise and distortion). And if these can be more easily differentiated, they can then be more easily ignored.

Once you have wrapped your head around the internal-model–based view of perception, it becomes clear why quick-switched blind testing so often produces null results. Such blind tests can differentiate between sounds, but they are not efficient at differentiating the quality of the first-, second-, and third-order internal constructs outlined earlier, particularly if the listener is not in control of the switch.

I'll give an example: Your partner has the TV's remote control; she flashes up the program guide, but before you can make sense of the screen, she scrolls down, leaving you confused. And so on. In other words, you have been presented with a sensory stimulus, but have not been given enough time to form the appropriate internal model. Many of the blind tests in which I have participated echo this problem: The proctor switches faster than you can form a model, which ultimately produces results no different from chance.

The fact that the listener is therefore in a different state of mind in a quick-switched blind test than he would be when listening to music becomes a significant interfering variable. Rigorous blind testing, if it is to produce valid results, thus becomes a lengthy affair using listeners who are experienced and comfortable with the test procedure.
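Part of the reason is simple binomial arithmetic. As a minimal sketch (an illustration of the general point, not of any specific test protocol), the exact one-sided binomial test against chance shows how demanding a short session is: with seven trials, a listener must be right every single time to reach the conventional p < 0.05 criterion, and even sixteen trials demand twelve correct answers.

```python
from math import comb

def p_value(correct, trials):
    """One-sided exact binomial p-value: the chance that a pure guesser
    (p = 0.5 per trial) scores at least this many correct answers."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

def min_correct(trials, alpha=0.05):
    """Smallest score that reaches significance at level alpha, or None."""
    for k in range(trials + 1):
        if p_value(k, trials) < alpha:
            return k
    return None

print(min_correct(7))            # 7 -- a single miss in 7 trials is not significant
print(min_correct(16))           # 12
print(round(p_value(7, 7), 4))   # 0.0078
print(round(p_value(12, 16), 4)) # 0.0384
```

A single listener having an off moment on one trial of seven is thus enough to bury a real audible difference, which is why longer sessions, and listeners at ease with the procedure, matter so much.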

There is also the problem that when it comes to forming an internal model, everything matters, including the listener's cultural expectations and experience of the test itself. The listener in a blind test develops expectations based on previous trials, and the test designer needs to take those expectations into account.

For example, in 1989 I organized a large-scale blind comparison of two amplifiers using the attendees at a Stereophile Hi-Fi Show as my listeners. We carried out 56 tests, each of which would consist of seven forced-choice A/B-type comparisons in which the amplifiers would be Same or Different. To decide the Sames and Differents, I used a random number generator. However, if you think about this, sequences with six or seven Sames or Differents in a row will not be uncommon. Concerned that, presented with such a sequence, my listeners would stop trusting their ears and start to guess, whenever the random number generator indicated that a session of seven presentations should contain six or seven consecutive Differents or Sames, I discarded it. Think about it: If you took part in a listening test and you got seven presentations where the amplifiers appeared to be the same, wouldn't you start to doubt what you were hearing?

I felt it important to reduce this history effect in each test. However, this inadvertently subjected the listeners to more Differents than Sames—224 vs 168—which I didn't realize until the weekend's worth of tests was over. As critics pointed out, this in itself became an interfering variable.
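How often would a truly random seven-trial session contain the long runs that worried me? A back-of-the-envelope enumeration (an illustration, not part of the original 1989 protocol) shows that 6 of the 128 possible Same/Different sequences contain a run of six or seven identical presentations, about 4.7%, so two or three of the 56 sessions could have been expected to trip the discard rule:

```python
from itertools import product

def longest_run(seq):
    """Length of the longest run of consecutive identical items in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Enumerate all 2^7 equally likely Same/Different sessions of seven trials.
sessions = list(product("SD", repeat=7))
flagged = [s for s in sessions if longest_run(s) >= 6]

print(len(flagged), "of", len(sessions))      # 6 of 128
print(f"{len(flagged) / len(sessions):.4f}")  # 0.0469
print(f"expected in 56 sessions: {56 * len(flagged) / len(sessions):.2f}")
```

Note that rejecting these runs by itself is symmetric between Sames and Differents; the 224-vs-168 imbalance arose from how the discarded sessions were regenerated, not from the rejection rule alone.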

The best blind test, therefore, is when the listener is not aware he is taking part in a test. A mindwipe before each trial, if not actually illegal, would inconvenience the listeners—what would you do with the army of zombies that you had created?—but an elegant test of hi-rez digital performed by Philip Hobbs at the 2007 AES Conference in London achieved just this goal.

To cut a long story short, the listeners in Hobbs's test believed that they were being given a straightforward demo of his hi-rez digital recordings. However, while the music started out at 24-bit word lengths and 88.2kHz sample rates, it was sequentially degraded while preserving the format until, at the end, we were listening to a 16-bit MP3 version sampled at 44.1kHz at a 192kbps bit rate.

This was a cannily designed test. Not only was the fact that it was a test concealed from the listeners, but organizing the presentation so that the best-sounding version of the data was heard first, followed by progressively degraded versions, worked against the usual tendency of listeners confronted with a strange system in a strange room: to like the sound more the more they hear of it. The listeners in Philip's demo would thus become aware of their own cognitive dissonance. Which, indeed, we did.

Philip's test worked with his listeners' internal models, not with the sound, which is why I felt it elegant. And, as a publisher and writer of audio component reviews, I am interested only peripherally in "sound" as such (footnote 5); what matters more is the quality of the reviewer's internal constructs. And how do you test the quality of those constructs?

The Art of Reviewing
That 1982 test of preference of LP vs CD forced me to examine what exactly it is that reviewers do. When people say they like something, they are being true to their feelings, and that like or dislike cannot be falsified by someone else's incomplete description of "reality." My fundamental approach to reviewing since then has been to, in effect, have the reviewer answer the binary question "Do you like this component, yes or no?" Of course, he is then obliged to support that answer. I insist that my reviewers include all relevant information because, as I have said, when it comes to someone's ability to construct his or her internal model of the world outside, everything matters.

For example: in a recent study of wine evaluation, when people were told they were drinking expensive wine, they didn't just say they liked it more than the same wine when they were told it was cheap; brain scans showed that the pleasure centers of their brains lit up more. Some have interpreted the results of this study as meaning that the subjects were being snobs—that they decided that if the wine cost more, it must be better. But what I found interesting about this study was that this wasn't a conscious decision; instead, the low-level functioning of the subjects' brains was affected by their knowledge of the price. In other words, the perceptive process itself was being changed. When it comes to perception, everything matters, nothing can safely be discarded.

In my twin careers in publishing and recorded music, the goal is to produce something that people will want to buy. This is not pandering, but a reality of life—if you produce something that is theoretically perfect, but no one wants it or appreciates it enough to fork over their hard-earned cash, you become locked in a solipsistic bubble. The problem is that you can't persuade people that they are wrong to dislike something. Instead, you have to find out why they like or dislike something. Perhaps there is something you have overlooked.

For the second part of this lecture, I will examine some "case studies" in which the perception doesn't turn out as expected from theory. I will start with recording and microphone techniques, an area in which I began as a dyed-in-the-wool purist, and have since become more pragmatic.

Footnote 3: For a long time, I've felt that the difference between an "objectivist" and a "subjectivist" is that the latter has had, at one time in his or her life, a mentor who could show them what to listen for. Raymond was just one of the many from whom I learned what to listen for.

Footnote 4: This is a familiar problem in publishing, where it is well known that the writer of an article will be that article's worst proofreader. The author knows what he meant to write and what he meant to say, and will actually perceive words to be there that are not there, and miss words that are there but shouldn't be. The ideal proofreader is someone with no preconceptions of what the article is supposed to say.

Footnote 5: My use of the word sound here is meant to describe the properties of the stimulus. But strictly speaking, sound implies the existence of an observer. As the philosophical saw asks, "If a tree falls in the forest without anyone to observe it falling, does it make a sound?" Siegfried Linkwitz offered the best answer to this question on his website: "If a tree falls in the forest, does it make any sound? No, except when a person is nearby that interprets the change in air particle movement at his/her ear drums as sound coming from a falling tree. Perception takes place in the brain in response to changing electrical stimuli coming from the inner ears. Patterns are matched in the brain. If the person has never heard or seen a tree falling, they are not likely to identify the sound. There is no memory to compare the electrical stimuli to."

ChrisS's picture

Who's doing the listening in your tests, JRusskie? Do you know any 18 year old musicians? Oh, of course not...But you probably keep company with a bunch of construction guys (lucky you!) with damaged hearing. They should all find that there's no difference between any products.

ChrisS's picture

Once again, comrade JRusskie, you are on your Quixotic journey down that twisty, winding path for "truth"... Being an upstanding citizen of the former-USSR, you should know about "truth", right?

DBT is SCIENCE, and SCIENCE is ALWAYS RIGHT, especially during the heyday of the Union of Soviet Socialist Republics and you've been hanging on to this idea since the dissolution of the USSR. Hanging on to something you really want to be TRUE.

Comrade JRusskie, just because you want it, doesn't make it so.

Ariel Bitran's picture

just wanted to let you know i haven't forgotten about you.

i've been south of the equator spending time with my father and brother, but now that I'm back in the Stereophile office, i'll answer your question in full a little later.

peace out homeslice.

be nice.

Regadude's picture

Asking Johnny to be nice... Wow, you are very optimistic Ariel! 

GeorgeHolland's picture

Ariel, how about doing your job and deleting posts and placing a 30 day BAN on Regadude and ChrisS for spamming this whole article day after day with nothing less than pure drivel and taunts? I've seen kids' forums better moderated than this fiasco but it's already turned into a child's forum the way they act. You would think they had some dignity but we haven't seen any yet.

Ariel Bitran's picture

I believe the internet is pretty much open range, so I'm happy to just watch folks speak their minds using whatever manner to express themselves.

one thing that bothers me is when a member is unnecessarily rude to someone who is being respectful or even innocently posting, unknowing of the snark they may receive. in those instances, I say something so new users are not discouraged from continuing to post.

On the other hand, if you're being provocative with your initial comment, I can only expect others to react to your provocation. 


The other policy I encourage and have stated many times is as follows 

"don't feed the trolls"


Finally, we do delete comments from ANY member we feel is totally off-topic and insulting to an individual member, as we want the conversation to continue and bring light to the subject at hand rather than turn into a name-calling game. That being said, I'd like to delete Regadude's comment above since it just makes fun of Johnny. Also, I probably shouldn't have typed "be nice." i dont know what i was thinking. I think i just wanted a rhyme in there, and also to include a general suggestion of positivity, but deleting RegaDude's comment would delete your reply, and this is a conversation I'd like to keep around.

Regadude's picture

What did I do? In response to your comment of "be nice" to Johnny, I merely joked that you were asking a lot. A lot, because Johnny is often not nice. He is that way, that is a fact. I have no responsibility for his behavior. I just pointed it out.

Don't shoot the messenger!

ChrisS's picture

What is reliability?

Think, Georgie, think.

Ariel Bitran's picture

the two links provided earlier giving examples of some DBTs.

I found the matrixhifi test to be ignorable: who is the sample? how did they select these people? how are they representative of a population of listeners as a whole? in order to gather significance from these tests, the first and most important step is determining your sample, sample size, and how you select your sample. this just seems like a bunch of friends having fun. also, since there were multiple components being switched at the same time, system synergies could have been the cause of the weaker sounding more 'hi-fi' system. maybe those components weren't right for each other, but the cheaper system just sounded better. at least in ABX, they only changed one piece at a time

what i found interesting in the ABX test was the user's ability to control the change of system component themselves. this helps eliminate the idea that the listener might feel like they are getting 'duped' or constantly searching/guessing for the difference.

Also regarding the ABX method, the # of times a difference was heard was 33. the # of times no difference was heard: 29. Interestingly, cables were the least discernible.

DBT is time consuming and for significant results you need a large sample size (to represent a large population of listeners). With a small sample size, as in both of these tests, you risk a greatly flawed hypothesis and will lack confidence in your results.

I don't want to whip out my textbooks, b/c i have other stuff to do, but 17 listeners is not nearly large enough of a sample size to even represent a population of 70,000 Stereophile readers (for example). Then we run into an even bigger problem of "who" is selected-->ie what type of sample you are trying to represent.

I've heard repeatedly that H/K does have a successful DBT model. I'm sure it takes them years to perform each experiment, and it is wildly expensive and time consuming. You need a large sample size for any of this stuff to matter, not a few dudes in a basement.

GeorgeHolland's picture

More excuses from Stereophile? Why am I not surprised. You scoff and blow off any attempts at doing a DBT by the people I linked to yet when it comes to "reviewing" cables and amps it's suddenly okay to accept a sighted biased review as being true? Laughable sir but then I suppose you being an employee have to toe the line. So be it.

I give up. Go on and trust your sighted "tests" while ignoring a DBT or even a SBT. I know your boss won't let you do any. He "knows" it all. *eyeroll*

Regadude's picture

Hey George! Ariel provided you with a more than satisfactory answer. So why are you still complaining? You never heard the expression "agree to disagree"?

Ariel is right; sample size is crucial. Anyone who has a basic knowledge of statistics understands this. 

JohnnyR's picture

George was talking about simply doing a DBT or SBT among friends like he linked to. Let's say you and 5 pals think the "Humungo" amp  blows away every other amp you have listened to. So you set up a simple DBT or even a SBT to see if you can RELIABLY pick which amp is which WITHOUT sighted bias. In other words using your EARS???? I have read on the forums here how much people rely upon "What I heard" yet don't trust a simple test to prove that they can.

So let's say the "Humungo" amp is just simply "liquid, lifts 400 veils and makes the background blacker," whatever silly terms you wish to use. If so, then you should be able to pick it out 100% of the time doing a DBT. If not, and let's say you only choose right 50% of the time, then that's proof you were only guessing and couldn't tell which was which. So much for saying sample size is critical. If no one can pick the "Humungo" amp from the other one then where are your rationalisations for saying it's better?

I can tell though that none of you have even bothered to try a SBT or DBT so it's all a waste of time speaking to the peanut gallery. I love how much money you both have WASTED on pricey products that only look good and sound the same. Funny as hell, go on spending YOUR money, I love it

ChrisS's picture

JRusskie just beat himself with his own thinking...

Regadude's picture


Johnny wrote:

"I can tell though that none of you have even bothered to try a SBT or DBT so it's all a waste of time speaking to the peanut gallery. I love how much money you both have WASTED on pricey products that only look good and sound the same. Funny as hell, go on spending YOUR money, I love it"

How do you know what I've bought with my money? I will go on spending my money (but not on your plywood speakers you make in your basement), that's what it's made for!


JohnnyR's picture

A fool and his money are soon parted.

Regadude's picture

Well I am not that much of a fool with my money. I have not yet bought any of your JohnnyR brand speakers! 

ChrisS's picture

"...came out to play,

Georgie Porgie ran away!"

Delusions of absolute truths cloaked in "science" aren't much to hide behind, eh Georgie...

I hear our Man of La Mancha, JRusskie, is looking for his Sancho Panza to accompany him on his journey in search for TRUTH.

Happy trails!

ChrisS's picture

Georgie, Look at all those scientists backing you up!

If you take a college level research methodology course, Georgie, you may not appear so laughable. At this point, there appears to be very little knowledge in what you say about reviewing audio products.

GeorgeHolland's picture

Here is how you review audio products........ looks in Stereophile and buys whatever the product of the month is. Spends too much money but doesn't care. Goes online and talks like a 5 year old and spews insults and THINKS he's smart. Case closed.

Regadude's picture

George, name ONE SINGLE product that I have bought. Just one. You and Johnny have this little fantasy in which you think you know everything and everyone. 

How about you list your gear? If you are so good and knowledgeable about buying audio products, do share with us the products that are George Holland worthy. If you actually have any audio gear...

GeorgeHolland's picture

ZzzzzzzzzZZzzzzzzzzzzzzz sorry but conversing with the likes of you is boring and makes my brain hurt considering the amount of BS you spew.

ChrisS's picture

Take one course of college-level research methodology and call us in the morning...

rl1856's picture

Do you like what you are hearing ?  If no, move on until you do.  If yes, then shut up and relax.  This is a hobby focused on the enjoyment of the creative output of artists.  It is not about how many proverbial angels can dance on the head of a pin.

Go listen to MUSIC !

hnipen's picture

Thanks John for a very interesting and exciting presentation, lots of interesting information here and I'm surprised, to say the least, by the lack of positive feedback.

There are many who are skeptical of some of the ways of doing measurements in Stereophile and in some ways I'm one of them too, especially the way speakers are measured so close; large array speakers like the bigger Dunlavys and some others will not sum up very nicely in this way. We do, however, not live in a perfect world and Stereophile cannot afford an anechoic chamber, so this is probably the best they can do.

I wish John would share more of this kind of information as he has gathered lots of knowledge during a long interesting career at Stereophile and other places.

Go on John :-)

Merry Christmas

Cheers harald

absolutepitch's picture

John, thanks for getting this lecture pre-print available for us to read. I have been looking forward to this.

I agree that there is a lot of information combined into one lecture and that anyone would need a lot of time to learn and understand the details. Pardon me for paraphrasing some of your words below.

Regarding the null result of DBTs, your description of the interpretation is in agreement with what I remember from statistics classes. I might add that a statistician would include a probability value or confidence band with the interpretation (something to the effect that 'the null hypothesis of no-difference-detectable is accepted with high probability'), and equally for the case when a difference is detected with high probability. I personally think DBTs should be done for product reviews, but agree that valid DBTs are difficult and time consuming (expensive) to do correctly, as Dr. Toole has shown in his writings.

The example of the 'backwards' impulse being not agreeable to listeners is something I have noticed in reference to digital recording. It's a wave form that does not occur naturally in music production, so reproducing it should sound 'bothersome'.

I also agree with the previous post, that more articles like this would be welcomed, to further highlight how complicated this field really is.

bernardperu's picture

I have read your essay with great pleasure (all of it!) and I think it is a great example of the Liberal Arts and Science coming together. In the end, it feels like a piece of applied music philosophy, which I find fascinating. It also seems to be free of business-oriented interests, as your opinion on cables clearly suggests. It is awesome and very unusual to meet an accomplished person who gives priority to his passions and principles over financial interests (as also expressed on your 2012 writing on the CES and Las Vegas).

I consider myself to be an audiophile that turns off the lights and tries to connect his emotions with the music with a very relaxed mind (this seems to be a category in itself, as the un-relaxed passive listeners who cannot focus on the music on a mid to long term basis tend to be very opinionated). Having said this, I recently purchased a pair of Class D mono amps that can clearly connect me to the music (Hephaestus brand). I have not ever listened to amps which are over 15k. Within similar prices, class D seems to be the better choice (but how relative this can be, Jon!)

I will continue to follow your writings with deep admiration and I thank you for making a difference on my musical experience (which is passed on to my girlfriend and my child). 



GeorgeHolland's picture

Clueless you are if you think Mr Atkinson is something special

ChrisS's picture

Yoda you are?

Andreasmaaan's picture

It's a pity that some proponents of DBT as the only valid methodology have used the comments thread here to launch personal attacks against JA. Personally, I found the lecture fascinating and thought-provoking, and I thought that the nuggets of personal history provided a powerful context for the thoughtful opinions expressed. For better or worse, Stereophile doesn't restrict itself to DBTs as their only reviewing tool, but JA does measure every piece of gear his reviewers review - a practice which ensures that the opinions of the reviewers are grounded in objective data, or otherwise as the case may be. I'm not sure why this approach, coupled with a reliance on an income stream from advertising, seems to place JA in line for so much personal vitriol. If similar attacks were levelled at me in my professional life, I'd be mortified and enraged.

To cut what is risking becoming a lengthy expression of indignation short: thank-you JA for a wise and thought-provoking read.

JohnnyR's picture


"For better or worse, Stereophile doesn't restrict itself to DBTs as their only reviewing tool, but JA does measure every piece of gear his reviewers review - a practice which ensures that the opinions of the reviewers are grounded in objective data"

Do you even READ the reviews? Seriously dude. Stereophile DOESN'T do DBTs at ALL! Atkinson has said repeatedly that they are "too difficult to do". Show me ONE single DBT he has done in a review please. Plus he does NOT "measure every piece of gear his reviewers review". Cables, power cords, record demagnetizers, just to name a few. You obviously have been reading another publication NOT Stereophile.

