The 2011 Richard C. Heyser Memorial Lecture: "Where Did the Negative Frequencies Go?" Measuring Sound Quality, The Art of Reviewing

Measuring Sound Quality

Table 1: What is heard vs what is measurable

This is a table I prepared for my 1997 AES paper on measuring loudspeakers. On the left are the typical measurements I perform in my reviews; on the right are the areas of subjective judgment. It is immediately obvious that there is no direct mapping between any specific measurement and what we perceive. Not one of the parameters in the first column appears to bear any direct correlation with one of the subjective attributes in the second column. If, for example, an engineer needs to measure a loudspeaker's perceived "transparency," there isn't any single two- or three-dimensional graph that can be plotted to show "objective" performance parameters that correlate with the subjective attribute. Everything a loudspeaker does affects the concept of transparency to some degree or other. You need to examine all the measurements simultaneously.

This was touched on by Richard Heyser in his 1986 presentation to the London AES. While developing Time Delay Spectrometry, he became convinced that traditional measurements, where one parameter is plotted against another, fail to provide a complete picture of a component's sound quality. What we hear is a multidimensional array of information in which the whole is greater than the sum of the routinely measured parts.

And this is without considering that all the measurements listed examine changes in the voltage or pressure signals in just one of the information channels. Yet the defects of recording and reproduction systems affect not just one of those channels but both simultaneously. We measure in mono but listen in stereo, where such matters as directional unmasking—where the aberration appears to come from a different point in the soundstage than the acoustic model associated with it, thus making it more audible than a monophonic measurement would predict—can have a significant effect. (This was a subject discussed by Richard Heyser.)

Most important, the audible effect of measurable defects is not heard as their direct effect on the signals but as changes in the perceived character of the oh-so-fragile acoustic models. And that is without considering the higher-order constructs that concern the music that those acoustic models convey, and the even higher-order constructs involving the listener's relationship to the musical message. The engineer measures changes in a voltage or pressure wave; the listener is concerned with abstractions based on constructs based on models!

Again, this was something I first heard described by Richard Heyser in 1986. He gave, as an example of these layers of abstraction, something with which we are all familiar yet cannot be measured: the concept of "Chopin-ness." Any music student can churn out a piece of music which a human listener will recognize as being similar to what Chopin would have written; it is hard to conceive of a set of audio measurements that a computer could use to come to the same conclusion.

Once you are concerned with a model-based view of sound quality, this leads to the realization that the nature of what a component does wrong is of greater importance than the level of what it does wrong: 1% of one kind of distortion can be innocuous, even musically appropriate, whereas 0.01% of a different kind of distortion can be musical anathema.

Consider the sounds of the clarinet I was playing in that 1975 album track. You hear it unambiguously as a clarinet, which means that enough of the small wrinkles in its original live sound that identify it as a clarinet are preserved by the recording and playback systems. Without those wrinkles in the sound, you would be unable to perceive that a clarinet was playing at that point in the music, yet those wrinkles represent a tiny proportion of the total energy that reaches your ears. System distortions that may be thought to be inconsequential compared with the total sound level can become enormously significant when referenced to the stereo signal's "clarinet-ness" content, if you will: the only way to judge whether or not they are significant is to listen.

But what if you are not familiar with the sound of the clarinet? From the acoustic-model–based view, it seems self-evident that the listener can construct an internal model only from what he or she is already familiar with. When the listener is presented with truly novel data, the internal models lose contact with reality. For example, in 1915 Edison conducted a live vs recorded demonstration between the live voice of soprano Anna Case and his Diamond Disc Phonograph. To everyone's surprise, reported Ms. Case, "Everybody, including myself, was astonished to find that it was impossible to distinguish between my own voice, and Mr. Edison's re-creation of it."

Much later, Anna Case admitted that she had toned down her voice to better match the phonograph. Still, the point is not that those early audiophiles were hard of hearing or just plain dumb, but that, without prior experience of the phonograph, the failings we would now find so obvious just didn't fit into the acoustic model those listeners were constructing of Ms. Case's voice.

I had a similar experience back in early 1983, when I was auditioning an early orchestral CD with the late Raymond Cooke, founder of KEF. I remarked that the CD sounded pretty good to me—no surface noise or tracing distortion, the speed stability, the clarity of the low frequencies—when Raymond metaphorically shook me by the shoulders: "Can't you hear that quality of high frequencies? It sounds like grains of rice being dropped onto a taut paper sheet." And up to that point, no, I had not noticed anything amiss with the high frequencies (footnote 3). My internal models were based on my decades of experience of listening to LPs. I had yet to learn the signature of the PCM system's failings—all I heard was the absence of the all-too-familiar failings of the LP. Until Raymond opened the door for me, I had no means of constructing a model that allowed for the failings of the CD medium.

An apparently opposite example: In a public lecture in November 1982, I played both an all-digital CD of Rimsky-Korsakov's Scheherazade and Beecham's 1957 LP with the Royal Philharmonic Orchestra of the same work, without telling the audience which was which. (Actually, to avoid the "Clever Hans" effect, an assistant behind a curtain played the discs.) When I asked the listeners to tell me, by a show of hands, which they thought was the CD, they overwhelmingly voted for what turned out to be the analog LP as being the sound of the brave new digital world!

I went home puzzled by the conflict between what I knew must be the superior medium and what the audience preferred. Of course, the LP is based on an elegant concept: RIAA equalization. As Bob Stuart has explained, this results in the LP having better resolution than CD where it is most important—in the presence region, where the ear is most sensitive—but not as good where it doesn't matter, in the top or bottom octaves. But with hindsight, it was clear that I had asked the wrong question: instead of asking what the listeners had preferred, I had asked them to identify which they thought was the new medium. They had voted for the presentation with which they were most familiar, that had allowed them to more easily construct their internal models, and that ease had led them to the wrong conclusion.

When people say they like or dislike what they are hearing, therefore, you can't discard this information, or say that their preference is wrong. The listeners are describing the fundamental state of their internal constructs, and that is real, if not always useful, data. This makes audio testing very complex, particularly when you consider that the brain will construct those internal acoustic models with incomplete data (footnote 4).

So how do you test how effectively a change in the external stimulus facilitates the construction of those internal models?

In his keynote address at the London AES Conference in 2007, for example, Peter Craven discussed the improvement in sound quality of a digital transfer of a 78rpm disc of a live electrical recording of an aria from Puccini's La Bohème when the sample rate was increased from 44.1 to 192kHz. Even 16-bit PCM is overkill for the 1926 recording's limited dynamic range, and though the original's bandwidth was surprisingly wide, given its vintage, 44.1kHz sampling (which captures content up to the 22.05kHz Nyquist limit) would be more than enough to capture everything in the music, according to conventional information theory.

But as Peter pointed out, with such a recording there is more to the sound than only the music. Specifically, there is the surface noise of the original shellac disc. The improvement in sound quality resulting from the use of a high-sampling-rate transfer involved this noise appearing to float more free of the music; with lower sample rates, it sounded more integrated into the music, and thus degraded it more.

Peter offered a hypothesis to explain this perception: "the ear as detective." "A police detective searches for clues in the evidence; the ear/brain searches for cues in the recording," he explained, referring to the Barry Blesser paper I mentioned earlier. Given that audio reproduction is, almost by definition, "partial input," Peter wondered whether the reason listeners respond positively to higher sample rates and greater bit depths is that these better preserve the cues that aid listeners in the creation of internal models of what they perceive. If that is so, then it becomes easier for listeners to distinguish between desired acoustic objects (the music) and unwanted objects (noise and distortion). And if these can be more easily differentiated, they can then be more easily ignored.

Once you have wrapped your head around the internal-model–based view of perception, it becomes clear why quick-switched blind testing so often produces null results. Such blind tests can differentiate between sounds, but they are not efficient at differentiating the quality of the first-, second-, and third-order internal constructs outlined earlier, particularly if the listener is not in control of the switch.

I'll give an example: Your partner has the TV's remote control; she flashes up the program guide, but before you can make sense of the screen, she scrolls down, leaving you confused. And so on. In other words, you have been presented with a sensory stimulus, but have not been given enough time to form the appropriate internal model. Many of the blind tests in which I have participated echo this problem: The proctor switches faster than you have time to form a model, which in the end produces a result no different from chance.

The fact that the listener is therefore in a different state of mind in a quick-switched blind test than he would be when listening to music becomes a significant interfering variable. Rigorous blind testing, if it is to produce valid results, thus becomes a lengthy and time-consuming affair using listeners who are experienced and comfortable with the test procedure.

There is also the problem that when it comes to forming an internal model, everything matters, including the listener's cultural expectations and experience of the test itself. The listener in a blind test develops expectations based on previous trials, and the test designer needs to take those expectations into account.

For example, in 1989 I organized a large-scale blind comparison of two amplifiers using the attendees at a Stereophile Hi-Fi Show as my listeners. We carried out 56 tests, each of which would consist of seven forced-choice A/B-type comparisons in which the amplifiers would be Same or Different. To decide the Sames and Differents, I used a random number generator. However, if you think about this, sequences where there are seven Sames or Differents in a row will not be uncommon. Concerned that, presented with such a sequence, my listeners would stop trusting their ears and start to guess, whenever the random number generator indicated that a session of seven presentations should be six or seven consecutive Differents or Sames, I discarded it. Think about it: If you took part in a listening test and you got seven presentations where the amplifiers appeared to be the same, wouldn't you start to doubt what you were hearing?

I felt it important to reduce this history effect in each test. However, this inadvertently subjected the listeners to more Differents than Sames—224 vs 168—which I didn't realize until the weekend's worth of tests was over. As critics pointed out, this in itself became an interfering variable.
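
To make the procedure concrete, here is a short Python sketch (my illustration, not the program actually used in 1989): it draws 56 seven-trial Same/Different sessions and redraws any session containing six or more consecutive identical presentations, which is essentially the constraint described above. Running it with different random seeds shows how far the overall Same/Different totals can drift from an even split over a weekend's worth of trials.

import random

def longest_run(seq):
    """Length of the longest run of identical consecutive items."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def draw_session(trials=7, max_run=5):
    """Draw one Same/Different session, redrawing if it contains a run longer than max_run."""
    while True:
        session = [random.choice(("Same", "Different")) for _ in range(trials)]
        if longest_run(session) <= max_run:
            return session

random.seed(1989)                                   # arbitrary seed, for repeatability only
sessions = [draw_session() for _ in range(56)]      # 56 sessions of 7 presentations each
presentations = [p for s in sessions for p in s]    # 392 presentations in all
print("Different:", presentations.count("Different"), "Same:", presentations.count("Same"))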

The best blind test, therefore, is when the listener is not aware he is taking part in a test. A mindwipe before each trial, if not actually illegal, would inconvenience the listeners—what would you do with the army of zombies that you had created?—but an elegant test of hi-rez digital performed by Philip Hobbs at the 2007 AES Conference in London achieved just this goal.

To cut a long story short, the listeners in Hobbs's test believed that they were being given a straightforward demo of his hi-rez digital recordings. However, while the music started out at 24-bit word lengths and 88.2kHz sample rates, it was sequentially degraded while preserving the format until, at the end, we were listening to a 16-bit MP3 version sampled at 44.1kHz at a 192kbps bit rate.
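
For the curious, here is a rough sketch (my own illustration, not Philip Hobbs's actual processing chain) of the first two degradation steps described above: reducing the sample rate from 88.2 to 44.1kHz, then requantizing to a 16-bit word length. The final step, a 192kbps MP3 encode, would require an external encoder and is only indicated in a comment.

import numpy as np
from scipy.signal import resample_poly

def degrade(audio_88k2, bits=16):
    """Downsample 88.2kHz floating-point audio to 44.1kHz, then requantize to `bits` bits."""
    audio_44k1 = resample_poly(audio_88k2, up=1, down=2)          # 88.2kHz -> 44.1kHz
    step = 1.0 / (2 ** (bits - 1) - 1)                            # quantization step size
    dither = (np.random.rand(len(audio_44k1)) - 0.5) * step       # +/- half an LSB of rectangular dither
    return np.round(np.clip(audio_44k1 + dither, -1.0, 1.0) / step) * step

# Example: a 1kHz test tone captured at 88.2kHz, degraded to 16-bit/44.1kHz.
t = np.arange(88200) / 88200.0
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
degraded = degrade(tone)
# A real re-creation of the demo would follow this with an MP3 encode at 192kbps.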

This was a cannily designed test. Not only was the fact that it was a test concealed from the listeners, but organizing the presentation so that the best-sounding version of the data was heard first, followed by progressively degraded versions, worked against the usual tendency of listeners hearing a strange system in a strange room: to increasingly like the sound the more they hear of it. The listeners in Philip's demo would thus become aware of their own cognitive dissonance. Which, indeed, we did.

Philip's test worked with his listeners' internal models, not with the sound, which is why I felt it elegant. And, as a publisher and writer of audio component reviews, I am interested only peripherally in "sound" as such (footnote 5); what matters more is the quality of the reviewer's internal constructs. And how do you test the quality of those constructs?

The Art of Reviewing
That 1982 test of preference of LP vs CD forced me to examine what exactly it is that reviewers do. When people say they like something, they are being true to their feelings, and that like or dislike cannot be falsified by someone else's incomplete description of "reality." My fundamental approach to reviewing since then has been to, in effect, have the reviewer answer the binary question "Do you like this component, yes or no?" Of course, he is then obliged to support that answer. I insist that my reviewers include all relevant information because, as I have said, when it comes to someone's ability to construct his or her internal model of the world outside, everything matters.

For example: in a recent study of wine evaluation, when people were told they were drinking expensive wine, they didn't just say they liked it more than the same wine when they were told it was cheap; brain scans showed that the pleasure centers of their brains lit up more. Some have interpreted the results of this study as meaning that the subjects were being snobs—that they decided that if the wine cost more, it must be better. But what I found interesting about this study was that this wasn't a conscious decision; instead, the low-level functioning of the subjects' brains was affected by their knowledge of the price. In other words, the perceptive process itself was being changed. When it comes to perception, everything matters, nothing can safely be discarded.

In my twin careers in publishing and recorded music, the goal is to produce something that people will want to buy. This is not pandering, but a reality of life—if you produce something that is theoretically perfect, but no one wants it or appreciates it enough to fork over their hard-earned cash, you become locked in a solipsistic bubble. The problem is that you can't persuade people that they are wrong to dislike something. Instead, you have to find out why they like or dislike something. Perhaps there is something you have overlooked.

For the second part of this lecture, I will examine some "case studies" in which the perception doesn't turn out as expected from theory. I will start with recording and microphone techniques, an area in which I began as a dyed-in-the-wool purist, and have since become more pragmatic.



Footnote 3: For a long time, I've felt that the difference between an "objectivist" and a "subjectivist" is that the latter has had, at one time in his or her life, a mentor who could show them what to listen for. Raymond was just one of the many from whom I learned what to listen for.

Footnote 4: This is a familiar problem in publishing, where it is well known that the writer of an article will be that article's worst proofreader. The author knows what he meant to write and what he meant to say, and will actually perceive words to be there that are not there, and miss words that are there but shouldn't be. The ideal proofreader is someone with no preconceptions of what the article is supposed to say.

Footnote 5: My use of the word sound here is meant to describe the properties of the stimulus. But strictly speaking, sound implies the existence of an observer. As the philosophical saw asks, "If a tree falls in the forest without anyone to observe it falling, does it make a sound?" Siegfried Linkwitz offered the best answer to this question on his website: "If a tree falls in the forest, does it make any sound? No, except when a person is nearby that interprets the change in air particle movement at his/her ear drums as sound coming from a falling tree. Perception takes place in the brain in response to changing electrical stimuli coming from the inner ears. Patterns are matched in the brain. If the person has never heard or seen a tree falling, they are not likely to identify the sound. There is no memory to compare the electrical stimuli to."

Comments
bernardperu
Why would you read Stereophile anyway?

JohnnyR,

If Stereophile has the principle of not doing DBT, and for you doing so is a matter of principle, why on earth would you even care to read Stereophile? Your statement above suggests you have very extensive knowledge of the content of Stereophile's articles. So you keep reading Stereophile's reviews knowing they are irremediably flawed? How perverse is this?

The real question lies in your hatred of double-blind testing.

Whatever your answer is, I will have to assume that your hatred is a broken record and it will just go on.

GeorgeHolland
He probably reads them as I

He probably reads them as I do for the laughs.

JohnnyR
You Didn't Comment..........

......on your error filled post above so why should I comment on your assumptions? If you wish to worship Fearless Leader then by all means bow down and do so. I myself prefer to ask questions and be skeptical about his "methods" and the reason "why" he doesn't test ALL items under review.

John Atkinson
You're Welcome

Andreasmaaan wrote:
It's a pity that some proponents of DBT as the only valid methodology have used the comments thread here to launch personal attacks against JA.

Indeed. I am not sure why I have become a lightning rod for these audio skeptics. Perhaps it is because it is a matter of religious belief on their part - having had extensive experience of blind testing, I rejected it and thus became a heretic in their eyes. Or perhaps it is just our courtesy in allowing them the space on this website to express themselves.

Andreasmaaan wrote:
JA does measure every piece of gear his reviewers review - a practice which ensures that the opinions of the reviewers are grounded in objective data, or otherwise as the case may be.

As my reviewers do not see the measurements until after they have written the review, I take my hat off to them.

Andreasmaaan wrote:
To cut what is risking becoming a lengthy expression of indignation short: thank-you JA for a wise and thought-provoking read.

You're welcome and my thanks to everyone else who appreciated the preprint. It was an honor to have been invited by the Audio Engineering Society to give this lecture - it's not often that you get the opportunity to look back over a 4-decade career!

A Happy New Year to everyone who surfs this site.

John Atkinson

Editor, Stereophile

JohnnyR
Ahhhhhhhhhh Typical Atkinson Answers

You ignore the errors and just plow along blissfully gobbling up the unwarranted praise.

"JA does measure every piece of gear his reviewers review"

As I pointed out above, along with other errors on the part of Andreasmaaan, you do NOT test every piece of gear your reviewers review, but then you pretended that you do in your response. Only certain items are tested by you. Please get your own facts straight if that's even possible for you to do so.

Regadude
Johnny brand speakers

Johnny, I wish you a great new year 2013, in which you sell 10 times more of your home made speakers than in 2012. 

Since you sold zero of your speakers in 2012, you need to sell 10 X 0=0 speakers in 2013 to meet this objective. Maybe you can hire George Holland as your director of marketing and sales... 

Get cracking, you have only got 364 days left to sell your quota...

JohnnyR
Happy 2013 Raghead

[flames deleted by John Atkinson]

Unlike yourself Raghead, I will be DESIGNING and BUILDING my own speakers and learning. You do know that word "learning"? Hmmmm, maybe you stopped after age 10. You on the other hand will be on here *yawning* and acting the fool and spending your money on Stereophile-reviewed crapola.

Regadude
So only your speakers are good?

So according to you Johnny, everything Stereophile reviews is crapola?! There is not one good piece of equipment that has been reviewed by Stereophile? Hundreds of reviews, not one good product?

But you, in your basement, build quality speakers... 

I have challenged you before (as well as George) to name your gear. What kind of gear merits the Johnny seal of approval? 

As always, you ignore my challenge (George chickened out too). It seems you can only complain, but cannot even provide a list of your gear.

I guess ever seeing pictures of your "famous" speakers is also out of the question. 

All talk and no action. 

Regadude
Johnny the pussy

So Johnny, where are your speakers pics and specs?

In your imagination.... [flame deleted by JA]

GeorgeHolland
Johnny never offered to show

Johnny never offered to show you anything other than his backside, which I think is appropriate considering the childish actions of yourself. What does it matter what our audio components are? Anything we mention would be ridiculed by yourself, no doubt. Sorry, but I'm not playing your silly game. Good God, grow up and act like a man for once, will you?

John Atkinson
Getting Facts Straight

JohnnyR wrote:
As I pointed out above . . . you do NOT test every piece of gear your reviewers review, but then you pretended that you do in your response. Only certain items are tested by you. Please get your own facts straight if that's even possible for you to do so.

Out of morbid curiosity, I looked at the past 15 issues of Stereophile - we published 98 full reviews and follow-up reviews. Of those 98 reviews, 83 were accompanied by a full set of measurements, or 84.7%. So even if not every review includes measurements, the vast majority do, which supports Andreasmaaan's point.

I have explained to you before, in other comment threads, why we do not publish measurements of analog playback components, which is due to lack of resources. I have also explained to you why we do not publish measurements of tweaks, which is that it is difficult to determine exactly what to measure. I don't see any reason to reopen those discussions, but that you continue to be fixated on these two issues and continue to raise them is sad.

And, as I have warned you, JohnnyR, I am continuing to delete comments from you, and others, that are nothing more than flames aimed at other readers.

John Atkinson

Editor, Stereophile

JohnnyR
Get Out Your Dictionary

"Reviews" means just that, any item reviewed by your staff. If you think "tweaks" don't qualify then they should be labled as "Opinions" not a review.angle All of those full reviews you speak of were what speakers, amps, preamps, cd transposrts and the like?

Yeah I know why you don't test tweaks, that would involve you using those horrible SBT or DBT. That's a really nice system you have set up. "We don't believe in DBTs or do them or can't afford to do them"......"We don't know what to test on the tweaks"..........add the two up and you get "We don't have to show that tweaks actually do anything other than what our "reviewer" subjectively alludes to in their sighted, biased listening using their golden ears"......EXCUSES, Mr Atkinson, plain and simple.

Once more I ask you to read and comment about this link:

http://forums.audioholics.com/forums/loudspeakers/83412-diy-loudspeakers...

You seem to think DIY can't do anything right in your previous post in answer to Raghead. Insulting intelligent people that actually build better speakers than the status quo can produce is beyond being pompous in your case. THAT is what's SAD.

Don't you have some real work to be doing instead of being on the forums? I thought that was Ariel's job. Let him earn his pay.

ChrisS
Circle Game

JRusskie,

Look up "fixation" and "perseveration" in your dictionary. "Absolute truths" regarding audio products seems to exist only in your basement and in your head.

John Atkinson
What?

JohnnyR wrote:
John Atkinson wrote:
I looked at the past 15 issues of Stereophile - we published 98 full reviews and follow-up reviews. Of those 98 reviews, 83 were accompanied by a full set of measurements, or 84.7%. So even if not every review includes measurements, the vast majority do, which supports Andreasmaaan's point.

"Reviews" means just that, any item reviewed by your staff. If you think "tweaks" don't qualify then they should be labled as "Opinions" not a review.angle

I am not sure what you mean. You were objecting to Andreasmaaan's statement that Stereophile accompanies its reviews with measurements as not being true. I offered the analysis above to show that he was correct.

JohnnyR wrote:
Once more I ask you to read and comment about this link:

http://forums.audioholics.com/forums/loudspeakers/83412-diy-loudspeakers...

You need to consult with your fellow traveler GeorgeHolland, who criticized me about linking to other sites :-)

JohnnyR wrote:
You seem to think DIY can't do anything right in your previous post in answer to [Regadude.] Insulting intelligent people that actually build better speakers than the status quo can produce is beyond being pompous in your case. THAT is what's SAD.

With respect, you are arguing with the voices in your head here. I haven't written anything about DIY speaker designers not being able to do "anything right." What I have said is that a DIY designer, like yourself, doesn't subject his loudspeakers to the scrutiny of a disinterested marketplace, something professional designers do as a matter of course. So the question of whether or not DIY designs are superior to commercial designs isn't put under independent scrutiny. It remains unsupported conjecture, as far as I am concerned.

John Atkinson

Editor, Stereophile

GeorgeHolland
Mr Atkinson, you could

Mr Atkinson, you could compare DIY against manufactured speakers but of course you won't. Might upset those that can't make good speakers and I'm not talking about the DIY.

Johnny plainly stated that "tweaks" are reviewed in your magazine and online yet they aren't tested, hence you don't test every component under review. You have a reading comprehension problem? How many components reviewed by Jason Serinus have been tested? uh huh, right.

So now you wuss out about commenting on links? That proves you had nothing to say in the first place about it, so you shouldn't have opened your mouth. Cop-out time for Mr Atkinson.

The only unsupported conjecture on here is your ability to print honest reviews. Your ability to avoid testing a lot of audio components is well known by now. Very impressive spinning you do on here.

John Atkinson
Reading Comprehension

GeorgeHolland wrote:
Mr Atkinson, you could compare DIY against manufactured speakers but of course you won't. Might upset those that can't make good speakers and I'm not talking about the DIY.

Back in the early 1990s, Stereophile's Corey Greenberg was one of the judges for a DIY loudspeaker competition in San Francisco. There were 2 winning designs and, as part of the prize, those 2 speakers were subjected to a full set of measurements in Stereophile, published in the March 1992 issue. The speakers' measured performance was good but not great and certainly didn't embarrass professional designers.

GeorgeHolland wrote:
Johnny plainly stated that "tweaks" are reviewed in your magazine and online yet they aren't tested, hence you don't test every component under review. You have a reading comprehension problem?

I don't believe so. JohnnyR was wrong, in that we very rarely publish full reviews of tweak products. Of the 98 reviews published in the magazine January 2012 through March 2013, there was just one review of a "tweak," Robert Deutsch on the HiFi Tuning Supreme Fuses in May 2012. (I don't count the AudioQuest power strip reviewed by Kal Rubinson in December 2012, as Kal didn't make any comment on its sound quality, only on its utility.)

GeorgeHolland wrote:
How many components reviewed by Jason Serinus have been tested? uh huh, right.

The correct answer is none. But that doesn't support JohnnyR's case either, as Jason's reviews are published on the website Secrets of Home Theater and High Fidelity, which has no connection with Stereophile. I hardly believe I am obliged to support reviews of products in competing publications with measurements in Stereophile.

John Atkinson

Editor, Stereophile

JohnnyR
Wow Nostalgia Time Kids

"Back in the early 1990s, Stereophile's Corey Greenberg was one of the judges for a DIY loudpeaker competition in San Francisco. There were 2 winning designs and as part of the prize, those 2 speakers were subject to a full set of measurements in Stereophile, published in the March 1992 issue. The speakers' measured performance was good but not great and certainly didn't embarass professional designers."

Golly "only" 20 years ago and as we all know, nothing new has happened since then to make speaker design any better (Insert BIG sarcasm face here) cheeky Talk about a lame example Atkinson.

Yeah yeah, "full reviews" whatever, Plenty of "little revews" though all the freakin time and all they are , are OPINIONS but you know what? You let your reviewers get away with murdering the intergrity of the audio world with their stupid banter about  "blacker backgrounds,lifted veils and HUGE improvements in the sound" without them having to justify their claims, Pretty slick there pal.

Oh and Jason Serinus "reviews" as posted about the recent RMAF isn't connected to Stereophile? Give us all a break. Your EXCUSES are even lamer this time than usual.

Yep, typical Fearless Leader gobbledygook. Did you major in university in the doublespeak of Orwell's 1984? If not, then you are guilty of plagiarizing the concept.

John Atkinson
Not making sense

JohnnyR wrote:
Golly "only" 20 years ago and as we all know, nothing new has happened since then to make speaker design any better...

Again you are arguing with the voices in your head, JohnnyR. Of course a lot has happened in the past 20 years, especially the advent of low-cost measuring equipment. But you're missing my point, which is, as I wrote earlier today, that a DIY designer doesn't subject his loudspeakers to the scrutiny of a disinterested marketplace, something professional designers do as a matter of course. You have repeatedly claimed that your speakers are as good as if not better than commercial designs. However, unlike engineers like Kevin Voecks, Paul Barton, Richard Vandersteen, Jeff Joseph, etc, your ability to pay your mortgage and feed your family doesn't depend on your skill as a speaker designer. Theirs does, and it makes a difference.

Look, if you seriously believe your speaker designs are fully competitive with commercial designs, email me your measurements. I'll get back to you with my interpretation of what they mean, assuming they are as comprehensive as what I publish with every Stereophile speaker review. Put up or shut up.

JohnnyR wrote:
Yeah yeah, "full reviews" whatever, Plenty of "little revews" though all the freakin time and all they are , are OPINIONS but you know what? You let your reviewers get away with murdering the intergrity of the audio world with their stupid banter about  "blacker backgrounds,lifted veils and HUGE improvements in the sound" without them having to justify their claim...

You're not making sense. "Andreasmaaan" was discussing reviews in the magazine being accompanied by measurements. I was discussing reviews in the magazine being accompanied by measurements. You seem to be discussing something that exists only in your mind. And unlike Professor Dumbledore's quote, that doesn't mean it's real :-)

JohnnyR wrote:
Oh and Jason Serinus "reviews" as posted about the recent RMAF isn't connected to Stereophile?

That was a show report. No-one other than yourself equates a magazine's coverage of a show with its formal equipment reviews. Are you seriously suggesting that magazines shouldn't publish show reports without including a full set of measurement data for each room they report on? Really? There isn't an audio magazine or website that does that.

John Atkinson

Editor, Stereophile

JohnnyR
BLAH BLAH BLAH

Go have another pink drink Fearless Leader.

Me let YOU interpret my frequency response? Don't make me LAUGH. You can't even interpret Wilson's ragged-ass frequency response.

[usual flames deleted by John Atkinson]

John Atkinson
Making my point

JohnnyR wrote:
John Atkinson wrote:

Look, if you seriously believe your speaker designs are fully competitive with commercial designs, email me your measurements. I'll get back to you with my interpretation of what they mean, assuming they are as comprehensive as what I publish with every Stereophile speaker review.

Me let YOU interpret my frequency response? Don't make me LAUGH.

You make my point for me, JohnnyR. It was a serious offer on my part. Professional loudspeaker designers send out their products for review and for the public scrutiny of possible customers. Amateur speaker designers, such as yourself, are often afraid to let anyone else judge their work.

John Atkinson

Editor, Stereophile

Regadude
There you go

I have been challenging Johnny to provide pictures and/or the specs of his speakers. He has never met my challenge.

Now the editor of Stereophile offers to look over his data. Johnny again makes up excuses to avoid providing any information on his heavenly speakers.

Big talker...

bernardperu
Contamination

Dear John,

What should have been an enlightening discussion of your great lecture has been contaminated by ill-conceived comments. Some individuals have posted comments that have created an environment of nastiness. 

I would seriously urge you to start deleting ill-conceived comments from now on, whether they use flammable language or not. Some people are just turned on by the flames, but others aren't. And those others would much rather focus on the joy of engaging in the music and exchanges related to it.

You do not run a state-owned site but a private one that is likely to become a lot less lucrative (in all senses) if such nasty comments are to become visible.

I am not even pointing out names and we all know who I am talking about.

I am just giving you feedback that comes from an avid reader of Stereophile online. 

Finally, the correction of my use of English was mentioned by one of these individuals. Please allow me to say that English is my second language and it is oxidized. Sorry about that. 

Thanks!