Do blind audio tests settle any arguments about audio equipment? Or are they just a way to find out who has the best listening abilities?

<I>Stereophile</I>'s Jon Iverson maintains that blind audio tests can only provide judgment on the listening acuity of those taking the test, not on the relative merits of the equipment used in the process. But the subject is still a hot topic in our forums. What do you think?

Do blind audio tests settle any arguments about audio equipment? Or are they just a way to find out who has the best listening abilities?
They determine useful differences between components
27% (35 votes)
They determine who is the best listener
17% (22 votes)
They're good for both
21% (28 votes)
I have no idea
8% (11 votes)
I could care less
27% (35 votes)
Total votes: 131

COMMENTS
Al Marcy's picture

Deaf TV tests are my thing ...

G.S.  White's picture

I miss the speaker shoot-outs! Especially for the crowded field of sub-$2000/pair speakers. Bring 'em back!

Don Mawhinney's picture

I have become convinced that some of us just have better ears. After all, we know there are differences in eyesight. I know I have a severe (70dB) rolloff above 6kHz, which, I'm sure, affects my ability to perceive subtle differences in "air" and inner detail. I also believe there is a learning element as well. The capabilities of my system are well beyond merely bass/treble extension and clarity. So where does the next level of performance come from? What do I listen for now? I have difficulty recalling the sonic signature of the "A" of an A/B test when listening to the "B," whether it be cables, CD players, capacitance loading of my Helikon, etc. Often, in fact, both may be quite listenable. Which is more real, and what listening venue do you compare them to? Still listening and learning ....

DAB, Pacific Palisades, CA's picture

I agree with Jon: too much ivory tower (no reference to loudspeakers) nonsense.

Dave in Dallas's picture

Seems to me the only thing that's certain is the listener cannot see the components. My problem with the approach is the assumption that the pieces of equipment being evaluated are getting fair and equal treatment. I'm not sure that's safe to assume, particularly when you're talking about high-end gear.

Gerald Clifton's picture

I checked "They're good for both," but this has to be qualified. I think the closer the two components are in overall quality, the less useful blind testing becomes. If you have a well-known or overhyped brand that is quite different from a lesser-known brand put beside it, and the latter sounds obviously closer to what you remember from the concert hall, you will learn something very useful, without having had your mind cluttered by brand recognition or any preconceived bias toward the hyped equipment. But when the differences get very subtle, it can become a parlor game to discover the best listener. You have to live with components that are very close in sound quality to uncover the subtle differences that will ultimately separate one as superior. JA's piece a couple of months ago, about his experience with a Quad amp and another amp, hit the mark. Selecting the best system components, sadly, is a trial-and-error process spanning many weeks when the choices are close. It will take a while for the important differentia to emerge. Then, if you initially chose wrong, you just spend more money, upgrade, and learn from the experience. One aspect of choosing the best for your listening that cannot be decided blindfolded is, which component sounds the best with the greater percentage of your software? That takes weeks or even months to determine. Anybody out there want to listen blindfolded for weeks or months?

Brankin's picture

First, the saying is "I couldn't care less," as in could not care less. If you "could" care less, that means you do care somewhat. A blind audio test is so meaningless on so many levels regarding how I purchase equipment. I give it even less consideration than a subjective review. This is a hobby for me (the equipment, not music), I couldn't possibly care less! If I get screwed or taken on something, so what! I'll sell it! What I spend is disposable income. I should probably be sending the money to a food bank...

Yiangos's picture

Personally, I'd go for group testing, but if blind listening is only useful in determining who has the best listening abilities, well, here's a good way to find out which reviewer to trust and who to kick out!

Louis P.'s picture

By now, I have heard for myself all of the things that are not supposed to exist based on double-blind tests. And I did not always know what, if anything, had changed. Best example: I noticed the difference in the Theta Pro Basic DAC on back-to-back trips to my dealer when it went from version II to III, without knowing that I was listening to the new model (I bought the III). Sorry folks, life isn't experienced double-blind. Once one amplifier sounds different from the rest, it's no longer true that they all sound the same, so who cares which panel members can tell the difference and which can't? This whole thing is like an annoying zit that won't go away, and it wastes valuable (finite) space in the printed magazine.

Mike Agee's picture

Everything being equal, they would be fine, but I can't see how everything could practically be made equal. Take cables: Even my broken-in cables don't relax sonically for minutes to hours after inserting them, while using a switch would introduce a crude variable, and waiting for them to relax would obscure the memory of the other cables being compared. Many changes in my system take hours if not weeks to fully apprehend; how could a quick switch encompass subtle yet important distinctions? Most systems sound very different depending on where the listener is in the room, so for group blind tests, how objective would opinions be coming from listeners who are receiving the same sounds differently? The "blind" part of the tests makes them impractical; better to use trustworthy humans doing their best who are aware of the variables and take them into account.

Roy E.'s picture

You can't design an objective test to evaluate differences that are mostly subjective.

David L.  Wyatt jr.'s picture

If you've done any graduate work in a social science, you know how much predispositions affect perceptions. We are not Mr. Spock, even when we want to think we are. I think blind tests could be easily rigged, but anything that takes extraneous variables out of the equation improves accuracy.

Allen's picture

They determine useful differences, but they also may show who is the better listener. They may also show up irrelevant differences such as something that sounds good initially but is much harder to live with! Dangerous!

Al Earz's picture

I think it does reveal the better sound of the equipment and also the listener's ability to discern the different qualities of the equipment. I have always loved doing A/B comparisons in showrooms. It always amazes me that the salesperson seems to enjoy it as much as I do.

Colin Robertson's picture

You know, who cares? I am at the point where, when I listen, I just try to keep an open mind about things, so as to not bias myself as to which is better. I have never felt the need to blindly listen to gear. I like to know how each component works, and how they differ from other components, when I listen. I find that's the best way to determine what makes a component sound the way it does.

Woody Battle's picture

If set up correctly, blind tests could be very useful for judging audio equipment. However, the majority of the advocates for blind testing seem to have no idea how to set up a test correctly. The way most blind tests have been set up in the past would obscure all but the very worst audio problems.

audio-sleuth@comcast.net's picture

I have a Big Mac for lunch one day, a Whopper five days later. I could still tell you which I liked better, right? Why would I switch back and forth real quick when trying to tell if I like something or not? Try it as you would use it. If, when you listen to music, you switch back and forth a lot, then auditioning it that way would be okay. I just sit and listen.

Clay White's picture

If they have any value at all, and I think that's a stretch, they might provide a way to award the "best listener" merit badge. Certainly, they pervert the purpose we all have in developing our systems - enjoyment of our recorded music. I fervently hope that the results of this poll will put an end to the seemingly endless debate (that's the kindest label I can use) between JA and AK. If JA will get back to making recordings, some of us will buy them. Arne isn't likely to make any contribution to Stereophile.

Norman L.  Bott's picture

A person's ability to focus and their ability to listen vary with each individual. I do not believe that blind testing helps anyone other than the one doing the listening.

bjh's picture

Should have an option to select: "They accomplish neither."

Gerald Neily's picture

The most meaningful blind test is between system X and live music, and live music wins every time, even if your ears aren't so great. This demonstrates that the piddly little differences between components don't matter much relative to the supreme live music standard. So let's quit arguing.

Patrick Taylor's picture

If you use statistically diverse listeners, then you will have useful information about the equipment, and if you use statistically diverse equipment you'll see something about the listeners.

Teresa's picture

A/B and A/B/X testing does not work because it introduces stress into the equation. Music listening should be relaxing. A better idea is to listen to each component for 30-60 minutes in a relaxed environment with lights dimmed and eyes closed. Afterwards, take notes. No switching back and forth, ever! Compare the notes and sonic memories of both tests. This is the correct procedure to compare components.

OvenMaster's picture

IMHO, blind audio testing should be the only way to go. I wouldn't reveal the brand name of a product to test personnel or let the product be visible if at all possible until the test results have already been submitted to an editor. Knowing the brand name of a piece of gear or seeing a product's build quality or appearance in advance can only prejudice someone's perception of the product, especially if they have knowledge of a brand's qualities. If it's about the sound, then it should remain about the sound, and nothing else.

Johannes Turunen's picture

Sometimes I do close my eyes to hear better...

Tim Rhudy's picture

I could not care less!

Kurt Cannon's picture

Almost anyone can be trained to be a "best listener" with a bit of ABX practice. Blind tests are quite reliable in determining whether there are actually sound differences between components. What they don't do is tell you whether a given component will continue to sound great year after year.
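[For readers wondering how ABX reliability is typically judged: a run of trials is scored against chance with a one-sided binomial test, since a pure guesser picks the right answer half the time. A minimal sketch in Python; the 12-of-16 trial count is purely illustrative, not taken from the comment above.]

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` ABX trials by guessing alone (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative run: 12 correct answers out of 16 trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # ~0.038, below the common 0.05 threshold
```

A result like this is usually read as evidence the listener heard a real difference, while, say, 9 of 16 (p ≈ 0.40) is indistinguishable from guessing.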

John M Humphries's picture

Blind audio testing certainly will highlight the listening ability of the tester, but will it provide information about the component quality? In theory, I believe it can. In practice, I think it is going to require a lot of testing over a long period of time. Quick A/B comparisons can sometimes highlight some differences, but other more subtle yet important qualities of the component will be missed. While not blind testing, realizing the particular weaknesses of my current speakers took months of use. Most people seem to think they will suss out the differences in a component in an hour or a day. This will only happen when one component is clearly superior to the other.

Jason M,'s picture

To state the obvious, subjective tests are just that. If one can't hear a difference, then does it matter to the average consumer? I will agree that over time, I have noticed flaws in my system that were not present in the beginning. My wife could say the same of me. I do think that the establishment press (those who rely on advertising) and manufacturers are afraid of blind testing. Not because there aren't differences in components, but because the average listener will likely come to the conclusion that positive differences are generally not worth the incremental cost. In any case, unless a valid control is established, the only useful data we get from a blind test is what type of sound the listener prefers. Since no one in this hobby can seem to agree on what the standard should be, that might be the best argument yet for blind testing.

Ozzie's picture

If you can hear no differences between two components under test, save your money and stick with what you have. Buy more music with your windfall. Did the term subjective not come from someone who raved about a component? When pressed, the same person could not definitively point out any real differences. They then folded and said that it subjectively sounded better. Come on guys, music is subjective. The reproduction of music is not. Otherwise, I have a red violin that subjectively sounds like a Stradivarius.
