Audio, Meet Science

For a field based on science, high-end audio has a relationship with its parent discipline that is regrettably complex. Even as they enjoy science's technological fruits, many audiophiles reject the very methods—scientific testing—that made possible audio in the home. That seems strange to me.

Of course, there are reasons—not necessarily good ones—why some people in audio have a low opinion of science. The entire hobby/industry suffered a grievous blow a generation or so ago, when too much focus was put on something—how an amplifier measured—that resembled science, and too little on how such products actually sounded. A little later, the Compact Disc's promise of technical "perfection" yielded dubious sonic benefits when compared with an older, simpler medium, the LP. And there has been, over the years, no shortage of self-identified scientific types manipulating purportedly scientific tests, or ignoring inconvenient test results, to support their preconceptions.

Subjectivists, meanwhile, sometimes seem to intentionally hold themselves up for ridicule. A few audio writers, especially for the online 'zines, seem eager to prostitute themselves for the latest preposterous product—the Intelligent Chip, the Shakti Hallograph Soundfield Optimizer, the Machina Dynamica Brilliant Pebbles, the Marigo Audio Lab Dots, the Tice Clock. Meanwhile, some prominent industry folks have consistently failed, in my opinion, to maintain a sufficiently skeptical posture toward such products. It doesn't help that in audio there has been a long tradition—a dogma, even—of "trusting your ears," despite abundant evidence from neuroscience and cognitive psychology that our ears and other senses, though supremely acute, are supremely unreliable.

Yet science—and scientific testing—has much to offer audio. Audiophiles ought to embrace scientific methods with the same healthy skepticism with which they embrace sighted, subjective equipment evaluations: as a tool that, though subject to misuse, is invaluable in the hands of honest people with the right set of skills.

One form of testing that's especially important—and especially controversial—is the use of rigorous methods to validate apparent perceptions: Did you really hear what you thought you heard? Is the sound really different with that new amplifier/cable/CD player from what it was with the old one? Does putting that photo in the freezer really change the way the system sounds?

Rigorous tests can offer scientific validation of subjective observations.(footnote 1) Such validation doesn't come easily, but when it comes it's a force for truth, justice, and the American way.

Such tests have two possible outcomes: Either you establish scientifically that there's a difference between A and B, or you fail to establish a difference. And here's a crucial point that's often overlooked by people on the objectivist side: While a positive result establishes the reality of a perception to a certain level of confidence, a null result—a failure to reliably detect a difference—does not indicate the nonexistence of that difference.

To cite an oft-quoted phrase that's sometimes attributed to Carl Sagan, "absence of evidence is not evidence of absence" (footnote 2). Or, to cite Les Leventhal in an AES paper, "Properly speaking, a statistical conclusion about H0 from a significance test cannot justify 'accepting' the scientific hypothesis that differences are inaudible" (footnote 3).

In simpler language: Differences we think we hear but that testing fails to validate may nonetheless be real. An experiment that fails to show a difference between the sounds produced by two amplifiers does not indicate that no audible difference exists.
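The arithmetic behind this point is easy to check. Below is a minimal sketch, with numbers of my own choosing rather than anything from the column: a hypothetical 16-trial ABX session scored with an exact one-sided binomial test. A listener must score 12/16 to reach significance at the 0.05 level, yet a listener who genuinely hears the difference 60% of the time clears that bar less than a fifth of the time — a null result from such a listener is the expected outcome, not evidence of inaudibility.

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value of
    scoring k or more correct out of n trials if each trial succeeds
    with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 16  # trials in a hypothetical ABX session (an assumed protocol)

# Smallest score that reaches significance at the 0.05 level under guessing:
threshold = min(k for k in range(n + 1) if binom_tail(k, n) <= 0.05)
print(threshold)        # 12 correct out of 16

# Statistical power: the chance that a listener who truly hears the
# difference 60% of the time actually reaches that threshold.
power = binom_tail(threshold, n, p=0.6)
print(round(power, 3))  # ~0.167: the test misses this listener ~83% of the time
```

The null result here says nothing about the listener; it is simply what an underpowered test usually returns, even when a real difference exists.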

Many of the rigorous listening tests that are relevant to audio are difficult and tiring to perform, requiring serious concentration, many repetitions, and sometimes heavy lifting. Things can be made easier by using many listeners at once, but then the only conclusions you can draw are about the average characteristics of the group. The group average may not permit distinguishing cable A from cable B, but that doesn't mean a particularly golden-eared member of the group can't.
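The group-averaging pitfall can also be sketched with invented numbers (a hypothetical panel, not a real test): nine listeners score exactly at chance while one golden-eared listener scores 14/16. Individually, the golden ear is highly significant; pooled into a group total, the result vanishes.

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value under the guessing hypothesis (p = 0.5)."""
    return sum(comb(trials, i) for i in range(correct, trials + 1)) / 2**trials

# Hypothetical panel: nine listeners score exactly at chance (8/16);
# one "golden-eared" listener scores 14/16.
scores = [8] * 9 + [14]
trials_each = 16

golden_p = p_value(14, trials_each)
print(round(golden_p, 4))     # ~0.0021: individually, clearly significant

pooled_correct = sum(scores)               # 86 correct
pooled_trials = trials_each * len(scores)  # out of 160 trials
pooled_p = p_value(pooled_correct, pooled_trials)
print(pooled_p > 0.05)        # True: the group average buries the result
```

A pooled test answers a question about the average listener; it cannot rule out that one member of the panel hears the difference reliably.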

One of the beautiful things about science is that often you can make your experiments more sensitive by applying new technologies. The likelihood that a real effect is slipping past undetected can be squeezed and squeezed until it approaches zero, and you can begin to feel sure that there's really nothing happening. But in audio, there's not a whole lot you can do to make your tests more sensitive. Your measuring instruments are limited not by technology, but by the ear/brain system of very human listeners.

The dullness of our tests—their insensitivity—leaves a space where reviewers can roam free. Within that space, reviewers may wax poetic about palpability and sweetness and air and light without fear of contradiction by science. It's a space where much could be happening—apparently is happening—but where it's impossible to be sure whether it's happening or not. A lot of life is like that.

Of course, this is not the only space where audio reviewers operate. They may—and frequently do—roam outside this safe habitat, making observations about easily audible things that no doubt could, with a bit of work, be scientifically verified. But there is little incentive for audio writers to take such tests, especially when they are already sure of what they've heard.

If my argument so far seems to favor the subjectivist side, it's now time to rebalance things. On reading a paper by a young colleague, the great physicist Wolfgang Pauli (1900–1958) commented, "It's not even wrong." He meant that the ideas proposed in the paper could not even be tested. For Pauli, this was the ultimate insult.

Luckily for us, the human population is diverse. Not everyone feels as Pauli did. Yet a science-based activity without scientific constraints, in which the only distinction among tweaks that appear to be nothing more than snake oil, well-designed amplifiers, and speakers with good dispersion characteristics is the vicissitudes of personal aural experience, makes me uncomfortable. I find myself craving some certainty, if only to put a little more space between the creations of a skilled audio designer and, say, a jar of pretty rocks.



Footnote 1: Perhaps the best example is the testing of audio codecs and lossy compression schemes intended for the delivery of digital audio. While these tests almost certainly underestimate the audibility of some compression artifacts, they routinely establish the audibility of others.

Footnote 2: The Demon-Haunted World: Science as a Candle in the Dark (1995).

Footnote 3: "How Conventional Statistical Analyses Can Prevent Finding Audible Differences in Listening Tests," Leventhal, presented at the October 1985 AES Convention in New York. Preprint 2275. See also his article in Stereophile. H0 is the null hypothesis, the hypothesis that the effect under test is inaudible. Leventhal is saying, in other words, that a failure to detect audibility is not evidence of inaudibility.

COMMENTS
0pteron

The topic and the discussion are both quite stimulating. I would like to offer a possible explanation for the reluctance of many folks to view science with the respect it might otherwise deserve:

So far as I know, there is no test that measures the various apparent dimensions of a soundstage. So far as I know, there is no test that tells me whether a given sound is that of a trumpet, oboe, cornet, violin, viola or flute playing a given note.

Until such time as these tests are available, there is a gap between what is being measured and what I, and I believe a lot of other people, are interested in.

It is this kind of test result that will tell me whether or not a cable or other component makes a difference.

When these tests are used, I will be delighted to pay more attention to science. Until then, the contribution of science is interesting, but not nearly as important as my ears in evaluating equipment and recordings.

Cheers.

Jon

John Atkinson
0pteron wrote: "So far as I know, there is no test that measures the various apparent dimensions of a soundstage. So far as I know, there is no test that tells me whether a given sound is that of a trumpet, oboe, cornet, violin, viola or flute playing a given note."

Exactly so. That is because all these are constructed by the brain based on the information that reaches the two ears. In that sense they are not real. The reality is two audio-bandwidth pressure waves that vary with time emitted by the two loudspeakers. The "oboeness" or "trumpetness" etc, and where the acoustic objects that represent those instruments are located within the soundstage, are constructs within the listener's brain and thus are not available for measurement.

I examined this subject in the introduction to my January 1992 review of the Westlake BBSM-6F loudspeakers - see www.stereophile.com/standloudspeakers/192west .

John Atkinson
Editor, Stereophile

Jorgensen

John - I agree that "The 'oboeness' or 'trumpetness' etc, and where the acoustic objects that represent those instruments are located within the soundstage, are constructs within the listener's brain and thus are not available for measurement," in the sense that "measurement" here means measurement using a microphone and a computer or similar. But this does not affect whether we can or should measure differences by comparing audio components in a blinded fashion, or whether this type of testing is scientific and in that sense more reliable than unblinded testing.

Let me start by stating what I am not saying. I am not saying unblinded tests are useless. I read Stereophile with great interest and feel confident that you have very capable reviewers who have the great advantage of having heard lots of equipment that I have never heard. Some of them may even be equipped with the famous 'golden ears'.

There is no doubt that someone looking to buy an audio component would be interested in tests that were as reliable as possible. I think blinded testing is generally more reliable than unblinded testing, particularly when it comes to 'soft' outcomes that can be influenced by psychology and preconceived notions, such as is the case in hi fi, whether we like it or not.

This is supported by good research and is one of the reasons that blinded wine tastings are so much fun (another being the alcohol, of course). Being told that a wine is very expensive will, on average, make people evaluate it as better tasting than if they are told it is cheap. You could call this a placebo effect or say that the reason the wine tastes better doesn't really matter, as long as you get the good experience. But if I found out that I had really just bought a case of cheap wine at a high price, I would still feel cheated.

I therefore see blinded tests as a potentially valuable supplement for Stereophile that could put it in a league of its own among the plethora of hi fi mags in this world.

geoffkait

The fact that tests are blind has precious little to do with how reliable they are. Much of the reliability and success of blind tests, like any type of test, has to do with the experience/competence of the tester, the resolution/competence of the audio system used for testing, as well as less definable or even unknown factors. Besides, there are no standards for blind tests, and even proponents of blind tests disagree on the protocols that should be employed. So it goes....

Geoff Kait
Machina Dynamica

Jorgensen

I'm sorry Geoff, but this is where you are mistaken. Blinding is pivotal for reliable evaluations of subjective outcomes, such as when testing a new versus an old painkiller. It really isn't just my opinion. It is a mainstay of scientific testing, and the effect of blinding in this context is extensively documented in the scientific literature. Those who are not blinded will substantially over-estimate the effect of interventions aimed to change subjective outcomes. This is why blinding is required in guidelines describing how to test subjective outcomes, and why no respectable medical journal will accept a study that did not use blinding if this was reasonably possible (blinding isn't always possible, e.g. when comparing surgery to medication). Even the best scientist (or most experienced Stereophile reviewer) can be influenced in their judgement of effect by looks, price, or expectations. You can add any number of factors to this list. This is not a flaw on their behalf. It is basic human nature. But we can do something to minimise this effect: blind those who assess.

To improve the reliability of testing hi fi equipment, it would be a valuable addition to a regular review if the components were sent before a blinded panel, to see if the findings of the experienced reviewer with all the background knowledge could be confirmed. Much like when you send something for a lab test.

geoffkait

There you go again with the medical analogy silliness. That analogy has been used by proponents of blind testing since Christ was in diapers and it's still not a convincing argument. As if we're supposed to respect the medical profession, that's a laugh. The only reliable way to find out how a component, speaker or any audio device is going to sound in your system is to get a hold of one and insert it into your system. Then you can blind test until the cows come home. Everything else is a pig in a poke. Why would you take someone else's word for anything, anyway? That's what I'd like to know.

If blind testing is as important as you claim how come no one actually uses it, I mean except for a few eccentrics? Good luck with all those medical journals. :-)

"There are none so blind as those who will not see." - old audiophile axiom

Geoff Kait
Machina Dynamica

Jorgensen

Geoff - I wonder why you read Stereophile at all? You write that:

"The only reliable way to find out how a component, speaker or any audio device is going to sound in your system is to get a hold of one and insert it into your system. (...) Everything else is a pig in a poke. Why would you take someone else's word for anything, anyway? That's what I'd like to know".

If this was the case, there would be no justification for hi fi journals, blinded testing or not.

Audio components do have specific characteristics and qualities that can be described in a meaningful way to help those in the market for such things, keeping in mind that their importance is partly reliant on the context in which they are used (I think anyone would agree with you on that). If you do not accept this, everything stops.

I can easily understand why you wouldn't trust the medical profession. Too many scandals have come from thoughtless collaboration with big pharma. But what I am asking you to trust is science, not the medical profession. I do not simply claim that blinding is important. I point out that there is solid documentation for this. It is a fact in the same way that evolution is a fact. I hope you do accept evolution, despite examples of constructed 'missing links' by misguided scientists.

I can think of several reasons that blinded testing in hi fi has not gained acceptance.

1: It is uncomfortable to be proven wrong. It is much nicer to remain within your frame of understanding of this world, especially if you have made a career of selling your subjective judgements (no offense meant, I appreciate that skilled people have done this).

2: It could threaten the existence of entire audio businesses (none mentioned, none forgotten). If trained listeners cannot identify a difference in a blinded test, is it an important difference at all, if one exists?

3: It could threaten income from advertising for hi fi journals.

4: Most audio buffs are not scientifically trained and do not know how easy it is to be influenced by context when making subjective judgements.

5: Tradition in the field.

This list can be expanded ad nauseam.

geoffkait
Quote:

Geoff - I wonder why you read Stereophile at all? You write that:

"The only reliable way to find out how a component, speaker or any audio device is going to sound in your system is to get a hold of one and insert it into your system. (...) Everything else is a pig in a poke. Why would you take someone else's word for anything, anyway? That's what I'd like to know".

If this was the case, there would be no justification for hi fi journals, blinded testing or not.

Good point. I didn't think of that. Heh heh

Quote:

Audio components do have specific characteristics and qualities that can be described in a meaningful way to help those in the market for such things, keeping in mind that their importance is partly reliant on the context in which they are used (I think anyone would agree with you on that). If you do not accept this, everything stops.

As I stated previously, the only meaningful way to determine how a particular audio product will perform, and if it will perform to your satisfaction is to obtain it and see how it works out in your system - with all the myriad variables of your room and your system and your ears. Isn't that really the only context that matters?

Quote:

I can easily understand why you wouldn't trust the medical profession. Too many scandals have come from thoughtless collaboration with big pharma. But what I am asking you to trust is science, not the medical profession.

Geez, where have I heard that before? Oh, yeah, in JA's article, As We See It, the subject of this thread -- "Trust Science." Ha Ha

Quote:

I do not simply claim that blinding is important. I point out that there is solid documentation for this. It is a fact in the same way that evolution is i a fact. I hope you do accept evolution, despite examples of constructed 'missing links' by misguided scientists.

I hope you're not trying to say that one must accept evolution to be a good, scientific audiophile. Next you'll be telling us audiophiles have to accept Catholicism. There's a lot of documentation for that, too. I suppose those who don't accept blind testing are condemned to eternal darkness. Ha Ha

Quote:

I can think of several reasons that blinded testing in hi fi has not gained acceptance.

1: It is uncomfortable to be proven wrong. It is much nicer to remain within your frame of understanding of this world, especially if you have made a career of selling your subjective judgements (no offense meant, I appreciate that skilled people have done this).

Unfortunately, when results of blind tests are negative and it appears Science has saved the day, guess what? The test was not performed properly by the proper person under the proper conditions. Thus, just like any test, blind tests don't necessarily prove anything. You can't hang your hat on them. There are no standards for blind tests. Everyone is just stumbling around in the dark. Is it possible that promoters of blind tests have something to sell, too?

Quote:

2: It could threaten the existence of entire audio businesses (none mentioned, none forgotten). If trained listeners cannot identify a difference in a blinded test, is it an important difference at all, if one exists?

Sorry, it's already been demonstrated many times that they can't identify a difference in a blind test, at least reliably. If your statement were true The Amazing Randi would be in the poorhouse now. Someone should come up with the definitions of "a trained listener" and "an important difference in the sound."

Quote:

3: It could threaten income from advertising for hi fi journals.

It's more likely that promoting something so fraught with contradictions and controversy as blind testing would threaten income from advertising for hi fi journals.

Quote:

4: Most audio buffs are not scientifically trained and do not know how easy it is to be influenced by context when making subjective judgements.

I suspect you have it backwards. The scientifically trained audiophiles realize how suspect blind tests are for the reasons I've stated. And the trained listeners among us sympathize with those who are unsure of their listening ability, the ones who need someone else to lead them by the ear, so to speak.

Quote:

5: Tradition in the field.

Hmmmm.... is that anything like the tradition of dunking witches in water until they drown? Ha Ha

Quote:

This list can be expanded ad nauseam.

I'm sure you're correct. I've read Zen and the Art of Debunkery, too.

Geoff Kait
Machina Dynamica
Advanced Audio Conceits

c1ferrari

Good read, Jim. Thank you.

brindoporeso

Jim Austin brings up the issue of "'trusting your ears,' despite abundant evidence from neuroscience and cognitive psychology that our ears and other senses, though supremely acute, are supremely unreliable." On the subjective level, this unreliability is not necessarily a problem. We have all experienced optical illusions in which we perceive patterns and objects that demonstrably have no existence outside of ourselves, yet the perception really happens. Doubt arises only concerning external reality, not our perceptions. The philosopher Daniel Dennett points out the absurdity of saying, "I think I may be in pain, but I'm not sure." What's to doubt?

Also, a word of caution: Although one is perfectly correct not to expect anything from "a jar of pretty rocks," that is not the real issue. Penicillin was initially derived from a glop of ugly mold. Politicians often ridicule spending money on research involving what they consider silly organisms, even though modern genetics derives from the study of fruit flies. The point is that both pretty rocks and ugly mold are subject to the same logical constraints. Popperians utilize "Conjecture and Refutation," such that anything can be the source of theories (dream, drug-induced hallucination, etc.). But theories must then stand up to the necessary logical tests.

Having finally visited the website devoted to “Brilliant Pebbles” (BPs), I can now apply the general principles discussed earlier and make some concrete comments. Of the issues arising in the paper, I will focus on one. I am not a merchant and have nothing to sell. I merely wish to offer an example of the sort of scrutiny and criticism to which I have been subject in my world.

Suppose a sugar pill with blue dye costing $100 per pill benefitted 20 percent of patients, and that a sugar pill with red dye costing 2 cents per pill similarly benefitted 20 percent of patients. The claim that the blue dye was the cause of the effect would not be supported. The effect of the blue pill is fully explained by the simple fact that it is a pill. That is, it is fully accounted for by its "pillness." It is only by exceeding this threshold that something would deserve to be called a drug. Also, the difference in price is not justified by a difference in performance. Failure to outperform a placebo earns a candidate drug no medals. If an expensive treatment works, but an inexpensive placebo works just as well, it allows patients to enjoy the effect while simultaneously saving money.

The principal issue regarding BPs is that the discussion, as presented on the visited website, offers an explanation for why they would outperform generic, amorphous stones, but no report of the use of the control that would isolate this factor.

The difference in performance between BPs and generic stones is either statistically significant or it is not. If it is, then the null hypothesis is falsified and mere “stoneness” or ”petrosity” is shown to be insufficient to account for the observed effect. If the difference is not significant, then the null hypothesis stands unfalsified and the effect of BPs is explained by their “stoneness,” which is also a property of generic stones that are available in vacant lots at a substantial discount.

Which is the case? The answer may be published elsewhere, but there is no report of it on this particular website. In the absence of this control, there exists only an explanation in search of an explanandum. Thus, this report taken in isolation offers little of interest to science. (Its value as poetry is left to the appraisal of poets.) This is not to say that BPs do not work. But even if they do, this is really only worth knowing when it is also known that ordinary stones do not. A theory is offered from which is derived the expectation that ordinary rocks would not work, but this remains an untested hypothesis, not empirical data. Woulda-coulda-shoulda is not a valid category in logic any more than it is in Simon Says. This indeterminacy leaves the null hypothesis on the throne, yawning and trimming its fingernails. Ludwig Wittgenstein reformulated Ockham's Razor to say that whatever is explanatorily unnecessary is meaningless. A defense attorney should have to fix what ain't broke only when the jury is too stupid to recognize when it ain't broke. Competent jurors and scientists need no such help. Exerting effort to get to where one already sits (fait accompli) is, as per Wittgenstein, unnecessary, wasteful, and meaningless.

This should mitigate possible concerns about veracity. Even if this BP business is all a lie, it is an insufficient one, such that even if it is all true, it is still not enough. It takes the form of an accent fallacy, as if one said, "Jim Austin was sober today." Perhaps he always is. "These stones work" may be a true statement, but it raises the question: "Which ones do not?" Crystalline structure per se can be found in table salt, a bag of which should be tried before resorting to anything more expensive in case its crystals turn out to be fortuitously tuned for this application.

Health insurance providers do not pay for experimental treatments because they are waiting for the other shoe to drop. In science, ignorance of the existence of that other shoe is intolerable. Simply being “pretty rocks” is not grounds for derision, but if confession were to reveal this to be a hoax, grounds for surprise would be lacking.

Little controversy arises about all this on the superficial level of commerce: a commodity is produced, people pay money for it, end of story. Merchants are not always required to show their work exhaustively and science is not always the issue. In a recent documentary, when astronaut Frank Borman was asked about the science objectives of the Apollo 8 mission, he replied, “Science?!” Additionally, the occupancy of hot kitchens (such as peer-reviewed scholarship) is voluntary. (“Doc, it hurts when I do this.”) However, when science is the issue, incomplete experimentation and testimonials submitted by persons of uncertain ontological status are not enough to command scholarly attention.

With regard to the Jorgensen exchange: The null hypothesis is not an opinion, and was in force prior to the cited Lancet paper. The report of no effect does not change the situation from ignorance to enlightenment. It reports the situation to be unchanged. Upon acquittal, the status of a criminal defendant does not change to innocence from something else. Jorgensen’s students need to understand these things if they hope to qualify, respectively, for science or jury duty. Whatever one should or should not do prior to forming an opinion, my students would not be taught to form opinions but to recognize facts.

geoffkait
Quote:

Jim Austin brings up the issue of "‘trusting your ears,’ despite abundant evidence from neuroscience and cognitive psychology that our ears and other senses, though supremely acute, are supremely unreliable.”

Yes, and isn't it ironic that it's JA who brings up the issue since he's one of those who actually doesn't trust his ears, you know, what with all the demands for data and measurements? Nor has he ever heard the BP demonstrated or even seen them, for that matter, maybe in a picture LOL. That's a favorite argument of skeptics, especially skeptics of outrageous and controversial tweaks, one right out of Zen and the Art of Debunkery.

Quote:

On the subjective level, this unreliability is not necessarily a problem. We have all experienced optical illusions in which we perceive patterns and objects that demonstrably have no existence outside of ourselves, yet the perception really happens. Doubt arises only concerning external reality, not our perceptions. The philosopher Daniel Dennett points out the absurdity of saying, “I think I may be in pain, bit I’m not sure.” What’s to doubt?

Fortunately for experienced audiophiles, optical illusions have precious little to do with home audio reproduction unless, of course, one's not really sure what he's listening to, you know, doesn't trust his ears! No one is saying only one test is always sufficient; sometimes these things are more difficult to pin down, so multiple tests might be necessary; perhaps using a different system for the test might be necessary. More negative or ambiguous results have been obtained on bog standard systems than this world dreams of. Ha Ha So, it's not really so much of a philosophical issue, as you seem to think, but more of the old goldfish in a fishbowl issue, if you see what I mean.

Quote:

Also, a word of caution: Although one is perfectly correct not to expect anything from “a jar of pretty rocks,” that is not the real issue. Penicillin was initially derived from a glop a ugly mold. Politicians often ridicule spending money on research involving what they consider silly organisms, even though modern genetics derives from the study of fruit flies. The point is that both pretty rocks and ugly mold are subject to the same logical constraints. Popperians utilize “Conjecture and Refutation,” such that anything can be the source of theories: (dream, drug-induced hallucination, etc.). But theories must then stand up to the necessary logical tests.

Thanks for the warning. LOL
I certainly agree that things should stand up to tests, who wouldn't? Seems like a reasonable statement. But it may be best if the tester is someone who hasn't made up his mind already, eh? And someone capable of assembling a competent test system, with experience testing tweaks and especially controversial tweaks. (You wouldn't want expectation bias to enter the test, now would you? LOL We all know that's not a very scientific approach.) Isn't investigation really the most important part of skepticism?

Quote:

Having finally visited the website devoted to “Brilliant Pebbles” (BPs), I can now apply the general principles discussed earlier and make some concrete comments. Of the issues arising in the paper, I will focus on one. I am not a merchant and have nothing to sell. I merely wish to offer an example of the sort of scrutiny and criticism to which I have been subject in my world.

I can hardly wait.

Quote:

Suppose a sugar pill with blue dye costing $100 per pill benefitted 20 percent of patients, and that a sugar pill with red dye costing 2 cents per pill similarly benefitted 20 percent of patients. The claim that the blue dye was the cause of the effect would not be supported. The effect of the blue pill is fully explained by the simple fact that it is a pill. That is, it is fully accounted for by its “pillness.” It is only by exceeding this threshold that something would deserve to be called a drug. Also, the difference in price is not justified by a difference if performance. Failure to outperform a placebo earns a candidate drug no medals. If an expensive treatment works, but an inexpensive placebo works just as well, it allows patients to enjoy the effect while simultaneously saving money.

You are making a strawman argument: your assumption (an untested one, at least) that "an inexpensive placebo works just as well" is false, another illogical argument, again right out of Zen and the Art of Debunkery. I'm starting to detect a pattern here.

Quote:

The principal issue regarding BPs is that the discussion, as presented on the visited website, offers an explanation for why they would outperform generic, amorphous stones, but no report of the use of the control that would isolate this factor.

Actually, I do no such thing. I don't offer an explanation of why BP outperforms other generic, amorphous stones. I do, however, explain how BP works. If you wish to hire a third party tester to test amorphous rocks vs BP, feel free. In fact, I wish you would.

Quote:

The difference in performance between BPs and generic stones is either statistically significant or it is not. If it is, then the null hypothesis is falsified and mere “stoneness” or “petrosity” is shown to be insufficient to account for the observed effect. If the difference is not significant, then the null hypothesis stands unfalsified and the effect of BPs is explained by their “stoneness,” which is also a property of generic stones that are available in vacant lots at a substantial discount.

I have not addressed generic stones. That is yet another fallacious argument on your part. The fact that some stones are found in vacant lots has no bearing on my explanation of BP, any more than any other type of material has any bearing. You might as well ask why I didn't compare BP to ALL other types of material. LOL

Quote:

Which is the case? The answer may be published elsewhere, but there is no report of it on this particular website. In the absence of this control, there exists only an explanation in search of an explanandum. Thus, this report taken in isolation offers little of interest to science. (Its value as poetry is left to the appraisal of poets.) This is not to say that BPs do not work. But even if they do, this is really only worth knowing when it is also known that ordinary stones do not. A theory is offered from which is derived the expectation that ordinary rocks would not work, but this remains an untested hypothesis, not empirical data. Woulda-coulda-shoulda is not a valid category in logic any more than it is in Simon Says. This indeterminacy leaves the null hypothesis on the throne, yawning and trimming its fingernails. Ludwig Wittgenstein reformulated Ockham’s Razor to say that whatever is explanatorily unnecessary is meaningless. A defense attorney should have to fix what ain’t broke only when the jury is too stupid to recognize when it ain’t broke. Competent jurors and scientists need no such help. Exerting effort to get to where one already sits (fait accompli) is, as per Wittgenstein, unnecessary, wasteful, and meaningless.

All of this discussion and philosophizing with regard to ordinary rocks vs mineral crystals is irrelevant, as is comparison of BP to other types of materials. Out of scope.

Quote:

This should mitigate possible concerns about veracity. Even if this BP business is all a lie, it is an insufficient one, such that even if it is all true, it is still not enough. It takes the form of an accent fallacy, as if one said, “Jim Austin was sober today.” Perhaps he always is. “These stones work” may be a true statement, but it raises the question: “Which ones do not?” Crystalline structure per se can be found in table salt, a bag of which should be tried before resorting to anything more expensive in case its crystals turn out to be fortuitously tuned for this application.

It might all be a lie or a hoax. But how would you know? You have not investigated it, only relied on your skeptic's arguments, which, as I've pointed out, are often fallacious and/or illogical.

Metals such as copper and silver have crystal structures as well, but I'm not suggesting using metals as a substitute for BP. Nor do I suggest on my web site that all mineral crystals have the same effects in audio applications. Quite the contrary: I explain that different crystals have different characteristics and effects.

Quote:

Health insurance providers do not pay for experimental treatments because they are waiting for the other shoe to drop. In science, ignorance of the existence of that other shoe is intolerable. Simply being “pretty rocks” is not grounds for derision, but if confession were to reveal this to be a hoax, grounds for surprise would be lacking.

You certainly wouldn't be the first self-proclaimed skeptic to come along and react to the "pretty rocks in a jar," or to use words like hoax and ignorance. BP has been "reacted to" for 8 years by many skeptics in their rush to judgement. But one must ask: whatever happened to scientific curiosity and the scientific method?

Quote:

Little controversy arises about all this on the superficial level of commerce: a commodity is produced, people pay money for it, end of story. Merchants are not always required to show their work exhaustively and science is not always the issue. In a recent documentary, when astronaut Frank Borman was asked about the science objectives of the Apollo 8 mission, he replied, “Science?!” Additionally, the occupancy of hot kitchens (such as peer-reviewed scholarship) is voluntary. (“Doc, it hurts when I do this.”) However, when science is the issue, incomplete experimentation and testimonials submitted by persons of uncertain ontological status are not enough to command scholarly attention.

Skeptics are free to investigate any claims they feel are outrageous, and any scientific group is free to test those claims. So far, I have heard the same old skeptic arguments, the ones straight out of Zen and the Art of Debunkery. By the way, you left out one of the skeptics' favorite arguments - so I'm going to help you out here - that argument is: why are audiophiles who believe in Brilliant Pebbles often the same ones who believe in homeopathy and UFOs? LOL

Quote:

With regard to the Jorgensen exchange: The null hypothesis is not an opinion, and was in force prior to the cited Lancet paper. The report of no effect does not change the situation from ignorance to enlightenment. It reports the situation to be unchanged. Upon acquittal, the status of a criminal defendant does not change to innocence from something else. Jorgensen’s students need to understand these things if they hope to qualify, respectively, for science or jury duty.

Whatever one should or should not do prior to forming an opinion, my students would not be taught to form opinions but to recognize facts.

Apparently you haven't even followed your own advice, as most of the philosophizing and objections in your post resulted from a foregone conclusion based on opinion, not facts.

Cheerio,

Geoff Kait
Machina Dynamica

Brucest:

I apologize for joining this discussion late.  I am a recently returned subscriber, and unfortunately I find much of the same mindset that caused me to let my previous subscription lapse.

This article is an exception, and I find it one of the more interesting commentaries on the objectivist vs. subjectivist debate in audio.

I take the point about the ear being the ultimate instrument for judging equipment to mean that you and I might listen to the sound produced by the same equipment and disagree as to whether A is better than B.

However, it seems to me the critical question is whether the ear is consistent. 

For example, one listener, hearing both a solid state amp and a comparable tube amp, might prefer the tube amp on the grounds that the resulting sound is sweeter and warmer.  This preference can be very scientifically investigated if the listener is consistent in forming his preferences.

For example, one could ask what the differences between the amps are on various technical measurements.  We are likely to observe a rather large difference between the two amps with regard to the most basic measurement, harmonic distortion (HD).  Posit that the tube amp has greater HD.

A logical question: is the preference related to the greater HD?  The test: by some means, introduce an equal amount of HD into the less expensive solid state amp's output.  If the preference disappears, then we can conclude it was associated with the higher HD.
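That test could even be prototyped in software before building any hardware. The sketch below is my own illustration, not a published procedure: it injects a known amount of second-harmonic distortion into a test tone via a memoryless nonlinearity, then estimates the resulting THD (the coefficient names `k2`/`k3` and the ~1% distortion level are arbitrary choices):

```python
import math

def add_distortion(samples, k2=0.0, k3=0.0):
    """Apply a memoryless nonlinearity y = x + k2*x^2 + k3*x^3.
    The even-order term (k2) generates 2nd-harmonic products; the
    odd-order term (k3) generates 3rd-harmonic products."""
    return [x + k2 * x * x + k3 * x ** 3 for x in samples]

def thd(samples, sample_rate, fundamental, harmonics=5):
    """Estimate THD of a single tone from single-bin DFTs at the
    fundamental and its first few harmonics."""
    n = len(samples)
    def amplitude(freq):
        re = sum(x * math.cos(2 * math.pi * freq * i / sample_rate)
                 for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * freq * i / sample_rate)
                 for i, x in enumerate(samples))
        return 2 * math.hypot(re, im) / n
    harm = math.sqrt(sum(amplitude(k * fundamental) ** 2
                         for k in range(2, harmonics + 1)))
    return harm / amplitude(fundamental)

# 1kHz tone at a 48kHz sample rate, 0.1s (an integer number of cycles):
sr, f = 48000, 1000
tone = [math.sin(2 * math.pi * f * i / sr) for i in range(4800)]
dirty = add_distortion(tone, k2=0.02)  # k2 = 0.02 yields ~1% 2nd harmonic
print(round(thd(dirty, sr, f), 4))  # ~0.01, i.e. about 1% THD
```

In a real listening test the same nonlinearity would be applied to program material through a DSP stage; the point is only that the amount of added HD is known and repeatable.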

If this is true of a large number of persons interested in sound reproduction, then we have an interesting result.  Such persons should not be expected to change their preferences simply because they seem to be based on an "imperfection".

Rather, a relevant question is whether the preferred HD might be more economically and controllably introduced in the solid state amp.

A separate question is which reproduction is more "accurate".  This question is potentially of less interest, because it's irrelevant to what one might prefer.  However, it may still be useful as a guideline for examining preferences.

It seems to me this question, with some difficulty, could also be investigated scientifically using the ear as the instrument.

Crudely, one might imagine the following experiment: in a concert hall with a quartet, a person is equipped with a set of very good headphones.  Stereo mikes co-located with the listener capture the sound of the quartet, and the signal is routed alternately through a solid state and a tube setup.  The listener would then be able to hear the sound without the headphones, and then (blindly) be asked which amp more closely reproduces it.

There are a number of possible outcomes.  The listener may find one or the other of these to most closely capture the sound.  The listener may find them both so different from the natural sound that no conclusion can be drawn.  He may also find the less accurate reproduction preferable to the original.
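Sorting out those outcomes requires a criterion for when the blind answers actually beat chance. A minimal sketch, assuming a forced-choice protocol (the trial counts below are arbitrary examples of mine), is the standard one-sided binomial test:

```python
from math import comb

def binomial_p(correct, trials):
    """One-sided binomial test: the probability of scoring `correct`
    or better out of `trials` forced-choice answers by guessing alone."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(binomial_p(12, 16), 3))  # 0.038: unlikely to be pure guessing
print(round(binomial_p(9, 16), 3))   # 0.402: consistent with guessing
```

By the usual convention, a p-value below 0.05 is taken as evidence that the listener really hears a difference; more trials give the test more resolving power.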

IMO, a substantial problem with the subjectivist approach is that it provides little guidance on technical design criteria.  Does XYZ multibuck piece of equipment sound "better" because of a design element whose effect can be objectively correlated with the subjective preference?

If this is the case, then engineers can pursue less costly means of meeting that design criterion.

In the case of the previous example, the technical solution might be a solid state amp, much less expensive than a tube amp, that can introduce a range of desired HD.
