Rules of Reviewing

My eyes were inexorably drawn to a surprising headline this morning: "New Studies Say Universe Younger than Objects In It." A study by Indiana University's Michael Pierce has just been published establishing a new value for "Hubble's Constant" (the ratio of velocity to distance for distant, receding galaxies) which suggests that the universe may be as young as 7 billion years old; at the same time, researchers at Harvard are saying that the universe is somewhere between 9 and 14 billion years old. Quite a discrepancy! (A billion here, a billion there—pretty soon you're talking real age.)

But that's not the news. The news is that the oldest stars in the universe are thought to be 16 billion years old—anywhere from 2 to 9 billion years older than the universe in which we, probably naïvely, think they exist, depending on which age-of-the-universe estimate you accept. "So we have a problem," Pierce commented.

Compared to the question of whether or not cables have inherent sonic character, this looks like a pretty big dilemma! And just the kind of inexplicable phenomenon Richard Lehnert refers to as the proper subject of scientific study in his response to Calvin Damon on the first page of "Letters."

With that fortuitously discovered intro (reading the newspaper is one of my more advanced techniques for avoiding writing "The Final Word"), I'd like to talk about how Stereophile carries out its equipment reports. The philosophical basis for our equipment reports was set down about 32 years ago by J. Gordon Holt: We're here to give you an idea of how a component will sound (or not sound) when you get it home to try it in your own system—as perfectly illustrated by Gordon in his review last month of the Arcam Delta 100 cassette player (Vol.17 No.10). How will it sound? Is it easy to use? How heavy is it? Are there problems? How does it interface with other equipment? These are the questions we must answer.

Since JGH invented the concept of the subjective high-fidelity sound review, Stereophile's reviews have evolved somewhat—particularly since JA's arrival on the scene in 1986—but have never strayed from the original intent. We now run almost every piece of equipment through a significant measurement regime, with some of the measurements published along with our subjective observations.

Why? You might well ask, given that our task is to describe the sound of each component. There are three basic reasons: 1) to verify electrical competence; 2) to establish a measurement database for observation of long-term correlations; and 3) to search for measurements which correspond to specific observations. All three of these are important, to varying degrees. Sometimes our verification of electrical competence makes it clear that the product is broken—just as would a loud hum in one channel—in which case we request a second sample. Or we may discover, as Tom Norton did last month with the Air Tight ATC-2 line-level preamp, that it's possible to plug in a product in such a way as to produce 86V on the chassis, which could be very dangerous were you to grab hold of it while standing barefoot on a damp concrete floor.

The long-term database seems to be the least reader-valued purpose of our measurements, but for me it's the most valuable. I've been reading our measurements since the beginning, and I've definitely found things which make a difference. For instance, I almost always like speakers with a gently tilted-down treble response—they tend to sound more like live music, given the recordings I play. I don't, however, like speakers which are tilted down all the way from 30 or 40Hz—they tend to sound bass-heavy, a characteristic of which I'm particularly intolerant. There are also a lot of measurements that I don't think matter, or which I simply don't understand.

Finally, we look for measured performance which corresponds to subjective observations. Much can be learned here. For instance, I would conclude that DO likes products which have rolled-off high frequencies; many of the products to which he gives good reviews measure as rolled-off above 10kHz. Or, if you wonder why JA didn't exactly cotton to the sound of the Velodyne DF-661 (Vol.17 No.6, p.75), just look at its frequency-response graph—it's all there in a nutshell. Of course, a lot of observations don't correlate with any measurement we make, just as some measurements imply the opposite of what we heard. These measurements go into the database for future explanation, or possible future abandonment. (Correspondingly, you might conclude that a particular reviewer's observations are too anti-measurement for you to rely on them.)

Another addition of JA's has been his insistence that a reviewer evaluate whether he or she would spend his or her own money on the product. This is a critical part of the review process; after all, our publication of a strongly positive review means that that's exactly what we're expecting you to do. No matter how many virtues a product has, a reviewer's conclusion that he or she wouldn't lay out the dough for it constitutes a large negative.

Most important, though, is the same review style that Gordon came up with. When you take the product home and hook it up, how will it sound? There's a big leap of faith here. My home, for instance, is significantly, and very audibly, different from the home of everyone else in New Mexico who writes for Stereophile. When I read a review, I have to transpose that review into my home, and I have to account for what I know of that reviewer's tastes.

You have to, too. I hate to say it, but this requires quite a bit of work on your part. You have to read Stereophile long enough to know the reviewers' styles, tastes, likes, and dislikes. You almost certainly have to audition in your own home some of the equipment reviewed, to see how your experience matches that of the reviewer. To interpret reviews even more accurately, you have to buy, or borrow long-term, a piece of reviewed equipment to see if you get used to its flaws—or more annoyed by them.

Having described the intents and goals of Stereophile's review process, I'd like to talk about how we meet those goals.

We acquire products for review in a variety of ways: reviewers hear them at trade shows or at a friend's house and request them from manufacturers; manufacturers call to see if we'll accept a product for review; manufacturers send product unsolicited (a practice we strongly discourage); or we read commentary in other publications and become interested.

We have some firm ground rules:
• All submitted products must be production units, not prototypes (samples from the second production run or later are recommended).
• Manufacturers must have five or more dealers in the US, with few exceptions (eg, widely advertised mail-order, with a money-back guarantee; fewer than five dealers, but very prominent ones in major population centers).
• Every product we receive is taken to be a product for review. (We do not undertake private consultancy.)
• Every unbroken sample we audition is included in the final review, even if we later hear a sample that sounds better.
• We assume a sample is broken only if the problem is obvious: eg, loud hum, intermittent sound, sparks and fire, one channel obviously different from the other, or the like. We specifically will not call a manufacturer known for good sound just because the submitted sample sounds bad; otherwise, those manufacturers would be getting second chances the others don't, based simply on reputation.
• Manufacturers have the right to set up a piece of equipment in one of our listening rooms to show us how it works, and to make sure it survived its journey. This right may be difficult to exercise—we barely have the time for even two manufacturer visits a month—and may well not take place in the actual reviewer's listening room; but it does allow the manufacturer some insight into our evaluation methods and listening conditions.

The manufacturer's visit is intended to let him or her verify the product's operation and setup in our circumstances, and to afford an opportunity to explain design philosophy, technology, features, history, suitable associated equipment, etc. Our job during these visits is to learn as much as we can, and to verify that, whatever our observations of the product's performance, it's what the manufacturer intended and expected. Occasionally we'll ask leading questions to verify that the product is performing correctly, but we strive to avoid offering value judgments while the manufacturer is visiting—that's the province of the review itself.

Once the visit, if any, is over, the review process is closed to the manufacturer until they receive the preprint of the edited review. Many manufacturers expect feedback on the review's progress, but our reviewers are prohibited from offering it (though, as you've been able to read in these pages, they are imperfect in observing this prohibition—to the detriment of their future as reviewers). Again, our job is reviewing, not design or consultation.

The manufacturer will hear from us before completion of the review if the product breaks; we'll request a second sample, and report on both. Also, a reviewer may request clarification about the product's operation, design, or history. Sometimes the manufacturer changes the product during the review period; we accept the new sample and report on both.

Once the manufacturer receives a review preprint, many things can happen: factual errors (such as price, phone number, circuit-design details, history of production) are corrected; a "Manufacturer's Comment" may be submitted; or the manufacturer may appeal to us to accept a second sample, arguing that the negativity of our review must have been caused by a defective unit. Whether we accept a second sample depends upon the legitimacy of the manufacturer's argument, as determined by JA. Normally, there has to be some evidence that the product actually was defective: measured noise substantially higher than the specifications, for instance, or description of a manufacturing problem discovered just a week before. JA is normally generous, but he must also be skeptical; otherwise, we would have to do every negative review at least twice.

Two options are completely unavailable to the manufacturer: suppression of the review, or alteration of its value judgments (except for addition of our experience of a second sample). Reviews, once written, are published. Sometimes a second sample requires some addendum to the original review. Sometimes publication of the review is delayed for one or two months (this happens for a variety of reasons, including simple lack of space). All reviews, though, are published, no matter the commercial implications, legal action, or loss of friendship which may result.

Stereophile, of course, accepts advertising; you may have wondered if advertising ever affects reviews. The answer is no. Never (footnote 1). This question is substantially answered simply by reading the magazine and observing the results. Advertisers get good reviews and bad reviews; non-advertisers get good reviews and bad reviews.

Our reviewers are interested in hi-fi products and music, not advertising; most of them are substantially unaware of which products are advertised in Stereophile. Our advertising representatives never discuss the subject of reviews with our writers and editors, and pressure for any specific kind of review is never brought to bear on a reviewer.

Manufacturers do, of course, know if they have a product in for review; but they almost never know when that review will be published until they receive the review preprint (long past ad-copy deadline)—and even then, they may be surprised when the review is put off for another issue or two. Our advertising reps don't know which products are being reviewed in a particular issue until they get their photocopy of the issue after it's been sent to the printer. Ads are simply not sold on the basis of what reviews are in the magazine.

I long ago decided that the only way for a magazine of integrity to deal with the impact of negative reviews on advertising sales was to simply ignore it. I've never had even a second thought on the subject.

Footnote 1: There have been two exceptions, and not in the direction readers might assume: First, in the case of the Polk RTA 11t loudspeaker (Stereophile, March 1990, Vol.13 No.3, p.142), JA and I felt that Polk was attempting to reach our audience through advertising without being willing to submit a product for review and risk the consequences. Accordingly, we bought one of their popular models and reviewed it because they were an advertiser. The review was negative, and Polk subsequently stopped advertising.

Second, Velodyne had just signed a long-term advertising contract when we received their DF-661 loudspeaker for review. Our initial audition persuaded us that the review was certain to be negative, so we published the review in the next possible issue (Stereophile, June 1994, Vol.17 No.6, p.75). We were sure that Velodyne would want to cancel their advertising upon seeing the review, and we would have felt uncomfortable accepting their money knowing how negative the review was going to be.—Larry Archibald
