A Question of Reliability
A letter in the April 1988 issue (Vol.11 No.4) from reader Harold Goldman, MD, decried the seemingly appalling failure rate of high-end products, citing a $10,000/pair power amplifier, an $11,000 turntable, and a $1500 CD player which had all been reported in recent issues as having failed during or shortly after testing by Stereophile. And Dr. Goldman's list was far from complete. We have also experienced during the past couple of years the failure, or inoperation upon delivery, of two $2500 solid-state power amplifiers, a $1700 subwoofer, a $5000 hybrid amplifier, two pairs of $1200 loudspeakers, several pairs of under-$1000 loudspeakers, and many CD players costing over $1000 each, mainly those based on Philips transports.
Scandalous, right? When you pay a premium price for something that is supposed to be better than most of its competition, you should not expect this to happen, right? It rarely happens with Sony or JVC or TEAC mainstream products costing a third as much, so why should a higher price mean lower reliability? Because, as it turns out, a premium price tends to work against reliability, not for it.
Normally, a new product goes through three testing stages, called Alpha, Beta, and Gamma testing. Alpha testing is the performance and reliability testing of a prototype unit in the factory, prior to initial production. Beta testing is pre-production field testing, usually by persons not employed by the manufacturer, often by dealers. Gamma testing is letting the customers do the reliability testing. By buying it, they get the privilege of testing it at no cost to the manufacturer except for free repairs. Gamma testing is not considered good business practice, but often it's all a high-end manufacturer can afford.
The more costly a product and the more tightly strapped its manufacturer, the fewer pre-production prototypes can be made for Beta testing. Typically, only one or two samples will be made for Alpha testing, and some high-end manufacturers don't feel they can afford to do any Beta testing at all. It isn't usually until the product starts coming off the production line (early Gamma), or even some weeks or months after it's been in use in consumers' homes, that problems start to show up. By that time some of the audio magazines have already been sent samples for review, and there are lots of the new product in listening rooms all over the country.
High-end products are also often designed right on the hairy edge of materials technology, which is another way of saying that they work just below the point of active-device or dielectric breakdown. Frequently, the edge is closer than the designer thinks, and then a week's worth of production goes out into the field and topples like a row of dominoes. Other times, a manufacturer who assiduously paid his Alpha and Beta dues still gets sandbagged.
Parts suppliers have a habit of pulling an OEM version of the ol' switcheroo, making a "small and insignificant" change in an already approved part which results in the unit becoming unreliable. It takes a very sharp QC department to catch such changes, but occasionally there is no way of knowing a component has been changed until it's too late. This happened to one of the oldest loudspeaker manufacturers some years ago. A supplier made an S&I change in a single part, and although it increased the part's vulnerability to aging, the manufacturer had no way of detecting the change. The weakened part took almost five years to fail, during which time thousands of speakers had been sold with that part in them. Then they started to break down, and eventually all the speakers made in the intervening period failed. Even though it wasn't his fault, the manufacturer conscientiously repaired every one without charge, but it nearly put him out of business. Fortunately, most parts weaknesses don't take that long to show up.
Nonetheless, the fewer samples of a product there are in the field, the longer it takes for its weaknesses to reveal themselves. A huge firm like Panasonic, for example, may turn out 1000 of a new midpriced receiver in a week. If one out of 200 has the same hitherto-undetected design weakness, they'll know about it in a week or so and can correct the problem immediately. But a small high-ender making an expensive product may produce only 10 of them in a week, and if the same one out of 200 is going to break down in the same way, the manufacturer may not be able to see a pattern emerging until after many months of production.
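To put rough numbers on that argument, here is a minimal sketch of the arithmetic in Python; the weekly production volumes and the one-in-200 defect rate are simply the assumed figures from the example above, not data from any real manufacturer.

```python
# Illustrative arithmetic only: assumed weekly production volumes and an
# assumed 1-in-200 rate for one specific, as-yet-undetected design weakness.

def expected_weeks_per_failure(units_per_week: float, defect_rate: float) -> float:
    """Average number of weeks between field failures caused by that one defect."""
    return 1.0 / (units_per_week * defect_rate)

print(expected_weeks_per_failure(1000, 1 / 200))  # 0.2 -> roughly five failures every week
print(expected_weeks_per_failure(10, 1 / 200))    # 20.0 -> one failure every 20 weeks
```

At five identical failures a week, the pattern is unmistakable almost at once; at one failure every 20 weeks, it can take many months of production before anyone connects the dots.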
Such is the pressure of the marketplace that every company feels obliged to unveil a new model every six months or so. A small high-end firm trying to keep up with this rat race may have a new product out before it has even completely debugged its last one. Four such products in simultaneous manufacture can give any small firm a devastating reputation as a purveyor of junk. (Word travels fast in high-end audio circles, particularly bad words.) On the other hand, the buyer has a right to know when a particular manufacturer's products are dying like mayflies, which is why we report in our equipment reviews any defects, breakdowns, and out-of-the-box failures that occur, as well as publishing letters from readers who've had troubles with specific products (footnote 1).
A few high-end manufacturers do go for absolute reliability, but few audiophiles are very happy with a concomitant factor: astronomical prices. Cello and Mark Levinson Audio Systems are examples of this no-holds-barred approach, clubbing the reliability issue into submission by throwing in the most expensive parts money can buy and backing this with a 100% QC philosophy to eliminate any "infant mortalities." They then demonstrate their confidence in this approach by tacking a five-year warranty on their lines. But few audiophiles see their products as representing good value for money, whining that "They don't sound that much better than products costing half as much." Sound, however, isn't all that the extra cost is buying; it's also buying stringent Alpha and Beta testing, in order to avoid the kind of tarnished image decried by Dr. Goldman.
Some others have devised, through years of unhappy experience with such matters, ways of reliability-testing every part that comes into their factory, another expedient that greatly increases cost, but not as much as the Levinson/Cello-style overkill approach. Threshold, for example, has a gadget which torture-tests transistors to the point of breakdown, yet without damaging them! But such measures are not available to the smaller, newer high-end firms, which must ultimately rely pretty much on the assurances of their parts suppliers that this or that part will take a 15% overload "without batting an eye." When a part's eye does bat, the manufacturer's eye gets blackened and Stereophile receives another angry letter about the unreliability of overpriced high-end equipment.
So, while you don't have to feel sorry for the maker of your brand-new $3000 preamplifier when it fails at 6 pm on Friday, don't be too hard on him, either. Believe it or not, most high-end manufacturers are very conscientious about product reliability, and get as upset about failures as you do. They do the best they can under difficult circumstances, while resisting as best they can the urge to raise their prices to the point where their stuff could be very reliable but unaffordable (footnote 2). What is, however, quite unfair, in my eyes, is the almost universal practice of making the customer pay to ship the failed product back to the factory for "free" warranty repair, especially when the product in question is a 150-lb amplifier. The customer purchased the component in good faith, and it is not his fault that the damned thing blew up five minutes after it was switched on: he should not be responsible for any part of the cost of repair. In an ideal world, if you buy from a local dealer, you should be able to persuade him to swap your dud for another if it fails. But if you buy by the FOB, you must return by the FOB, and then you're at the mercy of the pounds-times-miles tables. It shouldn't be.
Everything considered, though, most high-end equipment is really pretty rugged. If you've purchased a new product recently, consider this: Barring catastrophic "user error," like touching your pickup stylus to the rough edge of a rotating disc, the chances that a product will break down spontaneously diminish with time. If yours makes it through the first week of daily use, it will probably see at least five years before it needs repair. A Sony receiver may outlast it, but which would you rather listen to during those five years?
J. Gordon Holt
Footnote 1: You will see from the survey form in this issue that we are polling our readers on the reliability and satisfaction offered by their equipment: this will give us a large enough sample to generate some statistically reliable data. We intend to publish the results of that survey later this year.
J. Gordon Holt
Footnote 2: Gordon, you are too kind here. No matter how hard a manufacturer finds it to make a living, there is still no excuse for using its customers as the final stage in its QC/debugging procedures (something endemic among computer software "manufacturers"). When compiling Stereophile's "Recommended Components" listing, we do take notice of such matters as long-term reliability (when we have evidence), and whether a product is mature (ie, fully "debugged") or not.
John Atkinson