I agree with Jim Austin about the need to introduce a more scientific approach to testing in the audio review field.
In July 2008 Stereophile posted an interesting interview with Kevin Voecks, the chief designer for Revel loudspeakers. Among other things he said:
[Double-blind] listening tests over the past 10 years have taught us [at Revel] one other thing. Above the midprice range of loudspeakers, there is no correlation between the sound quality and the loudspeaker's price. Although many high-priced loudspeakers do perform adequately in our listening tests, the most expensive speaker in a given double-blind listening test may be the least preferred by our listening panel.
--http://www.stereophile.com/interviews/608kev/index.html
In his recent book Sound Reproduction, Floyd Toole relates the experience of Harman International's loudspeaker researchers with double-blind testing. They found that listeners ranked speakers differently when they could see them. He also notes that many expensive and very well-reviewed speakers rate only so-so when tested under double-blind conditions, and that these speakers generally have measurable shortcomings as well. (pages 357-362, 396-398)
While double-blind or even simple blind testing is difficult, Stereophile should make an effort to conduct such tests in some circumstances. I believe Stereophile's late founder, J. Gordon Holt, called for blind testing in an interview with Editor John Atkinson in an "As We See It" published a few years ago.