And then compare them to whatever reference speaker the reviewer has.
I agree, not just reviewing them based on what memory serves. And we all know which two references it should be...the WHISPER and those other ones...the HELIX...well, maybe a third, those big horn things that look like ships' air funnels...they're a bit too Avantgarde for me.
Anybody here actually read Stereophile?
Many reviews, maybe most, contain direct comparisons.
I don't have an issue handy, but (I think it's him) Robert Reina adds comparisons between speakers to almost every review he does.
The majority of reviews have sections comparing the gear in question to recently reviewed gear, as well.
The reviews are chock full of contextual references to other gear.
Try reading the middle parts instead of just jumping to the conclusions of the reviews and you should be well pleased!
Yes, RR does seem to keep a running mental log of the sound of speakers he's reviewed. But I think we all know how unreliable audio memory can be. Besides, it's just not the same thing as having two pairs of speakers in the room at the same time, playing the same music. Gee, they might even go crazy and get one of those little speaker switch box thingies to expedite the process.
Ya, I feel that way too sometimes, Buddha. Sam Tellig reviewed two DACs in the same column this month. Unless of course the poster expects them to do shootouts where they pick a "winner." They'll never do those, for two very good reasons. First, the results are so system dependent it would be pointless. Second, it would piss off advertisers without serving the readers in any meaningful way.
There was a time when Stereophile would do that, especially with speakers. They would have six or so reviewers and six or so pairs of speakers, and each reviewer would rate each speaker on a scale of 1-5. After every pair had been auditioned and graded, the results would be tallied and averaged, along with a breakdown of how each reviewer graded each speaker.
I really liked the "shoot-out" aspect of the speaker tests.
The last I read about this sort of thing, resource constraints and the geographic spread of the reviewers had made that kind of test no longer possible.