Communication Breakdown

Classical and jazz notwithstanding, an awful lot of new music is highly compressed, processed, and harsh, and it's about time we got used to it. Musicians, producers, and engineers are, in large part, on board with the sound, and any suggestion of making less-compressed recordings, with a wider dynamic range, is met with confused stares, or worse. One superstar producer didn't take kindly to my suggestion that he make two mixes for his new project: the standard compressed one, and another, less-crushed version. That didn't fly; he said there could be only one, the mix approved by him and the band, and that to them, a less-compressed mix wouldn't sound better. This producer is an audiophile, but he's not the least bit interested in making music for audiophiles. Harshness, it seems, isn't just a byproduct of compression; it's an integral part of the sound of today's music.

When I was nine or ten, I'd mistune my pocket AM radio to add distortion to the sound. The static fuzz fascinated me, but my mother couldn't understand what I was doing, and would leave the room. Clearly, distorted sound can sound good to only some ears, some of the time. If I'd been 40 years old in 1969 and a big Sinatra and Bennett fan, I'm sure that the music of Jimi Hendrix and Led Zeppelin would have sounded like noise to me.

Distortion in rock had begun in 1958, with Link Wray's "Rumble," and it was cranked up to 11 with the Kinks' "You Really Got Me" and Blue Cheer's cover of Eddie Cochran's "Summertime Blues." Keith Richards managed to pull glorious grunge out of his acoustic guitar for the Rolling Stones' "Street Fighting Man" by playing to and overdriving his tiny Philips cassette recorder—that was the only way he could get the texture he wanted for the tune. Greg Lake's massively distorted vocals in King Crimson's "21st Century Schizoid Man" jumped out of the radio in 1969. Lou Reed's Metal Machine Music and Neil Young's Arc are albums of unrelenting guitar feedback and distortion. There are countless other examples. Ear-shredding distortion has its admirers; there's nothing new about that.

Of course, all of that was 100% analog distortion. Today's hyperdigital compression has a very different character, and winds up sounding offensively harsh, gritty, and fuzzy to tender baby-boomer sensibilities. We don't have a problem with distortion per se, but 21st-century digital clipping presses the wrong buttons for us oldsters. The under-35 crowd doesn't seem to mind. Can you say, "generation gap"?

The aesthetic of reproduced sound, from acoustically recorded Edison cylinders to electrically recorded 78s, LPs, cassettes, CDs, SACDs, DVD-As, MP3s, and FLACs, wasn't a straight line to perfect sound forever. The limitations of each format changed the sound of music, and other than audiophile and some classical labels, no one ever really tried to make lifelike-sounding recordings.

Producers and engineers are after a sound, not fidelity. For example, in real life a rock singer and his acoustic guitar could never be heard next to fully cranked electric guitars, bass, and drums, but with a mix engineer's tastefully applied compression, the balance can sound surprisingly natural. Of course, the engineer isn't trying to accurately capture a complete performance by the band; bits and pieces of music are Pro Tooled and tweaked, and vocals are recorded later, then Auto-Tuned and mixed to be audible above the fray. Dynamic-range compression and limiting are definitely part of today's sound, but I assure you that compression was an essential part of the sound of every classic rock album in your collection. It's always been here.

I recently bought the new Spoon album, They Want My Soul, and while I loved the tunes, the sound is nasty and grating. I kept listening anyway. It's a terrific album, as good as anything they've done, and as I listened—at home, through my Zu Audio Druid V speakers, and on the go, with Sennheiser IE 800 in-ear headphones—I began to understand: The harshness is part of the music. So I stopped gnashing my teeth and went with it. It wasn't easy to recalibrate my ears to listen through the sound, but once I got there, I could focus on the music. It was the same with Arcade Fire's last two albums, Reflektor and The Suburbs—they were rough on the ears, but now I'm learning to listen through the grit. Newly remastered old titles on CD and LP frequently suffer from overt dynamic compression, to make the music sound more contemporary.

We can clutch to our chests our 180gm LPs of Dark Side of the Moon and Aja and reject all new music—a lot of folks my age checked out long ago, even before the late 1990s, when the Loudness Wars of ever-escalating compression crunch began. That's fine, but if you want to hear what's happening now, you'll have to accept that young bands aren't going to change their ways.

Then again, a friend, mastering and recording engineer Bob Katz, claims that, on streaming services such as Spotify and iTunes Radio, there's an ongoing shift to loudness normalization: the levels of individual tunes are automatically adjusted to all have the same overall perceived loudness. Loudness normalization doesn't compress dynamics within a track; instead, it automatically adjusts the volume level from song to song. Katz believes that, as loudness normalization becomes more widespread, mastering to maximize loudness will be futile.
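To make that concrete, here is a minimal sketch, in Python, of what playback-side loudness normalization does. It uses plain RMS level as a crude stand-in for perceived loudness (real services typically measure integrated loudness, LUFS per ITU-R BS.1770), and the -16dB target below is an arbitrary assumption for illustration, not any particular service's setting.

```python
# Minimal sketch of loudness normalization: the player turns each track up or
# down as a whole; dynamics within the track are untouched.
import numpy as np

TARGET_DB = -16.0  # hypothetical playback target, assumed for this sketch only


def rms_db(samples: np.ndarray) -> float:
    """Average level of a whole track, in dB relative to digital full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(samples ** 2)))


def normalize(samples: np.ndarray, target_db: float = TARGET_DB) -> np.ndarray:
    """Apply one constant gain to the whole track to hit the target level."""
    gain_db = target_db - rms_db(samples)
    return samples * 10.0 ** (gain_db / 20.0)


# Two toy "tracks": one quiet, one mastered hot. After normalization both play
# at the same average level, so the hot master has simply been turned down.
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)
loud = 0.90 * np.sin(2 * np.pi * 440 * t)
print(round(rms_db(normalize(quiet)), 1), round(rms_db(normalize(loud)), 1))  # both -16.0
```

Under such a scheme, a master crushed 10dB hotter is simply turned down 10dB more at playback, which is exactly why Katz expects loudness-maximized mastering to become futile.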

We audiophiles can be appalled by the sound of contemporary music all we want, but we can't impose our preferences on everyone else. I'm sorry, Bob, but I can't see the crunchy aesthetic reversing course anytime soon. Over the long term, sure—maybe sonic realism will be the next big thing . . . in 2025.—Steve Guttenberg


COMMENTS
kb2tdu's picture

If it's between today's commercially viable, autotuned, dynamics-smashed sound and musicality, I have no need to suffer contemporary tastes. The many decades of 20th-century recordings, most of which I have yet to hear, I will enjoy without missing the current commercial oeuvre at all.
To suggest compression is about normalizing levels among tracks is a strange tack, at best. Radio station processing chains have been doing that for at least 50 years, and have probably inured most of us to the absence of dynamic range.
Now excuse me, I have some listening to do. Those Edison cylinders are around here somewhere.

tonykaz's picture

It's crappy "dog food," but the dogs eat the hell out of it!

Tony in Michigan

Allen Fant's picture

All parties are to blame here. If said band/artist does not care enough about their "art" to want the best sonic attributes, how can we expect the rest of the players on the team to care?

AussieSteve's picture

I think modern engineers prefer compression because too many "musicians" these days can't play an instrument, read music, or, most importantly, Actually Sing!!! My wife likes some modern singers, and when I play her acoustic or live versions of a song she says, "Who's that?" The person cannot actually pull off a note without digital help. Too many (not all) of them are guilty of it. It seems the record companies are more concerned now with cashing in than with producing fine music, even if I get thrashed for saying it. Nothing new there; maybe that's life.

jimtavegia's picture

And then we wonder why the masses don't embrace high-end audio, when the reality is that most mainstream music does not require great equipment for playback. Plus, there are no standards for industry recording engineers or producers. If you can get someone at the labels to think you are da bomb, what else do you need? With all of the great, cheap hi-rez recording equipment available, it is hard to imagine why this is happening. The tentacles of iTunes have killed the music business. For hi-rez we are left with artists we don't know, or reissues of stuff we already own at $30-$50 a pop. I am about to just rotate what I already own and not buy much else. I am really enjoying my new CD of the Portland State Choir recorded by JA. The first new piece of music I've bought in months.

Patrick Butler's picture

The masses don't embrace high end audio because they are not even aware of its existence. Once I became aware I learned that every piece of music that I own benefits from a better playback system.

John Atkinson's picture
jimtavegia wrote:
I am really enjoying my new CD of the Portland State Choir recorded by JA. The first new piece of music I've bought in months.

Glad you like the CD, Jim. Both Erick Lichte and I felt that there was some special music-making happening during the sessions last May.

John Atkinson
Editor, Stereophile

Allen Fant's picture

Interesting... SG uses the cover of the new U2 release, yet does not mention it in this article? Pretty good release, btw; not quite sure how to interpret the cover art, though...?

voxi's picture

That's why the cover is brilliant! It's Larry's son that he hugs. The first association is obviously dirty, and that says something by itself. But I can't pair U2 with pornography, so I looked at the title of the album and then I realized. The same was true with their last cover, for No Line On The Horizon... You wrote that it sounds good? Do you have the CD or the LP?

Osgood Crinkly III's picture

The problem is not compression, which has been around for years, but processing, which is making music amusical. The voice doesn't sound like a voice, a piano a piano, a guitar a guitar. It's all processed cheese.

The second problem is that music is rarely made by musicians playing together at the same time in the same place. It's all overdubbed. This is so bad that recording studios are all but extinct. Music is cut and paste.

Commercial pop has always been about 2 things: money and the lowest common denominator. The ends haven't changed, just the means.

A magazine, such as this one, devoted to $50,000 amplifiers and reissues of 40-year-old recordings, of course, has no place at the table.

jimtavegia's picture

The first-generation redbook recordings I make for my musician friends in my mancave/studio sound very good, and yes, I do believe that the long chain of processing does kill much of the new music. But because I don't compress, the music stays alive, dynamic, and very real sounding, as it should. When I go to 24/96 it is even much, much better.

Catch22's picture

Music is being engineered based on the way it is being consumed. It's a mistake to believe that any effort is being made to make a quality recording when that is NOT in demand based on the way the product will be consumed by the target customer.

THAT is why we can expect no change from the recording industry.

stodgers's picture

The only format of music where sales are *increasing* is vinyl, so you can't really say that how music is being consumed is driving the decision. I think it is more because the current popular acts were fans of indie and lo-fi recordings that had little musicality and depth and were the product of rushed recording time.

Rick Tomaszewicz's picture

...because? Come on. "Classical and jazz notwithstanding,..." Classical and jazz have never been mass market. Why should we even care how pop music is recorded? Little of it is "acoustic" or "unplugged". As deliberately poorly recorded as The Black Keys are, they're still artists with real chops. No matter how "garage" they sound, the message is loud and clear. Maybe even louder and clearer because of the poor recording. Sort of, "We're so competent, we fired the engineer and twiddled the knobs ourselves!" Classical is mostly about the composer's message. Jazz is mostly about the performer's feelings at the moment. Both of these genres have so much going on, there's little room left for engineering interference. Best just to record them as faithfully as possible.

eriks's picture

Not to discount your observations, but at the same time, I remember my grandfather (1900-1989) making the same allegations in the early '80s. Of course, it wasn't so much about distortion as about the complete disconnect he felt with the music of the day.

The solution, says I, is on the cover of this month's Stereophile: to support high-resolution recordings and playback. Even if we can't hear the technology, we can certainly hear the mastering.

Best,

Erik

1979's picture

I never feel like enough can be said about this topic. Oftentimes I feel that the assumption is that compression and distortion = bad sound. I have made records (but I'm far from being successful at it), and I have mixed live music as a primary means of income for about 10 years of my life. In this time I have used and abused all sorts of analog and digital compressors and heard the effect of that in front of me.

I would like to say that there are several types of distortion I have encountered when using compression, and it should be noted that most of these types of distortion have been around since the first compressor ever existed, or at least since engineers could control attack and release times and misapply them. There are ultimately two families of distortion that can result from compressors:

1) Distortion in the gain stages of the compressor, be it FET, tube, or digital. Some compressors distort in a musical manner, some distort more aggressively, and some sound like absolute garbage. It should be noted that any electronic device passing and amplifying signal is capable of this, not just compressors.

2) Distortion resulting from the time-based activity of the compressor. If you have a compressor grabbing peaks within 1ms of incoming signal and then letting go very quickly, larger waveforms can become obscured as a result. For instance, take a 60Hz sine wave and compress that signal with extremely fast attack and release times. It is possible for the compressor to grab and let go faster than it takes the waveform to complete a cycle. There is distortion as a result, and whether or not this is deemed to be musical is subjective.
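As a rough illustration of point 2, here is a Python sketch of a bare feed-forward peak compressor with a one-pole envelope follower; it is only a sketch, not any specific compressor's algorithm, and all parameter values are assumptions chosen to make the effect obvious. With sub-millisecond attack and release on a 60Hz sine, whose cycle lasts about 16.7ms, the gain changes within the waveform itself, flattening its peaks and creating harmonics that were never in the source.

```python
import numpy as np

FS = 48000  # sample rate in Hz (assumed)


def compress(x, threshold=0.3, ratio=8.0, attack_ms=0.2, release_ms=0.5):
    """Feed-forward peak compressor: smooth |x| into an envelope, then apply
    gain reduction above the threshold, sample by sample."""
    a_att = np.exp(-1.0 / (FS * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (FS * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        coeff = a_att if abs(s) > env else a_rel
        env = coeff * env + (1.0 - coeff) * abs(s)   # one-pole envelope follower
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out[n] = s * gain
    return out


t = np.arange(FS) / FS                     # one second of audio
sine = 0.8 * np.sin(2 * np.pi * 60 * t)    # 60 Hz sine; one cycle lasts ~16.7 ms
squashed = compress(sine)

# With ~0.2/0.5 ms time constants the envelope tracks the waveform itself, so
# the gain pumps within each cycle, the peaks flatten, and harmonics appear
# that were never in the source -- the time-based distortion described above.
print("input peak:", round(float(sine.max()), 3),
      "output peak:", round(float(squashed.max()), 3))
```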

Digital compressor/limiters can have infinitely fast attack and release times, and can even "look ahead" of incoming peaks, reacting before a transient can see the light of day. The act of brickwall limiting needs to be fast: grab those peaks and let go quickly if the average level is to remain "loud." If the compressor lets go too slowly, the effect is that of simply turning the volume down, not increasing average level. Now, a lot of these digital brickwall algorithms are sophisticated enough to "adapt" to incoming waveforms, but they are not perfect; they are not capable of squeezing more than a couple of extra dB of average level out without artifacts. Let's remember, I'm not blaming digital here for bad sound (I will do that later); I'm blaming digital for creating possibilities that would not be possible with analog gear. That said, there are analog compressors that can be very fast and are capable of "brick-walling."

Compression can make things better or worse. Compressors can enhance transients or destroy them. It all depends on application. Seeing as how they have been around for a very long time, it's difficult for me to accept the idea of compression = harsh.

Let's think this through from a different angle. Let's think of the workflow of modern rock and roll recording:

1) An indie rock band goes into the studio to make a record. They record in a mid-sized studio that has one live room and one vocal booth. Producer and band decide they will record the drums and bass first. The drummer is hammering away full bore in the live room, and the singer is in the vocal booth playing acoustic guitar (which they will overdub later) and singing (which they will overdub later). The producer DIs the bass and DIs the electric rhythm guitar, both of which they will re-amp later, so hopefully these will be keeper tracks. Everyone is wearing headphones; no one is listening to how their tone or instruments sound in the room or against each other acoustically. This is dynamic-range problem #1. Only seasoned studio players can gauge their interaction in this environment. Once they get to overdubbing, the musician is now responding to what might be unnatural bed-track dynamics. If anyone has seen a singer perform live who has just started using in-ear monitors (as opposed to floor monitors), you will know exactly what I'm talking about. The dynamics of their performance can be eerily unnatural and jumpy. Better get the compressor out to smooth out these dynamically flawed performances.

2) There can be a commitment, in the process of recording this indie rock band, to place mics in a manner that yields a track with the least amount of bleed from adjacent sounds, rather than placing mics in a fashion that illustrates the best picture of the source being recorded. The concept here is that individual performances can be easily overdubbed without having a "ghost tone" of the botched take in all of the other adjacent mics recording performances you want to keep. This usually results in close-miking instruments in a way that doesn't capture the dynamics or timbre realistically. The idea of capturing the "space around the instrument" is lost. The goal of this method is to achieve tracks that can be easily cut and pasted.

Now this is a far cry from the way they would have recorded Muddy Waters or Howlin Wolf at Chess Records. From my understanding, they weren’t thinking too much about isolation and bleed. They were thinking about how all of these instruments worked together in the room, together, at the same time. A whole lot of calculation was done plotting out where these instruments worked best in the space, and how all the mics on these instruments reacted together to create one big cohesive picture. Imagine telling Muddy Waters he was going to have to redo his vocal in an iso booth (without his guitar in his hands).

To take this even further, apparently Bob Dylan was once asked to put on headphones so the engineer could "punch in" a vocal phrase. Bob told the engineer he didn't do headphones, and insisted the whole band get "punched in" playing behind the one phrase he had to redo.

3) Back to the indie rock band making a record: The producer of this indie rock band is recording to Avid Pro Tools (likely 24/96 in this day and age). The band is recorded through Avid AD converters. High frequencies are rendered as faithfully as they can be at 24/96, but a complete cycle of a 10kHz waveform, repeating 10,000 times per second, is described by only 9.6 samples per cycle. (I'm not a scientist here, so if anyone thinks I'm describing how this works incorrectly, please feel free to correct me.) The producer then pushes Pro Tools faders around (DSP applied), loads five different plug-ins per channel (mucho DSP applied), and starts tweaking. The original source recording is constantly being re-described with 24-bit words, 96,000 times per second. He or she then runs individual tracks out of Pro Tools through the Avid DA converters, into their favourite analog outboard equipment, and back in through the same AD converters. Eventually, a stereo mix is run out of Pro Tools through the Avid DA, through a bus compressor (to get it louder and "glue" the mix together), and the mix is then sent back through the AD again to create a stereo file to be sent off to mastering.

At some point we should ask, "Are we dealing with smooth, continuous waveforms, or are we dealing with a bunch of 'pixelated' connected dots attempting to describe a waveform?" By the time the mix is finished and sent off to mastering, it is a copy of a copy of a copy. It is a digital picture printed and scanned back in, printed and scanned back in. It isn't compressors that are killing music; it is the number of times mediocre AD/DA converters are being hit, and the number of times a plug-in is reinscribing the source. Given their speed, it is high-frequency waveforms that take the biggest hit in this process. The higher frequencies end up becoming synthetic; that is, they are a synthesis of a fragmented description. By the time a finished mix dribbles out the ass end of a DAW, fundamental tones are estranged from their higher-frequency harmonics. The timbre of a complex signal is no longer a cohesive image. All of this is the #1 reason for HARSH.

Now, I'm not saying that everyone makes music this way; there are plenty of great engineer/producers out there. Many engineers are well aware of the corrosion that occurs from over-processing and over-converting. They are well versed in getting great sounds, getting great performances, and, most importantly, they are geniuses when it comes to creating a place for music to happen. In the end, keeping the band happy and getting paid is really the number 1 objective here. I will say this: Go sit in a control room in front of a DAW screen with a band paying you (all of which translates into you paying your rent, feeding your kids, buying more gear... whatever). All they have listened to is the lossy garbage on their iPod. No one signing your paycheck gives a flying $%%^& about anything to do with the audiophile world. All they care about is what they want their record to sound like. Musicians don't think in terms of artifacts or sonic consequences. 90% of the time they think "additively": "just add this plug-in to add this sound." Try explaining anything I've said in the above to a band paying you to engineer their record. It is irrelevant.

The last record I engineered, I got everyone into the same room, I put the drum kit in the centre of the room and baffled it off, and then I set up everyone's amp beside a live-sound floor wedge (so they could hear the vocal). I wanted everyone's performance to be informed by their own tone, their own tone reacting in the acoustic space, and reacting against the other performances. So that's what we did. We only overdubbed some vocals and some acoustic guitar parts. Easiest record I ever made; it basically mixed itself. It sounded huge and natural. I thought it was the best thing I had ever done. I can't say that the band felt the same way. Anyway, last record I engineered.

Patrick Butler's picture

like

dallasdingle's picture

Reminds me of a recent South Park episode where Stan's dad turns out to be Lorde, the pop singer. He uses Auto-Tune to make himself sound like a young girl. You have to watch a short commercial, but if you go to 13:20 of this video, he describes his technique to Stan. Probably not far from the truth. http://southpark.cc.com/full-episodes/s18e03-the-cissy

Allen Fant's picture

U2-

Thank You for the info- I have the CD.

Bansaku's picture

I personally believe the whole reason for compression, with the exception of complacency, has to do with the way people listen to music. Being a child of the 1980s, when it came to audio it was all about the big, powerful home stereo systems. Manufacturers of the components put a lot of time and care into their products to produce powerful, accurate sound. Portable sound was in its infancy.

Although portable tape and CD players certainly existed, in North America it was still a relatively small market. Even throughout high school in the 1990s, I was one of the few in my classes (if not the only one) who regularly used a CD Walkman on a daily basis. The people who had them were the recluse metal/punk guy who sat at the back and drew dragons and demons on his binder throughout math class, or the technogeek who proudly wore full-sized cans and had the latest and greatest in Dolby noise reduction. It was never the jock, the popular kids, or, heaven forbid... a girl. Rarely did the average Joe have one, and if they did, it was usually reserved for the workout or jog. For the most part, portable music was essentially a boombox/ghetto blaster. Music was still being produced to be played through big speakers and big headphones.

In comes the MP3 and the digital age. Devices got smaller. Storage capacity increased to a point where one can throw a whole album in MP3 form on a little card. Costs started coming down for both consumers and manufacturers. Apple opens iTunes and allows anyone with an internet connection to download the music they want, anytime, from the comfort of their own homes. The age of convenience was upon us.

Desktop PCs were being replaced with laptops, Walkmans were being replaced with iPods, and the home stereo had shrunk down to a receiver the size of a DVD player coupled with five small satellite speakers and a "subwoofer." Everyone could now be part of the audio crowd without breaking the bank, while audiophiles were being pushed aside in favour of the almighty (China) dollar. Why cater to a small few when they can sell to everyone?

Unfortunately, in my opinion, that was the start of the downfall in music quality. As the public ate up and willfully accepted smaller and more portable devices, the push to miniaturize technology meant concessions would have to be made. It all comes down to delivery. Not only were the new devices underpowered, mostly due to battery life, but the speakers themselves shrunk. Laptops and iMacs with tiny speakers, iPod docks, and earbuds were fast becoming the norm. Trying to hear music through these meant cranking up the volume, only to still have it be inadequate. The solution was to take a cue from TV and radio broadcasters: compression.

For years, over-the-air broadcasters compressed the audio. This was for two reasons: to boost the signal so that their station could be heard louder than the station next to it on the dial, and to compensate for the small speakers found in portable radios and automobiles. Not only did they want to be heard on any device, they wanted to be heard the loudest! This in itself was not a problem, as the stations were compressing the music as it went out over the airwaves. The music you bought in the stores was left as is, in all of its purity.

In come the loudness wars. At the same time digital technology and delivery were taking off, so was the next generation of pop star: no talent, no problem, computers can fix that. They are good-looking and can make the labels millions! Cheap music was being pumped out by artists yearly, and the $$ was rolling in. But that wasn't good enough, apparently. Much like the pissing match between broadcasters, labels were feeling the (self-imposed) pressure to be heard the loudest, everywhere, on everything, regardless of anyone's desire.

The solution was to force it upon us with compression during the final mix. This started with mainstream commercial albums in the late 1990s, and has migrated to EVERYTHING as of today! Bands I grew up with in the 1980s sadly succumbed to this trend in the 1990s. I can actually pinpoint this to 1995-1997. I listen to a wide range of music, but I was a metalhead growing up and still listen to the genre. I can take the discography of a band like Megadeth, Slayer, or Queensryche, view each album's compression level via software, and see quite clearly, like a geologist viewing a line in bedrock, the literal point when music went from being pure (minimal compression) to the heavily compressed crap we have today. The sad part of it is that because music is digital, you can include a "flag" in the metadata to make the song (or album) louder! Yet labels and engineers butcher the sound and force it.

I personally will not accept this "new sound." The better my gear, the worse it sounds. Ever wonder why audiophiles loathe Beats headphones? It is because Beats sound great with the heavily compressed music of today, and only because they are tuned for that sound. Once you play music on vinyl, or at 24-bit/96kHz, music that has little or no compression, Beats headphones show how horrible they actually sound. Beats are a pure harmony of pop-culture fashion and mass-produced music. This is why they outsold Sennheiser, Beyerdynamic, and Grado combined last year. This is why Jay-Z outsells Rebecca Pigeon. It's not about quality, it's about $$$! Just my 2 cents!

Sphillips9's picture

Cancel subscription. Compression ok? Stereophile now officially sucks. You're on the wrong side.

John Atkinson's picture
Quote:
Compression ok? Stereophile now officially sucks. You're on the wrong side.

You're confusing compression and deliberate distortion as an artistic tool, which is done almost all the time, with compression applied to the recording as a whole. See Thomas Lund's comments on the second page of this Web article.

Quote:
Cancel subscription.

Cancel your own subscription.

John Atkinson
Editor, Stereophile

Dr.Kamiya's picture

In my opinion volume normalization is a necessity in today's world of playlists and nonstop playback. It wasn't an issue when we actually had to walk to the stereo to change the record or CD, but modern digital libraries can have thousands of songs from hundreds of albums, each of which is likely mastered to varying degrees of loudness.

Normalization allows convenient playback while protecting our ears and equipment when "shuffle albums" suddenly decides to queue up Nirvana (ReplayGain usually suggests -9dB or more) right after Diana Krall (usually 0dB). If it encourages an end to compression for loudness' sake, then that's just more to love about normalization.

tmrdean's picture

I believe it was in the 1950s when Bob Brown created his electrical/mechanical feedback device for guitars. Something about loose windings over an iron core with the result being fed back to "enhance" the original sound. He was still selling a few in the 1970s. Between polio and falling off a horse Bob was in a wheelchair when I knew him.

Al from Hudson Avenue's picture

Not for nothin', but if you're recording rock or pop and you don't compress, you ain't gonna sell any records. When someone plays your record and it's a tenth of the apparent volume of the one before and the one after, you're out of the game.

The only question is whether it's going to be done horribly badly (Rhino Doo Wop Box 2) or very well (original King Records R&B 45 or a Three Stooges short).

I'm tinkering with the idea of putting up unmastered versions of my albums on my site for the coneheads who have big stereos and can actually reproduce something near the dynamic range of the unsmashed record.

daerron's picture

Without all those compressed albums around, normalisation wouldn't be such a requirement either. Sadly, the situation reminds me a lot of advertising on cable TV, which is why I now rent/buy my content. I do make a point of playing my high-quality albums for my friends who are used to today's compressed albums; we are all in our 30s. It's always great to see the jaws drop when putting on something like the HD version of R.E.M.'s Automatic for the People. I've also started making a point of asking for refunds on albums that sound really bad on my stereo system. What is the point of listening to something that makes your ears hurt, even if some are from your favourite artists? Sadly, the world we are living in today overstates the importance of association over quality, and the record industry seems to have fully embraced this. Fortunately there is still a wonderful catalog of music out there to fall back on.

skris88's picture

Unlike classical or jazz recordings, where the orchestra or band play together, pop music of this decade is very rarely, if ever, played live.

Such music actually cannot be re-created in real time! Our stereo thus is the musical instrument; it's no longer just trying to re-create a band on a stage.

So our expectations need to be re-calibrated for new music.

Actually, there is a lot of fun stuff out there that isn't overly harsh or "digital" sounding. My sources? The annual release of the Grammy Awards CD, for one, ripped to a 16/44 WAV file and normalised to 80dB RMS.

JA (you've told me before), I know I'm apparently adding more noise by not leaving them to peak at 99dB, but I'd live with that any time rather than risk having my speakers and hearing damaged when my media player randomly switches to a 2013 track with only a 5dB dynamic range immediately after a 1980s track with a 20dB dynamic range!

Interestingly enough, I used to normalise to 85dB, but one day noticed a really huge improvement in peaks and high-frequency extension on a recording of traditional Japanese drums when I changed normalisation to 80dB RMS. These tracks must have some 20dB peaks, which are simply dropped by the DAC, to ensure there is zero (really horrible) digital distortion on overload?
