MQA Tested Part 2: Into the Fold

Loss is nothing else but change, and change is Nature's delight.—Marcus Aurelius

Master Quality Authenticated (MQA), the audio codec from industry veterans Bob Stuart and Peter Craven, rests on two pillars: improved time-domain behavior, which is said to improve sound quality, and what MQA Ltd. calls "audio origami," which yields reduced file size (for downloads) and data rate (for streaming). Last month I took a first peek at those time-domain issues, examining the impulse response of MQA's "upsampling renderer," the output side of this analog-to-analog system (footnote 1). This month I take a first look at the second pillar: MQA's approach to data-rate reduction. In particular, I'll consider critics' claims that MQA is a "lossy" codec.

Is MQA lossy? Yes, but that's not really the right question. That is my conclusion after reading widely about it, and talking and corresponding with experts, including MQA's Bob Stuart and digital engineers who don't support MQA. At worst, lossy is a cudgel—a scare word intended to evoke associations with MP3 and a negative emotional response. At best, framing the issue as one of lossy vs lossless largely misses the point.

Critics commonly claim that the Internet's ever-increasing download speeds render MQA's reduction in file sizes unnecessary. Why turn to an unproven "lossy codec" when, instead, we could use a familiar lossless codec like FLAC, or no compression at all? But as with every aspect of the codec, there's a serious philosophical perspective behind MQA's commitment to data-rate reduction. For Stuart, efficiency in the delivery of musical information is an aesthetic, even an ethical commitment.

The most important information in any audio recording, as everyone surely agrees, is what falls within the audioband of 20Hz–20kHz—ie, the range of frequencies audible to the human ear—and the audioband is adequately covered by CD's 44.1kHz sampling rate, with the caveat that squeezing a well-behaved antialiasing filter into the narrow transition band presents challenges (footnote 2). A sampling rate of 48kHz provides a bit more room for more gradual filtering.
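
To make the squeeze concrete, here is a minimal back-of-the-envelope sketch in Python (my own arithmetic, not anything from MQA or from Stereophile's test bench) of how much spectral room an antialiasing filter gets at common sampling rates:

    # Room left for the antialiasing filter between the top of the audioband
    # (20kHz) and the Nyquist frequency (half the sampling rate).
    AUDIOBAND_TOP = 20_000  # Hz

    for fs in (44_100, 48_000, 96_000, 192_000):
        nyquist = fs / 2
        transition = nyquist - AUDIOBAND_TOP  # Hz available for the filter's rolloff
        print(f"{fs/1000:6.1f}kHz sampling: {transition/1000:5.2f}kHz transition band "
              f"({100 * transition / nyquist:4.1f}% of Nyquist)")

At 44.1kHz the filter must do all its work in about 2kHz; at 96kHz it has more than ten times that span, which is why higher rates permit gentler filters.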

So why not go for 48kHz? We could surely get consistently good sound from a 48kHz distribution format, assuming musicians and recording, mastering, and hardware engineers do their jobs well. However, it's widely thought that sound quality improves when the sampling rate is doubled, to 96kHz, and further improves when it's doubled again, to 192kHz, and on and on. That's the conventional wisdom, the result of many subjective experiences by countless audiophiles. But recently that rarest of things in high-end audio—actual scientific evidence—has emerged (footnote 3).

If nearly everyone agrees that increasing the sampling rate improves quality, most also agree that with each doubling the sonic benefit shrinks. And the necessary data rates quickly become formidable: With DXD, the highest-rate version of PCM in common use (352.8kHz), the data rate may exceed even the capacity of our own neural circuitry (footnote 4). Even if all those data were meaningful, they would be more than we could take in. Fortunately, they're not all meaningful—or equally meaningful. Most of those data, in fact, are noise.
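
For a sense of scale, here is a quick sketch of raw, uncompressed stereo PCM data rates (my arithmetic, not figures supplied by MQA):

    # Uncompressed stereo PCM data rate = sample rate x bit depth x 2 channels.
    formats = [
        ("CD",       44_100, 16),
        ("48k/24",   48_000, 24),
        ("96k/24",   96_000, 24),
        ("192k/24", 192_000, 24),
        ("DXD",     352_800, 24),
    ]

    for name, fs, bits in formats:
        mbps = fs * bits * 2 / 1e6      # megabits per second
        mb_per_min = mbps * 60 / 8      # megabytes per minute of stereo music
        print(f"{name:>8}: {mbps:5.2f} Mbit/s, ~{mb_per_min:5.1f} MB per minute")

DXD's raw stream runs to roughly 17 megabits per second, about twelve times CD's rate; by the argument above, most of that extra capacity is spent carrying noise.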

MQA's goal, with their "audio origami," is to fix things so that the increase in sonic quality is proportionate to the increase in file size or rate of data streaming—the aesthetic and ethical commitment I mentioned. It became clear from interviews that Bob Stuart cares deeply about this. But as John Atkinson points out in this issue's "As We See It," the notion wasn't pioneered by Stuart and Craven. It was first expressed by Claude Shannon (1916–2001), who laid much of the foundation for digital audio. Indeed, there's a unit of measurement in communication theory that expresses how much information, on average, a single bit holds—a key idea for MQA compression. It's called the shannon.
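
As a worked illustration of that unit (my example, not Stuart's or MQA's math): the average information carried by a binary symbol is its entropy, measured in shannons, and a predictable bit carries much less than a full shannon.

    from math import log2

    def entropy_shannons(p):
        """Average information, in shannons, of a binary symbol that is
        1 with probability p and 0 with probability 1 - p."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * log2(p) + (1 - p) * log2(1 - p))

    # A perfectly unpredictable bit carries one full shannon; a highly
    # predictable bit is mostly redundant and carries far less.
    for p in (0.5, 0.9, 0.99, 0.999):
        print(f"P(bit = 1) = {p}: {entropy_shannons(p):.4f} shannons per bit")

Bits that mostly restate predictable noise carry few shannons; bits that encode unpredictable musical detail carry close to one. That asymmetry is the opening that any efficiency-minded codec, MQA included, aims to exploit.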

Stuart's and Shannon's notions of efficiency are appealing and, it seems to me, common-sense. Few people drag bags of rocks on hiking trips, or walk the length of the city to visit the apartment next door. The prevailing view in audio, however, is that we should mindlessly hang on to every bit of musical data, no matter how few shannons it contains. Rejecting that approach is one of several radical things MQA does.

A fair number of recordings have music-correlated information above the audioband (footnote 5), especially just above it. You can't hear it, or not directly, but musical instruments do have harmonics up there. It's there, so we might as well keep it, or as much of it as possible. But if we don't try to distinguish between music and noise, keeping it comes at a high penalty, because the higher we go above the audioband, the less real information there is. Stuart mentioned in an e-mail to me that far fewer than 1% of recordings contain musical information above 48kHz—something he knows for a fact, because MQA's encoders collect such information as they do their work.

Audio origami comprises two distinct processes; MQA calls them folds. The first fold, which they call encapsulation, folds data from four times the baseband rate (176.4 or 192kHz sampling), or higher, down to half those rates. Very high-rate files—think DXD—may require two encapsulation folds.

The second fold, called type-L, folds data from 2x to 1x—from 96 to 48kHz, or from 88.2 to 44.1kHz. This is the range where most of the ultrasonic musical information is, so it's important to keep these data intact.
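
Here is a minimal sketch of that rate ladder (my own illustration in Python; it captures only which fold applies at which rate, and says nothing about how the folded data are actually encoded):

    # Hypothetical walk down the fold hierarchy described above.
    def fold_plan(source_rate):
        base = 44_100 if source_rate % 44_100 == 0 else 48_000  # 1x transmission rate
        rate, steps = source_rate, []
        while rate > 2 * base:                       # 4x and above: encapsulation
            steps.append(f"encapsulation fold: {rate}Hz -> {rate // 2}Hz")
            rate //= 2
        if rate == 2 * base:                         # 2x down to 1x: type-L
            steps.append(f"type-L fold: {rate}Hz -> {rate // 2}Hz")
        return steps

    for src in (352_800, 192_000, 96_000):
        print(f"{src}Hz source:")
        for step in fold_plan(src):
            print("   " + step)

A DXD source needs two encapsulation folds before the type-L fold; a 96kHz source needs only the type-L fold.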

I spent a lot of time reading about this aspect of MQA, and several days grilling Stuart by e-mail. I managed to achieve a rough, schematic understanding, not of MQA per se but of what the issues are and aren't—not necessarily of how MQA works, but of how it could possibly work.

First, I'll focus on the encapsulation fold.

To answer this article's key question: MQA's encapsulation fold is lossy. That's okay, though—or so goes MQA's argument, and I mostly agree—because at those ultrasonic frequencies there's almost nothing but noise. Almost. And noise is just noise, which is to say, it's uniform. You just need to learn a few things about the character of the noise: its level, its spectrum, maybe something about phase relationships. You can preserve that information in minimal space—just a few coefficients of a power-series approximation, say. Once that's saved—buried beneath the noise floor, say—you could strip away the noise completely during encoding, and use the buried information to add back the noise during the decoding stage. Again, I'm not saying MQA works like this, but it could work this way, with little or no degradation in audio quality.
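
Here is a toy sketch, in Python, of that hypothetical scheme. It is emphatically not MQA's algorithm; it only demonstrates that a noise-like band can be boiled down to a few coefficients and then regenerated.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, n = 192_000, 1 << 16

    # Stand-in for the ultrasonic region of a recording: low-level hiss
    # with no musical content.
    hiss = 1e-4 * rng.standard_normal(n)
    spectrum = np.abs(np.fft.rfft(hiss))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = freqs > 48_000        # the region the encapsulation fold deals with

    # "Encode": describe the band's log-magnitude envelope with a handful of
    # polynomial coefficients: a few dozen bytes instead of megabits of samples.
    coeffs = np.polyfit(freqs[band] / fs, np.log(spectrum[band] + 1e-12), deg=3)

    # "Decode": regenerate statistically similar noise from those coefficients.
    envelope = np.exp(np.polyval(coeffs, freqs[band] / fs))
    rebuilt = np.zeros(len(freqs), dtype=complex)
    rebuilt[band] = envelope * np.exp(2j * np.pi * rng.random(band.sum()))
    resynth = np.fft.irfft(rebuilt, n)   # noise to add back above 48kHz on playback

    print("mean band magnitude, measured vs modeled:",
          float(spectrum[band].mean()), float(envelope.mean()))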

In that handful of recordings mentioned above, however, there is that tiny bit of musically relevant information present at such high frequencies. If it's there, we'd like to keep it. How does MQA deal with it? I believe it's largely preserved, though I admit I'm not 100% clear on this point. In our correspondence, Stuart emphasized the importance of preserving this "correlated spectral data," as he calls it—correlated, that is, to the actual music in the audioband, where it "manifests as micro-temporal structure"—and that's the whole point of MQA. The MQA encoder, he told me, can detect "spectral components above 48kHz," and has "several strategies" for dealing with them, including choosing from among more than 2000 encapsulation algorithms. The encoder will choose the option that will "allow the decoder to most accurately reproduce the signal 'envelope' and slew-rate." Does that mean that this musically relevant information is preserved—and how well? As I said, I'm not sure—Stuart never quite committed on this point—but it seems a lot of trouble to go to just to throw it out.

Now consider the L-type fold, which takes the data down to their transmission rate of 44.1 or 48kHz. I asked Stuart if the L-type fold is lossless in the usual, traditional sense—that the original data can be recovered in bit-perfect form. It's a complicated business, and his answer was characteristically nuanced, with several annotations. Here are the key points of Stuart's carefully annotated answer, in his own words:

"[T]he MQA encoder estimates the 'triangle' and a suitable guard band and can encode that in an L fold. In an L fold, the upper octave uses a specific 2-band predictor which gives a very good waveform estimate. That is then losslessly buried according to mastering choice. . . . The remaining 'touchup' signal is packed below the noise-floor."

This is less complicated than it sounds. The left-pointing "'triangle'" is the region bounded by musical information above, the noise floor below, and the lower end of the 2x frequency range on the left—in other words, it's where all of this range's music-correlated data live (fig.1). The "guard band" is a few bits below the noise floor that may contain recoverable musical information—because research has shown that we can hear into the noise floor to a depth of 10dB or so.
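
To see why information "buried" beneath the noise floor can nonetheless come back intact, here is a toy sketch (mine, not MQA's packing scheme) that hides a payload in the bottom few bits of a 24-bit word:

    import numpy as np

    rng = np.random.default_rng(1)

    K = 3                                    # payload bits hidden per 24-bit sample
    samples = rng.integers(-2**23, 2**23, size=16, dtype=np.int64)  # host "audio"
    payload = rng.integers(0, 2**K, size=16, dtype=np.int64)        # "touchup" data

    buried = (samples & ~np.int64(2**K - 1)) | payload   # overwrite the K LSBs
    recovered = buried & np.int64(2**K - 1)              # decoder reads them back

    assert np.array_equal(recovered, payload)            # bit-perfect recovery
    print("largest change to any host sample:",
          int(np.abs(buried - samples).max()), "steps of the 24-bit LSB")

The host samples shift by at most a few steps of the least-significant bit, far below the recording's own noise floor, while the buried payload is recovered exactly. That, in rough outline, is what "losslessly buried" can mean.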


Fig.1 Spectral analysis, 0Hz–96kHz, the "musical triangle" (blue trace), bounded by noise (orange trace) at its base and the baseband Nyquist frequency (vertical line at 22.05kHz) to its left (5dB/vertical div.).

"In any case," Stuart concludes, "a full stream is decoded to exactly the 'Core' resulting from the Encapsulation step."

Stuart is saying, then, that in an L-type fold, musical information above the noise floor, plus part of the noise, is losslessly compressed, then fully restored on decoding to exactly what it was before the L-type fold occurred. (The noise, apparently, is not perfectly rendered, or not all of it.) This all begins to hint at what the problem is with the whole lossless/lossy formulation: lossless compared to what?

To sum up the losslessness issue: In its folding and unfolding, MQA distinguishes between music-correlated data and noise, tries hard to retain the music-correlated data, but sensibly worries much less about preserving the noise bit by bit. This allows MQA to achieve their goal of preserving the benefits of high-resolution data without the burden of large swaths of pointless noise.

At this point I would love to show you what an MQA-decoded file looks like, point out all its nuances, and tell you what they mean. But I've recorded and analyzed—or tried to—dozens of MQA files, and have noticed few patterns. I've likely encountered most of MQA's 2000+ encapsulation algorithms: Every MQA file seems to do things in a different way, diverging from the apparent non-MQA source in many different ways, and sometimes not at all. Every time I thought I was getting a handle on some aspect of MQA, I'd look at another file and find that it didn't conform to my theory.

So instead of showing you a typical MQA file—there's no such thing—I'll show you an outlier. I captured the output of a Mytek HiFi Brooklyn DAC+, with its built-in MQA decoding enabled, using my Focusrite Scarlett 2i2 audio interface. The fast Fourier transform (FFT) was carried out in Adobe Audition. The Focusrite is limited to a top sampling rate of 192kHz, so I've missed some high frequencies. The Focusrite contributes its own noise at frequencies above 60kHz or so, but its contribution should be the same for all data; the differences you see are surely real.
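
For readers who want to try a similar comparison, here is a minimal sketch of the analysis in Python rather than Audition. The WAV filenames are placeholders for captures of the DAC's analog output; the libraries (NumPy, SciPy, soundfile) are common, off-the-shelf tools.

    import numpy as np
    import soundfile as sf
    from scipy.signal import welch

    def spectrum_db(path, fs_expected=192_000):
        """Averaged spectrum of a capture, in dB relative to an arbitrary reference."""
        data, fs = sf.read(path)
        assert fs == fs_expected
        mono = data.mean(axis=1) if data.ndim > 1 else data
        freqs, psd = welch(mono, fs=fs, nperseg=32768)   # averaged FFT (Welch method)
        return freqs, 10 * np.log10(psd + 1e-20)

    # Placeholder filenames for captures of the run-in section.
    f, dxd_db = spectrum_db("runin_dxd_capture.wav")
    _, mqa_db = spectrum_db("runin_mqa_capture.wav")

    mask = (f > 10_000) & (f < 30_000)
    print("mean MQA-minus-DXD level difference, 10-30kHz:",
          round(float((mqa_db - dxd_db)[mask].mean()), 1), "dB")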

Fig.2 shows the frequency spectrum of the run-in noise and several seconds of music for an MQA file and the original DXD file it was very likely encoded from. DXD noise is blue, MQA noise is turquoise; DXD data are magenta, MQA data are orange.


Fig.2 Spectral analysis, 0Hz–96kHz, Prolog for soprano and piano, DXD run-in noise (blue), MQA run-in noise (turquoise), DXD music data (magenta), MQA music data (orange) (5dB/vertical div.).

This measurement shows the most dramatic departure I've seen of an MQA file from the file it was made from. The noise added by MQA begins to rise at about 10kHz—well down into the audioband—and reaches +10dB or a little more by 20kHz. The noise continues to rise above the audioband, to more than 20dB by 30kHz.

The difference between these files was so large that I thought I might be able to hear it, especially in the recording's unusually long run-in section of about four seconds. So I did what I could to minimize extraneous environmental noise, waited (and waited) for my building's boiler fan to turn off, and turned the volume up loud. I played the run-in and the first few seconds of music over and over. My RadioShack SPL meter indicated peak levels in excess of 110dB (C-weighted, slow).

The music is Prolog, the first section of Olav Anton Thommessen's Veslemøy Synsk (after Grieg), for soprano and piano (Marianne Beate Kielland and Nils Anders Mortensen), recorded to DXD and available in several formats (2L 2L-078). In MQA or DXD, the music is great and the sound is superb.

Not only did I fail to hear a difference, I heard nothing at all before the first piano note, in MQA or DXD. That run-in noise was inaudible. Recording engineer Morten Lindberg was careful in his choices—no surprise.

Earlier, I wrote that the lossiness/losslessness question is beside the point. Here's what I was getting at. There certainly are differences in the MQA version of this recording, and the changes due to the MQA encoding/decoding seem to indicate some loss of resolution, even if it isn't audible. And yet, whatever the cause of the differences between the DXD and MQA versions—whether the result of origami or of some other change wrought during encoding—we already knew that MQA changes the music. MQA's goal is to make music sound better, and you can't make music sound better without changing how it sounds. Lossiness is beside the point.

How, then, should we evaluate MQA? The next step would be to confirm that it indeed offers superior time-domain behavior, which will require cooperation from Bob Stuart and the MQA team. The next step after that would be to get a fix on MQA's sound in some rigorous way—something beyond casual subjective observations. Then we can start to weigh, with analysis and careful listening, any frequency-domain degradation against time-domain improvement.

Finally, we need to decide whether MQA is good or bad for music. We audiophiles probably won't get to decide MQA's fate, but we do get to have an opinion.—Jim Austin



Footnote 1: I'll be writing more about MQA's time-domain claims in future articles.

Footnote 2: Kinks and corners in a filter induce ringing in the time domain.

Footnote 3: As with many things in high-end audio, the evidence is mainly anecdotal, though several studies have been done, and the results of those studies have been combined into a meta-analysis by Joshua D. Reiss, published in 2016. The conclusion: Higher sampling rates are audible, but the effect is small.

Footnote 4: Writing about this topic requires some awkward nomenclature: If a trumpet makes noise but you can't hear it, is it music? If a tree falls in the woods . . .

Footnote 5: The instruments with the most ultrasonic information tend to be those, like cymbals, that don't really have harmonics in the usual sense.

COMMENTS
deckeda's picture

I get it. MQA aims to toss the noise living up where we can’t hear anything anyway, while keeping the music’s artifacts experienced-yet-not-directly-heard. I’ll buy that, as should any audiophile who implicitly understands that commonly published specs never describe sound “quality” or “performance.”

I’m reminded of every cassette deck magazine ad that claimed flat response out to 18kHz or better, as if that was all that mattered.

This article is the best explanation I’ve read about the file/stream size aspect. Nicely written, Jim.

Solarophile's picture

So like any lossy encoding system, it assumes certain things about what humans can hear and what we can't hear. They set a threshold in the "nether regions" which they feel are unnecessary and just "noise".

I guess sorta like MP3 which applies the psychoacoustics to the whole audio band instead of just the 2fs unfolding.

spacehound's picture

Except that the fully complete MQA process splatters audible aliasing artefacts right across the 20Hz to 20kHz range, in addition to the other distortions of and absences from the original whether the process is complete or not ('not' being the absence of an MQA DAC).

An MQA DAC simply adds the aliasing above to the first set of distortions/absences that MQA has already caused.

Some people seem to prefer that result but their personal preferences don't make MQA 'better', just 'not the same as the original'.

DH's picture

I suggest everyone read the paper John Atkinson provided a link to:
https://www.xivero.com/downloads/MQA-Technical_Analysis-Hypotheses-Paper.pdf
it's a bit of a different perspective.

As far as lossy and file size: a) MQA is definitely lossy. It has a depth of at best 17 bits, and throws away all sorts of material. When you look at MQA files in an analyzer, there’s basically nothing above 48k. MQA and Jim Austin apparently think the lost bits are irrelevant to SQ.

That's a matter of opinion. I don't hear some sort of consistent improvement when I've listened and compared MQA to non MQA with an MQA DAC. Sometimes I've liked it, sometimes not (i.e., I sometimes think it sounds WORSE).

So MQA is a lossy format that some people will prefer.

And so my question is, do we want some entity like MQA deciding for us that those bits are irrelevant and don't affect SQ? We've been told that before and decided it isn't true.

b) the file size argument in favor of MQA is sort of bogus. You can make an 18/96 FLAC file from a 24/96 file and it will be smaller than an MQA download file and have higher real resolution/more depth. There's also free “entropy” software available for eliminating unnecessary bits from FLAC, and it really does only throw redundant bits away. So MQA isn’t needed for smaller download sizes.

drblank's picture

over time, as the MQA catalog size increases, it will be more compelling to stream Hi Res using MQA than people paying gobs of money to buy the digital downloads. While it's great to have uncompressed files, which is typically what I listen to on my computer, moving forward, I actually will prefer streaming MQA in addition to my pre-existing catalog. But I've been collecting CDs (ripping to AIFF) and digital downloads since the 80's when CDs became available. But for those young kids out there that have limited funds, they can simply stream MQA, get a reasonably priced DAC for as little as $200 and buy a decent pair of headphones and they are off and running and they don't need to spend tens of thousands or more, especially when they don't have it.

MQA, as I see it, is going to change how the masses listen to music as it becomes more affordable (hardware side) and the catalog is current.

I actually have stopped buying digital downloads, unless there is something REALLY special that I don't have in my current collection. My CD purchases have also dwindled to next to nothing as well.

spacehound's picture

"I will consider claims that MQA is a lossy codec"

"Consider" whatever you want, it won't alter reality.

MQA being lossy is not a "claim" as you imply, it's a fact.
Stuart, when pushed, said so himself.

And there are two parts of MQA's lossiness. It deliberately discards some data, and reduces the bit depth of what it keeps to "about 17" (in Stuart's own words) from the original 24.

And neither Stuart's attempts to redefine 'lossy' nor your parroting of his words will change the fact that MQA is lossy.

What's more, you could equally apply Stuart's words to MP3 or AAC. And we all know they are lossy too.

music or sound's picture

What does that origami mean? Does MQA try to control the reproduction of music?

NeilS's picture

I took the accompanying graphic to the article as an eloquent and succinct statement that notwithstanding the form, in substance MQA is about money, not music.

AJ's picture
Quote:

The most important information in any audio recording, as everyone surely agrees, is what falls within the audioband of 20Hz–20kHz—ie, the range of frequencies audible to the human ear—and the audioband is adequately covered by CD's 44.1kHz sampling rate, with the caveat that squeezing a well-behaved antialiasing filter into the narrow transition band presents challenges (footnote 2).

Footnote 2: Kinks and corners in a filter induce ringing in the time domain.

..and the evidence that *visible* "ringing" seen in scary tales, is audible, to anyone, with music, especially 50-70+ year old males, is where???
Missed that Footnote??

AJ's picture
Quote:

However, it's widely thought that sound quality improves when the sampling rate is doubled, to 96kHz, and further improves when it's doubled again, to 192kHz, and on and on. That's the conventional wisdom, the result of many subjective experiences by countless audiophiles. But recently that rarest of things in high-end audio—actual scientific evidence—has emerged (footnote 3).

Footnote 3: As with many things in high-end audio, the evidence is mainly anecdotal, though several studies have been done, and the results of those studies have been combined into a meta-analysis by Joshua D. Reiss, published in 2016. The conclusion: Higher sampling rates are audible, but the effect is small.

Quote:

Reiss - these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown

Jim, where are you getting this allegation that >48kHz sample rates cause anything audibly beneficial?
Your very own source says otherwise.
Not to mention that study is quite controversial, due to what JA's statistics professor might call "statistical mining".
If you are going to put any weight into those cherry-picked studies, then you will also have to accept that high-bitrate MP3 rivals "Hi Res", because that's what some found.
Or maybe, the studies were not quite valid. Can't have it both ways there.

AJ's picture

Very simply:
Take any dual output MQA DAC. Route output 1 straight to preamp input 1 as anyone would listen to MQA. Take the 2nd DAC output, run it through a 16/44 "time smearer" loop, send to preamp input 2, make sure preamp out to amp or active speakers is the same V for both inputs (level matches).
Take your time listening, switch between input 1 and 2 at your leisure. Tell us what difference you hear.
After that, I'll let you know the next step..

ednazarko's picture

As I read the MQA explanation above, I remembered some things from back when I used to play. There's a fair amount of sound information caused by the interactions between the sounds produced by instruments, and between instruments and the space where they're played. Indirect musical info. (?) An example of what I mean: Two people playing the same note but with slightly different intonations produce audible "beats" in addition to the two tones. We used to play around with that effect from time to time. It was easier to make happen in some spaces, and could be done over a broader range of "beats" in some spaces.

When I played in orchestras, there were some conductors who wanted instruments and sections in different locations from where other conductors wanted them. There were a few times when we'd play in a new space and the conductor and musical director would change spacing between players and sections. It was seldom "just move here" but a process of changes that finally settled.

I couldn't absolutely tell the difference with orchestras, because I was sitting in the middle of it all, and they were engineering the ensemble sound for the audience. But... the same musicians were playing the same notes on the same instruments, and the sound changed enough to justify spending very expensive rehearsal time adjusting the shape and configuration of the orchestra. Some of what they were adjusting was time information (sound over distance.) I suspect some was changing the interaction between sounds. What I could absolutely hear is how different my section sounded to ourselves if we were seated in different places. Same musicians, same instruments, same notes, but the sounds produced interacted with each other differently.

I've heard the bigger picture differences in brass choir and big band configurations, where sometimes I was able to listen front of house. Some spaces sounded pretty bad initially, but shifting the center of the group, distance between players, and in a couple brass choir situations, reversing the layout - could have significant and audible improvements. Damn if I could tell you exactly WHY things sounded better or worse. If I (or most people) could, the process wouldn't have always been an experimental one. That difference may have been up (or down) in the edges of what's easy to hear.

I've always felt that the differences I hear in high res files compared to lower res or compressed files are that hard to define information of sound waves interacting with each other. Like the intonation beats, but not as obvious, maybe up (or down) in the noise, maybe not in the "normal range of hearing"... but it does change what's heard. Seems to me that MQA is trying to preserve some of that "non-musical" or indirect information produced by the interactions of the musical information, in the performance space, that's also preserved in those straight forward higher res files.

In my MQA vs not experiments, I always seemed to prefer MQA over red book of the same recording more with live music. Not so much with the stuff where everyone's laying down tracks in their own little booth, dry, and then sound engineered after, but where all the variables in a performance are truly variable. Same is true for me for high res versus red book. I don't hear as big of a difference with studio recordings, although I wonder if where I do hear a difference, the recording session was more organic and live.

I do hear the differences being claimed. I can tell you better or worse, more or less real, but I can't always say why. That's why I think it's that indirect info that's causing the difference. And why I enjoy explanations like this.

AJ's picture
Quote:

In my MQA vs not experiments, I always seemed to prefer MQA over red book of the same recording more with live music.

Except you'll never know whether it's the MQA or the remastering, since it is now well known that some MQA files are totally different masters from the redbook version.
As with any "experiment", GIGO
I explained a method above yours, for a real test of "MQA vs Redbook"

ednazarko's picture

I can't reliably discern the difference between a Stradivarius and a violin produced yesterday. When I can tell, I can't always tell you what the difference is. A great violinist can. That means there's a lot of information in the sound that's meaningless to most, and meaningless means inaudible. (Since you hear with your brain.) Doesn't mean that information isn't there, nor that it's meaningless.

spacehound's picture

Are just gibberish.

As proven by Jim's testing results totally (and perhaps accidentally) destroying Stuart's 'transparent pipe' claims.

Which of course was also gibberish, but thankfully Jim has now identified it as such.

And of course it also totally destroys the very reason for MQA, which was 'Master Quality Authenticated'. The MQA result is not even remotely similar to what was put down in the studio.

spacehound's picture

Jim Austin in "MQA Tested Part 2: Into the Fold"

I quote:
1) "there's a serious philosophical perspective behind MQA's commitment to data-rate reduction."
2) "For Stuart, efficiency in the delivery of musical information is an aesthetic, even an ethical commitment."

The first is a totally unquestioning, uncritical, unattributed, and 'adorned' comment straight out of the 'Stuartspeak' book where Stuart's actual words were "MQA is a philosophy rather than a codec".
And the 'adornments' are the adding on of the 'aesthetic' part, which is Jim Austin simply parroting JA's words. Stuart neither said it nor implied it himself. Nor did Jim Austin; he just copied it from JA. The second 'adornment', which is the 'ethical' part, is entirely Jim's own work.

The second is simply an unquestioning re-statement of a Stuart comparison where Stuart implies (but is careful not to actually state as that would be a demonstrable falsity) that 88.2/24 is less efficient than 44.1/16 because it uses three times the bandwidth, while totally ignoring that it carries three times the data.

WHAT DO I THINK OF THE MQA 'SOUND'?

Since I retired (not from the audio business in which I have never been involved) I don't have the equipment to measure anything so I listened using Tidal.

Equipment: My equipment is at least the equal of anything owned (or generally used) by the Stereo staff. It comprises a fully MQA-capable DAC which also has a high-quality volume control, and which, unlike some, totally disables the entire MQA function including the filters automatically when it plays a non-MQA file, a power amp, and a pair of quite big full-range speakers.

1) Tidal 'HiFi' mode (44.1, no MQA)
2) Tidal 'Master' mode (MQA does the 'unfold', DAC filters are whichever of the non-MQA choices I have made)
3) Tidal 'Master' mode (MQA passthrough, DAC does the unfold, discards my choice of filter and uses whichever of the several MQA ones it thinks 'best' until it has completed playing the MQA file, then automatically reverts to my previous filter choice.)

Result:
Differences are very small. Which I prefer (which is NOT a definition of sound quality) depends on what I am playing and what 'mood' I'm in, time of day, etc., probably more than the choice of music does.

Overall my preference is probably 2. Let Tidal do the 'unfolding' so the DAC stays in its 'regular non-MQA' mode while still getting 88.2 or 96 rather than 44.1. I think there may be a valid 'technical' reason for that involving the 'weak' MQA filters not being used in these circumstances.

And NONE of it sounds anywhere near as good as the 9:24 2L Mozart concerto in D major - allegro in the downloaded non-MQA DSD 64 or DSD 128 form, though I am not generally a DSD fan. Sure, that I think it is 'better' is not a valid measurement of sound quality, but it's the only one that sounds as if the solo violin at about 7 minutes is actually in my room.

JimAustin's picture

To address your early comments: Has Stuart written that MQA is a philosophy, not a codec? If so I haven't seen it, or don't remember it. Where has he written it? On the other hand, I have interviewed Stuart several times, and he has made his best case to me. The judgments expressed in this article are my judgments--my opinions. I'm parroting nothing. You are free to question my judgment--although others will have to decide for themselves whether to pay any attention to what you say, given that you, a retired person with nothing to risk but your personal reputation, refuse to say who you are.

... which becomes especially relevant when you make claims like, "My equipment is at least the equal of anything owned (or generally used) by the Stereo staff." As Peter Steiner put it: "On the Internet, nobody knows you're a dog."

spacehound's picture

And you have used it on others who disagree with you, which to me is a sign of 'desperation' because you haven't got any answers.
1) What difference would it make if you know my name or the names of some others? None, because you would not have heard of us anyway.
2) I see you are trying to 'knock' my equipment. Remember that unless I visited your house I would not know what you actually use either, we just have to take your word for it, just the same as you have to take my word. So your 'dog' cartoon applies to you every bit as much as it applies to me or anyone else.

So, those 'attacks' totally disposed of, I will reply to your more worthwhile comments (there appear to be only two).

Where did Stuart say it?
Page 2 of "MQA Questions and Answers" Stereophile August 11 2016.

"Aesthetic" - JA used a near identical phrase to you on the CA site and may have repeated it here. Sensibly, he did leave off your near sainthood phrase for Stuart.

NeilS's picture

Is it now Stereophile policy that commenters not using their real names are considered and can be insulted by its columnists as having no integrity?

spacehound's picture

MQA is basically pointless.

1) The claimed 'recording by recording' studio 'authentication' process is simply not happening in most instances. It may be happening for a few brand-new recordings made in studios where the MQA-approved equipment is already in place, but not otherwise.

2) The MQA 'fold' etc.
a) It 'folds' the high-res content into the lower end. To get room for this it has to reduce the bit depth of everything from the original 24 to about 17. Thus it is lossy, so not the same as the original. This bit-depth reduction may not actually matter, as the human ear does not have that resolution. But it is not 'improved' over any randomly selected 96/24 recording you choose, as MQA imply; it is degraded. This degradation probably doesn't do any audible harm, but it serves no audible purpose either; it exists only to enable the 'fold'.

b) Any content above 96K is artificially created by upsampling. So it is not real. To be fair, there won't be much content above 96K. Quite likely none at all. So any mention of above 96K will be just 'marketing garbage'. Again, to be fair, I do not know if they mention it at all.

c) To make the above-44.1 content available, the DAC has to use a 'weak' filter. Unfortunately this splatters quite high-level 'aliasing' all over the 20Hz-20kHz range. Hearing this aliasing may be why some people falsely say MQA is 'better' than the original when they actually mean 'more to their personal preference'.

3) MQA uses FLAC compression for transmission. Unfortunately, MQA-processed input files don't compress as well as 'regular' music files, so FLAC, while remaining 'bit perfect', ends up with a lower compression ratio. So MQA does not compress into FLAC efficiently.
As a result the file size is not significantly smaller than a 96k 'regular' file encoded in FLAC despite all the MQA 'machinations'.

So as I said, it's all pointless from the user's point of view.

John Atkinson's picture
NeilS wrote:
Is it now Stereophile policy that commenters not using their real names are considered and can be insulted by its columnists as having no integrity?

No, it is not our policy. Posters are allowed anonymity.

John Atkinson
Editor, Stereophile

Fokus's picture

A bit of conjecture, a lot of regurgitation of what the MQA people tell you. No analysis, no proper thinking. And this while it has been known pretty well since early 2015 what MQA does, and does not, do.

adamdea's picture

I applaud the slightly more inquiring tone of this article, Jim; but I remain puzzled at the questions you don't ask.
Of course it's easy to say that whatever losses MQA makes, they probably aren't audible. As any fule 'no', there have been lossy codecs around for ages designed by really clever people at (eg) Bell Labs and the Fraunhofer Institute and based on actual listening tests to determine audibility. These codecs are basically transparent, leaving aside trick-pony killer-samples, well below 320 kbps.
Any argument about what information you can lose has to be based on a theory of audibility. If you are going to say that any information more than 10dB below the noise floor can be lost, you are just setting a particular bar for a lossy codec. It is fundamentally no different than that set up by the people behind MP3 or AAC, except that they are using much more sophisticated and evidence-based psycho-acoustic modelling.

If we are going to set audibility as the benchmark, there is no point bothering with MQA at all.

And of course, there will always be some people who say they can hear the difference. Yes, some people out there say that MQA sounds worse than echt hi rez. But then this is a hobby where people can hear the difference between plugs with the screws aligned differently.

Hi rez is of course not necessary, because as the evidence tells us, either no one can hear the difference between it and FLAC, or (putting it at its highest) very few people can do so, and probably not on most material. But if someone really does want it, then they are probably not going to be satisfied with the rather half-baked argument that MQA preserves exactly the right amount of inaudible material.

And that takes me to the last point: the lossy part of MQA is not so much about what it chucks out as the spuriae which it includes. Whilst it preserves some of the 48-96kHz band, it does so by aliasing that information into the 24-48kHz band and then allowing that information (and other components in that band not derived from information in the 48-96kHz band) to be imaged back into 48-96kHz on reconstruction. This leaves some of the material in each of the two bands corresponding with what was there in the original signal and some material in each of the two bands which is spurious. References to trying to "retain the music-correlated data" gloss over this point. If you retain 100% of that data and add an equal amount of false data (and this is *not* noise), then you have to ask whether it can really be described as retaining the information.
In the scene at the end of Spartacus where everyone says "I'm Spartacus", what has happened to the information about who is Spartacus?

adamdea's picture

If you haven't read this, you really should.
http://www.audiomisc.co.uk/MQA/origami/ThereAndBack.html

as well as these other pages by the same author.
http://www.audiomisc.co.uk/MQA/intoshape/NoiseShapingHighRez.html

http://www.audiomisc.co.uk/MQA/cool/bitfreezing.html

http://www.audiomisc.co.uk/MQA/bits/Stacking.html

spladski's picture

I'm Spartacus.

adamdea's picture

Jim and JA (if you are looking)
At last, after over a decade of waiting, someone has finally done the comparison which any thoughtful person could see needed to be made, before any sensible analysis could be made of the alleged problem of 44.1kHz sampling using conventional filters. Ta-dah! Where is the pre-ringing with real music?

http://archimago.blogspot.co.uk/2018/01/audiophile-myth-260-detestable-digital.html#more

Now why hasn't Stereophile ever done this? And particularly, why are we ditzing around with those silly old impulse responses and (at best) pretending that it is merely a matter of esoteric mathematical nit-picking that the Dirac delta function is an illegal signal, when it is in fact a matter of practical reality that it tells you nothing about the effect of a filter on a real-world recorded music transient?

I dare say that Archimago's analysis is not the last word; but it is a shame in every sense that it is, as near as dammit, the first.

spladski's picture

Even a cursory analysis of the manufacturer's reply reveals marketing drivel and obfuscation. Apparently Bob Stuart has become a neuroscience engineer, with knowledge of the brain's processing of auditory stimuli beyond the rest of us.

There is no neuroscience, simply what you prefer. We have psychoacoustic studies which have led to 3D audio and room simulators for headphones. There is a long way to go before we even discuss neuroscience.

There is no authentication. No printed name or signatory to the authentication. The artist, recording engineer, mixing engineer, and mastering engineer can be, and are, bypassed with another level of subjective tweaking.

There are many devices that make things sound 'better' e.g. an Aphex Aural Exciter. Do they claim that they are creating the original analog waveform? No. You can give me any master recording and I can make it sound better. So what? Anyone who claims they are recreating the original analog waveform from a completed mix is either exhibiting arrogance of the highest order, or trying to sell you something by sleight-of-hand. A completed mix cannot be fixed. This is pure arrant nonsense and your common sense tells you that. You can only make subjective changes. To fix anything, you have to return to the recording and mixing stage. But common sense left the playground, crowded out by inane technical discussions.

If you install a phase-compensated record amp into an analog recorder you get a different square-wave response and a different sound. So now we have two original analog waveforms. The phase-compensated version resembles an inductive response. Then there is the sister response, the capacitive filter version. So now we have three original analog waveforms. Which is correct? The phase compensation is an engineering solution. It works the same, every time, on whatever material. MQA is tweaking because there are 1999 ways to get it wrong.

If MQA had declared itself honestly with scrutiny before deployment like the MPEG standards there wouldn't be any vitriol. But if you dress up marketing lies in the cloak of neuroscience, you should expect it.

I was present in a mixing session of a high-profile band's live album. Involved were two digital multitrack recorders, one twenty-four-track analog tape machine, and one eight-track analog tape machine. Jim Austin and John Atkinson believe that Dr. Neuroscience can recreate the original analog waveform from the completed mix; they have bought the spiel.

Ignorance is bliss. It allows people with triple degrees in astrophysics to kiss the feet of a god.

Brian Lucey's picture

I'm late to the party it seems. Working daily in the studio will do that.

MQA is a rude, cynical business losing millions, it's a harmonic scheme for money, it's sold as lossless which is a lie (although Austin is giving cover for that early claim of a lossless patent by saying "who cares?"). MQA is sold as Mastering Engineer Authenticated, meaning approved ... which is the lie to end all lies. Also sold as "correction" which is tough to believe someone would say with a straight face.

They are bulk processing back catalogs to create a market, so MQA has zero integrity, yes Bob I mean you personally. MQA processing of approved masters is altering, meaning damaging/changing/stepping on client/producer/label/manager/artist approved work to make money for these guys. No one needs it, except Stuart and the team of greedy people on board. As a mastering engineer it's offensive without words. These men at MQA lost their Meridian business to DVD. Sorry gents, that is rough, and I feel for you ... yet do not go putting your greedy, manipulative, authoritarian fingers into the Recorded History of Music with this offensive bull sharkey.

Certainly there are subjective cases of "preference" for the 'Sound of MQA', because like mp3, or Mastered for iTunes or a DA or speaker ... everything has a sound. People are people, the ego likes to have a vote. MQA processing to my ear, listening to my work pre and post MQA, has some harmonic distortion and maybe even a volume boost as a result just big enough to help win an A/B. Louder is better !

I understand audiophiles can be suckered. The mastering engineers who like this artifact might as well wave a banner saying "I can't craft untouchable masters on my own, I'm inept, so I need this artifact, randomly, all over my shoddy work!" The rest of us are 100% against this travesty for profit.

And please, don't talk about the vetting. Of course one division of a major label (accounting) will overrule another (content producers), that's just corporate greed.

In principle, "correction" of music is a fallacy. There is no perfection in music or music playback. Rooms, temperature, humidity, we the listener, are all in flux. Mastering finds a repeatable result knowing it's in flux.

Also ... what is the personal insecurity of those of us who seek perfection with music? Music is organized distortion, from room compression to the massive additive distortion happening today with everything produced except maybe classical. We add distortion to recordings, like we alter the EQ of mono tracks in a mix ... we love distortion ... for the emotional impact. And by the mastering stage, we coalesce this cocktail of artifacts with supreme precision. Everything interacts. And it's signed off on by all parties. MQA steps on all of this.

Dear folks at MQA, 16 bits is not a small file. People who care will download the larger files, not stream. And faster streaming gives MQA a death date just like DVDs, did you not learn anything? If you want to make serious money and change the world, build a better mp3. We all would love that and you would make billions.

Finally, and slightly off topic: 44.1 is not inferior to 88.2 or 96k or 192. Therefore getting 192 down to 48 by (insert BS term here) is not the goal of anyone who understands music production. The “more samples is better” myth is built on the notion of music as perfectionism. Conversion QUALITY is four things: analog path, clock, converter chip, and filter. 44.1 can be great. 96k can be bad. It's about the hardware in total. Perfectionism and “fixing” after mastering ... could not be more naive (giving Stuart the benefit of the doubt) and thus dangerous.

Music intends four things: Intimacy, Connection, Community and Elevation. There is nothing perfect needed, possible or part of the listening transaction. In fact, we like the imperfections. The humanity. Please, stop the madness, it's rude and dumb and set to die in time anyway.

The ONLY FILE that matters is the native sample rate of the mastering session. It cannot be improved in any way by anything that changes it. The rest is lies for money. Like this article, many people making money here. Even from the controversy. Get a real job. Go make some art. Create something. Or at the least, don't be complicit.

I was contacted by Mytek to represent MQA in LA, along with Bob Ludwig, who they hoped would represent it on the East Coast. They processed my work in the best way they could, and I have since heard some of my work catalog post-processing. Yuk. If I wanted that distortion in there, I would have added it in the first place.

www.magicgardenmastering.com
