I've Heard the Future of Streaming: Meridian's MQA

Meridian's Bob Stuart at the Manhattan launch, showing the law of diminishing returns regarding increasing the sample rate of PCM encoding.

In almost 40 years of attending audio press events, only rarely have I come away feeling that I was present at the birth of a new world. In March 1979, I visited the Philips Research Center in Eindhoven, Holland and heard a prototype of what was to be later called the Compact Disc. In the summer of 1982, I visited Ron Genereux and Bob Berkovitz at Acoustic Research's lab near Boston and heard a very early example of the application of DSP to the correction of room acoustic problems. And in early December, at Meridian's New York offices, I heard Bob Stuart describe the UK company's MQA technology, followed by a demonstration that blew my socks off.

With a pair of Meridian digital active speakers being fed audio data from a laptop, Bob was playing 24-bit files with sample rates up to 192kHz, yet the data rate was not much more than the CD's 1.5Mbps! Not only that, but there was a palpability to the sound, a transparency to the original event, that I have almost never heard before, to which Jason Victor Serinus can testify (see below).

With this low a data rate, MQA will allow what appears to be true high-resolution audio to be delivered over the same Internet pipes through which music lovers currently get, at best, CD-quality audio from Tidal or Qobuz.

What was behind this "WTF" moment?

MQA
MQA is the result of years of research by Bob Stuart and his collaborator, the renowned English engineer Peter Craven. (For more information on MQA, click here.) It involved going back to first principles, examining modern research on the perception of sound as well as making a fundamental examination of the nature of music. (Bob presented a paper on this concept to the Audio Engineering Society in Los Angeles last October: "A Hierarchical Approach to Archiving and Distribution," downloadable here. Much more detail on the theoretical and philosophical basis for MQA than I have room for in this report is available in Bob's paper.)

Bob kindly shared his PowerPoint presentation with me, and I reproduce some of its slides here in order to describe what MQA does. The following is my interpretation of Bob's presentation; if there are errors, they are my own.

Fig.1 The peak spectrum of a Ravel String Quartet, sampled at 192kHz with 24-bit PCM. On such a diagram, data-rate is equivalent to area.

Fig.1 is a variation on what is called a "Shannon Diagram"; it shows the information space required by an audio signal, with level on the vertical scale in dB and frequency on the horizontal scale in kilohertz. The peak levels of a string quartet recording are plotted against frequency (red trace), while the blue trace shows the level of the recording's background noise. (Bob says that regardless of the type of music, this basic triangular shape in information space is characteristic, music and musical instruments having evolved to match our hearing sensitivity.) The gray shaded area at the left of the graph, above the green line, shows how much of this information space is preserved with a 16-bit/48kHz PCM recording. Below the green line is shown how much more is preserved with 24-bit PCM.

When the sample rate is doubled to 96kHz, the area shaded in pink is also preserved in the recording; when it is doubled again, to 192kHz, the full information space is captured.

If you look closely at the red and blue traces, four things become apparent:

1) That a sample rate of >96kHz is required to encode all of the music information in this recording.
2) That the information present above about 55kHz is noise.
3) That within the baseband (<24kHz), the noisefloor is above the 16-bit quantization limit.
4) That the musical information only occupies a fraction of the information space, generally a triangle-shaped area with the greatest dynamic range at low frequencies and the smallest dynamic range at ultrasonic frequencies. This is because of the fundamentally self-similar nature of music with respect to its spectral content.
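The "16-bit quantization limit" in point 3 follows from the standard rule of thumb that each bit of linear PCM buys roughly 6dB of dynamic range. A quick sketch of the arithmetic (my own illustration, not from Bob's presentation):

```python
# Theoretical dynamic range of N-bit linear PCM with a full-scale sinewave:
# the standard quantization-noise formula, ~6.02*N + 1.76 dB.
def pcm_dynamic_range_db(bits: int) -> float:
    """Return the approximate dynamic range, in dB, of N-bit linear PCM."""
    return 6.02 * bits + 1.76

print(f"16-bit: {pcm_dynamic_range_db(16):.1f} dB")  # ~98 dB
print(f"24-bit: {pcm_dynamic_range_db(24):.1f} dB")  # ~146 dB
```

Real converters fall short of these theoretical figures, but the ~48dB gap between 16 and 24 bits is what the region below the green line in fig.1 represents.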

The last point ties in with the observed fact that every time the sample rate of PCM is doubled, there may be a slight improvement in sound quality, but the data rate and the storage space required for the file also double. Stereo 16/48k audio has a data rate of 1.54Mbps; 24/96k, 4.6Mbps; and 24/192k, 9.2Mbps. Linear PCM treats every coordinate of the information space as equally important, hence its inefficiency in capturing and transmitting musical data.
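Those data rates are straightforward arithmetic: bit depth times sample rate times number of channels. A sketch of the calculation (my own illustration, not Meridian's):

```python
# Raw (uncompressed) linear-PCM data rate:
# bits per sample x samples per second x number of channels.
def pcm_data_rate_mbps(bit_depth: int, sample_rate_hz: int, channels: int = 2) -> float:
    """Return the uncompressed PCM data rate in megabits per second."""
    return bit_depth * sample_rate_hz * channels / 1e6

for bits, rate in [(16, 48_000), (24, 96_000), (24, 192_000)]:
    print(f"{bits}-bit/{rate // 1000}kHz stereo: "
          f"{pcm_data_rate_mbps(bits, rate):.2f} Mbps")
```

This reproduces the figures in the text: 1.54, 4.61, and 9.22 Mbps, doubling with each doubling of sample rate.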

Bob Stuart then introduced the concept of "Audio Origami": of folding the ultrasonic-frequency components of music back into the baseband so that the data bandwidth could be reduced.

Fig.2 MQA uses the statistics of music information to encode the 4Fs-octave data within a small area in the information space below the 2Fs Nyquist Frequency.

The first step is shown above. The musical information above 48kHz has a very small dynamic range, so despite the required sample rate being 192kHz, the area it occupies in the information-space diagram ("C") is small. MQA encapsulates this data in the small rectangular area around –130dBFS between 24 and 48kHz, an area that, with all real recordings, would otherwise contain random numbers, ie, noise. The result is a 96kHz-sampled file that contains within itself musical data sampled at 192kHz.

Fig.3 The same encapsulation process can be performed on the 2Fs octave data.

But there is no reason to stop there. Fig.3 shows how MQA encapsulates the musical information between 24 and 48kHz sampled at 96kHz ("B") in the baseband information space "A." While the dynamic range of the musical information in this octave is greater than that above 48kHz, the encapsulation uses lossless encoding, resulting in another small rectangular area that can be buried beneath the recording's noisefloor in the baseband below 24kHz.

Fig.4 An MQA 24-bit file sampled at 48kHz contains all the information necessary to recreate a music signal sampled at 192kHz.

Fig.4 shows the final result of this folding and packing: a 24-bit MQA file sampled at 48kHz contains all the musical information of an original recording sampled at 192kHz. You are not getting something for nothing: the data above the baseband is packed, using subtractive dither, sufficiently far beneath the recording's noisefloor, in an area of the information space that would otherwise contain only random numbers, that it has no audible consequences. When this file is played back with an MQA decoder, it unfolds to give the original resolution and bandwidth required to play back the music without loss.
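The folding idea can be illustrated, very loosely, with a toy bit-packing sketch. To be clear, this is my own illustration and not Meridian's actual algorithm (which uses lossless encoding and subtractive dither, and whose details are proprietary); it shows only the general principle of hiding auxiliary data in the least-significant bits of a 24-bit word, below the point at which a 16-bit player truncates:

```python
# Toy illustration of "burying data beneath the noisefloor" -- NOT MQA's
# actual codec. We hide auxiliary bits in the bottom bits of a 24-bit
# sample; a legacy player that keeps only the top 16 bits never sees them.

AUX_BITS = 8  # hypothetical: use the lowest 8 of 24 bits for folded data
AUX_MASK = (1 << AUX_BITS) - 1

def embed(sample_24bit: int, aux: int) -> int:
    """Replace the lowest AUX_BITS of a 24-bit sample with auxiliary data."""
    assert 0 <= aux <= AUX_MASK
    return (sample_24bit & ~AUX_MASK) | aux

def extract(sample_24bit: int) -> int:
    """Recover the auxiliary data (what an MQA-style decoder would unfold)."""
    return sample_24bit & AUX_MASK

def truncate_to_16bit(sample_24bit: int) -> int:
    """What a non-decoding 16-bit player effectively does."""
    return sample_24bit >> 8

s = 0x123456              # a 24-bit audio sample
folded = embed(s, 0xAB)   # bury 8 bits of ultrasonic-band data
assert extract(folded) == 0xAB              # decoder recovers the data
assert truncate_to_16bit(folded) == 0x1234  # legacy playback is unaffected
```

A player that simply truncates the word never sees the buried bits, which is the basis of the backward compatibility discussed later in the article.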

Lossless compression of this file gives a further reduction in data rate, to what I understood to be about 1.5Mbps for a stereo recording, only slightly more than that required for uncompressed CD audio and about twice that required for transmission of a 16/44.1k FLAC or ALAC file (my guesstimate).

The benefit for streaming is obvious. Without any increase in bandwidth, streaming services can offer true high-resolution audio. And as I write these words, the UK service 7digital announced a global strategic partnership with Meridian Audio "adopting MQA to become the first digital music platform provider to offer this high quality audio format for both downloads and streaming."

As an MQA decoder is software and modern DACs are controlled by firmware, there is no obstacle to hardware companies offering MQA decoding other than the need to license the technology. (When I asked Bob about licensing at the Manhattan launch of MQA, he was less than forthcoming on details, saying only that there would be announcements at next month's CES.) But the first MQA-capable decoder to be available is the Mk.2 version of Meridian's Explorer, and I understand that a firmware upgrade for the Meridian Prime that I reviewed last October will allow MQA playback.

But . . . we already have access to high-sample-rate audio via sites like HDtracks. Other than reduced download times, what is the benefit of MQA?

There are two benefits, one commercial, the other concerning sound quality:

Backward Compatibility
If you look again at fig.4, above, you can see the green line that defines the noisefloor of the 16-bit PCM format. All the extra encapsulated information lies well below that limit. So if the file is truncated at the 16th bit, you have the equivalent of a normal baseband digital recording, sampled at 48kHz in this example. To a DAC or player that lacks MQA decoding, the MQA file will play as a normal 16-bit file.

This means that record companies will not have to offer multiple file formats: a single inventory will serve both the general public and audiophiles. Atlantic chairman Craig Kallman was at the Manhattan event and, when I talked with him, was very enthusiastic about this aspect of MQA.

There is an analogy here with the LP record: one inventory that can be played on mass-market players, yet responds with increased quality for those who invest in better players. And MQA intends to guarantee the provenance of the high-resolution masters—no more upsampled CD masters masquerading as "hi-rez."

Correcting the Source
Returning to the triangle of musical information in fig.1, the old question arises: why do we need to preserve and reproduce frequencies above the limit of human hearing, even if we can? Bob spent some time discussing this in his presentation, and it comes down to the fact that the ear-brain doesn't operate merely as a frequency analyzer. Evolution has fine-tuned the system to detect temporal differences that are equivalent to a bandwidth considerably greater than 20kHz, and the anti-aliasing filters in A/D converters and reconstruction filters in D/A converters introduce temporal smearing considerably greater than what our ear-brains are tuned to expect from natural sounds. This smearing is, I believe, responsible for so-called "digital" sound.

The MQA encoder and decoder together have been designed to have a transient response of the same form and order as that of the temporal sensitivity of the ear-brain. And if at the MQA-encoding stage, the temporal effect of the A/D converter can be compensated for, the complete system offers a transparent window into the original musical event. Meridian describes this as "taking an original master further, toward the original performance, in an analogous way to the processes expert antique picture restorers use to clean the grime and discolored varnish from an Old Master to reveal the original color and vibrancy of the work."

Judging by the recordings I heard in Manhattan, some dating back to the early 1950s, I feel the launch of Meridian's MQA is as important to the quality of sound recording and playback as digital was 40 years ago. I have sent Bob the hi-rez PCM masters for some of my own recordings so I will be able to judge for myself the impact the technology has. In the meantime, Jason Serinus's reaction to a recent dem of MQA in California follows on the next page.

COMMENTS
Plato65's picture

This is very interesting. I understand how this allows for smaller files, but where is the improvement in sound quality coming from? How does one compensate for the temporal effect of the A/D converter?

BRuggles's picture

Now all we need is more music to be mastered in such a way as to capitalize on these benefits! As a fan of off-the-beaten-path metal, I find there is a woeful lack of heavy music mastered for people who care a lick about audio. That said, some bands (like Mastodon) take advantage of the opportunity to master for the likely customers; i.e., their vinyl is mastered less hot than their CDs. Maybe these sorts of bands should use that same mix for the high-rez digital masters as well.

Or scrap the overly hot versions altogether...

John Atkinson's picture
Plato65 wrote:
How does one compensate for the temporal effect of the A/D converter?

I understand that mastering with MQA uses Meridian's "apodizing filter" but fine-tuned to the actual A/D converter originally used. According to Bob Stuart, this is possible because 1) there is only a small population of professional A/D converters and 2) record companies actually keep good records on what converter was used for the original sessions and/or mastering. Almost all CDs from the early 1980s, for example, were mastered with one of the Sony PCM1600 family.

John Atkinson
Editor, Stereophile

Plato65's picture

Thanks for the response! I considered that impossible, but I hadn't realized those two points. This seems like an entirely different thing from the more efficient data delivery and raises the further question: If correcting for the problems of the A/D converter gives such a sonic benefit, why isn't this done by the record companies as a matter of course? Were the problems not appreciated before or is Meridian the first one able to solve them? I presume that the problems of the D/A converter are similarly addressed, but there is a much larger number of those. So does this work only with Meridian's own D/A converters?

drblank's picture

It's interesting what really goes on with some of these recordings. Many times, they are told what to do, like use cheap monitors and make it radio-friendly, which means lots of audio compression and limiting, and making it LOUD so it jumps out over the radio. Many times, the mastering engineer makes a variety of differently mastered versions and the producer/artist/record label makes the final decision, so the mastering engineer only does the work; the final decision is someone else's.

I don't listen to modern metal music all that much, but what I have listened to seems VERY compressed/limited, and I hate that type of sound. It's just hard for me to listen to a lot of the recordings, not necessarily because of the music but because of how the recordings sound. I like some of the old rock bands from the '70s, and a lot of their recordings sound like crap because they were pushing the tape mighty hard, and I don't think that kind of distortion can be removed. A lot of these bands record purposely pushing analog tape into heavy saturation, and it sounds like crap by the time we get the recording. It's unfortunate, but there's nothing they can do once it's done.

I would send a note to the bands you like and complain about which version you have that you don't like, and maybe tell them what CD you do like. Maybe they'll do something about it. After all, these bands are trying to get and keep an audience.

Archimago's picture

Okay. While we can argue that perhaps MQA could reconstruct a more accurate representation of the original recording with ultrasonic extensions, how is this "lossless" in the sense of what we normally define as lossless?

The bottom line is that the encoded signal is not the same as, nor can it be decoded to, the original flat 24/96 or 24/192 in FLAC. I suppose one could consider it "lossless" up to 16/48, with what looks like "lossy" psychoacoustic encoding to "fill out" the stuff above 24kHz, based on assumptions of what is worth keeping in those high frequencies. (Basically by borrowing the lowest bits of the original 24 bits and assuming they're "random" - not unreasonable, of course.)

Is this basically what we're talking about?

deckeda's picture

So Jason I guess you're saying that the Metallica song was remastered from the analog tapes (and yet not released for sale ...) with MQA's temporal fix? Otherwise, how could the MQA version sound better than the official 24/96 version? And what about the provenance of this MQA version vs. the 24/96 released in 2011? Same ADC converter, and levels etc?

I find MQA interesting but there remain wide gaps in the discussions of these comparisons.

dalethorn's picture

From what I read here and elsewhere, that's it exactly. But some clarification is needed on 'lossless' in regard to that "random unused" area, just in case ...

SoNic67's picture

It is lossless because that portion is lost by DACs anyway. Every DAC on the market today has an analog limit to its THD and noise. Regardless of whether they are sold as "24 bit" or "32 bit," they barely scratch the 22-bit limit.
If we add to that the THD+N of the actual amplifiers in the chain (headphone or speaker), the -120dB hard limit imposed by MQA seems reasonable.

SoNic67's picture

I am imagining that it is corrected to sound better on a normal delta-sigma DAC by changing the group delay for higher frequencies.

I always listened to my rock albums on an older R-2R DAC because on my SACD player the cymbals sound smeared (by the noise-shaping algorithm I suppose).

JR_Audio's picture

Thank you John for spreading the knowledge about the concept behind this idea. That's what I thought (in basics) it would work, when I first heard about it early December.

I would really like to see a stop of the ongoing higher sample rate race and instead using “intelligent” (acoustically optimized) filters for AD and DA converters,

and foremost a stop in that stupid loudness war, that I hate so much.

Looking forward seeing you at the CES.

Juergen

comp.audiophile's picture

Hi John - I talked to a couple highly respected engineers at CES about MQA. Each of them said MQA was not lossless in the traditional sense but rather the encoding was audibly lossless. Do you agree?

VirusKiller's picture

You may be interested in the following post on the Meridian users' forum in which Bob Stuart graciously allowed me to quote his explanation of why MQA is "10x better". The short of it is his claim that the overall chain impulse response is only 50µs with a leading edge uncertainty of an incredible 4µs, and zero pre- or post-ringing.

http://www.meridianunplugged.com/ubbthreads/ubbthreads.php?ubb=showflat&...

John Atkinson's picture
VirusKiller wrote:
You may be interested in the following post on the Meridian users' forum in which Bob Stuart graciously allowed me to quote his explanation of why MQA is "10x better".
http://www.meridianunplugged.com/ubbthreads/ubbthreads.php?ubb=showflat&Number=226336#Post226336

Thank you very much for the link. I decided to keep my discussion of that part of Bob's presentation short, as it would require much more explanation than was practicable in this short report.

VirusKiller wrote:
The short of it is his claim that the overall chain impulse response is only 50µs with a leading edge uncertainty of an incredible 4µs, and zero pre- or post-ringing.

I believe that this time-domain behavior is responsible for the superb sound quality I heard at the Meridian dem. As I wrote, I have sent Bob some of my own hi-rez files for MQA mastering, so that I will be able to compare the sound of the MQA version both with the original files and with the "Red Book" baseband version on a non-MQA DAC.

John Atkinson
Editor, Stereophile

jimtavegia's picture

My hesitation in buying a USB DAC up to now may pay off with this new technology. Just when I thought there would be nothing new under the sun . . . here it is. And with the Explorer, it will be affordable. You can't beat that.

Thanks for the heads up.

Archimago's picture

But you missed out on years of playing hi-res digital audio up to now? (Maybe you've been playing SACD/DVD-A/SPDIF?)

Seriously though, I don't see how this system would be better than a "flat" 24/96 or 24/192 that we have these days.

The target truly is bitrate reduction for limited bandwidth streaming, not making things sound better than the original 24/88, 24/96 or 24/192, etc... based on the descriptions. We'll know soon enough I guess.

phastphill's picture

I anxiously await results of blind A-B preference tests comparing MQA to uncompressed 24/192, and further to plain old 16/48!

Music_Guy's picture

Do I have it right that an additional implication of the article is that while higher sampling rates provide tangible sound improvements, deeper (24 vs. 16) bit depths do not? Should suppliers be offering 96/16 and 192/16 files? (MQA notwithstanding)

Archimago's picture

Indeed, that does sound like the implication of this coding mechanism.

Assuming the technique uses the full lower 8 bits of a 24-bit file for the high frequency encoding, then 16-bit noise floor is all we get in the audible spectrum (and the implication that this is all we need according to Meridian).

Maybe they can just use the lower 6 bits and give us true "lossless" 18-bits (which certainly the better DACs are capable of) or some combination (maybe take the last 4 bits and allow true 20-bit noise floor challenging even the best DACs)? Wondering if listening tests were used to figure out where the threshold is between what they feel as useful information to keep on the top end frequency wise and what might be of benefit to maintain losslessly in the actual audible spectrum noise floor.

No free lunch whenever one strives for "perfection".

dalethorn's picture

Meridian could do audio a great service in disclosing each and every compromise they make with full details, for each codec they end up licensing.

John Atkinson's picture
Archimago wrote:
Maybe they can just use the lower 6 bits and give us true "lossless" 18-bits (which certainly the better DACs are capable of) or some combination (maybe take the last 4 bits and allow true 20-bit noise floor challenging even the best DACs)?

If you look at the final graph, it does appear that the buried data, which Bob Stuart says is encoded to resemble noise, lies beneath the 18th bit. As many DACs barely achieve 18-19 bits of resolution, this may well be a sonically valid compromise.

This is conjecture on my part, but there is a huge commercial benefit for the record industry in MQA that is not true of FLAC etc: the record company will no longer be selling a duplicate of its master, with all the implications for piracy that implies, but something that might well sound identical to the master yet doesn't allow the master to be recreated. In other words, it is like DRM but without all the hoopla that traditional DRM involves, which, along with the single-file inventory, may well be why record-industry people appear to be supportive of MQA.

John Atkinson
Editor, Stereophile

Talos2000's picture

What is described here (I know nothing more about MQA than what I have read in JA's article) is a plausible high quality compression system. OK, I get that. But it describes a method for encoding the information that is already present in a 24/192 recording and encapsulating it losslessly in a 24/48 bitstream. Given that, the most you can hope to get from it, audio performance wise, is an identical sound to the 24/192 original (or whatever other original format was employed).

Yet that is not what JA and JVS appear to describe. What they claim to have heard is an improvement over the original hi-rez recording. And, in JVS's take, not just a small improvement. Such an improvement cannot result solely from a lossless compression algorithm. It can only arise from active massaging of the input signal, probably assisted by substantial pre-processing of same.

If so, this would amount to nothing less than a "sound-quality-improving DSP process," something that, if correct, would have SERIOUSLY profound implications for the audio industry. This is why it just doesn't ring true to me. You don't generate fundamental improvements of that magnitude (taking JVS's observations as fundamental truths here) by doing the sort of (relatively) simplistic data shovelling described here. At least, I don't think you do.

The other thing that I found irksome was the "Authenticated" aspect of the MQA process. Nothing seems to describe how that authentication would work. Practically, I don't see how it could be anything other than a digital policeman with no teeth. And even if it had teeth, you would never get agreement in this industry as to whose teeth they were going to be.

John Atkinson's picture
Talos2000 wrote:
The most you can hope to get from it, audio performance wise, is an identical sound to the 24/192 original (or whatever other original format was employed).

From the "audio origami" aspect of MQA, yes. That is correct.

Talos2000 wrote:
Yet that is not what JA and JVS appear to describe. What they claim to have heard is an improvement over the original hi-rez recording.

Yes, that is what we experienced. This, I believe, is due to the time-domain performance, where the impulse response of the complete system, from ADC to DAC, had been adjusted to be of the order of the sensitivity of the human ear-brain; see the link in the posting above.

John Atkinson
Editor, Stereophile

Talos2000's picture

Right. So there are two separate issues here. The first boils down to new ADC/DAC technology which is independent of the stream format used to convey the data from one end of the chain to the other. The second boils down to a new compression technology, which is independent of ADC used to create the original PCM bit stream. Neither requires the other, as best as I can tell, based on what you wrote.

The latter of the two has purely commercial ramifications, but the former could be more substantive. Having said that, though, DSD avoids PCM's anti-aliasing (and to a much lesser extent reconstruction) filters entirely, rather than merely optimizing them.

So I perceive MQA as having more commercial than technological import.

John Atkinson's picture
Talos2000 wrote:
DSD avoids PCM's anti-aliasing (and to a much lesser extent reconstruction) filters entirely, rather than merely optimizing them.

It is fair to point out that this is at the price of very high levels of ultrasonic noise. No pain, no gain.

John Atkinson
Editor, Stereophile

Talos2000's picture

Yes, very fair. On the other hand, the ultrasonic noise problem can be addressed with an appropriate choice of reconstruction filter, which could be specified to have an impulse response like those espoused by MQA. And maybe some of them already do.

I know I am harping on here, but my point is to query where the beef is in the MQA announcement.

Jason Victor Serinus's picture

Although it is probably unnecessary for me to say this, I saw neither this article nor any technical explanation of MQA before heading to Audio High, listening, and submitting my observations. I didn't even know if JA would publish my report as submitted, which he has, or integrate my listening impressions into his own story.

As is true of any reviewer for Stereophile, who does not see JA's test results before submitting their review, all I had seen was Meridian's very basic press release and their even more basic MQA website (link above). John had told me that he and Kal had been "very impressed" by the MQA demo in NYC, but I had no idea that he considered MQA a fundamental sonic breakthrough "as important to the quality of sound recording and playback as digital was 40 years ago."

AudioNerd's picture

John,

Thank you for making sense of MQA. I've searched high and low for the past week for more detailed information, but to no avail. Your explanations and graphs helped immensely! I now have a much better understanding of both the compression technique and the source of stated sound quality improvements. I'm looking forward to learning more, especially the comparison of your original vs. remastered recordings and major label and industry adoption plans. I know you'll keep us informed.

Happy Holidays!

ednazarko's picture

As a photographer who shoots several different camera systems, depending on the gig, this is a familiar concept. Each camera system has its own bit depth for raw files (12bit - micro-4/3 cameras, Sony's interchangeable lens cameras, most raw-capable point and shoots; 14bit - Nikon, Canon, etc high end DSLRs; 16bit - medium format digital backs that cost $30K). In all cases, the raw files are 16 bit files, just with those different output bit depths inside. All JPG files however are 8 bit.

For normal snapshots of undemanding subjects, JPG is perfectly fine, which is why most cameras sold are never used to produce raw files. (Raw adds several additional steps to your workflow to get a usable image, each of which you could mess up and end up worse off than if you'd just shot JPG.) Yes, there are compression artifacts in JPG, but you need a very skilled eye to know what to look for, and they're below most peoples' perception for most images.

For raw files, while the pipeline and file format is 16 bit, cameras other than the really high end ones output less than 16 bits of image data. Saves money because it's less processor and pipeline intensive so you can use cheaper stuff, and since most people couldn't tell the difference between 12 bit raw and 8 bit JPG, perfectly fine. Most output media - the internet, computer screens, and all but the really high end printers - use 8 bit rendering anyway.

There are then different compression schemes for raw files, because they're HUGE otherwise. Example: for my Nikon D800E, a 36 megapixel chip, lossless compressed 14 bit raw is about 42mb file size, which produces a 200 mb file when opened in any editing software. Lossy compressed 14 bit is about 36mb. Uncompressed raw 14 bit is 73mb. (Raw files are purely sensor data, and when they're processed into an image, they end up much larger than the raw data file.) Since most people look at files on 8 bit color screens, or print on 8 bit color printers, they're wasting a lot of disk space and effort if they're not using a compressed format of some sort.

I've done comparisons of raw compressed files, both lossy and lossless, and at 12 bit and 14 bit, to uncompressed 14 bit, and there is a difference. (I've got 16 bit printers and 14 bit screens.) Even with my best image processing and output techniques applied, I can tell 12 bit from 14 bit, and lossy from lossless compression. I've tried to find situations where lossless compression is inferior to none, and I've never been able to torture a file hard enough to get a difference.

I need to re-read this MQA explanation a few more times because it's still not clear to me whether what's being done is equivalent to having raw file data turned into the image that's output, or whether it's like what's done with lossless compression, where math is used to store pixel values that occur many times in a single representation, and to store high bit values (precisely) in smaller spaces. But from the first two reads I've done, I'm very comfortable that you can significantly reduce high def audio file sizes with some smart math, in a way that allows the complexity and richness of the larger file to be perfectly preserved and re-produced, and that would allow lower-bandwidth equipment to re-produce excellent quality by ignoring the lossless compressed data. Because I'm doing it every day in my photography workflow.

VirusKiller's picture

To be clear, MQA encodes the data relevant to human hearing losslessly (and far more of it than has been thought necessary in the past). Other data not relevant to human audio discrimination are discarded by the MQA encapsulation in a lossy manner.

ednazarko's picture

There's a lot of data in a raw file that can be expressed in more efficient ways, and that's what happens in lossless raw file encoding. Lossy encoding tends to affect image highlight quality - the high ultra high frequency data equivalent, throwing away info that's not useful to printing and viewing... but may be relevant for extensive image manipulation. It throws away or compresses data that's usually not valuable.

bracondale's picture

So if MQA is adopted, does this mean the end of the road for DSD, or can MQA be used with DSD ?

John Atkinson's picture
bracondale wrote:
So if MQA is adopted, does this mean the end of the road for DSD, or can MQA be used with DSD?

No, there is no compatibility between MQA and DSD. Which should not be surprising, given that Bob Stuart has always been a very vocal critic of the DSD concept.

John Atkinson
Editor, Stereophile

VirusKiller's picture

AFAIK, DSD masters can be encapsulated with MQA.

John Atkinson's picture
VirusKiller wrote:
AFAIK, DSD masters can be encapsulated with MQA.

But only, I believe, if they are first transcoded to PCM. If you look at the spectrum of a typical DSD-encoded audio file - fig.1 at www.stereophile.com/content/super-audio-cd-rich-report-page-2 - and compare it with fig.1 in this report, the enormous amount of noise above 50kHz occupies too much information space to be encoded with MQA so must be discarded.

John Atkinson
Editor, Stereophile

drblank's picture

I'm wondering whether this MQA technology will be available in third-party music-player software, so we don't have to buy an MQA-enabled DAC. Many of us now have integrated DAC/amps with good DAC chips (a lot now use the ESS9018), which can play just about anything; that's why I bought my current unit. If and when they start to release MQA-encoded tracks, I would like to just use a third-party player (Amarra, PureMusic, Audirvana, etc.) to play those files and hear the "improved" sound of MQA, without having to sell my powerDAC. I'm a little upset that my Meridian Direct DAC didn't get a firmware update like their Prime Headphone amp did.

I'm also wondering how recording/mastering studios are encoding this content. Did they mention whether this is sold or given away as some sort of plug-in or app for the studios to use in the encoding process? I haven't seen much of that discussed.

VirusKiller's picture

Some useful information from Bob Stuart on the Hitchhiker's Guide to Meridian at this link.

spuds1001's picture

Hello John,
I know I'm not following the thread, but I was wondering why you haven't reviewed the Meridian DSP 7200, as it was recommended as a future ("Class K") item on your Recommended Components list. In addition, you mentioned it in your review of the 808.2(i) CD player. Are you going to review the DSP 7200 SE or 5200 SE?

Regards,

Brian

John Atkinson's picture
spuds1001 wrote:
I was wondering why you haven't reviewed the Meridian DSP 7200, as it was recommended as a future ("Class K") item on your Recommended Components list.

We included the '7200 in Class K because of Kal Rubinson's experience of it. But he has not yet received samples for formal review.

John Atkinson
Editor, Stereophile

Daveedooh's picture

Does this new technology mean that I can buy a new machine that will play my old CDs in new Supa dupa stereophonic sound, or, does it mean I will have to buy a new machine and new CDs / downloads?
Not everybody's a technoholic!

John Atkinson's picture
Daveedooh wrote:
Does this new technology mean that I can buy a new machine that will play my old CDs in new Supa dupa stereophonic sound, or, does it mean I will have to buy a new machine and new CDs / downloads?

As MQA needs to be applied at the mastering stage in a recording's production, it doesn't improve the sound quality of your existing CD collection. It is really only relevant to downloads.

John Atkinson
Editor, Stereophile

SoNic67's picture

1. Lossless: I get it. Any modern-day DAC, whether labeled 24-bit or 32-bit, is limited on the analog side by THD+N, which barely gets below -124 to -130dB. Even that doesn't matter, because the following elements in the chain (amplifiers, headphones, or speakers) will degrade it further. A -120dB hard limit is more than adequate.
2. Improvements in sound quality: I assume this is due to some kind of group-delay preprocessing meant to compensate for the delay in a certain model of delta-sigma DAC. That is cool, but it might work well only with a DAC that has the same number of FIR filters arranged in the same way.
I always listen to my rock albums (44.1kHz sample rate) via an older R-2R DAC. The cymbals sound less smeared, more distinct and clear than via my SACD player (which has a good delta-sigma DAC). However, the temporal differences seem to become smaller as I go up in sample rate (newer rock albums at a studio sample rate of 96kHz, or digitized LPs).
Maybe that's why crummy DSD (I hate the noise shaping) has become so popular lately: people realized that temporal coherence is more important than the actual number of bits (because anything over 18-20 bits sounds the same).
3. DRM: I feel that this technology will win if some kind of DRM is enabled. That will convince the studio (and streaming) players to accept it and make it a distribution standard.
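For reference, the theoretical dynamic range of ideal N-bit PCM is about 6.02N + 1.76 dB (the standard full-scale-sine SNR formula). A quick back-of-the-envelope check, my arithmetic rather than anything from the comment above, shows why 18-20 bits already lands in the same region as those analog limits:

```python
# Theoretical dynamic range of an ideal N-bit PCM quantizer:
# roughly 6.02*N + 1.76 dB for a full-scale sine wave.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 18, 20, 24):
    print(f"{bits}-bit PCM: ~{dynamic_range_db(bits):.1f} dB")
# 16-bit PCM: ~98.1 dB
# 18-bit PCM: ~110.1 dB
# 20-bit PCM: ~122.2 dB
# 24-bit PCM: ~146.2 dB
```

So a DAC whose analog stage bottoms out around -120 to -130dB is already operating at roughly the 20-21-bit level, which is consistent with the point that bits beyond 18-20 buy nothing audible.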

Ricardo's picture

Yes, true. It is the name of the game. But I am somewhat amazed that you can justify that as a good thing. DRM financially benefits the patent holders, the licensees, and the streaming and record companies, who will surely restrict your use of the music you buy. No doubt they would love to lease you music for each piece of gear you play it on. Heck, just the idea of leasing rather than selling music, for the same price but with playback restrictions, is going to make the record companies punch-drunk. Good for them. But if I'm buying my music, I would rather own it without restrictions of any kind. It is naive to believe that if MQA is widely adopted, it won't become draconian in its DRM restrictions. I wouldn't be so excited about that eventuality. You can argue that it's good for the music business, but I think we can limit those benefits to the equipment manufacturers, record companies, and streamers. The listeners (and, if history teaches us anything, the musicians too) will be left out in the cold.

hansbeekhuyzen's picture

MQA does three things: it provides a transparent channel, offers quality assurance, and reduces the bit rate by leaving out unused code space. Although it is discussed quite often, we humans are incapable of hearing beyond 20kHz; many of us are lucky to hear 14kHz. In audio engineering, that is immediately translated into a bandwidth. But where, in linear techniques, bandwidth and temporal behavior are interlinked, they are not in our auditory system. According to neuroscientific research, our hearing has a time resolution of 10µs, or even 5µs for trained listeners (conductors are mentioned, but I am sure the same goes for audio reviewers). This time resolution is the time from the moment the first flank of a sound reaches our ears until the auditory center of our brain starts interpreting it. The Shannon/Nyquist theorem states that any waveform of a continuous sound up to half the sampling frequency is sampled without loss. But music is not continuous, and timing will be shifted to fit the sampling grid. That's why 192kHz sounds better than 96kHz: it offers a time resolution of about 5.2µs (one sample period). I wrote an article on this and other MQA matters. I don't know if linking to it is allowed, but I trust the link will be removed if it is not. Here it is: http://tinyurl.com/ojuhv6v
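Taking the sample period as a rough proxy for the sampling grid's time resolution (my simplification of the argument above, not a claim from Meridian), the numbers work out like this:

```python
# Sample period (1/fs): the spacing of the PCM sampling grid, in microseconds.
def sample_period_us(rate_hz):
    return 1e6 / rate_hz

for rate in (44_100, 96_000, 192_000):
    print(f"{rate} Hz -> {sample_period_us(rate):.2f} us per sample")
# 44100 Hz -> 22.68 us per sample
# 96000 Hz -> 10.42 us per sample
# 192000 Hz -> 5.21 us per sample
```

On this simple reading, only 192kHz gets the grid spacing down near the 5-10µs auditory time resolution cited above; whether that spacing is the right measure of temporal accuracy is exactly what the sampling-theory debate is about.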

Dcbingaman's picture

In 2014 Meridian released firmware updates for their entire digital lineup that included "apodizing" filters for both the A/D and D/A converters. (For most Meridian products, these are implemented with DSPs.) After performing this update, I ran an experiment, running my phono preamp's output (a Pass XP-15) both directly to my analog preamp (a First Watt B-2) and through my Meridian G68XXD processor, which converts everything to digital and then back to analog through the new apodizing filters. I could not hear a difference between the signals (once levels were matched with my trusty Radio Shack SPL meter).

Thinking my acuity could be off, I invited my geek friends from the Gateway Audio Society, all tube analog fanatics, over to my room and performed the same experiment, (unbeknownst to them). NO ONE could tell the difference. I knew the Meridian was pretty good, but this was a surprise.

I then started listening to CDs and SACDs of the same recording played back via an Oppo 105D feeding LPCM through HDMI to a Meridian HD621 and on to the G68XXD. Again, no significant differences could be heard. I came to the conclusion that Meridian was on to something significant, and that the apodizing filters were at the core of it. According to Bob Stuart, the apodizing filters are designed to preserve time integrity in playback up to an 8-microsecond first derivative of the signal. This is significantly different from any frequency-optimized filter I am aware of, all of which depend on faulty assumptions about the human hearing mechanism. In Bob's model, the ear does not measure frequency per se; it measures impulse amplitude, the first derivative of the change in acoustic pressure on the eardrum. The brain then integrates the ear's measurements into "sound." Many years ago, Pioneer tried to implement a similar concept, called Legato Link, in its CD players, but did not have the technology or the math modeling to solve the problem.

I think MQA is the commercialization of Meridian's time-accurate filtering technologies, combined with a new encoding technique, in a format that other vendors can take advantage of. The reason these apodizing filters have not been noted before is that there are simply so few Meridian systems in use for high-definition audio playback in the United States and Asia. There are a lot of believers in the UK and in Europe, however, who have understood that a better digital playback system has been in use for some time. MQA will allow this technology to be enjoyed on non-Meridian systems. In addition, the publishing of MQA-encoded recordings will allow tailored time corrections in the LPCM data itself for the A/Ds that produced the original digital master. In current Meridian digital playback systems, these corrections are already implemented by the reconstruction filter for the most common A/Ds (Sony) and their timing errors.