dCS Network Bridge network player Specifications

Sidebar 1: Specifications

Description: Network player/Roon endpoint. Inputs: Ethernet, Apple AirPlay, USB 2.0 (data), 2 word clock (BNC). Outputs: 2 AES/EBU (3-pin XLR), for PCM output up to 24-bit/384kHz or DSD128 in DoP when used as dual AES; S/PDIF (coax RCA), for PCM output up to 24/192 or DSD64 in DoP; SDIF-2 (BNC), for PCM output up to 24/96 or SDIF-2 DSD64; word clock (BNC), for PCM data up to 96kHz.
Dimensions: 14.2" (360mm) W by 2.65" (67mm) H by 9.65" (245mm) D. Weight: 10.2 lbs (4.6kg).

Finishes: Silver, Black.
Serial number of unit reviewed: 0052911.
Price: $4250. Approximate number of dealers: 18. Warranty: 3 years, parts & labor, to original owner.
Manufacturer: Data Conversion Systems, Ltd., Unit 1, Buckingway Business Park, Anderson Road, Swavesey, Cambridge CB24 4AE, England, UK. Tel: (44) (0)1954-233950. US distributor: Data Conversion Systems Americas, Inc., PO Box 541443, Waltham, MA 02454-1443. Tel: (617) 314-9296. Web: www.dcsltd.co.uk.


COMMENTS
dalethorn's picture

"Both sounded beautiful, but the Bridge software again delivered clearer sound, with sharper highs, more saturated colors, and maximal liquidity and transparency. Roon's sound was smoother, with more apparent emphasis of the midrange and bass............Swami Serinus predicts: The more files you've got, the more you'll use Roon."

So, ummm, how does one get the best sound and still manage the files? I have methods that work very well on a Mac using any of several players, but the above reads like a "don't go there".

"You can choose among three Mappers, two of which are new. Hint: New Mapper 3 sounds a mite warmer, softer, more analog-like; new Mapper 1 offers sharper lines and more color contrast."

And this Mapper thing, which I'm sure is really great, seems to say once again "you can't get the best sound without another compromise." I wonder whether this inner-DAC mapping will become some kind of future standard, be more automated, make better choices for the user who doesn't want to fiddle with it....

spacehound's picture

they are generally 'level headed' and straightforward.

Right, to the Bridge and the Vivaldi.

I wish they wouldn't DO this 'alternative mapper' stuff.
I don't want "a mite more warm" or "sharper" - I want ACCURACY to the source. THAT'S what High Fidelity MEANS.
And the source is a CD, file, or whatever, that I have purchased. If I want it 'nice' or 'to my taste' I can get that for a tenth of the price, but it ISN'T 'high fidelity', which is what we pay dCS's prices for.
(It's the same with adjustable suspension on cars - they can't make up their minds so they leave it to us to fiddle about with 'adjustments'.)

If you don't like how it sounds just buy a different recording, don't deliberately distort the sound with switches to suit your 'preferences'.

And I do NOT agree with Jason's comment that the bridge will improve the sound above a direct DAC USB input - UNLESS that input is rubbish compared to the other inputs, which is rare nowadays. Particularly so on a DAC of the price that a dCS Bridge buyer is likely to have purchased.
Given a half-decent DAC USB input, it can't. It's impossible. An extra box in the chain cannot improve what wasn't there to begin with; it can either do nothing or add distortion and/or noise. Nothing else.

scottsol's picture

Your criticism of the mapper choices is based on the false assumption that perfection is an option. It isn't. While I don't know the specifics of the mapper differences, your statement could just as well apply to dCS's multiple filter choices. Here, for example, it is not possible to simultaneously optimize amplitude response, time response, and elimination of out-of-band noise. Consequently there is no most accurate filter, just options with different compromises.
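
To make the compromise concrete, here is a toy sketch (Python with numpy/scipy; the tap counts and cutoff are illustrative, not dCS's actual filters) comparing two linear-phase lowpass filters built to the same cutoff: the steeper one needs a long impulse response and rings far longer around a transient, while the gentler one rings briefly but rolls off earlier. Neither is 'the accurate one'; they trade frequency response against time response.

    # Toy comparison of filter compromises: steep vs gentle lowpass.
    # Assumes numpy and scipy; parameters are purely illustrative.
    import numpy as np
    from scipy.signal import firwin

    sharp = firwin(511, 0.46)    # steep transition: long impulse response
    gentle = firwin(31, 0.46)    # gentle transition: short impulse response

    def ring_length(h, floor=1e-4):
        """Samples from the peak until the response decays below 'floor'."""
        env = np.abs(h) / np.max(np.abs(h))
        return np.nonzero(env > floor)[0][-1] - np.argmax(env)

    print("post-peak ringing, sharp filter :", ring_length(sharp), "samples")
    print("post-peak ringing, gentle filter:", ring_length(gentle), "samples")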

You are absolutely correct that the USB source shouldn’t make a difference. And yet, as Galileo would say, it does.

spacehound's picture

And, not being 'facts', opinions aren't right or wrong. But below I have tried to put at least some facts in.

Of course perfection is not reached. But the flatter the frequency response over the range of the human ear the nearer we are. Anything outside that range doesn't matter. (Also see below.)

Time (and phase) responses are simply another 'mathematical' way of saying the same thing. EG - if it goes to 20KHz the quickest rise time it can manage will be 1/20,000th of a second from 'silence' to full output. And phase accuracy is based on an equally 'direct' relationship to frequency response.
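
For reference, the usual single-pole engineering rule of thumb puts the 10-90% rise time at about 0.35/f_c rather than exactly 1/f_c, though the inverse relationship between bandwidth and rise time is the same. A back-of-envelope check (Python, illustrative numbers only):

    f_c = 20_000                 # -3dB bandwidth, Hz
    t_r = 0.35 / f_c             # standard 10-90% rise-time rule of thumb
    print(f"10-90% rise time for {f_c} Hz bandwidth: {t_r * 1e6:.1f} microseconds")
    # ~17.5 us, the same order as the 1/20,000 s (50 us) figure above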

And of course we don't have a clue how the record producer intended it to sound anyway, so we have no 'standards'. "I like this sound best" isn't a standard at all, SO, leaving out total junk, and IF we are interested in high fidelity we have no way of telling good from bad unless we MEASURE accuracy.

And of course, as dCS and most others do not specify an output frequency response above 20kHz REGARDLESS of the sample rate, any analog high frequencies resulting from sample rates above the 'CD' 44.1 are a total waste of time and effort, as they aren't coming out of the DAC (or at best are way down). And that of course includes the result of MQA 'unfolds'.

supamark's picture

"...Time (and phase) responses are simply another 'mathematical' way of saying the same thing. EG - if it goes to 20KHz the quickest rise time it can manage will be will be 1/20.000th of a second from 'silence' to full output..." is incorrect.

You're assuming a rise time like in an analog circuit. Going from all zeroes to all ones in a single sample would essentially be instantaneous - a vertical line with no rise time. The D/A reconstruction algorithm and analog output section will dictate the actual rise time of the output. Also, no natural sound has an attack that is literally instantaneous (i.e., zero rise time) so the point is moot.

Oh, and the frequency (so long as it's below Nyquist freq) doesn't matter for your thought experiment - the reconstruction algorithm will output a sine wave starting from the positive apex (full output) whether it's at 5 Hz or 20kHz. It would only matter if your point was that there are minute timing differences between the analog output and the original analog input that increase with frequency due to the reduced number of samples taken by the A/D converter with rising frequency... which maybe is where you were trying to go?
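
That reconstruction behavior is easy to demonstrate, by the way. A minimal sketch (Python with numpy/scipy; the tone and window length are chosen for convenience): a 20kHz sine sampled at 44.1kHz, barely more than two samples per cycle, comes back as a clean sine under band-limited (sinc) interpolation, because reconstruction is not connect-the-dots.

    # Demonstrate band-limited reconstruction of a near-Nyquist tone.
    # Assumes numpy and scipy; N is chosen so the tone fills a whole
    # number of cycles, keeping the FFT-based resample exact.
    import numpy as np
    from scipy.signal import resample

    fs = 44_100            # CD sample rate, Hz
    f = 20_000             # test tone just below Nyquist (22.05 kHz)
    N = 441                # 200 whole cycles of the tone
    samples = np.sin(2 * np.pi * f * np.arange(N) / fs)   # ~2.2 samples/cycle
    fine = resample(samples, N * 16)                      # sinc upsampling
    ideal = np.sin(2 * np.pi * f * np.arange(N * 16) / (fs * 16))
    print(f"max deviation from an ideal sine: {np.max(np.abs(fine - ideal)):.1e}")
    # on the order of 1e-13: the sparse 'dots' carry the full waveform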

scottsol's picture

So you would never accept a 1dB drop at 20K in trade for a significant reduction in ringing?

spacehound's picture

It's how it is.

And if we can't hear the 20KHz (which only very young people ever can) we won't hear the much lower level ringing either.

And our ears aren't 'flat' anyway, our hearing at or near 20KHz is much reduced. If the output power was increased so we heard 20KHz at the same loudness as we heard it at (say) 8KHz it would result in permanent hearing damage.

BTW: I thought some more about 'perfection'. I'm not going to edit my previous post to reflect my new thoughts, so here goes:

We must accept that on their original 'mapper' dCS did the best they could.
And from Jason's review these two new ones are 'warmer' or 'sharper'.
So they are alternatives, not a step nearer 'perfection' - as I said, we don't know what it was SUPPOSED to sound like. Nor do the people at dCS.

scottsol's picture

On that basis we have no reason not to think that all three may be flawed and we certainly have no reason to think that being warmer than the original mapper makes it less accurate, as that implies that we should use the original, imperfect, mapper as a reference.

So what is the problem with allowing the listener to choose among the three flawed mappers the one that sounds best on their music system?

spacehound's picture

"...So what is the problem with allowing the listener to choose among the three flawed mappers the one that sounds best on their music system?"

1) If we have the slightest interest whatsoever in 'High Fidelity', which is presumably what we are all here for, accuracy to the original recording is all that matters and what we pay for.

2) As we CANNOT know how it is supposed to sound unless we were present in the studio, your "best in their music system" simply means "I personally like it". So it has nothing to do with perfection, lack of perfection, or anything much else.

I'm not a 100% 'objectivist', far from it. But the only way we can tell whether it's accurate to the studio or not is to measure it. If everything we can measure is 'flat', it likely will be.

Adding 'warmth' or 'sharpness' (Jason's words) moves it away from that flatness.
And it will occur on ALL recordings, so can't possibly be accurate.

allhifi's picture

SH: You make a valid point:

" ...If we have the slightest interest whatsoever in 'High Fidelity', which is presumably what we are all here for, accuracy to the original recording is all that matters and what we pay for."

Great point. For if one wishes for 'colorations', such equipment can be found cheap.

Not so sure about this one:

" ...As we CANNOT know how it is supposed to sound unless we were present in the studio ....."

Ultimately, you'd be right. BUT, we can (by deduction) use the artist's/recording engineer's logic to figure out what they wished to present front and center: voices, guitar (voice/guitar), whatever is propelling the song/carrying it. All other instruments would likely be expected to sit/fall behind (in a layered fashion), naturally shifting in importance as new elements/instruments (or sound effects) enter the music picture.
I feel that's reasonable - and logical.

And with this comment, you are WAY off:

" ... But the only way we can tell whether it's accurate to the studio or not is to measure it ...".
What precisely would you like to "measure" to ensure accuracy?
(Beyond a loudspeaker's 20-5kHz frequency response (preferably 20-20kHz) 'accuracy'.)

And, similarly:

" ...Adding 'warmth' or 'sharpness' (Jason's words) moves it away from that flatness..."

The 'perception' of "warmth or sharpness" can easily be massaged into a system's SQ by (for example) cable selection - with nary a blip in any known/used measurement to "show/define" such a change.

pj

supamark's picture

I'm 50 and can still hear out past 16kHz (seems to drop off sharply above 17kHz since I can still barely hear that).

Regarding what a (non-classical/acoustic) recording is "supposed" to sound like, ask the mix engineer because you'll likely never hear an original mix (pre-mastering) of a major label pop/rock recording.

Here is an example of a track I mixed; the 1st link is to the original 2-track mix, the 2nd to the final released version. Recording notes are included with the 1st youtube link. This was early in the "loudness wars", but the final version is hard limited (clipped) and a lot louder, whereas the original mix had no mix-buss compression.

1st linky, original mix:
https://www.youtube.com/watch?v=mPx7A2Q4n-s

2nd linky, released version:
https://www.youtube.com/watch?v=ub-tHAIVV3E

spacehound's picture

They are different, obviously (for example, the "Ready lovey" at about 0:12 on the second one sounds like a 1960s BBC announcer, whereas the first one doesn't :))

But so what?
I'm hearing them both on MY equipment; you heard them both on YOUR equipment, in the studio or wherever. I don't know what YOU heard. But whatever it was, the second one is the 'correct' version, as it is what you released.

So I have no way of telling whether my equipment is accurate (equals 'HiFi') or not, have I? I don't know what YOU heard. But I CAN measure it, and if it is 'flat' across the audio range then it likely will be.

So judging equipment by listening is totally pointless. All it can tell you is whether you personally like it or not. Which isn't a 'good' or 'bad' judgement at all as a 'reference' does not exist.

The ONLY way to get a judgement by listening is to have several record producers sitting on my sofa, play several recordings made by each of them, and if they all say "That's what I intended it to sound like" the equipment is accurate (again, equals 'HiFi').

And neither Jason nor I can do that.

supamark's picture

The first one is the one I mixed with the band, and is in fact the one that the band's members approved and sent to their label. Their label, Pony Canyon in Japan, mastered it without them (or me) being present or having any input. Almost no band has any control over the sound of their record once it leaves the studio and goes to mastering; the producer usually has some, but the label is the final arbiter. I mean, do you think Adele really wanted her latest album to have such horridly, painfully shrill sound quality (for example)?

My mix has no clipping (but hit -0.0dB so I used all the bits lol), and very little compression (none on the overall mix, just on kick/snare/toms/bass gtr and *maybe* very light on the horn section individual mics which were mixed in with a stereo pair of B&K 4006 omnis and printed to 2 tracks) while the mastered version has a LOT of clipping from hard limiting and compression (avg level is 7dB higher). They also made it brighter, which wasn't needed or wanted (16 bit, 48kHz for tracking, 16 bit, 44.1kHz mix - making it brighter makes it sound a lot more "digital" which is NOT what I or the band were going for). The added compression also screws up the dynamics and some of the horn crescendos.
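
That difference is easy to see numerically, for what it's worth. A toy sketch (Python/numpy, with synthetic signals standing in for the two versions; the numbers are illustrative, not measurements of these files): hard limiting pins samples at full scale and lifts the average level, which is exactly the loudness-wars pattern described above.

    # Toy illustration of what hard limiting does to clipping and level.
    import numpy as np

    def stats(x):
        clipped = np.mean(np.abs(x) >= 0.999) * 100        # % samples at full scale
        rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))   # average level, dBFS
        return clipped, rms_db

    t = np.linspace(0, 1, 48_000, endpoint=False)
    mix = 0.5 * np.sin(2 * np.pi * 220 * t)       # original mix: headroom, no clip
    master = np.clip(mix * 4.0, -1.0, 1.0)        # pushed into a hard limiter
    for name, x in (("mix   ", mix), ("master", master)):
        c, r = stats(x)
        print(f"{name}: {c:5.1f}% at full scale, RMS {r:6.1f} dBFS")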

As long as you are listening on a decent (Stereophile class C or better level) system (and I assume as a reader of Stereophile you are), I guarantee the first one is going to sound as "correct" as a compressed Youtube file can (should be at 320kbits, which is what I uploaded) - I've listened to that original mix on 30+ systems and it sounds pretty much the same on all of them (bass extension and soundstage size being the main differing factors). That, by the way, is how pop/rock mixes are judged before sending to mastering - does it "translate" or not on a variety of systems from cheap boom-box to high quality hifi system? If so, you're good; if not, back to work.

FYI "ready Lovey" is a sample of the character Thurston Howell the 3rd from Gilligan's Island and sounds correct on the original mix (it was a stereo backing track composed by the drummer on an Ensoniq keyboard and printed via a tube mic pre/DI to tracks 15 and 16 on the multitrack and also served as the click track). Most overdubs, and the drum overhead mics, were recorded through a Demeter VTMP-2b tube mic pre (man that was a sweet mic pre).

I've also got another track I'll be putting on my youtube page (different band and genre) when I get around to it that I recorded, mixed, AND produced, and that wasn't altered in mastering. It's going to be part of a contrast between a track I recorded for them but they self-produced and the one I produced, as a sort of demonstration of what a producer brings to the recording process (hint: mostly quality control).

spacehound's picture

...you are completely missing my point.

Whether it be the 'producer', the 'mastering engineer', the band, or the f*** cleaner who is finally responsible for what goes out the door,
what we buy and listen to is 'what goes out the door'.

And the ONLY way we get to hear it is on OUR gear.

I heard your two examples. Sure they sounded different. But I heard them through MY equipment.

So as I said, I have no way of telling whether that 'recording' that came out of the 'music factory door' sounds as 'those in charge', WHOEVER they may be, intended.

AND A REVIEWER HASN'T EITHER.

Cheers :)

supamark's picture

You seem to have lost my original point:

Quote:

Regarding what a (non-classical/acoustic) recording is "supposed" to sound like, ask the mix engineer because you'll likely never hear an original mix (pre-mastering) of a major label pop/rock recording.

The 2 versions of the track I posted were to illustrate this point by showing how much the sound (artist's original vision) is changed in the mastering process.

by the way, when I say a mix "translates well" I mean that all the essential elements of the mix (overall tonal character, relative levels between instruments and how they're placed on the soundstage, the tone and texture of the instruments/vocals, and the effects like reverb/chorus/delay/etc used) sound as intended. As you noticed, all those things changed between the two versions, which goes to my point that you'll never hear the real deal as the artist intended unless you have access to the original (pre-mastering) mix. Nirvana's "In Utero" would be a famously extreme example of the artist/producer's vision being massively altered by the record label during mastering. Based on who mixed it (Steve Albini) and what I've heard about the mix itself, the original mix (minus the 2 tracks remixed by another guy) was probably unlistenable for most people.

Interesting (to me, maybe you too) aside - until the Genelec 1031A nearfield gained popularity in the early 90's, professional monitoring systems were not very accurate, so you had to "learn" the speaker's sound and compensate for it when mixing, and you had to do it in every control room you worked in unless you brought your own nearfields (you still had to learn the sound of the control room and main monitors). Main monitors were (and I think mostly still are) 2-ways with a horn tweeter and one or two 15" mid-woofers, not unlike AD's beloved Altecs (ole'), but most use TAD drivers now.

spacehound's picture

But I have not lost your original point; it's just that all this stuff is irrelevant to my point, which is that we buy whatever comes out of the factory door and don't know what it's supposed to sound like.

And it looks like you and your colleagues often don't either :)

Maybe I should not have used "supposed" but "the final result". Either way, the end user (me, Jason, whoever) has no way of telling by 'listening' whether our gear is accurate (equals good) or not.

Much of it USED to be accurate, but today, particularly in the USA, so-called 'high end' gear seems to have given up 'accuracy' for 'impact', 'blowsiness', and so on. It all started with Audio Research amplifiers, which sound as if the company deliberately set out to make them 'impressive'. Krell in its early days did much the same.

As for pre-mastering, mastering, and PARTICULARLY later remastering and remixing, much of it is just an ego trip on the part of whoever is doing it.
It's just some nonentity incorrectly thinking he can do better than (for example) the Beatles stuff that came out of the door and which we all bought at the time. HE CAN'T.

Interesting that you mention TAD drivers. I suspect it's all 'fashion driven' as in the HiFi world Wilson and Magico are now somewhat 'passe' and the 'true cognoscenti' are all buying TAD speakers :)

supamark's picture

have been popular in soffited (mounted in the wall on concrete blocks/slabs) studio monitors for at least 30 years, mainly their 15" mid-woofers and compression drivers, which are mated to custom horns and x-overs - nothing like their consumer stuff. Tannoy was also popular for years with their 15" dual concentric drivers (and their 10" dual concentric was a legendary nearfield in the 70's and 80's).

In the days before accurate near field monitors (so, before like 1985-1990), the mastering engineer's main job was to use his much better playback system/room and very high quality EQ's and compressors to make sure the tape sent to the pressing plants actually sounded like what the engineer/band/producer had in mind.

The 90's were a period of continual rapid changes in both technology and successful business models in the recording industry (similar causes - tech - but different solutions than the record labels had).

Now it seems their main job is to make whatever they receive sound as loud as possible, especially if they're "remastering" anything, making it unlistenable on a good system (but less bad on, say, a car stereo or cell phone).

As for whether your system is giving you a good approximation (at least for pop/rock/R&B-type music) of what was recorded: if it's pretty flat from ~40Hz to ~18kHz, not dynamically compressed at the volume you like to listen at, and images at least okay (like, a mono signal is pretty solidly in the middle), you'll get a pretty good idea of what was intended, because that's pretty much what most records of a non-audiophile variety are mixed on. Oh, and use a decent modern DAC if you're using digital. And if your system is *too good*, you're likely to hear things that were NOT heard in the studio (more likely as the original recording gets older), which I always find interesting, but maybe not you. For classical/acoustic, only good surround sound puts me anywhere near "there".

allhifi's picture

Congrats on your healthy hearing/HF extension. No, I'm not kidding.

However, there is most definitely some type of (erroneous) belief that a listener must have 10kHz+ hearing sensitivity in order to be some type of arbiter of sound - or indeed required to even enjoy it. Bollocks, I say. (Is that the English term? lol)

High precision/fidelity sound reproduction lies FIRMLY in/at the very low frequencies -upon which all other music/instrument clarity hitches a ride.
So, not that it was suggested (I'm doing a preemptive strike -lol) but there appears to be this false narrative about high-frequency hearing capability and sound enjoyment correlation. There really isn't.
Mind you, it is desirable (and very likely significant) that we hear clear to 8-10 KHz (or, naturally higher) but the notion it's a prerequisite for anything (other than an interesting hearing characteristic) is a fallacy; it has limited meaning in the pursuit or enjoyment of high-fidelity sound reproduction.

pj

spacehound's picture

Ringing doesn't occur at all unless we 'break' the Nyquist/Shannon rule.

Which recordings don't, due to the use of filters on the analog sound input. Even fake high-resolution files produced by upsampling (a common practice) don't, as the claimed higher frequencies in such fakes are not actually there.
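
That claim is at least checkable: a genuinely hi-res recording shows some spectral energy above the 22.05kHz CD band, while an upsampled fake shows essentially none. A toy sketch (Python/numpy, with synthetic tones standing in for decoded files):

    # Crude 'fake hi-res' check: fraction of energy above the CD band.
    import numpy as np

    def energy_above(x, fs, cutoff=22_050.0):
        """Fraction of spectral energy above 'cutoff' Hz."""
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return spec[freqs > cutoff].sum() / spec.sum()

    fs = 96_000
    t = np.arange(fs) / fs                                  # one second
    fake = np.sin(2 * np.pi * 10_000 * t)                   # nothing above 22.05k
    real = fake + 0.05 * np.sin(2 * np.pi * 30_000 * t)     # genuine ultrasonics
    print(f"upsampled fake: {energy_above(fake, fs):.2%} above 22.05 kHz")
    print(f"true hi-res:    {energy_above(real, fs):.2%} above 22.05 kHz")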

Anton's picture

This bridge thing sounds like a 'pre amp' in the digital realm.

It takes multiple inputs and then sends them to the DAC.

But, if I have a DAC that accepts multiple inputs, why do I need a bridge?

I have a Mac that talks to my Meridian DAC and they seem to get along just great. What would a bridge do?

Is it really just a fancy switching device for choosing between inputs?

Why would I need this thing getting in the way between my SACD/CD transport and my DAC?

It sounds like a variation on the old Audio Alchemy DTI Pro line.

It strikes me as another item that is designed to insinuate itself into the signal chain and make things more cumbersome.

I am happy to stand corrected if anybody knows why this device has utility! Does it 'translate' signals so the DAC can understand them better?

spacehound's picture

Where the USB input is not noticeably inferior to the other inputs (which is true of most modern DACs), the Bridge won't make any difference; it just becomes a switch.

Why? Because no matter how expensive or 'good' it is, it can't make the input signal 'better'. Not even the much-worried-about 'jitter' - that is made as low as it can be by the DAC clock, not anything preceding it. So provided the 'jitter' is within reason (IE - the previous box is not broken), any 'jitter' in the incoming signal doesn't matter.
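
To put rough numbers on 'within reason': the worst-case amplitude error a timing error t_j can cause on a full-scale sine of frequency f is about 2*pi*f*t_j, so once the DAC's own clock is reclocking the data, even modest jitter figures push the error far below audibility. A back-of-envelope sketch (Python, illustrative figures):

    # Worst-case amplitude error from sampling-clock jitter on a sine.
    import math

    f = 20_000                            # worst-case audio frequency, Hz
    for t_j in (1e-9, 100e-12, 10e-12):   # 1 ns, 100 ps, 10 ps of jitter
        err = 2 * math.pi * f * t_j       # peak error relative to full scale
        print(f"{t_j * 1e12:6.0f} ps jitter -> worst-case error "
              f"{20 * math.log10(err):6.1f} dBFS")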

Remember - WHY DID DCS MAKE THIS 'BRIDGE'? Simply to provide a lower cost alternative to their 'upsampler' (which includes the same additional inputs) for those who didn't want or need upsampling.

Basically it's for their Vivaldi and their older DACs. It's pointless putting it in front of the Rossini, as Jason implied. Or any other 'good' DAC that already has an Ethernet input.

It's not made to 'IMPROVE' anything, though it may do so on other makes of DACs which have a noticeably poor USB input. Like the original and long obsolete one hundred and fifty dollar Cambridge Audio 'Dacmagic' for example :)

Jason Victor Serinus's picture

Much of what you say, including dCS's motivation for manufacturing their Network Bridge and the statement that I imply that it's pointless to put the Network Bridge in front of the Rossini, is incorrect. What is correct is in the review.

spacehound's picture

....."but with any DAC. This meant I was forced to endure several months with the state-of-the-art Vivaldi as a replacement for my reference dCS Rossini".....

That's how I read the above. Perhaps I misinterpreted it. You wrote it, and you know what you meant. So I accept that point.

As for correct and incorrect, my opinion is as valid as yours. Writing for a magazine doesn't convey any 'special expertise'.

And you actually QUOTE Mr Quick as saying this:
"The Bridge is a great option for someone who wants to bring network connectivity to a Vivaldi DAC but doesn't want to spend $22,000 on a Vivaldi Upsampler"

Which agrees 100% with my comment. He doesn't say anything about 'improved quality'. His 'quality' comment is about the new 'mappers' in the Vivaldi, it's not about the Bridge.

Jason Victor Serinus's picture

Of course, I did not quote any claims that John Quick made for the product as though they were fact. The whole point of the review was to explore whether the claims made for the product jibed with my experience of same.

The reason that network connectivity was moved from the Vivaldi DAC to the Vivaldi Upsampler was to avoid noise contamination. This is a potential sonic advantage that the Network Bridge offers for any DAC, even one that has full built-in USB and ethernet connectivity.

spacehound's picture

I think your review is good. It is thorough and comprehensive.

Regarding dCS's claim about noise, it may be true or it may not be. It strikes me as somewhat far fetched, but that's an opinion. Opinions are not facts so they aren't vulnerable to 'truth tests'.

I suspect the reason is to provide Vivaldi buyers with another option - THAT's what you quote John Quick as saying. We readers cannot comment on anything you may have chosen NOT to quote, obviously.

What do I use? (I post this as it is relevant to networking and noise, which is largely what the Bridge is about)

I started with a Cambridge 'Dacmagic'. The USB input was poor. I later replaced it with their 'Stream Magic', which has a better DAC and, unlike most streamers, a USB input too, so I can attach it 'direct' to my PC.

But it still wasn't to my complete satisfaction so I used its 'bypass' function and replaced its internal DAC with a dCS Debussy. (Personally I don't think small upgrades are worth doing as they interfere with the 'money collection' for a big upgrade.)

As for the networking, that's all 'digital', and provided it's bit perfect (see the sketch below) I think a low cost networking box is as good as a high cost one. Which explains my great disparity in cost between Cambridge and dCS.
Now I have replaced both with the Rossini. But I won't be adding anything else in front of it. Simply because I can't hear any noise, even at full volume with JRiver's 'play silence' function, with an ear right against the Tannoy 'dual concentric' speaker cone.
The Rossini is good on TV sound, and on the BBC's streamed 320k AAC (and occasional streamed 44.1 FLAC) Radio 3 and 4, too. It's now my 'hub' for everything :)
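
'Bit perfect' is testable rather than a matter of faith, incidentally: if the bytes delivered at the endpoint hash the same as the source file, the network path has added and removed nothing. A minimal sketch (Python; both file paths are hypothetical placeholders, the second standing for a capture made at the endpoint):

    # Verify bit-perfect delivery by comparing checksums.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    source = sha256_of("nas/track01.wav")        # hypothetical source file
    capture = sha256_of("capture/track01.wav")   # hypothetical endpoint capture
    print("bit perfect" if source == capture else "altered in transit")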

whbg's picture

Thanks for the review of the dCS Network Bridge. I thought I would add a few observations based on my own experience with that device, which I have now owned for several months. Prior to obtaining the NB, I was running a Baetis Reference server into a Berkeley Audio USB converter, using the S/PDIF out into an Esoteric D02 DAC. This combination was notably superior to using the USB input on the D02. I was intrigued by the NB because it offered (i) dual AES/EBU output (which would take advantage of the dual AES/EBU input on my DAC), (ii) the ability to sync to an external master clock (I have an Esoteric G01 clock), (iii) a more robust power supply than the Berkeley, (iv) Ethernet input, and (v) a more current design as compared to the now nearly seven-year-old Berkeley design.

I can say, without reservation, that substituting the NB for the Berkeley USB converter, using the Ethernet out from my Baetis server into the NB, and the dual AES/EBU outputs from the NB into the D02, and syncing the NB to an external clock which also serves as a master clock for the Esoteric D02 DAC, has made a very dramatic improvement in the sound of my front end. While I have not done the research to be able to say so with certainty, my intuition tells me that getting the most out of the NB requires leveraging its dual AES/EBU outputs as well as its ability to be slaved to an external clock, both of which require a DAC that has compatible features.

Chasing the latest in DAC technology, especially at retail, in order to get the capabilities of the NB built into one's DAC, is a very expensive pursuit. However, when combined with judicious purchases in the secondary market of a DAC with dual AES/EBU inputs and an external clock (e.g. my DAC and clock can be had for around $13k, but were $42k new), the NB provides an amazingly cost effective way to get within spitting distance of current state of the art in a digital front end, for pennies on the dollar.

allhifi's picture

whbg: Very nice reply.

It's particularly insightful when one has experience with/without the NB in various combinations - as you clearly have.
I too see the validity and superiority of both the gear and the SOTA 'cabling' options your set-up (components) provides.

Yet when you say:

" ..I can say, without reservation, that substituting the NB for the Berkeley USB converter, using the Ethernet out from my Baetis server into the NB, and the dual AES/EBU outputs from the NB into the D02, and syncing the NB to an external clock which also serves as a master clock for the Esoteric D02 DAC, has made a very dramatic improvement in the sound of my front end... "

I completely appreciate.

However, I must wonder (playing devil's advocate momentarily) about this:

" ..Prior to obtaining the NB, I was running a Baetis Reference server into a Berkeley Audio USB converter, using the S/PDIF out into an Esoteric D02 DAC."

I suspect the D02 does NOT have a USB input?

And, if a 'coax' (S/PDIF) connection to the DAC was used, the best digital connection was NOT employed. Ideally, a dedicated 'WC' input or I2S (HDMI/RJ-45) would (I'd suspect) make for a MUCH better digital 'hand-off' to the DAC. As would an AES/EBU digital cable.

At this point (over/done with), it's irrelevant; you have a brilliant SOTA set-up - and are clearly enjoying it. I suspect most would!

But I do seriously wonder about the potential SQ improvements of your previous set-up with a different/superior digital cable?

One thing I have experienced repeatedly is the absolutely delightful SQ improvements when digital is powered by a Balanced/Symmetrical AC power supply - or a quality AC '(Re)Generator'.

pj

Ali's picture

How about directly connecting a Roon NUC to the Vivaldi via USB, and comparing it to NUC to router to Bridge to Vivaldi with a network cable?

scottsol's picture

As a switch, the Network Bridge is woefully lacking. It has only two inputs, an Ethernet (or WiFi) and a USB Type A. You can't hook it to your disc transport even if you wanted to, nor can you hook up your computer via USB; that requires a Type B connection.

The NBR allows you to listen to music stored on a USB drive or network drive, or streamed from the internet. In essence, it obviates the need for a computer in order to play digital files. Conventional DACs do not have this capability.

Even if your DAC has network connectivity, there is a good chance that using the NBR will give superior results to the circuit built into your DAC. The idea is similar to using a top-flight external phono preamp even if your preamp has one built in.

spacehound's picture

As for your 'superior results', that assumes that a 'high street' computer is inherently inferior to something made by a 'HiFi' company.

The REALITY is that when computers and their peripherals (such as a NAS, USB stick, etc) started to be used as a source for home audio the HiFi companies were left out in the cold.

Had they done nothing they would have been stuck with making turntables, FM tuners, CD players, and tape decks as sources. And we all know what is happening (or has happened) to them.

So, DESPERATE, they invented the 'streamer' which is merely a 'limited' computer in a 'HiFi looking' box. (Some of them even run a version of Windows).

And tried to convince us they were superior. Which many audio enthusiasts fall for.

Also, most of these companies were totally clueless about computers. They were, and are, mere 'users', not computer experts (I don't include dCS in that, looking at its background). And many, judging from the 'technical computer stuff' they say in their blurbs or on forums, still are.

So in most cases, the concept that 'streamers' are inherently superior to home computers for audio is a complete con. (I don't include DACs in that comment.)

supamark's picture

for my network/streaming/etc source. I can even run it on battery power to avoid added line noise (my apt. wiring is.... sub-par).

spacehound's picture

I'm in the UK and I live in the 'countryside', so the supply probably doesn't have much noise. I tried it and didn't hear any difference.

Living in a town and in an apartment you probably will. And it will affect the 'HiFi' gear too.

But on 'streamers' we don't usually have that choice. (Though it is possible to 'DIY' by bypassing the transformer/rectifier totally and using a suitable battery connected to the 'smoothing' part of the power supply. Not many are going to do that.)

As I said, they are just a desperate response by the HiFi business, most of which is clueless about computers, to "The Rise of the PC" :)
And a pretty poor value response at that.

I don't use a laptop now; I use a big 'tower' PC in the next room (but only about five feet away from the 'HiFi' gear), and my comments on 'silence' were made using that. So the big PC is electrically quiet enough not to interfere with anything else.

I suspect they all are, and the 'quietness' of 'streamers' is BS. It doesn't stop us nutcases buying them though, does it?
But there aren't many of us - the sales of HiFi magazines are very small compared to the sales of 'Fifty Shades of Grey', but those readers all listen to music too. Just not on 'HiFi' stuff :)

supamark's picture

turning off light switches can cause a pop in my system (not that loud, thankfully). I suppose part of the problem is my preamp is a 30 year old Tandberg that only uses a two prong IEC plug/cord but I still *shouldn't* be hearing pops on switches that run to different circuits in the fuse box... Perfect situation for a power conditioner, if only I could afford a good one right now.

spacehound's picture

In the UK the standard house wiring, (all three wires, live, neutral, and 'earth'), runs in a 'ring', like the lights on a Christmas tree.
So in effect each socket is fed from both ends and the sockets are electrically 'close to each other'.
For the HiFi I changed that and installed a 100 amp spur, meant for a cooker circuit, direct from the utility's 'consumer unit'. This 100 amp circuit is standard in the UK, and if you use a gas cooker (as we do) it's just switched off and rolled up in the roofspace, so it's easy to use anywhere for whatever you want.

As I said, I don't get any noise anyway, but I like to think the direct 100 amp circuit (which in the UK with our 220 volt supply is 22 Kilowatts) improves the bass.

I'm not sure whether it actually does or not :)

spacehound's picture

For what it's worth,
I just got a notification on my email account, purporting to be from 'Microsoft', that there are two new replies to my posts here. I looked 'manually' immediately before posting this, and there aren't.

And it asks me to 'verify' my account, before it will 'release' these two notifications, which of course I have not done.

It's a fake, though quite a convincing one - it has all the correct Microsoft logos, and on the face of it appears genuine.
