From: "Remco Stoutjesdijk"
Subject: [JN] Digital format primer (LONG), was: (Bangers and) MASH!
Date: Thu, 24 Aug 2000 12:31:48 +0200
Source: Sound Digest Archive v02.n640

¡Hola!

[ BE AWARE: The following is not about tubes, but about solid state and even digital stuff. If you can't take it, I strongly suggest you hit the 'del' button... But I found at Arhus that many people are not fully up to speed on this while (IMHO) it is very useful to know at least a bit about digital stuff, in this world of SACD and DVD...  ]

> It was built in 1992, works very well, and has some nice features.

Yumm, features :)

I remember buying 'My First Sony' cd player 'cuz it had so many gimmicks. Used them for a week, after that play, stop, nxt, prev would do :-)

> Instruction booklet says that the DAC is a "MASH (1bit)".
> The front of the machine says "MASH Multi-stage noise shaping" and
> "4DAC".
> So what does this all mean? How do you make a "1-bit" DAC?

Ok, now for the science part, pay attention... I'll start at the very beginning.

There are basically two ways of digitally describing audio:

a/         By mapping the dynamic range onto a set of numbers. This is called PCM (Pulse Code Modulation). The total number of levels defines the amplitude resolution, while the number of samples taken per second defines the maximum digitizable frequency (Nyquist criterion: Fmax = Fs/2). We all know the standard 16 bits x 44.1 kHz by now, but 24 x 96 kHz and 24 x 192 kHz (DVD-audio) are also PCM, just versions with a larger resolution and frequency range.
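To make that concrete, here's a minimal sketch of my own (not from any player's datasheet): PCM just maps samples in [-1, 1] onto 2^16 integer codes, and the sample rate caps the representable frequency at Fs/2.

```python
# Toy PCM encoder: illustration only, assumed 16-bit/44.1 kHz CD format.
import math

FS = 44100          # samples per second (CD rate)
BITS = 16           # CD amplitude resolution
HALF = 2 ** (BITS - 1)

def pcm_encode(signal):
    """Quantize samples in [-1.0, 1.0] to 16-bit integer codes."""
    return [max(-HALF, min(HALF - 1, int(round(s * (HALF - 1))))) for s in signal]

# A 1 kHz tone sits well below Fmax = FS/2 = 22050 Hz, so PCM captures it.
tone = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(441)]
codes = pcm_encode(tone)
print(min(codes), max(codes))
```

The clipping in `pcm_encode` is the hard ceiling of the format: anything outside the 65536 levels is simply lost.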

b/         By describing only the _changes_ in the signal with respect to its last known value. This is called Delta Modulation. It's actually older than PCM (invented in 1943) but it took until the end of the '80s before it was actually implementable. The resolution and frequency range both depend on the number of bits used and the number of samples per second, since it's a non-linear system (see below).

As we all know, by the end of the '80s and beginning of the '90s we started to see the 'bitstream', '1-bit', 'MASH' and other converters. These are all forms of Delta Modulation converters (see below).

Now why did all the companies just switch over all of a sudden?

Well, apart from the fact that you need something new every couple of years to keep selling stuff, there was actually a real reason, too. This has a lot to do with IC technology and computers.

The IC technologies of the '70s and '80s were based on both bipolar and MOSFET devices and had relatively large feature sizes and voltages (10 micrometers is *incredibly huge* by today's standards, and I've even heard people speak of 'high voltage supplies' when they meant 5 Volts). In these technologies relatively good analog devices were possible; there was enough dynamic range to allow for some 100 dB of S/N.

When the computer industry arose, new CMOS technologies appeared with ever smaller devices and ever higher switching speeds. Now those devices are *pure shit* for an analog designer. If you look at the curves of a 0.18 micron transistor you'll see that this thing will never deliver nice analog performance. Furthermore, when you make two supposedly identical transistors (or resistors, or capacitors), the matching is so poor that linearity is hardly achievable.

Building a DAC for PCM becomes hard here, and you can see why: the analog output of a PCM DAC is equal to 1*MSB + 1/2*(MSB-1) + 1/4*(MSB-2) + ... + 1/65536*LSB. This means the accuracy of the current sources (or resistors) in the DAC needs to be within a 1/65536 tolerance. No need to tell you that is *hard*. Not impossible: Burr-Brown even achieves 20-bit tolerance (1/1048576 !!!) by laser-trimming their resistors, but that procedure makes an IC very expensive.
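You can see the matching problem in numbers with a little sketch of mine (the 0.1% mismatch figure is an arbitrary illustration, not a measured device): put a tiny error on just the MSB current source of an ideal binary-weighted DAC and watch the mid-scale step blow up.

```python
# Illustration only: binary-weighted 16-bit DAC with a mismatched MSB source.
BITS = 16

def dac_out(code, msb_error=0.0):
    """Sum the binary-weighted sources for the bits set in 'code'."""
    out = 0.0
    for b in range(BITS):
        weight = 2.0 ** -(BITS - b)          # LSB weight = 1/65536
        if b == BITS - 1:
            weight *= (1.0 + msb_error)      # relative error on the MSB source
        if (code >> b) & 1:
            out += weight
    return out

lsb = 2.0 ** -BITS
mid = 2 ** (BITS - 1)
# The worst transition, 0111...1 -> 1000...0: all lower sources off, MSB on.
ideal_step = dac_out(mid) - dac_out(mid - 1)
bad_step = dac_out(mid, msb_error=1e-3) - dac_out(mid - 1, msb_error=1e-3)
print(ideal_step / lsb)   # exactly 1 LSB when everything matches
print(bad_step / lsb)     # a mere 0.1% MSB error corrupts it by tens of LSBs
```

That is why a 1/65536 tolerance is the entry ticket for a clean 16-bit binary-weighted DAC, and why laser trimming was needed to get there.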

So, what's a chap to do? Well, exploit the merit of the technology, which is speed. The idea of the 1-bit DAC is to recalculate the 16-bit words into 1-bit words. You can see that you need at least 16 one-bit words to keep the same amount of information, so the sampling frequency goes up by a factor of at least 16. Now how do you do that? You can't simply cut off the last 15 bits...

Here we use what is called a Sigma-Delta modulator. It consists of a truncator, i.e. a 1-bit comparator (in the 1-bit case, which is the easiest and most common), a delay and a subtractor. The first multi-bit word comes in and is truncated to 1 bit in the comparator. The comparator effectively decides whether the word is smaller or larger than 1/2 of the dynamic range. The rest of the word is put through the delay block and subtracted from the second incoming word. After the subtraction, the second word is also truncated, the rest of this word goes through the delay block again, and so on. A picture makes it somewhat clearer, but I suck at ascii-art:


                 +
multibit word--->O----------->[comparator]----->1-bit output
                 ^                |
                -|                | remainder
                 |                |
                 +----[delay]<----+


Now the result is a 1-bit stream (hence the name, you guessed it :) which describes the audio signal in terms of differences with respect to the previous sample. This is called a first order sigma delta modulator.
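The loop in the picture is small enough to simulate. Here's my own sketch (sign conventions and the test tone are my assumptions, not from the post): subtract the delayed remainder from each input word, truncate to one bit, and keep the new remainder for the next sample.

```python
# Illustration only: first-order sigma-delta modulator (error-feedback form).
import math

def sigma_delta_1st_order(signal):
    """Truncate to +/-1; feed the truncation remainder back via a delay."""
    bits, remainder = [], 0.0
    for x in signal:
        w = x - remainder              # subtract the delayed remainder
        y = 1.0 if w >= 0 else -1.0    # 1-bit 'comparator' (truncator)
        remainder = y - w              # what truncation threw away
        bits.append(y)
    return bits

# 64x oversampled 1 kHz tone at half amplitude.
OSR, FS = 64, 44100
tone = [0.5 * math.sin(2 * math.pi * 1000 * n / (FS * OSR)) for n in range(4096)]
bits = sigma_delta_1st_order(tone)

# The local average (density of +1s) of the bitstream follows the input.
window = 64
avg = sum(bits[2048:2048 + window]) / window
print(abs(avg - tone[2048 + window // 2]) < 0.1)
```

The output is only ever +1 or -1, yet its short-time average tracks the multi-bit input: that density-of-ones encoding is exactly what the 'bitstream' name refers to.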

Still paying attention? Then you probably wonder WHY the heck this is all necessary. Well, here's the reason: the delay in the discrete-time (z) domain is analogous to a time-derivative in the continuous time domain. But it's in the feedback path, so the output is the integral of the original signal. In order to reconstruct the original analog output we only need to take the time-derivative of this signal. Hence the name sigma-delta: the sigma is the sum part (or subtractor part), the delta is the delay in the feedback.

Why is this tricky feedback part needed? Well, the comparator is only 1 bit, and thus generates huge amounts of quantization noise, which is white (power Δ²/12 for step size Δ). By adding the delta (time-derivative) feedback loop, the loop gain is very high for low frequencies and low for high frequencies, and the quantization noise of the comparator is suppressed by the loop gain. This means at low frequencies there is very little noise, while at higher frequencies there is a lot. The noise is shaped! Now you get the name 'noise shaper'!

In frequency domain terms: The sigma-delta signal contains the audio band, then a gap all the way to the sampling frequency (which is at least 16*44100), then a lot of quantization noise. What's the simplest DAC then? Yup, a low-pass filter. That's all.
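You can check that shaped spectrum numerically. My sketch below (bin choices and tone are arbitrary illustrations) modulates a tone, subtracts the input to isolate the quantization noise, and compares its DFT magnitude near DC against bins near half the oversampled rate.

```python
# Illustration only: show the 1-bit quantization noise is pushed to high
# frequencies, so a plain low-pass filter is the whole DAC.
import math

def modulate(signal):
    """First-order error-feedback sigma-delta, +/-1 output."""
    bits, err = [], 0.0
    for x in signal:
        w = x - err
        y = 1.0 if w >= 0 else -1.0
        err = y - w
        bits.append(y)
    return bits

N = 4096
tone = [0.5 * math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
noise = [b - x for b, x in zip(modulate(tone), tone)]

def dft_mag(x, k):
    """Magnitude of DFT bin k (brute force, stdlib only)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

low = sum(dft_mag(noise, k) for k in range(10, 20))          # near the audio band
high = sum(dft_mag(noise, k) for k in range(N // 2 - 20, N // 2 - 10))
print(high > low)  # noise energy sits up high, where the low-pass removes it
```

The gap between the audio band and the noise mountain is what makes the "DAC = low-pass filter" trick work.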

What do we have now? We have a DAC that's hardly process-dependent: there are no devices in need of matching, and no devices that rely on their analog performance. This has just been a number-crunching exercise. What do we need? Speed, and we have plenty of that. Most Sigma-Delta DACs run at 64*Fs, that is 2.8 MHz. Not a problem *at all* for CMOS (the Pentium I was made in the same process as current DACs, so it can run at hundreds of MHz if needed). So now we fully exploit the merits of current IC processes while still getting good audio performance (120 dB of THD+N performance is possible!).

Well, when people got this working they of course wanted even better versions so they started playing around with the scheme from the picture. Philips and Sony experimented with putting two or more of these loops in cascade. Then you get a 2nd, 3rd etc. order sigma delta modulator. Works, but it's susceptible to (digital) instability, called 'limit cycles'. On the first 1-bitters you can hear small tones when there's no music playing. This is a limit cycle close to the clock frequency which has created an audible subharmonic!

Technics also played with the scheme but added loops beside and below the first loop, then summed the results, essentially making a higher-order loop without the danger of instability. They called it Multi-stAge noiSe sHaping (MASH). I'll spare you the details, but technically speaking it's not 1-bit anymore but something like 3.5 bits...

The '4DAC' label means there are two DAC chips per channel operating in differential mode (one DAC gets the MSB inverted), so their common-mode errors cancel.
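The arithmetic behind that cancellation fits in a few lines. This is my own toy model (the 0.02 offset is an invented error, not a Technics spec): give both DACs the same offset, drive them in anti-phase, and take the difference.

```python
# Illustration only: two DACs with a shared (common-mode) offset error,
# driven differentially, so the offset drops out of the difference.

def dac(code, offset=0.02):
    """Toy DAC: ideal value plus a common-mode offset error."""
    return code + offset

signal = [0.5, -0.3, 0.1]
# One DAC gets the signal, the other the inverted signal (MSB inverted).
diff_out = [(dac(s) - dac(-s)) / 2 for s in signal]
print(diff_out)  # the common offset cancels in the difference
```

Any error that is identical in both DACs (offset drift, supply wobble) cancels the same way; only differences between the two chips survive.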

And there you have it! Still awake?

Well, then I'll rave on for a bit more: there is virtually no DAC available today (not even from Burr-Brown, except old stock) that is not a Sigma-Delta DAC. That has led to SACD: Sony and Philips have created a format (Direct Stream Digital; if you paid attention you now also know what this term means) where the sigma-delta signal is stored directly, instead of taking the long road: sigma-delta A/D conversion -> recalculation to PCM -> storage -> recalculation to sigma-delta -> analog.

There is of course something to be said for this approach, instead of extending PCM to the big PCM that's now on DVD-audio. But marketing (and licensing politics and so on) will decide which is the next format to be the successor of CD-audio. Some players have been announced that can play both formats, give me one of those!

I've heard 16x44.1kHz, 24x192kHz and DSD from one dCS harddisk master demo of the same recording, and you may guess which one was my favorite! I have never heard a more 'analog' digital sound than the DSD recording. It was just like vinyl except for the ticks; it sent shivers deep, deep down my spine... until I heard the 6-front-channel demo at Philips Research... WHOAAA! I'm already trying to convince the dCS guys to come to Arhus next year

:)

> Are there any
> recommended mods to be made with this system?

- Find the output of the DAC and make your own postfilter stage with some tubes; I have very good experiences with doing that to older players. The DAC and transport are usually OK, the output stage is rotten.
- Ask Guido Tent about clock modifications, they can also help a lot!

Regards,
Remco

http://listen.to/rmsaudio