Brief Study of Noise-Shaping SAR ADC – Part B

In the previous post, I shared some basics of the sigma-delta ADC. In this post, before we look at the noise-shaping SAR ADC, let’s do another warm-up.

The z-domain linear model for a 1st-order sigma-delta modulator:

Fig.1 Linear model of a 1st-order sigma-delta modulator

The linear model has the same transfer functions as the one in Fig.6 of the previous post, where a delaying integrator is used as the loop filter.

Before the quantizer, the modulator is doing two tasks:

1. Δ: generate the conversion residue R (=U-V)

2. Σ: add up all the previous residues

Keep this in mind. Now let’s try to make the SAR do the noise shaping.

A conventional charge-redistribution SAR ADC:

Fig. 2 Charge-redistribution SAR ADC

When the conversion is complete for an N-bit SAR, the voltage generated at the top plate of the DAC represents the difference between the sampled input and the value constructed from the decisions of the N-1 most significant bits:

V_{DAC} = \frac{V_{REF}}{2}D_{N-1}+\frac{V_{REF}}{2^2}D_{N-2}+\cdots+\frac{V_{REF}}{2^{N-1}}D_1-V_{IN}

If we do one extra switching of the DAC array based on the final LSB decision, the voltage at the DAC top plate becomes:

V_{DAC}^{+1} = \frac{V_{REF}}{2}D_{N-1}+\cdots+\frac{V_{REF}}{2^{N-1}}D_1+\frac{V_{REF}}{2^N}D_0-V_{IN}

Yes! We have captured the conversion residue! It can be written more compactly as:

V_{RES} = D_{OUT}-V_{IN}

Comparing with Fig.1, this can be rewritten as -V_{RES} = V_{IN}-D_{OUT}; in other words, -V_{RES} is exactly the residue R = U - V of the linear model.

Then we need to sample this residue and store it somewhere else. How about this method?

Step 1: sample the residue on the extra capacitor

Fig.3 Sample the residue on the extra capacitor (discrete-time domain is used to indicate the current sample and the previous one)

Step 2: apply the sampled residue to the opposite input of the comparator during the next conversion

Fig.4 Apply the residue to the opposite input of the comparator when the next sample is converted

Now we come to the question of choosing the value of C_R.

Assume C_R = k C_{DAC}. Sharing the residue charge between the DAC array and C_R then gives

V_R(n) = \frac{C_{DAC}V_{RES}(n)+C_R V_R(n-1)}{C_{DAC}+C_R} = k_1 V_{RES}(n) + k_2 V_R(n-1),

with k_1=\frac{1}{1+k} and k_2=\frac{k}{1+k}.

What will the linear model look like?

Fig. 5 Linear model and transfer functions

If C_R << C_{DAC} (i.e., k \approx 0), then k_1 \approx 1 and k_2 \approx 0: the memory of the previous residues is discarded and only the current residue is recorded. The linear model can be simplified to:

Fig. 6 Linear model and transfer functions when k1=1 and k2=0

Take a look at the magnitude responses of the NTFs under different k:

Fig. 7 Magnitude response of NTFs under different k, compared with 1st-order noise shaping

The noise does get shaped! In addition, it seems that a small residue-sampling capacitor works fairly well compared with larger ones (note that the kT/C noise from the residue sampling appears at the comparator input and can also be shaped together with the quantization noise and the input-referred comparator noise [1]).
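To cross-check the shaped spectrum, here is a small behavioral sketch in MATLAB (entirely my own model and numbers, not taken from [1]). It implements the two steps above together with V_R(n) = k_1 V_{RES}(n) + k_2 V_R(n-1), and it assumes the stored residue is applied with a polarity that subtracts it from the next comparison.

% Behavioral sketch of the residue-feedback SAR described above (assumed parameters).
Nbit = 8;  LSB = 1/2^Nbit;             % SAR resolution, full scale normalized to 1
Ns   = 2^15;                           % number of conversions
k    = 0.25;                           % C_R = k * C_DAC
k1   = 1/(1+k);  k2 = k/(1+k);         % charge-sharing coefficients from above
u    = 0.4*sin(2*pi*331*(0:Ns-1)'/Ns); % input sine, 331 cycles -> coherent FFT
vR   = 0;  dout = zeros(Ns, 1);
for n = 1:Ns
    x       = u(n) - vR;               % stored residue shifts the comparisons (assumed polarity)
    dout(n) = LSB*round(x/LSB);        % the SAR quantizes x with an N-bit LSB
    vRes    = dout(n) - u(n);          % residue left at the DAC top plate (V_RES = D_OUT - V_IN)
    vR      = k1*vRes + k2*vR;         % charge sharing onto C_R
end
S = 20*log10(abs(fft(dout - mean(dout)))/(Ns/2) + eps);
f = (0:Ns/2-1)'/Ns;
semilogx(f, S(1:Ns/2)); grid on;
xlabel('f / f_s'); ylabel('dB');       % the quantization noise floor tilts up towards f_s/2

The resulting tilt is mild compared with a true 1st-order NTF, which is consistent with the remark below.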

However, compared to the 1st-order modulator, this way of noise shaping is much less efficient. We could do better! How? The next post ;-).

References:

[1] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-shaping SAR ADC”, JSSC, vol.47, 2012.


Brief Study of Noise-Shaping SAR ADC – Part A

Sometimes it is much easier to become a fan of something when you only know a little about it. Take the Sigma-Delta ADC: it is so complicated that even though I have studied it several times, I still can’t fully understand it!

Nevertheless, I am still a fan of it ;-).

Sigma-Delta ADCs dominate in the high-resolution domain (though they are not extremely fast, actually kind of slow…).

Fig. 1 Signal-to-noise-and-distortion ratio (SNDR) versus sampling frequency of Sigma-Delta ADCs and other Nyquist ADCs (SAR, Pipeline, and Flash). The data were reported in ISSCC, and collected by Murmann’s ADC survey [1].

I am currently working on successive-approximation-register (SAR) ADCs.

SAR ADCs are quite energy-efficient, but less accurate than Sigma-Delta ADCs. 

Fig. 2 Energy (P/fs) versus SNDR of SAR ADCs, Sigma-Delta ADCs, and other Nyquist ADCs (Pipeline and Flash). The data were again extracted from Murmann’s ADC survey [1].

In order to achieve high resolution, can SARs shape the noise just as Sigma-Delta ADCs do? 

People have tried to employ the noise-shaping technique in the SAR architecture [2, 3], but so far the reported performance (with chip measurement) is not very compelling (SNDR = 62 dB, Power = 806 µW, Bandwidth = 11 MHz, FoM = 35.8 fJ/conv) [3].
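As a quick sanity check of the quoted FoM number (assuming it is the commonly used Walden figure of merit):

FoM = \frac{P}{2^{ENOB}\cdot 2\cdot BW}, \quad ENOB = \frac{SNDR-1.76}{6.02} \approx 10\ \textnormal{bits}, \quad FoM \approx \frac{806\ \mu W}{2^{10}\cdot 2\cdot 11\ \textnormal{MHz}} \approx 35.8\ \textnormal{fJ/conv}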

Nevertheless, the idea of noise-shaping SAR is so intriguing.

Before entering this topic, I would like to do some warm-ups – some basics of Sigma-Delta ADCs (yes, that’s all I know about them).

Some basics of Sigma-Delta ADCs:

1. Oversampling 

Fig.3 Brief illustration of oversampling (OSR is the abbreviation of oversampling ratio)

Doubling the sampling frequency gives a 3 dB increase in SNR. However, oversampling is seldom used alone; it is commonly used together with the noise-shaping technique.

2. Noise-shaping

Fig.4 Brief illustration of noise-shaping and the sigma-delta modulator

Filtering is introduced into the ADC to further suppress the in-band quantization noise power, while leaving the input signal unaffected. By applying a loop filter before the quantizer and introducing feedback, a sigma-delta modulator is built.

3. Linear model of a sigma-delta modulator 

Fig.5 Linear model of a sigma-delta modulator. STF and NTF are abbreviations of signal transfer function and noise transfer function, respectively. (For more information, refer to Schreier’s book [4].)

According to the STF and NTF, if the transfer function of the loop filter H(z) is designed to have a large gain inside the band of interest and a small gain outside it, the signal passes through the modulator while the in-band quantization noise is greatly reduced.
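Written out explicitly (with the quantizer modeled as an additive noise source E, which is the standard single-loop linear model and presumably what Fig. 5 shows):

V(z) = \frac{H(z)}{1+H(z)}U(z) + \frac{1}{1+H(z)}E(z), \quad STF(z)=\frac{H(z)}{1+H(z)}, \quad NTF(z)=\frac{1}{1+H(z)}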

4. If an integrator is chosen to be the loop filter

Fig. 6 Modulator with an integrator as the loop filter and its STF and NTF

We do a plot of H(f), STF(f), and NTF(f) (Matlab ‘fvtool’ is used):

Fig. 7 Magnitude response of H(f), STF(f), NTF(f)
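In case you would like to reproduce the plot, a minimal MATLAB sketch (my own) using ‘fvtool’:

b_H   = [0 1];  a_H   = [1 -1];   % H(z)   = z^-1/(1 - z^-1), the delaying integrator
b_STF = [0 1];  a_STF = 1;        % STF(z) = z^-1
b_NTF = [1 -1]; a_NTF = 1;        % NTF(z) = 1 - z^-1
hfvt  = fvtool(b_H, a_H, b_STF, a_STF, b_NTF, a_NTF);   % overlaid magnitude responses
legend(hfvt, 'H(f)', 'STF(f)', 'NTF(f)');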

Bingo! The signal is passed to the output with a delay of a clock cycle, while the quantization noise is passed through a high-pass filter.

Doubling the sampling frequency gives a 9 dB increase in SNR for 1st-order noise shaping.

5. Get more aggressive on the order

Fig. 8 Magnitude response of NTF from 0th – 3rd order
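A small numerical check (my own sketch) of how much the in-band noise drops per doubling of the OSR for NTF(z) = (1 - z^{-1})^L, where L = 0 corresponds to plain oversampling:

f  = linspace(0, 0.5, 2^16)';          % normalized frequency f/fs
df = f(2) - f(1);
for L = 0:3
    ntf2   = abs(1 - exp(-1j*2*pi*f)).^(2*L);        % |NTF(f)|^2
    inband = @(osr) sum(ntf2(f <= 0.5/osr))*df;      % relative in-band noise power
    fprintf('order %d: %4.1f dB per doubling of OSR\n', L, 10*log10(inband(32)/inband(64)));
end
% expect roughly 3, 9, 15, and 21 dB for orders 0 to 3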

This post tells the basic story of noise-shaping. In the next post, I will try to learn how noise-shaping can be used in SAR ADCs.

References:

[1] B. Murmann, “ADC Performance Survey 1997-2014,” [Online]. Available: http://www.stanford.edu/~murmann/adcsurvey.html.

[2] K. S. Kim, J. Kim, and S. H. Cho, “nth-order multi-bit ΣΔ ADC using SAR quantiser”, Electronics Letters, vol. 46, 2010.

[3] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-shaping SAR ADC”, JSSC, vol.47, 2012.

[4] R. Schreier and G. C. Temes, Understanding Delta-Sigma Data Converters, 2005.


Noise Effect On The Distribution of ADC Output Codes

In the previous post, the probability of a comparator decision in the presence of noise was calculated. In this post, the topic of noise and probability continues. The whole topic is actually inspired by a 1986 paper [1], which discusses the noise effect on the distribution of SAR ADC output codes. Following the author’s method, though I end up with slightly different results, I still found some interesting things which I would like to post here.

Assume an input voltage V_i is applied to an ADC, and the ADC has input-referred noise with a standard deviation of \sigma. The input voltage is then compared with a certain reference voltage V_r to determine the corresponding bit. Referring to the equations derived in the previous post, the probability of the bit being high is written as

P(D_j = 1)=\frac{1}{2} erfc(\frac{V_r-V_i}{\sqrt{2}~ \sigma}).

Similarly the probability of the bit being low is given by

P(D_j = 0)=\frac{1}{2} erfc(\frac{V_i-V_r}{\sqrt{2}~ \sigma}).

Considering the time sequence in which Nyquist ADCs generate their digital outputs, I roughly group them into two categories: outputs converted simultaneously and outputs converted successively. The former needs 2^N-1 comparisons for N bits, while the latter only needs N comparisons, but with a penalty in speed. I’m more interested in the second case, lazy and slow but still doing the job ;-).

Problem formulation: due to the existence of noise, the input voltage can be converted to erroneous output codes or to the correct one (indicated with yellow and blue backgrounds in Fig.1, respectively). The probability of each output code for one particular input is of interest.

Fig.1 An example of 3-bit ADC. Due to noise, the input can be mapped to erroneous output codes (with yellow background) or a correct one (with blue background). The probability for each mapping is of interest.

How to calculate the probability of one particular output code?

Fig. 2 gives an example of the probability calculation for code “100” converted by a 3-bit SAR ADC. The input voltage V_i corresponds to code “101”. The calculation starts from the MSB and proceeds to the LSB; the probability of a less-significant bit depends on the decisions of the more-significant bits. Finally, the probability of a given code is the product of the probabilities of its individual bits.

Fig. 2 Probability calculation of code “100” generated by a 3-bit SAR ADC.
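Following exactly this procedure, here is a MATLAB sketch (my own; it assumes an independent Gaussian noise sample at every comparison and ideal bit weights, with all voltages expressed in LSB) that computes the probability of every output code for a given input. The numbers mimic the 10-bit, code 510 + 1/4 LSB, \sigma = 1 LSB case of Fig. 4 below:

N     = 10;                        % resolution
sigma = 1;                         % input-referred noise std dev, in LSB
Vi    = 510.25;                    % input in LSB: code 510 + 1/4 LSB
p1    = @(Vr) 0.5*erfc((Vr - Vi)./(sqrt(2)*sigma));   % P(bit = 1), from the equations above
Pcode = zeros(1, 2^N);
for c = 0:2^N-1
    bits = bitget(c, N:-1:1);      % bits of the candidate code, MSB first
    acc  = 0;  P = 1;
    for j = 1:N
        Vr = acc + 2^(N-j);        % SAR test level for this comparison, in LSB
        if bits(j)
            P = P*p1(Vr);  acc = acc + 2^(N-j);
        else
            P = P*(1 - p1(Vr));
        end
    end
    Pcode(c+1) = P;
end
stem(0:2^N-1, Pcode); xlim([500 520]);
xlabel('output code'); ylabel('probability');

Summing Pcode over all codes gives 1, which is a handy sanity check of the calculation.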

Knowing how to calculate the probability of a given code, I tried to look at the noise specifications from a statistics point of view. There are two noise specifications commonly used (in academia): the noise power equals the quantization noise power, or the noise standard deviation equals 1 LSB. The former introduces a 3 dB loss of SNR, the latter about 11 dB. So what does the code distribution look like under these two specifications?
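These two numbers follow directly from adding the input-referred noise power \sigma^2 to the quantization noise power LSB^2/12:

\textnormal{SNR loss} = 10~\textnormal{log}_{10}\left(1+\frac{\sigma^2}{LSB^2/12}\right),

which gives 10 log_{10}(2) \approx 3 dB for \sigma^2 = LSB^2/12 and 10 log_{10}(13) \approx 11 dB for \sigma = 1 LSB.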

Noise power is equal to the quantization noise:

Fig. 3 Probability of output codes for a 10-bit SAR ADC with an analog input corresponding to code 510 + 1/4 LSB offset and input-referred noise power equal to its quantization noise.

Noise standard deviation is equal to 1 LSB:

Fig. 4 Probability of output codes for a 10-bit SAR ADC with an analog input corresponding to code 510 + 1/4 LSB offset and input-referred noise standard deviation equal to 1 LSB.

Sorry for the math. Like nonsense in the middle of a summer day. Sleepy?

Reference

[1] Philip W. Lee, “Noise considerations in high-accuracy A/D converters”, JSSC, 1986.


Noise Effect On The Probability of Comparator Decision

In the previous post, it was explained why a normal distribution with a standard deviation of \sigma is used to characterize circuit input-referred noise. In this post, we will calculate the probability of a comparator decision (High or Low) in the presence of noise.

We ignore the DC offset of the comparator and consider only the thermal-noise effect. Then the distribution of voltages presented to the comparator input, as shown in Fig.1, can be represented by a normal distribution whose standard deviation equals the \sigma of the comparator input-referred noise and whose mean is shifted by the signal V_C.

Fig. 1 Input-referred distribution of voltages presented to the comparator. The shaded area represents the probability of the comparator thinking the input voltage is low.

The probability of a decision Low is simply given by the area under the curve to the left of zero, which is indicated by the shaded region in Fig.1. This area is given in terms of the error function by (referring to P1, P2, and P3 in Appendix)

P(\textit{Low})=\frac{1}{2}+\frac{1}{2} erf(\frac{-V_C}{\sqrt{2}~ \sigma}).

Noting that the error function is an odd function (P5 in the Appendix), we can further write

P(\textit{Low})=\frac{1}{2}-\frac{1}{2} erf(\frac{V_C}{\sqrt{2}~ \sigma}).

Finally, based on the above result, the probability can be further calculated in terms of the complementary error function by (P4 in Appendix)

P(\textit{Low})=\frac{1}{2} erfc(\frac{V_C}{\sqrt{2}~ \sigma}).

Similarly, the probability of a decision High is given by the remaining unshaded area under the curve in Fig.1, which is

P(\textit{High})=\frac{1}{2} erfc(\frac{-V_C}{\sqrt{2}~ \sigma}).
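A quick Monte-Carlo sanity check of this result in MATLAB (my own sketch, with assumed values for V_C and \sigma):

sigma = 1e-3;                          % 1 mV rms input-referred noise (assumed)
Vc    = 0.5e-3;                        % 0.5 mV signal at the comparator input (assumed)
v     = Vc + sigma*randn(1e6, 1);      % voltages actually seen by the comparator
fprintf('P(Low): counted %.4f, erfc formula %.4f\n', ...
        mean(v < 0), 0.5*erfc(Vc/(sqrt(2)*sigma)));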

In the next post, we will use the derived equations to show noise effect on the distribution of ADC output codes.

Appendix

Some equations on standard normal distribution:

1. Probability density function (PDF)

\phi(x)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{x^2}{2}}.

If we denote the mean as \mu and the standard deviation as \sigma, the PDF of a general normal distribution can be expressed as

f(x)= \frac{1}{\sigma} \phi(\frac{x-\mu}{\sigma}).

2. Cumulative distribution function (CDF)

\Phi(x)=P[X \leq x] = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}}dt.

3. Error function

erf(x)=\frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt.

Hence, the CDF can be further expressed using the error function:

\Phi(x)=\frac{1}{2}+\frac{1}{2} erf(\frac{x}{\sqrt{2}}).

4. Complementary error function

erfc(x)=\frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}dt = 1 - erf(x).

5. Error function is an odd function.

erf(-x)=- erf(x).


Normal Distribution and Input-Referred Noise

Normal distribution is frequently assumed when we do circuit analysis.

  • Why?

Because there is a saying that the sum of a large number of random variables converges to the Normal.

  • Under what condition is this true?

The central limit theorem deals with this point. In [1], it is stated that “the normalized sum of a large number of mutually independent random variables with zero means and finite variances tends to the normal probability distribution function provided that the individual variances are smaller compared to the total sum of variance”.

Fig.1 PDF for sum of a large number of random variables

  • Why do people sometimes assume a normal distribution and use \sigma to characterize circuit input-referred noise?

It’s understandable that people use \sigma to characterize the offset of a circuit, because many random effects during fabrication tend towards a normal distribution. But when it comes to noise, 4kTRBW or kT/C pops up in our minds. Why \sigma?

In the frequency domain, it’s straightforward to do the noise analysis given the noise power spectral density and the bandwidth. However, when we move to the time domain, we need the assumption of a normal distribution and its \sigma.

  • How do we link time-domain noise to frequency-domain noise? Or, to ask in another way, how is \sigma related to the noise power spectral density n^2(f) and the bandwidth F_{max}?

Cadence should have the answer, because they provide transient noise simulation. And indeed, I found the answer in their application note [2]. 😉

Let’s take white noise as an example for simplicity (in this case, n^2(f)=n^2). In the time domain, a white noise signal n(t) is approximated as

n(t)=\sigma \cdot \eta(t, \Delta t),

where \eta(t, \Delta t) is a random number with standard normal distribution updated with time interval \Delta t. The noise signal amplitude and update time interval are

\sigma = \sqrt{n^2 \cdot F_{max}},

\Delta t = \frac{1}{2F_{max}}.

Let’s then verify it. The auto-correlation function for this noise signal is calculated to be

R_n(\tau)=\sigma^2 \Lambda(\frac{\tau}{\Delta t}),

where \Lambda is a triangular pulse function of width \Delta t. The power spectrum of n(t) can then be calculated as a Fourier transform of the auto-correlation function

n^2(f)=\sigma^2 \cdot \Delta t \cdot sinc^2(f \cdot \Delta t).

Finally, the total noise power can be obtained by integrating over the frequency

\int_{-\infty}^{\infty} n^2(f)\,df = \sigma^2 = n^2 \cdot F_{max}.

Fig.2 The noise signal, its auto correlation function, and spectral density [2]
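To convince myself, a short MATLAB sketch (my own, with assumed values for n^2 and F_{max}) that generates the piecewise-constant noise described above and checks its total power and low-frequency PSD:

n2   = 1e-12;                       % assumed white-noise PSD, V^2/Hz
Fmax = 1e6;                         % assumed noise bandwidth, Hz
sig  = sqrt(n2*Fmax);               % sigma = sqrt(n^2 * Fmax)
dt   = 1/(2*Fmax);                  % update interval
os   = 8;  fs = os/dt;              % simulate 8 time points per update interval
x    = kron(sig*randn(2^16, 1), ones(os, 1));     % hold each random value for dt
[pxx, f] = pwelch(x, hann(4096), [], 4096, fs);   % one-sided PSD estimate
fprintf('total power : %.3g (expect sigma^2 = %.3g) V^2\n', var(x), sig^2);
fprintf('low-freq PSD: %.3g (expect n^2 = %.3g) V^2/Hz\n', mean(pxx(f < 0.2*Fmax)), n2);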

For more detailed explanation, please refer to Ref [2].

Reference

[1] H. Stark and J. W. Woods, Probability and random processes with applications to signal processing, 3rd edition, Pearson Education, 2009.

[2] Cadence, “Application notes on direct time-domain noise analysis using Virtuoso Spectre”, Version 1.0, July 2006.


Stay Simple – Square-Law Equation Related

The 1st post in 2013. Happy middle new year! 😉

Nowadays most of us in academia are designing analog circuits in deep-submicron nodes using short-channel transistors. Putting them together with the digital, building up a system, and selling it with extraordinary performance at top conferences or in journals—that’s the dream (and sometimes a real nightmare) of a PhD student…

So, do you miss the old times when you were designing circuits with long-channel transistors?

I do! The square-law equation is so neat that the relation among the drain current, the effective voltage, and the transconductance can simply be expressed by

I_D = \frac{V_{GS}-V_T}{2} g_m

With certain effective gate-source voltage, the transconductance linearly follows the biasing current. It is quite straightforward to tell how much power you will pay for your target gain.

Then the short-channel transistors, together with low supply voltages, came along ;-). Life is no longer that straightforward. In [1] and [2], the authors introduce a parameter Veff, defined by

V_{eff}=\frac{I_D}{g_m}

They also compare their parameter with the square-law equation using the following figure.

Fig. 1 Veff versus VGS-VT [1][2].

Yes! Modern short-channel transistors are often biased in the transition region between the sub-threshold and saturation regions, which makes the classical equations almost useless! The proposal of Veff is quite elegant: it provides a simple way for us to analyze the power bounds of analog circuits in modern CMOS [1][2].

If you ponder the simulated curves above, are you reminded of something else that has the beauty of simplicity and continuity?

Yes. The EKV model! In this model, Veff, expressed using the inversion coefficient IC, can be derived as [3]

V_{eff}= \frac{I_D}{g_m} = n U_T (\sqrt{IC + 0.25}+0.5)

And the effective gate-source voltage is expressed as [3]

V_{GS}-V_T= 2 n U_T \textnormal{ln}(e^{\sqrt{IC}}-1)

Out of curiosity, I plotted Veff versus VGS-VT based on the EKV model, together with simulated curves, which I obtained using a method similar to that used in [1][2] but for different processes. The figure is shown below.

Fig. 2 Veff versus VGS-VT with EKV model included (Different inversion regions are also indicated).
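For reference, a MATLAB sketch of the EKV part of this plot (my own; it assumes n = 1.3 and U_T = 25.9 mV, i.e., room temperature), sweeping the inversion coefficient and mapping it to both axes with the two expressions above. The square-law line Veff = (V_GS - V_T)/2 follows from the equation at the beginning of this post.

n   = 1.3;                              % assumed subthreshold slope factor
UT  = 0.0259;                           % thermal voltage at room temperature, V
IC  = logspace(-3, 2, 200);             % inversion coefficient, weak to strong inversion
Veff  = n*UT*(sqrt(IC + 0.25) + 0.5);   % Veff = I_D/g_m
VGSVT = 2*n*UT*log(exp(sqrt(IC)) - 1);  % effective gate-source voltage
plot(VGSVT, Veff, VGSVT, max(VGSVT, 0)/2, '--'); grid on;
xlabel('V_{GS} - V_T (V)'); ylabel('V_{eff} = I_D/g_m (V)');
legend('EKV model', 'square law', 'Location', 'northwest');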

Though the analog world can’t be as simple as 0 or 1, still the simpler the better!

Reference

[1] T. Sundström, B. Murmann, and C. Svensson, “Power dissipation bounds for high-speed Nyquist analog-to-digital converters”, TCASI, 2008.

[2] C. Svensson and J. J. Wikner, “Power consumption of analog circuits: a tutorial”, Analog Integrated Circuits and Signal Processing (Springer), 2010.

[3] D. M. Binkley, B. J. Blalock, and J. M. Rochelle, “Optimizing drain current, inversion level, and channel length in analog CMOS design”, Analog Integrated Circuits and Signal Processing (Springer), 2006.


Brief Study of Dither C: Dithered DNL

In ‘Dither’ series B, I showed that dither reduces harmonic distortion by delocalizing the code mapping of the analog signal from the transfer curve. So if we take a step further, we might say the main purpose of dither is to randomize the DNL errors of the converter.

Then it seems that DNL is somehow related to harmonic distortion. In ‘ADC performance’ series A, I discussed the relationship between DNL and SNR; in that post, I treated the DNL error as uniformly distributed noise. However, the story told by the DNL plot is not that simple. In this post, I will try to explain more about DNL, together with a further discussion of dither.

1. Dynamic effects of DNL from a distortion point of view

The location of a DNL error is important. For example, as shown in Figure 1, a converter may have a DNL error of +2 LSB at a code near –FS, which is quite a large error. However, its effect will be minimal for a converter that rarely uses codes near full scale. Conversely, a converter may have a DNL error of +0.25 LSB for 4 codes near midscale, which indicates +1 LSB of transfer-function error around that location. If a converter works repetitively in the middle range, these four errors will cause dynamic distortion.

Figure 1. The location of the DNL error is important [1].

Thus a blanket statement about the INL or DNL of a converter without additional information (location, frequency, etc.) is almost useless [1].
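To make the midscale example concrete, here is a small behavioral sketch in MATLAB (entirely my own numbers, not from [1] or [2]): a 10-bit quantizer whose midscale transition is 1 LSB late (a localized DNL error) converts a sine, without and with Gaussian dither.

N   = 10;  LSB = 1/2^N;                 % 10-bit quantizer, full scale normalized to 1
Ns  = 2^14;
vin = 0.49*sin(2*pi*997*(0:Ns-1)'/Ns);  % 997 cycles in 2^14 samples -> coherent FFT
quant   = @(x) floor(x/LSB) - (x >= 0 & x < LSB);   % midscale transition shifted up by 1 LSB
spec_dB = @(d) 20*log10(abs(fft(d - mean(d)))/(Ns/2) + eps);
S0 = spec_dB(LSB*quant(vin));                       % no dither
S1 = spec_dB(LSB*quant(vin + LSB*randn(Ns, 1)));    % about 1 LSB rms Gaussian dither added
f  = (0:Ns/2-1)'/Ns;
plot(f, S0(1:Ns/2), f, S1(1:Ns/2));
xlabel('f / f_s'); ylabel('dB'); legend('no dither', '1 LSB rms dither');
% without dither the localized error repeats at the same signal phase every cycle
% and shows up as harmonics; with dither it is spread into the noise floor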

2. Dithered DNL – An example of a subranging pipelined ADC from ADI [2]

The 14-bit 105-MSPS AD6645 block diagram:

Figure 2. The block diagram of AD6645 from ADI.

The problem – significant DNL errors occur at the ADC1 transition points:

Figure 3. AD6645 subranging point DNL errors (exaggerated).

The solution – dither noise whose peak-to-peak amplitude covers about two ADC1 transitions is added to the input. The DNL is not significantly improved by higher levels of noise.

The result – undithered DNL versus dithered DNL:

Figure 4. AD6645 undithered and dithered DNL.

I will end the discussion of dither at this point. I hope to have a chance to try the dither method in the future and gain a better understanding of it. Even after writing three posts, I’m still a little puzzled by dither… It’s really not that easy to deal with noise!

Reference

[1] Brad Brannon, Overcoming converter nonlinearities with dither, Analog Devices, AN-410.
[2] Walt Kester, The good, the bad, and the ugly aspects of ADC input noise – is no noise good noise?  Analog Devices,  MT-004.
