## Go Moderate

Prof. Sansen's inversion coefficient (IC) approach and Prof. Murmann's Gm/Id design methodology tell the same story of power-aware analog design.

With the help of a Gm/Id design kit, I can easily visualize transistor performance as a function of the gate-source voltage (see Fig.1). As VGS increases, the transistor moves through weak, moderate, and strong inversion. For high gain, we go left; for high speed, we go right. At the far left, the gain is no longer increasing but the speed drops extremely low; at the far right, the speed is no longer increasing but the drain current keeps climbing! For a decent figure of merit (speed × gain), go to the middle, go moderate!

Fig.1 ID, gm, gm/ID, fT, and fT·gm/ID as functions of VGS at fixed VDS, VBS, and W/L

As CMOSers, we love the square-law equation; the exponential subthreshold current equation we sometimes hate and sometimes embrace. But for the current flowing between the strong and the weak, do we have one equation? No, but yes… with some math, the EKV model combines all three. Referring to [1], the IC–V related equations are copied as follows: $e^{\sqrt{IC}} = e^v+1$, $IC = \frac{I_{DS}}{I_{DSspec}}$, $v = \frac{V_{GS}-V_T}{2nU_T}$, $I_{DSspec} = (\frac{\mu C_{OX}}{2n})(\frac{W}{L})(2nU_T)^2$,

where n is the subthreshold slope factor and UT is the thermal voltage. At room temperature, 2nUT is about 70 mV. As Fig.2 shows, the IC–V curve matches the weak-inversion equation well for IC < 0.1 and the strong-inversion equation for IC > 10; the moderate region lies where IC is between 0.1 and 10.

Fig.2 Normalized overdrive voltage as a function of inversion coefficient
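The IC–V relation is easy to evaluate numerically. A minimal sketch in Python, assuming illustrative values n = 1.35 and UT = 25.85 mV (pick your own process values):

```python
import math

n, U_T = 1.35, 0.02585    # assumed slope factor and room-temperature U_T (V)

def overdrive_from_ic(ic):
    """Invert e^sqrt(IC) = e^v + 1 for the normalized overdrive v, then
    scale by 2*n*U_T to get V_GS - V_T in volts."""
    v = math.log(math.exp(math.sqrt(ic)) - 1.0)
    return 2.0 * n * U_T * v

# Weak inversion (IC << 1): v -> 0.5*ln(IC); strong (IC >> 1): v -> sqrt(IC)
for ic in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"IC = {ic:6}: V_GS - V_T = {1e3 * overdrive_from_ic(ic):7.1f} mV")
```

With these numbers the normalized overdrive crosses zero near IC ≈ 0.48 (where √IC = ln 2), right inside the moderate region.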

Reference

[1] W. Sansen, "Minimum power in analog amplifying blocks – presenting a design procedure", IEEE Solid-State Circuits Magazine, Fall 2015.

## The Calculation of Phase Margin

Negative feedback is ubiquitous, and discussions of its stability can be found everywhere. For ease of reference, here is a memo on the equations used to calculate the phase margin.

Fig.1 The symbolized feedback configuration

The amplifying system may include multiple poles: $A(s)=\frac{A_0}{(1+\frac{s}{\omega_{p1}})(1+\frac{s}{\omega_{p2}})(1+\frac{s}{\omega_{p3}})(...)}$.

Neglecting higher-order terms, it can be simplified to a two-pole expression: one dominant pole and one equivalent non-dominant pole approximated by $\frac{1}{\omega_{eq}}=\frac{1}{\omega_{p2}}+\frac{1}{\omega_{p3}}+...$.

The frequency of interest is where the loop gain magnitude is close to unity, denoted as ωt. Normally ωt is much larger than the dominant pole frequency. Hence, βA(s) around ωt can be further simplified to: $\beta A(s)=\frac{\beta A_0 \omega_{p1}}{s(1+\frac{s}{\omega_{eq}})}$.

Since the first pole introduces a −90° phase shift, the phase of the loop gain at ωt is: $\angle\beta A(j\omega_t) = -90^\circ-\tan^{-1}(\frac{\omega_t}{\omega_{eq}})$.

Consequently, the phase margin (PM) is obtained by adding 180° to the phase of the loop gain: $PM \approx 90^\circ-\tan^{-1}(\frac{\omega_t}{\omega_{eq}}) = \tan^{-1}(\frac{\omega_{eq}}{\omega_t})$.

It can be seen that the phase margin is determined by the relative position between the equivalent non-dominant pole and the unity loop gain bandwidth.

| ωeq/ωt | 0.5 | 1 | 2 | 3 | 4 |
|--------|-----|---|---|---|---|
| PM | 26.6° | 45° | 63.4° | 71.6° | 76° |
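The table follows directly from the last equation; a quick numerical sketch in Python:

```python
import math

def phase_margin_deg(w_eq_over_wt):
    """PM ~ atan(w_eq / w_t), from the two-pole approximation above."""
    return math.degrees(math.atan(w_eq_over_wt))

for r in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"w_eq/w_t = {r}: PM = {phase_margin_deg(r):.1f} deg")
```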


## Gm/Id-Design Methodology

Three failed password attempts before I could access this site…

Earlier, in 2012, I wrote an introductory post about the EKV model and later extended the topic a little in another post – Stay Simple – Square-Law Equation Related. Since then I have kept following information about the EKV model and the inversion-coefficient-based analog design methodology.

One of the major contributors to this design methodology is Prof. Willy Sansen. He has given a short tutorial named Impact of Scaling on Analog Design, organized by ISSCC through edX (free access after registration). Most recently he also published an article [1] summarizing his ideas in the IEEE Solid-State Circuits Magazine.

The journey starts with a beautiful equation that nicely links the weak and the strong inversion (see the curve in Fig.1).

Fig.1 The relationship between V and IC

Fascinated by Prof. Sansen's design procedure, I tried to apply it to my daily design work. Theoretically, it does give me a broader view and some insight into low-power design. Practically, however, I find it difficult to make full use of, especially since most designs nowadays are in deep-submicron nodes, where the model parameters are complicated to interpret.

Then there comes another big name – Prof. Boris Murmann. Yes, the professor who provides the famous ADC performance survey! Now he has also launched his gm/Id starter kit. The kit provides scripts that co-simulate a SPICE simulator with Matlab and store transistor DC parameters in Matlab files. The stored data can then be used for systematic circuit design in Matlab. It looks brute-force, yet smart and efficient!

It’s free. Enjoy!

Reference

[1] W. Sansen, "Minimum power in analog amplifying blocks – presenting a design procedure", IEEE Solid-State Circuits Magazine, Fall 2015.

## Brief Study of Noise-Shaping SAR ADC – Part C

The topic of noise-shaping SAR ADC will come to an end in this post. In Part A, I briefly talked about the concept of noise shaping applied to sigma-delta modulators. In Part B, I introduced one special property of SAR ADCs which can be utilized to perform noise-shaping – the SAR architecture can generate the conversion residue without a feedback DAC. Then some form of noise shaping was achieved, but the result was not so satisfactory. In this post, I will continue the journey.

A short summary of how noise shaping was performed on the SAR architecture in Part B:

1. Let the DAC array complete all the switching based on the decisions from MSB to LSB (the conversion residue is generated)
2. Sample the conversion residue ($V_{RES}$) on an extra capacitor
3. Apply the residue with opposite sign ($-V_{RES}$) to the opposite terminal of the comparator

If the extra capacitor is much smaller than the array capacitor, only the current residue is sampled and there is almost no memory effect. The linear model of the SAR ADC then looks like Fig. 1.

Fig. 1 Linear model when the residue sampling capacitor is much smaller than the array capacitor

If an integrator is added to Fig. 1, the noise transfer function (NTF) becomes identical to that of 1st-order noise shaping: $NTF(z)=1-z^{-1}$.

The corresponding hardware implementation could look like this:

1st-order noise shaping is finally achieved! BUT, circuit design is all about compromise. There are some concerns; a few of them are listed below:

1. The kT/C noise of the residue sampling capacitor C_R is not noise-shaped anymore
2. Of course, you can never get an amplifier with infinite gain
3. Residue attenuation due to charge sharing between the sampling capacitor and the parasitic capacitance at the amplifier input
4. Switch-induced errors

I would like to stop here (because the weekend is coming ;-).

If you want to know more about practical solutions, I recommend the interesting and well-written paper [1]. I would like to thank the authors; I enjoyed reading their paper a lot.

References:

[1] J. A. Fredenburg and M. P. Flynn, "A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC", JSSC, vol. 47, 2012.


## Brief Study of Noise-Shaping SAR ADC – Part B

In the previous post, I’ve shared some basics of sigma-delta ADC. In this post, before we look at the noise-shaping SAR ADC, let’s again do a warm-up.

The z-domain linear model for a 1st-order sigma-delta modulator:

The linear model has the same transfer functions as the one in Fig.6 of the previous post, where a delaying integrator is used as the loop filter.

Before the quantizer, the modulator is doing two tasks:

1. Δ: generate the conversion residue R (= U − V)

2. Σ: accumulate all the previous residues

Keep this in mind. Now let’s try to make the SAR do the noise shaping.

When the conversion is complete for an N-bit SAR, the voltage generated at the top plate of the DAC represents the difference between the sampled input and a representation constructed from the decisions of the N−1 most significant bits: $V_{DAC} = \frac{V_{REF}}{2}D_{N-1}+\frac{V_{REF}}{2^2}D_{N-2}+\cdots+\frac{V_{REF}}{2^{N-1}}D_1-V_{IN}$

If we do one extra switching of the DAC array based on the final LSB decision, the voltage at the DAC top plate becomes: $V_{DAC}^{+1} = \frac{V_{REF}}{2}D_{N-1}+\cdots+\frac{V_{REF}}{2^{N-1}}D_1+\frac{V_{REF}}{2^N}D_0-V_{IN}$

Yes! We have caught the conversion residue! It simplifies to: $V_{RES} = D_{OUT}-V_{IN}$

According to Fig.1, the simplified equation can be rewritten as $-V_{RES} = V_{IN}-D_{OUT}$.
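The residue generation can be sketched behaviorally in Python. This is an idealized model with my own function name; comparator noise and the actual capacitor switching details are ignored:

```python
def sar_residue(v_in, v_ref=1.0, n_bits=8):
    """Behavioral model of an ideal N-bit SAR conversion with the extra LSB
    switching included, so the DAC top plate ends at D_OUT - V_IN.
    Returns (d_out, residue)."""
    d_out = 0.0
    for i in range(1, n_bits + 1):       # MSB (i = 1) down to LSB (i = n_bits)
        trial = d_out + v_ref / 2**i     # tentatively raise the current bit
        if v_in >= trial:                # comparator decision
            d_out = trial                # keep the bit high
    return d_out, d_out - v_in           # residue bounded by one LSB

d_out, res = sar_residue(0.3)
print(d_out, res)
```

For any input within the full-scale range, the returned residue equals D_OUT − V_IN and its magnitude stays below one LSB.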

Step 1: sample the residue on the extra capacitor.

Fig.3 Sample the residue on the extra capacitor (the discrete-time notation indicates the current sample and the previous one)

Step 2: apply the sampled residue to the opposite input of the comparator during the next conversion.

Fig.4 Apply the residue to the opposite input of the comparator when the next sample is converted

Now we come to the discussion of choosing the value of C_R.

Assume $C_R = k C_{DAC}$. Then $V_R(n) = k_1 V_{RES}(n) + k_2 V_R(n-1)$, where $k_1=\frac{1}{1+k}$ and $k_2=\frac{k}{1+k}$.
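Assuming the recursion comes from simple charge sharing between the DAC array (holding the new residue) and C_R (holding the previous sample), a quick numerical check in Python with an arbitrary example ratio k:

```python
def share(c_dac, c_r, v_res, v_r_prev):
    """Voltage after connecting the DAC array (at v_res) to C_R (at v_r_prev)."""
    return (c_dac * v_res + c_r * v_r_prev) / (c_dac + c_r)

k = 0.25                                   # assumed example ratio C_R / C_DAC
k1, k2 = 1.0 / (1 + k), k / (1 + k)
v_direct = share(1.0, k, 0.8, -0.2)        # C_DAC normalized to 1
v_formula = k1 * 0.8 + k2 * (-0.2)
print(v_direct, v_formula)                 # the two expressions agree
```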

What will the linear model look like?

If $C_R \ll C_{DAC}$ ($k \approx 0$), then $k_1 \approx 1$ and $k_2 \approx 0$: the memory of the previous residues is ignored and only the current residue is recorded. The linear model can be simplified to:

Take a look at the magnitude responses of the NTFs under different k:

Fig. 5 Magnitude responses of the NTFs under different k, compared to 1st-order noise shaping

Noise does get shaped! In addition, it seems a small residue sampling capacitor works fairly well compared to larger ones (note that the kT/C noise sampled during residue sampling appears at the comparator input and is shaped together with the quantization noise and the input-referred comparator noise [1]).

However, compared to the 1st-order modulator, this way of noise shaping is much less efficient. We can do better! How? The next post ;-).

References:

[1] J. A. Fredenburg and M. P. Flynn, "A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC", JSSC, vol. 47, 2012.


## Brief Study of Noise-Shaping SAR ADC – Part A

Sometimes it is much easier to become a fan of something when you only know a little about it. Take the Sigma-Delta ADC: it is so complicated that even though I have studied it several times, I still can't fully understand it!

Nevertheless, I am still a fan of it ;-).

Sigma-Delta ADCs dominate the high-resolution domain (though they are not extremely fast; actually, kind of slow…).

Fig. 1 Signal-to-noise-and-distortion ratio (SNDR) versus sampling frequency of Sigma-Delta ADCs and other Nyquist ADCs (SAR, Pipeline, and Flash). The data were reported at ISSCC and collected in Murmann's ADC survey.

I am currently working on successive-approximation-register (SAR) ADCs.

Fig. 2 Energy (P/fs) versus SNDR of SAR ADCs, Sigma-Delta ADCs, and other Nyquist ADCs (Pipeline and Flash). The data were again extracted from Murmann's ADC survey.

In order to achieve high resolution, can SARs shape the noise just as Sigma-Delta ADCs do?

People have tried to employ the noise-shaping technique in the SAR architecture [2, 3], but so far the reported performance (with chip measurement) is not very compelling (SNDR = 62 dB, Power = 806 uW, Bandwidth = 11 MHz, FoM = 35.8 fJ/conv).

Nevertheless, the idea of noise-shaping SAR is so intriguing.

Before entering this topic, I would like to do some warm-up – some basics of Sigma-Delta ADCs (yes, that's all I know about them).

1. Oversampling

Fig.3 Brief illustration of oversampling (OSR is the abbreviation for oversampling ratio)

Doubling the sampling frequency gives a 3 dB increase in SNR. However, oversampling is seldom used alone; it is commonly combined with the noise-shaping technique.

2. Noise-shaping

Filtering is introduced into the ADC to further suppress the in-band quantization noise power while leaving the input signal unaffected. By applying a loop filter before the quantizer and introducing feedback, a sigma-delta modulator is built.

3. Linear model of a sigma-delta modulator

Fig.5 Linear model of a sigma-delta modulator. STF and NTF are abbreviations for signal transfer function and noise transfer function, respectively. (See Schreier's book for more information.)

According to the STF and NTF, if the transfer function of the loop filter H(z) is designed to have large gain inside the band of interest and small gain outside it, the signal passes through the modulator while the noise is greatly suppressed.

4. If an integrator is chosen as the loop filter

Fig. 6 Modulator with an integrator as the loop filter, and its STF and NTF

We plot H(f), STF(f), and NTF(f) (Matlab's 'fvtool' is used):

Bingo! The signal is passed to the output with a delay of a clock cycle, while the quantization noise is passed through a high-pass filter.

Doubling the sampling frequency gives a 9 dB increase in SNR for 1st-order noise shaping.
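Both rules of thumb (3 dB per octave for plain oversampling, 9 dB for 1st-order shaping) can be checked by integrating the noise power over the signal band, using |NTF| = |1 − e^{−j2πf}| = 2|sin(πf)| for 1st-order shaping. A sketch with frequencies normalized to fs, not a rigorous SNR calculation:

```python
import math

def inband_noise_db(osr, order):
    """Integrate |NTF(f)|^2 over the signal band [0, 0.5/OSR], with f
    normalized to fs. |NTF| = 1 for plain oversampling (order 0) and
    2*sin(pi*f) for 1st-order noise shaping (order 1)."""
    n_pts = 100000
    fb = 0.5 / osr
    total = 0.0
    for i in range(n_pts):
        f = (i + 0.5) * fb / n_pts                  # midpoint rule
        total += (2.0 * math.sin(math.pi * f)) ** (2 * order) * (fb / n_pts)
    return 10.0 * math.log10(total)

for order, label in ((0, "oversampling only"), (1, "1st-order shaping")):
    gain = inband_noise_db(8, order) - inband_noise_db(16, order)
    print(f"{label}: doubling OSR lowers in-band noise by {gain:.2f} dB")
```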

5. Get more aggressive on the order

This post tells the basic story of noise-shaping. In the next post, I will try to learn how noise-shaping can be used in SAR ADCs.

References:

K. S. Kim, J. Kim, and S. H. Cho, "nth-order multi-bit ΣΔ ADC using SAR quantiser", Electronics Letters, vol. 46, 2010.

J. A. Fredenburg and M. P. Flynn, "A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC", JSSC, vol. 47, 2012.

R. Schreier and G. C. Temes, Understanding Delta-Sigma Data Converters, 2005.


## Noise Effect On The Distribution of ADC Output Codes

In the previous post, the probability of a comparator decision in the presence of noise was calculated. In this post, the topic of noise and probability continues. The whole topic was actually inspired by a 1986 paper [1], which discusses the noise effect on the distribution of SAR ADC output codes. Following the author's method, and though I end up with slightly different results, I still found some interesting things that I would like to post here.

Assume an input voltage $V_i$ is applied to an ADC, and the ADC has input-referred noise with a standard deviation of $\sigma$. The input voltage is then compared with a certain reference voltage $V_r$ to determine the corresponding bit. Referring to the equations derived in the previous post, the probability of the bit being high is $P(D_j = 1)=\frac{1}{2} erfc(\frac{V_r-V_i}{\sqrt{2}~ \sigma})$.

Similarly, the probability of the bit being low is $P(D_j = 0)=\frac{1}{2} erfc(\frac{V_i-V_r}{\sqrt{2}~ \sigma})$.

Considering the time sequence in which Nyquist ADCs generate their digital outputs, I roughly group them into two categories: outputs converted simultaneously and outputs converted successively. The former needs $2^N-1$ comparisons for N bits, while the latter needs only $N$, at the cost of speed. I'm more interested in the second case: lazy and slow, but still doing the job ;-).

Problem formulation: due to noise, the input voltage can be converted to erroneous output codes or the correct one (indicated with yellow and blue backgrounds in Fig.1, respectively). The probability of each output code for one particular input is of interest.

Fig.1 An example of a 3-bit ADC. Due to noise, the input can be mapped to erroneous output codes (yellow background) or the correct one (blue background). The probability of each mapping is of interest.

How to calculate the probability of one particular output code?

Fig. 2 gives an example of the probability calculation for code "100" converted by a 3-bit SAR ADC, where the input voltage $V_i$ corresponds to code "101". The calculation starts from the MSB and proceeds to the LSB. The probability of each less-significant bit depends on the outcomes of the more-significant bits. Finally, the probability of a given code is the product of the probabilities of its individual bits.

Fig. 2 Probability calculation of code "100" generated by a 3-bit SAR ADC.
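The procedure can be sketched in Python, assuming the noise of each comparison is independent (function names are mine): the per-bit probabilities use the erfc expressions above, and the threshold of each comparison is set by the more significant bits of the target code.

```python
import math

def p_bit(v_r, v_i, sigma, bit):
    """Probability that comparing noisy v_i against threshold v_r yields `bit`."""
    x = (v_r - v_i) / (math.sqrt(2.0) * sigma)
    return 0.5 * math.erfc(x) if bit == 1 else 0.5 * math.erfc(-x)

def p_code(code, v_i, sigma, n_bits=3, v_ref=1.0):
    """Probability that a noisy N-bit SAR ADC outputs `code` for input v_i.
    The threshold tried at each step depends on the already-decided bits."""
    p, acc = 1.0, 0.0
    for i in range(1, n_bits + 1):
        bit = (code >> (n_bits - i)) & 1     # bit of `code` tried at step i
        v_r = acc + v_ref / 2**i             # comparison threshold at step i
        p *= p_bit(v_r, v_i, sigma, bit)
        if bit:
            acc = v_r                        # keep the bit in the DAC value
    return p

probs = [p_code(c, 0.55, 0.02) for c in range(8)]
print([f"{p:.4f}" for p in probs], sum(probs))
```

Because each comparison's High and Low probabilities sum to one, the probabilities over all 2^N codes sum to one, which is a handy sanity check.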

Knowing how to calculate the probability of a given code, I tried to look at noise specifications from a statistical point of view. Two noise specifications are commonly used (in academia): noise power equal to the quantization noise power, or noise standard deviation equal to 1 LSB. The former introduces a 3 dB loss of SNR, the latter 11 dB. How does the code distribution look under these two specifications?

Noise power equal to the quantization noise:

Fig. 3 Probability of output codes for a 10-bit SAR ADC with an analog input corresponding to code 510 + 1/4 LSB offset and input-referred noise power equal to its quantization noise.

Noise standard deviation equal to 1 LSB:

Fig. 4 Probability of output codes for a 10-bit SAR ADC with an analog input corresponding to code 510 + 1/4 LSB offset and input-referred noise standard deviation equal to 1 LSB.

Sorry for the math. Like nonsense in the middle of a summer day. Sleepy?

Reference

[1] Philip W. Lee, "Noise considerations in high-accuracy A/D converters", JSSC, 1986.

## Noise Effect On The Probability of Comparator Decision

In the previous post, it was explained why a normal distribution with a standard deviation of $\sigma$ is used to characterize circuit input-referred noise. In this post, we calculate the probability of a comparator decision (High or Low) in the presence of noise.

We ignore the DC offset of the comparator and consider only the thermal-noise effect. The distribution of voltages presented to the comparator input, shown in Fig.1, can then be represented by a normal distribution whose standard deviation equals the $\sigma$ of the comparator input-referred noise and whose mean is shifted by the signal $V_C$.

Fig. 1 Input-referred distribution of voltages presented to the comparator. The shaded area represents the probability of the comparator deciding the input voltage is low.

The probability of a Low decision is simply the area under the curve to the left of zero, indicated by the shaded region in Fig.1. In terms of the error function (see P1, P2, and P3 in the Appendix), it is $P(\textit{Low})=\frac{1}{2}+\frac{1}{2} erf(\frac{-V_C}{\sqrt{2}~ \sigma})$.

Noting that the error function is odd (P5 in the Appendix), we can further write $P(\textit{Low})=\frac{1}{2}-\frac{1}{2} erf(\frac{V_C}{\sqrt{2}~ \sigma})$.

Finally, the probability can be expressed in terms of the complementary error function (P4 in the Appendix): $P(\textit{Low})=\frac{1}{2} erfc(\frac{V_C}{\sqrt{2}~ \sigma})$.

Similarly, the probability of a High decision is given by the remaining unshaded area under the curve in Fig.1: $P(\textit{High})=\frac{1}{2} erfc(\frac{-V_C}{\sqrt{2}~ \sigma})$.
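The two expressions are easy to sanity-check against a Monte Carlo experiment (a Python sketch with arbitrary example values for V_C and σ):

```python
import math, random

def p_low(v_c, sigma):
    """P(Low) = 0.5 * erfc(V_C / (sqrt(2) * sigma)), from the derivation above."""
    return 0.5 * math.erfc(v_c / (math.sqrt(2.0) * sigma))

def p_high(v_c, sigma):
    """P(High) = 0.5 * erfc(-V_C / (sqrt(2) * sigma))."""
    return 0.5 * math.erfc(-v_c / (math.sqrt(2.0) * sigma))

random.seed(0)
v_c, sigma, trials = 1e-3, 2e-3, 200000    # arbitrary example values
mc = sum(v_c + random.gauss(0.0, sigma) < 0.0 for _ in range(trials)) / trials
print(p_low(v_c, sigma), mc)               # analytic value vs Monte Carlo
```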

In the next post, we will use the derived equations to show noise effect on the distribution of ADC output codes.

Appendix

Some equations on standard normal distribution:

1. Probability density function (PDF) $\phi(x)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{x^2}{2}}$.

Denoting the mean as $\mu$ and the standard deviation as $\sigma$, the PDF of a general normal distribution can be expressed as $f(x)= \frac{1}{\sigma} \phi(\frac{x-\mu}{\sigma})$.

2. Cumulative distribution function (CDF) $\Phi(x)=P[X \leq x] = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}}dt$.

3. Error function $erf(x)=\frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt$.

Hence, the CDF can be further expressed using error function $\Phi(x)=\frac{1}{2}+\frac{1}{2} erf(\frac{x}{\sqrt{2}})$.
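The identity in P3 can be checked by integrating the PDF numerically (a small sketch):

```python
import math

def phi_cdf(x, steps=200000, lo=-10.0):
    """Standard normal CDF by midpoint-rule integration of the PDF,
    truncating the lower tail at `lo` (negligible below -10)."""
    h = (x - lo) / steps
    s = sum(math.exp(-0.5 * (lo + (i + 0.5) * h) ** 2) for i in range(steps))
    return s * h / math.sqrt(2.0 * math.pi)

x = 1.3
print(phi_cdf(x), 0.5 + 0.5 * math.erf(x / math.sqrt(2.0)))  # identity P3
```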

4. Complementary error function $erfc(x)=\frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}dt = 1 - erf(x)$.

5. Error function is an odd function. $erf(-x)=- erf(x)$.


## Normal Distribution and Input-Referred Noise

Normal distribution is frequently assumed when we do circuit analysis.

• Why?

Because there is a saying that the sum of a large number of random variables converges to the normal distribution.

• Under what condition is this true?

The central limit theorem addresses this point. In [1], it is stated that "the normalized sum of a large number of mutually independent random variables with zero means and finite variances tends to the normal probability distribution function provided that the individual variances are small compared to the total sum of variances".

• Why do people sometimes assume a normal distribution and use $\sigma$ to characterize circuit input-referred noise?

It's understandable that people use $\sigma$ to characterize the offset of a circuit, because many random effects during fabrication tend toward a normal distribution. But when it comes to noise, 4kTRBW or kT/C pops up in our minds. Why $\sigma$?

In the frequency domain, it is straightforward to do noise analysis given the noise power spectral density and the bandwidth. However, moving to the time domain, we need the assumption of a normal distribution and its $\sigma$.

• How can time-domain noise be linked to frequency-domain noise? Or, to put it another way, how is $\sigma$ related to the noise power spectral density $n^2(f)$ and the bandwidth $F_{max}$?

Cadence should have the answer, because they provide transient noise simulation. And indeed, I found the answer in their application note [2]. 😉

Let’s take white noise as an example for simplicity (in this case, $n^2(f)=n^2$). In the time domain, a white noise signal n(t) is approximated as $n(t)=\sigma \cdot \eta(t, \Delta t)$,

where $\eta(t, \Delta t)$ is a random number with standard normal distribution updated with time interval $\Delta t$. The noise signal amplitude and update time interval are $\sigma = \sqrt{n^2 \cdot F_{max}}$, $\Delta t = \frac{1}{2F_{max}}$.

Let’s then verify it. The auto-correlation function for this noise signal is calculated to be $n^2(t)=\sigma^2 \Lambda(\frac{t}{\Delta t})$,

where $\Lambda$ is a triangular pulse function of width $\Delta t$. The power spectrum of n(t) can then be calculated as a Fourier transform of the auto-correlation function $n^2(f)=\sigma^2 \cdot \Delta t \cdot sinc^2(f \cdot \Delta t)$.

Finally, the total noise power is obtained by integrating over frequency: $\int_0^\infty n^2(f)\,df = n^2 \cdot F_{max}$.

Fig.2 The noise signal, its auto-correlation function, and its spectral density [2]

For a more detailed explanation, please refer to [2].

Reference

[1] H. Stark and J. W. Woods, Probability and Random Processes with Applications to Signal Processing, 3rd edition, Pearson Education, 2009.

[2] Cadence, "Application notes on direct time-domain noise analysis using Virtuoso Spectre", Version 1.0, July 2006.


## Stay Simple – Square-Law Equation Related

The 1st post in 2013. Happy middle new year! 😉

Nowadays most of us in academia are designing analog circuits in deep-submicron nodes using short-channel transistors. Putting them together with the digital, building up a system, and selling it with extraordinary performance at top conferences or in journals – that's the dream (and sometimes a real nightmare) of a PhD student…

So, do you miss the old times when you designed circuits with long-channel transistors?

I do! The square-law equation is so neat that the relation among the drain current, the effective voltage, and the transconductance can simply be expressed by $I_D = \frac{V_{GS}-V_T}{2} g_m$

At a given effective gate-source voltage, the transconductance follows the bias current linearly. It is quite straightforward to tell how much power you will pay for your target gain.

Soon, short-channel transistors and low supply voltages came along ;-). Life became less straightforward. In [1] and [2], the authors introduce a parameter Veff, defined by $V_{eff}=\frac{I_D}{g_m}$

They also compare this parameter with the square-law equation in the following figure.

Yes! Modern short-channel transistors are often biased in the transition region between the subthreshold and saturation regions, which renders the classical equations useless! The proposal of Veff is so decent because it provides a simple way to analyze the power bounds of analog circuits in modern CMOS [1].

If you ponder on the above simulated curves, do you remember something which also has the beauty of simplicity and continuity?

Yes, the EKV model [3]! In this model, Veff can be expressed in terms of the inversion coefficient: $V_{eff}= \frac{I_D}{g_m} = n U_T (\sqrt{IC + 0.25}+0.5)$

And the effective gate-source voltage is expressed as $V_{GS}-V_T= 2 n U_T \textnormal{ln}(e^{\sqrt{IC}}-1)$
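The two asymptotes of Veff are easy to confirm numerically, again assuming illustrative values n = 1.35 and UT = 25.85 mV:

```python
import math

n, U_T = 1.35, 0.02585    # assumed slope factor and room-temperature U_T (V)

def v_eff(ic):
    """V_eff = I_D / g_m = n * U_T * (sqrt(IC + 0.25) + 0.5) (EKV)."""
    return n * U_T * (math.sqrt(ic + 0.25) + 0.5)

# Weak inversion: V_eff -> n*U_T, i.e. g_m = I_D / (n*U_T) (exponential law)
# Strong inversion: V_eff -> n*U_T*sqrt(IC) ~ (V_GS - V_T)/2 (square law)
print(v_eff(1e-6) / (n * U_T), v_eff(1e6) / (n * U_T * 1e3))
```

In weak inversion Veff flattens at n·UT, while deep in strong inversion it approaches (VGS − VT)/2, recovering the square law.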

Out of curiosity, I plotted Veff versus VGS−VT based on the EKV model, together with simulated curves obtained using a method similar to that in [1] but for different processes. The figure is shown below.

Fig. 2 Veff versus VGS−VT with the EKV model included (different inversion regions are also indicated).

Though the analog world can’t be as simple as 0 or 1, still the simpler the better!

Reference

[1] T. Sundström, B. Murmann, and C. Svensson, "Power dissipation bounds for high-speed Nyquist analog-to-digital converters", TCAS-I, 2008.

[2] C. Svensson and J. J. Wikner, "Power consumption of analog circuits: a tutorial", Analog Integrated Circuits and Signal Processing, Springer, 2010.

[3] D. M. Binkley, B. J. Blalock, and J. M. Rochelle, "Optimizing drain current, inversion level, and channel length in analog CMOS design", Analog Integrated Circuits and Signal Processing, Springer, 2006.
