1. **Usually the most difficult condition is unity-gain feedback.** As Fig.1 shows, the closed-loop bandwidth is normally smaller than or equal to the unity-gain bandwidth. This means the phase drop reaches its maximum at the unity-gain point, making unity-gain feedback the most difficult condition for stability.

2. **Adding a Miller capacitor between the two cascaded stages is a common technique.** As Fig.2 shows, K is the ratio between the second pole and the GBW, which determines the phase margin (referring to this post). In addition, to push the right-half-plane (RHP) zero well beyond the second pole, it's better to keep Cm much smaller than C2. This further puts a demand on the ratio between gm1 and gm2. Finally, the trade-off between noise/speed (small gm1) and current consumption (large gm2) lands on the desk (as expected).
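To make the trade-off concrete, the ratio K and the resulting phase margin can be sketched numerically. The expressions below (GBW ≈ gm1/Cm, second pole ≈ gm2/C2, RHP zero ≈ gm2/Cm) are the usual simplified two-stage results, and all element values are made-up examples, not taken from the figures:

```python
import math

def miller_pm(gm1, gm2, Cm, C2):
    """Phase margin of a two-stage Miller-compensated amplifier in
    unity-gain feedback, using the simplified textbook expressions."""
    gbw = gm1 / Cm     # unity-gain bandwidth (rad/s)
    p2 = gm2 / C2      # non-dominant pole (assumes Cm >> C1)
    z = gm2 / Cm       # right-half-plane zero
    K = p2 / gbw       # pole-to-GBW ratio that sets the phase margin
    # the RHP zero adds phase lag, hence the second arctan term
    pm = 90.0 - math.degrees(math.atan(gbw / p2)) - math.degrees(math.atan(gbw / z))
    return K, pm

K, pm = miller_pm(gm1=100e-6, gm2=1e-3, Cm=1e-12, C2=2e-12)
print(f"K = {K:.1f}, PM = {pm:.1f} deg")  # K = 5.0, PM = 73.0 deg
```

With gm2 = 10·gm1 and Cm = C2/2, K lands at 5 and the margin is comfortable; shrinking gm2 quickly erodes it.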

3. **Introducing a nulling resistor is the most popular approach to mitigate the positive zero.** Compared to Fig.2, neither the poles (neglecting the third non-dominant pole) nor the GBW changes; only the positive zero does. How do we calculate the new zero? Fig.3 demonstrates a simple way, introduced by Prof. Razavi in his analog design book. One can either use the zero to (try to) cancel the second pole or simply push the zero to infinity.
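The two options can be sketched numerically with the textbook zero expression ωz = 1/[Cm(1/gm2 − Rz)]; the element values below are arbitrary examples:

```python
def nulling_zero(gm2, Cm, Rz):
    """Zero frequency (rad/s) with a nulling resistor Rz in series with Cm:
    wz = 1 / (Cm * (1/gm2 - Rz)). Positive (RHP) for Rz < 1/gm2,
    infinite for Rz = 1/gm2, negative (LHP) for Rz > 1/gm2."""
    d = Cm * (1.0 / gm2 - Rz)
    return float('inf') if d == 0 else 1.0 / d

gm2, Cm, CL = 1e-3, 1e-12, 2e-12   # arbitrary example values

# Option 1: push the zero to infinity
Rz_inf = 1.0 / gm2
# Option 2: place the zero on the second pole p2 ~ -gm2/CL
Rz_cancel = (Cm + CL) / (gm2 * Cm)

print(nulling_zero(gm2, Cm, Rz_inf))     # inf
print(nulling_zero(gm2, Cm, Rz_cancel))  # ~ -5e8 rad/s, equal to -gm2/CL
```

Note that Rz_cancel = (Cm + CL)/(gm2·Cm) drops the zero exactly onto the approximate second pole −gm2/CL.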

4. **Ahuja compensation [1] is another way to abolish the positive zero.** The cause of the positive zero is the feedforward current through Cm. To abolish this zero, we have to cut the feedforward path and create unidirectional feedback through Cm. Adding a resistor as in Fig.3 is one way to mitigate the effect of the feedforward current. Another approach uses a cascode current buffer to pass the small-signal feedback current while blocking the feedforward current, as depicted in Fig.4. People name this approach after its author, Ahuja.

5. **A good example of using Ahuja compensation is compensating a 2-stage folded-cascode amplifier.** As shown in Fig.5, by utilizing the already-existing cascode stage, Ahuja compensation can be implemented without any additional biasing current. There are two ways to place the Miller capacitor, which normally provide the same poles but different zeros. In [2], the poles and zeros of the two approaches are calculated based on reasonable assumptions. Out of curiosity, I also drew the small-signal model and derived the transfer function. With some patience I finally reached the same result as given in [2].

6. **The bloody equations for the poles and zeros of the two Ahuja approaches are shared in Fig.6.** Though these equations look very dry at the moment, one will appreciate them during an actual design. They did help me stabilize an amplifier with a varying capacitive load. One thing worth looking at is the ratio between the natural frequency of the two complex non-dominant poles and the GBW. Considering that Cm and C2 are normally of the same order and C1 is much smaller, the ratio ends up relatively large and the phase margin can be guaranteed.

Oh…finally, it took me quite some time to reach here. The END.

Reference:

[1] B. K. Ahuja, “An improved frequency compensation technique for CMOS operational amplifiers”, *JSSC*, 1983.

[2] U. Dasgupta, “Issues in ‘Ahuja’ frequency compensation technique”, *IEEE International Symposium on Radio-Frequency Integration Technology*, 2009.

Filed under: Analog Design Tagged: 2-stage folded cascode, Ahuja compensation, Miller compensation, Stability

On the other hand, the annotation of the DC operating point provided by Cadence is really helpful. Now we can even have gm/ID annotated beside the transistor (it is called ‘gmoverid’ in the simulator). Hence, a curve showing the gm/ID–IC relationship will be informative, and Mr. Sansen provides exactly that [1]! It is plotted in Fig.1.

In order to derive the relationship, we first need to recall the following equations:

Based on the above equations, the gm/ID can be derived:
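A sketch of the derivation, assuming the EKV-style expressions from [1] (with $\beta = \mu C_{ox} W/L$):

$$IC = \frac{I_D}{2 n \beta U_T^2}, \qquad I_D = 2 n \beta U_T^2 \,\ln^2\!\left(1 + e^{(V_{GS}-V_T)/(2 n U_T)}\right), \qquad g_m = \frac{\partial I_D}{\partial V_{GS}},$$

which, after differentiating and substituting $\sqrt{IC} = \ln\!\bigl(1 + e^{(V_{GS}-V_T)/(2 n U_T)}\bigr)$, gives

$$\frac{g_m}{I_D} = \frac{1 - e^{-\sqrt{IC}}}{n\, U_T \sqrt{IC}}.$$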

Now we may have a rough idea of IC based on the annotated gm/ID (assuming nUT is about 35 mV).

| gm/ID (V⁻¹) | 25 | 18 | 9 |
|---|---|---|---|
| IC | 0.1 | 1 | 10 |
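Assuming the gm/ID(IC) expression from [1] with nUT ≈ 35 mV, a small script can invert an annotated gm/ID back to IC by bisection (a sketch; the table values above are rounded):

```python
import math

def gm_over_id(ic, n_ut=0.035):
    """gm/ID in 1/V versus inversion coefficient (EKV-style, per [1])."""
    s = math.sqrt(ic)
    return (1.0 - math.exp(-s)) / (n_ut * s)

def ic_from_gm_over_id(target, lo=1e-4, hi=1e4):
    """Invert gm_over_id by bisection; it decreases monotonically with IC."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # geometric midpoint, since IC spans decades
        if gm_over_id(mid) > target:
            lo = mid               # gm/ID too high -> IC must be larger
        else:
            hi = mid
    return math.sqrt(lo * hi)

for g in (25, 18, 9):
    print(g, round(ic_from_gm_over_id(g), 2))
```

For gm/ID = 18 this lands almost exactly on IC = 1; the 25 and 9 entries come out near (though not exactly at) 0.1 and 10, consistent with them being rough markers.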

**Reference**

[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure”, *IEEE Solid-State Circuits Magazine*, Fall 2015.

Filed under: Analog Design, MOS Models Tagged: EKV, gm/ID, Inversion Coefficient, moderate inversion

With the help of the Gm/Id design kit, I can easily visualize the transistor performance as a function of its gate-source voltage (see Fig.1). As VGS increases, the transistor passes through weak, moderate, and strong inversion. For high gain, we go left; for high speed, we go right. At the far left, the gain is no longer increasing but the speed drops extremely low; at the far right, the speed is no longer increasing but the drain current is still climbing! For a decent figure-of-merit (speed × gain), go to the middle, go moderate!

As CMOSers, we love the square-law equation, and we sometimes hate and sometimes embrace the exponential subthreshold current equation. But for the current flowing between the strong and the weak, do we have one equation? No, but yes…by doing some math, the EKV model combines all three. Referring to [1], the IC–V related equations are copied as follows:

$$IC = \frac{I_D}{2 n \beta U_T^2}, \qquad \beta = \mu C_{ox} \frac{W}{L},$$

$$IC_{weak} = e^{(V_{GS}-V_T)/(n U_T)}, \qquad IC_{strong} = \left(\frac{V_{GS}-V_T}{2 n U_T}\right)^{2},$$

$$IC = \ln^2\!\left(1 + e^{(V_{GS}-V_T)/(2 n U_T)}\right),$$

where n is the subthreshold slope factor and UT is the thermal voltage. At room temperature, 2nUT is about 70 mV [1]. As Fig.2 shows, the IC–V curve matches the weak-inversion equation well for IC < 0.1 and the strong-inversion equation for IC > 10; moderate inversion lies where IC is between 0.1 and 10.
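A quick numerical check of these limits (a sketch; n = 1.35 and UT = 25.9 mV are assumed typical values so that 2nUT ≈ 70 mV):

```python
import math

n, ut = 1.35, 0.0259   # assumed typical values: 2*n*UT ~ 70 mV

def ic_of_v(vov):
    """EKV interpolation: IC = ln^2(1 + exp(v)), v = (VGS - VT)/(2*n*UT)."""
    v = vov / (2.0 * n * ut)
    return math.log1p(math.exp(v)) ** 2

# weak asymptote: exp((VGS-VT)/(n*UT)); strong asymptote: ((VGS-VT)/(2*n*UT))^2
vov = -0.25
print(ic_of_v(vov), math.exp(vov / (n * ut)))      # agree within a few percent
vov = 0.40
print(ic_of_v(vov), (vov / (2.0 * n * ut)) ** 2)   # agree within a percent
```

Deep in weak inversion (IC well below 0.1) and deep in strong inversion (IC well above 10) the interpolation collapses onto the respective asymptote, exactly as Fig.2 shows.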

**Reference**

[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure”, *IEEE Solid-State Circuits Magazine*, Fall 2015.

Filed under: Analog Design, MOS Models Tagged: gm/ID design methodology, inversion coefficient approach, moderate inversion

The amplifying system may include multiple poles:

$$\beta A(s) = \frac{\beta A_0}{(1 + s/\omega_1)(1 + s/\omega_2)\cdots(1 + s/\omega_n)}.$$

Neglecting higher-order terms, it can be simplified to a two-pole system: one dominant pole and one equivalent non-dominant pole, which is approximately given by:

$$\frac{1}{\omega_{eq}} \approx \sum_{i \ge 2} \frac{1}{\omega_i}.$$

The frequency of interest is where the loop-gain magnitude is close to unity, denoted ωt. Normally ωt is much larger than the dominant pole. Hence, βA(s) around ωt can be further simplified to:

$$\beta A(s) \approx \frac{\beta A_0}{(s/\omega_1)(1 + s/\omega_{eq})}.$$

Considering that the first pole introduces a −90° phase shift, the phase of the loop gain at ωt is:

$$\angle \beta A(j\omega_t) = -90^{\circ} - \arctan\!\left(\frac{\omega_t}{\omega_{eq}}\right).$$

Consequently, the phase margin (PM) is obtained by adding 180° to the phase of the loop gain, and it is written as:

$$\mathrm{PM} = 90^{\circ} - \arctan\!\left(\frac{\omega_t}{\omega_{eq}}\right) = \arctan\!\left(\frac{\omega_{eq}}{\omega_t}\right).$$

It can be seen that the phase margin is determined by the relative position between the equivalent non-dominant pole and the unity loop gain bandwidth.

| ωeq/ωt | 0.5 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| PM | 26.6° | 45° | 63.4° | 71.6° | 76° |
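The table can be reproduced directly from the PM expression (a one-liner sketch):

```python
import math

def phase_margin(ratio):
    """PM in degrees from the ratio w_eq / w_t, per PM = arctan(w_eq/w_t)."""
    return math.degrees(math.atan(ratio))

for r in (0.5, 1, 2, 3, 4):
    print(r, round(phase_margin(r), 1))   # 26.6, 45.0, 63.4, 71.6, 76.0
```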

Filed under: Analog Design Tagged: Feedback circuit, loop gain, Phase margin

Earlier in 2012, I wrote an introductory post about the EKV model and later extended the topic a little in another post – Stay Simple – Square-Law Equation Related. Since then I have kept following developments around the EKV model and the inversion-coefficient-based analog design methodology.

One of the major contributors to this design methodology is Prof. Willy Sansen. He has given a short tutorial named *Impact of Scaling on Analog Design*. The tutorial was organized by ISSCC through edX (free access after registration). Most recently he also published an article [1] summarizing his ideas in the IEEE Solid-State Circuits Magazine.

The journey starts with a beautiful equation which nicely links the weak and the strong inversion (see the curve in Fig.1).

Fascinated by Prof. Sansen’s design procedure, I tried to apply it to my daily design work. Theoretically, it does give me a broader view and some insight into low-power design. In practice, however, I find it difficult to make full use of, especially now that most designs enter the deep-submicron regime and the model parameters become very complicated to interpret.

Then comes another big name – Prof. Boris Murmann. Yes, the professor who provides the famous ADC performance survey! Now he has also launched his gm/Id starter kit. The kit provides scripts that co-simulate between a SPICE simulator and Matlab and store transistor DC parameters in Matlab files. The stored data can then be used for systematic circuit design in Matlab. It looks brute-force but is smart and efficient!

It’s free. Enjoy!

**Reference**

[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure”, *IEEE Solid-State Circuits Magazine*, Fall 2015.

Filed under: Analog Design, MOS Models Tagged: EKV, gm/ID, Inversion Coefficient

A small summary of performing noise shaping on the SAR architecture from Part B:

- let the DAC array complete all the switching based on the decisions from MSB to LSB (the conversion residue is generated)
- sample the conversion residue on an extra capacitor
- apply the residue with opposite sign to the opposite terminal of the comparator

If the extra capacitor is much smaller than the array capacitor, the current residue is sampled and there is almost no memory effect. The linear model of the SAR ADC looks like:

If an integrator is added to Fig. 1, the noise transfer function NTF becomes identical to the 1st-order noise shaping:

The corresponding hardware implementation could look like this:

**1st-order noise shaping is finally achieved! BUT, circuit design is all about compromise.** There are some concerns; just to list a few:

- the kT/C_R noise is not noise-shaped anymore
- of course, you can never get an amplifier with infinite gain
- residue attenuation due to charge sharing between sampling capacitor and parasitic capacitor at the amplifier input
- switch-induced error

I would like to stop here (because the weekend is coming ;-).

If you want to know more about practical solutions, I recommend the interesting and well-written paper [1]. I would like to thank the authors; I enjoyed reading their paper a lot.

References:

[1] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-shaping SAR ADC”, *JSSC*, vol.47, 2012.

Filed under: Data Converter Tagged: Noise-shaping SAR

**The z-domain linear model for a 1st-order sigma-delta modulator:**

The linear model has the same transfer functions as the one in Fig.6 of the previous post, where a delaying integrator is used as the loop filter.

**Before the quantizer, the modulator is doing two tasks:**

**1. Δ: generate the conversion residue R (= U − V)**

**2. ∑: add up all the previous residues**

Keep this in mind. Now let’s try to make the SAR do the noise shaping.

A conventional charge-redistribution SAR ADC:

When the conversion is complete for an N-bit SAR, the magnitude of the voltage generated at the top plate of the DAC represents the difference between the sampled input and the representation constructed from the decisions of the N−1 most significant bits:

If we do one extra switching of the DAC array based on the final decision of LSB, we recalculate the voltage generated at the DAC top plate:

**Yes! We catch the conversion residue!** It is further simplified as follows:

According to Fig.1, the simplified equation can be rewritten as $R = U - V$.

Then we need to sample this residue and store it somewhere else. How about this method?

**Step 1: sample the residue on the extra capacitor**

**Step 2: apply the sampled residue to the opposite input of the comparator during the next conversion**

**Now it comes to the discussion about choosing the value of C_R.**

**Assume** $k = C_R/(C_R + C_{DAC})$, the charge-sharing ratio between the residue sampling capacitor and the DAC array.

**Then** the voltage stored on $C_R$ follows $V_{C_R}[n] = (1-k)\,V_{res}[n] + k\,V_{C_R}[n-1]$,

and in the z-domain $V_{C_R}(z) = \dfrac{1-k}{1 - k z^{-1}}\, V_{res}(z)$.

What will the linear model look like?

If $k \ll 1$ (i.e., $C_R \ll C_{DAC}$), the memory of the previous residues is ignored and only the current residue is recorded. The linear model can be simplified to:

Take a look at the magnitude responses of the NTFs under different k:

Noise does get shaped! In addition, it seems a small residue sampling capacitor works fairly well compared to larger ones (note that the kT/C noise during residue sampling presents itself at the comparator input and can be shaped together with the quantization noise and the input-referred comparator noise [1]).

However, compared to the 1st-order modulator, this way of noise shaping is much less efficient. We could do better! How? The next post ;-).

References:

[1] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-shaping SAR ADC”, *JSSC*, vol.47, 2012.

Filed under: Data Converter Tagged: Noise-shaping SAR

Nevertheless, I am still a fan of it ;-).

**Sigma-Delta ADCs dominate in the high-resolution domain (though they are not extremely fast, actually kind of slow…).**

**SAR ADCs are quite energy-efficient, but less accurate than Sigma-Delta ADCs.**

People have tried to employ the noise-shaping technique in the SAR architecture [2, 3], but so far the reported performance (with chip measurement) is not very compelling (SNDR = 62 dB, Power = 806 uW, Bandwidth = 11 MHz, FoM = 35.8 fJ/conv) [3].

**Nevertheless, the idea of noise-shaping SAR is so intriguing.**

Before entering into this topic, I would like to do some warm-ups – some basics of Sigma-Delta ADCs (yes, that’s all I know about it).

**Some basics of Sigma-Delta ADCs:**

**1. Oversampling**

**Doubling the sampling frequency gives a 3 dB increase in SNR.** However, oversampling is seldom used alone; it is commonly combined with the noise-shaping technique.

**2. Noise-shaping**

Filtering is introduced into the ADC to further suppress the in-band quantization noise power, while leaving the input signal unaffected. By applying a loop filter before the quantizer and introducing feedback, a sigma-delta modulator is built.

**3. Linear model of a sigma-delta modulator**

**4. If an integrator is chosen to be the loop filter**

We plot H(f), STF(f), and NTF(f) (Matlab *fvtool* is used):

Bingo! The signal is passed to the output with a delay of one clock cycle, while the quantization noise is passed through a high-pass filter.

**Doubling the sampling frequency gives a 9 dB increase in SNR for 1st-order noise shaping.**
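The 9 dB/octave figure can be checked numerically by integrating |NTF(e^jω)|² = 4·sin²(ω/2) over the signal band, assuming white quantization noise (a sketch; the OSR values are arbitrary):

```python
import math

def inband_noise(osr, npts=200000):
    """Integrate |1 - e^{-jw}|^2 = 4*sin(w/2)^2 over [0, pi/OSR]
    (white quantization noise assumed; constant factors dropped)."""
    w_max = math.pi / osr
    dw = w_max / npts
    return sum(4.0 * math.sin((i + 0.5) * dw / 2.0) ** 2 * dw
               for i in range(npts))

drop_db = 10.0 * math.log10(inband_noise(32) / inband_noise(64))
print(round(drop_db, 2))   # 9.03 dB per doubling of OSR
```

In the small-band limit the integrand behaves like ω², so the in-band power scales as OSR⁻³, i.e. 10·log10(8) ≈ 9 dB per octave.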

**5. Get more aggressive on the order**

This post tells the basic story of noise-shaping. In the next post, I will try to learn how noise-shaping can be used in SAR ADCs.

References:

[1] B. Murmann, “ADC Performance Survey 1997-2014,” [Online]. Available: http://www.stanford.edu/~murmann/adcsurvey.html.

[2] K. S. Kim, J. Kim, and S. H. Cho, “nth-order multi-bit ΣΔ ADC using SAR quantiser”, *Electronics Letters*, vol. 46, 2010.

[3] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-shaping SAR ADC”, *JSSC*, vol.47, 2012.

[4] R. Schreier and G. C. Temes, *Understanding Delta-Sigma Data Converters*, 2005.

Filed under: Data Converter Tagged: Noise-shaping, SAR, Sigma-delta

Assume an input voltage $V_{in}$ is applied to an ADC, and the ADC has input-referred noise with a standard deviation of $\sigma$. The input voltage is then compared with a certain reference voltage $V_{th}$ to determine the corresponding bit. Referring to the equations calculated in the previous post, the probability of the bit being high is written as

$$P_{high} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{th} - V_{in}}{\sqrt{2}\,\sigma}\right).$$

Similarly, the probability of the bit being low is given by

$$P_{low} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{in} - V_{th}}{\sqrt{2}\,\sigma}\right).$$

Considering the time sequence of Nyquist ADCs when they generate their digital outputs, I roughly group them into two categories: outputs converted simultaneously and outputs converted successively. The former needs 2^N − 1 comparisons for N bits (as in a flash ADC), while the latter needs only N, with a penalty in speed (as in a SAR ADC). I’m more interested in the second case, lazy and slow but still doing the job ;-).

**Problem formulation:** Due to the existence of noise, the input voltage can be converted to erroneous output codes or the correct one (indicated with yellow and blue backgrounds, respectively, in Fig.1). The probability of each converted output code for one particular input is of interest.

**How to calculate the probability of one particular output code?**

Fig. 2 gives an example of the probability calculation for code “100” converted by a 3-bit SAR ADC. The input voltage corresponds to code “101”. The calculation starts from the MSB and proceeds to the LSB. The probability of the less-significant bits depends on the results of the more-significant bits. Finally, the probability of a given code is the product of the probabilities of its individual bits.
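The procedure can be sketched in code. The bit-trial recipe and the component values below (3 bits, Vref = 1 V, σ = 20 mV) are illustrative assumptions, not taken from the figure:

```python
import math

def p_high(vin, vth, sigma):
    """Probability the comparator decides 'high': P(vin + noise > vth)."""
    return 0.5 * math.erfc((vth - vin) / (math.sqrt(2.0) * sigma))

def code_probability(code, vin, vref=1.0, sigma=0.02):
    """Probability that the SAR outputs the bit string `code` (MSB first).
    Each bit-decision probability is conditioned on the previous bits kept."""
    prob, acc = 1.0, 0.0
    for i, bit in enumerate(code):
        vth = acc + vref / 2 ** (i + 1)   # current SAR trial level
        ph = p_high(vin, vth, sigma)
        if bit == '1':
            prob *= ph
            acc = vth                      # trial bit kept
        else:
            prob *= 1.0 - ph
    return prob

vin = 0.70   # ideally code '101' for a 3-bit SAR with vref = 1.0
probs = {format(c, '03b'): code_probability(format(c, '03b'), vin)
         for c in range(8)}
print(max(probs, key=probs.get))   # '101'
```

Since each comparison splits into complementary outcomes, the probabilities over the whole decision tree sum to exactly one, which makes a handy sanity check.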

Knowing how to calculate the probability of a given code, I tried to look at the noise specifications from a statistics point of view. There are two noise specifications commonly used (in academia): noise power equal to the quantization noise power, or noise standard deviation equal to 1 LSB. The former introduces a 3 dB loss of SNR, the latter 11 dB. Then, what does the code distribution look like under these two specifications?

**Noise power is equal to the quantization noise:**

**Noise standard deviation is equal to 1 LSB:**

Sorry for the math. Like nonsense in the middle of a summer day. Sleepy?

**Reference**

[1] Philip W. Lee, “Noise considerations in high-accuracy A/D converters”, *JSSC*, 1986.

Filed under: Data Converter Tagged: Noise Effect, Normal Distribution, SAR ADC

We ignore the DC offset of the comparator and consider only the thermal-noise effect. Then the distribution of voltages presented to the comparator input, as shown in Fig.1, can be represented by a normal distribution whose standard deviation equals that of the comparator input-referred noise and whose mean value is shifted by the signal Vc.

The probability of a decision *Low* is simply given by the area under the curve to the left of zero, which is indicated by the shaded region in Fig.1. This area is given in terms of the error function by (referring to P1, P2, and P3 in the Appendix)

$$P_{low} = \Phi\!\left(\frac{-V_c}{\sigma}\right) = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{-V_c}{\sqrt{2}\,\sigma}\right)\right].$$

Noting that the error function is an odd function (P5 in the Appendix), we can further write

$$P_{low} = \frac{1}{2}\left[1 - \mathrm{erf}\!\left(\frac{V_c}{\sqrt{2}\,\sigma}\right)\right].$$

Finally, based on the above result, the probability can be further calculated in terms of the complementary error function by (P4 in Appendix)

$$P_{low} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_c}{\sqrt{2}\,\sigma}\right).$$

Similarly, the probability of a decision* High* is given by the remaining unshaded area under the curve in Fig.1, which is

$$P_{high} = 1 - P_{low} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{-V_c}{\sqrt{2}\,\sigma}\right).$$
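A minimal numerical sketch of the two decision probabilities (σ is an arbitrary example value):

```python
import math

def p_low(vc, sigma):
    """P(decision Low) = 1/2 * erfc(Vc / (sqrt(2)*sigma))."""
    return 0.5 * math.erfc(vc / (math.sqrt(2.0) * sigma))

def p_high(vc, sigma):
    """P(decision High) = 1/2 * erfc(-Vc / (sqrt(2)*sigma))."""
    return 0.5 * math.erfc(-vc / (math.sqrt(2.0) * sigma))

sigma = 1e-3
print(p_low(0.0, sigma))                   # 0.5: zero input is a coin flip
print(round(p_low(3 * sigma, sigma), 5))   # 0.00135: the 3-sigma tail
```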

In the next post, we will use the derived equations to show the noise effect on the distribution of ADC output codes.

**Appendix**

Some equations on the standard normal distribution:

1. Probability density function (PDF)

$$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$

If we denote the mean as $\mu$ and the standard deviation as $\sigma$, the PDF of the general normal distribution can be expressed as

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}.$$

2. Cumulative distribution function (CDF)

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.$$

3. Error function

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.$$

Hence, the CDF can be further expressed using the error function:

$$\Phi(x) = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].$$

4. Complementary error function

$$\mathrm{erfc}(x) = 1 - \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-t^2}\, dt.$$

5. The error function is an odd function:

$$\mathrm{erf}(-x) = -\mathrm{erf}(x).$$

Filed under: Analog Design Tagged: Normal Distribution, Probability of Comparator Decision, Thermal Noise