A Brief Review On the Orders of PLL

The simplest PLL, as is shown in Fig.1, consists of a phase detector (PD) and a voltage controlled oscillator (VCO). Via a negative feedback loop, the PD compares the phases of OUT and IN, generating an error voltage that varies the VCO frequency until the phases are aligned.

Fig.1 A 1st-order PLL

This topology, however, must be modified because the output of the PD consists of a desirable dc component and undesirable high-frequency components. The control voltage of the VCO must remain quiet in the steady state, which means the PD output must be filtered. Therefore, a 1st-order low-pass RC filter is interposed between the PD and the VCO, as shown in Fig.2.

Fig.2 A 2nd-order PLL with a low-pass RC filter

PLLs are best analyzed in the phase domain (Fig.3). It is instructive to calculate the phase transfer function from the input to the output. The ideal PD can be modeled as the cascade of a summing node and a gain stage, because the dc value of the PD output is proportional to the phase difference of the input and output. The VCO output frequency is proportional to the control voltage. Since phase is the integral of the frequency, the VCO acts as an ideal integrator which receives a voltage and outputs a phase signal.

Fig.3 The phase domain model of PLL

The closed-loop phase transfer function of the 2nd-order PLL shown in Fig.2 can be written as

\frac{\phi_{out}(s)}{\phi_{in}(s)}=\frac{K_P K_V \omega_{RC}}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}}

where \omega_{RC}=1/RC. The phase error has the following transfer function

\frac{\phi_{e}(s)}{\phi_{in}(s)}=\frac{s^2+\omega_{RC}s}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}}

If the input is a sinusoid of constant angular frequency ωi, its phase ramps linearly with time at a rate of ωi. Thus, the Laplace-domain representation of the input phase is \phi_{in}(s) = \omega_i / s^2. From the final value theorem, the steady-state phase error is

\phi_e(t=\infty) = \lim_{s \to 0} s \phi_e(s) = \lim_{s \to 0} \frac{\omega_i(s+\omega_{RC})}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}} = \frac{\omega_i}{K_P K_V}
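As a sanity check, the steady-state error can be reproduced with a small time-domain simulation of the Fig.2 loop (stdlib Python sketch; the gains K_P, K_V, ω_RC, and the input frequency ω_i below are illustrative assumptions, not values from the post):

```python
import math

# Behavioral model of the 2nd-order PLL of Fig.2:
# PD (gain K_P) -> 1st-order RC low-pass (pole w_RC) -> VCO (gain K_V, integrator)
K_P, K_V = 1.0, 100.0   # PD gain [V/rad], VCO gain [rad/s/V] (assumed)
w_RC = 1000.0           # filter pole 1/RC [rad/s] (assumed)
w_i = 10.0              # input frequency offset [rad/s] (assumed)

dt, T = 1e-5, 0.2
phi_out, v, t = 0.0, 0.0, 0.0   # output phase, filter state, time
for _ in range(int(T / dt)):
    phi_in = w_i * t                      # input phase ramps at w_i
    phi_e = phi_in - phi_out              # phase error seen by the PD
    v += dt * w_RC * (K_P * phi_e - v)    # RC low-pass on the PD output
    phi_out += dt * K_V * v               # VCO integrates the control voltage
    t += dt

print(phi_e)   # settles to w_i / (K_P * K_V) = 0.1 rad
```

The simulated error settles to ωi/(KPKV), matching the final-value-theorem result above.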

As can be seen, to lower the phase error, KPKV must be increased. Moreover, as the input frequency of the PLL varies, so does the phase error. To eliminate the phase error altogether, a pole at the origin can be introduced: the RC loop filter is replaced by an integrator. Hence comes the popular architecture – the charge-pump PLL (CPPLL, Fig.4), which comprises a phase/frequency detector, a charge pump, and a VCO.

Fig.4 A 2nd-order CPPLL

As long as the loop dynamics are much slower than the signal, the charge pump can be treated as a continuous-time integrator. The phase model of the CPPLL is shown in Fig.5. Writing the transfer function and working through the algebra confirms that the phase error is eliminated. However, remember that two integrators now sit in the forward path, each contributing a constant phase shift of 90°. It is frightening to see the phase curve as a flat line at -180° for a negative feedback system.


Fig.5 Phase model of simple CPPLL

In order to stabilize the system, a zero is introduced by adding a resistor in series with the charge pump capacitor (Fig.6). Placing the zero before the gain crossover frequency helps to lift the phase curve up.

Fig.6 A 2nd-order CPPLL with a zero
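The stabilizing effect of the zero can be seen numerically. In this sketch the open-loop gain is modeled as L(s) = K(1 + s/ωz)/s² (stdlib Python; K and ωz are made-up illustrative values, not taken from the figures):

```python
import cmath
import math

# Two integrators give a flat -180 deg of phase; the zero at w_z lifts it back.
K = 1e12     # combined integrator gain (assumed)
w_z = 1e5    # zero frequency 1/(R*C) [rad/s] (assumed)

def loop_gain(w):
    s = 1j * w
    return K * (1 + s / w_z) / s**2

# locate the gain-crossover frequency |L(j*w_t)| = 1 by bisection
lo, hi = 1e3, 1e9
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if abs(loop_gain(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
w_t = math.sqrt(lo * hi)

# phase margin = 180 deg + phase of the loop gain at crossover
pm = 180.0 + math.degrees(cmath.phase(loop_gain(w_t)))
print(w_t, pm)
```

Placing the zero a decade or two below the crossover makes the phase margin approach atan(ωt/ωz), i.e., close to 90°.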

The compensated PLL suffers from a critical drawback. Each time a current is injected into the RC branch, the control voltage of the VCO experiences a large jump. Even under locked conditions, mismatch between the charge and discharge currents introduces jumps in the control voltage. The resulting ripple disturbs the VCO. To alleviate this issue, a second capacitor is commonly tied between the control line and ground (Fig.7).

Fig.7 A 3rd-order CPPLL

Finally, the PLL becomes a 3rd-order system. Don’t worry too much about the phase margin, as long as the zero, the unity-gain frequency, and the third pole are positioned well (Fig.8).

Fig.8 Gain plot of a 3rd-order CPPLL

The author referred to two books for this post: 1) Behzad Razavi, Design of Analog CMOS Integrated Circuits; 2) Ali Hajimiri and Thomas H. Lee, The Design of Low Noise Oscillators.


Stabilizing a 2-Stage Amplifier

Stabilizing an amplifier is not an easy task. At least for me: I used to be a SPICE slave, mechanically changing some component’s parameter and running a simulation to check the result, again and again and again… until the moment some basic analyses saved me! As with the post on the calculation of phase margin, I write a memo here for ease of reference.

1. Usually the most difficult condition is unity-gain feedback. As Fig.1 shows, the closed-loop bandwidth is normally smaller than or equal to the unity-gain bandwidth. With unity-gain feedback, the loop gain crosses unity at the highest possible frequency, where the accumulated phase drop is largest, making it the most difficult case for stability.


Fig.1 Loop gain is the difference between open-loop gain and closed-loop gain on a log scale.

2. Adding a Miller capacitor between the two cascaded stages is a common technique. As Fig.2 shows, K is the ratio between the second pole and the GBW, which determines the phase margin (referring to this post). In addition, to push the right-half-plane (RHP) zero well beyond the second pole, it is better to keep Cm much smaller than C2. This in turn puts a demand on the ratio between gm1 and gm2. Finally, the trade-off between noise/speed (small gm1) and current consumption (large gm2) lands on the desk (as expected).


Fig.2 The small-signal model of a 2-stage amplifier with Miller compensation.
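For reference, the standard pole/zero expressions for the model in Fig.2 can be sketched as follows (R1 and R2 denote the output resistances of the two stages; these are textbook Miller-compensation approximations, not values read from the figure):

\omega_{p1} \approx \frac{1}{R_1 g_{m2} R_2 C_m}, \quad \omega_{p2} \approx \frac{g_{m2}}{C_2}, \quad \omega_{z} = +\frac{g_{m2}}{C_m}, \quad GBW \approx \frac{g_{m1}}{C_m}

so that K = \omega_{p2}/GBW = (g_{m2}/g_{m1})(C_m/C_2) and \omega_z/\omega_{p2} = C_2/C_m, consistent with the demands on Cm/C2 and gm1/gm2 mentioned above.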

3. Introducing a nulling resistor is the most popular approach to mitigating the positive zero. Compared with Fig.2, the poles (neglecting the third, non-dominant pole) and the GBW do not change; only the positive zero does. How to calculate the new zero? Fig.3 demonstrates a simple way, introduced by Prof. Razavi in his analog design book. One can either use the zero to (try to) cancel the second pole or simply push the zero to infinity.


Fig.3 A simple way to calculate the zero introduced by the nulling resistor and the miller capacitor
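For completeness, the outcome of that calculation can be sketched as follows (a standard result; symbols as in Fig.2, with R_z the nulling resistor):

\omega_z = \frac{1}{C_m\left(\frac{1}{g_{m2}}-R_z\right)}

Choosing R_z = 1/g_{m2} pushes the zero to infinity, while R_z = (C_m+C_2)/(g_{m2}C_m) moves it into the left half-plane, on top of the second pole (\approx g_{m2}/C_2).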

4. Ahuja compensation [1] is another way to abolish the positive zero. The cause of the positive zero is the feedforward current through Cm. To abolish this zero, we have to cut the feedforward path and create a unidirectional feedback through Cm. Adding a resistor as in Fig.3 is one way to mitigate the effect of the feedforward current. Another approach uses a cascode current buffer to pass the small-signal feedback current while blocking the feedforward current, as depicted in Fig.4. People name this approach after its author, Ahuja.


Fig.4 Ahuja compensation to abolish the positive zero introduced by the feedforward current

5. A good example of using Ahuja compensation is compensating a 2-stage folded-cascode amplifier. As shown in Fig.5, by utilizing the already-existing cascode stage, Ahuja compensation can be implemented without any additional bias current. There are two ways to place the Miller capacitor, which normally give the same poles but different zeros. In [2], the poles and zeros of the two approaches are calculated based on reasonable assumptions. Out of curiosity, I also drew the small-signal model and derived the transfer function. With some patience I finally reached the same result as given in [2].


Fig.5 Two approaches to compensate the folded-cascode amplifier and the corresponding small-signal model

6. The bloody equations for the poles and zeros of the two Ahuja approaches are shared in Fig.6. Though these equations look very dry at the moment, one will appreciate them during an actual design. They did help me stabilize an amplifier with a varying capacitive load. One thing worth looking at is the ratio between the natural frequency of the two complex non-dominant poles and the GBW. Considering that Cm and C2 are normally of the same order while C1 is much smaller, the ratio ends up relatively large and the phase margin can be guaranteed.

\frac{\omega_n}{GBW} \approx \sqrt{\frac{C^2_m}{C_1 C_2}}


Fig.6 The bloody equations of poles and zeros

Oh…finally, it took me quite some time to reach here. The END.


[1] B. K. Ahuja, “An improved frequency compensation technique for CMOS operational amplifiers,” IEEE JSSC, 1983.

[2] U. Dasgupta, “Issues in ‘Ahuja’ frequency compensation technique,” IEEE International Symposium on Radio-Frequency Integration Technology, 2009.


Gm/ID versus IC

According to the EKV model, the inversion coefficient, IC, is defined as the ratio of the drain current to a specific current, IDSspec, obtained at VGS-VT = 2n*kT/q [1]. To know the IC, I would have to set up a separate testbench to simulate IDSspec. This is not efficient, especially in the initial phase of a design, which may encounter many changes.

On the other hand, the annotation of the DC operating point provided by Cadence is really helpful. Now we can even have gm/ID annotated beside the transistor (it is called ‘gmoverid’ in the simulator). Hence, a curve showing the gm/ID-IC relationship would be informative, and Prof. Sansen provides exactly that [1]! It is plotted in Fig.1.


Fig.1 Gm/ID*nUT versus IC

In order to derive the relationship, we first need to recall the following equations:

e^{\sqrt{IC}} = e^v+1, \quad IC = \frac{I_{DS}}{I_{DSspec}}, \quad v = \frac{V_{GS}-V_T}{2nU_T}

Based on the above equations, the gm/ID can be derived:

\frac{g_m}{I_{D}} = \frac{\partial{I_{DS}}}{\partial{V_{GS}}} \frac{1}{I_{DS}} = \frac{\partial{IC} \times I_{DSspec}}{\partial v \times 2nU_T} \frac{1}{I_{DS}} = \frac{\partial{IC}}{\partial v} \frac{1}{2nU_T \times IC} = \frac{1-e^{-\sqrt{IC}}}{nU_T \sqrt{IC}}

Now we can get a rough idea of the IC from the annotated gm/ID (assuming nUT is about 35 mV):

                            gm/ID (1/V)        25          18          9
                              IC              0.1           1          10
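The derived expression is easy to evaluate directly; the following stdlib-Python sketch reproduces the table above, with the same assumption nUT ≈ 35 mV:

```python
import math

# gm/ID = (1 - exp(-sqrt(IC))) / (n*U_T*sqrt(IC)), derived above
nUT = 0.035  # n*U_T [V], the post's room-temperature assumption

def gm_over_id(ic):
    return (1.0 - math.exp(-math.sqrt(ic))) / (nUT * math.sqrt(ic))

for ic in (0.1, 1.0, 10.0):
    print(ic, round(gm_over_id(ic), 1))   # ≈ 24.5, 18.1, 8.7 [1/V]
```

The computed values round to the 25/18/9 entries of the table.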


[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure,” IEEE Solid-State Circuits Magazine, fall 2015.


Go Moderate

Both Prof. Sansen’s inversion coefficient (IC) approach and Prof. Murmann’s gm/ID design methodology tell the same story of power-aware analog design.

With the help of the gm/ID design kit, I can easily visualize transistor performance as a function of gate-source voltage (see Fig.1). As VGS increases, the transistor moves through weak, moderate, and strong inversion. For high gain, go left; for high speed, go right. At the far left, the gain stops increasing while the speed drops extremely low; at the far right, the speed stops increasing while the drain current keeps climbing! For a decent figure of merit (speed*gain), go to the middle. Go moderate!


Fig.1 ID, gm, gm/ID, fT, fT*gm/ID as a function of VGS at fixed VDS, VBS, and W/L

As CMOSers, we love the square-law equation, and we sometimes hate, sometimes embrace the exponential subthreshold current equation. But for the current flowing between the strong and the weak, do we have one equation? No, but yes… with some math, the EKV model combines all three. Referring to [1], the IC-V related equations are copied as follows:

e^{\sqrt{IC}} = e^v+1,

IC = \frac{I_{DS}}{I_{DSspec}},         v = \frac{V_{GS}-V_T}{2nU_T},

I_{DSspec} = (\frac{\mu C_{OX}}{2n})(\frac{W}{L})(2nU_T)^2,

where n is the subthreshold slope factor and UT is the thermal voltage. At room temperature, 2nUT is about 70 mV [1]. As Fig.2 shows, the IC-V curve matches the weak-inversion equation well for IC < 0.1 and the strong-inversion equation for IC > 10; moderate inversion lies where IC is between 0.1 and 10.


Fig.2 Normalized overdrive voltage as a function of inversion coefficient
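The two asymptotes are easy to check numerically. From the first equation, v = ln(e^{√IC} − 1); in weak inversion this tends to ln(√IC) (exponential law), and in strong inversion to √IC (square law). A stdlib-Python sketch:

```python
import math

# EKV interpolation between weak and strong inversion (symbols follow [1])
def v_of_ic(ic):
    # normalized overdrive v = (V_GS - V_T) / (2*n*U_T)
    return math.log(math.exp(math.sqrt(ic)) - 1.0)

# weak inversion (IC << 0.1): v -> ln(sqrt(IC)) = 0.5*ln(IC)
print(v_of_ic(1e-4), 0.5 * math.log(1e-4))
# strong inversion (IC >> 10): v -> sqrt(IC)
print(v_of_ic(100.0), math.sqrt(100.0))
```

Each pair of printed values nearly coincides, which is exactly the matching behavior seen in Fig.2.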


[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure,” IEEE Solid-State Circuits Magazine, fall 2015.


The Calculation of Phase Margin

Negative feedback is ubiquitous, and the discussion on its stability can be found everywhere. For ease of reference, I will put a memo on the equations to calculate the phase margin.


Fig.1 The symbolized feedback configuration

The amplifying system may include multiple poles:

\beta A(s)=\frac{\beta A_0}{\prod_{i=1}^{n}(1+\frac{s}{\omega_{pi}})}.

Neglecting higher-order terms, it can be simplified to a two-pole expression: one dominant pole and one equivalent non-dominant pole, which is approximated by:

\frac{1}{\omega_{eq}} \approx \sum_{i=2}^{n}\frac{1}{\omega_{pi}}.
The frequency of interest is where the loop gain magnitude is close to unity, denoted as ωt. Normally ωt is much larger than the dominant pole. Hence, βA(s) around ωt can be further simplified to:

\beta A(s)=\frac{\beta A_0 \omega_{p1}}{s(1+\frac{s}{\omega_{eq}})}.

Considering the first pole introduces -90° phase shift, the phase of the loop gain at ωt is:

Phase_{Loop Gain} = -90^{\circ}-\tan^{-1}(\frac{\omega_t}{\omega_{eq}}).

Consequently, the phase margin (PM) is calculated by adding 180° to the phase of the loop gain and it is written as:

PM \approx 90^{\circ}-\tan^{-1}(\frac{\omega_t}{\omega_{eq}}) = \tan^{-1}(\frac{\omega_{eq}}{\omega_t}).

It can be seen that the phase margin is determined by the relative position between the equivalent non-dominant pole and the unity loop gain bandwidth.

                            ωeq/ωt        0.5            1              2              3              4
                              PM          26.6°          45°          63.4°          71.6°          76°
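The table follows directly from the PM expression; a one-liner check in stdlib Python:

```python
import math

# PM = atan(w_eq / w_t), evaluated at the tabulated ratios
def phase_margin(ratio):
    # ratio = w_eq / w_t; result in degrees
    return math.degrees(math.atan(ratio))

for r in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(r, round(phase_margin(r), 1))   # 26.6, 45.0, 63.4, 71.6, 76.0
```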


Gm/Id-Design Methodology

Three times entering a wrong password to access this site…

Earlier, in 2012, I wrote an introductory post about the EKV model and later extended the topic a little in another post – Stay Simple – Square-Law Equation Related. Since then I have kept following news about the EKV model and the inversion-coefficient-based analog design methodology.

One of the major contributors to this design methodology is Prof. Willy Sansen. He has given a short tutorial named Impact of Scaling on Analog Design, organized by ISSCC through edX (free access after registration). Most recently he also published an article [1] summarizing his ideas in the IEEE Solid-State Circuits Magazine.

The journey starts with a beautiful equation which nicely links the weak and the strong inversion (see the curve in Fig.1).


Fig.1 The relationship between V and IC

Fascinated by Prof. Sansen’s design procedure, I tried to apply it to my daily design work. Theoretically, it gives me a broader view and some insight into low-power design. Practically, however, I find it difficult to make full use of, especially now that most designs are in the deep-submicron regime, where the model parameters are complicated to interpret.

Then there comes another big name – Prof. Boris Murmann. Yes, the professor who provides the famous ADC performance survey! Now he has also launched his gm/ID starter kit. The kit provides scripts that co-simulate between a SPICE simulator and Matlab and store transistor DC parameters into Matlab files. The stored data can then be used for systematic circuit design in Matlab. It looks brute-force, yet smart and efficient!

It’s free. Enjoy!


[1] W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure,” IEEE Solid-State Circuits Magazine, fall 2015.


Brief Study of Noise-Shaping SAR ADC – Part C

The topic of noise-shaping SAR ADC will come to an end in this post. In Part A, I briefly talked about the concept of noise shaping applied to sigma-delta modulators. In Part B, I introduced one special property of SAR ADCs that can be exploited for noise shaping – the SAR architecture generates the conversion residue without a feedback DAC. Some form of noise shaping was then achieved, but the result was not satisfactory. In this post, I will continue the journey.

A small summarize of performing noise shaping on the SAR architecture from Part B:

  1. let the DAC array complete all the switching based on the decisions from MSB to LSB (the conversion residue is generated)
  2. sample the conversion residue (V_{RES}) on an extra capacitor
  3. apply the residue with opposite sign (-V_{RES}) to the opposite terminal of the comparator

If the extra capacitor is much smaller than the array capacitor, the current residue is sampled with almost no memory effect. The linear model of the SAR ADC then looks like:

Fig.1 Linear model when the residue sampling capacitor is much smaller than the array capacitor

If an integrator is added to Fig.1, the noise transfer function (NTF) becomes identical to that of 1st-order noise shaping:

Fig.2 Linear model after an integrator is added to the system
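The 1st-order shaping can also be checked behaviorally. The sketch below uses an error-feedback model that realizes the same NTF, 1 − z⁻¹; it is a stand-in for the linear model of Fig.2, not the circuit of Fig.3, and the resolution and input signal are made-up assumptions:

```python
import random

# Error-feedback model: quantize (input - previous quantization error),
# which gives Y = X + (1 - z^-1) * E, i.e. 1st-order shaped noise.
LSB = 1.0 / 16   # assumed quantizer step

def quantize(w):
    return round(w / LSB) * LSB

def shape(x):
    y, e_prev = [], 0.0
    for xn in x:
        w = xn - e_prev      # subtract the previous quantization error
        yn = quantize(w)
        e_prev = yn - w      # current quantization error
        y.append(yn)
    return y

random.seed(0)
x = [random.uniform(-0.4, 0.4) for _ in range(10000)]
y = shape(x)

# The running sum of (y - x) telescopes to e[n], so it never exceeds one
# LSB: the shaped error carries no dc content, it is pushed to high frequency.
acc, worst = 0.0, 0.0
for xn, yn in zip(x, y):
    acc += yn - xn
    worst = max(worst, abs(acc))
print(worst <= LSB)   # True
```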

The corresponding hardware implementation could look like this:

Fig.3 Hardware implementation

1st-order noise shaping is finally achieved! BUT, circuit design is all about compromise. Some of the concerns are listed as follows:

  1. the kT/C_R noise of the residue sampling capacitor is not noise-shaped anymore
  2. of course, you can never get an amplifier with infinite gain
  3. residue attenuation due to charge sharing between the sampling capacitor and the parasitic capacitance at the amplifier input
  4. switch-induced errors

I would like to stop here (because the weekend is coming ;-).

If you want to know more about practical solutions, I recommend the interesting and well-written paper [1]. I would like to thank the authors; I enjoyed reading their paper a lot.


[1] J. A. Fredenburg and M. P. Flynn, “A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC,” IEEE JSSC, vol. 47, 2012.
