## CMOS Single Stage

Found an old drawing of a CMOS single stage from many years ago. Stay healthy!

## Basics On Active-RC Low-Pass Filters

Some basic understanding of analog filters, inspired by ‘The Guru’ at our company. To clarify my thoughts, I will write in a Q&A format. There are four questions to answer:

1. What do we dream of in a low-pass (LP) filter?
2. Why are complex poles required?
3. How do we generate complex poles without an inductor?
4. Any real-life example?

Q1) What do we dream of in a low-pass (LP) filter?

An ideal one, which has a brick-wall response. We only receive what we intend to receive, pure and loss-free. But, in reality… Fig.1 Brick-wall response (in red) vs. reality (in blue)

Q2) Why are complex poles required?

Complex poles help to lift the magnitude around the cut-off frequency by contributing a larger pole quality factor (Q).

If we only have real poles, a higher order gives a better roll-off, but the loss of magnitude around the cut-off frequency grows bigger. Fig.2 Systems of order 1 to 5 with only real poles

Now we move to a system which has complex poles. Taking the 5th-order Butterworth filter as an example, which has one real pole and two pairs of complex poles, the complex pole pair with a Q of 1.618 helps to compensate the loss of magnitude around the cut-off frequency. It tries to approximate the brick-wall response. Fig.3 Generating a 5th-order Butterworth low-pass by multiplying (cascading) three transfer functions (one first-order section + two biquads)
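Out of curiosity, the Q = 1.618 claim can be verified with a quick Python/NumPy sketch (illustrative only; it uses the textbook fact that the Butterworth poles sit equally spaced on the left half of the unit circle):

```python
import numpy as np

# Poles of an n-th order Butterworth low-pass (cutoff 1 rad/s) lie equally
# spaced on the left half of the unit circle; the Q of a pair at angle phi
# is 1/(2*zeta) with zeta = -cos(phi).
n = 5
k = np.arange(1, n + 1)
poles = np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))

pair = poles[poles.imag > 1e-9]        # one pole from each complex pair
q = 1 / (-2 * np.cos(np.angle(pair)))

print(np.round(q, 3))  # [1.618 0.618]
```

The higher-Q biquad indeed lands exactly on the golden ratio, 1.618.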

Q3) How do we generate complex poles without an inductor?

The answer is Feedback! R and C only generate real poles. When feedback is applied around a system containing real roots, the closed-loop transfer function may contain complex roots.

Let’s think of this example: an amplifier with two poles. Its transfer function can be written as: $A(s)=\frac{A_0 \omega_1 \omega_2}{(s+\omega_1)(s+\omega_2)}$.

The poles are generated by Rs and Cs in the amplifier and they are real. Now assume a negative feedback of beta is placed around the amplifier. The closed-loop transfer function becomes: $A_{cl}(s)=\frac{A(s)}{1+\beta A(s)}= \frac{A_0 \omega_1 \omega_2}{s^2+(\omega_1+\omega_2)s + \omega_1 \omega_2(1+\beta A_0)}$.

We can then calculate the two poles of the closed-loop transfer function: $p_{1,2}=-\frac{\omega_1 + \omega_2}{2} \pm \frac{1}{2} \sqrt{(\omega_1 + \omega_2)^2 - 4\omega_1 \omega_2(1+\beta A_0)}$.

By increasing $\beta A_0$, complex poles can be achieved! Fig.4 An illustrative example of the root locus of the poles as $\beta A_0$ increases from 0 to infinity
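To see this numerically, here is a small Python/NumPy sketch (the pole frequencies are arbitrary illustrative values): it factors the closed-loop denominator $s^2+(\omega_1+\omega_2)s+\omega_1\omega_2(1+\beta A_0)$ for increasing $\beta A_0$.

```python
import numpy as np

# Closed-loop denominator from above: s^2 + (w1 + w2)s + w1*w2*(1 + beta*A0).
# Pole frequencies are arbitrary illustrative values.
w1, w2 = 2 * np.pi * 1e3, 2 * np.pi * 1e6
for beta_A0 in [0, 100, 1000]:
    poles = np.roots([1, w1 + w2, w1 * w2 * (1 + beta_A0)])
    kind = "complex" if np.any(np.abs(poles.imag) > 0) else "real"
    print(f"beta*A0 = {beta_A0:4d}: poles are {kind}")
```

With these example values the poles stay real until $\beta A_0$ exceeds roughly $(\omega_1+\omega_2)^2/(4\omega_1\omega_2)$, after which they split off the real axis.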

Q4) Any real-life example?

Of course. Fig.5 shows the Tow-Thomas biquad. Without the feedback resistor R2, the open-loop transfer function has two real poles: one generated by R3 and C1 and the other at the origin. With the feedback resistor applied, the two poles move towards each other, meet, and then leave the real axis, becoming complex poles. Fig.5 The common Tow-Thomas biquad filter (Wikipedia)

## Leading Simulators for Analog/RF Circuits

Recently I had the opportunity to touch some RF design. Wow, some simple metal lines need a fancy tool to get modeled! Wow, both Windows and Unix tools are involved in simulating the EVM!

It seems that Cadence Spectre just can’t do it on its own any more. Really? I started to ask myself which simulator, or which way of simulating, is more efficient for my design. Then I found a paper published in 2014, titled “Overview of Commercially-Available Analog/RF Simulation Engines and Design Environment”. Yep, it’s helpful (at least to an RF newbie). It reviews four main analog/RF simulators on the market:

• Cadence Spectre
• Agilent ADS (now Keysight, which was spun off from Agilent in 2014)
• Agilent GoldenGate (again now Keysight)
• Mentor Graphics AFS

Cadence Spectre, an engine evolved from SPICE, is good at transient analysis. However, transient analysis may become expensive when it deals with RF signals, which normally contain a periodic high-frequency carrier and a low-frequency modulation signal. The carrier forces a small time step while the modulation forces a long simulation interval. To speed up RF simulation, harmonic balance (HB) comes in, which works in the frequency domain. Though ADS provides a best-in-class HB simulator for RF design, its layout suite is inferior to the Cadence Virtuoso environment. Hence the Spectre simulator is still widely used as the standard sign-off simulator. Recently, to leverage ADS for RF simulation and Cadence for schematic capture and layout, GoldenGate has been tightly integrated into the Cadence design flow as an efficient hybrid method.

Obviously, which simulator to choose depends largely on the nature of the signal. In general, for circuits with a few frequency components, like an LNA or a mixer, the frequency-domain technique is more efficient, while the time-domain technique is more efficient for circuits with abrupt edges, like ADCs/DACs or control logic. Many interesting details are described in the paper. Fig.1 provides a short summary. Fig.1 A short performance summary

Note that MTS is the acronym for multi-technology simulation. We typically design circuits within a single design kit. However, some RF products may be composed of modules from different processes. According to the paper, Cadence’s ADE GXL can do MTS simulations.

Reference:

 B. Wan and X. Wang, “Overview of commercially-available analog/RF simulation engines and design environment,” 12th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), 2014.

## Pulse Generator of Asynchronous SAR ADC

When it comes to designing SAR ADCs, I am like a tortoise and can never imagine the world of rabbits. Recently I got some time to ponder how the rabbits can run so fast. One of the critical factors is that they have an internal asynchronous clock.

When it comes to digital signals, I always feel dizzy examining their timing sequences. It did take me a while to understand how the internal asynchronous clock is generated. In this post, I try to write down the core idea behind the pulse generator of an asynchronous SAR ADC.

First of all, the pulse generator (Fig.1) needs a dynamic latch comparator! In the reset phase, the differential outputs are both set low; in the comparison phase, once the regeneration is complete, one of the differential outputs goes high while the other stays low. This property is exploited by a succeeding NAND gate, which outputs a valid signal indicating the completion of the comparison.

Since the comparison starts on the negative edge, the sample signal is used to trigger the first comparison of the latch. Once the comparison is finished, the valid signal generates a positive edge. With an OR function of sample and valid, followed by a certain deliberate delay, the clock of the latch can be generated. Fig.1 An example of the pulse generator

Fig.2 depicts the timing sequences of three critical signals (sample, clk, and valid). Four sources of delay keep the asynchronous machine running:

• td1 is composed of the delay of the OR gate, the charging path of VDC, and the succeeding inverter.
• td2 is composed of the regeneration time of the latch, the delay of the following inverter and the NAND gate.
• td3 is composed of the delay of the OR gate, the discharging path of VDC, and the succeeding inverter.
• td4 is composed of the reset time of the latch, the delay of the following inverter and the NAND gate. Fig.2 Timing sequences of critical signals

The low level of the generated clk is the sum of td2 and td3, which varies as the input of the comparator changes. The high level of clk is the sum of td1 and td4, which is input-independent. One must pay attention to the DAC settling, which happens when clk goes high. The VDC can then be programmed/configured to ensure accurate DAC settling.

In summary, to increase the sample rate, the low level of clk should be as short as possible by speeding up the comparator, while the high level of clk should be long enough for the DAC to fully settle. Worth mentioning here: though the delay of the digital control logic is not included in this discussion, it also becomes rather critical in high-speed design. This post is just a touch of the sphere of high-speed SAR ADC design. Salute to those who race in this challenging field!
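As a toy illustration of this timing budget, here is a small Python sketch; all delay and bit-count numbers are made up, not taken from any real design:

```python
# A toy timing budget for the asynchronous clock above; all delay and bit
# numbers are made up for illustration.
n_bits = 10
td1, td4 = 30e-12, 40e-12     # set the high level of clk (input-independent)
td2, td3 = 60e-12, 30e-12     # set the low level of clk (comparator-dependent)

t_clk = (td1 + td4) + (td2 + td3)   # one bit cycle of the internal clock
t_sample = 200e-12                  # assumed sampling/track time
t_conv = t_sample + n_bits * t_clk  # one full conversion

print(f"conversion time {t_conv*1e9:.2f} ns -> ~{1/t_conv/1e6:.0f} MS/s")
```

Shaving even a few picoseconds off td2 (the regeneration time) is multiplied by the number of bits, which is why the comparator gets so much attention.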


## Ringing in Step Response

Is it possible to tell the stability of a negative-feedback circuit by just looking at its step response? The answer is yes. In this post, I will try to find the relationship between phase margin and ringing in step response.

I will start from the frequency domain and play with the equations we are familiar with. To simplify the problem, I assume it’s a unity negative-feedback second-order system. Denoting the DC gain by A, the two poles by f1 and f2, the unity-gain bandwidth by GBW (=Af1), the open-loop gain can be written as $A_o=\frac{A}{(1+j\frac{f}{f_1})(1+j\frac{f}{f_2})}$.

The closed-loop gain can be thereafter written as $A_c=\frac{A_o}{1+A_o} \approx \frac{1}{1+j\frac{f}{GBW}+j^2\frac{f^2}{GBW f_2}}$.

Expressing the denominator in the familiar control-theory form, $1+j 2 \zeta \frac{f}{f_n}+j^2 \frac{f^2}{f^2_n}$, where ζ is the damping factor and $f_n$ the natural frequency, we find $\zeta = \frac{1}{2} \sqrt{\frac{f_2}{GBW}}$     and $f_n = \sqrt{GBW f_2}$

If we take a close look at the damping factor, which encodes the relative position of the second pole and the unity loop-gain bandwidth, we can deduce the phase margin, as discussed in one of my old posts.

The peaking in the amplitude response can be found at $f=f_n \sqrt{1-2\zeta^2}$ with a value equal to $P_f = \frac{1}{2 \zeta \sqrt{1-\zeta^2}}$. Fig.1 Peaking in amplitude response

Now we have derived the relationship between phase margin and peaking in the amplitude response. Let’s continue to the time domain. Considering we normally care about the underdamped case (ζ<1), the equation of its step response can be written as (a detailed explanation and derivation can be found via this link) $y(t) = 1 - \frac{1}{\sqrt{1 - \zeta^2}} e^{- \zeta \omega_n t} \sin(\omega_n \sqrt{1-\zeta^2}\, t+\arccos(\zeta))$.

The peaking in the step response can be found at $\omega_n t = \frac{\pi}{\sqrt{1-\zeta ^2}}$ with a value equal to $P_t = 1 + e^{\frac{-\pi \zeta}{\sqrt{1-\zeta^2}}}$. Fig.2 Ringing in step response

The exact numbers for the phase margin and the peakings can now be easily calculated. Poor phase margin corresponds to peaking in the frequency domain and ringing in the time domain. Fig.3 Relationship between ζ, PM, Pf, and Pt
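The table in Fig.3 can be reproduced with a few lines of Python/NumPy (illustrative sketch; it uses the standard unity-feedback second-order relation PM = arctan(2ζ/√(√(1+4ζ⁴)−2ζ²)) together with the Pf and Pt formulas above):

```python
import numpy as np

# Evaluate the damping factor's fingerprints derived above: phase margin (PM),
# frequency-domain peaking Pf, and step-response peak Pt.
for zeta in [0.3, 0.5, 0.707]:
    pm = np.degrees(np.arctan(
        2 * zeta / np.sqrt(np.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)))
    pf = 1 / (2 * zeta * np.sqrt(1 - zeta**2))   # peaking exists for zeta < 0.707
    pt = 1 + np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2))
    print(f"zeta = {zeta:5.3f}: PM = {pm:4.1f} deg, Pf = {pf:.2f}, Pt = {pt:.2f}")
```

The familiar landmark drops out directly: ζ = 0.707 gives a phase margin of about 65.5° and essentially no frequency-domain peaking.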

## Memos on FFT With Windowing

Coherent sampling is quite difficult to achieve under lab conditions. One has to go for windowing to characterize the dynamic performance of an ADC. Though seminal papers and reports [1-2] have lain on my desk for quite a long time, I still find it difficult to really understand windowing. Recently, I found two posts from Nerd Rage (Windows of Opportunity & ENBW) quite helpful, which are easier to digest.

And now, finally, I have decided to write down my very limited understanding of FFT with windowing, focusing on characterizing the dynamic performance of an ADC in Matlab. Let’s dive straight in by first looking at the Matlab script I use to calculate the SNR/SNDR of an ADC.

fs = 1e6;             % Sample rate
nfft = 1024;          % Number of FFT
cycles = 113;         % Number of input periods
fin = cycles/nfft*fs; % Input frequency
data = data(1:nfft);  % Number of data to be the same as nfft

w = 0.5*(1 - cos(2*pi*(0:nfft-1)/nfft)); % Hann window
cg = sum(w)/nfft;                        % Normalized coherent gain
enbw = sum(w.*w)/(sum(w)^2)*nfft;        % Normalized equivalent noise bandwidth
nb = 3;                                  % Signal bins
dcbin = (nb+1)/2;                        % Number of DC bins

if (size(data,1) ~= size(w,1))           % Check dimension
    w = w';
end

ss = abs(fft(data.*w));      % FFT with windowing
ss = ss/nfft/cg;             % Compensate for window attenuation
ss = ss(1:nfft/2).*2;        % Drop the redundant half but keep total power the same

signal_bin = cycles+1;                           % Signal bin, Matlab array starts from 1
dc_bins = 1:dcbin;                               % DC bins
all_bins = setdiff(1:nfft/2, dc_bins);           % Disregard DC bins
signal_bins = signal_bin + (-(nb-1)/2:(nb-1)/2); % Signal leakage bins
other_bins = setdiff(all_bins, signal_bins);     % Further discard signal bins

fh = (2:10)*fin/fs;                              % Harmonic tone: (-/+m)fin + (-/+k)fs
while max(fh) > 1/2
    fh = abs(fh - (fh > 1/2));           % If harmonic tone fh>fs/2, it aliases to fs-fh
end
harm_bins = round(nfft * fh) + 1;                % Harmonic bins (2nd - 10th)
harm_binsl = zeros(length(harm_bins),nb);        % Find Harmonic leakage bins
for i = 1:length(harm_bins)
    harm_binsl(i,:) = harm_bins(i) + (-(nb-1)/2 : (nb-1)/2);
end
harm_binsl=reshape(harm_binsl',length(harm_bins)*nb,1);  % Convert matrix to array
harm_binsl=unique(harm_binsl);                           % Discard the repetitive harmonic bins

noise_bins = setdiff(other_bins, harm_binsl);            % Further discard the harmonic bins

Psignal = sum(ss(signal_bin).^2);                        % Signal power
PnoiseD = sum(ss(noise_bins).^2)/enbw/length(noise_bins);% Noise PSD
Pnoise = PnoiseD*length(all_bins);                       % Total noise power
Pharm = sum(ss(harm_bins).^2);                           % Power of harmonics

snr = 10*log10(Psignal/Pnoise);                          % Calculate SNR
sndr = 10*log10(Psignal/(Pnoise+Pharm));                 % Calculate SNDR
enob = (sndr - 1.76)/6.02;                               % Calculate ENOB
thd = -10*log10(Psignal/Pharm);                          % Calculate THD

Pharm_max = max(ss(harm_bins)).^2;
Pnoise_max = max(ss(noise_bins)).^2;
sfdr = 10*log10(Psignal/max(Pharm_max,Pnoise_max));      % Calculate SFDR

sdb = 10*log10(ss.^2);
f = (0:length(ss)-1)/nfft;                               % Frequency vector normalized to fs (bin 1 is DC)

plot(f, sdb, 'k-','linewidth',1.5);
xlabel('Frequency [ f / f_s ]','FontSize',10);
ylabel('Power Spectrum [ dB ]','FontSize',10);
grid on;
text(0.3, -40,...
sprintf('SNDR = %.1fdB \n SNR = %.1fdB \n THD = %.1fdB \n SFDR = %.1fdB \n ENOB = %.1fbits', sndr, snr, thd, sfdr, enob), ...
'FontSize',10);



An example FFT plot looks like this: Fig.1 An example of an FFT plot showing the dynamic performance of an ADC

A default FFT using a rectangular/box window has a normalized coherent gain of 1, a normalized equivalent noise bandwidth of 1, and only one signal bin. Hence, if another type of window is used, the corresponding window properties need to be specified. It’s all about two values: 1) the sum of the window terms; and 2) the sum of the squares of the window terms. Equations from the seminal paper [1] tell us why the two sums matter.

Let the input sampled sequence be defined by $f(nT) = A e^{+j\omega_knT} + q(nT)$

where q(nT) is a white-noise sequence with variance $\sigma^2_q$. Then the signal component of  the windowed spectrum is given by $F(\omega_k)|_{signal} = \sum\limits_n w(nT) A e^{+j\omega_knT} e^{-j\omega_knT} =A \sum\limits_n w(nT)$

Hence, the output amplitude of the noiseless signal is the input amplitude multiplied by a term which is the sum of the window terms (S1). This term is called the processing gain (sometimes called coherent gain) of the window. The rectangular window has the largest gain compared to other windows. In the Matlab script, the normalized coherent gain (normalized by S1 of the rectangular window) is specified.

The noise component of the windowed spectrum is given by $F(\omega_k)|_{noise} = \sum\limits_n w(nT) q(nT) e^{-j\omega_knT}$

The noise power is calculated using the expectation operator $E\{|F(\omega_k)_{noise}|^2\} = \sigma_q^2 \sum\limits_n w^2(nT)$

As additive noise is assumed to be white, the above value represents the noise floor level (or the noise power spectral density), which is also constant. Notice the power gain of noise is the sum of the squares of the window terms (S2).

The noise bandwidth is calculated by the total noise power (sigma^2*S2*fs) divided by the peak power gain of the window (S1^2). Here we only focus on the multiplication term to the input noise (S2*fs/S1^2). Further introducing a parameter, fres, the width of one frequency bin (fs/N), then the normalized equivalent noise bandwidth (ENBW) is given by $ENBW = \frac{S_2 f_S }{S_1^2 f_{res}} = N\frac{S_2}{S_1^2} = N \frac{\sum\limits_n w^2(nT)}{ [\sum\limits_n w(nT)]^2}$

Therefore, to obtain the correct power levels, the spectrum is divided by the normalized coherent gain and the calculated total noise power is divided by the normalized ENBW, respectively (on a dB scale these corrections become subtractions).

Nerd Rage explains the ENBW from window energy point of view, which is quite insightful. Allow me to copy one of his picture and some remarks here: Fig. 2 Some properties of the Hann window and its instrument function. The window function (top left) is normalized to T = 1; graph also shows the window energy relative to the box window. Instrument function (top right) is shown for the spectral region covering the first 10 DFT bins. Close-up (bottom) shows main lobe height, maximum scalloping loss (see below), relative height of first sidelobe, and fraction of energy contained in main lobe, E0, to total energy E. (Courtesy: Nerd Rage)

“For the Hann window the main lobe height is -6.02dB and therefore the height of any single spectral line will be 6.02dB below its real value. To obtain correct energy levels of the spectral peaks (in the absence of scalloping), the main lobe height (in dB units) is usually subtracted from the PSD. However, this overcompensates the window-specific reduction of the noise floor – for the Hann window, peak compensation is 6.02dB while the noise floor is only 4.26dB below its true value. This decreases the observed SNR by 1.76dB or a factor of 1.5.”

Note: for the Hann window, $20\log_{10}(S_1/N) = -6.02$ dB; $10\log_{10}(S_2/N) = -4.26$ dB; $N S_2/S_1^2 = 1.5$.
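These three Hann numbers are easy to check directly from S1 and S2. A quick sketch (in Python/NumPy here, though the rest of the post uses Matlab):

```python
import numpy as np

# Verify the three Hann-window numbers above directly from S1 and S2.
N = 1024
w = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / N))  # periodic Hann window
s1, s2 = w.sum(), (w * w).sum()

cg_db = 20 * np.log10(s1 / N)   # coherent (processing) gain in dB
nf_db = 10 * np.log10(s2 / N)   # noise power gain in dB
enbw = N * s2 / s1**2           # normalized equivalent noise bandwidth

print(round(cg_db, 2), round(nf_db, 2), round(enbw, 2))  # -6.02 -4.26 1.5
```

For the periodic Hann window S1 = N/2 and S2 = 3N/8 exactly, which is where the 6.02 dB, 4.26 dB, and 1.5 come from.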

Regarding the number of spread bins for a certain window function, a simple Matlab script performing FFT with windowing can do the job:

% Here you create an input
fs = 1024;
nfft = 1024;
fin = 16;
cycles = 16;
vin = sin(2.*pi.*fin.*(0:1:nfft-1)./fs);
% Here you perform FFT with and without windowing
ss = abs(fft(vin(1:nfft)))/(nfft/2);
w = 0.5*(1 - cos(2*pi*(0:nfft-1)/nfft));
cg = sum(w)/nfft;
sshann = abs(fft(vin(1:nfft).*w))/(nfft/2)/cg;
% Here you do the plot
figure(1)
subplot(1,2,1)
stem(ss);
title('Rectangular');
xlim([1 25]);
grid on;
subplot(1,2,2)
stem(sshann);
xlim([1 25]);
title('Hann');
grid on;


Then comes the plot: Fig.3 FFT with rectangular and Hann windows (for the Hann window, signal bins = 3)

Sorry for the boring and not well-written post (though I’ve tried my best). Fig.4 is just for a smile when one finishes reading all the above dull words.

Last but not least,  may Mr. Fourier’s wisdom be always with us! Fig.4 A sketch of Mr. Fourier

[1] Frederick J. Harris, “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform,” Proceedings of the IEEE, vol. 66, pp. 51-83, January 1978.

[2] G. Heinzel, A. Rüdiger, and R. Schilling, “Spectrum and spectral density estimation by the Discrete Fourier transform (DFT), including a comprehensive list of window functions and some new flat-top windows,” February 15, 2002.

## Capacitor as a Discrete-Time Resistor

Prof. Ali has a column called “Circuit Intuitions” in the IEEE Solid-State Circuits Magazine. This time he wrote about the capacitor as a resistor, which helped me understand this property, recognized by James Clerk Maxwell 140 years ago. In this post, I try to write down what I’ve learned on this topic.

First, let’s look at Fig.1 and ask ourselves the following question: between (a) and (b), which one is the integrator and which one the gain stage? Fig.1 (a) Integrator; (b) Gain stage

Fig.1(a) is the integrator and Fig.1(b) the gain stage. The two working modes of the integrator and the gain stage are depicted in Fig.2 and Fig.3, respectively. Fig.2 Two modes of the integrator: sample and integrate. Fig.3 Two modes of the gain stage: sample and amplify

Now it’s time to take a close look at the switched-capacitor part of the integrator. Why does it act like a resistor?

First, let’s simplify the parasitic-insensitive version of Fig.4(a) to the straightforward implementation of Fig.4(b). In the first half of the period, S1 is closed and S2 is open. The capacitor is connected to V1, acquiring a charge of Q1=C*V1. In the second half, S1 is open and S2 is closed. The capacitor is connected to V2, storing a charge of Q2=C*V2. During this period, the total charge transferred from V1 to V2 is C*(V1-V2). In the next cycle, the capacitor is again connected to V1, replenishing its charge back to C*V1. Then it transfers a charge of C*(V1-V2) to V2. Accordingly, the average current flowing from source V1 to source V2 equals the charge moved in one period divided by the period: $I_{avg}=\frac{C(V_1-V_2)}{T}=\frac{V_1-V_2}{T/C}$

We can therefore view the discrete-time circuit as a resistor equal to $R_{eq}=\frac{T}{C}$

There are several considerations we shall pay attention to:

1. V1/V2 should vary more slowly than the rate of switching.
2. T/2 should be long enough for the capacitor to fully charge/discharge to the intended levels.
3. Unlike a resistor, only the average current is matched, not the instantaneous current.
4. … (to be discovered by oneself during real implementation) Fig.4: (a) A capacitor and four switches that act like a resistor; (b) A straightforward implementation of (a); (c) S1 is closed and S2 is open; (d) S1 is open and S2 is closed.
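The charge bookkeeping above fits in a few lines of Python (the component values are illustrative, not from the article):

```python
# Charge bookkeeping for the switched capacitor above (illustrative values).
C = 1e-12                     # 1 pF
T = 1e-6                      # switching period -> Req = T/C = 1 MOhm
V1, V2 = 1.0, 0.3

q_cycle = C * (V1 - V2)       # charge moved from V1 to V2 each period
i_avg = q_cycle / T           # average current
r_eq = (V1 - V2) / i_avg      # behaves like a resistor of value T/C

print(f"Iavg = {i_avg:.1e} A, Req = {r_eq:.1e} Ohm")
```

Note that r_eq comes out as T/C regardless of the voltages, which is exactly the point: the equivalent resistance is set by the clock period and the capacitor alone.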

Knowing the switched-capacitor version of a resistor, Fig.5 gives two types of integrator: continuous and discrete. A comparison between the two reveals one of the advantages of the latter: while the product of R and C may vary by as much as 10% over process, the ratio of two capacitances typically varies by less than 0.1%. OK, let’s also be fair and name one disadvantage of the discrete equivalent: it is jitter-sensitive. Fig.5 (a) Continuous-time integrator; (b) discrete-time (switched-capacitor) integrator

Finally, allow me to copy part of Prof.Ali’s summary in the article – “If we view a resistor as an element that transfers charge from one terminal to another at a constant rate, we can implement it using a capacitor and two switches.”

## A Brief Review On the Orders of PLL

The simplest PLL, as is shown in Fig.1, consists of a phase detector (PD) and a voltage controlled oscillator (VCO). Via a negative feedback loop, the PD compares the phases of OUT and IN, generating an error voltage that varies the VCO frequency until the phases are aligned. Fig.1 A 1st-order PLL

This topology, however, must be modified because the output of PD consists of a dc component (desirable) and high-frequency components (undesirable). The control voltage of VCO must remain quiet in the steady state, which means the PD output should be filtered. Therefore, a 1st-order low-pass RC filter is interposed between the PD and the VCO, as is shown in Fig.2. Fig.2 A 2nd-order PLL with a low-pass RC filter

PLLs are best analyzed in the phase domain (Fig.3). It is instructive to calculate the phase transfer function from the input to the output. The ideal PD can be modeled as the cascade of a summing node and a gain stage, because the dc value of the PD output is proportional to the phase difference of the input and output. The VCO output frequency is proportional to the control voltage. Since phase is the integral of the frequency, the VCO acts as an ideal integrator which receives a voltage and outputs a phase signal. Fig.3 The phase domain model of PLL

The closed-loop transfer function of the 2nd-order PLL shown in Fig.2 can be written as $\frac{\phi_{out}(s)}{\phi_{in}(s)}=\frac{K_P K_V \omega_{RC}}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}}$

where $\omega_{RC}=1/RC$. The phase error has the following transfer function $\frac{\phi_{e}(s)}{\phi_{in}(s)}=\frac{s^2+\omega_{RC}s}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}}$

If the input is a sinusoid of constant angular frequency ωi, its phase ramps linearly with time at a rate of ωi. Thus, the Laplace-domain representation of the input phase is $\phi_{in}(s) = \omega_i / s^2$. From the final value theorem, the steady-state phase error is $\phi_e(t=\infty) = \lim_{s \to 0} s \phi_e(s) = \lim_{s \to 0} \frac{\omega_i(s+\omega_{RC})}{s^2+\omega_{RC}s+K_P K_V \omega_{RC}} = \frac{\omega_i}{K_P K_V}$
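The final-value-theorem result can be sanity-checked numerically; the gains and the filter pole below are assumed illustrative values, not from any particular PLL:

```python
import numpy as np

# Numeric check of the final-value-theorem result above.
Kp = 0.5                      # PD gain, V/rad (assumed)
Kv = 2 * np.pi * 10e6         # VCO gain, rad/s/V (assumed)
w_rc = 2 * np.pi * 100e3      # loop-filter pole (assumed)
w_i = 2 * np.pi * 1e6         # input angular frequency (assumed)

def s_phi_e(s):
    """s * phi_e(s) with phi_in(s) = w_i / s^2."""
    return w_i * (s + w_rc) / (s**2 + w_rc * s + Kp * Kv * w_rc)

print(s_phi_e(1e-3), w_i / (Kp * Kv))  # both approach w_i/(Kp*Kv) = 0.2 rad
```

Evaluating s·Φe(s) at a very small s indeed converges to ωi/(KPKV), the finite static phase error of the 2nd-order loop.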

As can be seen, to lower the phase error, KPKV must be increased. Moreover, as the input frequency of the PLL varies, so does the phase error. Consequently, to eliminate the phase error, a pole at the origin can be introduced: the RC loop filter is replaced by an integrator. This leads to the popular architecture, the charge-pump PLL (Fig.4), which comprises a phase/frequency detector, a charge pump, and a VCO. Fig.4 A 2nd-order CPPLL

As long as the loop dynamics are much slower than the signal, the charge pump can be treated as a continuous-time integrator. The phase model of the CPPLL is shown in Fig.5. Writing the transfer function and doing some calculation, the phase error is confirmed to be eliminated. However, one must remember that two integrators now sit in the forward path, each contributing a constant phase shift of 90°. It is frightening to see that the phase curve is a straight line at -180° for a negative feedback system. Fig.5 Phase model of a simple CPPLL

In order to stabilize the system, a zero is introduced by adding a resistor in series with the charge pump capacitor (Fig.6). Placing the zero before the gain crossover frequency helps to lift the phase curve up. Fig.6 A 2nd-order CPPLL with a zero

The compensated PLL suffers from a critical drawback. Each time a current is injected into the RC branch, the control voltage of the VCO experiences a large jump. Even in the locked condition, the mismatch between the charge and discharge currents introduces voltage jumps in the control voltage. The resulting ripple disturbs the VCO. To relax this issue, a second capacitor is commonly tied between the control line and ground (Fig.7). Fig.7 A 3rd-order CPPLL

Finally, the PLL becomes a 3rd-order system. Don’t worry about the phase margin too much, as long as the zero, the unity-gain frequency, and the 3rd pole are positioned well (Fig.8). Fig.8 Gain plot of a 3rd-order CPPLL

The author refers to two books for writing this post: 1) Behzad Razavi, Design of analog CMOS Integrated Circuits; 2) Ali Hajimiri and Thomas H. Lee, The design of low noise oscillators.

## Stabilizing a 2-Stage Amplifier

Stabilizing an amplifier is not an easy task. At least for me, I used to be a SPICE slave: mechanically changing some component’s parameter and running a simulation to check the result, again and again and again … until the moment some basic analyses saved me! As with the post on the calculation of phase margin, I write a memo here for ease of reference.

1. Usually the most difficult condition is unity-gain feedback. As Fig.1 shows, the closed-loop bandwidth is normally smaller than or equal to the unity-gain bandwidth. This means the loop reaches its maximum phase drop at the unity-gain point, making unity-gain feedback the most difficult case for stability. Fig.1 Loop gain is the difference between open-loop gain and closed-loop gain on a log scale.

2. Adding a Miller capacitor between the two cascaded stages is a common technique. As Fig.2 shows, K is the ratio between the second pole and the GBW, which determines the phase margin (referring to this post). In addition, to push the right-half-plane (RHP) zero well beyond the second pole, it’s better to keep Cm much smaller than C2. This further puts a demand on the ratio between gm1 and gm2. Finally, the trade-off between noise/speed (small gm1) and current consumption (large gm2) lands on the desk (as expected). Fig.2 The small-signal model of a 2-stage amplifier with Miller compensation.

3. Introducing a nulling resistor is the most popular approach to mitigating the positive zero. Compared to Fig.2, the poles (neglecting the third, non-dominant one) and the GBW won’t change; only the positive zero moves. How to calculate the new zero? Fig.3 demonstrates a simple way, introduced by Prof. Razavi in his analog design book. One can either use the zero to (try to) cancel the second pole or simply push it to infinity. Fig.3 A simple way to calculate the zero introduced by the nulling resistor and the Miller capacitor

4. Ahuja compensation is another way to abolish the positive zero. The cause of the positive zero is the feedforward current through Cm. To abolish the zero, we have to cut the feedforward path and create a unidirectional feedback through Cm. Adding a resistor as in Fig.3 is one way to mitigate the effect of the feedforward current. Another approach uses a cascode current buffer to pass the small-signal feedback current but cut the feedforward current, as depicted in Fig.4. People name this approach after the author, Ahuja. Fig.4 Ahuja compensation to abolish the positive zero introduced by the feedforward current

5. A good example of using Ahuja compensation is compensating a 2-stage folded-cascode amplifier. As shown in Fig.5, by utilizing the existing cascode stage, Ahuja compensation can be implemented without any additional biasing current. There are two ways to place the Miller capacitor, which normally provide the same poles but different zeros. In REF, the poles and zeros of the two approaches are calculated based on reasonable assumptions. Out of curiosity, I also drew the small-signal model and derived the transfer function. With some patience I finally reached the same result as given in REF. Fig.5 Two approaches to compensate the folded-cascode amplifier and the corresponding small-signal model

6. The bloody equations for the poles and zeros of the two Ahuja approaches are shared in Fig.6. Though these equations look very dry at the moment, one will appreciate them during an actual design. They did help me stabilize an amplifier with a varying capacitive load. One thing worth looking at is the ratio between the natural frequency of the two complex non-dominant poles and the GBW. Considering that Cm and C2 are normally of the same order and C1 is much smaller, the ratio ends up relatively large and the phase margin can be guaranteed. $\frac{\omega_n}{GBW} \approx \sqrt{\frac{C^2_m}{C_1 C_2}}$ Fig.6 The bloody equations of poles and zeros
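To put some numbers on points 3 and 6, here is a short Python sketch; the element values are assumed for illustration, and it uses the standard Miller-compensation results that the RHP zero sits at gm2/Cm and that Rz = 1/gm2 pushes it to infinity:

```python
import numpy as np

# Plug assumed element values into two of the expressions above.
gm2 = 1e-3                    # second-stage transconductance (assumed)
Cm, C1, C2 = 1e-12, 0.1e-12, 2e-12

z_rhp = gm2 / Cm              # RHP zero without the nulling resistor (rad/s)
Rz = 1 / gm2                  # choosing Rz = 1/gm2 pushes this zero to infinity
ratio = np.sqrt(Cm**2 / (C1 * C2))   # Ahuja: wn/GBW

print(f"RHP zero at {z_rhp:.1e} rad/s; wn/GBW = {ratio:.2f}")
```

With Cm comparable to C2 and C1 ten times smaller, wn/GBW already exceeds 2, which is why the Ahuja scheme tolerates a varying load so well.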

Oh…finally, it took me quite some time to reach here. The END.

Reference:

[1] B. K. Ahuja, “An improved frequency compensation technique for CMOS operational amplifiers,” JSSC, 1983.

[2] U. Dasgupta, “Issues in ‘Ahuja’ frequency compensation technique,” IEEE International Symposium on Radio-Frequency Integration Technology, 2009.


## Gm/ID versus IC

According to the EKV model, the inversion coefficient, IC, is defined as the ratio of the drain current to a specific drain current, IDSspec, at which VGS-VT = 2n*kT/q. To know the IC, I would have to set up a separate testbench to simulate IDSspec. This is inefficient, especially in the initial phase of a design, which may go through many changes.

On the other hand, the annotation of the DC operating point provided by Cadence is really helpful. We can even have gm/ID annotated beside the transistor (it is called ‘gmoverid’ in the simulator). Hence, a curve showing the gm/ID-IC relationship is informative, and Mr. Sansen has provided one! It is plotted in Fig.1. Fig.1 Gm/ID*nUT versus IC

In order to derive the relationship, we first need to recall the following equations: $e^{\sqrt{IC}} = e^v+1$ $IC = \frac{I_{DS}}{I_{DSspec}}$ $v = \frac{V_{GS}-V_T}{2nU_T}$

Based on the above equations, gm/ID can be derived: $\frac{g_m}{I_{D}} = \frac{\partial{I_{DS}}}{\partial{V_{GS}}} \frac{1}{I_{DS}} = \frac{I_{DSspec}\,\partial{IC}}{2nU_T\,\partial v} \frac{1}{I_{DS}} = \frac{\partial{IC}}{\partial v} \frac{1}{2nU_T \, IC} = \frac{1-e^{-\sqrt{IC}}}{nU_T \sqrt{IC}}$

Now we may have a rough idea of IC based on the annotated gm/ID (assuming nUT is about 35 mV).

gm/ID (1/V):   25     18     9
IC:            0.1    1      10
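The table can be reproduced by evaluating the derived formula directly (a quick Python/NumPy check, with n·UT taken as the 35 mV assumed above):

```python
import numpy as np

# Evaluate gm/ID = (1 - exp(-sqrt(IC))) / (n*UT*sqrt(IC)) at the table's
# IC values, with n*UT ~ 35 mV as assumed in the post.
nUT = 0.035
ic = np.array([0.1, 1.0, 10.0])
gm_id = (1 - np.exp(-np.sqrt(ic))) / (nUT * np.sqrt(ic))
print(np.round(gm_id, 1))  # ~ [24.5 18.1  8.7]
```

The results land close to the rounded 25/18/9 figures in the table, so the annotated gm/ID really does give a quick read of the inversion region.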

Reference

W. Sansen, “Minimum power in analog amplifying blocks – presenting a design procedure,” IEEE Solid-State Circuits Magazine, Fall 2015.