## Noise Effect On The Probability of Comparator Decision

In the previous post, we explained why a normal distribution with standard deviation $\sigma$ is used to characterize the circuit's input-referred noise. In this post, we will calculate the probability of a comparator decision (High or Low) in the presence of noise.

We ignore the dc offset of the comparator and consider only the thermal-noise effect. The distribution of voltages presented to the comparator input, as shown in Fig. 1, can then be represented by a normal distribution whose standard deviation equals the $\sigma$ of the comparator input-referred noise and whose mean is shifted by the signal $V_C$.

Fig. 1 Input-referred distribution of voltages presented to the comparator. The shaded area represents the probability that the comparator interprets the input voltage as Low.

The probability of a decision Low is simply the area under the curve to the left of zero, indicated by the shaded region in Fig. 1. In terms of the error function, this area is given by (referring to P1, P2, and P3 in the Appendix) $P(\textit{Low})=\frac{1}{2}+\frac{1}{2} erf(\frac{-V_C}{\sqrt{2}~ \sigma})$.

Noting that the error function is an odd function (P5 in the Appendix), we can further write $P(\textit{Low})=\frac{1}{2}-\frac{1}{2} erf(\frac{V_C}{\sqrt{2}~ \sigma})$.

Finally, based on the above result, the probability can be further calculated in terms of the complementary error function by (P4 in Appendix) $P(\textit{Low})=\frac{1}{2} erfc(\frac{V_C}{\sqrt{2}~ \sigma})$.

Similarly, the probability of a decision High is given by the remaining unshaded area under the curve in Fig.1, which is $P(\textit{High})=\frac{1}{2} erfc(\frac{-V_C}{\sqrt{2}~ \sigma})$.
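As a quick numeric sketch of these expressions (the voltage and noise numbers below are made up for illustration), Python's `math.erfc` evaluates both probabilities directly:

```python
import math

def p_low(vc, sigma):
    """Probability of a Low decision: 0.5 * erfc(Vc / (sqrt(2)*sigma))."""
    return 0.5 * math.erfc(vc / (math.sqrt(2) * sigma))

def p_high(vc, sigma):
    """Probability of a High decision: 0.5 * erfc(-Vc / (sqrt(2)*sigma))."""
    return 0.5 * math.erfc(-vc / (math.sqrt(2) * sigma))

# Example: input 1 mV above the threshold, 1 mV rms input-referred noise
sigma = 1e-3
print(p_low(1e-3, sigma))   # ~0.159: decision errors are still frequent
print(p_high(1e-3, sigma))  # ~0.841
print(p_low(0.0, sigma))    # 0.5: exactly at threshold, a coin flip
```

Note that $P(\textit{Low})+P(\textit{High})=1$ for any $V_C$, since the two areas tile the whole curve.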

In the next post, we will use the derived equations to show noise effect on the distribution of ADC output codes.

Appendix

Some equations on standard normal distribution:

1. Probability density function (PDF) $\phi(x)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{x^2}{2}}$.

If we denote the mean as $\mu$ and the standard deviation as $\sigma$, the PDF of a general normal distribution can be expressed as $f(x)= \frac{1}{\sigma} \phi(\frac{x-\mu}{\sigma})$.

2. Cumulative distribution function (CDF) $\Phi(x)=P[X \leq x] = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}}dt$.

3. Error function $erf(x)=\frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt$.

Hence, the CDF can be further expressed using error function $\Phi(x)=\frac{1}{2}+\frac{1}{2} erf(\frac{x}{\sqrt{2}})$.

4. Complementary error function $erfc(x)=\frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}dt = 1 - erf(x)$.

5. The error function is an odd function: $erf(-x)=- erf(x)$.

Posted in Analog Design | 1 Comment

## Normal Distribution and Input-Referred Noise

Normal distribution is frequently assumed when we do circuit analysis.

• Why?

Because, loosely speaking, the sum of a large number of random variables converges to the normal distribution.

• Under what condition is this true?

The central limit theorem deals with this point. In [1], it is stated that “the normalized sum of a large number of mutually independent random variables with zero means and finite variances tends to the normal probability distribution function provided that the individual variances are small compared to the total sum of variances”.

• Why do people sometimes assume a normal distribution and use $\sigma$ to characterize circuit input-referred noise?

It’s understandable that people use $\sigma$ to characterize the offset of a circuit, because many random effects during fabrication tend toward a normal distribution. But when it comes to noise, $4kTR \cdot BW$ or $kT/C$ pops up in our minds. Why $\sigma$?

In the frequency domain, it’s straightforward to do noise analysis given the noise power spectral density and the bandwidth. However, when we move to the time domain, we need the assumption of a normal distribution and its $\sigma$.

• How do we link time-domain noise to frequency-domain noise? Or, to put it another way, how is $\sigma$ related to the noise power spectral density $n^2(f)$ and the bandwidth $F_{max}$?

Cadence should have the answer, because they provide transient noise simulation. And indeed, I found the answer in their application note [2]. 😉

Let’s take white noise as an example for simplicity (in this case, $n^2(f)=n^2$). In the time domain, a white noise signal n(t) is approximated as $n(t)=\sigma \cdot \eta(t, \Delta t)$,

where $\eta(t, \Delta t)$ is a random number with standard normal distribution updated with time interval $\Delta t$. The noise signal amplitude and update time interval are $\sigma = \sqrt{n^2 \cdot F_{max}}$, $\Delta t = \frac{1}{2F_{max}}$.

Let’s then verify it. The auto-correlation function of this noise signal is calculated to be $R_n(\tau)=\sigma^2 \Lambda(\frac{\tau}{\Delta t})$,

where $\Lambda$ is a triangular pulse function of width $\Delta t$. The power spectrum of n(t) can then be calculated as a Fourier transform of the auto-correlation function $n^2(f)=\sigma^2 \cdot \Delta t \cdot sinc^2(f \cdot \Delta t)$.

Finally, the total noise power can be obtained by integrating over frequency: $\int_0^\infty n^2(f)\,df = n^2 \cdot F_{max}$.

Fig. 2 The noise signal, its auto-correlation function, and its spectral density.
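As a sanity check on these relations, here is a minimal Python sketch (the PSD and bandwidth numbers are made up for illustration). It generates the piecewise-constant noise $n(t)=\sigma \cdot \eta(t, \Delta t)$ and confirms that the sample variance, i.e. the total noise power, comes back as $n^2 \cdot F_{max}$:

```python
import random
import statistics

random.seed(0)

# Illustrative numbers (not from the post): white-noise PSD n^2 = 1e-16 V^2/Hz,
# noise bandwidth Fmax = 1 GHz.
n2 = 1e-16
f_max = 1e9

sigma = (n2 * f_max) ** 0.5   # rms amplitude, sigma = sqrt(n^2 * Fmax) ≈ 0.32 mV
dt = 1.0 / (2.0 * f_max)      # update interval of the random samples, 0.5 ns

# n(t) = sigma * eta(t, dt): one independent standard-normal draw per interval.
samples = [sigma * random.gauss(0.0, 1.0) for _ in range(200_000)]

# The total noise power (variance) should equal n^2 * Fmax = sigma^2.
power = statistics.pvariance(samples)
print(sigma * sigma, power)   # the two agree within a few percent
```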

For a more detailed explanation, please refer to [2].

Reference

[1] H. Stark and J. W. Woods, Probability and Random Processes with Applications to Signal Processing, 3rd edition, Pearson Education, 2009.

[2] Cadence, “Application notes on direct time-domain noise analysis using Virtuoso Spectre”, Version 1.0, July 2006.

Posted in Analog Design | 2 Comments

## Stay Simple – Square-Law Equation Related

The 1st post in 2013. Happy middle new year! 😉

Nowadays most of us in academia are designing analog circuits in deep-submicron nodes using short-channel transistors. Putting them together with the digital, building up a system, and selling it with extraordinary performance at top conferences or in journals: that's the dream (and sometimes the trigger of a real nightmare) of a PhD student…

So, do you miss the old times when you designed circuits using long-channel transistors?

I do! The square-law equation is so neat that the relation among the drain current, the effective voltage, and the transconductance can be expressed simply as $I_D = \frac{V_{GS}-V_T}{2} g_m$

For a given effective gate-source voltage, the transconductance follows the bias current linearly. It is straightforward to tell how much power you will pay for your target gain.

Then came short-channel transistors, together with low supply voltages ;-). Life is no longer that straightforward. In [1] and [2], the authors introduce a parameter $V_{eff}$, defined by $V_{eff}=\frac{I_D}{g_m}$

They also compare this parameter against the square-law equation using the following figure.

Yes! Modern short-channel transistors are often biased in the transition region between the sub-threshold and saturation regions, which makes the classical equations useless! The proposal of $V_{eff}$ is elegant in that it provides a simple way to analyze the power bounds of analog circuits in modern CMOS [1].

If you ponder on the above simulated curves, do you remember something which also has the beauty of simplicity and continuity?

Yes. The EKV model [3]! In this model, $V_{eff}$ can be expressed in terms of the inversion coefficient as $V_{eff}= \frac{I_D}{g_m} = n U_T (\sqrt{IC + 0.25}+0.5)$

And the effective gate-source voltage is expressed as $V_{GS}-V_T= 2 n U_T \textnormal{ln}(e^{\sqrt{IC}}-1)$

Out of curiosity, I plotted $V_{eff}$ versus $V_{GS}-V_T$ based on the EKV model, together with the simulated curves, which I obtained using a similar method as in the references above but for different processes. The figure is shown below.

Fig. 2 $V_{eff}$ versus $V_{GS}-V_T$ with the EKV model included (different inversion regions are also indicated).
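For the curious, the two EKV expressions above are easy to tabulate. A minimal Python sketch, assuming an illustrative slope factor n = 1.3 and U_T = 26 mV (assumptions of mine, not the values behind my plots):

```python
import math

# Assumed parameters for illustration: slope factor n = 1.3, U_T = 26 mV.
n = 1.3
u_t = 0.026

def v_eff(ic):
    """EKV-based Veff = ID/gm = n*U_T*(sqrt(IC + 0.25) + 0.5)."""
    return n * u_t * (math.sqrt(ic + 0.25) + 0.5)

def v_od(ic):
    """Effective gate-source voltage VGS - VT = 2*n*U_T*ln(exp(sqrt(IC)) - 1)."""
    return 2 * n * u_t * math.log(math.exp(math.sqrt(ic)) - 1)

for ic in (0.01, 0.1, 1.0, 10.0, 100.0):  # weak -> moderate -> strong inversion
    print(f"IC={ic:6.2f}  VGS-VT={v_od(ic)*1e3:7.1f} mV  Veff={v_eff(ic)*1e3:6.1f} mV")
```

Two limits are worth checking: in weak inversion $V_{eff}$ approaches $n U_T$, while deep in strong inversion $V_{eff} \approx (V_{GS}-V_T)/2$, recovering the square law.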

Though the analog world can’t be as simple as 0 or 1, still the simpler the better!

Reference

[1] T. Sundström, B. Murmann, and C. Svensson, “Power dissipation bounds for high-speed Nyquist analog-to-digital converters”, IEEE TCAS-I, 2008.

[2] C. Svensson and J. J. Wikner, “Power consumption of analog circuits: a tutorial”, Analog Integrated Circuits and Signal Processing, Springer, 2010.

[3] D. M. Binkley, B. J. Blalock, and J. M. Rochelle, “Optimizing drain current, inversion level, and channel length in analog CMOS design”, Analog Integrated Circuits and Signal Processing, Springer, 2006.

Posted in MOS Models | 1 Comment

## Brief Study of Dither C: Dithered DNL

In the ‘Dither’ series B post, I showed that dither reduces harmonic distortion by decorrelating the code mapping of the analog signal from the transfer curve. Taking a step further, we might say that the main purpose of dither is to randomize the DNL errors of the converter.

It seems, then, that DNL is somehow related to harmonic distortion. In the ‘ADC performance’ series A post, I discussed the relationship between DNL and SNR, where I treated the DNL error as uniformly distributed noise. However, the story told by the DNL plot is not that simple. In this post, I will try to explain more about DNL, together with a further discussion of dither.

1. Dynamic effects of DNL from a distortion point of view

The location of a DNL error matters. For example, as shown in Figure 1, a converter may have a DNL error of +2 LSB at a code near −FS, which is quite a large error. However, its effect will be minimal for a converter that rarely uses codes near full scale. Conversely, a converter may have a DNL error of +0.25 LSB for each of 4 codes near midscale, which adds up to +1 LSB of transfer-function error around that location. If a converter repeatedly operates in the middle of its range, these four errors can cause significant dynamic distortion.

Thus a blanket statement about the INL or DNL of a converter, without additional information (location, frequency of occurrence, etc.), is almost useless.
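As a toy illustration of why location matters (the DNL profile below is hypothetical, echoing the midscale example above): INL at a code is the running sum of the DNL errors below it, so four clustered +0.25 LSB DNL errors produce a +1 LSB bump in the transfer curve.

```python
from itertools import accumulate

# Hypothetical toy DNL profile (in LSB): four consecutive midscale codes
# each with +0.25 LSB error, zero elsewhere.
dnl = [0.0] * 16
dnl[6:10] = [0.25, 0.25, 0.25, 0.25]

# INL at code k is the cumulative sum of the DNL errors up to code k.
inl = list(accumulate(dnl))

print(max(dnl))  # 0.25: each individual step error looks harmless
print(max(inl))  # 1.0: clustered errors pile up into a local transfer-curve bow
```

The same total DNL spread across the code range would leave the INL near zero everywhere, which is exactly what dithering the DNL errors aims for.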

2. Dithered DNL – An example of a subranging pipelined ADC from ADI [2]

The 14-bit 105-MSPS AD6645 block diagram:

The problem – significant DNL errors occur at the ADC1 transition points:

The solution – a dither noise with a peak-to-peak amplitude covering about two ADC1 transitions is added to the input. The DNL is not significantly improved by higher levels of noise.

The result – undithered DNL versus dithered DNL:

I will end the discussion of dither at this point. I hope I will have a chance to try the dither method in the future and gain a better understanding of it. Even after three posts, I'm still a little puzzled by dither… It's really not that easy to deal with noise!

Reference

[1] Brad Brannon, “Overcoming converter nonlinearities with dither”, Analog Devices, AN-410.

[2] Walt Kester, “The good, the bad, and the ugly aspects of ADC input noise – is no noise good noise?”, Analog Devices, MT-004.

Posted in Data Converter | 4 Comments

## Brief Study of Dither B: Dither

In the ‘Dither’ series A post, I mentioned that quantization introduces large harmonics into a low-level sinusoidal signal. Let’s look at another example in Figure 1 [2].

Figure 1. Quantization of small signals results in: (a) a clipped output; (b) no output [2].

Adding dither to the circuit means adding noise to the circuit. Now let’s look at Figure 2, which shows the effects of dither on the situation of Figure 1. The dither noise causes the ADC to make transitions, and the sine wave can be recovered with short-term averaging.

With no dither, each analog input voltage is assigned one and only one code; with dither, each analog input voltage is assigned a probability distribution for being in one of several digital codes [1]. It was also mentioned that

• the optimum dither is white noise at a voltage level of about 1/3 LSB rms
• dither is very effective in reducing harmonic distortion for signal levels up to about 10 LSB.

What do you pay for a reduction in harmonic distortion by adding dither?

Answer: a slightly degraded signal-to-noise ratio and, if one uses time averaging, an increased effective conversion time. (SFDR/THD is traded for SNR.)

Figure 3. Effects of dither on a 5-LSB-Vpp signal: (L) without dither; (R) with dither (averaged 25 times).
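The recovery-by-averaging effect is easy to reproduce numerically. Below is a minimal sketch with an ideal mid-tread quantizer and a 0.4-LSB sub-threshold sine (my own illustrative setup, not the one behind the figures): without dither the output is stuck at one code, while ~1/3-LSB-rms Gaussian dither plus short-term averaging brings the sine back.

```python
import math
import random

random.seed(1)
lsb = 1.0
amp = 0.4 * lsb         # sub-LSB sine, never crossing a code threshold on its own
n_pts, n_avg = 64, 400  # samples per period, short-term averaging runs

def quantize(v):
    """Ideal mid-tread quantizer with a 1-LSB step (round to the nearest code)."""
    return round(v / lsb) * lsb

signal = [amp * math.sin(2 * math.pi * k / n_pts) for k in range(n_pts)]

# Without dither the sine never reaches a threshold: the output is stuck at code 0.
hard = [quantize(v) for v in signal]

# With ~1/3-LSB-rms Gaussian dither plus short-term averaging, the sine re-emerges.
dithered = [
    sum(quantize(v + random.gauss(0.0, lsb / 3)) for _ in range(n_avg)) / n_avg
    for v in signal
]

err_hard = max(abs(h - v) for h, v in zip(hard, signal))
err_dith = max(abs(d - v) for d, v in zip(dithered, signal))
print(err_hard, err_dith)  # the dithered-and-averaged error is much smaller
```

The price shows up exactly as stated above: each individual dithered conversion is noisier, and the averaging costs effective conversion time.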

Reference

1. J. Vanderkooy and S. P. Lipshitz, Resolution below the least significant bit in digital systems with dither, J. Audio Eng. Soc., 1984.
2. Leon Melkonian, Improving A/D converter performance using dither, National Semiconductor (now TI), AN-804, 1992.
Posted in Data Converter | 1 Comment

## Brief Study of Dither A: Before Dither

Does it make sense to add noise to enhance the converter’s performance?

Dither – a puzzle to me for quite a long time. Recently I had time to read several related papers and application notes. I would like to share some basic understanding here (maybe still superficial…) and, as always, show some collected figures from the giants ;-).

Let me start by recommending an old seminal paper from 1984 [1]. Then let’s write down a list of things to know ‘before dither’:

• Sampling and quantization are the two inherent processes of ADC.
• With ideal sampling, no noise will be added to the signal. (Ideal sampling refers to sampling a band-limited signal at a frequency which is more than twice the bandwidth of the signal – Nyquist sampling rule.)
• Unlike sampling, quantization inherently adds noise to the signal. The part with dashed lines in Figure 1 indicates the error introduced by quantization:
• For large-amplitude or complex (complex here in terms of frequency content) signals, the quantization noise will be white [1].
• For low-level or simple (simple here in terms of frequency content) signals, the quantization error cannot be treated as white noise added to the input signal. Here, in Figure 2, we take a large-amplitude sinusoidal signal (a simple signal) as an example.
• Distortion is greatest in small-amplitude, simple signals. Here, in Figure 3, we take a 1-LSB-Vpp sinusoidal signal centered on a threshold between two codes as an example. The reconstructed waveform looks like a square wave!

Figure 3. Quantization of low-level signals results in severe error.
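The last bullet is easy to verify numerically. A minimal sketch with an ideal mid-riser quantizer, so that a code transition sits exactly at the center of the sine (my own illustrative setup): the 1-LSB-Vpp sine collapses to a two-level square wave.

```python
import math

lsb = 1.0
amp = 0.5 * lsb   # 1-LSB-Vpp sine centered on the code transition at 0

def quantize(v):
    """Ideal mid-riser quantizer: a code transition sits exactly at v = 0."""
    return (math.floor(v / lsb) + 0.5) * lsb

n_pts = 64
signal = [amp * math.sin(2 * math.pi * k / n_pts) for k in range(n_pts)]
codes = [quantize(v) for v in signal]

# The reconstructed waveform takes only two levels: a square wave.
print(sorted(set(codes)))  # [-0.5, 0.5]
```

A square wave at the input frequency means all the quantization error lands in odd harmonics of the signal, which is exactly the distortion that dither will later break up.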

Reference

1. J. Vanderkooy and S. P. Lipshitz, Resolution below the least significant bit in digital systems with dither, J. Audio Eng. Soc., 1984.
2. Leon Melkonian, Improving A/D converter performance using dither, National Semiconductor (now TI), AN-804, 1992.
Posted in Data Converter | 1 Comment

## The EKV MOS Model

Recently I’ve been reading some papers about low-voltage amplifiers, and I came to know the EKV MOS model, which is dedicated mainly to low-voltage and low-current analog design. I then read a short history of the EKV model written by one of its developers, C. C. Enz. To learn a little more about the model details, I read the authors' 1995 paper [1], a seminal paper which fully describes the model and also gave the model its name.

Reading that kind of paper, dealing mainly with physics and sometimes heavy mathematics, is, yes, a little painful.

The beauty of the EKV model to me is symmetry, continuity, and simplicity.

• Symmetry

Fig. 1 NMOS transistor with all voltages referred to the local substrate (bulk).

Different from the BSIM model, where all voltages are referred to the source terminal, in the EKV model the source, drain, and gate voltages are all referred to the local substrate. The bulk reference allows the model to be treated symmetrically with respect to source and drain (a symmetry that is also inherent in CMOS technologies).

• Continuity

The EKV model describes the behavior of the transistor in a continuous manner from low currents (weak inversion, moderate inversion) to large currents (strong inversion).

The drain current is derived and expressed as the difference between a forward component and a reverse component. It is exponential in weak inversion and quadratic in strong inversion. The current in moderate inversion is modeled using an appropriate interpolation function, resulting in a continuous expression valid from weak to strong inversion [1].

The inversion coefficient (IC): $IC = \frac{I_D}{I_0(\frac{W}{L})}$,

where the technology current $I_0$ is given by $I_0 = 2 \cdot n_0 \cdot \mu \cdot C_{OX} \cdot U ^2_T$. Normalizing to a fixed technology current gives a linear relationship between drain current and transistor sizing. Weak inversion corresponds to IC < 0.1, moderate inversion to 0.1 < IC < 10, and strong inversion to IC > 10.

Using the inversion coefficient enables analog design and design optimization in all regions of inversion. Fig. 2 illustrates estimated values of $V_{DSAT}$ and $V_{GS}-V_T$ for their respective IC values, to compare the different operating regions (weak/moderate/strong). Fig. 3 also shows the estimated $g_m/I_D$ as a function of IC using the EKV model.
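The $g_m/I_D$ estimate just mentioned is simply the reciprocal of the $V_{eff}$ interpolation. A small sketch with illustrative values of n and $U_T$ (my assumptions, not numbers extracted from the figures):

```python
import math

n, u_t = 1.3, 0.026   # assumed slope factor and room-temperature thermal voltage

def gm_over_id(ic):
    """EKV interpolation: gm/ID = 1 / (n*U_T*(sqrt(IC + 0.25) + 0.5))."""
    return 1.0 / (n * u_t * (math.sqrt(ic + 0.25) + 0.5))

for ic, region in [(0.01, "weak"), (1.0, "moderate"), (100.0, "strong")]:
    print(f"IC={ic:6.2f} ({region:8s})  gm/ID = {gm_over_id(ic):5.1f} S/A")
```

With these numbers the weak-inversion ceiling is $1/(n U_T) \approx 30$ S/A, and $g_m/I_D$ falls off roughly as $1/\sqrt{IC}$ in strong inversion, which is the familiar shape of the $g_m/I_D$ design curves.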

The general methodology based on inversion coefficient:

To get a more detailed feel for designing analog circuits using the inversion coefficient based on the EKV model, [2] is a good reference. In addition, [3] is a nice paper which discusses tradeoffs and optimization in analog CMOS design using the EKV model.

• Simplicity

Simplicity here mainly means a small number of parameters: “This model has only 9 physical parameters, 3 fine-tuning fitting coefficients, and 2 additional temperature parameters” [1]. 9+3+2 = 14, yes! Moreover, it makes it possible to optimize an analog design with a spreadsheet or a Matlab script.

Last but not least, one model may not be the answer for everything.

Reference

[1] C. C. Enz, F. Krummenacher, and E. A. Vittoz, “An analytical MOS transistor model valid in all regions of operation and dedicated to low-voltage and low-current applications”, Analog Integrated Circuits and Signal Processing, July 1995.

[2] D. M. Colombo, G. I. Wirth, and C. Fayomi, “Design methodology using inversion coefficient for low-voltage low-power CMOS voltage reference”, Proc. 23rd Symp. on Integrated Circuits and Systems Design, pp. 43-48, 2010.

[3] D. M. Binkley, “Tradeoffs and optimization in analog CMOS design”, Mixed Design of Integrated Circuits and Systems (MIXDES), June 2007.

Posted in MOS Models | 3 Comments