Normal Distribution and Input-Referred Noise

A normal distribution is frequently assumed when we do circuit analysis.

  • Why?

Because it is often said that the sum of a large number of random variables converges to a normal distribution.

  • Under what condition is this true?

The central limit theorem addresses this point. As stated in [1], the normalized sum of a large number of mutually independent random variables with zero means and finite variances tends to the normal probability distribution function, provided that the individual variances are small compared to the total variance.

Fig.1 PDF for sum of a large number of random variables

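The convergence described by the central limit theorem is easy to see numerically. Below is a quick sketch (my own, not from [1]): we sum many i.i.d. uniform random variables, normalize the sum, and check that the result behaves like a standard normal. The counts `N` and `TRIALS` are arbitrary illustrative choices.

```python
import random
import statistics

# Sum many i.i.d. Uniform(-0.5, 0.5) variables (zero mean, finite variance)
# and normalize; the CLT says the normalized sum approaches N(0, 1).
random.seed(0)

N = 50          # number of variables per sum (illustrative)
TRIALS = 20000  # number of normalized sums to draw (illustrative)

# Var of Uniform(-0.5, 0.5) is 1/12, so the sum has variance N/12.
scale = (N / 12) ** 0.5

samples = [sum(random.uniform(-0.5, 0.5) for _ in range(N)) / scale
           for _ in range(TRIALS)]

# For a standard normal: mean ≈ 0, std ≈ 1, and ≈ 68% of the mass
# falls within ±1 sigma.
within_1sigma = sum(abs(x) < 1 for x in samples) / TRIALS
print(statistics.mean(samples))   # close to 0
print(statistics.stdev(samples))  # close to 1
print(within_1sigma)              # close to 0.68
```

The same experiment with any other zero-mean, finite-variance distribution (e.g. coin flips) gives a similar result, which is exactly the point of Fig. 1.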
  • Why do people sometimes assume a normal distribution and use \sigma to characterize circuit input-referred noise?

It’s understandable that people use \sigma to characterize the offset of a circuit, because many random effects during fabrication tend toward a normal distribution. But when it comes to noise, 4kTR \cdot BW or kT/C pops up in our minds. Why \sigma?

In the frequency domain, it’s straightforward to do noise analysis given the noise power spectral density and the bandwidth. However, when we move to the time domain, we need the assumption of a normal distribution and its \sigma.

  • How do we link time-domain noise to frequency-domain noise? Or, to put it another way, how is \sigma related to the noise power spectral density n^2(f) and the bandwidth F_{max}?

Cadence should have the answer, since they provide transient noise simulation. And indeed, I found the answer in their application note [2]. 😉

Let’s take white noise as an example for simplicity (in this case, n^2(f)=n^2). In the time domain, a white noise signal n(t) is approximated as

n(t)=\sigma \cdot \eta(t, \Delta t),

where \eta(t, \Delta t) is a random number with standard normal distribution updated with time interval \Delta t. The noise signal amplitude and update time interval are

\sigma = \sqrt{n^2 \cdot F_{max}},

\Delta t = \frac{1}{2F_{max}}.
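This construction is simple enough to sketch in a few lines of Python (my own code, not Cadence's; the numerical values of n^2 and F_{max} are arbitrary illustrative choices). The signal holds a fresh N(0, \sigma^2) value for each interval \Delta t, and its sample variance should come out to \sigma^2 = n^2 \cdot F_{max}.

```python
import random
import statistics

# Model a white-noise source with PSD n^2 up to F_max as a
# piecewise-constant signal: a new N(0, sigma^2) value every Delta_t.
random.seed(1)

n2 = 1e-16      # noise PSD n^2 in V^2/Hz (illustrative value)
f_max = 1e9     # bandwidth F_max in Hz (illustrative value)

sigma = (n2 * f_max) ** 0.5   # sigma = sqrt(n^2 * F_max)
dt = 1.0 / (2.0 * f_max)      # update interval Delta_t = 1 / (2 F_max)

# One Gaussian draw per update interval generates the sequence n(t).
samples = [sigma * random.gauss(0.0, 1.0) for _ in range(100000)]

# The time-domain variance should match the total noise power n^2 * F_max.
var = statistics.pvariance(samples)
print(abs(var - n2 * f_max) / (n2 * f_max) < 0.05)  # True
```

Note that \Delta t = 1/(2F_{max}) is just the Nyquist sampling interval for a signal band-limited to F_{max}, which is why updating faster than this would not add any information.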

Let’s verify this. The auto-correlation function of this noise signal is

R_n(\tau)=\sigma^2 \cdot \Lambda\left(\frac{\tau}{\Delta t}\right),

where \Lambda is a triangular pulse that falls linearly from 1 at \tau = 0 to zero at |\tau| = \Delta t. The power spectrum of n(t) is then the Fourier transform of the auto-correlation function

n^2(f)=\sigma^2 \cdot \Delta t \cdot sinc^2(f \cdot \Delta t).
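The triangular auto-correlation can also be checked numerically (my own sketch, with \sigma normalized to 1 and an arbitrary oversampling factor `M`): if we sample the piecewise-constant signal `M` times per update interval, the measured correlation at lag k should be \sigma^2 (1 - k/M) for lags up to \Delta t and zero beyond.

```python
import random

# Numerically check R(tau) = sigma^2 * (1 - |tau|/Delta_t) for |tau| <= Delta_t.
random.seed(2)

M = 4                  # samples per update interval Delta_t (oversampling)
SEGMENTS = 50000       # number of Delta_t segments to generate
sigma = 1.0            # normalized noise amplitude

# Piecewise-constant noise: each segment holds one N(0, sigma^2) value.
x = []
for _ in range(SEGMENTS):
    v = sigma * random.gauss(0.0, 1.0)
    x.extend([v] * M)

def autocorr(seq, lag):
    """Biased sample auto-correlation at the given integer lag."""
    n = len(seq) - lag
    return sum(seq[i] * seq[i + lag] for i in range(n)) / n

# Lags 0..M correspond to tau = 0 .. Delta_t; the triangle predicts
# sigma^2 * (1 - lag/M), reaching zero exactly at lag = M (tau = Delta_t).
for lag in range(M + 1):
    expected = sigma**2 * (1 - lag / M)
    print(lag, autocorr(x, lag), expected)  # measured ≈ expected
```

The linear roll-off comes from a simple counting argument: two samples separated by \tau share the same held value only when both land in the same \Delta t segment, which happens with probability 1 - |\tau|/\Delta t.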

Finally, the total noise power is obtained by integrating over frequency:

\int_0^\infty n^2(f)\,df = n^2 \cdot F_{max}.

Fig.2 The noise signal, its auto correlation function, and spectral density [2]

For more detailed explanation, please refer to Ref [2].


[1] H. Stark and J. W. Woods, Probability and random processes with applications to signal processing, 3rd edition, Pearson Education, 2009.

[2] Cadence, “Application notes on direct time-domain noise analysis using Virtuoso Spectre”, Version 1.0, July 2006.
