In the previous post, the probability of a comparator decision in the presence of noise was calculated. In this post, the topic of noise and probability continues. The whole topic is actually inspired by a 1986 paper [1], which discusses the effect of noise on the distribution of SAR ADC output codes. Following the author's method, I end up with slightly different results, but I still found some interesting things which I would like to post here.

Assume an input voltage $V_{in}$ is applied to an ADC, and the ADC has input-referred noise with a standard deviation of $\sigma$. The input voltage is then compared with a certain reference voltage $V_{ref}$ to determine the corresponding bit. Referring to the equations derived in the previous post (assuming zero-mean Gaussian noise), the probability of the bit being high is written as

$$P_{high} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{ref}-V_{in}}{\sqrt{2}\,\sigma}\right).$$

Similarly, the probability of the bit being low is given by

$$P_{low} = 1 - P_{high} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{in}-V_{ref}}{\sqrt{2}\,\sigma}\right).$$
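These two probabilities are easy to evaluate numerically. Here is a minimal sketch, assuming zero-mean Gaussian input-referred noise; the function and variable names (`p_high`, `vin`, `vref`, `sigma`) are mine, not from the previous post:

```python
# Probability of a comparator deciding "high" or "low" when a noisy input
# is compared against a reference, assuming zero-mean Gaussian noise.
import math

def p_high(vin, vref, sigma):
    """P(vin + noise > vref) for Gaussian noise with std dev sigma."""
    return 0.5 * math.erfc((vref - vin) / (math.sqrt(2) * sigma))

def p_low(vin, vref, sigma):
    """P(vin + noise < vref); complementary to p_high."""
    return 1.0 - p_high(vin, vref, sigma)

# Example: input 10 mV above the reference, 5 mV rms noise.
print(p_high(0.510, 0.500, 0.005))  # ≈ 0.977
```

With the input two sigma above the reference, the comparator still makes the wrong decision about 2% of the time, which is the effect the rest of the post quantifies per code.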

Considering the time sequence in which Nyquist ADCs generate their digital outputs, I roughly group them into two categories: outputs converted simultaneously and outputs converted successively. The former needs $2^N-1$ comparisons for N bits, and the latter only needs $N$, but with a penalty in speed. I'm more interested in the second case: lazy and slow but still doing the job ;-).

**Problem formulation:** Due to the existence of noise, the input voltage can be converted to erroneous output codes or to the correct one (indicated with yellow and blue backgrounds, respectively, in Fig. 1). The probability of each converted output code for one particular input is of interest.

**How to calculate the probability of one particular output code?**

Fig. 2 gives an example of the probability calculation for code "100" converted by a 3-bit SAR ADC, where the input voltage corresponds to code "101". The calculation starts from the MSB and proceeds to the LSB. The probability of each less significant bit depends on the results of the more significant bits. Finally, the probability of a given code is the product of the probabilities of its individual bits.
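The MSB-to-LSB chain of conditional bit probabilities can be sketched in a few lines. This is my own illustration, assuming a binary-search SAR over a full-scale range $[0, V_{FS})$ and independent Gaussian noise at each comparison; the names (`p_code`, `vin`, `sigma`) are illustrative:

```python
# Probability that a noisy SAR conversion yields a particular bit string.
# Each step's threshold depends on the bits already decided (binary search),
# and the code probability is the product of the per-bit probabilities.
import math

def p_bit_high(vin, vref, sigma):
    """P(vin + Gaussian noise > vref), noise std dev = sigma."""
    return 0.5 * math.erfc((vref - vin) / (math.sqrt(2) * sigma))

def p_code(code, vin, sigma, vfs=1.0):
    """Probability that the SAR conversion of vin yields `code` ('100' etc.)."""
    p, lo, hi = 1.0, 0.0, vfs
    for b in code:
        vref = 0.5 * (lo + hi)           # threshold tried at this SAR step
        ph = p_bit_high(vin, vref, sigma)
        if b == '1':
            p, lo = p * ph, vref         # bit high: search the upper half
        else:
            p, hi = p * (1.0 - ph), vref # bit low: search the lower half
    return p

lsb = 1.0 / 2**3
vin = 5.5 * lsb                          # mid-code voltage of "101"
sigma = 0.2 * lsb
print(p_code('100', vin, sigma), p_code('101', vin, sigma))
```

Since the two branch probabilities at every step sum to one, the probabilities of all $2^N$ codes sum to one, which is a handy sanity check.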

Knowing how to calculate the probability of a given code, I tried to look at noise specifications from a statistical point of view. Two noise specifications are commonly used (in academia): the noise power equals the quantization noise power, or the noise standard deviation equals 1 LSB. The former introduces a 3 dB loss of SNR, the latter about 11 dB. So, what does the code distribution look like under these two specifications?
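As a quick sanity check on the two quoted SNR-loss figures, the arithmetic can be spelled out, assuming a quantization noise power of $\mathrm{LSB}^2/12$ and thermal noise independent of quantization noise:

```python
# Check the quoted SNR losses: noise power = quantization noise (3 dB),
# and noise standard deviation = 1 LSB (about 11 dB).
import math

lsb = 1.0
q_power = lsb**2 / 12                 # quantization noise power, LSB^2/12

# Spec 1: thermal noise power equal to the quantization noise power.
loss1 = 10 * math.log10((q_power + q_power) / q_power)

# Spec 2: noise standard deviation of 1 LSB (power = LSB^2 = 12 * q_power).
loss2 = 10 * math.log10((q_power + lsb**2) / q_power)

print(round(loss1, 1), round(loss2, 1))  # 3.0 and 11.1 dB
```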

**Noise power is equal to the quantization noise:**

**Noise standard deviation is equal to 1 LSB:**
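The code distributions under these two specifications can also be computed analytically with the per-code product described earlier. A minimal sketch for a 3-bit SAR, again under my own assumptions (binary-search thresholds over $[0, V_{FS})$, independent Gaussian noise per comparison, illustrative names):

```python
# Analytic code-probability distribution of a 3-bit SAR for the two noise
# specs: sigma = LSB/sqrt(12) (noise power = quantization noise) and
# sigma = 1 LSB.
import math

def p_code(code, vin, sigma, vfs=1.0):
    """Probability that the SAR conversion of vin yields the bit string code."""
    p, lo, hi = 1.0, 0.0, vfs
    for b in code:
        vref = 0.5 * (lo + hi)
        ph = 0.5 * math.erfc((vref - vin) / (math.sqrt(2) * sigma))
        if b == '1':
            p, lo = p * ph, vref
        else:
            p, hi = p * (1.0 - ph), vref
    return p

nbits = 3
lsb = 1.0 / 2**nbits
vin = 5.5 * lsb                                  # mid-code input for "101"
codes = [format(i, f'0{nbits}b') for i in range(2**nbits)]
for sigma in (lsb / math.sqrt(12), lsb):         # the two noise specs
    dist = {c: p_code(c, vin, sigma) for c in codes}
    shown = {c: round(p, 3) for c, p in dist.items() if p > 0.01}
    print(f"sigma = {sigma:.3f} V: {shown}")
```

With $\sigma = \mathrm{LSB}/\sqrt{12}$ the probability mass stays concentrated on the correct code and its neighbors, while with $\sigma = 1\,\mathrm{LSB}$ it spreads over many codes.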

Sorry for the math. Like nonsense in the middle of a summer day. Sleepy?

**Reference**

[1] P. W. Lee, "Noise considerations in high-accuracy A/D converters," *IEEE Journal of Solid-State Circuits (JSSC)*, 1986.