Hi guys, I am back with a question regarding ADCs and the sampling circuits used in them:
I am following the Analog Mind article by Behzad Razavi (DOI: 10.1109/MSSC.2020.3036143) and trying to design a gate-bootstrapped switch that gives 12-bit sampling at 250 MS/s, with the switch itself running at around 500 MHz, following the process described in the paper, in a 65 nm process node (proprietary/under NDA).
The ideal SNR is around 74 dB for 12 bits (LSB = 1.2 V/2^12 ≈ 0.3 mV), but I am getting nowhere close to it. Following the paper, I sized the sampling capacitor by budgeting a 1 dB SNR degradation from kT/C noise relative to the ideal 74 dB, at 348 K (75 °C). That gave me about 127 fF, which I rounded up to 150 fF just to be "sure".
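For anyone who wants to reproduce the numbers, the kT/C part of the budget is easy to sanity-check numerically. A minimal sketch, assuming a single-ended sampler and taking the full-scale 1.2 Vpp sine as the signal (the capacitor value is the rounded one from above; everything else is a physical constant):

```python
# Thermal-noise-only SNR of a sample-and-hold: v_n^2 = kT/C.
# Assumes single-ended sampling and a full-scale 1.2 Vpp sine.
from math import log10, sqrt

K_B = 1.380649e-23      # Boltzmann constant, J/K
T = 348.0               # K (75 deg C)
C = 150e-15             # F, rounded sampling cap
A_PEAK = 0.6            # V, peak of a 1.2 Vpp sine

v_n2 = K_B * T / C                       # kT/C noise power, V^2
p_sig = A_PEAK ** 2 / 2                  # sine power, V^2
snr_thermal_db = 10 * log10(p_sig / v_n2)
print(f"rms noise = {sqrt(v_n2)*1e6:.0f} uV, "
      f"thermal-only SNR = {snr_thermal_db:.1f} dB")
```

Note this is the thermal-only SNR; it still has to be combined with the quantization noise to check against the 1 dB degradation budget, so it's worth re-deriving which combination rule produced the 127 fF.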
Then, budgeting 0.5 dB of attenuation due to the RC behavior of the switch, I calculated that I need an NMOS on-resistance of less than 50 Ω.
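For reference, here is one way to turn a steady-state attenuation budget into an R_on bound, assuming a first-order RC low-pass; the frequency you budget against is a design choice, and the value plugged in below (the near-Nyquist input tone) is my assumption, not necessarily the one used for the 50 Ω figure:

```python
# R_on bound from |H(f)| = 1/sqrt(1 + (2*pi*f*R*C)^2) = 10^(-A/20).
# Solving: (2*pi*f*R*C)^2 = 10^(A/10) - 1.
from math import pi, sqrt

def r_on_max(atten_db, f_hz, c_farad):
    return sqrt(10 ** (atten_db / 10) - 1) / (2 * pi * f_hz * c_farad)

# 0.5 dB budget, 150 fF cap, tone at (31/32)*250 MHz (assumed)
r_max = r_on_max(0.5, 242.1875e6, 150e-15)
print(f"max R_on from attenuation criterion: {r_max:.0f} ohm")
```

A settling-time criterion (tracking to 1/2 LSB within the track phase) generally gives a different, tighter bound than the steady-state attenuation one, so it's worth rechecking which criterion the 50 Ω came from.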
After that, I built the ideal circuit of Figure 1(a) in the paper with a 1.2 V battery, driving it with a sinusoidal input of 1.2 V peak-to-peak swing (-600 mV to +600 mV) at (31/32)*250 MHz, i.e. near the Nyquist rate, since I want the final switch to function at around 500 MHz. My output spectrum's HD3 and HD5 were nowhere near as good as Razavi's: the best I could get was about 50 dB and 55 dB respectively, with a very noisy spectrum. None of my spectra are as clean as Razavi's, even after trying a Blackman-Harris window.
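One detail I'd double-check at this point is whether the input tone is coherent with the capture: over a finite record, the tone should complete an integer (ideally odd, coprime) number of cycles, or a rectangular-window FFT will leak. A sketch of how one can pick such a frequency, assuming a 500 MS/s capture and a power-of-two record length (both are my assumptions to adapt):

```python
# Pick a coherent input frequency fin = (m/n) * fs with m odd and
# coprime to n, so the tone lands exactly on an FFT bin and every
# sample hits a distinct phase of the sine.
from math import gcd

def coherent_fin(fs, n_fft, target_ratio):
    m = round(target_ratio * n_fft)
    if m % 2 == 0:
        m += 1                     # force odd
    while gcd(m, n_fft) != 1:
        m += 2                     # keep odd until coprime
    return m * fs / n_fft, m

# near (31/32) of Nyquist, as in the post (fs, n_fft assumed)
fin, m = coherent_fin(500e6, 1024, 31 / 64)
print(f"bin {m}: fin = {fin/1e6:.6f} MHz")
```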
This was with a main switch whose on-resistance varied from about 12 to 14 Ω (average around 12.7 Ω). Theoretically it shouldn't vary, since bootstrapping the gate should get rid of the resistance's dependence on input voltage. My best guess is that the NMOS threshold voltage is varying, which causes that slight variation across an input sweep of 0 to 1.2 V (I checked the transistor's operating region, and it stayed in the linear region the entire time).
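That guess is consistent with the body effect: bootstrapping fixes VGS at VDD, but the source rides on the input, so VSB (and hence VTH) rises with Vin. A toy model of the mechanism, with made-up process parameters (mobility, sizing, VTH0, gamma, 2*phi_F are all illustrative, not the NDA process):

```python
# Illustrative: R_on of a bootstrapped NMOS vs input, body effect only.
# All device parameters below are assumed, not from any real PDK.
from math import sqrt

UN_COX = 200e-6                    # un*Cox, A/V^2 (assumed)
W_OVER_L = 40.0                    # W/L (assumed)
VTH0, GAMMA, PHI2F = 0.35, 0.35, 0.7   # V, sqrt(V), V (assumed)
VDD = 1.2

def r_on(vin):
    # Bootstrapping holds VGS = VDD; source sits at vin, so VSB = vin
    # and the threshold rises with the input (body effect).
    vth = VTH0 + GAMMA * (sqrt(PHI2F + vin) - sqrt(PHI2F))
    return 1.0 / (UN_COX * W_OVER_L * (VDD - vth))

r_lo, r_hi = r_on(0.0), r_on(1.2)
print(f"R_on: {r_lo:.0f} ohm at 0 V -> {r_hi:.0f} ohm at 1.2 V")
```

With these (made-up) numbers the resistance grows by roughly 30% across the sweep, i.e. the same order of variation as the 12 to 14 Ω observed; tying the body to the source (where the technology allows it) is the usual fix.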
As I proceed through Razavi's suggested steps, my HD3 and HD5 values keep getting worse and the spectrum keeps getting noisier, i.e. the noise floor keeps rising. At around step 3 I gave up, because the noise floor was around -45 dB and HD3 and HD5 were around 36 dB and 48 dB respectively. I figured I was doing something wrong and that this circuit wasn't going to give me 12-bit sampling any time soon.
To summarize, I would really appreciate help with the following questions (I have just started studying data converters):
1. Am I doing something wrong in following the design process suggested in the paper, i.e. what am I doing wrong in designing the switch as described for my requirements, and how can I correct those mistakes?
2. Am I measuring its performance wrong, i.e. have I configured the spectrum computation incorrectly? I am using the functionality in ADE that computes the spectrum of a transient signal: I run the transient sim for 1 µs and then compute the spectrum. I have tried both the regular rectangular window and the Blackman-Harris window with 3 bins, but neither reduces the noise floor significantly. Besides, the HD3 and HD5 values stay the same, and I see quite high values for other spurs too.
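One symptom worth ruling out: if the tone does not complete an integer number of cycles in the 1 µs record, a rectangular-window FFT smears the tone into skirts that look exactly like a raised noise floor, and a window only partially hides it. A small self-contained sketch of a coherent harmonic measurement (fs, record length, bin number, and the 1% third harmonic injected for demonstration are all illustrative):

```python
# Coherent-FFT harmonic measurement with a rectangular window.
# No window is needed when the tone lands exactly on a bin.
import numpy as np

fs, n, m = 500e6, 1024, 247            # m odd, coprime to n -> coherent
t = np.arange(n) / fs
fin = m * fs / n
x = 0.6 * np.sin(2 * np.pi * fin * t) \
    + 0.006 * np.sin(2 * np.pi * 3 * fin * t)  # injected HD3 for demo

X = np.abs(np.fft.rfft(x)) / (n / 2)           # amplitude spectrum
sig_bin = np.argmax(X[1:]) + 1                 # skip DC
h3_bin = (3 * m) % n                           # 3rd harmonic aliases
if h3_bin > n // 2:
    h3_bin = n - h3_bin
hd3_db = 20 * np.log10(X[sig_bin] / X[h3_bin])
print(f"signal at bin {sig_bin}, HD3 = {hd3_db:.1f} dB")
```

With a coherent tone, every bin other than the signal and its harmonics sits at the numerical noise floor (around -300 dB), so any raised floor you then see in ADE is either leakage from a non-coherent tone or real circuit noise, not an FFT artifact.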