# general
j
Hello. I'm a novice at designing circuits like ADCs, DACs, clock generators, etc., so I'd like to ask about measuring quantitative performance on a circuit I designed. I designed a digital-to-analog converter with a 10-bit input, but its performance seems a bit poor, and I'm planning to measure 1) update rate, 2) intermodulation distortion, and 3) noise spectral density. I think 1) can be measured with an AC analysis by finding the -6 dB frequency, but what should the bias point and the specific input be? For 2), I've read that IM3 appears when two signals at different frequencies are applied to a circuit, but I can't see how to apply that idea to a DAC. How can I measure it? I think 3) can be measured with a NOISE analysis in SPICE. Thank you for reading my question. I'm looking forward to your answers :)
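P.S. For 2), my best guess so far is the rough Python sketch below: drive the DAC with the sampled sum of two tones and read the 2f1-f2 / 2f2-f1 products from an FFT of the output. The update rate, tone bins, and the ideal `vout` stand-in are just made-up placeholders (the real waveform would come from a transient run). Is this the right direction?

```python
# Rough sketch of a two-tone IM3 measurement on a DAC (placeholder numbers).
import numpy as np

fs = 1e6          # assumed DAC update rate [Hz]
n  = 4096         # number of output samples in the FFT record
bin1, bin2 = 401, 419                 # coherent bins -> f1, f2 fall exactly on bins
f1, f2 = bin1 * fs / n, bin2 * fs / n

# 1) Build the two-tone digital input (10-bit codes) that drives the DAC.
t = np.arange(n) / fs
x = 0.45 * np.sin(2 * np.pi * f1 * t) + 0.45 * np.sin(2 * np.pi * f2 * t)
codes = np.round((x + 1) / 2 * 1023).astype(int)     # map [-1, 1] -> 0..1023

# 2) 'vout' should be the transient output of the DAC driven by 'codes',
#    resampled at the update instants; it is faked as ideal here so the
#    script runs stand-alone.
vout = codes / 1023.0

# 3) FFT and read the IM3 products at 2*f1-f2 and 2*f2-f1.
spec = np.abs(np.fft.rfft(vout - vout.mean())) / (n / 2)
def db(b): return 20 * np.log10(spec[b] + 1e-20)
im3_bins = [2 * bin1 - bin2, 2 * bin2 - bin1]
tone_db  = max(db(bin1), db(bin2))
im3_db   = max(db(b) for b in im3_bins)
print(f"IM3 ~ {im3_db - tone_db:.1f} dBc")
```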
l
One of the most important DAC metrics is its linearity, since that is what shows its true resolution. For the AC sweep, I think the 3 dB point should be checked at the code extremes. You should also run a transient response for a step input, since large-signal behaviour is important as well.
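If it helps, a rough sketch of how INL/DNL could be pulled out of a slow per-code sweep. The `vout_per_code` array here is just an ideal placeholder; the real values would be the settled output voltage you measure or simulate for each of the 1024 codes:

```python
# Rough INL/DNL sketch for a 10-bit DAC, assuming 'vout_per_code' holds the
# settled output voltage for each input code (placeholder: ideal ramp).
import numpy as np

vout_per_code = np.linspace(0.0, 1.0, 1024)

# End-point fit: 1 LSB is the average step between the code extremes.
lsb = (vout_per_code[-1] - vout_per_code[0]) / (len(vout_per_code) - 1)

dnl = np.diff(vout_per_code) / lsb - 1.0                  # per-step error [LSB]
inl = (vout_per_code - vout_per_code[0]) / lsb - np.arange(len(vout_per_code))

print(f"DNL: {dnl.min():+.3f} / {dnl.max():+.3f} LSB")
print(f"INL: {inl.min():+.3f} / {inl.max():+.3f} LSB")
```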
a
In real-world applications of silicon-integrated DACs and ADCs, the power supply is very often the limiting factor in converter performance if the circuit and any bias generators it uses are not protected from supply fluctuations by an LDO. So if your goal is to actually build this, I'd add PSRR to your test bench as well. The most common cause of power-supply problems is people ignoring the impact of package inductance, the on-chip power-grid impedance, and sources of large peak on-chip switching current on the actual power waveforms observed on silicon. To brutally paraphrase Patrick McGoohan: VDD is not just a number, it is a free waveform.
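As a rough illustration of what that PSRR check could look like in post-processing: inject a small tone onto VDD while the DAC code is held constant, then read the attenuation of that tone at the output from an FFT. The ripple amplitude, its frequency, and the "output" waveform below are only placeholders; the real waveforms would come from the transient run.

```python
# Rough PSRR post-processing sketch (placeholder stimulus and output).
import numpy as np

fs, n = 1e6, 4096
f_rip = 100 * fs / n             # ripple tone placed on an exact FFT bin
a_rip = 10e-3                    # assumed 10 mV ripple injected on VDD

t    = np.arange(n) / fs
vdd  = 1.8 + a_rip * np.sin(2 * np.pi * f_rip * t)          # supply stimulus
vout = 0.9 + 50e-6 * np.sin(2 * np.pi * f_rip * t)          # fake DAC output

def tone_amp(v, bin_idx):
    # Amplitude of the tone in bin 'bin_idx' after removing the DC level.
    return np.abs(np.fft.rfft(v - v.mean()))[bin_idx] / (n / 2)

psrr_db = 20 * np.log10(tone_amp(vdd, 100) / tone_amp(vout, 100))
print(f"PSRR at {f_rip/1e3:.1f} kHz ~ {psrr_db:.1f} dB")
```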
Regarding your questions:

1) The rated update rate of a DAC should not cause a significant degradation in performance: you should achieve your stated analog conversion performance at whatever update rate you choose. The update rate is usually defined by settling delays and a specified conversion precision. If, for example, you specified a 10-bit input and 0.2% precision, then you are failing at the frequency where you no longer achieve that precision (-6 dB is very significantly worse than that).

3) The importance of the small-signal noise spectral density depends on the range and precision of conversion. You might find that a post-layout extracted netlist, simulated with all layout parasitics while doing conversions, shows a much higher noise contribution than the small-signal transistor models would suggest. As Luis said, I think a large-signal transient simulation of some reference conversions (1 Hz, 10 Hz, 100 Hz, 1 kHz, 10 kHz...) is likely to help, especially if the output is viewed in the frequency domain (it is usually surprisingly bad 🥲). The good news is that some of these noise components usually have very predictable frequencies (switching-regulator switching frequency, AC mains, CPU clock, I/O bus frequencies, motherboard clock, etc.) that are hopefully far from your desired output frequency and are therefore relatively easy to filter.
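For the update-rate criterion in 1), a minimal post-processing sketch of the settling check: find how long the output takes after a full-scale code step to stay within the stated precision, then take the maximum update rate as roughly 1/t_settle. The step response here is faked with a single pole; the real waveform would come from your transient simulation.

```python
# Rough settling-based update-rate check (placeholder step-response data).
import numpy as np

t    = np.linspace(0, 2e-6, 20001)            # 2 us of transient data
tau  = 100e-9
vout = 1.0 * (1 - np.exp(-t / tau))           # fake single-pole settling

v_final   = vout[-1]
tolerance = 0.002 * 1.0                       # 0.2% of full scale

# Time of the last sample still outside the tolerance band ~ settling time.
outside  = np.abs(vout - v_final) > tolerance
t_settle = t[outside.nonzero()[0][-1]]

print(f"settling time ~ {t_settle*1e9:.0f} ns "
      f"-> max update rate ~ {1/t_settle/1e6:.1f} MS/s")
```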
t
Linearity is hard to determine by simulation, though, because mismatch parameters are usually quoted as valid for "best design practices" without it being specified what those practices are, and understanding the sources of mismatch is quite involved. What linearity you can achieve depends very much on the choice of architecture and devices. For example, the standard R-2R resistor ladder is compact but the poorest choice for linearity; a full resistor string is better; a switched-capacitor design is best for matching; and a sigma-delta design will give you the highest precision (but with a significant throughput delay).