# chipalooza
Tim Edwards:
<!channel>: Since pretty much every group is going to have to do some kind of Monte Carlo and/or mismatch analysis, here are some notes about how to do that if you're not familiar with how it's done in ngspice: Generally, a Monte Carlo (or mismatch) analysis will use the same testbench as the corresponding parameter without Monte Carlo or mismatch. Instead of simulating over corners, set the corner type to "mc" to enable the Monte Carlo models, and then simply run the simulation over 100 iterations. In CACE, you just declare a new condition called "iterations" that looks like (corrected per Mitch's comment below):
```
name:           iterations
description:    Iterations to run
display:        Iterations
minimum:        1
maximum:        100
step:           linear
stepsize:       1
```
Also in the "simulate" block, you will want to add the line:
```
collate:        iterations
```
You will then get a spread of values, from which the minimum and maximum are calculated. A better measurement is the standard deviation of the distribution, which can be done in CACE using a "spec {...}" block with
```
spec {
        minimum: -10 fail std3n-above
        typical: 0
        maximum: 10 fail std3p-below
}
```
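As a quick numerical sketch of what I understand those std3n-above / std3p-below checks to mean (the helper name here is mine, not CACE's):

```python
import math
import random

def three_sigma_bounds(samples):
    """Mean +/- 3 sigma of a sample set -- my reading of what the
    std3n-above / std3p-below limits are compared against."""
    n = len(samples)
    mean = sum(samples) / n
    # sample standard deviation (n - 1 in the denominator)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return mean - 3 * sigma, mean + 3 * sigma

# 100 fake Monte Carlo "measurements" centered near 0
random.seed(12345)
samples = [random.gauss(0.0, 2.0) for _ in range(100)]
lo, hi = three_sigma_bounds(samples)
# spec passes when the minimum (-10) and maximum (10) checks both hold
print(-10 <= lo and hi <= 10)  # prints True
```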
This should set the minimum value to (mean - 3 * sigma) of the Monte Carlo distribution, and the maximum value to (mean + 3 * sigma) of the Monte Carlo distribution. My sky130_ef_ip__rdac3v_8bit example has a Monte Carlo example, although it does not use the standard deviation measurement.

The same method for Monte Carlo can be done for device mismatch, but instead of process corner "mc", use process corners "ff_mm", "tt_mm", and "ss_mm", which are the process corners but also include mismatch. When using Monte Carlo or mismatch, be sure to add the following to the simulation netlist (using CACE notation):
```
.option SEED=[{seed=12345} + {iterations=0}]
```
This forces ngspice to set a new random seed for every simulation. The result is random but also reproducible. The Monte Carlo (or mismatch) simulation can also be done by looping over iterations inside ngspice with an ngspice "for" loop; make sure you do a "reset" every loop, and set the "SEED" option inside the loop as well. It should be faster because it doesn't incur the startup time of reading all the device models on every iteration. However, it requires a special control block, whereas by using CACE, the same testbench can be re-used for measurements with Monte Carlo, mismatch, or no statistical variation, by choosing the appropriate set of conditions.
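For reference, the in-ngspice loop described above might look roughly like this (a sketch only; I'm assuming the "setseed" command and "dowhile" loop here, so check the loop and seed syntax against your ngspice version's manual):

```
* Sketch: Monte Carlo loop inside an ngspice control block
.control
let run_no = 1
dowhile run_no <= 100
  setseed $&run_no  $ new, but reproducible, seed each pass
  reset             $ re-evaluate the statistical parameters
  run
  let run_no = run_no + 1
end
.endc
```

You would add your own measurement and data-saving commands inside the loop, after the "run".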
Mitch Bailey:
@Tim Edwards I haven’t used CACE, but it looks extremely useful. This may not be relevant, but in your explanation above you recommend 100 iterations, yet it appears that the iteration maximum is set at 10. Is this a typo?
> simply run the simulation over 100 iterations
```
name:           iterations
description:    Iterations to run
display:        Iterations
minimum:        1
maximum:        10      <- does this limit the parameter value to 10?
step:           linear
stepsize:       1
```
Sorry if I’m not understanding the setup.
Tim Edwards:
@Mitch Bailey: You're right; it's not a typo, though; I had used 10 in my example so it would run fast for testing, and I copied and pasted the example without recalling that I had set the iterations low on purpose. Based on the usual 1-over-square-root-N error estimate for a statistical distribution, you generally want at least 100 sample points to ensure that the measurement of the mean and standard deviation are reasonably meaningful. I have corrected my statement at top. Thanks for pointing out the error!
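That 1-over-square-root-N scaling is easy to verify numerically; here's a standalone Python sketch (nothing CACE-specific) showing the error in the estimated mean shrinking with sample count:

```python
import math
import random

def std_error_of_mean(n, trials=2000, seed=1):
    """Empirical standard deviation of the sample mean, measured over
    many repeated experiments of n unit-variance samples each."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        means.append(sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n)
    m = sum(means) / trials
    return math.sqrt(sum((x - m) ** 2 for x in means) / trials)

# Going from 10 to 100 iterations should cut the error in the mean
# by about sqrt(10) ~ 3.2x
err10 = std_error_of_mean(10)
err100 = std_error_of_mean(100)
print(err10 / err100)  # ratio near sqrt(10)
```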