Brady Etz
03/29/2024, 6:55 PM

Tim Edwards
03/29/2024, 8:35 PM

Brady Etz
03/29/2024, 8:39 PM

Tim Edwards
03/29/2024, 8:40 PM

Brady Etz
03/29/2024, 11:06 PM
I'm trying a .wslconfig file, described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config. Will report if this resolves my troubles.

Tim Edwards
03/30/2024, 12:12 AM
The linearize command in ngspice might be useful here (or not; just spouting off ideas here...).

Brady Etz
03/30/2024, 12:25 AM
linearize acts on a finished vector, but it doesn't accelerate or compress the data mid-sim. The post-simulation data handling is small enough; it's the process of running the simultaneous transient sims that occupies significant memory. I haven't tried running single-threaded, but I expect the total runtime would be longer.
I added a .wslconfig file to my Windows home directory with a couple lines that show improvements. memory=20GB gives a more generous chunk to WSL than the default 16GB for my system. pageReporting=false keeps Windows from nabbing memory from WSL whenever it can (I think). autoMemoryReclaim=dropcache is an experimental feature that seems to help the most. WSL seems to free up memory more aggressively after a job completes, and I'm not seeing it climb over 6GB.
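For reference, the resulting file is only a few lines, along these lines (values as described above; section placement per the linked Microsoft docs):

```ini
# %USERPROFILE%\.wslconfig -- sketch of the settings described above
[wsl2]
# Raise the WSL 2 memory cap above the default for this machine
memory=20GB
# Keep Windows from reclaiming "unused" WSL memory on the fly
pageReporting=false

[experimental]
# Drop cached memory once the workload goes idle
autoMemoryReclaim=dropcache
```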
EDIT: It does climb over, but only during a batch of runs. Once WSL 2 runs out of swap space and memory, the running processes fail or their pipes close. With these configuration settings, the swap and memory clear after 10-15 seconds, and the last testbench run can be aborted from the CACE GUI. Resuming the run after memory clears at least gives a chance of success.
Tim Edwards
03/30/2024, 12:28 AM

Brady Etz
03/31/2024, 3:54 AM
In htop, there's a cace-gui process that climbs by 40-80 MB in resident memory for every completed ngspice run. I thought ngspice did the math (e.g. calculating the mean of a vector, or a .meas time/value) and sent only a few words over to cace-gui via the .data file output it makes using wrdata, but it seems like there's more going on than that.
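What I had in mind is the usual control-block pattern, roughly like this (an illustrative fragment, not an actual CACE-generated testbench):

```
* illustrative fragment only -- not an actual CACE-generated testbench
.control
run
meas tran tdel trig v(in) val=1.5 rise=1 targ v(out) val=1.5 rise=1
let vavg = mean(v(out))
print vavg
wrdata result.data v(out)
.endc
```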
When a testbench finishes, all the ngspice processes terminate, which releases 1-2 GB of memory. But mid-testbench, the memory piling up in CACE is an issue.
At the end of a testbench cycle, CACE processes the data, and then the resident memory of the highest-priority cace-gui process jumps up and carries forward into the next testbench. Eventually I run out of memory over the 45 or 135 corners. Since adding more comprehensive corners, I haven't been able to generate results for every testbench in one sitting. I can still create an updated datasheet, but I'll do a piecewise data summary in the GitHub README.
Is there a way for me to pare down the data CACE handles? I'm trying "Do not create plot files" (although I wasn't plotting anything). I am definitely wondering if there's anything else I should try.

Tim Edwards
03/31/2024, 4:13 PM
You could try changing stdout=subprocess.PIPE and stderr=subprocess.PIPE (or stderr=subprocess.STDOUT) to stdout=subprocess.DEVNULL and stderr=subprocess.DEVNULL. The one negative impact is that a simulation that hits an error and drops back to the ngspice interpreter prompt will cause CACE to hang. But if it works, then I can add it as an option setting.

Tim Edwards
03/31/2024, 4:17 PM
(That's in cace_simulate.py, lines 152 and 153; I don't think it's needed anywhere else.)
I have a pretty low level of confidence that it will change anything, but I can't think of anywhere else that would be using so much memory. For that matter, the output of ngspice can't be using that much memory, either.
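For illustration, the change being discussed amounts to something like this (a sketch only; the actual Popen call in cace_simulate.py and the ngspice command line here are placeholders):

```python
import subprocess

# Sketch of the suggested change, not the actual cace_simulate.py code.
# Sending ngspice's console output to the null device instead of a pipe
# means the parent process never has to buffer or read it.
proc = subprocess.Popen(
    ["ngspice", "-b", "testbench.spice"],  # placeholder command line
    stdout=subprocess.DEVNULL,             # was: subprocess.PIPE
    stderr=subprocess.DEVNULL,             # was: subprocess.PIPE or subprocess.STDOUT
)
proc.wait()
# Caveat noted above: with the output discarded, an ngspice error that drops
# back to the interactive prompt can leave CACE waiting indefinitely.
```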