# xschem
Micah Tseng:
Hello all, quick question: where can I find a list of ngspice error exit codes? My simulation exited with error 338 (if I’m reading this right) and I would like to know what went wrong. The simulation was running and then just killed itself (running it again, and restarting xschem or the Docker container I’m running the tools out of, does not change the behavior). I’ve attached a screenshot of the error. Please forgive me if this is the wrong channel for this message. Alternatively, I’m wondering if the 338 is actually the process number. Thanks!
EDIT: OK, the number is the process number, not an error code. I’m still curious how/where to find logs to see why ngspice was killed.
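For reference, when the Linux kernel’s OOM killer ends a process it writes a “Killed process” line to the kernel log, which is usually the place to look after a bare “Killed”. A small sketch of how to spot it (the sample log line is illustrative, and the availability of dmesg inside the container is an assumption):

```shell
# The kernel OOM killer logs a 'Killed process' line; where dmesg is
# available inside the container, it could be checked with:
#   dmesg | grep -iE 'killed process|out of memory'
# Sample of the kind of line to look for (PID 338 matches the number above):
sample='Out of memory: Killed process 338 (ngspice) total-vm:700000kB'
echo "$sample" | grep -oE 'Killed process [0-9]+ \([a-z]+\)'
```

This prints the matching fragment, e.g. `Killed process 338 (ngspice)`, confirming that the number in the message is the PID, not an exit code.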
Stefan Schippers:
@Micah Tseng did you run the sim from xschem? Try going to the simulation directory (/headless/.xschem/simulations) and then running ngspice manually:
ngspice -i sar_adc_test.spice
I think the directory might be read-only, but this is just my guess.
@Micah Tseng you can also enable status reporting to get more info from xschem (Simulations->Configure simulators and tools).
Micah Tseng:
@Stefan Schippers thanks! Yes, I ran it from xschem. I tried running it in batch mode (after swapping control commands for their batch versions) from the terminal and got the same “Killed” message in the log, but without any more helpful information. However, I can run other simulations without issue, so the directory can’t be read-only.
I will see if I can get more info from xschem later when I get back to my computer. I have attached the spice here if perhaps you think there is something wrong with it.
@Stefan Schippers The xschem status window doesn’t give any more info. I would really like to know why it is crashing. Do you have any thoughts?
Stefan Schippers:
@Micah Tseng This can be due to limits set on process size or other parameters. Try checking the output (in a shell) of:
ulimit -a
@Micah Tseng your netlist is fine. It simulates fine on my system: at 150us the simulation is still running with no problems so far, and the ngspice process is taking 688 MB of reserved memory.
Micah Tseng:
Thanks! That is great to know!
@Stefan Schippers This is what I get out:
bash-4.4$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7517
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Do you see anything off?
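None of the per-process limits in that output look obviously fatal for ngspice (stack and locked memory are finite but small-footprint concerns; data, virtual memory, and CPU time are unlimited). As a quick way to scan such output for finite caps, a sketch (sample lines inlined so it runs standalone; the filtering rule is just one possible heuristic):

```shell
# Parse 'ulimit -a'-style lines and print any caps that are not
# 'unlimited' (sample lines from the output above, inlined for the sketch).
printf '%s\n' \
  'max locked memory       (kbytes, -l) 64' \
  'stack size              (kbytes, -s) 8192' \
  'virtual memory          (kbytes, -v) unlimited' |
awk '$NF != "unlimited" { print }'
```

Here this prints only the locked-memory and stack lines, the two finite caps among the samples.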
Ooof. I figured it out. I’m used to Docker on Linux, which by default doesn’t impose any limits on container resource utilization. However, it turns out that on macOS it defaults to rather low limits. Increasing the limits allowed the simulation to run 🙂
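For anyone hitting the same thing: Docker Desktop on macOS runs containers inside a Linux VM, so the VM’s total memory caps every container regardless of the “unlimited” ulimits inside. A rough sketch of the arithmetic (the 2 GiB figure is a common Docker Desktop default and the 688 MB is the footprint Stefan reported; both are assumptions about this particular setup):

```shell
# Convert the VM memory total (bytes, as reported by
# 'docker info --format "{{.MemTotal}}"') to MB and compare it with the
# sim's ~688 MB resident set. Values below are illustrative.
vm_total_bytes=2147483648      # 2 GiB, a common Docker Desktop default
sim_mb=688
vm_mb=$(( vm_total_bytes / 1024 / 1024 ))
echo "VM total: ${vm_mb} MB, sim uses ~${sim_mb} MB, headroom: $(( vm_mb - sim_mb )) MB"
```

With tool overhead on top of the simulation, a small VM limit can be exhausted mid-run, and the OOM killer then ends the process with a bare “Killed”.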
@Stefan Schippers Thanks a lot for your help! I really appreciate it.
Stefan Schippers:
Printing a slightly more verbose message instead of just 'Killed' would make users' lives easier. Good that you got it fixed!