Hi, OpenROAD is frozen at the Magic DRC stage...
# magic
b
Hi, OpenROAD is frozen at the Magic DRC stage. Memory usage got so high (>60 GB on a 64 GB machine) that the PC stopped responding to any command. The messages said:

Running Magic DRC
Converting Magic DRC Violations to Magic Readable Format
Converting Magic DRC Violations to Klayout XML Database

Then I found that in the reports/signoff folder, drc.rpt, drc.tcl, and drc.tr are each GB-sized. I can't even open the files on the VNC server. There are OpenRAM-related DRC errors, but the log says "DRC Checking DONE", and I found in Slack comments that OpenRAM macro-related DRC errors can be ignored. What could have gone wrong in the flow that made it freeze by consuming all the RAM?

Regards,
t
I don't think that this is a magic issue. Magic's DRC has a pretty low overhead and although it can take a long time to run, it won't eat up 60GB! There should be very little output from magic for this. The SRAM macros are supposed to be detected by the DRC script and replaced with abstract views so that they don't generate gobs of errors (which, still, shouldn't be eating up memory). A (small!) sample of the output in the GB-sized reports would be helpful for determining what it is that is going on.
If I run this manually in magic, there are only 25 errors, all of which appear to be due to an issue that I thought was solved months ago related to the setting of USEMINSPACING in the technology LEF file. They can (probably) be ignored but should not be happening. Regardless, that has no bearing on why the process is taking up GB of memory and generating GB of output.
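To pull a small sample out of the GB-sized reports without trying to open them in an editor (which is what hangs a VNC session), a few standard shell commands are enough. This is just a sketch; the `reports/signoff/drc.rpt` path is the one mentioned above, adjust it to your actual run directory:

```shell
# Check how big the report really is before touching it
du -h reports/signoff/drc.rpt

# Grab the first 50 lines as a sample to post in the thread
head -n 50 reports/signoff/drc.rpt

# If the file is huge because a few messages repeat endlessly,
# this shows the 10 most-repeated lines and their counts
sort reports/signoff/drc.rpt | uniq -c | sort -rn | head -n 10
```

If one or two violation messages dominate the `uniq -c` output, that points to a runaway error loop rather than a genuinely large number of distinct violations.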
b
Thanks for the explanation, Tim.
I will look into those GB-sized files in detail and upload a sample here.