# openlane
@User @User would it make sense to rearrange our steps so we fail as early as possible on a bad run, e.g. moving the KLayout steps and XOR comparisons later on? I've even moved the LVS checking before Magic DRC for now because it takes less time to execute and catches obvious issues.
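Something like the following captures the fail-fast ordering being proposed; it is only a sketch, and the step functions are placeholders rather than the actual OpenLane procs. The idea is simply to run the cheaper checks first and abort the flow on the first failure.

```python
import sys

# Placeholder check functions -- in the real flow these would invoke
# netgen (LVS), Magic (DRC) and KLayout (DRC/XOR); here they are stubs.
def run_lvs() -> bool:
    return True

def run_magic_drc() -> bool:
    return True

def run_klayout_xor() -> bool:
    return True

# Ordered cheapest-first so a bad run fails as early as possible;
# the slow KLayout/XOR comparison is moved to the end.
SIGNOFF_STEPS = [
    ("lvs", run_lvs),
    ("magic_drc", run_magic_drc),
    ("klayout_xor", run_klayout_xor),
]

def run_signoff() -> None:
    for name, step in SIGNOFF_STEPS:
        print(f"[signoff] running {name} ...")
        if not step():
            print(f"[signoff] {name} failed; skipping the remaining checks")
            sys.exit(1)
```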
@Anton Blanchard (cc: @Ahmed Ghazy): I'm open to the idea, especially given the runtime differences. But a counterargument would be: why would you run LVS on a design that is not DRC clean? And with the introduction of flow quitters, the flow will eventually abort as soon as any violations are found, which would make that counterargument more compelling. Still, the runtime difference (speedup) is something to consider.
@Anton Blanchard: However, we can definitely move the KLayout functions to the end, since at the moment they are just there for show without a real benefit.
Hmm, from my point of view, LVS and DRC are independent checks, so we could theoretically even run them in parallel, given that there are enough resources. Can we estimate how much RAM the LVS and DRC checks will take? I am afraid that both might fail if their combined RAM demand exceeds the available resources. But perhaps an opt-in for parallelisation through an environment variable would be good?
👍 1
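A minimal sketch of that opt-in parallelisation idea, assuming a hypothetical RUN_DRC_LVS_PARALLEL environment variable (not an existing OpenLane option) and the same stub check functions as above:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_magic_drc() -> bool:
    return True  # stub: would invoke Magic DRC here

def run_lvs() -> bool:
    return True  # stub: would invoke netgen LVS here

def run_drc_and_lvs() -> bool:
    # RUN_DRC_LVS_PARALLEL is a hypothetical opt-in variable;
    # parallel execution roughly doubles the peak RAM demand.
    if os.environ.get("RUN_DRC_LVS_PARALLEL") == "1":
        with ThreadPoolExecutor(max_workers=2) as pool:
            drc_future = pool.submit(run_magic_drc)
            lvs_future = pool.submit(run_lvs)
            return drc_future.result() and lvs_future.result()
    # Default: sequential, so only one tool's memory is resident at a time.
    return run_magic_drc() and run_lvs()
```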
Hmm, would it be possible to freeze a Docker container in case of low memory, so that Docker writes its contents to disk and the container resumes once the other parallel job has completed?
That way we could run several Docker containers in parallel without necessarily failing on memory exhaustion?
@Philipp Gühring: I like the idea of running DRC and LVS in parallel; I will look into it. RAM usage is definitely the main concern here. Maybe https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details is the closest to what you suggest above.
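For reference, a rough example of those resource-constraint flags (the image name and command are placeholders, not real OpenLane invocations): with `--memory` set below `--memory-swap`, the container's excess pages can spill to swap on the host instead of the process being OOM-killed.

```python
import subprocess

# Illustrative only: the image name and command are placeholders.
# --memory caps the container's RAM; --memory-swap caps RAM + swap,
# so the difference (here 8g) may be swapped out on the host.
subprocess.run([
    "docker", "run", "--rm",
    "--memory", "8g",
    "--memory-swap", "16g",
    "openlane-image",      # placeholder image name
    "run_signoff_step",    # placeholder command
], check=True)
```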
Hmm, those Docker options are quite interesting, but they don't seem sufficient to me. I'll file a feature request with Docker.