# general
t
For GF-MPW-0: I've now had two Tapeout jobs try to run on my submission, but both failed. The full log is just spammed with this for thousands of lines. Is this an issue with my project? I see nothing wrong in my repo compared to some that have passed tapeout.
g
for anyone on the efabless side, if this hasn't been worked out yet: I've seen similar issues playing around locally that turned out to be magic exceeding the soft limit of 1024 open FDs on my system, fixed by running
```
ulimit -n 500000
```
beforehand
not sure if this is even related here, but in general it might be worth making magic either close FDs sooner if possible, attempt to automatically raise its limit, or at least fail more gracefully (i.e. not just assuming the file doesn't exist and continuing) when opening a file returns errno 24 (EMFILE) - cc @Tim Edwards
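something like this (just a hypothetical sketch, not actual magic source - the function names are mine) is roughly what I mean by "raise the limit automatically" and "fail more gracefully on errno 24":
```c
/* Sketch only: bump RLIMIT_NOFILE to the hard limit at startup, and
 * report EMFILE explicitly instead of treating the file as missing. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

static void raise_fd_limit(void)
{
    struct rlimit rl;
    /* Raise the soft limit (often 1024) up to the hard limit if possible. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur < rl.rlim_max) {
        rl.rlim_cur = rl.rlim_max;
        setrlimit(RLIMIT_NOFILE, &rl);
    }
}

static FILE *open_checked(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        if (errno == EMFILE)   /* errno 24: process is out of descriptors */
            fprintf(stderr, "fatal: too many open files while reading %s\n", path);
        else
            fprintf(stderr, "cannot open %s: %s\n", path, strerror(errno));
    }
    return f;
}
```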
t
@jeffdi was seeing this error yesterday and at some point said he fixed it, although I don't know what the fix was.
👍 1
g
Another observation from the on-platform run: it seems the generated `caravel_<id>.gds`/`.oas` files are using the default example user_project_wrapper rather than my project's GDS. I think this is because it's using the .mag file from my repo (which I never updated) rather than the .gds file (which is what precheck etc. uses and presumably all that should matter). Unless I messed something else up...
> @jeffdi was seeing this error yesterday and at some point said he fixed it, although I don't know what the fix was.
I see a failure with the IO error (which I'm pretty certain is the number of open files issue I was seeing locally, it matches 100%) on a tapeout job that ran just an hour or two ago, fwiw
r
Curious why 30-something projects would pass tapeout, given this limitation? Are the FDs only running out for more complex designs?
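one rough way to check that locally (just a sketch, assuming Linux and that you can grab the magic PID while a design loads) is to count the entries under /proc/<pid>/fd and see whether bigger designs climb toward the 1024 soft limit:
```c
/* Standalone diagnostic, not part of magic or the tapeout flow:
 * count the file descriptors a running process currently has open. */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64];
    int count = 0;
    struct dirent *entry;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof path, "/proc/%s/fd", argv[1]);

    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL)
        count++;   /* includes "." and "..", close enough to see the trend */
    closedir(dir);

    printf("%d open file descriptors\n", count);
    return 0;
}
```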
t
The server processes shouldn't be using file locking, anyway, since there's no sense in which the data is being shared and needs mutex locks.