# openlane
r
@User you may need to tweak `HOST_ARCH` - I'm not sure if `uname -a` returns the same as docker expects for arch
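For reference, `uname -m` output and the architecture names Docker expects don't always line up; a minimal sketch of the kind of mapping `HOST_ARCH` might need (how the build scripts actually consume the variable is an assumption here):

```sh
# Illustrative only: map `uname -m` output to the arch names Docker/buildx use.
case "$(uname -m)" in
  x86_64)  HOST_ARCH=amd64   ;;  # Docker calls this amd64, not x86_64
  aarch64) HOST_ARCH=arm64   ;;
  ppc64le) HOST_ARCH=ppc64le ;;  # same string on both sides
  *)       HOST_ARCH="$(uname -m)" ;;
esac
export HOST_ARCH
```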
a
Thanks @Rob Taylor! The arch is `ppc64le`. My docker doesn't understand a few flags (e.g. `--push`). I just removed them for now. Next up I hit the same issue I hit a while ago with cvc - pyinstaller does strange things and creates an `x86-64` binary. I presume I need to teach pyinstaller about `ppc64le`; I'll look into that. @Mitch Bailey: I'm wondering if there is a way to avoid the pyinstaller dependency in cvc.
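A quick way to confirm what pyinstaller actually produced is `file` (the path below is only a placeholder for wherever the cvc binary ends up):

```sh
# Placeholder path - point this at the binary pyinstaller emitted.
file dist/check_cvc
# An x86-64 build reports something like "ELF 64-bit LSB executable, x86-64",
# whereas a native build on this host should mention "64-bit PowerPC".
```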
r
Ah, yes, you’ll need to install the latest buildx plugin then 😕
I’ll have a think about a way to do the job without buildx, it does seem a little problematic
a
@Anton Blanchard: You would need to manually install https://github.com/docker/buildx/ on docker versions < 19.03.
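Per the buildx README, the manual install is roughly the following (release version and host arch are placeholders, not specific to this setup):

```sh
# Download the buildx release binary and register it as a Docker CLI plugin.
mkdir -p ~/.docker/cli-plugins
curl -L -o ~/.docker/cli-plugins/docker-buildx \
  "https://github.com/docker/buildx/releases/download/<version>/buildx-<version>.linux-<arch>"
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version   # sanity check that the CLI sees the plugin
```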
@Rob Taylor: I have been playing with it. One issue I had is that the `COPY` commands that create the `tools` image cannot be used to "merge" directories (e.g., cannot copy `/build/bin` from two images into the current `/bin`). This was easy to work around though; I am currently still testing the whole build process, which is obviously taking really long compared to the previous structure that includes tarballs of the x86 binaries. Will keep you posted.
a
@Rob Taylor We got past the pyinstaller issue. Seems like the bootloader for ppc64le isn't shipped with the package. Building it from source fixed it:
```diff
-RUN pip3 install pyinstaller
+RUN git clone https://github.com/pyinstaller/pyinstaller.git pyinstaller
+WORKDIR pyinstaller/bootloader
+RUN python3 ./waf all
+WORKDIR /pyinstaller
+RUN python3 setup.py install
```
m
@Anton Blanchard pyinstaller is used to create `check_cvc` - a GUI for CVC results. It requires kivy, which is a python GUI library. Kivy needs SDL2 and won't work on some older linux systems. Currently, the GUI for CVC isn't integrated in openlane. I'll take a look at removing `check_cvc` from the default build.
a
@Mitch Bailey Thanks!
r
@Ahmed Ghazy good stuff - I worked around that issue yesterday using stow (for neatness). I’ll push. For the speed issue, the idea is for the tools to have images (with cache) on docker hub, so once we’ve stabilised it should be the same amount of rebuilding as happens with the current tar ball situation
At this point though it’s a lot of long rebuilds, hence my slow progress!
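A rough sketch of the stow-based merge being described - the stage/image names, base image, and paths are illustrative guesses, not the actual OpenLane Dockerfile:

```Dockerfile
# Hypothetical sketch only: give each tool its own prefix, then let stow
# symlink them into one tree instead of merging directories with COPY.
FROM debian:buster AS tools
RUN apt-get update && apt-get install -y stow
# "magic" and "openroad" stand in for previously built per-tool images;
# each one keeps its files under its own prefix instead of a shared /bin.
COPY --from=magic    /build /tools/magic
COPY --from=openroad /build /tools/openroad
# stow then symlinks everything into a single /usr/local tree.
RUN cd /tools && stow -t /usr/local magic openroad
```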
@Ahmed Ghazy What are your thoughts re the multistage approach? I was thinking we can cache the tools and make a multiarch openlane image with inline cache. Then for most people, the build should be near instantaneous. The main drag against the current state of play is it’s probably easier to ‘knock’ the Dockerfile and cause a full rebuild
Another approach would be to still use the tarball approach, but have multiarch tarballs. The main issue with that is that actually producing those multiarch tarballs would be painfully slow (it’d use the qemu-based builds in docker)
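For what it's worth, the inline-cache idea could look something like this with buildx (image tags here are illustrative; only `shapebuild` comes from this thread):

```sh
# Build for both architectures, push, and embed cache metadata in the image
# so later builds (and other people) can reuse the layers.
docker buildx build \
  --platform linux/amd64,linux/ppc64le \
  --cache-from type=registry,ref=shapebuild/openlane \
  --cache-to type=inline \
  -t shapebuild/openlane \
  --push .
```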
The other thing we should discuss is the CI side.. 😬
@Anton Blanchard could you check if `pip install --no-binary pyinstaller` solves the problem?
a
@Rob Taylor
```
Step 29/165 : RUN pip3 install --no-binary pyinstaller
 ---> Running in f3fdfe781c66
...
pyinstaller -F check_cvc.spec --clean
make[1]: pyinstaller: Command not found
make[1]: *** [Makefile:435: check_cvc] Error 127
```
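One possible explanation for the failure above (an inference, not something verified in this thread): `--no-binary` expects a format-control argument, so in `pip3 install --no-binary pyinstaller` the package name gets consumed as that argument and no requirement is actually installed. The usual spellings pass the package separately:

```sh
# Force building from source for everything being installed here...
pip3 install --no-binary :all: pyinstaller
# ...or restrict the source build to pyinstaller itself.
pip3 install --no-binary pyinstaller pyinstaller
```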
r
@Anton Blanchard thanks, that is odd! I’ll update to build it manually
@User did you have any thoughts on my question in the multiarch thread^?
a
@Rob Taylor (cc: @Amr Gouhar): Sorry about the delay... I got distracted with MPW-1 issues. I can see that you pushed the stow stuff yesterday; I will test that again and let you know. About the multi-stage approach, I did indeed run into several cases where I 'knocked' the Dockerfile with simple fixes and caused a full rebuild, but that's not a big deal if one person has to do this only one time... In general, I have no issues with that approach. Do you happen to have pushed the images of the tools to docker hub already so that I can directly take a look at how well that works (even if it's targeting a different architecture)?
Moreover, would you like to set up a meeting to discuss this in more detail?
r
Yes, all the images are pushed to shapebuild on docker hub
You can set `CACHE_ID=shapebuild` and `DOCKER_ID` to your hub ID
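Presumably (this is an inference about the build scripts, not confirmed in the thread) that means something along these lines:

```sh
export CACHE_ID=shapebuild             # pull layer cache from the shapebuild images
export DOCKER_ID=<your-docker-hub-id>  # tag/push under your own account
make                                   # hypothetical entry point into the build
```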
A meet would be perfect. When is good for you?
a
@Rob Taylor: What's your timezone?
r
GMT, though I’m usually working up to 2100 to overlap with US
Still some issues occurring which I need to figure out - some difference between how the openLane tarball was created and using ADD with .dockerignore
a
@Rob Taylor: Would tomorrow 2:00PM GMT work for you?
r
Perfect!
a
@Rob Taylor (cc: @Ahmed Ghazy): I think we could start a document on what to discuss and try to list all the issues and concerns before we meet.
r
Could you send an invite to rob@shape.build
Good thinking
a
@Rob Taylor (cc: @Ahmed Ghazy): Great! I'll send out an invitation along with the empty document.
r
Brilliant!
a
Perfect 👍