# caravel
a
Hey all! My team is having LVS issues with our caravel user project and would appreciate any help! We could be missing something obvious, so please advise! The project we're working on is this: https://github.com/getziadz/caravel_mpw5_prga . In a nutshell, this is an open-source, synthesizable, embedded FPGA. We have to use a hierarchical design flow, in which a small design (`tile_clb`) is hardened and replicated in a larger design (`top`, which will be instantiated in `user_project_wrapper`). The larger design also contains logic.

At this moment, we can get `tile_clb` DRC- and LVS-clean with openlane. Routing of `tile_clb` uses up to met2, except for 5 met4 tracks for the PDN (auto-generated with the openlane flow, no hack in `config.tcl`). However, LEF LVS fails on `top` with a bunch of instance and net mismatches. The macros are manually placed in `top`, and we've been careful to avoid met4 PDN overlaps. Enough space is left between macros to make sure all met1 rails are connected to at least one met4 POWER and one met4 GROUND strap.

We've tried:
- Connecting the PDN pins in RTL (guarded with `USE_POWER_PINS`)
- Setting `FP_PDN_ENABLE_MACROS_GRID` to 1 in `top`, and setting `FP_PDN_MACROS` to the list of all macros (see the config sketch below)

We haven't tried `FP_PDN_MACRO_HOOKS`, because slack history suggests it is unnecessary if we have only one power domain and the pins are named `vccd1` and `vssd1`. The OpenLane log shows that the number of instances matches, but the number of nets mismatches (off by 2x the number of hard macros). It seems that this is related to the PDN, but after checking `results/lvs/top.lvs.lef.log`, a variety of instances are actually not matched either. Any insights? We'd be happy to share any logs if anyone offers help. Many thanks in advance!
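For context, a minimal sketch of what those `config.tcl` settings might look like for a hierarchical run like `openlane/prga`. The file paths and macro instance names (`i_tile_x0y0`, …) are placeholders rather than the project's actual values, and `FP_PDN_MACRO_HOOKS` is shown commented out only for completeness:

```tcl
# Illustrative excerpt only -- paths and instance names are hypothetical.
# Hardened tile_clb views handed to the parent (hierarchical) run:
set ::env(EXTRA_LEFS)      "$::env(DESIGN_DIR)/../../lef/tile_clb.lef"
set ::env(EXTRA_GDS_FILES) "$::env(DESIGN_DIR)/../../gds/tile_clb.gds"

# Ask the PDN generator to also drop straps over (and hook up) the macros:
set ::env(FP_PDN_ENABLE_MACROS_GRID) 1
set ::env(FP_PDN_MACROS) "i_tile_x0y0 i_tile_x0y1 i_tile_x1y0 i_tile_x1y1"

# Explicit per-instance power hookup; the exact field order depends on the
# OpenLane version (instance, power net, ground net[, power pin, ground pin]).
# set ::env(FP_PDN_MACRO_HOOKS) "i_tile_x0y0 vccd1 vssd1"
```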
m
@User I'll take a look and let you know if I find anything.
The `gds/user_project_wrapper.gds` matches `verilog/gl/user_project_wrapper.v` from your repo. I see no `top` blocks. I'm on e8d85b8681b98f3ef613591326f13b72a9f197f5 (HEAD -> prga-mpw-5b).
d
@User During spice extraction I see warnings:

Warning: Ports "io_oeb[3]" and "io_in[9]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[5]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[37]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[34]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[32]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[30]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[29]" are electrically shorted.
Warning: Ports "io_oeb[3]" and "io_in[10]" are electrically shorted.

The LVS failure log also shows the same nets shorted. There are some known issues around the global router not honoring macro .lef rules. LVS passed once I added a met1/met2 obstruction around the macro in user_project_wrapper/config.tcl:

set ::env(GLB_RT_OBS) "met5 0 0 2920 3520, \
                       met2 1175 1690 1340 1934.8, \
                       met1 1175 1690 1340 1934.8"
g
@User, we are currently working on the `prga` submodule, which will be integrated with the wrapper as a next step.
a
Hey @User and @User, thanks a lot for helping! Yeah, as @User said, we're not working on user_project_wrapper right now. We're working on `top`, which is under `openlane/prga` at the moment. Unfortunately no GDS has been generated for `top` yet.

@User that global router issue sounds bad.. I'll take note of that. I'll take a look at the spice extraction log of `top` and see if that's the reason.
m
@User So the repo you shared doesn't have the data needed to reproduce the problem? @User Where did you get the data to do the spice extraction?
a
@User the flow failed at LVS and didn't output into the lef/gds directories at the root of the project. Should we check in the `runs` folder? `make prga` can reproduce the problem, but synthesis takes some serious time.

I can share any logs/intermediate results if needed πŸ™
m
If you can't find the gds and powered verilog in the runs directory, you could share your logs.
@User Were you talking about your experience on your own chip and not this one, or are you working with Ang Li?
a
@User I can find `top.lvs.powered.v` under `runs/results/lvs`, and `top.gds`, `top.lef`, `top.lef.mag`, `top.lef.spice`, `top.mag` and `top.spice` under `runs/results/magic`. I'll share these files. @User is not working with us πŸ™‚
merged_unpadded.lef is also included
I'm also trying to check in the entire runs directory. Thanks a lot for helping! @User
Here are the LVS logs
m
I'll need the verilog or spice for `tile_clb` also.
a
sure one second
This file is found in `caravel/spi/lvs`, but it's not added into the `config.tcl` of `top`, so I don't know if `top` is reading it.
Verilog files (tile_clb.bb.v is the black-boxed version of tile_clb.pickled.v)
m
From the logs it looks like the layout power is not connected to the `tile_clb` instances:
Net: i_tile_x3y3/vssd1                     |(no matching net)
  tile_clb/vssd1 = 1                       |
                                           |
Net: i_tile_x3y3/vccd1                     |(no matching net)
  tile_clb/vccd1 = 1                       |
                                           |
I'll look at the gds.
a
cool thanks!
yeah below that there are more unmatched instances
m
Looks like the M4 power rails over `tile_clb` are not connected to caravel power.
a
Is that not expected? Those met4 straps can be connected to the met5 power grid in the end, right?
m
Yes. But in order to pass LVS at this level, they need to be connected, at least virtually (same top level text).
a
I see. What are the options then?
Should I create a different PDN for tile_clb, e.g. using met2/met3?
Also, what do `FP_PDN_ENABLE_MACROS_GRID` and `FP_PDN_MACROS` do?
m
You'll need to ask someone with more knowledge than me. The `tile_clb.spice` file above is extracted from the layout. The other `tile_clb` verilog files appear to be RTL. Do you have gl verilog? That's what's needed for LVS.
a
oh yes of course.
tile_clb.v
d
@User Did you re-run after adding the met1/met2 blockage in the config.tcl file?

set ::env(GLB_RT_OBS) "met5 0 0 2920 3520, \
                       met2 1175 1690 1340 1934.8, \
                       met1 1175 1690 1340 1934.8"

I have locally run your repo https://github.com/getziadz/caravel_mpw5_prga.git, and LVS passed after the changes.
m
Thanks for your help @User. I don't think the data they are having trouble with is in the repo. `top` is the cell name.
a
Hi @User, I haven't. Did you run `make prga` or `make user_project_wrapper`? We haven't started working on the wrapper yet.
m
@User Still extracting. I'll look into it more in the morning. met3 doesn't look too congested. You might be able to do horizontal met3 PDN routing.
a
@User thanks! I'm looking into how to override the default PDN generation and learning the pdngen syntax. met3 is horizontal though; I assume I still need vertical met2/met4 to connect to the met1 rails?
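For readers unfamiliar with this step: one way the default PDN generation could be overridden in OpenLane flows of that era was to point the run at a design-local pdngen spec. A rough sketch, assuming the `PDN_CFG` hook and the Tcl-based `pdngen::specify_grid` format; the layer choices, widths, pitches and offsets below are illustrative, not this project's actual values:

```tcl
# config.tcl: use a design-local pdngen spec instead of the default one
# (variable name as in OpenLane 1.x; verify against the version in use).
set ::env(PDN_CFG) "$::env(DESIGN_DIR)/pdn.tcl"
```

```tcl
# pdn.tcl (illustrative excerpt): keep the macro's own straps on lower metals
# so met4/met5 remain free for the parent-level power grid.
pdngen::specify_grid stdcell {
    name grid
    rails {
        met1 {width 0.48 pitch $::env(PLACE_SITE_HEIGHT) offset 0}
    }
    straps {
        met2 {width 0.6 pitch 30.0 offset 5.0}
        met3 {width 0.6 pitch 30.0 offset 5.0}
    }
    connect {{met1 met2} {met2 met3}}
}
```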
m
The met1 rails are already connected to met4. You just need to connect the met4 horizontally.
a
I see. And should this be done at `top`, or should I customize the PDN generation within `tile_clb`? `tile_clb` routes up to met2, so my hunch is that doing it at `top` is easier?
m
The only problem with LVS was the power nets for `tile_clb`. Connect those, and you should be good to go.
πŸ‘ 1
a
awesome! thanks a lot @User. will update soon
Hey @User, I ended up changing the `tile_clb` PDN to use met2/met3 and connecting met3 to the met4 PDN in `top`. LVS passes now. There are some hold violations that I need to look into, but they should be unrelated to the PDN. Thanks for your help!
πŸŽ‰ 1
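For completeness, a rough sketch of what the parent-level hookup described above might look like in the same Tcl pdngen format: a macro grid that picks up the `tile_clb` power/ground pins (now on met3) and ties them to the met4 straps of `top`. The `_PIN_hor` suffix and the key names follow the old Tcl pdngen spec as remembered, so treat this as an assumption to verify rather than the project's actual configuration:

```tcl
# Illustrative only -- not the project's real pdn spec.
# In top's pdngen config: connect each tile_clb instance's met3 PG pins
# up to the met4 straps of the parent grid.
pdngen::specify_grid macro {
    orient {R0 R180 MX MY}
    power_pins  "vccd1"
    ground_pins "vssd1"
    blockages   "li1 met1 met2 met3"
    straps {
    }
    connect {{met3_PIN_hor met4}}
}
```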