# caravel
m
@User So the higher you go in the gds hierarchy, the less the names match, correct. I'll keep that in mind. On a slightly different note, I'm using the sram netlist for storage LVS, and I noticed that the device
sky130_fd_pr__special_pfet_pass
is not being extracted. Is this a tech file problem? Here's a link to the netlist for the sram module. https://github.com/efabless/sky130_sram_macros/blob/sky130_name_mapping/sram_1rw1r_32_256_8_sky130/sram_1rw1r_32_256_8_sky130.lvs.converted.sp
m
The extraction of the bitcell is odd because there are "parasitic" devices that aren't real transistors. If you look at the dimensions, it is likely some nonsensical shape. The devices extracted by Magic and the devices extracted by Calibre will be slightly different because sometimes the parasitic devices get dropped.
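Since the two extractors can disagree only on these parasitics, one practical workaround is to filter them out of both netlists before comparing. Here is a minimal sketch of that idea; the 0.15um threshold comes from the bitcell dimensions quoted later in this thread, and the `real_devices` helper and its simple line-based parsing are hypothetical, not part of any real LVS tool.

```python
import re

def real_devices(netlist_lines, min_length=0.15):
    """Keep only device lines whose L parameter is at least min_length (um).

    Assumption: drain-only parasitics in these SRAM cells all have a channel
    length below the 0.15um feature size (e.g. L=0.08), while real
    transistors are drawn at L=0.15.
    """
    kept = []
    for line in netlist_lines:
        m = re.search(r"\bL=([0-9.]+)", line)
        if m and float(m.group(1)) < min_length:
            continue  # drain-only parasitic; one extractor may drop it
        kept.append(line)
    return kept

cell = [
    "XM8 vdd Q Q_bar vdd sky130_fd_pr__special_pfet_pass W=0.14 L=0.15 m=1",
    "XM10 Q_bar wl1 Q_bar vdd sky130_fd_pr__special_pfet_pass L=0.08 W=0.14 m=1",
]
print(real_devices(cell))  # only the XM8 line survives
```

A real netlist would need a proper SPICE parser (continuation lines, parameter expressions), but the same length cutoff applies.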
t
Yes, but I thought that I had a tech file that could find and extract the parasitic devices reliably.
m
It can, but it is different from Calibre. And I gave Mitch a Calibre-extracted spice model of the memory.
t
Oh, okay.
m
@Matthew Guthaus Could you be referring to the "drainOnly" devices comment in the spi file? The problem I'm seeing is that no sky130_fd_pr__special_pfet_pass devices are extracted at all. The nmos drainOnly devices do not appear to be extracted by magic either, so I might filter them. Also, I'm currently using the sram macro from efabless and not the one @Matthew Guthaus sent me. It does appear to have parasitics (if that is what drainOnly is referring to).
m
Actually, what are pfet_pass devices? There are only nfet_pass...
and nfet_latch/pfet_latch
or whatever the names were that were decided on
m
This is the bitcell core cell from the spice file
.SUBCKT sky130_fd_bd_sram__openram_dp_cell bl0 br0 bl1 br1 wl0 wl1 vdd gnd
**
*.SEEDPROM

* Bitcell Core
XM0 Q wl1 bl1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM1 gnd Q_bar Q gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM2 gnd Q_bar Q gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM3 bl0 wl0 Q gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM4 Q_bar wl1 br1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM5 gnd Q Q_bar gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM6 gnd Q Q_bar gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM7 br0 wl0 Q_bar gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.15 m=1
XM8 vdd Q Q_bar vdd sky130_fd_pr__special_pfet_pass W=0.14 L=0.15 m=1
XM9 Q Q_bar vdd vdd sky130_fd_pr__special_pfet_pass W=0.14 L=0.15 m=1

* drainOnly PMOS
XM10 Q_bar wl1 Q_bar vdd sky130_fd_pr__special_pfet_pass L=0.08 W=0.14 m=1
XM11 Q wl0 Q vdd sky130_fd_pr__special_pfet_pass L=0.08 W=0.14 m=1

* drainOnly NMOS
XM12 bl1 gnd bl1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.08 m=1
XM14 br1 gnd br1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.08 m=1

.ENDS
XM8 and XM9 are the sky130_fd_pr__special_pfet_pass devices (along with 2 parasitics). I opened the GDS in klayout and saw the nmos diffusion, but I couldn't see any diffusion over the nwell (pmos). There were some conversion errors during magic GDS-in. Maybe related?
Reading "pk_sky130_fd_bd_sram__openram_dp_cell".
Error while reading cell "pk_sky130_fd_bd_sram__openram_dp_cell" (byte position 134172): Unknown layer/datatype in boundary, layer=33 type=43
Error while reading cell "pk_sky130_fd_bd_sram__openram_dp_cell" (byte position 138364): Unknown layer/datatype in boundary, layer=22 type=21
Error while reading cell "pk_sky130_fd_bd_sram__openram_dp_cell" (byte position 139324): Unknown layer/datatype in boundary, layer=22 type=22
Error while reading cell "pk_sky130_fd_bd_sram__openram_dp_cell" (byte position 139580): Unknown layer/datatype in boundary, layer=235 type=0
Error while reading cell "pk_sky130_fd_bd_sram__openram_dp_cell" (byte position 142508): Unknown layer/datatype in boundary, layer=33 type=42
t
@Matthew Guthaus: Did we cross signals somewhere? I have magic defining extraction for types "nfet_pass", "pfet_pass", and "nfet_latch".
m
@Tim Edwards I think those were the early names that Tim A chose, but pfet_latch makes more sense. I thought we changed it?
OHH, I know what the issue is. @Mitch Bailey you need the updated GDS with the HVTP layer added
t
I have not changed it (apparently); since we do not have an official repository for the SRAM with anything generated by magic, it would only make a difference when we run LVS like we're doing now. In that case, I should be matching whatever you're doing with the netlists for the SRAM core cells that go into the public repository.
m
I am actually using pfet_pass in the LVS netlist and pfet_latch in the simulation netlist.
m
Hopefully, it's just a rule problem. I'm using the gds from efabless/caravel origin/develop data commit 384a7d5. @Tim Edwards Is this what you're using to create the masks? Using the klayout tech file as a reference, it looks like the layers that magic doesn't recognize are
22/21 cfom.maskAdd
22/22 cfom.maskDrop
33/42 cp1m.maskDrop
33/43 cp1m.maskAdd
235/0 prBoundary.drawing
which don't appear to have anything to do with diffusion. Can someone take a look at the gds data for the openram_dp_cell and make sure the diffusion for the pmos is there? Needless to say, if it's not getting streamed out, (insert any expletive).
m
@Mitch Bailey are there any shapes on hvtp.drawing 78/44?
m
@Matthew Guthaus Yes. I confirmed 1 rectangle in openram_dp_cell_dummy
m
Ok, that is good. Without that, it wouldn't recognize it as the device that is missing. And it was originally missing from the cells they gave us
m
@Tim Edwards @Matthew Guthaus Sorry, I was looking at the dummy cell, which has no pfets. openram_dp_cell has the pmos diffusion, but nothing is extracted, so it's probably a rule problem.
m
That is correct for the dummy cell
m
@Tim Edwards So here are the devices from openram_dp_cell
XM8 vdd Q Q_bar vdd sky130_fd_pr__special_pfet_pass W=0.14 L=0.15 m=1
XM9 Q Q_bar vdd vdd sky130_fd_pr__special_pfet_pass W=0.14 L=0.15 m=1
And here's the tech file
layer ppu pfetarea
 and-not LVTN
 and HVTP
 and COREID
 # Shrink-grow operation eliminates the smaller ppass device
 shrink 70
 grow 70
 labels DIFF
Looks like the shrink/grow eliminates any device with a width OR length <= 0.14. The parasitic devices have L=0.08, so I'm going to try dropping the shrink/grow to 50
t
@Mitch Bailey: The definitions have gotten into a somewhat meaningless state. The input definitions for "npass" and "npd" are the same, which means that you cannot input an "npass" device because it will be overwritten by "npd". Likewise, the shrink/grow operation on the "ppu" device just causes the smaller devices to vanish.

Here's the full story: I have only two screenshots of the original schematics. The single-port SRAM schematic has devices npd = 0.21/0.15, npass = 0.14/0.15, ppu = 0.14/0.15, and pdo = 0.14/0.025. The dual-port SRAM schematic has only "nshort" and "pshort" labels, but the devices are either 0.21/0.15 for the N devices, 0.14/0.15 for the P devices, or 0.14/0.08 for the P devices that are formed by tucking the end of the poly under diffusion. The ultra-short devices are then not really devices but just MOSCAPs. The problem is that "nshort" and "pshort" are not defined at those widths and lengths, so clearly they are supposed to be npd and ppu devices as well.

There is a ppu model for size 0.14/0.025-0.05. Since this has an L that is way below the process feature size, I assume it is the correct model for the parasitic MOSCAP. But there is no valid model for the size 0.14/0.08 anywhere in the device models. Since all of these devices are unique to the SRAM core cells, it's easy enough to use the COREID layer to distinguish these device types from everything else. To distinguish the npass from the npd, I did the shrink/grow to eliminate the smaller devices.

It appears that everything that is a P device should be a ppu. There is no "pdo" device in the models, and the "pdo" device matches one of the valid models for "ppu" in the device models. But that means that the dual-port 0.14/0.08 device extracts as a "ppu" and then won't simulate because there isn't a valid model for it. But then there isn't a valid model of any kind at that size. One option is to ignore it. The other option (I haven't tried it) would be to extend the ppu device model's upper bound on length from 0.05 to 0.08 and assume that it will probably extrapolate in a sane manner.

FYI, in the dual-port SRAM cell there are also two devices formed by poly tucked under tap, which form parasitic varactors. As there is nothing even faintly resembling a model for these, they are completely ignored in extraction. I need magic at least to compute the proper parasitic cap there during extraction, but I have no guidance in any documentation as to what the value of that cap would be.
@Mitch Bailey: The long-story-short of that is that my recommendation would be to (1) remove the shrink/grow from ppu, as you observed; (2) remove the shrink/grow from npass so that the smaller devices come out as type npass and the larger ones as npd; and (3) modify the device model for ppu to raise lmax from 0.5e-007 to 0.8e-007 (given that all models otherwise have a +/-0.005e-007 margin, I guess it should be changed on both sides to lmin = 0.245e-007 and lmax = 0.805e-007).

I am using the nomenclature for magic layers, which is based on the original SkyWater s8 device types. The new names are sky130_fd_pr__special_nfet_pass for npass, sky130_fd_pr__special_nfet_latch for npd, and sky130_fd_pr__special_pfet_pass for ppu. As Matt noted, the last is not a correct name change, because the "pass" devices (which are N-only) are the pass transistors, while the "ppu" are P-pullup devices and "npd" are N-pulldown devices in the back-to-back inverters; if you call the back-to-back inverters a latch, then it's appropriate to call them nfet_latch and pfet_latch, but not pfet_pass, and I will force that change when I get around to starting pull requests on the library repositories.
Since the magic tech file changes are entirely under my control, I will go ahead and make those corrections now to the open_pdks repository.
Well, that doesn't quite work either, because there is a parasitic N device in the dual-port RAM cell that now extracts as an "npass", but it is 0.21/0.08. There is a model bin for that size in the "npd" model, so it needs to extract as that... or not at all. The schematic doesn't have the device at all (to be sure, it forms a very bizarre device). I can make it show up as an "npd", but I can't even say what would constitute a proper extraction of it.
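The recurring problem here is model binning: a device only simulates if its (W, L) falls inside some bin of the model it was extracted as. A minimal sketch of that check, using illustrative bin values paraphrased from this thread (ppu binned at L = 0.025-0.05um with a +/-0.005um margin, proposed to be widened to 0.08um); these are not the actual SkyWater model cards.

```python
def in_some_bin(bins, w, l):
    """True if (w, l) in um falls inside any (wmin, wmax, lmin, lmax) bin."""
    return any(lmin <= l <= lmax and wmin <= w <= wmax
               for (wmin, wmax, lmin, lmax) in bins)

# Hypothetical ppu bins: before and after raising lmax as Tim proposes.
ppu_bins_old = [(0.135, 0.145, 0.0245, 0.0505)]  # lmax ~ 0.05um
ppu_bins_new = [(0.135, 0.145, 0.0245, 0.0805)]  # lmax raised to ~ 0.08um

print(in_some_bin(ppu_bins_old, 0.14, 0.08))  # False: 0.14/0.08 has no valid model
print(in_some_bin(ppu_bins_new, 0.14, 0.08))  # True once the bin is widened
```

The same check explains the 0.21/0.08 N device: it lands in an "npd" bin but not an "npass" one, which is why it must extract as npd or not at all.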
m
@Tim Edwards Sounds like we're getting close to a resolution. Just to clarify: 1) no shrink/grow for ppu, 2) leave shrink/grow at 70 for npd, 3) no shrink/grow for npass (do we also need to subtract npd?). Does the following cause a problem? I looked at the original spice file from openram, and there are only npd and ppu devices, which have been mapped to sky130_fd_pr__special_nfet_latch and sky130_fd_pr__special_pfet_pass. It doesn't look like sky130_fd_pr__special_nfet_pass (npass) is used at all.
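The s8-to-sky130 renaming being discussed can be captured as a simple table; this sketch mirrors the mapping named in this thread, and the token-wise `rename_device` helper is hypothetical (the real converted netlists presumably come from a proper renaming script).

```python
# s8 device name -> sky130 name, per the discussion above. As noted, the
# ppu mapping to "pfet_pass" is considered wrong and should become pfet_latch.
S8_TO_SKY130 = {
    "npass": "sky130_fd_pr__special_nfet_pass",
    "npd":   "sky130_fd_pr__special_nfet_latch",
    "ppu":   "sky130_fd_pr__special_pfet_pass",
}

def rename_device(line):
    """Rename any s8 device token on a netlist instance line."""
    return " ".join(S8_TO_SKY130.get(tok, tok) for tok in line.split())

print(rename_device("XM1 gnd Q_bar Q gnd npd W=0.21 L=0.15 m=1"))
```

Since only npd and ppu appear in the original OpenRAM netlist, the npass entry is unused by this conversion.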
@Tim Edwards The converted (device rename) netlists have the following devices which I'm assuming are the parasitics.
* drainOnly PMOS
XM10 Q_bar wl1 Q_bar vdd sky130_fd_pr__special_pfet_pass L=0.08 W=0.14 m=1
XM11 Q wl0 Q vdd sky130_fd_pr__special_pfet_pass L=0.08 W=0.14 m=1

* drainOnly NMOS
XM12 bl1 gnd bl1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.08 m=1
XM14 br1 gnd br1 gnd sky130_fd_pr__special_nfet_latch W=0.21 L=0.08 m=1
t
@Mitch Bailey: What you see in the netlists depends on whether you are looking at the single-port or the dual-port cell. They are designed rather differently from the perspective of these weird parasitic devices. Regardless, I will need to modify the tech file to make sure that the W=0.21, L=0.08 nMOS devices extract as "nfet_latch". Also: Your three points of clarification are correct. There is no need to subtract/eliminate npd because the layers are computed in order, so if two layers have the same operators, then the 2nd layer will overwrite the 1st layer. That's probably a bad policy from the standpoint of attempting to parallelize the operations, though, if I ever decide to tackle that.
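Tim's point about in-order layer computation can be illustrated abstractly: when two layer rules select the same geometry, the later rule silently wins, so no explicit subtraction is needed. This is a toy model of that ordering behavior, not magic's actual tech file engine; the shapes and predicates are invented for illustration.

```python
def assign_layers(rules, shapes):
    """rules: ordered (layer_name, predicate) pairs; later rules win.

    Mimics sequential layer computation: each rule claims every shape its
    predicate matches, overwriting any claim made by an earlier rule.
    """
    owner = {}
    for name, pred in rules:
        for s in shapes:
            if pred(s):
                owner[s] = name  # later rule overwrites earlier assignment
    return owner

# (name, W, L) tuples standing in for device geometry.
shapes = [("small", 0.14, 0.15), ("large", 0.21, 0.15)]
rules = [
    ("npass", lambda s: True),          # matches all core-cell N devices
    ("npd",   lambda s: s[1] >= 0.21),  # the larger devices, computed later
]
print(assign_layers(rules, shapes))
# the 0.14 device stays npass; the 0.21 device is overwritten to npd
```

As Tim notes, this implicit overwrite is convenient but would complicate any attempt to parallelize the layer computations.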
@Mitch Bailey: I updated the magic tech file today and pushed it to open_pdks. Magic now imports from GDS and extracts all the right devices in the dual port cell. The only issues remaining are that one pFET parasitic device ends up with W and L slightly lower than they should be, and magic spews some error messages where it is doing the right thing, so it probably shouldn't be complaining.
m
What do you define as "right" devices?
I may have to redo my LVS models
t
This is relative to the discussion above: the netlist had "npd" devices which magic was "re-interpreting" as "npass" when it read in the GDS, but there is only a valid "npd" device with those dimensions in the device models. Separately, some of the devices were extracting with the wrong dimensions. It's only the parasitic devices that end up with the wrong dimensions, so it probably doesn't affect much if you simulate off the extracted netlist. Today I fixed one of those, and the other is off by only a small amount.
m
It will affect LVS though
t
Yes, probably, if you changed your netlists so that they would match what magic was producing. I would hold off on changing anything until I can track down why it extracts an incorrect L and W for that one device type, though.