# openram
r
@User @User. If you refer to this:
[INFO]: current step index: 25
[INFO]: No DRC violations after detailed routing.
[INFO]: Changing layout from /project/openlane/memory/runs/memory/tmp/routing/23-fastroute.def to /project/openlane/memory/runs/memory/results/routing/24-memory.def
We don't have that error. We have managed to almost complete the workflow for a macro we are building with an OpenRAM block, but it fails at step 38 with:
[INFO]: Running LEF LVS...
[INFO]: /project/openlane/memory/runs/memory/results/magic/memory.spice against /project/openlane/memory/runs/memory/results/lvs/memory.lvs.powered.v
[INFO]: current step index: 38
[ERROR]: There are LVS errors in the design according to Netgen LVS.
[INFO]: Calculating Runtime From the Start...
[INFO]: flow failed for memory/21-10_12-33 in 4h18m0s
[INFO]: Generating Final Summary Report...
What is wrong? If we comment out the use of the OpenRAM block, the macro workflow completes successfully. Any suggestions, or is there anything we are missing?
m
We would need to see the LVS output from netgen to be able to debug this.
r
Where can I find it?
m
Can you post your repo pls? And your toolchain commits etc
I can't reproduce
m
I'm not sure where it is
r
@User @User. Here is the repo: https://github.com/rodhuega/mpw3-memory-test. With the 32-bit-word, 1024-word memory (openlane/memory) generated with OpenRAM, it raises an error at step index 38 due to LVS errors in the design. We have also tried the same thing with a memory generated the same way with OpenRAM, but with a 32-bit word and 32 words, and it raises an error earlier, at step index 25, with 6 violations in the design after detailed routing. So I think the errors depend on the file generated by OpenRAM.
Moreover, I have opened the temporary GDS (openlane/memory/runs/memory/results/klayout/memory.gds) generated up to step 38, errors and all, and it seems to have contents inside it. I mention this because I have seen another message from someone who wasn't sure whether his memory was filled or not.
By the way, if you try to replicate the error, be careful: step index 38 takes many hours (4h39m on my PC). Maybe reducing the die size helps, I don't know.
m
I don't really have time to replicate so if you can send me the report that would be useful.
r
ok, what folders do you need? I can zip them and upload them
I have zipped the memory (LVS errors at step index 38) and memory2 (errors at step index 25) folders. They have most of the files generated by OpenLane. I think some of them are symlinked to another path; if you need those, ask me and I'll upload them. Many thanks
m
It looks like it is complaining about your supplies, which seem to be disconnected in your Verilog netlist.
Do you have power in your Verilog?
r
No, I don't use any kind of power. I use standard Verilog constructs to build the logic
m
That's the problem..
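For reference, the usual fix (a minimal sketch of the Caravel-style convention, not your exact ports) is to guard the supply ports with the USE_POWER_PINS define, so the netlist used for LVS actually carries the supplies:

```verilog
// Minimal sketch of the Caravel-style USE_POWER_PINS guard: the supply
// ports only exist when the define is set. Names other than vccd1/vssd1
// are illustrative placeholders.
module my_macro (
`ifdef USE_POWER_PINS
    inout vccd1,        // 1.8V digital supply
    inout vssd1,        // digital ground
`endif
    input  wire clk,
    input  wire d,
    output reg  q
);
    always @(posedge clk)
        q <= d;         // placeholder logic
endmodule
```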
r
Ok, I will try it now, thanks
m
Number of nets: 1266 Mismatch | Number of nets: 1261 Mismatch
It would be useful to learn to debug LVS errors in netgen. This is in the report: ./runs/memory/results/lvs/memory.lvs.lef.log
r
And is the problem with the small memory, the one that fails at step index 25, the same problem? I get the same mismatch, but it reports the error at a different step.
Ok, thanks for the tip. I'm new to these kinds of tools
m
I haven't looked at step 25. Not sure what that is. Can't help anymore today -- I'm on vacation 🙂
r
Ok, many thanks for the tip and for pointing out the error
m
My advice is to look at my project though and see what settings I had: https://github.com/VLSIDA/openram_testchip
r
ohh, thanks. I'm sure that would have been a future error for me
Hi, I am still trying to get OpenRAM working. I'm still facing the same LVS problem, where there is a net-count difference of 2. I have added

```verilog
`ifdef USE_POWER_PINS
    inout vdda1,        // User area 1 3.3V supply
    inout vdda2,        // User area 2 3.3V supply
    inout vssa1,        // User area 1 analog ground
    inout vssa2,        // User area 2 analog ground
    inout vccd1,        // User area 1 1.8V supply
    inout vccd2,        // User area 2 1.8V supply
    inout vssd1,        // User area 1 digital ground
    inout vssd2,        // User area 2 digital ground
`endif
```

at the top of the module (it is the top module of the macro, and inside it is where I instantiate the OpenRAM module). Where I instantiate the OpenRAM module I have:
```verilog
sram_32_32_sky130 CPURAM (
`ifdef USE_POWER_PINS
    .vccd1(vccd1),
    .vssd1(vssd1),
`endif
    .clk0(clk),
    .csb0(1'b0),
    .web0(!we),
    .spare_wen0(1'b0),
    .addr0(addr_to_sram[5:0]),
    .din0(data_to_sram),
    .dout0(auxiliar_mem_out)
);
```
I have been looking into your repo, but it all seems similar to mine. The main difference is that you aren't hardening a macro: you build the OpenRAM module at the top of user_project_wrapper. So, what am I missing to get OpenRAM working? Thanks
m
What do you mean I'm "not compiling a macro"?
Are the vssd1 and vccd1 actually connected to the macro in the layout?
Also, the SRAMs MUST be in the top level and not in a hardened macro. Otherwise you don't have the right layer straps to connect the power.
It is very limiting how openroad/OpenLane does the pdngen right now
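Roughly, the idea is something like this sketch. Only the sram_32_32_sky130 ports match the snippet quoted above; the wrapper's name and remaining ports are illustrative placeholders, and a real user_project_wrapper has the full Caravel port list:

```verilog
// Sketch: the SRAM macro instantiated at the top level that receives the
// power straps, with its supplies tied through explicitly.
module user_project_top (
`ifdef USE_POWER_PINS
    inout vccd1,                    // User area 1 1.8V supply
    inout vssd1,                    // User area 1 digital ground
`endif
    input         clk,
    input         we,
    input  [5:0]  addr_to_sram,
    input  [31:0] data_to_sram,
    output [31:0] auxiliar_mem_out
);

    sram_32_32_sky130 CPURAM (
`ifdef USE_POWER_PINS
        .vccd1(vccd1),              // supplies must also be connected to
        .vssd1(vssd1),              // the macro in the layout (pdngen)
`endif
        .clk0(clk),
        .csb0(1'b0),
        .web0(!we),
        .spare_wen0(1'b0),
        .addr0(addr_to_sram),
        .din0(data_to_sram),
        .dout0(auxiliar_mem_out)
    );

endmodule
```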
r
By "macro" I mean that you place the SRAM macros directly in user_project_wrapper instead of building them inside, for example, user_project_example. In my config, since it is a macro, I have DESIGN_IS_CORE 0, while the default is 1, as in user_project_wrapper. What do you mean by vssd1 and vccd1 being connected to the macro in the layout? In the Verilog files I have the USE_POWER_PINS ifdef. Ohhh, I see, so the SRAM must not be inside a hardened macro :(( This will be a problem for me. I will try to solve it. Thanks for answering
👍 1
m
Yeah, each level of macro nesting loses a metal layer for power routing.
r
ohhh, so I can't do macros inside macros inside macros? More problems for me 😂
m
@User could you link your WB SRAM project?
@User we are getting closer
We can build a simple design with an OpenRAM macro
But we still can't pass the checks and still have DRC violations
p
@User @User This is a Caravel project that instantiates two macros, a tiny Wishbone wrapper and a 1kB SRAM: https://github.com/embelon/caravel_wb_openram
The current version uses the macro installed with the PDK, so openram_testchip as a submodule is no longer needed. I'll remove this dependency.
"make user_project_wrapper" reports 3M+ DRC errors, but substituting the maglef and running precheck results in only 16 violations.
r
In the end, I have also run into the DRC violations after using the OpenRAM SRAM module at the top of user_project_wrapper. @User Hi, how can I replace the maglefs, and where can I find the right ones? Thank you
p
@User I'm assuming you're using one of the SRAM GDS files installed with the PDK in $PDK_ROOT/sky130A/libs.ref/sky130_sram_macros/gds. Then you can use the mag files from the $PDK_ROOT/sky130A/libs.ref/sky130_sram_macros/maglef directory. To do a precheck, you can add to the openlane/user_project_wrapper/runs/user_project_wrapper/magic/user_project_wrapper.mag file (after hardening) the path to the directory containing the macro maglef files.
@User Is precheck resulting in 16 violations (after maglef substitution) bad or quite ok? Should I expect 0 or some low number?
r
Thank you @User
m
@User you would need to ask eFabless
I'd assume 0
p
@User Thanks!
@User @User It seems that the maglef substitution (before precheck) is now working auto-magically, without modifying the *.mag file (adding the path to the SRAM macros directory inside the PDK). However, it turned out that our 16 violations in precheck are caused by the obsm4 layer, which is present (and too close to the pins) only in the maglef, not in the GDS. Why is there such a discrepancy between GDS and maglef? Is that expected? Is there anything we can do to fix it? The workaround seems to be adding an obstruction on met4 around the SRAM macro. Are there any drawbacks?
m
Can you show me an example and provide the config? Are you saying that TritonRoute routed too close to the obsm4 layer? Some people saw that before, but there isn't anything I can do about it...
That would be an issue to file with TritonRoute.
t
@User: The discrepancy between GDS and LEF is due to simplifying the generation of LEF; the pins are marked and the rest of the design is covered in obstruction layers out to the bounding box. This is particularly important for digital designs; otherwise the only choice is to represent all the internal wiring exactly as obstruction layers, which makes the abstract view large and complicated and adds time to running DRC. Openlane is supposed to route around the obstructions since it should be reading the abstract view of the macro. It is not clear to me why there are DRC errors generated. But if Openlane is creating those DRC errors, then it is an issue for the Openlane developers.
p
@User The whole project (with configuration) can be found here: https://github.com/embelon/caravel_wb_openram. I'm not sure if it's TritonRoute causing this, as the GDS looks fine in the places marked by precheck as violations. How can I be sure and file an issue for TritonRoute? What if I use another router?
m
@User I'm sorry but I don't know. I saw this error in some other designs but wasn't able to diagnose it. I have the WOSET workshop today so I won't be able to look for a while.
The wishbone wrapper is something I want to add to the openram_testchip too... in addition to the GPIO/LA test interfaces. 🙂
If you look at the non-LEF design, it actually passes DRC if I recall.
p
@User Can it be that the LEF file is generated (in terms of the obstruction layer / obsm4) by OpenLane in a not-so-good way, and then the violations are found during precheck? Or should I suspect the routing? Do you know how to file an issue for OpenLane?
@User What do you mean by non-LEF design?
m
If you look at the design with the GDS for the SRAMs instead of the maglef
p
So I need to substitute the maglef with the GDS in some file?
m
The GDS gets substituted somewhere, but not during that part of the check, so I'm not sure.
p
@User You can use this Wishbone wrapper if you need one 🙂 I hope it will work in silicon as expected. My idea for the future is to add a second Wishbone interface (the SRAM block has a second, read-only port) and some MUXes (controlled on the fly by a signal) to connect the SRAM RW port to one of the Wishbone interfaces (to choose which interface gets write access). Then I can use the SRAM as: 1) additional RAM for picorv32 (RW port connected); 2) cache/RAM for another IP (RW port connected), e.g. caching for my simple HyperRAM memory driver from MPW2; 3) a kind of shared memory between picorv32 and another IP (but without fully bidirectional operation).
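For what it's worth, here is a rough sketch of that switching idea (all signal names are hypothetical, not from the repo): a select signal steers the SRAM RW port between the two Wishbone buses and returns the ACK only to the bus that currently owns the port.

```verilog
// Hypothetical sketch of on-the-fly switching between two Wishbone buses
// and one SRAM RW port; only a subset of Wishbone signals is shown.
module wb_port_mux (
    input         sel_rw,       // 0: bus 0 owns the RW port, 1: bus 1
    input         wb0_cyc_i,    // Wishbone bus 0
    input         wb0_we_i,
    input  [31:0] wb0_dat_i,
    output        wb0_ack_o,
    input         wb1_cyc_i,    // Wishbone bus 1
    input         wb1_we_i,
    input  [31:0] wb1_dat_i,
    output        wb1_ack_o,
    output        ram_cyc,      // toward the SRAM RW port
    output        ram_we,
    output [31:0] ram_dat,
    input         ram_ack
);
    assign ram_cyc = sel_rw ? wb1_cyc_i : wb0_cyc_i;
    assign ram_we  = sel_rw ? wb1_we_i  : wb0_we_i;
    assign ram_dat = sel_rw ? wb1_dat_i : wb0_dat_i;

    // ACK is routed back only to the bus that currently owns the port.
    assign wb0_ack_o = ~sel_rw & ram_ack;
    assign wb1_ack_o =  sel_rw & ram_ack;
endmodule
```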
m
@User ^^
👍 1
p
Today I finished adding the second Wishbone interface (making the wrapper dual-port), the MUXes, and some logic to handle the Wishbone ACKs for both interfaces / OpenRAM ports. The FW was updated to cover that in DV. It seems to be working fine, with on-the-fly switching of connections between the two Wishbone buses and the two ports of the 1kB OpenRAM.
👍 4