# openlane
@User: Hello, the script you're referring to inserts diodes as close to sinks as possible in order to mitigate antenna violations. When the core utilization is too high (small layout), or the density is too high (cells very close to each other), inserting the diodes may fail because there isn't enough area, or the detailed placer gives up because of the low density. Interestingly, I could get spm to run with 74%.
The process of finding the best configuration parameters for a design is systematic enough that we have it automated (what we call an exploration); please check this for more information on that. You may also be interested in seeing the effects of the parameters, which you can read about here.
What is the core utilization percentage?
And how does it differ from the target placement density?
The core utilization percentage (FP_CORE_UTIL) is used during floorplanning and determines the die area; in the case of a logic block, it's basically equal to
(total area of logic cells) / (die area)
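To make the relationship concrete, here is a minimal sketch of that formula in Python (this is just the arithmetic from the chat, not actual OpenLane floorplanner code; the function name and units are made up for illustration):

```python
# Hypothetical helper: the core utilization relates total logic-cell
# area to die area, so the die area follows from inverting the ratio.

def die_area_from_utilization(total_cell_area_um2: float,
                              core_util_percent: float) -> float:
    """Return the die area implied by a core utilization percentage.

    utilization = (total area of logic cells) / (die area)
    => die area = (total area of logic cells) / utilization
    """
    return total_cell_area_um2 / (core_util_percent / 100.0)

# e.g. 5000 um^2 of logic cells at 50% utilization implies a 10000 um^2 die
print(die_area_from_utilization(5000, 50))  # -> 10000.0
```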
The target placement density (PL_TARGET_DENSITY), which varies from 0 to 1, is used during global placement. Intuitively*, it's a measure of how close the cells are placed to each other: low values mean further from one another, higher values mean closer. A value in the lower part of that range is usually a good choice to keep cells far enough apart and allow diode insertion.

* The exact equation would be more complex to explain, as the placer models cells as electric charges and uses the electrostatic equation (here is the paper if you're interested tho!).
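For reference, both knobs are set in the design's OpenLane configuration. A minimal config.tcl sketch (the values below are placeholders for illustration, not recommendations; tune them per design, e.g. via an exploration):

```tcl
# Hypothetical example values, not recommendations.
set ::env(FP_CORE_UTIL) 40          ;# core utilization, in percent
set ::env(PL_TARGET_DENSITY) 0.45   ;# global placement target density, 0 to 1
```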
thanks for the explanation. I still don't get what FP_CORE_UTIL controls though!
Sorry if it wasn't clear. It determines the layout size (die area) used for your design. Roughly, if your design consists of 200 gates and you specify, say, 50% core utilization, you get a layout size that can hold double that number of gates (400). This means only 50% of the area is used. For example, here are two screenshots of the design with very low utilization vs. high utilization:
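The 200-gate example above can be sketched as a one-liner (the numbers are the ones from the chat; "capacity" is an informal name for how many gates the layout could hold):

```python
# Sketch of the worked example: 200 gates at 50% core utilization
# yields a layout sized for double that many gates.
gates = 200
core_util = 0.50
capacity = gates / core_util  # gates the resulting layout could hold
print(capacity)               # -> 400.0
```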
what is the use case for setting utilisation to a low number? Getting faster intermediate results? I would have thought that users will always want the highest utilisation.
Also thanks for the help!
That's indeed correct; it's desirable to shoot for the highest utilization possible. Some cases, however, force you to use a lower utilization. For example, if you have a very congested design with many connections between the cells, you may need to leave some extra empty room for routing by lowering the utilization. Another important note is that the core utilization only accounts for the logic cells (which are known after logic synthesis), not the area needed for physical cells like tap cells, diodes, and decaps, so we need to leave room for those as well.
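A back-of-the-envelope sketch of that last point: since FP_CORE_UTIL only counts logic cells, the actual occupancy after physical cells (taps, diodes, decaps) are inserted ends up higher than the nominal value. This toy function (names and overhead figure are hypothetical) shows the effect:

```python
def actual_utilization(nominal_util_pct: float,
                       physical_overhead_pct: float) -> float:
    """Actual occupancy when physical cells add area on top of logic cells.

    The die area is sized from the nominal utilization (logic cells only);
    physical cells then consume extra area inside that same die.
    """
    logic_area = 100.0                                   # arbitrary units
    die_area = logic_area / (nominal_util_pct / 100.0)   # set by FP_CORE_UTIL
    occupied = logic_area * (1 + physical_overhead_pct / 100.0)
    return 100.0 * occupied / die_area

# 50% nominal utilization with 10% physical-cell overhead -> 55% actual
print(actual_utilization(50, 10))  # -> 55.0
```

This is why it can pay to set FP_CORE_UTIL a bit below the occupancy you actually want to end up with.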
Glad if I could help!