@Matt Venn This weekend, Boris Murmann pointed out a significant discrepancy in the models running at subthreshold bias. Another slacker commented that they’d seen the same issues with devices at other bias points and suggested that having measured data would be helpful. Both slackers showed that the resulting simulated data was not physically reasonable.
From my own work, I recalled seeing some device IV plots, so I fished around and posted the link with charts, data, etc. It was pointed out to me that there was no information attached to the data indicating which device was used to collect it.
After some more work, I noticed that the .spice file apparently used to simulate the data had the W/L parameters omitted, so there was no way to link the plots to devices and compare the data. Not only that, the data collected to generate the plots had the column headers deleted. In other words, lots of apparently random numbers with no labels to identify how the device terminals were biased during data collection or whether a given column was a voltage or a current.
So here is a case where the models produce non-physical results and there is no way to check the models against measured data. This is designing blind. Why would anyone design a circuit using SPICE if the models are not accurate? For learning? Sure, I can see how that makes sense, but not to design analog IP expecting a certain performance.
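To make that concrete: if the posted data had kept its labels, even a basic sanity check would be possible. Below is a minimal sketch (the file name and column names are hypothetical; it assumes an ID-vs-VGS sweep at fixed VDS and VBS) of the kind of plausibility test I mean. Subthreshold swing can never be better than kT/q * ln(10), about 60 mV/decade at room temperature, so any curve reporting less than that is non-physical.
```
# Minimal sketch only -- the file name "nfet_iv.csv" and the column names
# VGS/ID are hypothetical; this assumes an ID-vs-VGS sweep at fixed VDS and
# VBS, which is exactly the labeling that was missing from the posted data.
import numpy as np

data = np.genfromtxt("nfet_iv.csv", delimiter=",", names=True)
vgs = data["VGS"]
i_d = np.abs(data["ID"])

# Subthreshold swing: the gate-voltage change needed for one decade of drain
# current. The thermal limit is kT/q * ln(10), roughly 60 mV/decade at 300 K;
# a model (or a measurement) reporting less than that is not physical.
swing_mv_per_decade = 1e3 * np.gradient(vgs, np.log10(i_d))
print("Minimum swing in sweep: %.1f mV/decade" % swing_mv_per_decade.min())
```
With unlabeled columns and no W/L on the device instance, even a trivial check like that cannot be done.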
I’ve been designing ICs continuously since 1985, and I’ve seen countless PDKs over those 35 years. The SKY130 PDK falls woefully short of the information typically provided in a PDK.
An open source PDK should carry the same connotation as open source software. Namely, all the pieces should be there in a concise repository, and there should be a way to contact the person(s) who generated the repository data, in this case Skywater. I get that Skywater is probably not going to support their repository data. Nevertheless, that is a first, and important, departure from open source IP. In that case, though, the IP that is provided needs to be complete and accurate; otherwise this is just experimental.
In one of the YouTube videos, I listened to the representative from Google, who claimed that one purpose of this open source initiative is to test risky ideas that might not work. I get that. But to take that step, one needs to have all of the background pieces in place to move forward. If I design a circuit idea, I want to trust the models, not worry about whether they are representative of how the devices actually work. When the circuit comes back, I want to know that any problems are in my design, not in the device models.
I’ve also seen the videos and discussion about setting up an environment that doesn’t require expertise to design a chip. The presenter from Google proclaimed they are not a chip designer. I think this is a great idea. Programmer becomes chip designer. Enter key => working chip.
I’ve checked the design flow from Verilog to GDSII and it does work. But that doesn’t require intimate analog knowledge of the process. As long as the timing data, cell layouts, etc. are in place, one can do precisely what was claimed by the Google engineer. Bravo! Of course, this same knowledge and experience can be gained taking RTL to silicon on an FPGA; in many IC design companies, that is the traditional design flow.
But it is a large step from RTL to an IC in the digital domain, and from transistor schematics to an IC with analog functions. An analog IC designer needs very intimate knowledge of the process, from the device physics level out to the circuit/system design level.
A programmer is not going to become an analog chip designer in a few weeks. The disciplines of an analog IC design engineer and a digital IC designer are completely different and require completely different backgrounds and mindsets. Even within traditional semiconductor companies, digital engineers never touch the analog sections of an IC, and analog engineers do not run place and route tools.
While I agree there is value in the OpenLane flow for digital, there is a rub. Digital-only innovations and risks require much more advanced nodes than 130nm. Conversely, substantial analog innovation is still ongoing at 130nm and even older nodes. The ability to integrate a small processor on a 130nm device with substantial analog functionality can be state of the art and valuable.
In summary, I see the digital OpenLane flow as valuable in two areas: (a) learning and (b) generating digital IP for mixed-signal applications. But the mixed-signal part is only possible for inventive, risky usage if all of the analog design information is available and accurate: accurate SPICE models, measured data curves for all device types, and so on, i.e. the information always found in a PDK.