Hi everyone.
So @Philipp Gühring and I have been working together with ChatGPT and Bard on the AI silicon challenge, but soon found out that the project we're aiming for doesn't really fit into a single QFN package.
For a medium-capacity Tensor Processing Unit, which unlike the Google Coral Edge TPU would allow training an entire model from scratch (not only the final layer), we'd need several chiplets flip-chip bonded together.
Having both Matrix Multiplication and Matrix Convolution on a single die already makes routing a nightmare.
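For context on the compute side: the matrix-multiply block we're talking about behaves like a systolic array of multiply-accumulate (MAC) cells. Here's a minimal, purely illustrative software sketch of what one output-stationary pass computes (the function name and structure are just for explanation, not our actual RTL):

```python
def systolic_matmul(a, b):
    """Multiply a (m x k) by b (k x n) the way an output-stationary
    systolic array would: each output cell accumulates one MAC per
    cycle as operands stream through the array."""
    m, k, n = len(a), len(b), len(b[0])
    acc = [[0] * n for _ in range(m)]  # one accumulator per PE
    for cycle in range(k):             # operands stream in over k cycles
        for i in range(m):
            for j in range(n):
                acc[i][j] += a[i][cycle] * b[cycle][j]
    return acc

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

Every one of those MAC cells needs operand and result wiring, which is why packing both this and the convolution block onto one die gets congested fast.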
To save space, we want to have the SDRAM hooked up to the chips externally, which means we need a decent number of pins for that.
We'd provide a USB-C interface instead of PCIe, which would make it possible to put the accelerator into a USB dongle form factor.
Any chance we can get support for such a project from chipIgnite?