# adiabatonauts
Michael P. Frank @MikePFrank · 8:30 AM · Nov 9, 2022
https://twitter.com/MikePFrank/status/1590381441850167302?s=20&t=SzyZhBjkUPU8FNJpYLQtcg

Here’s another way of viewing the latest chart that makes the benefit of adiabatic switching even plainer. Let’s say that you have a system whose power delivery and cooling systems are maxed out at 100 W/cm² from the main processor chips. Let’s now look at the performance limits.

With conventional tech that’s voltage-optimized for peak performance per unit chip area, you can pack in at most about 2.6 quintillion (2.6×10¹⁸) logic switching events per second per square centimeter of chip area by 2037, and thus per 100 W of system power.

But what if we use adiabatic switching? If we optimize adiabatic tech for peak energy efficiency, we can perform a quintillion (10¹⁸) such switching events per second per square cm while dissipating only 0.1 W/cm². But our cooling system can handle 100 W/cm²! What then makes sense if you want to maximize system performance?

Well, assuming you can afford the fab costs, what makes the most sense is to pack in A THOUSAND ADIABATIC CHIPS of the same size for each single chip in the conventional system design. Total power dissipation of those thousand chips is the same as the one, so the cooling is easy.

What about performance? Those thousand chips are now collectively doing a sextillion (10²¹) logic switching events per second per 100 W. (This neglects overheads, but we’ll get to that later.) Note that this is 385× the performance of the conventional system design!

So power-efficiency has improved by 385×, while cost-efficiency (for the silicon) has only gotten about 2.6× worse. And if most of the manufacturing and deployment cost of the original system was not in the chips, but in other structures, this may not increase total cost much.
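The thousand-chip arithmetic above can be checked with a quick back-of-the-envelope script. This is my own sketch, not part of the thread; the input figures are the thread's stated assumptions (2.6×10¹⁸ conventional switching events/s/cm² at 100 W/cm², 10¹⁸ adiabatic events/s/cm² at 0.1 W/cm²):

```python
# Inputs taken directly from the thread's stated figures.
POWER_BUDGET_W_PER_CM2 = 100.0   # power-delivery / cooling limit

# Conventional tech, voltage-optimized for peak performance (2037 projection):
conv_ops_per_s = 2.6e18          # switching events/s per cm² at 100 W/cm²

# Adiabatic tech, optimized for peak energy efficiency:
adia_ops_per_s = 1e18            # switching events/s per cm²
adia_power_w = 0.1               # dissipation at that operating point, W/cm²

# How many adiabatic chips fit in the same power budget?
n_chips = POWER_BUDGET_W_PER_CM2 / adia_power_w      # ≈ 1000

# Aggregate throughput of those chips, still within 100 W:
total_adia_ops = n_chips * adia_ops_per_s            # ≈ 1e21 (a sextillion)

# Performance (and power-efficiency) gain over the conventional design:
speedup = total_adia_ops / conv_ops_per_s            # ≈ 385

# Silicon cost grew 1000x for a 385x gain, so cost per unit performance
# worsened by about 1000/385 ≈ 2.6x, matching the thread.
cost_penalty = n_chips / speedup

print(f"{n_chips:.0f} chips -> {speedup:.0f}x perf, {cost_penalty:.1f}x silicon cost")
```

The 385× and 2.6× figures in the thread fall straight out of these three numbers: the gain is (100 / 0.1) × (10¹⁸ / 2.6×10¹⁸).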
Some examples of these other costs: circuit boards, heat sinks, chassis/backplanes, liquid and forced-air coolant-handling systems, racks and cabinets, machine-room space, power plants, real estate. Note that many of these costs are associated with the need to deliver power to the system and get the waste heat out. So when we improve the energy efficiency of compute, we increase the performance we can attain for each unit of cost spent on power/cooling systems.

It seems plausible that there could be MANY high-performance computing applications in which you would gladly increase raw silicon cost per unit performance by 2.6× if it meant that you could fit 385× the aggregate performance within the power-handling capacity of your facility. To me, this seems like a no-brainer, and users of large-scale compute are literally crazy not to be pouring massive resources into making this technology a reality.

I mentioned overheads earlier, and yes, there are some overheads associated with the use of adiabatic/reversible design techniques. But that’s why we need additional R&D: to help optimize designs for practical processing architectures and minimize those overheads. Note that a 385× boost in raw efficiency gives you a lot of room to incur some overhead while still maintaining a significantly improved overall efficiency on a given computational workload of interest.
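The headroom argument in the last paragraph can be made concrete with a toy model. This is my illustration, not the thread's: assume the adiabatic/reversible overheads (extra transistors, clocking, garbage-handling logic, etc.) can be lumped into a single multiplicative factor `k` on energy or area cost, so the net advantage shrinks from 385× to 385/k:

```python
# Hypothetical lumped-overhead model (my assumption, not from the thread):
# an overall overhead factor k reduces the raw 385x gain to 385/k.
RAW_GAIN = 385.0

def net_gain(overhead_factor: float) -> float:
    """Net efficiency advantage after applying a lumped overhead factor."""
    return RAW_GAIN / overhead_factor

# Even large overheads leave a substantial net win; the advantage only
# vanishes entirely if overheads reach the full 385x.
for k in (2.0, 10.0, 50.0):
    print(f"overhead {k:>4.0f}x -> net gain {net_gain(k):.1f}x")
```

Under this toy model, even a 50× overhead would leave a 7.7× net efficiency advantage, which is the "lot of room" the thread refers to.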