Nvidia turns up the AI heat with 1,200W Blackwell GPUs
AI's power and thermal demands hit home
According to Nvidia, its DGX B200 chassis with eight B200 GPUs will consume roughly 14.3kW, something that's going to require roughly 60kW of rack power and thermal headroom to handle.
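For a rough sense of how those two figures relate, here's a minimal back-of-envelope sketch in Python. The 14.3kW per-chassis and 60kW per-rack numbers are the ones quoted above; treating the rack budget as nothing but whole DGX B200 chassis is our simplifying assumption.

```python
# Back-of-envelope check of the figures quoted above (illustrative only;
# the per-chassis and per-rack values come from the article, the packing
# assumption does not).

chassis_power_kw = 14.3   # DGX B200 with eight B200 GPUs, per Nvidia
rack_budget_kw = 60.0     # rack power/thermal headroom cited above

chassis_per_rack = int(rack_budget_kw // chassis_power_kw)
it_load_kw = chassis_per_rack * chassis_power_kw
print(f"~{chassis_per_rack} DGX B200 chassis per {rack_budget_kw:.0f}kW rack "
      f"({it_load_kw:.1f}kW of IT load)")
```

Run as written, that works out to roughly four chassis per 60kW rack, which is consistent with the headroom Nvidia is describing.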
Unlocking Blackwell's full potential will require switching over to liquid cooling. In a liquid-cooled configuration, Nvidia says the chip can put out 1,200W of thermal energy while delivering its full 20 petaFLOPS of FP4 compute.
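Those two headline numbers also imply a rough efficiency figure. The sketch below simply takes their ratio; the 20 petaFLOPS and 1,200W values are the ones quoted above, and no sparsity or real-world utilization caveats are accounted for.

```python
# Rough FP4 efficiency implied by the quoted peak figures (illustrative only).

fp4_petaflops = 20.0      # peak FP4 throughput quoted for a liquid-cooled B200
thermal_output_w = 1200.0  # thermal output quoted for the same configuration

tflops_per_watt = (fp4_petaflops * 1000.0) / thermal_output_w
print(f"~{tflops_per_watt:.1f} FP4 TFLOPS per watt at peak")
```

That comes out to roughly 16.7 FP4 teraFLOPS per watt, again assuming the chip actually sustains its peak rate.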
While Nvidia may dominate the AI infrastructure market, it's hardly the only name out there. Heavy hitters Intel and AMD are rolling out their Gaudi and Instinct accelerators, respectively.
The B200 "Blackwell" is the largest chip physically possible using existing foundry tech, according to its makers. The chip is an astonishing 208 billion transistors, and is made up of two chiplets, which by themselves are the largest possible chips.
Each of the two "Blackwell" chiplets has a 4096-bit memory bus and is wired to 96 GB of HBM3E spread across four 24 GB stacks, for a total of 192 GB on the B200 package.
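The package-level totals follow directly from the per-chiplet figures. The short sketch below just multiplies them out; the 192 GB matches the article, while the aggregate 8192-bit bus width is our own derivation from doubling the per-chiplet bus the article quotes.

```python
# Package-level memory totals implied by the per-chiplet specs quoted above
# (simple arithmetic; the aggregate bus width is derived, not quoted).

chiplets = 2
stacks_per_chiplet = 4
gb_per_stack = 24
bus_bits_per_chiplet = 4096

total_gb = chiplets * stacks_per_chiplet * gb_per_stack
total_bus_bits = chiplets * bus_bits_per_chiplet
print(f"{total_gb} GB of HBM3E across an {total_bus_bits}-bit aggregate bus")
```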