In numbers, an HBM3 stack can reach 819 GB/s of bandwidth and hold 64 GB of capacity. By comparison, the HBM2e stacks used by the AMD MI250 have half the bandwidth, 410 GB/s, and a quarter of the capacity, a mere 16 GB per stack. At eight stacks, the MI250 totals 128 GB of memory and 3277 GB/s of bandwidth; eight stacks of HBM3 would provide 512 GB and 6552 GB/s. For now, though, there aren't any HBM3 memory modules that meet the maximum specification.

HBM3 also doubles the number of independent channels, from eight to 16, and introduces "pseudo-channels" that split each channel in two, supporting up to 32 virtual channels. According to JEDEC, HBM3 additionally addresses the "market need for high platform-level RAS (reliability, availability, serviceability)" with "strong, symbol-based ECC on-die, as well as real-time error reporting and transparency."

JEDEC expects the first generation of HBM3 products to appear on the market soon but notes that they won't meet the maximum specification. A more realistic outlook, it says, would be 12-layer stacks of 2 GB dies, or 24 GB per stack.
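For readers who want to check the per-package arithmetic, here is a minimal Python sketch using the per-stack figures quoted above and the eight-stack configuration of the MI250. The comment about 409.6 GB/s is an assumption about how the rounded 410 GB/s per-stack and 3277 GB/s aggregate figures relate; it is not from the JEDEC announcement.

```python
# Back-of-the-envelope totals for an eight-stack package, using the
# rounded per-stack figures quoted above. (The 3277 GB/s aggregate cited
# for the MI250 presumably comes from the unrounded 409.6 GB/s per-stack
# rate; with the rounded 410 GB/s figure the total works out to 3280 GB/s.)

STACKS = 8  # both parts described above use eight stacks

per_stack = {
    "HBM2e (MI250)":   {"bandwidth_gb_s": 410, "capacity_gb": 16},
    "HBM3 (max spec)": {"bandwidth_gb_s": 819, "capacity_gb": 64},
}

for name, spec in per_stack.items():
    print(
        f"{name}: {STACKS * spec['capacity_gb']} GB total, "
        f"{STACKS * spec['bandwidth_gb_s']} GB/s aggregate"
    )

# HBM2e (MI250): 128 GB total, 3280 GB/s aggregate
# HBM3 (max spec): 512 GB total, 6552 GB/s aggregate
```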