Nvidia CEO Jensen Huang kicked off the company’s GPU Technology Conference (GTC) with a keynote speech full of announcements. The key reveals include Nvidia’s first-ever discrete CPU, named Grace, as well as its next-generation Hopper architecture, set to arrive later in 2022.
The Grace CPU Superchip is Nvidia’s first discrete CPU ever, but it won’t be at the heart of your next gaming PC. Nvidia announced the Grace CPU in 2021, but this Superchip, as Nvidia calls it, is something new. It puts together two Grace CPUs, similar to Apple’s M1 Ultra, connected through Nvidia’s NVLink technology.
Unlike the M1 Ultra, however, the Grace Superchip isn’t built for general performance. The 144-core CPU is built for A.I., data science, and applications with high memory demands. The chip still uses Arm cores, despite Nvidia’s abandoned $40 billion bid to purchase the company.
In addition to the Grace Superchip, Nvidia showed off its next-generation Hopper architecture. This isn’t the architecture rumored to power the RTX 4080, though. Instead, it’s built for Nvidia’s data center accelerators. Nvidia is debuting the architecture in the H100 GPU, which will replace Nvidia’s previous A100.
Nvidia calls the H100 the “world’s most advanced chip.” It’s built using chipmaker TSMC’s N4 manufacturing process, packing in a staggering 80 billion transistors. As if that wasn’t enough, it’s also the first GPU to support PCIe 5.0 and HBM3 memory. Nvidia says just 20 H100 GPUs can “sustain the equivalent of the entire world’s internet traffic,” showing the power of PCIe 5.0 and HBM3.
Nvidia is debuting the new architecture with its EOS supercomputer, which includes 18 DGX H100 SuperPods for a total of 4,608 H100 GPUs. Enabling this system is Nvidia’s fourth generation of NVLink, which provides a high-bandwidth interconnect between massive clusters of GPUs.
As the number of GPUs scales up, Nvidia showed that the last-gen A100 would flatline. Hopper and fourth-gen NVLink don’t have that problem, according to the company. As the number of GPUs scales into the hundreds, Nvidia says H100-based systems can offer up to nine times faster A.I. training than A100-based systems.
This next-gen architecture offers “game-changing performance benefits,” according to Nvidia. Though exciting for the world of A.I. and high-performance computing, we’re still eagerly awaiting announcements around Nvidia’s next-gen RTX 4080, which is rumored to launch later this year.