Deepening AI Training and Inference with Inspur x Habana Labs Partnership
In the competitive AI hardware landscape, Inspur is partnering with Habana Labs to integrate Gaudi2, Habana’s second-generation deep learning training and inference processor, into its high-performance server portfolio.
Boosting Inspur AI Server Performance with Gaudi2
Inspur’s next-gen OAM platform, based on open computing standards, integrates Habana® Gaudi2® accelerators to deliver highly scalable, industry-leading training performance and accelerate AI development on open architecture. The Inspur NF5688M7 is powered by 2x Intel® Xeon® Sapphire Rapids CPUs and 8x Gaudi2 AI mezzanine cards to support advanced AI computing needs. Gaudi2 is offered in the standard OCP OAM 1.1 mezzanine card form factor and supports up to 600W TDP with passive cooling.
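For context, the sketch below shows one way to confirm from software that the accelerators in such an 8-card system are visible to PyTorch through Habana's SynapseAI bridge. The habana_frameworks module paths and calls are assumptions based on publicly documented usage and may differ by software release.

```python
# Minimal sketch: enumerate Gaudi2 accelerators visible to PyTorch on an
# 8-card OAM server. Assumes the habana_frameworks package (SynapseAI PyTorch
# bridge) is installed; module paths may vary by release.
import habana_frameworks.torch.hpu as hthpu

if hthpu.is_available():
    print(f"Detected {hthpu.device_count()} HPU device(s)")  # expected: 8 on an 8-card system
    print(f"Device name: {hthpu.get_device_name()}")         # e.g. the Gaudi2 identifier
else:
    print("No HPU devices detected")
```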
Gaudi2: Training and Inference Upgraded
The Gaudi2 processor features fully programmable, AI-customized Tensor Processor Cores (TPCs), high memory bandwidth, and scale-out based on standard Ethernet technology through native on-chip integration of 24x 100GbE NICs. Leveraging its 24 TPCs and AI-optimized GEMM engine, Gaudi2 supports the most advanced AI data types, including FP8, BF16, FP16, TF32, and FP32, to power deep learning training and inference workloads.
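As a rough illustration of how these data types are exercised in practice, the sketch below runs a PyTorch training step in BF16 mixed precision on a Gaudi device via Habana's SynapseAI PyTorch bridge. The module paths and autocast usage reflect publicly documented patterns but are assumptions here and may differ by software release.

```python
# Minimal sketch of a BF16 mixed-precision training step on Gaudi.
# Assumes the Habana SynapseAI PyTorch bridge (habana_frameworks) is installed;
# module paths and behavior may vary by release.
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # importing this registers the "hpu" device

device = torch.device("hpu")

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(64, 1024, device=device)
targets = torch.randn(64, 1024, device=device)

# Autocast runs matmul-heavy ops in BF16 (on the GEMM engine) while keeping
# numerically sensitive ops in FP32.
with torch.autocast(device_type="hpu", dtype=torch.bfloat16):
    loss = loss_fn(model(inputs), targets)

loss.backward()
optimizer.step()
optimizer.zero_grad()
htcore.mark_step()  # flush the lazily accumulated graph to the device
```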
Gaudi2 also enables building and scaling of training systems, from a single server to complete racks, with a wide array of connectivity options and scale-out topologies. It delivers an extreme memory capacity of 96GB and record-breaking total bandwidth of 2.4TB/s with cutting-edge HBM2 technology, and includes an independent media engine capable of decoding and processing compressed media directly.
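To make the scale-out path concrete, the sketch below shows how a multi-card or multi-node data-parallel job would typically initialize PyTorch distributed training over Gaudi2's integrated Ethernet ports using Habana's HCCL collective backend. The module paths and environment handling are assumptions based on the documented habana_frameworks package and may differ by release.

```python
# Minimal sketch of data-parallel setup over Gaudi2's integrated Ethernet,
# using Habana's HCCL collective backend for PyTorch. Assumes a launcher
# (e.g. torchrun or mpirun) sets RANK/WORLD_SIZE; module paths may vary.
import os
import torch
import torch.distributed as dist
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.distributed.hccl  # registers the "hccl" backend

def main():
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    rank = int(os.environ.get("RANK", "0"))

    # Collectives travel over the on-chip 100GbE ports (intra-node and scale-out).
    dist.init_process_group(backend="hccl", rank=rank, world_size=world_size)

    device = torch.device("hpu")
    model = torch.nn.Linear(1024, 1024).to(device)
    ddp_model = torch.nn.parallel.DistributedDataParallel(model)

    x = torch.randn(32, 1024, device=device)
    loss = ddp_model(x).pow(2).mean()
    loss.backward()      # gradients are all-reduced across cards via HCCL
    htcore.mark_step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```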
By actively pursuing partnerships and innovations such as this collaboration with Habana Labs, Inspur aims to keep delivering cutting-edge solutions for technology’s evolving landscape.