Monday, June 22, 2020

Inspur Launches New AI Servers to Support the Latest NVIDIA A100 PCIe Gen 4 at ISC20

The NF5468M6 and NF5468A5 accommodate eight double-width NVIDIA A100 PCIe GPUs in a 4U chassis. Both support the latest PCIe Gen 4 interface with 64GB/s of bi-directional bandwidth, achieving superior AI computing performance.

San Jose, California, June 22. Inspur, a leading data center and AI full-stack solutions provider, released the NF5468M6 and NF5468A5 AI servers, which support the latest NVIDIA Ampere architecture A100 with PCIe Gen 4, at ISC High Performance 2020. The new servers will provide AI users around the world with a computing platform of superior performance and flexibility.

Thanks to its agile and strong product design and development capabilities, Inspur is the first in the industry to support NVIDIA Ampere architecture GPUs and to build a comprehensive, competitive next-generation AI computing platform. The platform interconnects 8 to 16 NVIDIA A100 Tensor Core GPUs at high speed via NVSwitch, providing AI computing performance of up to 40 PetaOPS and a peer-to-peer (P2P) bandwidth of 600GB/s between GPUs. At present, Inspur's two new Ampere-architecture products, the NF5488M5-D and NF5488A5, have already entered mass production.

The newly released NF5468M6 and NF5468A5 feature many innovative designs and strike a balance between superior performance and flexibility, meeting increasingly complex and diverse AI computing needs. Both servers deliver superb computing performance in high-performance computing and cloud application scenarios.

The NF5468M6 and NF5468A5 accommodate eight double-width A100 PCIe GPUs in a 4U chassis. Both support the latest PCIe Gen 4 interface with 64GB/s of bi-directional bandwidth, a 100% increase over PCIe Gen 3 at the same power consumption. This performance meets the requirements of the most complex challenges in data science, high-performance computing, and artificial intelligence. In addition, 40GB of HBM2 memory increases memory bandwidth by 70% to 1.6TB/s, allowing users to train larger deep learning models. The NVLink Bridge design provides P2P bandwidth of up to 600GB/s between two GPUs, yielding substantial gains in training efficiency. For multi-task training and development scenarios, the MIG (Multi-Instance GPU) feature can partition a single A100 into up to seven independent GPU instances, each handling a different computing task. Through this more fine-grained allocation of GPU resources, users gain more precisely targeted acceleration while raising GPU utilization to an unprecedented level.
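As a rough sketch of how MIG partitioning is typically set up with NVIDIA's nvidia-smi tool (the device index and the 1g.5gb profile name are illustrative; available profiles depend on the GPU and driver version):

```shell
# Enable MIG mode on GPU 0 (requires root; the GPU may need a reset afterwards)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances, each with a default compute instance (-C)
sudo nvidia-smi mig -i 0 -cgi 1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb -C

# Verify the GPU instances that were created
sudo nvidia-smi mig -lgi
```

Each resulting instance appears to CUDA applications as a separate device with its own dedicated memory and compute slice, which is what allows seven independent workloads to share one A100.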

Furthermore, two other leading Inspur AI servers, the NF5468M5 and NF5280M5, also support the NVIDIA A100 PCIe Gen 4.

As the world's leading AI server manufacturer, Inspur offers an extensive range of AI products and works closely with AI customers to improve application performance in scenarios such as voice, semantics, image, video, and search.