Inspur Launches Industry-First 4U AI Server Supporting Eight NVIDIA V100 Tensor Core GPUs with NVSwitch Enabled at GTC
SAN JOSE, Calif., March 19, 2019 /PRNewswire/ — Inspur, a leading data center and AI full-stack solution provider, released the NF5488M5, the industry’s first AI server supporting eight NVIDIA V100 Tensor Core GPUs interconnected with ultra-high-bandwidth NVSwitch in a 4U form factor. The Inspur NF5488M5 is designed to accelerate a variety of deep-learning and high-performance computing applications, including voice recognition, video analysis and intelligent customer service.
Key features include:
- Eight NVIDIA® Tesla® V100 Tensor Core 32GB GPUs (5,120 CUDA cores and 640 Tensor Cores each), providing up to 1 PFLOPS of AI computing performance
- The option of two 28-core CPUs to provide top-level, general-purpose computing performance, and 6 TB of persistent memory for high-speed data access
Flexible & Ergonomic
- A design that fits a broad range of data center power and space constraints, especially power-constrained racks
- Flexible GPU cluster expansion over PCIe fabric
- Designed to operate on 54V DC input, a more efficient power-delivery voltage for GPU systems
- A multi-layer heat dissipation design and intelligent PID (proportional-integral-derivative) fan control that provide industry-leading thermal management
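The headline 1 PFLOPS figure above can be sanity-checked with simple arithmetic, assuming NVIDIA's published peak of roughly 125 TFLOPS of mixed-precision Tensor Core throughput per V100 (the exact figure depends on clocks and workload):

```python
# Back-of-envelope check of the "up to 1 PFLOPS" claim.
# Assumption: ~125 TFLOPS peak mixed-precision Tensor Core
# throughput per V100, per NVIDIA's published spec.
tflops_per_v100 = 125
num_gpus = 8

total_pflops = tflops_per_v100 * num_gpus / 1000  # TFLOPS -> PFLOPS
print(total_pflops)  # 1.0
```

This is a theoretical peak; sustained application performance will be lower and workload-dependent.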
“Inspur has been committed to providing world-class AI computing products and solutions to AI users worldwide through innovative design,” said Jun Liu, Inspur general manager of AI and HPC. “The rapid development of AI keeps increasing the requirements for computing performance and flexibility of AI infrastructure. The NF5488M5 helps users shorten AI model development cycles and accelerate AI technology innovation and application development.”
“NVIDIA’s GPU-accelerated computing has transformed AI and HPC,” said Paresh Kharya, director of Product Marketing at NVIDIA. “Inspur has efficiently innovated computing systems based on the latest NVIDIA Tensor Core GPUs, and the new NF5488M5 will help AI and HPC users worldwide break through their computational bottlenecks.”
Learn more about the NF5488M5 here.