Advancing AI: A Timeline of Inspur Product Innovations
As a longtime technology partner of NVIDIA and a leading GPU server provider, Inspur has for years leveraged its foundational expertise in extreme hardware design to develop cutting-edge solutions for HPC and artificial intelligence applications.
Latest GPU Server Family Featuring NVIDIA’s A100 Tensor Core GPUs
NF5488M5-D
Accommodates 8 powerful A100 GPUs in a 4U chassis alongside 2 of the most mainstream general-purpose CPUs, providing users with leading AI computing performance and mature ecosystem support.
NF5488A5
Integrates A100 GPUs with PCIe Gen4, providing high-speed CPU-to-GPU data communication and greater data transmission bandwidth.
NF5488M6
An extreme-design system for large-scale HPC and AI computing, integrating 8 A100 GPUs linked by high-speed NVSwitch interconnect together with 2 next-generation mainstream general-purpose CPUs.
NF5688M6
Delivers extreme expansion capability for AI users, supporting 8 A100 GPUs and accommodating more than 10 PCIe Gen4 devices simultaneously to achieve a balanced 1:1:1 ratio of GPU compute, NVMe storage, and InfiniBand interconnect.
NF5888M6
Interconnects 16 A100 GPUs at high speed in a single machine, delivering up to 40 PetaOPS of performance for customers seeking extreme AI computing power.
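As a rough sanity check (our assumption; the product copy does not state the precision behind the figure), 40 PetaOPS is consistent with 16 A100 GPUs running at peak INT4 Tensor Core throughput with structured sparsity: 16 × 2,496 TOPS ≈ 39,900 TOPS, or about 40 PetaOPS.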
AI Solutions for Edge Computing
NE5250M5
An edge computing building block optimized for edge AI workloads such as image and video analysis, as well as 5G edge applications such as IoT, MEC, and NFV.
NF5488M5
The industry's first integration of 8 NVIDIA Tesla V100 Tensor Core 32GB GPUs with high-speed NVSwitch in a 4U space. Delivers high expandability, high efficiency, flexible deployment, and up to 1 petaFLOPS of AI computing performance (8 GPUs × 125 TFLOPS of Tensor Core throughput each).
Purpose-Built for AI Applications
NF5280M5-V
A design optimized for both compute and storage, providing unparalleled capabilities for video processing and intelligent video analysis.
Integrating AI Hardware and Software
AGX-2
The world's first server to integrate 8 high-performance GPU accelerators with high-speed interconnect in a 2U form factor.
AIStation Training Platform
Raises AI computing resource utilization to over 90%, greatly shortening the AI model development cycle. The platform supports both training and inference, covering the full workflow of model development, training, deployment, testing, publishing, and serving.
Caffe-MPI Deep Learning Computing Framework
When training deep learning models on the ImageNet dataset, Caffe-MPI shows strong parallel scaling, with performance nearly twice that of Google's TensorFlow deep learning framework.
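Caffe-MPI's scaling comes from synchronous data parallelism: each MPI rank computes gradients on its own shard of the batch, and the gradients are averaged across ranks before every weight update. The sketch below illustrates that allreduce pattern using mpi4py and placeholder NumPy arrays; it is a minimal illustration of the general technique, not Caffe-MPI's actual API.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Toy model parameters, identical on every rank.
weights = np.ones(4, dtype=np.float64)

# Stand-in for a backward pass over this rank's data shard.
local_grad = np.full(4, rank + 1, dtype=np.float64)

# Sum the gradients from all ranks, then average them:
# the core communication step of synchronous data parallelism.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

# Every rank applies the same SGD step, so model replicas stay in sync.
weights -= 0.1 * global_grad
if rank == 0:
    print("averaged gradient:", global_grad)

Launched with, for example, mpiexec -n 4 python allreduce_sketch.py, every rank computes the same averaged gradient, which is what lets this pattern scale across many GPUs and nodes.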