Thursday, September 30, 2021

Inspur Leads the Pack with Superior AI Performance in MLPerf Inference V1.1

Among mainstream high-end AI servers with 8 NVIDIA A100 SXM4 GPUs, Inspur servers took the top position in all tasks in the Closed Division under the Data Center category.

SAN JOSE, Calif., September 27, 2021 – Recently, MLCommons™, a renowned open engineering consortium, released the results of MLPerf™ Inference V1.1, the leading AI benchmark suite. In the highly competitive Closed Division, Inspur ranked first in 15 out of 30 tasks, making it the most successful vendor at the event.

Inspur Results in MLPerf™ Inference V1.1

Vendor  Category     Division  System    Model          Scenario, Accuracy   Score      Units
Inspur  Data Center  Closed    NF5688M6  3D-UNet        Offline, 99%         498.03     Samples/s
Inspur  Data Center  Closed    NF5688M6  3D-UNet        Offline, 99.9%       498.03     Samples/s
Inspur  Data Center  Closed    NF5488A5  DLRM           Offline, 99%         2,607,910  Samples/s
Inspur  Data Center  Closed    NF5688M6  DLRM           Server, 99%          2,608,410  Queries/s
Inspur  Data Center  Closed    NF5488A5  DLRM           Offline, 99.9%       2,607,910  Samples/s
Inspur  Data Center  Closed    NF5688M6  DLRM           Server, 99.9%        2,608,410  Queries/s
Inspur  Edge         Closed    NE5260M5  3D-UNet        Offline, 99%         93.49      Samples/s
Inspur  Edge         Closed    NE5260M5  3D-UNet        Offline, 99.9%       93.49      Samples/s
Inspur  Edge         Closed    NE5260M5  BERT           Offline, 99%         5,914.13   Samples/s
Inspur  Edge         Closed    NF5688M6  BERT           SingleStream, 99%    1.54       Latency (ms)
Inspur  Edge         Closed    NF5688M6  ResNet50       SingleStream, 99%    0.43       Latency (ms)
Inspur  Edge         Closed    NE5260M5  RNN-T          Offline, 99%         24,446.9   Samples/s
Inspur  Edge         Closed    NF5688M6  RNN-T          SingleStream, 99%    18.5       Latency (ms)
Inspur  Edge         Closed    NF5688M6  SSD-ResNet34   SingleStream, 99%    1.67       Latency (ms)
Inspur  Edge         Closed    NF5488A5  SSD-MobileNet  SingleStream, 99%    0.25       Latency (ms)


Developed by Turing Award winner David Patterson together with leading academic institutions, MLPerf™ is the leading industry benchmark for AI performance. MLCommons, founded in 2020 around the MLPerf™ benchmarks, is an open, non-profit engineering consortium dedicated to advancing standards and metrics for machine learning and AI performance. Inspur is a founding member of MLCommons™, along with more than 50 other leading organizations and companies from across the AI landscape.

In the MLPerf™ Inference V1.1 benchmark test, the Closed Division included two categories – Data Center (16 tasks) and Edge (14 tasks). The Data Center category covered six models: Image Classification (ResNet50), Medical Image Segmentation (3D-UNet), Object Detection (SSD-ResNet34), Speech Recognition (RNN-T), Natural Language Processing (BERT), and Recommendation (DLRM). A high-accuracy mode (99.9%) was set for BERT, DLRM, and 3D-UNet. Every model was evaluated in both the Server and Offline scenarios, with the exception of 3D-UNet, which was evaluated only in the Offline scenario. For the Edge category, the Recommendation (DLRM) model was removed and the Object Detection (SSD-MobileNet) model was added; a high-accuracy mode (99.9%) was set for 3D-UNet. All models were tested in both the Offline and Single Stream scenarios.

In the extremely competitive Closed Division, which included 19 mainstream vendors, all participants were required to use the same models and optimizers, making it straightforward to evaluate and compare the performance of AI computing systems across vendors. A total of 1,130 results were submitted, including 710 in the Data Center category and 420 in the Edge category.


Full-Stack AI Capabilities Ramp up Performance

Inspur achieved excellent results in this MLPerf™ competition with its three AI servers — NF5488A5, NF5688M6, and NE5260M5.

  • NF5488A5 is among the first servers on the market with NVIDIA A100 GPUs. Within a 4U space, it accommodates 8 NVIDIA A100 GPUs interconnected via third-generation NVLink and 2 AMD Milan CPUs, and accomplishes this with a unique blend of liquid and air cooling technologies.
  • NF5688M6 is an AI server designed for large data centers due to its extraordinary scalability. It supports 8 NVIDIA A100 GPUs, 2 Intel Icelake CPUs, and up to 13 PCIe 4.0 add-in cards.
  • NE5260M5 comes with optimized signaling and power systems, and offers widespread compatibility with high-performance CPUs and a wide range of AI accelerator cards. It features a shock-absorbing and noise-reducing design, and has undergone rigorous reliability testing. With a chassis depth of 430 mm, nearly half the depth of traditional servers, it is deployable even in space-constrained edge computing scenarios.

Inspur ranked first in 15 tasks covering all AI models, including Medical Image Segmentation, Natural Language Processing, Image Classification, Speech Recognition, Recommendation, and Object Detection (SSD-ResNet34 and SSD-MobileNet). These results show that, from cloud to edge, Inspur leads the industry in nearly all aspects. Inspur made significant performance gains across tasks in the Data Center category compared to previous MLPerf™ events, despite no changes to its server configuration: its results in Image Classification (ResNet50) and Speech Recognition (RNN-T) improved by 4.75% and 3.83%, respectively, over the V1.0 round just six months earlier.

The outstanding performance of Inspur’s AI servers in the MLPerf™ benchmark can be credited to Inspur’s exceptional system design and full-stack optimization of AI computing systems. Through precise calibration and optimization, CPU and GPU performance, as well as the data communication between CPUs and GPUs, were tuned to peak levels for AI inference. Additionally, by enhancing round-robin scheduling across multiple GPUs based on GPU topology, performance scales nearly linearly with the number of GPUs.
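The topology-aware round-robin idea mentioned above can be illustrated with a toy sketch in Python (this is not Inspur’s actual scheduler; the topology map, node names, and function names are invented for the example). The point is that consecutive requests are spread across different NUMA nodes first, so they travel independent CPU–GPU data paths rather than contending on one:

```python
from itertools import cycle

# Hypothetical topology: GPU id -> NUMA node it hangs off.
# On real systems this mapping can be read from `nvidia-smi topo -m`.
GPU_TOPOLOGY = {0: "node0", 1: "node0", 2: "node0", 3: "node0",
                4: "node1", 5: "node1", 6: "node1", 7: "node1"}

def make_scheduler(topology):
    """Round-robin over NUMA nodes first, then over GPUs within each node,
    so back-to-back requests land on GPUs with independent data paths."""
    nodes = {}
    for gpu, node in sorted(topology.items()):
        nodes.setdefault(node, []).append(gpu)
    node_cycle = cycle(sorted(nodes))          # node0, node1, node0, ...
    gpu_cycles = {n: cycle(g) for n, g in nodes.items()}

    def next_gpu():
        return next(gpu_cycles[next(node_cycle)])
    return next_gpu

next_gpu = make_scheduler(GPU_TOPOLOGY)
order = [next_gpu() for _ in range(8)]
# Requests alternate between nodes: [0, 4, 1, 5, 2, 6, 3, 7]
```

A naive round-robin (0, 1, 2, 3, …) would send four consecutive requests through the same node’s PCIe/NVLink paths; interleaving by node keeps both halves of the system busy.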

Inspur NF5488A5 was the only AI server in this MLPerf™ competition to support eight 500W A100 GPUs with liquid cooling technology, which significantly boosted AI computing performance. Among mainstream high-end AI servers with 8 NVIDIA A100 SXM4 GPUs, Inspur’s servers came out on top in all 15 tasks in the Closed Division under the Data Center category.

Inspur is committed to the R&D and innovation of AI computing, including both resource-based and algorithm platforms. It also works with other leading AI enterprises to promote the industrialization of AI and the development of AI-driven industries through its “Meta-Brain” technology ecosystem.

To see the complete results of MLPerf™ Inference v1.1, please visit: