Tuesday, November 9, 2021

Inspur Information Announces Full Support for the NVIDIA AI Platform for Inference

Exceptional AI server performance delivered with NVIDIA A100, A30, and A2 Tensor Core GPUs.

SAN JOSE, Calif., November 9, 2021 – Inspur Information, a leading IT infrastructure solution provider, announced at GTC Fall 2021 that its entire portfolio of AI and edge inference servers will support the NVIDIA A100 and A30 GPUs, as well as the newly announced A2 Tensor Core GPU.

As user demand for AI inference continues to grow and diversify, Inspur Information has launched a comprehensive inference product line of NVIDIA-Certified Systems built for applications from the data center to the edge, delivering high performance across a wide range of scenarios. Inspur's NVIDIA-Certified Systems are ideal for running the NVIDIA AI Enterprise software suite to develop, deploy, and manage AI workloads.

In the data center, the NF5468M6 is an intelligent, elastic-architecture AI server: a 4U system with 8x NVIDIA A100 or A30 GPUs and 2x 3rd Gen Intel Xeon Scalable processors. Its unique ability to automatically switch among three topologies (balanced, common, and cascade) lets it flexibly meet the needs of diverse AI applications, including deep learning training, language processing, AI inference, and massive video streaming, providing exceptional flexibility for AI workloads.

The NF5468A5 is an integrated, efficient AI server: a 4U system with 8x NVIDIA A100 or A30 GPUs and 2x AMD Rome/Milan CPUs. Its high-performance architecture features a non-blocking CPU-to-GPU design that delivers higher communication efficiency and much lower P2P communication latency. It is also optimized for conversational AI, intelligent search, and high-frequency trading scenarios.

The NF5280M6 is a reliable and flexible AI server: a 2U system with either 4x NVIDIA A100 or A30 GPUs or 8x NVIDIA A2 GPUs, plus 2x 3rd Gen Intel Xeon Scalable processors. The NF5280M6 operates stably across a variety of AI application scenarios, from small- and medium-scale AI training to high-density edge inference.

In edge computing, the NE5260M5 is an open-computing-standard edge server supporting NVIDIA A100, A30, and A2 Tensor Core GPUs alongside two Intel CPUs. With a 430mm-deep chassis, it can fit into unusual spaces and withstand harsh operating environments, including high temperatures and humidity. In the recent MLPerf Inference v1.1 results, the NE5260M5 ranked first in four tasks in the Edge category of the Closed Division. The NE5260M5 has now been deployed in a variety of edge AI inference scenarios, such as smart campuses, smart shopping malls, smart communities, and smart substations, providing diverse computing power for different AI edge applications.