The latest and fastest GPUs, including the NVIDIA Tesla® V100 Tensor Core
As a certified server partner of NVIDIA, the global leader in GPU technology, Inspur applies continuous innovation to develop the fastest, most powerful GPU servers on the market. Inspur GPU accelerated servers deliver features that let you achieve your goals, like training deep learning models and deriving AI insights, in hours, not days.
Now supporting NVIDIA H100 Tensor Core GPUs.
GPU Accelerated Servers
Based on NF5468M6
4U 8x A100 GPUs to support large-scale digital twin simulations and virtual modeling within NVIDIA Omniverse Enterprise

4U 8GPU Server with AMD EPYC™
Cloud AI server with 8x NVIDIA A100 GPUs with PCIe Gen4 and 2x AMD EPYC™, memory capacity up to 8 TB

4U 8GPU Server with AMD EPYC™
4U 8x NVIDIA A100 GPU over NVLink 3.0 interconnect, 2x AMD EPYC™ Rome processors, 5 petaFLOPS AI performance
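The 5 petaFLOPS figure lines up with NVIDIA's published per-GPU peak; a quick back-of-the-envelope check, assuming the A100's FP16 Tensor Core peak of 624 TFLOPS with structured sparsity (312 TFLOPS dense):

```python
# Sanity check of the "5 petaFLOPS AI performance" claim for an 8x A100 system.
# Assumes NVIDIA's published A100 peak: 624 TFLOPS FP16 Tensor Core with
# 2:4 structured sparsity (312 TFLOPS dense).
TFLOPS_PER_A100_SPARSE = 624
NUM_GPUS = 8

total_pflops = NUM_GPUS * TFLOPS_PER_A100_SPARSE / 1000  # TFLOPS -> PFLOPS
print(f"{total_pflops:.3f} PFLOPS")  # ~5 PFLOPS aggregate peak
```

With sparsity, 8 GPUs reach roughly 5 PFLOPS of aggregate peak AI throughput, which is the same basis NVIDIA uses for the DGX A100 figure.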

6U 8GPU NVLink AI server
Supports 8x 500W A100 GPUs with NVSwitch, up to 12 PCIe expansion cards, dual-width N20X, and air cooling

4U 4-16GPU AI Server
Up to 20x PCIe GPUs/accelerators in 4U
Supports the latest NVIDIA A40 and A100
Flexible topologies

2U 2-Socket General Purpose Compute Server
2x 3rd Generation Intel® Xeon® Scalable processors
Up to 13 PCIe expansion slots
7 versatile configurations

4U 8GPU Server with Intel Xeon Scalable
4U 8x NVIDIA A100 GPU over NVLink 3.0 interconnect, 2x 2nd-Generation Intel Xeon Scalable processors, HBM2e memory

4U 8-16GPU Server
4U 8x NVIDIA V100 Tensor Core GPUs or 16x Tesla P4 GPUs, for AI Inference and Edge Computing

2U 2-Socket Half-Depth Edge Server
Reliable, compact half-depth edge server for MEC, 5G/IoT, and AR/VR. Compute node with front I/O access, or head node for GPU expansion with 2x NVIDIA® Tesla® V100

2U 4GPU AI Expansion
JBOG AI expansion with NVMe and flexible configuration, for compute and storage resource pooling

2U 2-Socket 4 GPU Server
4x NVIDIA® V100 or T4 GPUs
2x Intel® Xeon® Scalable Gold/Platinum
24x 2.5” drive bays
24x DDR4 DIMM slots
