The most powerful GPU servers for AI, deep learning, and HPC applications.

As a certified server partner of NVIDIA, the global leader in GPU technology, Inspur applies continuous innovation to develop the fastest, most powerful GPU servers on the market. Inspur GPU-accelerated servers deliver the performance to train deep learning models and derive AI insights in hours, not days.

Now supporting NVIDIA H100 Tensor Core GPUs.

Extreme Acceleration

The latest and fastest GPUs, including the NVIDIA Tesla® V100 Tensor Core

Advanced Architectures

High-speed NVLink™ and NVSwitch™ GPU-to-GPU interconnect

Enhanced Power

Redundant, high-efficiency Titanium- and Platinum-rated power supplies

Flexible Topologies

Configurable for a variety of applications from AI training to edge computing

GPU Accelerated Servers

NVIDIA-Certified OVX Solution

Based on NF5468M6

4U server with 8x A100 GPUs to support large-scale digital twin simulations and virtual modeling within NVIDIA Omniverse Enterprise

Learn More
NF5468A5

4U 8GPU Server with AMD EPYC™

Cloud AI server with 8x NVIDIA A100 PCIe Gen4 GPUs and 2x AMD EPYC™ processors, with memory capacity up to 8TB

Learn More
NF5488A5

4U 8GPU Server with AMD EPYC™

4U server with 8x NVIDIA A100 GPUs over NVLink 3.0 interconnect, 2x AMD EPYC™ Rome processors, and 5 petaFLOPS of AI performance

Learn More
NF5688M6

6U 8GPU NVLink AI server

Supports 8x 500W A100 GPUs with NVSwitch, up to 12 PCIe expansion cards, dual-width N20X, and air cooling

Learn More
NF5468M6

4U 4-16GPU AI Server

Up to 20x PCIe GPUs/accelerators in 4U
Supports the latest NVIDIA A40 and A100
Flexible topologies

Learn More
NF5280M6

2U 2-Socket General Purpose Compute Server

2x 3rd Generation Intel® Xeon® Scalable processors
Up to 13 PCIe expansion slots
7 versatile configurations

Learn More
NF5488M5-D

4U 8GPU Server with Intel Xeon Scalable

4U server with 8x NVIDIA A100 GPUs over NVLink 3.0 interconnect, 2x 2nd-Generation Intel Xeon Scalable processors, and HBM2e memory

Learn More
NF5468M5

4U 8-16GPU Server

4U server with 8x NVIDIA V100 Tensor Core GPUs or 16x Tesla P4 GPUs, for AI inference and edge computing

Learn More
NE5260M5

2U 2-Socket Half-Depth Edge Server

Reliable, compact half-depth edge server for MEC, 5G/IoT, and AR/VR. Configurable as a compute node with front I/O access or as a head node for GPU expansion with 2x NVIDIA® Tesla® V100

Learn More
GX4

2U 4GPU AI Expansion

JBOG (Just a Bunch of GPUs) AI expansion with NVMe and flexible configuration, for compute and storage resource pooling

Learn More
NF5280M5

2U 2-Socket 4 GPU Server

4x NVIDIA® V100, T4 GPU
2x Intel® Xeon® Scalable Gold/Platinum
24x 2.5" drive bays
24x DDR4 DIMM slots

Learn More

Explore Inspur's comprehensive Artificial Intelligence portfolio:

Talk to an Expert