Designed for hyper-scale data center AI training applications

Key Features:

  • Ultra-high GPU Density
  • Energy Conservation
  • Optimized Architecture for AI


This PCIe-switch-enabled GPU resource pooling solution supports up to 16 GPUs in a 4OU form factor, delivering the highest density and performance available for AI training and inference scenarios. It provides flexible GPU topologies to meet different customer demands.

Ultra-high GPU Density

  • 4OU chassis with 16x GPUs for training or 16x FPGAs for inference

Energy Conservation

  • Shared power & fans
  • Front I/O access for hyperscale data centers

Optimized Architecture for AI

  • Accelerates the training process
  • Improved efficiency when scaling computation


Model     Inspur OCP AI Node
CPU       Intel Atom C3000 64-bit CPU (2 cores)
Memory    Single channel, 1x DDR3 or DDR3L SODIMM
Storage   1x SATA DOM, 1x SATA port
Ethernet  1x integrated GbE port
PCIe      16x standard PCIe x16 slots supporting 16 GPU cards
I/O       2x USB 2.0, 1x VGA, 1x COM
PSU       Bus bar
Chassis   536 mm (W) x 181.9 mm (H) x 800.6 mm (D)
