
Inspur OCP AI Node

Key Features:

Ultra-high GPU Density

  • 4OU form factor supporting 16 GPUs for training or 16 FPGAs for inference

Energy Conservation

  • Shared power supplies and fans
  • Front I/O access for hyper-scale data centers

Optimized Architecture for AI

  • Accelerates the training process
  • Improved efficiency when scaling out computation


Overview:

Designed for hyper-scale data center AI training applications

This GPU resource pooling solution is PCIe switch enabled and supports up to 16 GPUs in a 4OU form factor, delivering the highest density and performance available for AI training and inference scenarios. It provides flexible GPU topologies to meet different customer demands.
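The GPUs pooled behind the PCIe switch appear to the host OS as ordinary PCIe devices, so the populated topology can be checked from the node itself. The following is a minimal sketch, assuming a Linux host with the standard lspci utility installed; the helper name list_pcie_gpus is illustrative and not part of any Inspur tooling.

import subprocess

def list_pcie_gpus():
    """Return lspci lines for devices that identify as GPUs or accelerators.

    Illustrative helper only: it filters `lspci -nn` output for the
    3D/VGA controller classes under which discrete GPUs typically appear.
    """
    result = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines()
            if "3D controller" in line or "VGA compatible controller" in line]

if __name__ == "__main__":
    gpus = list_pcie_gpus()
    print(f"Detected {len(gpus)} GPU/accelerator devices on the PCIe fabric:")
    for dev in gpus:
        print("  " + dev)

On a fully populated 4OU node, the reported count should match the 16 installed GPU cards.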

Specifications:


Model: Inspur OCP AI Node
CPU: Intel Atom C3000 64-bit CPU (2 cores)
Memory: Single channel, 1 x DDR3 or DDR3L SODIMM
Storage: 1 x SATA DOM, 1 x SATA port
Ethernet: Integrated GbE Ethernet port
I/O: 16 standard PCIe x16 slots supporting 16 GPU cards; 2 x USB 2.0; 1 x VGA; 1 x COM
PSU: Bus bar
Chassis: 536 mm (W) x 181.9 mm (H) x 800.6 mm (D)