Inspur’s OCP Cloud AI-optimized rack solution builds on Intel® Xeon® Scalable processors with built-in AI acceleration from integrated Intel Deep Learning Boost. Dual ON5293M5 OCP Accepted compute nodes connect 8x OAMs and support large-cluster training. With flexible GPU pooling, both the OAM and UBB modules are designed to universally support different types of AI accelerators and run various AI applications such as AI cloud, deep learning training, and image recognition.
Inspur also offers an open rack solution with 8x ON5388M5 OCP Accepted JBOGs and 2 to 4 OCP Accepted ON5293M5 compute nodes, which accommodates various AI applications.
OCP CLOUD AI RACK ARCHITECTURE
To meet customer demands for data center management, maintenance optimization, and system design, Inspur launched the OpenRMC project, which helps small and medium-sized data center customers greatly reduce IT operation and maintenance costs, simplify software-based rack management, and improve efficiency.
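OpenRMC defines rack management controller interfaces that conform to the DMTF Redfish API. As a minimal sketch of what software-based rack management looks like at the API level, the snippet below walks a Redfish-style `Chassis` resource to enumerate the nodes a rack contains. The JSON payload is illustrative only, not captured from real Inspur hardware, and the helper name is our own.

```python
import json

# Illustrative Redfish-style Chassis resource for a rack, as an OpenRMC-
# conformant rack management controller might return it from
# GET /redfish/v1/Chassis/Rack1. This payload is a hand-written example.
SAMPLE_CHASSIS = json.loads("""
{
  "@odata.id": "/redfish/v1/Chassis/Rack1",
  "Id": "Rack1",
  "ChassisType": "Rack",
  "PowerState": "On",
  "Links": {
    "Contains": [
      {"@odata.id": "/redfish/v1/Chassis/Rack1/Node1"},
      {"@odata.id": "/redfish/v1/Chassis/Rack1/Node2"}
    ]
  }
}
""")

def contained_chassis(chassis: dict) -> list:
    """Return the @odata.id of each chassis contained in this rack.

    In Redfish, a rack-level Chassis lists its members under
    Links.Contains; each entry is a reference to another Chassis
    resource (e.g. a compute node or JBOG sled).
    """
    links = chassis.get("Links", {}).get("Contains", [])
    return [member["@odata.id"] for member in links]

print(contained_chassis(SAMPLE_CHASSIS))
```

In a live deployment the same traversal would start from `GET /redfish/v1/Chassis` over HTTPS and follow each `@odata.id` reference, which is how tooling discovers compute nodes and GPU boxes in the rack without vendor-specific code.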
OCP Hardware Platform
Use Case 1:
Open Accelerator Infrastructure (OAI) and UBB-compliant rack for AI
Use Case 2:
Open Rack with 8x GPU Box and 4 Compute Nodes
| Component | Product |
| --- | --- |
| OAM | Inspur XM1 21″ OCP Accelerator Module |
| Server | Inspur ON5263M5 Power Efficiency Compute Node (“San Jose”) |
| Server | Inspur NF8380M5 Cloud-Optimized Server (“Olympus”) |
| JBOG | Inspur ON5388M5 NVLink GPU Expansion Box (“Mission Bay”) |