OCP Global Summit
Open innovation powered by AI.
Join Inspur at the OCP Global Summit, booth #C5, from March 14 to 15 to see dynamic, accelerated solutions for the open data center, based on innovations in AI, cloud optimization and rack management.
Product Showcase
This year we’re exploring new avenues to bring the benefits of open infrastructure to broader applications:
- Dynamic AI solutions for the open data center
- High-density cloud optimized platforms
- Rack management and open RMC software for OCP
19″ Standard Server Rack Solutions
4-Socket Server for Hyperscale Cloud Workloads

Olympus 4-Socket Server: NF8380M5
Based on Microsoft’s Project Olympus open standard, providing a modular and flexible open compute platform for cloud workloads.
High-Density Cloud-Optimized Platform

Flexible High-Density 2U 4-Socket Server: NF8260M5
High-density, flexible server with an ultra-dense memory design, scalable 4P performance, up to eight PCIe expansion cards and two 2000W power supplies. Delivers 18TB of memory in a single platform and up to 360TB in a 42U rack. Fully optimized for Intel’s latest Optane DC persistent memory.
Advanced New Architecture for AI

This configuration offers increased computational efficiency through a 4-socket design and delivers 0.5TB of aggregated memory at 16TB/s.

4-Socket Server for Hyperscale Cloud Workloads: NF8380M5
Based on Microsoft’s Project Olympus, it delivers M.2 riser flash array expandability for GPU, FPGA and NVMe, enabling flexible configurations for multi-purpose applications.

8U 16-GPU Supercomputing Server: AGX-5
Equipped with 16 NVIDIA Tesla® V100 Tensor Core GPUs, it delivers 2 petaflops of performance with NVSwitch™ fabric interconnectivity.
21″ OCP Rack Solutions
Composable Infrastructure and Open RMC

Inspur’s GPU Composable Infrastructure rack, integrated with Liqid Composable, delivers unparalleled rack-scale agility and enables dynamic configuration, allocation and deployment of GPU elements.
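Purely as an illustration of what open, Redfish-style rack management can look like, the sketch below queries a rack manager’s chassis inventory over REST. The host, credentials and resource paths are placeholders for this example, not the actual interface of Inspur’s Open RMC software.

```python
# Hypothetical sketch: enumerating chassis through a Redfish-style rack manager.
# The host, credentials and paths are placeholders, not Inspur's Open RMC API.
import requests

RMC_HOST = "https://rmc.example.com"   # placeholder rack manager address
AUTH = ("admin", "password")           # placeholder credentials

def list_chassis():
    """Return the members of the Redfish chassis collection."""
    resp = requests.get(f"{RMC_HOST}/redfish/v1/Chassis",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return [member["@odata.id"] for member in resp.json().get("Members", [])]

if __name__ == "__main__":
    for chassis_path in list_chassis():
        print(chassis_path)
```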
High Expansion OCP Compute Node: ON5293M5
Provides high I/O expansion supporting one PCIe 3.0 x16 or two PCIe 3.0 x8 slots, ideal for data acceleration.

High-Density NVLink GPU Expansion: ON5388M5
This high-density JBOG features an NVLink-enabled architecture that provides flexible topologies for different applications.
AI Inferencing

I/O Expansion OCP Compute Node ON5293M5 + Intel® Programmable Acceleration Card (Intel® PAC)
The compute node’s high I/O expansion, paired with the Intel® PAC and the OpenVINO™ toolkit, enables massive data acceleration and rapid AI scaling.
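For context, OpenVINO inference follows a simple load–compile–infer pattern regardless of the target hardware. The sketch below is a minimal, generic example assuming a recent OpenVINO release; the model path and the "CPU" device string are placeholders (the device name for an Intel PAC/FPGA deployment depends on the installed plugins), and it is not specific to this platform.

```python
# Minimal OpenVINO inference sketch (generic illustration).
# "model.xml" and "CPU" are placeholders; the device string for an
# Intel PAC / FPGA deployment depends on the installed OpenVINO plugins.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")           # IR model from the Model Optimizer
compiled = core.compile_model(model, "CPU")    # swap "CPU" for the accelerator device

input_shape = list(compiled.input(0).shape)    # e.g. [1, 3, 224, 224]
dummy_input = np.zeros(input_shape, dtype=np.float32)

result = compiled([dummy_input])[compiled.output(0)]
print("Output shape:", result.shape)
```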
Inspur OCP Contributions

As a longtime collaborator in the OCP community, Inspur has contributed a number of hardware innovations in compute, storage and GPU platforms based on the newest open standards.

OCP Compute Nodes
- High-Density: ON5163M5 »
- Energy-Efficiency: ON5263M5 »
- I/O Balance: ON5273M5 »

OCP AI Node: ON5488M5
Designed for hyperscale data center AI training applications, with ultra-high GPU density and an architecture optimized for AI.

OCP Storage JBOD: ON5266M5
34-drive 2OU JBOD with NVMe SSD/HDD support and a flexible architecture.
See our product showcase page for full details.
On-Site Activities
Inspur will conduct technical speaking sessions and workshops throughout the 2-day summit, introducing our robust new lineup of open standards products and technologies.