Inspur Blade System Converged Architecture

Switching Module

  • Supports a maximum of 4 switching modules, numbered 1-4 from left to right
  • 10Gb switching modules and FC switching modules can be installed in any of slots 1-4 (for the daughter cards installed on the blades, see the blade configuration instructions)
  • Slots 1 & 3 support converged switch modules, corresponding to onboard Lan0 and Lan1
  • If a converged switching module is installed in slot 2 or 4, its 1Gb switching function is disabled, and 10Gb switching is available only through the 10Gb daughter card (a small sketch of these slot rules follows this list)
  • Stack management can be achieved through 1-2 management modules
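
To make the slot rules concrete, here is a minimal Python sketch (hypothetical helper names, not Inspur tooling) that derives which interfaces a half-width node gets from the set of slots populated with converged modules, assuming only the rules stated above.

# Hypothetical sketch of the slot rules above -- not Inspur tooling.
# Converged modules in slots 1 & 3 serve the onboard 1Gb LOM (Lan0/Lan1);
# in slots 2 & 4, 1Gb switching is disabled and only the 10Gb path
# through the Mezz daughter card is available.

CONVERGED_1GB_SLOTS = {1, 3}        # onboard Lan0 / Lan1
ALL_SLOTS = {1, 2, 3, 4}

def node_connectivity(populated_slots):
    """Interfaces a half-width node gets for a given slot population."""
    slots = set(populated_slots)
    if not slots <= ALL_SLOTS:
        raise ValueError("switch slots are numbered 1-4")
    return {
        "onboard_1gb": sorted(slots & CONVERGED_1GB_SLOTS),
        "mezz_10gb": sorted(slots),   # every converged module offers 10Gb
    }

print(node_connectivity([1, 3]))    # dual 1Gb + dual 10Gb per node
print(node_connectivity([2, 4]))    # 10Gb only; 1Gb switching disabled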
Converged Architecture

Uplink ports:
  • 8x 1Gb RJ45
  • 8x 10Gb SFP+
  • 2x 40Gb QSFP (modular design, supports stacking; each port can break out into 4x 10Gb)
  • 1x RJ45 for management and maintenance
Downlink ports:
  • 16x 1Gb links to the 1Gb LOM on each blade
  • 32x 10Gb links to the 10Gb Mezz card on each blade
  • I2C, GbE, and GPIO links to the redundant management modules
Switching functions:
  • Standard Layer 2 and Layer 3 switching (IPv4 & IPv6)
  • Data center features and functions
  • FCoE on the 10Gb ports
  • Stacking on the 40GE ports
Management functions:
  • SNMP v1/v2/v3, Telnet, Console, MGMT, WEB, RMON, SSHv1/v2, FTP/TFTP file upload and download, NTP, Syslog, SPAN/RSPAN, and IPFIX traffic analysis
  • Direct network configuration and firmware upgrade through the system management module
  • Switch monitoring interface (temperature, voltage, logs, and alarms) that reports alarm information to the management module
  • Startup, shutdown, and reboot of switching modules by the management module
  • Configuration of switching modules by the management module (command line and management interface)
Cooling fans:
  • 2 hot-swappable cooling fans
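
Since the module supports standard SNMP, basic health data can be polled from a management station. The following is a minimal sketch using the pysnmp library; the management address 10.0.0.1 and the read community "public" are placeholders, not documented values. It reads the standard MIB-2 sysUpTime object.

# Minimal SNMP poll of the switching module -- a sketch, not Inspur tooling.
# Assumptions: switch management IP 10.0.0.1 (hypothetical), SNMP v2c with
# read community "public". sysUpTime is the standard MIB-2 object.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),                 # SNMP v2c
    UdpTransportTarget(('10.0.0.1', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0')),    # sysUpTime.0
))

if errorIndication:
    print(errorIndication)
elif errorStatus:
    print(errorStatus.prettyPrint())
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))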
[Figure coverged_architecture_s2_img3: converged switching module port layout]
No.   Port Description
1     RJ45 ports 1-8
2     SFP+ ports 1-4
3     SFP+ ports 5-8
4     Module supporting QSFP ports 1-2
5     RJ45 ports 1-8
6     RJ45 ports 1-8
7     RJ45 ports 1-8
8     RJ45 ports 1-8
It is suggested to configure converged switching modules in switch slots 1 and 3 on the rear of the chassis, which provides each half-width node with an onboard 1Gb interface and a 10Gb network interface. Alternatively, configure them in switch slots 2 and 4, which provides each half-width node with a 10Gb network interface only. Note: two modules must be configured to support both the onboard dual 1Gb interfaces and the dual 10Gb interfaces of the Mezz1 daughter card on each half-width node.
Other switching module options:
  • 10Gb switching module (Star-Net)
  • 16G FC converged switching module

Management Module

  • Stack management can be achieved through 1-2 management modules
  • Both management modules are active; there is no master/backup relationship
  • If either management module fails, management switches over seamlessly
[Figure: coverged_architecture_s3_img1]
[Figure: coverged_architecture_s3_img2]

Power Supply Configuration

[Figure: coverged_architecture_s4_img]
  • For NX8880M4, budget 650 W of power consumption per blade
  • For NX5460M4, budget 500 W of power consumption per blade
  • Supports a maximum of 6 power modules, each with 3000 W output power, in N+N or N+M redundancy
  • Example: 1+1 redundancy supports 4 NX8880M4 blades or 6 NX5460M4 blades
  • 1 PDU feeds 1-3 power modules
  • 2 PDUs feed 4-6 power modules (a sizing sketch follows this list)
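
As a rough check of the figures above, here is a minimal sizing sketch (hypothetical helper, not Inspur tooling) that budgets 650 W per NX8880M4 and 500 W per NX5460M4 against 3000 W power modules and applies the PDU rule.

# Hypothetical power-sizing sketch based on the figures above.
import math

PSU_WATTS = 3000
BLADE_WATTS = {"NX8880M4": 650, "NX5460M4": 500}

def power_modules_needed(blades, spares=1):
    """Active modules to carry the load, plus N+M spare modules."""
    load = sum(BLADE_WATTS[b] for b in blades)
    total = math.ceil(load / PSU_WATTS) + spares
    if total > 6:
        raise ValueError("chassis supports at most 6 power modules")
    return total

def pdus_needed(power_modules):
    """1 PDU feeds up to 3 power modules."""
    return math.ceil(power_modules / 3)

# The 1+1 example from the list above:
n = power_modules_needed(["NX8880M4"] * 4)    # 2600 W -> 1 active + 1 spare
print(n, pdus_needed(n))                      # 2 modules, 1 PDU
print(power_modules_needed(["NX5460M4"] * 6)) # 3000 W -> 2 modules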

Cooling Configuration

  • Each switch module is cooled by its own fans, in a redundant, hot-swappable design
  • Each power module is self-cooled and hot-swappable
  • Each IOBox module is equipped with its own cooling fans
  • The system cooling modules mainly cool the front computing (or expansion) nodes; each module uses a single-module counter-rotating redundant design. Cooling modules are configured according to the number of half-width nodes as follows (a lookup sketch follows the table):
Number of half-width blade nodes    Number of cooling modules (same side)
0-1                                 1
2-3                                 2
4-5                                 3
6-7                                 4
8                                   5
9                                   6
10-11                               7
12-13                               8
14-15                               9
16                                  10
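
For planning, the table can be encoded directly as a lookup. Below is a short sketch (hypothetical helper, not Inspur tooling) that returns the per-side cooling module count for a given number of half-width nodes.

# Direct encoding of the cooling table above (hypothetical helper).
COOLING_TABLE = {
    range(0, 2): 1,   range(2, 4): 2,   range(4, 6): 3,   range(6, 8): 4,
    range(8, 9): 5,   range(9, 10): 6,  range(10, 12): 7,
    range(12, 14): 8, range(14, 16): 9, range(16, 17): 10,
}

def cooling_modules(half_width_nodes):
    for nodes, modules in COOLING_TABLE.items():
        if half_width_nodes in nodes:
            return modules
    raise ValueError("half-width node count must be 0-16")

print(cooling_modules(8))    # -> 5
print(cooling_modules(16))   # -> 10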
[Figure: coverged_architecture_s5_img]

IOBox Configuration

  • Each of the 8 IOBox modules provides one PCIe 3.0 x8 standard half-height, half-length expansion slot to each of the two half-width computing nodes stacked above and below it on the same side; the corresponding relations are shown by the red arrow on the right
  • Through an IOBox module, a user can add standard expansion cards (such as network cards, HBA cards, or IB cards) to computing blades. This requires no dedicated pass-thru module and avoids the mutual exclusion between pass-thru and switching modules, making it the most flexible pass-thru/switching solution currently available (see the fit-check sketch after this list)
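
The slot each IOBox provides is fixed at PCIe 3.0 x8, half-height half-length, so a quick compatibility check is straightforward. The sketch below is a hypothetical helper (not Inspur tooling); the card dictionary shape is an assumption for illustration.

# Sketch (hypothetical helper): check whether an expansion card fits the
# slot an IOBox provides to each half-width node, per the bullets above:
# one PCIe 3.0 x8, half-height half-length (HHHL) slot.
SLOT = {"gen": 3, "lanes": 8, "form_factor": "HHHL"}

def card_fits(card):
    """card: dict with 'lanes' and 'form_factor' keys (assumed shape)."""
    return card["lanes"] <= SLOT["lanes"] and card["form_factor"] == "HHHL"

# A typical x8 HHHL HBA fits; a full-height x16 card would not:
print(card_fits({"lanes": 8, "form_factor": "HHHL"}))    # True
print(card_fits({"lanes": 16, "form_factor": "FHFL"}))   # False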
[Figure: Blade System Converged Architecture]