The School of Aeronautics and Astronautics is one of the strongest teaching and research schools of Xi’an Jiaotong University. Its engineering mechanics, aeronautics, and astronautics disciplines have hosted or participated in more than 100 national scientific research projects, including the National 863 Program, the National 973 Program, and key projects of the National Natural Science Foundation, and have received several national awards for scientific and technological advancement and invention. Boosted by the jumbo jet program, the “Jumbo Jet Manufacturing Technology and Equipment Workshop” was held at Xi’an Jiaotong University, during which it was determined that the final assembly of China’s jumbo jet would be undertaken by XAC. With the launch of the jumbo jet program, China is ready to produce globally competitive jumbo jets through its people’s diligence and wisdom.
In the face of such great opportunities, the School of Aeronautics and Astronautics, the School of Materials, and the School of Energy and Power will undertake numerous scientific research duties, further enhancing the University’s research rigor and reputation. To accelerate development, the School decided to establish high-level basic platforms to support its various disciplines. As the main computing tool, the HPC platform thus became one of the key development projects. The platform is to be developed according to the principles of high performance, ease of use, high scalability, and security.
Analysis of application features
Based on in-depth surveys and years of experience in the aeronautic and astronautic sector, Inspur’s engineers analyzed the customer’s applications against their requirements and considered the customer’s interests from multiple perspectives.
Multi-level computing needs and high scalability: airplane design and the School’s other applications involve computation at several scales. Medium-end computing nodes are needed for parallel computation to reduce program run time, while high-end computing nodes with very high computing capacity are needed for image pre-processing and post-processing. Some tasks require a single machine to process several tens of millions of grid cells. The computing nodes must be scalable enough that the cluster can grow without adding network equipment.
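Why parallel computation across many medium-end nodes shortens run times can be illustrated with Amdahl’s law. This is a minimal sketch: the 5% serial fraction below is a purely hypothetical value, as the source gives no measured figures.

```python
# Amdahl's law: speedup of a job whose parallel portion is spread
# across n_workers nodes, limited by its serial fraction.
def amdahl_speedup(n_workers: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Illustrative only: with 5% serial work, 62 nodes give about 15.3x,
# well short of the ideal 62x speedup.
print(round(amdahl_speedup(62, 0.05), 1))
```

This is why adding nodes helps most for workloads whose serial fraction is small, and why the School also needs individually powerful high-end and fat nodes for the less parallelizable stages.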
Centralized data storage and security needs: the storage system must store the server nodes’ data centrally, with capacity that can be expanded as required. As a key national laboratory, much of the School’s test data is confidential, so the cluster must be physically isolated to ensure system security.
Ease of use and centralized cluster management: the whole system must be remotely operable and manageable. For convenience, installation, deployment, and centralized monitoring, management, and alerting of the entire cluster must be handled through dedicated management nodes, and dedicated job-scheduling software must handle task distribution, dispatch, and management.
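The source does not name the scheduling software, but its task-distribution role can be sketched with a toy round-robin dispatcher. The node names and job list below are hypothetical, and real cluster schedulers also weigh load, memory, and queue priority.

```python
from itertools import cycle

# Toy round-robin dispatch: assign queued jobs to compute nodes in turn.
def dispatch(jobs, nodes):
    assignment = {}
    node_cycle = cycle(nodes)        # endlessly iterate over the node list
    for job in jobs:
        assignment[job] = next(node_cycle)
    return assignment

nodes = [f"node{i:02d}" for i in range(1, 4)]            # hypothetical names
jobs = ["cfd_run", "mesh_prep", "post_vis", "fea_case"]  # hypothetical jobs
print(dispatch(jobs, nodes))
# the fourth job wraps back to node01 once each node has one job
```

A production scheduler on such a cluster would additionally enforce per-queue policies, for example routing small parallel jobs to the medium-end nodes and large single-machine grids to the fat node.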
The main components of the system include 62 medium-end computing nodes, 8 high-end computing nodes, 1 fat node, one 10Gb core network, 1 management node, and 1 storage array. The system comprises 600 CPU cores and 1 GPU computing unit, providing the user with affordable high-performance computing.
The 62 medium-end computing nodes provide parallel computing to shrink program run times, while the 8 high-end computing nodes offer robust image pre-processing and post-processing capability, so the system’s overall performance matches the School’s multi-level computing needs well. The fat node, equipped with four of the latest processors, handles jobs that process several tens of millions of grid cells on a single machine. The GPU computing unit is built on NVIDIA’s Tesla S1070 (1U), with a peak processing capacity of roughly 4 TFLOPS.
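The quoted figure of “4 trillion” operations per second for the Tesla S1070 can be sanity-checked from NVIDIA’s published specifications. The arithmetic below is illustrative and assumes the 1.44 GHz shader-clock variant of the S1070.

```python
# Theoretical single-precision peak of a Tesla S1070 (1U, four GT200 GPUs).
gpus = 4                  # the S1070 packs four GPUs in one 1U chassis
cores_per_gpu = 240       # streaming processors per GT200 GPU
shader_clock_hz = 1.44e9  # 1.44 GHz shader clock (assumed variant)
flops_per_cycle = 3       # dual-issue MAD + MUL per streaming processor

peak_flops = gpus * cores_per_gpu * shader_clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # prints "4.15 TFLOPS"
```

The result, about 4.15 TFLOPS single-precision, is consistent with the “4 trillion” peak cited above.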
The storage system is designed on a 4Gb Fibre Channel architecture with one FC-SATA disk array and 12TB of capacity for centralized storage of server node data; the capacity can be expanded when necessary. The cluster is built on an isolated network, ensuring physical isolation of the cluster to guarantee system security.
Establishing a stable, reliable, and easy-to-operate high-performance platform is the vision of Teacher Li of the School of Aeronautics and Astronautics of Xi’an Jiaotong University. Inspur weighed the InfiniBand (IB) and 10Gb solutions against each other; practical tests showed the 10Gb solution to be the better fit, a conclusion affirmed by Teacher Li and several other experts. To ensure ease of operation and maintenance of the computing platform, Inspur recommended the latest Intel platform processors and a GPU computing tool. The combination of computing nodes and dual systems was lauded by the teachers, and Inspur’s professionalism and dedication impressed Teacher Li. According to Li, Inspur was the most meticulous and considerate of all the project bidders with regard to the customer’s well-being.