Inspur Information AI Servers to Fully Support the Newly Announced NVIDIA H100 Tensor Core GPU
AI speed and performance continue to accelerate with the introduction of faster NVIDIA GPUs and more efficient resource scheduling
SAN JOSE, Calif.—March 22, 2022 (BUSINESS WIRE)— Inspur Information, a leading IT infrastructure provider, announced its AI servers fully support the new NVIDIA H100 Tensor Core GPU in four- and eight-socket configurations. The servers will offer unprecedented computing performance, flexibility in resource scheduling and mature ecosystem support for various AI application scenarios.
With NVIDIA H100, Inspur AI servers can deliver greater computing performance, higher GPU interconnection bandwidth and an innovative computing architecture, enabling AI training and inference for larger-scale, more complex models. Inspur's AIStation will further amplify these benefits by providing a computing resource management platform that makes the computing power of GPU clusters easier and more efficient to utilize.
“Inspur has had a long-term partnership with NVIDIA,” said Liu Jun, Vice President of Inspur Information and General Manager of AI and HPC. “Through innovative optimization design, the NVIDIA H100-based Inspur AI servers will help customers efficiently manage various challenging AI scenarios and will promote the development of AI industrialization and AI transformation.”
“The NVIDIA H100 is the world’s most advanced GPU, delivering a giant performance leap for workloads at every scale — from small, partitioned GPU instances to trillion-parameter AI such as large language models and recommender systems,” said Paresh Kharya, senior director of product management for data center computing at NVIDIA. “Inspur AI servers powered by NVIDIA H100 will help enterprises accelerate time to market, lower costs and energy consumption, and meet the computing requirements for next-generation AI and HPC.”