Newsroom
Tuesday, December 4, 2018

Inspur Announces Aggressive AI Platform Investments for Hyperscale Service Providers & Enterprises

Posted to HPCWire

AGX-5 GPU “Super Server” integrates 10,240 NVIDIA Tensor Cores into a tunable, configurable, 2-petaflop platform and software stack tailor-made to meet the economic and operational demands of hyperscale service providers and enterprises.

LAS VEGAS — December 4, 2018 — GARTNER NA IT IOC CONFERENCE — Two weeks after launching the industry’s most powerful and dense GPU platform—the AGX-5 Super Server—Inspur Systems is at the Gartner NA IT Infrastructure, Operations and Cloud Strategies Conference to showcase the company’s aggressive investments in platform integration for hyperscale operators that must cost-efficiently deliver powerful AI training and inference capabilities to their customers.

According to IDC’s 2017 China AI Infrastructure Market Survey Report, Inspur supplies 57 percent of the AI servers for the Chinese market. The company is the top server provider to hyperscale service providers like Baidu, Tencent and Alibaba. With substantial capabilities in serving hyperscale and AI customers worldwide, Inspur is ideally positioned to help hyperscale service providers address the major challenges of supporting AI workloads for their customers:

  • A focus on cost-efficient data center operations
  • Full-stack platforms that help service providers deliver high-performance AI instances to customers on demand
  • An 8U design that makes it the densest high-performance AI server available
  • Operator-focused features that simplify operations and maintenance while optimizing PUE

Inspur’s full-stack AI solutions feature a four-layer suite of AI capabilities:

  1. Efficient, purpose-built computing platforms
  2. Agile system management
  3. Framework optimization
  4. Application acceleration

The graphic below illustrates the four layers in the AI hardware/software stack that Inspur has developed for hyperscale customers. For more information on the stack, visit https://www.inspursystems.com/ai-deep-learning/.

The AGX-5 Super Server: Built with Hyperscale Providers in Mind

The AGX-5 uses the NVIDIA HGX-2 platform but offers an improvement specifically for hyperscale service providers: an 8U form factor instead of the standard 10U. Operators get 16x NVIDIA Tesla V100 32GB GPUs, dual Intel Xeon Scalable CPUs and the NVSwitch fabric, plus the added rack density of the 8U design, which lets service providers fit another server into each rack if desired.

The AGX-5 is designed to deliver the customization and tuning that service providers demand. Customers can select the number of GPUs needed, and the platform offers a full complement of 12x DDR4 DIMMs per CPU, 24x DIMMs in total. Once the Intel Xeon “Cascade Lake-SP” processors become available, the AGX-5 will support Intel Optane Persistent Memory DIMMs ranging from 128GB to 512GB each. The platform also supports SATA and NVMe storage options, and it uses integrated controllers for 4x 10GbE networking.

Inspur also offers options in the AGX-5 platform such as liquid cooling for hyperscale customers on the leading edge of maximizing TCO efficiency.

“Meeting hyperscale demands—for quantity, performance, efficiency and reliability—is a core strength of Inspur, which has the supply chain to deliver clusters of the AGX-5 platform for customers that have moved beyond the proof-of-concept phase,” said Matthew Thauberger, general manager of sales at Inspur Systems. “Additionally, Inspur is the largest AI training system vendor in China, and that gives us the depth to supply the quantities hyperscale customers need.”

AI Case Studies: Success in Production Applications

  • Internet Service Providers — Inspur and Baidu developed the “ABC” facial recognition solution, optimizing Baidu’s deep learning framework and cloud management technology on Inspur’s leading AI hardware.
  • Facial & Speech Recognition — Inspur and iFlyTek developed the “AI Booster” 16-card computing cluster, which delivered an 18 percent higher acceleration ratio than a standard 16-card cluster, helping iFlyTek speed up applications like speech recognition and automatic translation.
  • Finance — Using the Keras and TensorFlow frameworks, Inspur applied AI to help a nation’s foreign exchange administration forecast market trends over the next 10 minutes. Forecast accuracy was 67 percent, exceeding customer expectations.
  • Public Utilities — Using AI and the TensorFlow framework to examine pictures of electric power equipment for inspection, Inspur helped the utility achieve 98 percent accuracy in identifying items.

The Gartner IT Infrastructure, Operations and Cloud Strategies Conference 2018 is the premier gathering for senior IT and business leaders, offering a depth and breadth of I&O topic coverage unavailable at any other event. Gartner I&O analysts present fresh, research-based content and actionable, unbiased advice – all designed to accelerate decision-making, prioritize initiatives, and link I&O strategies to the goals of the business.

Supporting Commentary

Hear from Inspur partners regarding the company’s leadership in AI for hyperscale in the videos linked below.

  • NVIDIA — Keith Morris, senior director of product management, Accelerated Computing
  • Falcon Computing — Julian Forero, senior manager of business development
  • Xilinx — Freddy Engineer, vice president

About Inspur

Inspur is the leader in intelligent computing and ranks among the top three in worldwide server manufacturing. We provide cutting-edge hardware design and deliver extensive AI product solutions. Inspur provides customers with customized IT infrastructures and AI solutions that are Tier 1 in quality and energy efficiency. Inspur’s products are optimized for applications and workloads built for data center environments. To learn more, visit http://www.inspursystems.com.