Inspur leads 5-billion-dollar (and growing) global AI server market

Recently, IDC released its 2020 Global Semiannual Artificial Intelligence Tracker, which provides data insights into the global artificial intelligence server market for the first half of 2020. The report shows that the market reached USD 5.59 billion in that period.

The market breakdown:

  • Inspur ranked first worldwide with a 16.4% market share, making it the number one player in global AI servers;
  • Dell ranked second with a 14.7% market share;
  • HPE ranked third with a 10.7% market share;
  • Huawei (6%) and Lenovo (5.7%) ranked fourth and fifth.

Artificial intelligence servers are typically equipped with GPUs, FPGAs, ASICs, and other acceleration chips. Pairing CPUs with accelerators meets the need for high-throughput interconnection and delivers the computing power required by artificial intelligence application scenarios such as natural language processing, computer vision, and voice interaction; this combination has become a driving force for the development of artificial intelligence. According to IDC statistics, artificial intelligence servers account for more than 84.2% of the global artificial intelligence infrastructure market and are the main component of AI computing power infrastructure. AI servers are expected to maintain rapid growth, with the global market projected to reach USD 25.1 billion in 2024.

Artificial intelligence is one of the fastest-growing emerging technology applications in the global IT industry, with huge development potential over the next few years. Computing power underpins both data and algorithms and has become a key element in the development of AI. The number and complexity of advanced deep learning models are currently growing exponentially, further driving up demand for high compute performance.

In February 2020, Microsoft released Turing-NLG, a natural language generation model with 17 billion parameters; a single training run takes more than a day on 125 POPS of AI computing power. Shortly afterwards, OpenAI introduced the GPT-3 model with 175 billion parameters, whose training requires roughly 3,640 petaFLOP/s-days of compute. At present, the computing performance required by artificial intelligence doubles roughly every two months, so the supply of new hardware infrastructure to support AI directly impacts the innovation and development of AI applications.
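To put the petaFLOP/s-day figure in perspective, here is a minimal back-of-the-envelope sketch in Python. The 3,640 petaFLOP/s-day budget is the figure cited above for GPT-3; the sustained cluster throughput used at the end is a purely hypothetical assumption for illustration, not a specification of any particular system.

```python
# Back-of-the-envelope estimate: convert a petaFLOP/s-day compute budget
# into total floating-point operations and wall-clock training time.
# The 3,640 PF/s-day figure is the one cited in the article for GPT-3;
# the cluster throughput below is a hypothetical assumption.

PFLOP = 1e15              # floating-point operations in one petaFLOP
SECONDS_PER_DAY = 86_400

def total_flops(pf_days: float) -> float:
    """Total operations implied by a petaFLOP/s-day budget."""
    return pf_days * PFLOP * SECONDS_PER_DAY

def training_days(pf_days: float, sustained_pflops: float) -> float:
    """Days needed on a cluster sustaining `sustained_pflops` PFLOP/s."""
    return pf_days / sustained_pflops

gpt3_budget = 3_640  # petaFLOP/s-days, as cited for GPT-3
print(f"Total operations: {total_flops(gpt3_budget):.2e} FLOPs")

# Hypothetical cluster sustaining 100 PFLOP/s of effective AI throughput:
print(f"Estimated training time: {training_days(gpt3_budget, 100):.1f} days")
```

Under these assumptions the budget works out to roughly 3.1e23 operations, or about 36 days of continuous training on a 100 PFLOP/s cluster, which illustrates why compute supply has become the bottleneck for large-model innovation.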

Facing this exponential growth in demand for AI hardware, Inspur has committed to the production, aggregation, and deployment of the compute power that sustains the AI industry: a comprehensive portfolio of high-performance computing products supporting a variety of AI accelerators (GPU, FPGA, ASIC, and others) and covering full-stack AI scenarios such as training, inference, and edge. Inspur also launched the first AI open compute acceleration system compliant with the OAM standard, delivering performance through a diversified and open AI server architecture. It is through the continuous development of agile and efficient artificial intelligence infrastructure that Inspur stays ahead of the curve in serving the digital industry.
