What makes HPE servers with the latest Intel Xeon platform ideal for AI workloads?
by Frederic Van Haren, June 15, 2021

This article was first released as part of the April 6, 2021 Intel announcement.

On April 6, 2021, Intel announced a new Xeon platform, billed as the "only data-center processor with built-in AI acceleration," bringing Artificial Intelligence (AI) everywhere, from the Edge to the Cloud. This blog looks at the announcement from an AI perspective, focusing on the new HPE servers with the 3rd Gen Intel Xeon Scalable processor. The same day, HPE released its upgraded portfolio of servers, enabling the new and improved AI capabilities of the latest Intel platform. The HPE servers available with the new Xeon platform are:

- HPE ProLiant DL360 & DL380
- HPE Synergy 480
- HPE Apollo 2000

The Intel & HPE announcements go beyond the new Xeon. Among the updates and improvements are system memory with more DIMM channels and higher bandwidth, increased bus throughput, and Optane Persistent Memory. Many of these improvements play right into the AI playbook. Figure 1 provides an overview of the announcements with data points.

Figure 1 – Intel 3rd Gen Intel Xeon Platform – Overview

AI
An AI application's performance is driven by how efficiently all the hardware components (processing, memory, network) work together, and by the ability of the software to take full advantage of the underlying hardware. I was pleasantly surprised to see that the announcements touched on all of those facets. The new HPE servers are now more than ever equipped to handle the most challenging AI workloads. It is essential to review why the Xeon is the right solution for AI and how Intel has addressed the challenges that come with AI workloads.

Why Xeon?
Let us first have a look at why Intel® believes Xeon is right for AI.
The three main reasons are:

Simplicity
People are already familiar with the Xeon architecture and the Intel® ecosystem. That familiarity makes it easier to get up and running with AI and reduces the complexity of dealing with other architectures from accelerators such as GPUs.

Productivity
Intel® has put a lot of effort and resources into expanding its software ecosystem with partners and optimizing end-to-end data science tools. The Intel® ecosystem allows developers to build and deploy AI solutions faster and with a lower TCO.

Performance
The performance is obtained by maximizing the synergies between hardware and software. First, the processors received new AI capabilities and were additionally tuned for a wide range of workloads. Second, Intel®-optimized software extensions enable applications to take full advantage of the processor's capabilities.

Hardware AI acceleration
When Intel® talks about "built-in AI acceleration," it refers to the Intel® Deep Learning Boost (aka DL Boost) technology. Even the most straightforward Deep Learning workflow requires the execution of billions of mathematical operations, so it is not difficult to imagine the significant performance improvements available if those operations could be optimized. Some operations need only 8 bits of precision, while others require 16-bit or even 32-bit floating point. The more bits involved in a multiplication, the longer it takes to execute. DL Boost's built-in capability can perform multiplications at a lower precision while still accumulating results at a higher precision, so more multiplications can occur in the same amount of time. Considering this happens billions of times, the acceleration gains quickly add up. The latest iteration of DL Boost shows up to a 1.74x performance improvement over the previous generation. Figure 2 depicts the support for DL Boost across generations, with more improvements to expect in the future.

Figure 2 – DL Boost support over time.
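The low-precision idea behind DL Boost can be sketched in plain Python (the vectors and the wrap8 helper below are illustrative, not any Intel API or the actual VNNI intrinsics): multiplying 8-bit values produces results that overflow a narrow 8-bit register, so the hardware multiplies at low precision but accumulates the products into a wide 32-bit register.

```python
def wrap8(v):
    """Wrap a value into the signed 8-bit range, as a narrow register would."""
    return (v + 128) % 256 - 128

# Two vectors whose elements all fit in int8 (-128..127).
a = [120, -35, 64, 7]
b = [90, 110, -128, 55]

# Naive all-8-bit pipeline: each product is forced back into 8 bits,
# so it wraps around and the dot product comes out wrong.
naive = sum(wrap8(x * y) for x, y in zip(a, b))

# DL Boost idea: multiply the 8-bit inputs, but keep the products and
# the running sum at full width (32-bit on hardware), preserving accuracy.
exact = sum(x * y for x, y in zip(a, b))

print(naive, exact)  # the narrow result differs from the exact one
```

On hardware, a single VNNI instruction fuses the low-precision multiplies and the wide accumulation into one step, which is where the cycle savings come from.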
Software Optimization
To get the most out of hardware innovation, you need software integration. It is a real challenge to develop efficient pipelines that take you from the data-processing stage to model training and, finally, to deploying the model. The ability to consolidate the different stages into a single platform results in improved productivity and scalability. The frequent innovations in AI make it difficult for developers to keep up with all the changes; a supported software ecosystem provides an abstraction layer that lets developers stay focused on solving their problems.

Intel understands the need for software optimizations and has a rich ecosystem of AI tools available to its customers. There are too many tools to list them all (figure 3), but the following tools are worth a closer look. Analytics Zoo delivers unified analytics and an AI platform for distributed TensorFlow, PyTorch, Keras, and Apache Spark. For customers who want a Python-based toolchain, Intel® has the oneAPI AI Analytics Toolkit. Developers looking to deploy pre-trained models from Edge to Cloud should look at the OpenVINO toolkit.

Figure 3 – AI made flexible with Intel® Software Optimizations.

Summary
The latest Intel & HPE announcements outline that AI applications, from Edge to Cloud, are the future. There are two main parallel tracks to increase their market share. The first track is to consistently deliver innovative hardware with built-in acceleration to address the many AI challenges. The second track is to keep investing time and resources in building ecosystems that provide software optimization. This combination makes the Xeon a true AI workhorse and an AI enabler for Intel's customers, and it is a solid strategy to build on toward the future.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. HPE, the HPE logo, and other HPE marks are trademarks of Hewlett Packard Enterprise Development LP or its subsidiaries.
About the author
Frederic Van Haren is the Chief Technology Officer @HighFens. He has over 20 years of experience in high tech and is known for his insights in HPC, Big Data, and AI from his hands-on experience leading research and development teams. He has provided technical leadership and strategic direction in the Telecom and Speech markets. He spent more than a decade at Nuance Communications building large HPC and AI environments from the ground up and is frequently invited to speak at events to provide his vision on the HPC, AI, and storage markets. Frederic has also served as the president of a variety of technology user groups promoting the use of innovative technology. As an engineer, he enjoys working directly with engineering teams from technology vendors and on challenging customer projects. Frederic lives in Massachusetts, USA, but grew up in the northern part of Belgium, where he received his Master's in Electrical Engineering, Electronics, and Automation.