AMD 30x25 Energy Efficiency Goal in High-Performance Computing and AI Training

By MARK PAPERMASTER, Executive Vice President and Chief Technology Officer, AMD.


As more and more devices become “smart devices” containing embedded processors with internet connectivity and often cameras, data creation continues to explode at an exponential pace. Artificial Intelligence (AI) and High-Performance Computing (HPC) are transforming the computing landscape, enabling this massive trove of data to be analyzed for higher-quality analytics, automated services, enhanced security, and much more. The challenge: the scale of these advanced computations demands ever more energy.

 

As a leader in creating high-performance processors to address the world’s most demanding analytics, AMD has prioritized energy efficiency in our product development. We do this by holistically approaching the design for power optimization across architecture, packaging, connectivity, and software. Our focus on energy efficiency aims to reduce costs, preserve natural resources, and mitigate the climate impacts.

 

Prioritizing energy efficiency at AMD is not new. In fact, in 2014 we voluntarily set a goal to improve the typical-use energy efficiency of our mobile processors 25x by 2020. We met and exceeded that goal, achieving a 31.7x improvement.

 

Last year we announced a new vision, our 30x25 goal: a 30x improvement in energy efficiency by 2025, from a 2020 baseline, for our accelerated data center compute nodes.[i] Built with AMD EPYC™ CPUs and AMD Instinct™ accelerators, these nodes are designed for some of the world’s fastest-growing computing needs in AI training and HPC applications. These applications are essential to scientific research in climate prediction, genomics, and drug discovery, as well as to training AI neural networks for speech recognition, language translation, and expert recommendation systems. The computing demands of these applications are growing exponentially. Fortunately, we believe it is possible to optimize energy use for these and other applications of accelerated compute nodes through architectural innovation.

 

Caption: Application-specific accelerated compute nodes enable greater efficiency

 

AMD, along with our industry, understands the opportunity for data center efficiency gains to help reduce greenhouse gas emissions and increase environmental sustainability. For example, if all global AI and HPC server nodes were to make similar gains, we project that up to 51 billion kilowatt-hours (kWh) of electricity could be saved from 2021-2025 relative to baseline industry trends, amounting to $6.2B USD in electricity savings and carbon benefits equivalent to 600 million tree seedlings grown for 10 years.[ii]
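For reference, the dollar figure follows directly from the projected electricity savings and the $0.12/kWh electricity price assumed in the supporting calculation [ii]; the short sketch below simply reproduces that arithmetic:

```python
# Rough reproduction of the arithmetic behind footnote [ii].
# The electricity rate is the stated assumption, not a measured value.

ELECTRICITY_SAVED_KWH = 51.4e9   # cumulative kWh saved, 2021-2025
RATE_USD_PER_KWH = 0.12          # assumed average electricity price

savings_usd = ELECTRICITY_SAVED_KWH * RATE_USD_PER_KWH
print(f"Projected electricity cost savings: ${savings_usd / 1e9:.1f}B USD")
# -> Projected electricity cost savings: $6.2B USD
```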

 

Practically speaking, achieving the 30x goal means that in 2025, the power required for these AMD accelerated compute nodes to complete a single calculation will be approximately 97% lower than in 2020. Getting there will not be easy. Achieving this goal means we will need to increase the energy efficiency of an accelerated compute node at a rate more than 2.5x faster than the aggregate industry-wide improvement made during the period 2015-2020.[iii]
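For a concrete sense of scale: a 30x gain in performance per watt means each calculation consumes roughly one-thirtieth of the energy it did in 2020, and spreading that gain over five years implies a compound improvement of just under 2x every year. The sketch below works through that arithmetic (simple math on the stated goal, not measured results):

```python
# What a 30x efficiency gain means per calculation, and the implied
# year-over-year improvement needed between 2020 and 2025.

GOAL_FACTOR = 30   # target perf/Watt improvement vs. the 2020 baseline
YEARS = 5          # 2020 -> 2025

energy_reduction = 1 - 1 / GOAL_FACTOR     # reduction in energy per calculation
annual_rate = GOAL_FACTOR ** (1 / YEARS)   # compound yearly improvement

print(f"Energy per calculation drops by {energy_reduction:.1%}")       # ~96.7%
print(f"Required compound improvement: ~{annual_rate:.2f}x per year")  # ~1.97x
```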

 

One-Year Progress Update

 

So how are we doing? Nearly midway through 2022, we are on track toward achieving 30x25, having reached a 6.79x improvement in energy efficiency from the 2020 baseline using an accelerated compute node powered by one 3rd Gen AMD EPYC CPU and four AMD Instinct MI250X GPUs. Our progress report uses a measurement methodology[iv] validated by renowned compute energy efficiency researcher and author Dr. Jonathan Koomey.

 

Caption: 2022 update on the 30x25 energy efficiency goal. AMD’s actual achievements are on track toward the 30x goal and well above the industry improvement trend from 2015-2020.

 

Caption: Comparative energy use projections for data center compute nodes globally running AI-training and HPC workloads. Source: AMD Internal Data

 

As seen in the graphic above, the business-as-usual “baseline industry trend” projects global energy use for 2020-2025 by extending the historical trend observed in 2015-2020 data. The AMD goal trendline shows global energy use based on the efficiency gains represented by the AMD 30x25 goal, with the desirable result of lower energy consumption. The AMD actual trendline shows global energy use based on the AMD compute node energy efficiency gains reported to date.

 

While there is more work to do to reach our 30x25 goal, I am pleased by the work of our engineers and encouraged by the results so far. I invite you to check in with us as we continue to report annually on our progress.

 

##

 

[i] Includes AMD high-performance CPU and GPU accelerators used for AI training and High-Performance Computing in a 4-accelerator, CPU-hosted configuration. Goal calculations are based on performance scores as measured by standard performance metrics (HPC: Linpack DGEMM kernel FLOPS with a 4k matrix size; AI training: lower-precision, training-focused floating-point GEMM kernels, such as FP16 or BF16 FLOPS, operating on 4k matrices) divided by the rated power consumption of a representative accelerated compute node, including the CPU host + memory and 4 GPU accelerators.
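To illustrate the shape of this calculation, the sketch below divides a workload’s achieved FLOPS by the rated power of the full node (CPU host + memory plus four accelerators). All inputs are hypothetical placeholders, not measured figures for any AMD product:

```python
# Illustrative node-level perf/Watt calculation along the lines of the
# methodology in footnote [i]: workload performance divided by the rated
# power of the whole accelerated node (CPU host + memory + 4 GPUs).
# All numbers below are hypothetical placeholders.

def node_perf_per_watt(workload_flops: float,
                       host_and_memory_watts: float,
                       accelerator_watts: float,
                       num_accelerators: int = 4) -> float:
    """Performance per watt for a CPU-hosted, multi-accelerator compute node."""
    node_power = host_and_memory_watts + num_accelerators * accelerator_watts
    return workload_flops / node_power

# Made-up example: 100 TFLOPS of DGEMM across the node, 400 W for the CPU
# host + memory, and 500 W per accelerator.
efficiency = node_perf_per_watt(100e12, 400, 500, num_accelerators=4)
print(f"{efficiency / 1e9:.1f} GFLOPS per watt")   # ~41.7
```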

 

[ii] Scenario based on all AI and HPC server nodes globally making similar gains to the AMD 30x goal, resulting in cumulative savings of up to 51.4 billion kilowatt-hours (kWh) of electricity from 2021-2025 relative to baseline 2020 trends. Assumes $0.12 per kWh × 51.4 billion kWh = $6.2 billion USD. Metric tonnes of CO2e emissions, and the equivalent estimate for tree plantings, are based on entering the electricity savings into the U.S. EPA Greenhouse Gas Equivalency Calculator on 12/1/2021. https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator

 

[iii] Based on 2015-2020 industry trends in energy efficiency gains and data center energy consumption in 2025.

 

[iv] Calculation includes 1) base-case kWh use projections in 2025 conducted with Koomey Analytics based on available research and data, including segment-specific projected 2025 deployment volumes and data center power usage effectiveness (PUE) for GPU HPC and machine learning (ML) installations, and 2) AMD CPU socket and GPU node power consumption, incorporating segment-specific utilization (active vs. idle) percentages and multiplied by PUE to determine actual total energy use for calculation of performance per watt.

6.79x = (base-case HPC node kWh use projection in 2025 × AMD 2022 perf/Watt improvement using DGEMM and typical energy consumption + base-case ML node kWh use projection in 2025 × AMD 2022 perf/Watt improvement using ML math and typical energy consumption) / (2020 perf/Watt × base-case projected kWh usage in 2025). For more information on the goal and methodology, visit https://www.amd.com/en/corporate-responsibility/data-center-sustainability
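One way to read the formula above is as an energy-weighted average of the HPC and ML (AI-training) efficiency gains, where each segment is weighted by its projected 2025 base-case energy use. The sketch below shows that weighting with placeholder inputs (not the actual projections developed with Koomey Analytics):

```python
# Energy-weighted average of segment efficiency gains, following the
# structure of the formula above. Inputs are illustrative placeholders,
# not the actual projections used in the AMD methodology.

def weighted_improvement(hpc_kwh: float, hpc_gain: float,
                         ml_kwh: float, ml_gain: float) -> float:
    """Weight each segment's perf/Watt gain by its projected 2025 energy use."""
    return (hpc_kwh * hpc_gain + ml_kwh * ml_gain) / (hpc_kwh + ml_kwh)

# Hypothetical example: ML nodes projected to use twice the energy of HPC nodes,
# with a 6.0x HPC gain and a 7.2x ML gain relative to the 2020 baseline.
print(weighted_improvement(hpc_kwh=1.0, hpc_gain=6.0, ml_kwh=2.0, ml_gain=7.2))
# -> 6.8
```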
