2024 Compute Performance Considerations and Expectations

The compute loads of large language models (LLMs) capture a lot of attention. Following an "AI on Top of HPC" architecture, LLMs became feasible, and they are now, arguably, the world's major technology and business competency arena.

Typical HPC clusters used for LLM training have tens to thousands of compute nodes, which makes long training jobs feasible. Most of this arena's spotlight falls on the processing units, primarily GPUs, which provide an average of 60 TFLOPS of FP64 throughput. Beyond the GPUs, such clusters are usually configured with dual-socket CPUs providing an average of 8 TFLOPS, high-speed RAM, a high-speed network (100–400 Gbps) with multiple NICs per node, and high-performance storage (usually NAS).
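To put these specifications in perspective, the following back-of-the-envelope sketch aggregates the peak throughput of such a cluster. The node count and GPUs-per-node figures are illustrative assumptions, not taken from any specific system:

```python
def cluster_peak_tflops(nodes, gpus_per_node, gpu_tflops, cpu_tflops_per_node):
    """Aggregate peak throughput (TFLOPS) of a cluster:
    per node, sum the GPUs' peak plus the dual-socket CPUs' peak."""
    return nodes * (gpus_per_node * gpu_tflops + cpu_tflops_per_node)

# Illustrative cluster: 100 nodes, 8 GPUs per node at 60 TFLOPS FP64 each,
# and dual-socket CPUs contributing 8 TFLOPS per node.
peak = cluster_peak_tflops(nodes=100, gpus_per_node=8,
                           gpu_tflops=60.0, cpu_tflops_per_node=8.0)
print(f"{peak:,.0f} TFLOPS aggregate peak")  # 48,800 TFLOPS
```

Note that the CPUs contribute under 2% of this figure, which is why the spotlight stays on the GPUs.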

Now, the inevitable question of the compute efficiency of these modern "super" systems reveals a number of surprises, highlighted in the following lines.

A study by Google suggests that the actual load BERT [1] places on a GPU is around 10–20% of the GPU's peak FLOPS capability.

Another study, by Facebook AI, suggests that the actual load RoBERTa [2] places on a GPU is around 20–30% of the GPU's peak FLOPS capability.
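A minimal sketch of how such utilization figures are derived: the FLOPS the model actually sustains (its FLOPs per training step divided by the step time) is compared against the hardware's peak. The specific numbers below are illustrative assumptions, not taken from either study:

```python
def flops_utilization(achieved_tflops, peak_tflops):
    """Fraction of the hardware's peak FLOPS a workload actually sustains."""
    return achieved_tflops / peak_tflops

# Illustrative example: a training job sustaining 45 TFLOPS
# on a GPU whose relevant peak is 312 TFLOPS.
u = flops_utilization(achieved_tflops=45.0, peak_tflops=312.0)
print(f"utilization: {u:.0%}")  # utilization: 14%
```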

A closer look at LLM training and inference indicates that these are data-centric workloads. The poor GPU utilization achieved by these models clearly implies a need for faster data access and transfer technologies, rather than increased compute capability. In other words, for a compute facility running data-centric workloads, a higher ROI can be achieved by increasing the investment in data access and transfer capabilities (specifically, the memory system and the computer architecture) instead of investing in additional "unusable" compute capability.
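The classic roofline model makes this argument concrete: a kernel's attainable throughput is capped either by peak compute or by memory bandwidth multiplied by its arithmetic intensity (FLOPs performed per byte moved). The GPU figures below are illustrative assumptions chosen to match the article's 60 TFLOPS example:

```python
def attainable_tflops(peak_tflops, mem_bw_tbps, flops_per_byte):
    """Roofline model: throughput is the lesser of the compute peak and
    the memory-bandwidth ceiling (bandwidth * arithmetic intensity)."""
    return min(peak_tflops, mem_bw_tbps * flops_per_byte)

# Illustrative GPU: 60 TFLOPS peak, 2 TB/s memory bandwidth.
# A data-centric kernel at 5 FLOPs/byte is memory-bound:
print(attainable_tflops(60.0, 2.0, 5.0))   # 10.0 TFLOPS, ~17% of peak
# Only at or above 30 FLOPs/byte does compute become the limit:
print(attainable_tflops(60.0, 2.0, 40.0))  # 60.0 TFLOPS
```

On these assumed figures, a memory-bound kernel reaches only about a sixth of the GPU's peak, no matter how much compute capability is added; only faster memory raises its ceiling.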

Surprisingly, this means:

  1. Development of (an order of magnitude) faster memory systems is what the industry should expect in the near future.
  2. The Modified Harvard architecture (a slight variation of the Von Neumann architecture), currently the most widely adopted computer architecture, does not suit modern data-centric workloads. Modern compute requirements clearly necessitate a technology development shift towards "Data Flow / In-Memory Computing" as an alternative architecture for such workloads.
  3. Development and innovation of new hardware processing platforms, along with their enablement by software and programming models, will increase as newer workloads evolve.
  4. The chipmaker market landscape is about to change: the domination of GPUs as AI processing units will not last for long, although they are currently the best fit.
  5. In the short term, the compute capability differences between the GPU platforms currently on the market may soon be seen as insignificant, specifically when comparing Nvidia, AMD, and Intel datacenter GPUs.

Although the GPU market expectations above might be considered bold, and may seem to suggest a bubble, they are worth preparing for.

———————————————————————————-

[1] BERT (Bidirectional Encoder Representations from Transformers) is a popular LLM that has achieved state-of-the-art results on a wide range of NLP tasks.

[2] RoBERTa (Robustly Optimized BERT Pretraining Approach) is a variant of BERT with an improved pretraining procedure (longer training, larger batches, and more data).

Author:
Tamer Assad Hassan Mahmoud
HPC & Media Streaming Consultant
CEO of PHOTON COMPUTING LLC
LinkedIn: https://www.linkedin.com/in/tamerassad
https://www.photon-computing.com
