2024 Compute Performance Considerations and Expectations

The compute loads of large language models (LLMs) capture a lot of focus. Following an "AI on top of HPC" architecture, LLMs became feasible and are now, arguably, the world's major technology and business competency arena.

Typical HPC clusters utilized in LLM training have tens to thousands of compute nodes, making long training jobs feasible. Most of the spotlight in this arena falls on the processing units, primarily GPUs, which provide an average of 60 TFLOPS of FP64. Yet such HPC clusters are usually also configured with dual-socket CPUs providing an average of 8 TFLOPS, high-speed RAM, high-speed networking (100–400 Gbps) with multiple NICs per node, and high-performance storage (usually NAS).
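As a rough illustration of the aggregate throughput such a cluster nominally offers, the sketch below sums the peak FP64 capability described above. The node count and per-device figures are purely illustrative assumptions, not vendor specs:

```python
# Rough peak-throughput estimate for a hypothetical LLM training cluster.
# All figures are illustrative, not measurements of any real system.
def cluster_peak_tflops(nodes, gpus_per_node, gpu_tflops, cpu_tflops_per_node):
    """Aggregate theoretical peak of the cluster, in TFLOPS."""
    gpu_total = nodes * gpus_per_node * gpu_tflops
    cpu_total = nodes * cpu_tflops_per_node
    return gpu_total + cpu_total

# Example: 100 nodes, 8 GPUs per node at 60 TFLOPS FP64 each,
# plus dual-socket CPUs contributing ~8 TFLOPS per node.
peak = cluster_peak_tflops(nodes=100, gpus_per_node=8,
                           gpu_tflops=60, cpu_tflops_per_node=8)
print(f"{peak:,.0f} TFLOPS aggregate peak")  # 48,800 TFLOPS aggregate peak
```

Note that this is a theoretical ceiling; as the studies below show, the fraction of it that real workloads sustain is much smaller.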

Now the inevitable question of the compute efficiency of these modern "super" systems reveals some surprises, highlighted in the following lines.

A study by Google suggests that the actual load BERT¹ places on a GPU is around 10–20% of the GPU's peak FLOPS capability.

Another study by Facebook AI suggests that the actual load RoBERTa² places on a GPU is around 20–30% of the GPU's peak FLOPS capability.
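Taken at face value, these utilization figures translate into modest sustained throughput. The sketch below assumes the 60 TFLOPS per-GPU peak mentioned earlier and the 10–30% range from the studies; both numbers are carried over from the text, not measured:

```python
# Sustained throughput implied by the utilization figures above.
# The 60 TFLOPS peak and the 10-30% range are taken from the article.
def sustained_tflops(peak_tflops, utilization):
    """Achieved throughput at a given fraction of peak."""
    return peak_tflops * utilization

PEAK = 60  # TFLOPS FP64, per-GPU peak assumed earlier

for name, util in [("BERT low", 0.10), ("BERT high", 0.20),
                   ("RoBERTa low", 0.20), ("RoBERTa high", 0.30)]:
    print(f"{name}: {sustained_tflops(PEAK, util):.0f} TFLOPS sustained")
```

In other words, a 60 TFLOPS device may sustain as little as 6 TFLOPS on these workloads, leaving most of its compute capability idle.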

A closer look at LLM training and inference indicates that these are data-centric workloads. The poor GPU utilization of these models clearly implies a need for faster data access and transfer technologies, rather than increased compute capability. In other words, a higher ROI for a compute facility running data-centric workloads can be achieved by increasing the investment in data access and transfer capabilities, specifically the memory and the computer architecture, instead of investing in additional "unusable" compute capability.
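One common way to reason about this is the roofline model: a workload whose arithmetic intensity (FLOPs performed per byte moved) falls below the device's compute-to-bandwidth ratio is memory-bound, and any additional FLOPS go unused. The sketch below uses illustrative peak figures (60 TFLOPS, 2 TB/s) rather than any specific device:

```python
# Roofline model: attainable throughput is capped either by the compute
# peak or by memory bandwidth times arithmetic intensity, whichever is lower.
def attainable_tflops(intensity_flops_per_byte, peak_tflops, peak_bw_tbytes_s):
    """Attainable throughput under the roofline model, in TFLOPS."""
    return min(peak_tflops, peak_bw_tbytes_s * intensity_flops_per_byte)

# Illustrative device: 60 TFLOPS peak compute, 2 TB/s memory bandwidth.
# Ridge point = 60 / 2 = 30 FLOPs/byte; below it, bandwidth is the limit.
print(attainable_tflops(10, 60, 2))  # 20 (memory-bound: bandwidth caps throughput)
print(attainable_tflops(50, 60, 2))  # 60 (compute-bound: peak FLOPS is the cap)
```

Under these assumed numbers, doubling memory bandwidth would double the throughput of the memory-bound case while adding compute capability would change nothing, which is precisely the investment argument made above.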

Surprisingly, this means:

  1. The industry should expect the development of memory systems an order of magnitude faster in the near future.
  2. The Modified Harvard architecture (a slight variation on the Von Neumann architecture), currently the most widely adopted computer architecture, does not suit modern data-centric workloads. Modern compute requirements clearly necessitate a technology shift toward "data flow" or in-memory computing as an alternative architecture for such workloads.
  3. Development and innovation of new hardware processing platforms, along with their enablement by software and programming models, will increase as newer workloads evolve.
  4. The chipmaker market landscape is about to change: the domination of GPUs as AI processing units will not last long, even though they are currently the best fit.
  5. In the short term, the compute capability differences between the GPU platforms currently available on the market may prove insignificant, specifically when comparing Nvidia, AMD, and Intel datacenter GPUs.

Although the above GPU market expectations might be considered bold, and may seem to suggest a bubble, they are worth preparing for.

———————————————————————————-

1: BERT (Bidirectional Encoder Representations from Transformers) is a popular LLM that has been shown to achieve state-of-the-art results on a wide range of NLP tasks.

2: RoBERTa (Robustly Optimized BERT Pretraining Approach) is a variant of BERT with an improved pretraining procedure, trained longer and on more data.

Author:
Tamer Assad Hassan Mahmoud
HPC & Media Streaming Consultant
CEO of PHOTON COMPUTING LLC
LinkedIn: https://www.linkedin.com/in/tamerassad
https://www.photon-computing.com
