2024 Compute Performance Considerations and Expectations

Large language model (LLM) compute workloads capture a lot of focus. Following an "AI on top of HPC" architecture, LLMs became feasible, and they are now, arguably, the world's major arena of technology and business competition.

Typical HPC clusters used for LLM training have tens to thousands of compute nodes, which makes long training jobs feasible. Most of the spotlight in this arena falls on the processing units, primarily GPUs, which provide on the order of 60 TFLOPS of FP64 each. Beyond the GPUs, such clusters are usually configured with dual-socket CPUs providing around 8 TFLOPS per node, high-speed RAM, a high-speed network (100–400 Gbps) with multiple NICs per node, and high-performance storage (usually NAS).
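For a rough sense of scale, the per-device figures above combine into an aggregate cluster peak. A minimal sketch; the 100-node size and 8 GPUs per node are hypothetical assumptions, not figures from this article:

```python
def cluster_peak_tflops(nodes: int, gpus_per_node: int,
                        gpu_tflops: float, cpu_tflops: float) -> float:
    """Aggregate peak FP64 throughput of a cluster, in TFLOPS."""
    return nodes * (gpus_per_node * gpu_tflops + cpu_tflops)

# Hypothetical 100-node cluster: 8 GPUs/node at 60 TFLOPS each,
# plus a dual-socket CPU pair contributing 8 TFLOPS per node.
peak = cluster_peak_tflops(100, 8, 60.0, 8.0)
print(f"{peak:.0f} TFLOPS (~{peak / 1000:.1f} PFLOPS)")  # 48800 TFLOPS (~48.8 PFLOPS)
```

Note how little the CPUs contribute to the total: the GPUs dominate the peak number, which is exactly why the utilization question below matters.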

Now, the inevitable question of the compute efficiency of these modern "super" systems reveals a number of surprises, highlighted in the following lines.

A study by Google suggests that the actual load BERT1 places on a GPU is around 10–20% of the GPU's peak FLOPS capability.

Another study, by Facebook AI, suggests that the actual load of RoBERTa2 on a GPU is around 20–30% of the GPU's peak FLOPS capability.
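The utilization figures above can be expressed as sustained throughput divided by hardware peak. A minimal sketch with illustrative (not measured) numbers:

```python
def flops_utilization(achieved_tflops: float, peak_tflops: float) -> float:
    """Fraction of the hardware's peak FLOPS actually sustained."""
    return achieved_tflops / peak_tflops

# Illustrative numbers only: a GPU with a 312 TFLOPS peak sustaining
# 50 TFLOPS during training sits at roughly 16% utilization,
# consistent with the 10-30% range the studies above report.
peak = 312.0      # peak tensor throughput, TFLOPS (assumed)
achieved = 50.0   # sustained training throughput, TFLOPS (hypothetical)
print(f"utilization: {flops_utilization(achieved, peak):.0%}")
```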

A closer look at LLM training and inference indicates that these are data-centric workloads. The poor GPU utilization of these models clearly implies a need for higher-rate data access and transfer technologies, rather than increased compute capability. In other words, a higher ROI for a compute facility running data-centric workloads can be achieved by increasing the investment in data access and transfer capabilities, specifically the memory and the computer architecture, instead of investing in additional "unusable" compute capability.
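One common way to quantify this argument is the roofline model: attainable throughput is capped by the lesser of peak compute and memory bandwidth times arithmetic intensity. A sketch with assumed hardware numbers (the 60 TFLOPS peak and 2 TB/s bandwidth are illustrative, not a specific product):

```python
def roofline_tflops(arith_intensity: float,
                    peak_tflops: float,
                    mem_bw_tbps: float) -> float:
    """Attainable TFLOPS under the roofline model.

    arith_intensity: FLOPs performed per byte moved (FLOP/byte).
    mem_bw_tbps: memory bandwidth in TB/s.
    """
    return min(peak_tflops, mem_bw_tbps * arith_intensity)

# Assumed hardware: 60 TFLOPS peak compute, 2 TB/s memory bandwidth.
# A kernel doing 10 FLOPs per byte moved is memory-bound at 20 TFLOPS.
print(roofline_tflops(10, peak_tflops=60, mem_bw_tbps=2))   # 20.0
# Doubling memory bandwidth doubles its attainable throughput...
print(roofline_tflops(10, peak_tflops=60, mem_bw_tbps=4))   # 40.0
# ...while doubling peak compute changes nothing.
print(roofline_tflops(10, peak_tflops=120, mem_bw_tbps=2))  # 20.0
```

For a memory-bound workload, extra FLOPS are exactly the "unusable" compute capability described above; only faster data movement raises the roofline it actually hits.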

Surprisingly, this means:

  1. Development of (an order of magnitude) faster memory systems is what the industry should expect in the near future.
  2. The Modified Harvard architecture (a slight variation on the Von Neumann architecture), currently the most widely adopted computer architecture, does not suit modern data-centric workloads. Modern compute requirements clearly necessitate a technology-development shift towards "data flow / in-memory computing" as an alternative architecture for such workloads.
  3. Development and innovation of new hardware processing platforms, along with their enablement by software and programming models, will keep increasing as newer workloads evolve.
  4. The chipmaker market landscape is about to change: the domination of GPUs as AI processing units will not last long, although they are currently the best fit.
  5. In the short term, the compute-capability differences between the various GPU platforms currently on the market may soon prove insignificant, specifically when comparing Nvidia, AMD, and Intel datacenter GPUs.

Although the above GPU market expectations might be considered bold, and may seem to suggest a bubble, they are worth preparing for.

———————————————————————————-

1: BERT (Bidirectional Encoder Representations from Transformers) is a popular language model that has been shown to achieve state-of-the-art results on a wide range of NLP tasks.

2: RoBERTa (Robustly Optimized BERT Pretraining Approach) is a variant of BERT with an optimized pretraining procedure.

Author:
Tamer Assad Hassan Mahmoud
HPC & Media Streaming Consultant
CEO of PHOTON COMPUTING LLC
LinkedIn: https://www.linkedin.com/in/tamerassad
https://www.photon-computing.com
