Just One Word: FLOPS

by Ileana Wolfort | May 25, 2023

“I Have One Word for You, Benjamin: Plastics” – The Graduate

In 1859 Edwin L. Drake struck oil in Titusville, PA.

Oil as an energy resource was born. And for the next 60 years, energy was oil’s only purpose.

But plastics changed all that.

Suddenly, oil companies had a whole new set of customers. As demand from plastics manufacturers expanded, so too did oil company profits.

Today, oil has many customers.

It’s a critical feedstock for a wide range of petrochemical processes that include not only plastics but also fertilizers, detergents, and cosmetics. It’s utilized as a coolant and lubricant for maintaining industrial machinery and equipment. We wouldn’t have asphalt without the oil used to make it.

Between all the competing uses, roughly half or more of the oil supplied to global markets finds its way into something other than energy for electricity generation or transportation.

And now that AI has gone public, another 60-year-old resource suddenly has a demanding new customer.

Gimme Those FLOPS

That resource is FLOPS, or “floating point operations per second.”

Specifically, FLOPS indicate how many floating point calculations a processor (or a set of processors, as in a supercomputer) can execute in one second.

When you hear terms like “petaFLOPS” (PFLOPS, quadrillions of FLOPS), “exaFLOPS” (EFLOPS, quintillions of FLOPS), and so on, they refer to how many of these floating point calculations a computer can perform each second.
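For a sense of scale, here's a minimal sketch (assuming Python with NumPy installed) that estimates how many FLOPS an ordinary machine actually delivers by timing a dense matrix multiply. The 2n³ operation count for an n×n matrix product is the standard back-of-the-envelope approximation; the matrix size here is an arbitrary choice for illustration.

```python
import time
import numpy as np

# A dense multiply of two n x n matrices takes roughly 2 * n**3
# floating point operations -- a standard approximation.
n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS achieved on this machine")
```

A typical laptop lands in the tens or low hundreds of GFLOPS; the petaFLOPS and exaFLOPS machines used for AI are millions of times faster.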


Once the computing market became a thing in the 50s and 60s, FLOPS became the hot new resource.

And as plastics were to oil, AI is now the big, new customer for FLOPS.

From Birth to Big Model

AI, especially deep learning models like ChatGPT, depends on FLOPS in many ways.

For instance, the training phase of large-scale machine learning models involves numerous mathematical operations (multiplications, additions, etc.) over high-dimensional data. These operations are often floating-point calculations, and the number of FLOPS can give an estimate of how quickly a system can train a model.
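To put rough numbers on that, a widely cited rule of thumb estimates training compute at about 6 floating point operations per parameter per training token. The sketch below applies it to a hypothetical GPT-3-scale model; the parameter count, token count, and 1 PFLOPS cluster speed are assumptions for illustration, not reported costs.

```python
# Back-of-the-envelope training compute, using the commonly cited
# ~6 FLOPs per parameter per training token approximation.
params = 175e9        # hypothetical GPT-3-scale parameter count
tokens = 300e9        # hypothetical number of training tokens

total_flop = 6 * params * tokens          # ~3.2e23 floating point operations
sustained_flops = 1e15                    # assume 1 PFLOPS sustained throughput

days = total_flop / sustained_flops / 86_400
print(f"{total_flop:.1e} FLOPs, roughly {days:,.0f} days at 1 PFLOPS sustained")
```

Even under these rough assumptions, a single training run works out to thousands of days at a sustained petaFLOPS, which is why training clusters are built from thousands of accelerators running in parallel.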

During the inference phase, the model uses the learned parameters to make predictions. Though not as computationally intensive as training, inference also involves a substantial number of floating-point operations, particularly for large, complex models and high-dimensional inputs.
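Inference can be sketched the same way: a common approximation is roughly 2 floating point operations per parameter for each generated token, so serving cost scales with both model size and how much text users ask for. The figures below are illustrative assumptions, not measured numbers.

```python
# Rough per-request inference cost, using the ~2 FLOPs per parameter
# per generated token approximation (illustrative assumption).
params = 175e9              # hypothetical large-model parameter count
tokens_per_reply = 500      # assumed length of one generated reply

flop_per_reply = 2 * params * tokens_per_reply
print(f"~{flop_per_reply:.1e} FLOPs per reply")   # on the order of 1e14
```

Multiply that by millions of user requests per day and inference alone becomes an enormous, ongoing consumer of FLOPS.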

As these models scale – models with billions or even trillions of parameters are not uncommon – so too does the number of floating-point operations needed to train and use them.

And now that ChatGPT and other deep learning-based large language models are available to the public, more FLOPS means faster computation to meet exploding demand for real-time processing.

In short, when it comes to AI, FLOPS matter.

And that’s why Nvidia Corporation (NVDA) is at all-time highs.


Essentially, chip makers like NVDA create FLOPS.

They make processing power a usable resource by manufacturing semiconductors, just like oil companies make oil usable by pulling it out of the ground.

And, right now, markets are scrambling to understand the full revenue-generating implications of this new customer.

It’s creating a strong bid in equity markets because the implications touch almost every industry and service.

The drive to capitalize on the benefits of AI is waging war with the Treasury supply bottleneck that’s crowding out capital.

It’s a critical market dynamic.

To watch the battle play out in real-time, click the video below.

Take What the Markets Give You.

WRITTEN BY
Ileana Wolfort
