When there’s a gold-rush
make shovels and pickaxes
When AI booms
Make GPUs
and TPUs
and QPUs
NVIDIA – From making video games
to becoming the biggest player in AI computing.
And accelerating a New Age of Computing
Honestly?
Even as somebody schooled in Electronic Engineering (a long, long time ago ;) I would not have known the difference between a TPU and a QPU, or even what those funny acronyms stand for.
Until yesterday, that is, when I stumbled upon a recording of the presentation NVIDIA's co-founder gave in October.
The company with the funny-sounding name is making history as the first company ever to reach a $5 trillion market value. Not that this really means much to me, actually.
But it is exceptional for a company that started as a scrappy producer of computer chips for video games, some 30 years ago.
Personally I was not into games at all, but I was interested in computer graphics. That is how I got to know their programming environment for real-time graphics and first heard about GPUs – Graphics Processing Units – NVIDIA's special sauce for the computing revolution that is happening right now. Most people just don't know it.
Twenty-five years on, NVIDIA is driving a computing revolution with far-reaching implications for our technological societies – which basically run on computer hardware.
If you watch this presentation you will understand what I am talking about. Jensen Huang gives a lot of clear technical detail. And he is also quite a decent guy, and entertaining. I think.
So, what’s CPU, GPU, TPU and QPU???
Short answer:
CPU, GPU, TPU, and QPU are all types of processors, each designed for different tasks.
A CPU is a general-purpose processor that runs operating systems and applications, a GPU excels at parallel processing for graphics and AI, a TPU is specifically designed for high-efficiency AI and machine-learning computations, and a QPU uses quantum mechanics to perform computations with qubits, with applications in complex problems like cryptography.
In other words, all four are processors, but each is optimized for a different kind of computation. The key distinction lies in their architecture and specialization, ranging from general-purpose computing to highly specialized parallel or quantum workloads.
Central Processing Unit (CPU)
Description: The CPU is the general-purpose “brain” of a computer. It is responsible for carrying out most of the processing and control functions by performing calculations and executing instructions from the operating system and applications sequentially. A CPU has a small number of very powerful cores, making it excellent for tasks requiring complex logic and fast sequential processing.
Primary Use Case: General computing tasks, such as running the operating system, managing applications, web browsing, and handling complex single-threaded calculations.
Graphics Processing Unit (GPU)
Description: Originally designed to accelerate the rendering of 3D graphics, a GPU is a specialized electronic circuit with thousands of smaller, more efficient cores. This parallel architecture allows it to perform a massive number of calculations simultaneously.
Primary Use Case: Massively parallel workloads, including high-end gaming, video editing, and especially the training and inference of artificial intelligence (AI) and machine learning models.
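The difference between sequential (CPU-style) and data-parallel (GPU-style) work can be sketched in plain Python with NumPy. This is my own illustration, not NVIDIA's code: NumPy itself runs on the CPU, but the vectorized call expresses the whole computation as one bulk operation – exactly the shape of work a GPU spreads across thousands of cores (libraries like CuPy mirror this API on actual NVIDIA hardware).

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

# Sequential, CPU-style: one multiply-accumulate at a time,
# three nested loops, 64*64*64 steps in strict order.
c_loop = np.zeros((64, 64))
for i in range(64):
    for j in range(64):
        for k in range(64):
            c_loop[i, j] += a[i, k] * b[k, j]

# Parallel-friendly: one matrix multiply. On a GPU, each of the
# 64*64 output cells can be computed simultaneously.
c_vec = a @ b

print(np.allclose(c_loop, c_vec))  # same result, different execution model
```

Same answer both ways; the point is that the second form hands the hardware thousands of independent calculations it can run at once.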
Tensor Processing Unit (TPU)
Description: A TPU is an application-specific integrated circuit (ASIC) developed by Google specifically to accelerate machine-learning (ML) workloads. TPUs are optimized for the matrix multiplication and accumulation operations that are central to deep-learning models, which typically makes them faster and more energy-efficient for AI tasks than CPUs and GPUs.
Primary Use Case: Training and running AI and ML models within Google’s cloud infrastructure, such as for Google Search, Google Translate, and generative AI.
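To see why matrix multiply-accumulate is worth building a dedicated chip for: a single dense neural-network layer is essentially just that one operation. Here is a minimal sketch (my own toy numbers, not Google's code) of the computation a TPU's matrix units perform at enormous throughput.

```python
import numpy as np

# A tiny dense layer: y = x @ W + b.
# Deep-learning models chain millions of these multiply-accumulate
# operations -- the exact pattern TPU hardware is built around.

x = np.array([[1.0, 2.0, 3.0]])       # one input sample, 3 features
W = np.array([[0.1, 0.4],
              [0.2, 0.5],
              [0.3, 0.6]])            # weights: 3 inputs -> 2 outputs
b = np.array([0.5, -0.5])             # one bias per output

y = x @ W + b                          # multiply, accumulate, add bias
print(y)                               # approximately [[1.9 2.7]]
```

Everything from Google Translate to a large language model reduces, at the hardware level, to staggering numbers of these small multiply-and-add steps.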
Quantum Processing Unit (QPU)
Description: A QPU is a component of a quantum computer that uses quantum mechanics to process information. Unlike classical processors that use binary bits (0s and 1s), QPUs use quantum bits, or qubits, which can exist in multiple states at once (a concept known as superposition). QPUs are still in the experimental stages and require extremely cold temperatures and shielded environments to function.
Primary Use Case: QPUs are not designed to replace CPUs for everyday tasks. Instead, they are being developed to solve certain types of incredibly complex problems that are intractable for even the most powerful classical supercomputers, such as advanced simulations, cryptography, and optimization.
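Superposition sounds mystical, but for a single qubit the bookkeeping is simple enough to simulate classically. The sketch below is a simplified illustration of the math, not how a real QPU works physically: we track a two-entry state vector and apply a Hadamard gate, the standard operation for putting a qubit into an equal superposition of 0 and 1.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                  # qubit definitely in state |0>

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

state = H @ ket0                             # now a superposition of |0> and |1>
probs = np.abs(state) ** 2                   # Born rule: measurement probabilities

print(probs)                                 # roughly [0.5 0.5] -- a fair coin
```

One qubit needs a 2-entry vector; n qubits need 2^n entries, which is exactly why classical machines cannot simulate large quantum systems and why real QPUs are interesting at all.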
NVIDIA CEO Jensen Huang outlines the next phase of accelerated computing and AI
Well, really impressive from a technical perspective. I can appreciate what they have done and are doing. Not that I am a big fan of such self-congratulatory presentations, which are the norm in the tech world these days.
But hey! So what? Relax Cris! 😹 Says my cat Jojo.
The reason I am really bringing this up here is that the mad rush for AGI – Artificial General Intelligence – that we are seeing now is accelerating exponentially with every new iteration of more powerful chips.
And with that acceleration comes all the rest, which is not so nice from my perspective.
Server Farms, AI Data Warehouses
Resource-hungry secretive Megastructures

AI data centers are specialized facilities designed for the intensive computational demands of artificial intelligence, featuring high-performance hardware like GPUs, advanced cooling systems, and robust networks to handle massive datasets and complex algorithms.
This distinguishes them from traditional data centers, which are mainly used for data storage; AI requires more power and lower latency for tasks like training large language models. Consequently, AI data centers are experiencing explosive growth, attracting enormous investments and raising serious questions about their consumption of resources like energy and water.
As artificial intelligence becomes the new foundation of global innovation, ownership of and access to massive hardware determine who is part of the AI gold rush and who will be left standing in the dust.
Only 32 nations worldwide, predominantly in the Northern Hemisphere, possess specialized AI data centers, leaving the vast majority of countries without this crucial technological infrastructure.
The United States and China alone operate over 90% of specialized AI data centers. US tech giants, including Amazon, Microsoft, and Google, operate 87 major AI computing hubs globally, while Chinese firms operate 39. European companies operate only six.
Social media companies like Meta (Facebook) and Elon Musk’s xAI have truly megalomaniac ambitions. Musk’s AI company, xAI, has built a massive data center called “Colossus” in Memphis, Tennessee, which is reportedly the world’s largest AI supercomputer.
Africa and South America are nearly absent from the map.
More than 150 countries lack such infrastructure altogether.
Inside these facilities are high-powered chips, made mainly by NVIDIA, that power the most advanced AI tools. Without access to them, countries fall behind in AI development, scientific research, and even economic competitiveness.
It’s certainly fun (for a while) to share dirty jokes with a chatbot or to let generative AI like Midjourney create funny pictures or little nonsense videos. But we would do better to take a moment, step back from the keyboard, and consider the whole picture.
I am really not against AI; there are a lot of good uses we can find for this mind-blowing technology.
But it is on us what we do with it, what we use it for, and with what kind of attitude we approach this Artificial Intelligence.
After all, it is intelligent! An emerging, intelligent ET.
👽😉
Isn’t it?
[ … work in progress. Please come back later on ]

