Tesla Now Likely Has Between 30,000 and 350,000 Units of NVIDIA’s H100 Chip, While Elon Musk’s xAI Also Owns a Sizable Stash of the High-Performance GPU

This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.

Tesla and Elon Musk’s artificial intelligence-focused enterprise, xAI, have collectively established a sizable stash of NVIDIA’s H100 GPUs, as the former endeavors to crack the Level 5 autonomous driving conundrum for good, while the latter attempts to realize Musk’s vision of a “maximum truth-seeking AI.”

The X account “The Technology Brother” recently posted that Mark Zuckerberg’s Meta has amassed one of the largest stashes of H100 GPUs in the world, amounting to around 350,000 units. Musk, however, took exception to the rankings of Tesla and xAI in that tabulation, pointing out that “Tesla would be second highest and X/xAI would be third if measured correctly.”

If those rankings hold, Tesla now owns between 30,000 and 350,000 units of NVIDIA’s H100 GPUs, while xAI likely owns between 26,000 and 30,000 units of NVIDIA’s AI-focused graphics cards.

Back in January, while confirming a new $500 million investment in Tesla’s Dojo supercomputer, equivalent to around 10,000 units of the H100 GPU, Elon Musk announced that the EV giant would “spend more than that on NVIDIA hardware this year,” as the “stakes for being competitive in AI” were “at least several billion dollars per year at this point.”

Bear in mind that xAI purchased around 10,000 units of NVIDIA’s GPUs in 2023, as Musk hired talent from DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto to build his artificial intelligence enterprise from the ground up. It can be reasonably deduced, however, that those purchases likely involved A100 GPUs. Since then, as Musk’s latest X post suggests, xAI has amassed a sizable stash of H100 GPUs as well.

Of course, given the pace of innovation in the AI world, those H100 GPUs are fast becoming obsolete. Back in March, NVIDIA announced its GB200 Grace Blackwell Superchip, combining one Arm-based Grace CPU with two Blackwell GPUs. The system can deploy an AI model with 27 trillion parameters and is expected to be 30 times faster at tasks like serving up answers from chatbots.