Bandung, IndonesiaSentinel.com — Elon Musk has developed a supercomputer called Colossus. Powered by 100,000 Nvidia AI chips, it uses far more than any other AI system built to date.
Colossus was built in Tennessee, United States, by artificial intelligence startup xAI.
Earlier this week, Musk announced that the sophisticated data center was finally online after a 122-day assembly, which Nvidia said was the fastest on record.
“Colossus is the world’s most powerful AI training system,” Musk said in a tweet, quoted from Futurism, Friday, September 6.
Musk's supercomputer was built with Nvidia H100 graphics processing units, the most sought-after hardware for training and running generative AI systems such as AI chatbots and image generators.
Musk claims that, in a few months, Colossus will "double" in size to 200,000 AI chips, including 50,000 H200 GPUs. The newer H200 has nearly twice the memory capacity of its predecessor and 40 percent more bandwidth.
Musk founded xAI last year; its flagship product is Grok, an AI chatbot previously integrated into X. The startup has managed to match the capabilities of tech giants that started earlier, such as archrival OpenAI and its backer Microsoft.
As Fortune notes, Nvidia sees Musk as one of its best customers, having bought tens of thousands of GPUs for Tesla, worth an estimated USD $3 billion to USD $4 billion, before branching out into the xAI startup.
To get 100,000 H100 GPUs, Musk likely had to spend billions, with each AI chip costing around USD $40,000. That's roughly USD $4 billion Musk has spent to build this supercomputer.
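The article's estimate is easy to verify; a back-of-the-envelope calculation using the figures stated above (the $40,000 unit price is the article's approximation, not a confirmed purchase price):

```python
# Rough check of the article's cost estimate for Colossus.
# Assumed unit price of USD 40,000 per H100 comes from the article, not Nvidia.
gpus = 100_000
unit_price_usd = 40_000

total_usd = gpus * unit_price_usd
print(f"Estimated spend: ${total_usd / 1e9:.0f} billion")  # roughly $4 billion
```

At the planned 200,000-chip scale, the same arithmetic would put the hardware bill in the neighborhood of $8 billion, though actual pricing for bulk orders is not public.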
xAI raised about USD $6 billion in investment in May thanks to backing from tech VC firms including Andreessen Horowitz.
(Raidi/Agung)