Elon Musk managed to build a supercomputer for his artificial intelligence Grok in just 122 days. The billionaire now wants to multiply its power by 10, scaling up to 1 million Nvidia graphics chips!
In September, Elon Musk's teams completed construction of Colossus, the new supercomputer from his artificial intelligence company xAI. It is used to train his Grok AI and contains 100,000 Nvidia Hopper graphics chips; according to Nvidia, it is the largest supercomputer dedicated to AI. In October, in a video published on YouTube during a conference, Elon Musk announced, among other things, that he wanted to increase the number of GPUs to 200,000.
The billionaire has clearly decided that simply doubling it is not enough. According to the Financial Times, instead of doubling Colossus's capacity, he has decided to multiply it by 10, equipping it with a total of 1 million Nvidia Hopper graphics chips!
Explore the inside of xAI's Colossus supercomputer and its 100,000 Nvidia Hopper chips. In English; enable automatic subtitle translation. © ServeTheHome
An investment of several tens of billions of dollars
Grok is highly controversial, with Elon Musk wanting to make it an "anti-woke" chatbot and accusing its competitors of being trained to lie. Ultimately, it is no more developed than its main rivals, ChatGPT or Google Gemini, and has far fewer users. Musk seems intent on making up the difference with the sheer power of the supercomputer on which it is trained. However, the cost is likely to be high.
Given the price of the chips, estimated at several tens of thousands of dollars each, a supercomputer with a million Nvidia Hopper GPUs could cost several tens of billions of dollars. For the world's richest man, whose fortune currently exceeds $350 billion, it might not be such a big investment. However, increasing computing power alone will not be enough to improve Grok. It will allow developers to train and test different versions more quickly, but it will not solve the hallucinations and errors that remain very common in large language models.