Computing giant Nvidia will also establish operations in the US city of Memphis to support Musk’s AI ambitions.

Elon Musk’s artificial intelligence (AI) start-up xAI plans to expand its supercomputer to incorporate more than one million graphics processing units (GPUs), it was announced on Wednesday, as the company aims to compete against generative AI (GenAI) rivals such as OpenAI.

Musk’s supercomputer, “Colossus,” was built in Memphis, Tennessee, earlier this year in just three months and currently runs 100,000 Nvidia GPUs to train xAI’s chatbot, Grok.

AI companies are racing to build ever-larger clusters of interconnected GPUs, which allow more capable AI models to be trained at faster rates.

Nvidia, Dell, and Super Micro Computer would also establish operations in Memphis to support the expansion, the city’s chamber of commerce said, adding that they would create an “xAI special operations team” to “provide round-the-clock concierge service to the company”.

‘Only one person could do that’

In October, Nvidia CEO Jensen Huang praised xAI in an interview on the Bg2 Pod, calling Colossus a feat of engineering.

“As far as I know, there’s only one person in the world who could do that; Elon is singular in his understanding of engineering and construction and large systems and marshalling resources; it’s just unbelievable,” Huang said. 

However, the battle for bigger chip clusters comes with challenges such as the resources needed to cool the power-hungry technology.

No details have been given about how much energy the expanded cluster would need, nor where that energy would come from.

As well as stockpiling Nvidia GPUs, xAI – like many other AI companies – is also developing its own AI chips. 

Musk has been working on Dojo, Tesla’s custom-built supercomputer for training its autonomous driving systems, since 2023.
