When Anthropic’s chief product officer Ami Vora took the stage at the company’s Code with Claude developer conference in San Francisco last week, those present likely expected updates on new models.
Instead, Vora announced that Anthropic had signed a deal with SpaceX to take full control of the Colossus 1 data centre in Memphis, in a pairing that would have seemed unthinkable just a few months ago.
The deal gives Anthropic access to “more than 300 megawatts of new capacity [and] over 220,000 NVIDIA GPUs within the month,” according to a subsequent Anthropic statement, and will directly benefit subscribers to its Claude Pro and Claude Max plans.
A Musk-Anthropic union is as unlikely as they come. The man who, mere months ago, called Anthropic “evil” and a company that “hates Western civilisation” now controls the infrastructure powering the expansion of Claude and enabling the rollout of extended programs for its paying users.
Woke and anti-woke AI
Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei and a cohort of researchers who left OpenAI over concerns that safety was not being taken seriously enough.
The company’s entire brand identity is built on “responsible AI,” and its models are trained with Constitutional AI, a framework with baked-in ethical constraints that Claude’s developers say makes it more likely than its competitors to decline requests, express uncertainty about sensitive topics or push back on prompts it deems harmful.
Grok, on the other hand, was launched during Musk’s peak involvement with the Trump administration in 2025, when he would take to X daily to attack mainstream media for ideological gatekeeping and back far-right European parties, including the Alternative for Germany. Its branding was explicitly anti-woke.
Anthropic has continuously refused to strip safety guardrails from Claude for use in autonomous weapons systems, including those being used in the ongoing Iran war.
Meanwhile, Musk has backed hawkish foreign policy positions and, along with Google and OpenAI, signed defence contracts with the Pentagon on terms Anthropic refused in February, having previously signed deals in July 2025. The Pentagon has since designated Anthropic a supply chain risk and barred it from military work.
Trouble in Grok-land?
Musk’s xAI generated $107 million (€91m) in revenue for the quarter ending in September 2025, while posting a net loss of $1.46 billion (€1.24bn) in the same period. Anthropic, meanwhile, has reached an annualised revenue run rate of around $30 billion (€25.5bn), according to analyst estimates.
The “anti-woke” product positioning that was supposed to be Grok’s competitive differentiator turned out not to translate into enterprise revenue. Grok also lacked the agentic technology that lets AI models reach beyond the chat window and operate a computer directly.
At the same time, Anthropic’s Claude Cowork and OpenAI’s Codex ushered in a whole new way of leveraging AI models. Demand from their launches far outstripped Anthropic’s available compute and highlighted just how badly the company had underestimated its own success, hence the SpaceX deal for additional capacity.
Grok, on the other hand, was built around the chatbot paradigm, and its tight integration with X, which initially looked like a distribution advantage, became a growth trap: it oriented the product around social media interactions rather than task completion, workflow optimisation, or other services that users actually pay for.
The final blow to Grok’s brand came in early 2026, when the chatbot generated at least 1.8 million sexualised depictions of women over nine days, along with imagery of minors, triggering investigations by regulators across Europe, Asia and the United States.
“This is not spicy. This is illegal. This is appalling. This has no place in Europe,” EU digital affairs spokesman Thomas Regnier said at the time.
Since the deal was announced, Musk has said he is comfortable leasing Colossus 1 to Anthropic partly because xAI had already moved its own training to Colossus 2.
Musk added a notable caveat on X, writing that xAI reserves the “right to reclaim the compute” if Anthropic’s AI “engages in actions that harm humanity.”
The condition did not appear in the press release, and it has not been confirmed whether it features in the actual contract.
There is no stated threshold for what constitutes “harm to humanity,” meaning the judgment could rest with Musk alone. It could also be a last-ditch attempt by Musk to maintain some distance from the fact that he is now directly powering the expansion of his woke competitor, while his own AI program quietly falters.

