Elon Musk says xAI’s Colossus 2 cluster is currently training seven models at once: a system called Imagine V2, two 1-trillion-parameter variants, two 1.5-trillion-parameter variants, a 6-trillion-parameter model, and a 10-trillion-parameter model. The claim comes from Musk’s April 8 post on X and, so far, stands as the clearest primary-source statement about what is running on Colossus 2 right now.
That matters because the post offers a rare snapshot of xAI’s training pipeline at a time when the company is rapidly expanding its infrastructure footprint around Memphis and northern Mississippi. But it also needs careful framing. A widely circulated version of the claim said xAI was training models ranging from 1T to 15T parameters. Musk’s own post does not mention a 15T model; the largest size he explicitly listed was 10T.
| Claimed item | What is currently supported by source material |
| --- | --- |
| Seven models in training | Confirmed by Musk’s public X post. |
| Imagine V2 is one of them | Confirmed by Musk’s public X post. |
| Two 1T variants | Confirmed by Musk’s public X post. |
| Two 1.5T variants | Confirmed by Musk’s public X post. |
| A 6T model | Confirmed by Musk’s public X post. |
| A 10T model | Confirmed by Musk’s public X post. |
| A 15T model | Not supported by the cited primary-source post. |
What Musk actually said
In the post that triggered the latest round of discussion, Musk wrote:
“SpaceXAI Colossus 2 now has 7 models in training:
• Imagine V2
• 2 variants of 1T
• 2 variants of 1.5T
• 6T
• 10T
Some catching up to do.”
That wording gives the market a direct claim from xAI’s founder, but not a full technical disclosure. Musk did not explain whether the numbers refer strictly to parameter counts, whether every listed system is a language model, or how many of the runs are image, video, or multimodal systems. As a result, the safest news framing is that Musk says these models are in training, rather than presenting every implied technical detail as independently established fact.
What Colossus 2 is
Independent reporting from Data Center Dynamics helps fill in the infrastructure context behind the claim. According to that report, xAI received permission to install 41 natural-gas turbines at a former Duke Energy site in Southaven, Mississippi, generating 1.2 gigawatts to power xAI data centers in the area. The report says those turbines are intended to supply xAI’s Colossus 2 data center, located across the state line in the Whitehaven district of Memphis, as well as an upcoming Colossus 3 site.
The same report says xAI launched its original Colossus supercomputer in Memphis in 2024, purchased the Colossus 2 site in March 2025, and brought Colossus 2 online in January 2026. If accurate, that makes Musk’s new post more than just social-media hype; it suggests xAI is already using that expanded cluster for multiple parallel frontier-scale training runs.
| Infrastructure detail | Independent context |
| --- | --- |
| Cluster name | Colossus 2. |
| Associated company | xAI. |
| Location context | Whitehaven district of Memphis, with supporting power infrastructure in Southaven, Mississippi, according to Data Center Dynamics. |
| Power project | 41 natural-gas turbines, 1.2GW, according to Data Center Dynamics. |
| Operational timing cited in report | Colossus 2 reportedly came online in January 2026. |
Where Imagine V2 fits in
Musk’s post names Imagine V2, but does not define it. Still, xAI’s own documentation confirms that Grok Imagine is already an established model family within the company’s product stack. xAI’s developer documentation includes a model page for grok-imagine-image-pro, showing that “Imagine” is not an informal codename but part of xAI’s official model naming and deployment ecosystem.
That does not prove exactly what Imagine V2 can do, nor does it confirm whether the system in training is focused on images, video, or broader multimodal generation. What it does support is a narrower and more defensible conclusion: Imagine V2 likely belongs to xAI’s existing Grok Imagine product line, rather than being a completely unrelated internal label.
What remains unclear
Despite the attention generated by the post, several important questions remain unanswered. Musk did not specify whether the 1T, 1.5T, 6T, and 10T figures refer to active parameters, total parameters, or some other internal scaling convention. He also did not provide release dates, benchmark targets, or product mappings for the seven-model lineup.
That leaves room for speculation, especially around the two 1T variants and two 1.5T variants. They could be experimental branches, deployment candidates, distilled systems, or distinct multimodal models. But without supporting documentation, those interpretations remain speculative. The most reliable approach is to separate what Musk explicitly said from what observers are inferring.
Why the post matters
Even with those caveats, the announcement is notable. If xAI is indeed training seven systems simultaneously on Colossus 2, it would suggest the company is broadening its model portfolio rather than building around a single flagship release. The mix described in Musk’s post also hints at a layered strategy: smaller trillion-scale variants, a mid-tier 6T system, a very large 10T model, and at least one creative or multimodal model tied to the Imagine brand.
More broadly, the post reinforces the scale race now underway in AI infrastructure. Colossus 2 has already drawn attention not only for its size but also for the energy and regulatory issues around how it is powered. In that sense, Musk’s seven-model update is not just a product teaser. It is also a signal about how aggressively xAI intends to use the compute capacity it has been assembling.
Bottom line
The strongest verified takeaway is straightforward: Elon Musk says xAI’s Colossus 2 is training seven models, including Imagine V2, two 1T variants, two 1.5T variants, a 6T model, and a 10T model. The claim is supported by Musk’s public post, and independent reporting provides credible context on what Colossus 2 is and where it sits in xAI’s expanding infrastructure buildout.
What the evidence does not support, at least from the sources reviewed here, is the claim that xAI is training a 15T model on Colossus 2. Until xAI publishes fuller technical documentation, that part should be treated as unverified.
Quick summary
| Item | Current status |
| --- | --- |
| Seven models training on Colossus 2 | Confirmed by Musk’s post. |
| Imagine V2 named explicitly | Confirmed. |
| Largest size mentioned in reviewed primary source | 10T. |
| 15T figure | Not confirmed in reviewed sources. |
| Colossus 2 infrastructure context | Supported by independent reporting from Data Center Dynamics. |
| Imagine as an official xAI model family | Supported by xAI developer documentation for grok-imagine-image-pro. |
